Open Collections

UBC Theses and Dissertations
Signal into vision : medical imaging as instrumentally aided perception Semczyszyn, Nola 2010


Full Text
SIGNAL INTO VISION: MEDICAL IMAGING AS INSTRUMENTALLY AIDED PERCEPTION by Nola Semczyszyn. A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY in THE FACULTY OF GRADUATE STUDIES (Philosophy), THE UNIVERSITY OF BRITISH COLUMBIA (Vancouver), August 2010. © Nola Semczyszyn, 2010

Abstract

Imaging has become central to many branches of science. Ultrasound, PET, MRI, fMRI, CT, and various kinds of high-powered microscopy are used in biology and medicine and are taken to extend the reach of these sciences. I propose two features of imaging that need to be explained in order to situate these technologies in the epistemology of science: that the images are useful, and that imaging acts as a kind of visual prosthesis. My solution is to appeal to pictorial representation in order to understand both how these images represent and how we access their content. I argue that imaging technologies take advantage of our ability to have visual experiences of three-dimensional objects in two-dimensional representations. In doing so they create images that are used for instrumentally aided perception into the body. My dissertation defends three theses: that imaging technology produces images as vehicles for seeing-in; that these images are visual prosthetics, extending our perceptual capacities; and that the images are used for instrumentally aided perception. I support these theses with both theoretical and pragmatic arguments. Throughout the dissertation I appeal to how the images are used and interpreted, and develop this through three case studies: MRI, ultrasound, and fMRI.
Table of Contents

Abstract
Table of Contents
List of Figures
Acknowledgements
1. Introduction
2. Historical and Technical Overview
3. Representation and Vision
4. Technology as Visual Prosthesis
5. Representation and Real Similarity
6. Case Studies Introduction
6.1 Using Magnetic Resonance for Imaging
6.2 Ultrasound: Interfering for Greater Visibility
6.3 Seeing the Mind with Functional Imaging
7. Medical Imaging and Expert Vision
8. Conclusion: Instrumentally Aided Perception
Bibliography

List of Figures

Figure 2.1: X-ray Image of a Hand, Wilhelm Roentgen, 1896
Figure 3.1: Cameraman Light Switch Paste-up
Figure 3.2: Nymphéas, Claude Monet, oil on canvas, 1897–98
Figure 3.3: Garden of Earthly Delights, Hieronymus Bosch, 1500–1505
Figure 5.1: Charles Minard, Napoleon's Russian Campaign
Figure 6.1: Two-dimensional Ultrasound at 12 Weeks Gestation
Figure 6.2: Three-dimensional Ultrasound at 20 Weeks Gestation
Figure 6.3: Crab Nebula from the Hubble Space Telescope
Figure 6.4: Phrenological Chart of the Faculties

Acknowledgements

I would like to take this opportunity to thank my committee for their support of this project, and for always being so generous with their time and ideas. Our meetings were always inspiring. Individual thanks to my supervisor Dominic Lopes for suggesting I work on images in science, and for all of the close, careful reading; working with you, I couldn't help but become a better philosopher. Thanks to Bob Brain for introducing me to different ways of looking at pictures over the years, from lectures on crystallography and workshops on pictures in science to every time I ran into you on campus. Thanks to John Beatty for both making sure I had the big picture and for keeping me on track.
I would also like to thank the faculty, staff, and students in the Philosophy department for so many conversations over the years. Special thanks to Nissa and Rhonda for keeping me sane. Thanks to everyone who read chapters and gave me feedback: my twin Alyssa Semczyszyn; Kelly Gray, for being my direct connection to obstetrics; Brendan Mcleod, for helping me be clear; Randall Okita, for enthusiasm; and Derek Matravers, for helpful last-minute comments. I would like to thank my sister Samantha and my parents for all of the emotional and financial support. A huge thanks to Jill Isenberg for letting me treat her living room as my other home and for being involved with the dissertation the entire way, and Joshua Johnston for fielding phone calls about the dissertation at odd hours. You guys are great friends and colleagues. The dissertation could not have been pulled together without Mike Gray helping me with emergency formatting; you are a wizard with MS Word. Finally I would like to thank those who contributed pictures.

Dedication

This thesis is dedicated to the memory of my niece Iris.

1. Introduction

What is Medical Imaging and Why is it Philosophically Interesting?

The topic of my dissertation is medical imaging, a general term that covers a broad array of technologies developed in medical contexts for the purpose of producing images of the body. The technologies I will be focusing on are what Bettyann Kevles calls "the daughter technologies of X-ray."1 Computed Tomography (CT), Positron Emission Tomography (PET), ultrasound, Magnetic Resonance Imaging (MRI), functional Magnetic Resonance Imaging (fMRI), and related technologies produce images of in vivo tissue that are used by physicians and researchers to see inside the body for diagnostic and research purposes. That medical imaging technologies are of philosophical interest is not obvious.
The terms used to describe them – as scans, visualization technologies, sonograms, radiographs – give us reason to treat them as relatives of other diagnostic tools. Philosophical interest in biopsy or Western Blot tests, as examples of diagnostic tests, generally concerns probabilistic reasoning about false positives and negatives,2 ethical issues about patient care,3 and perhaps the technological nature of contemporary medical practice.4 The focus tends not to be on the tools themselves, but on their use in a field of practice. My interest in imaging technologies used as diagnostic tools begins with the technologies themselves, in particular the aspects of these technologies that differentiate them from other kinds of medical exams – namely, that they are image-producing technologies.

1 Bettyann Holtzman Kevles, Naked to the Bone: Medical Imaging in the Twentieth Century (New Brunswick: Rutgers University Press, 1997), 1.
2 Gerd Gigerenzer, Adaptive Reasoning: Rationality in the Real World (Oxford: Oxford University Press, 2000), 77–91.
3 H Carel, "Can I Be Ill and Happy?" Philosophia 35 (2007): 95–110.
4 For the classic discussion of this see Stanley Joel Reiser, Medicine and the Reign of Technology (Cambridge: Cambridge University Press, 1978). More contemporary issues include genetic testing; see for example Elizabeth Chapman, "The Social and Ethical Implications of Changing Medical Technologies: The Views of People Living with Genetic Conditions," Journal of Health Psychology 5 (2002): 195–206.

Imaging has come to play a central role in the practice of medicine and associated sciences. Without ultrasound the field of obstetrics would be very different. The standard use of ultrasound to determine foetal age, measure nuchal translucency (an early test for Down Syndrome), and perform procedures in utero has shaped contemporary obstetric practice.
The widespread growth of imaging across medicine has led to an interesting phenomenon: image interpretation has come to be central to practicing medicine. For example, ultrasound is used to determine foetal age, but not by simply outputting a reading. Ultrasound produces images that are used by medical professionals – midwives, doctors, nurses, or sonographers – to make measurements. These measurements are then used to determine the age of the zygote, embryo, or foetus. Certain demands are made of imaging and of image interpreters: the images must be reliable and they must be accessible to interpretation. Like other kinds of medical diagnostic tools, imaging technologies measure the body in a way that provides reliable and useful information. Unlike other diagnostic tools, however, they do not merely indicate the presence or absence of a condition. A Western Blot will indicate the presence or absence of certain proteins by the appearance of the treated blotter strip. The blotter strip represents a state of the body via its appearance, by changing in response to the proteins present. Interpretation of the strip is in terms of what its changes say about levels of proteins, and what their presence or absence can tell us about the condition of the body. This does not seem like the same representational relationship as that between an MRI of my brain and my brain, though appearance plays a role in both. An MRI of my brain seems to grant visual access to my brain, to allow me to see my brain in a way that I do not see my protein levels in a Western Blot. The interpretation of images is related to this visual experience. It is also what makes MRI, and other imaging technologies, so useful and versatile. Imaging technologies visually represent the body in a way that supports their diverse uses across different contexts. MRI is used to assess the extent of cerebral haemorrhage, to check for changes in brain lesions, and to measure tumour volume to plan surgery.
Ultrasound is used to perform procedures – such as egg retrieval and embryo implantation – as well as to assess foetal normality or abnormality. Besides its applications in research, fMRI is used to plan surgery and to assess responses to treatment. Imaging technologies are also being combined to perform surgeries, and image-guided surgery is becoming more common. Image-guided surgery is being pursued because it reduces recovery times, both by eliminating exploratory surgery and by allowing more surgeries to be minimally invasive. The versatility of imaging across research, diagnostic, and surgical contexts seems to stem from its allowing us to see inside the body. This notion of "seeing inside the body" is the reason medical imaging is philosophically interesting. Often in discussions of imaging, seeing is treated elliptically, as if it were a metaphor needing no explanation, or is placed inside scare quotes, "see," to signify a non-literal use of the term. This convention does nothing to explain the relationship between imaging technologies, images, the bodies visualized, and our visual experiences of them. Furthermore, it is a convention that crosses disciplines: it is neither a philosophical nor a scientific peculiarity. Unfortunately, this leaves the sense in which "seeing" is like or unlike ordinary seeing unexplained and only ambiguously related to vision. Not only is this an unsatisfying explanation of the central role vision and visual representation play in contemporary medicine; it also has the potential to cause harm. Several recent controversies involving imaging reflect the ambiguity of vision in relation to medical images. On May 13th, 2010, a Daubert hearing (assessing whether a new technology should be admissible as evidence in court cases) rejected fMRI for use as a lie detector in American courts. This case was closely followed by ethicists and neuroscientists because of the issues it raised about the technology.
These issues included: how well the brain is understood, the error rates of imaging studies, what fMRI images can be evidence for or against, and the effect of the images on jurors.5 The viewing and interpreting of ultrasound images has become a central issue in debates over abortion. A recently overturned bill in Oklahoma (HB 2780) would have required women to both view an ultrasound image and hear a detailed description of their foetus prior to having an abortion. In the United Kingdom, 3-D ultrasound is being brought into debates over limitations on elective abortions. Some argue that what is seen in the images provides evidence of foetal personhood, and so abortions should be limited to the first 12 weeks of pregnancy.6 The use of ultrasound in these debates has been challenged and the meaning of the images critiqued: often for their role in the construction of the foetus as a subject, and for their naturalization of the foetus as visualized.7 Who sees what in medical images, and what the images can tell us about their subjects, are important issues that reflect the ambiguity around their visuality. While an explanation of imaging and vision need not resolve these controversies, they show that there is too much at stake to leave open the question of what it means to see in images.

Some Explananda

An account of medical imaging technology needs to explain these features of imaging: that it produces representations of the body; that it is used, in some sense, to enable visual access inside the body; that it is versatile; and how it has come to play a dominant role in science and medicine.

5 Neuroscientist Martha Farah describes her first-hand impressions of the hearing in an interview on the Science Magazine website. Greg Miller, "Can Brain Scans Detect Lying?
Exclusive New Details from Court Hearing," Science Magazine, May 14, 2010, accessed August 05, 2010.
6 Julie Palmer, "Seeing and Knowing: Ultrasound Images in the Contemporary Abortion Debate," Feminist Theory 10 (2009): 180.
7 Ibid., 176. See also Barbara Duden, Disembodying Women: Perspectives on Pregnancy and the Unborn (Cambridge, MA: Harvard University Press, 1993). For a Canadian perspective on ultrasound see Lisa M Mitchell, Baby's First Pictures: Ultrasound and the Politics of Fetal Subjects (Toronto: University of Toronto Press, 2001).

Discussions of medical imaging within medicine most often concern patient care – such as doing less exploratory surgery, or performing procedures more efficiently (or at all) because they are guided by MRI or ultrasound – or diagnostic criteria.8 The scientific and medical literatures detail the use of imaging where there are already standards of practice in place. The technology becomes part of the background conditions of the discussion. When imaging or image interpretation is studied, the focus tends to be on error rates, improving patient outcomes, or, recently, on computer-assisted diagnosis.9 Medical images are visual representations that play evidentiary roles in medicine and science. This requires some explanation, since most often the rationality and explanatory power of these disciplines is cashed out in linguistic terms. While visual representations have been widely discussed in the history of science, and are discussed more in the social sciences, for the most part philosophers have not had much to say about medical imaging. A few exceptions include work done on functional imaging by William Uttal and Jim Bogen.10 Outside of the philosophy of art, discussions of pictures and other images are still uncommon in philosophy. Recently in philosophy, however, there has been an increase in discussions of representation in science – including visual representations.
Bubble chamber photographs, Feynman diagrams, electron micrographs, and biological diagrams in science textbooks are all scientific representations that are not only non-linguistic but also made to be visually interpreted.

8 Some examples include: Asim Kurjak's group's research with 4-D ultrasound, such as Kurjak and others, "Behavioral Pattern Continuity from Prenatal to Postnatal Life: A Study by Four-dimensional (4D) Ultrasonography," Journal of Perinatal Medicine 32 (2004): 346; and studies on issues in patient care such as Antonio Omuro, Claudia C Leite, Karima Mokhtari, and Jean-Yves Delattre, "Pitfalls in the Diagnosis of Brain Tumours," Lancet Neurology 5 (2006): 937–48. Other examples are discussed throughout.
9 Errors in radiological perception are discussed in Leonard Berlin, "Malpractice Issues in Radiology: Perceptual Errors," American Journal of Roentgenology 167 (1996): 125–128; computer-assisted diagnosis in L Warren-Burhenne, S Wood, and C D'Orsi, "Potential Contribution of Computer-aided Detection to the Sensitivity of Screening Mammography," Radiology 215 (2000): 554–62.
10 William Uttal, The New Phrenology (Cambridge: MIT Press, 2001); Jim Bogen, "Epistemological Custard Pies from Functional Brain Imaging," Philosophy of Science 3 (2000) Supplement: Proceedings of the 2000 Biennial Meeting of the Philosophy of Science Association Part II: Symposia Papers, S59–S71.

One might ask, as people have, whether there is a special problem of representation in science or only the problem of representation more generally.11 The debate in the scientific representation literature has focused on two central questions. The first is how scientific representation should be approached. Some philosophers think the main thing to be explained is the representational relationship between the "source" (the representation) and the "target" (the real-world system or object being represented).
Others think that scientific representation can only be understood in terms of how representations are used, in terms of the inferences and interpretations they permit.12 This aspect of the debate is often drawn along the lines of scientific realism and empiricism. The representational relationship sought is an objective and mind-independent similarity between source and target: a structural similarity that can ground how the representation is like the world. Candidates for structures have included isomorphism, partial isomorphism, and homomorphism.13 On the other side of the debate are those who think of representation in science as mainly pragmatic, who are less concerned with being able to establish representation in a cognitive framework and more interested in the way representations function in practice.14 The second question concerns the ontology of the things represented in science. Models and theories are sometimes the subjects of representations, yet these have some problematic features. Often these contents are literally false – idealizations and abstractions are ways of understanding systems that are useful because they are not true. If understanding scientific representation concerns how it relates to the world, or how representations allow inferences in science, there is a problem. If we think representation should have some analogue to truth, the accepted falsity of these representations needs to be explained.

11 Craig Callender and Jonathan Cohen, "There is No Special Problem of Scientific Representation," Theoria 55 (2006): 67–85.
12 Mauricio Suarez, "Scientific Representation: Against Similarity and Isomorphism," International Studies in the Philosophy of Science 17 (2003): 225–244.
13 Anjan Chakravartty, "Informational versus Functional Theories of Scientific Representation," Synthese 172 (2009): 197–213.
14 Mauricio Suarez, "An Inferential Conception of Scientific Representation," Philosophy of Science 71 (2004): 767–779.
If we think representation should be understood in terms of enabling inferences, we need to separate out those that are good or useful from those that are incorrect.15 There have been some attempts in the philosophy of science to understand representation by examining commonalities with areas where representation is more established. Work by a number of philosophers has drawn comparisons between representation in art and in science. Some compare these looking for underlying similarities or lessons about representation, while others emphasize the diversity in art as a problem.16 Embedded within the broader discussions of representation in science is the question of visual representation, which raises its own challenges. Unlike language, or even mathematics, visual representations do not obviously fit into the rigorous demands for objectivity and truth of the scientific enterprise. How images, diagrams, graphs, and pictures fit into various sciences is being explored in a small but growing literature within philosophy. The range of visual representations in science is diverse. Many scientific journals dedicate sections to images, some of which can be very beautiful and inspiring. Physical models in chemistry are used in reasoning and justification, as in the famous example of their role in Watson and Crick's discovery of the structure of DNA. X-ray crystallography also played an important role in that discovery. Others, such as geometric drawings and biological diagrams, are seen as central to reasoning in physics and biology. Graphs, maps, and charts present data and can often play an important role in how research findings are understood or shared. Feynman diagrams, models, and geometric drawings have all been discussed as heuristic devices for scientific problem solving. What all these various representations share is that they are not linguistic, although there is a complicated relationship between language and images in science.

15 Otávio Bueno, "Scientific Representation and Nominalism: An Empiricist View," Principia 12 (2008): 177–192.
16 For comparisons drawing from art see Steven French, "A Model-Theoretic Account of Representation (Or I don't know Much About Art... but I Know it Involves Isomorphism)," Philosophy of Science 70 (2003): 1472–1483. For a critical view of French and others see Stephen Downes, "Models, Pictures, and Unified Accounts of Representation: Lessons from Aesthetics for Philosophy of Science," Perspectives on Science 17 (2009): 417–428.

A fundamental problem raised by non-linguistic representations is whether they are necessary for scientific arguments, discovery, and justification, or whether they could be eliminated without a loss of content or meaning. One central concern is that, unlike linguistic structures expressing propositions, visual representations might seem as though they cannot be assessed for truth value. For this reason, they are often discussed in terms of rhetorical or heuristic roles. Another side of this is the question of whether (at least some) visual representations are propositional. If this is the case, then it could be that the heuristic value of images is their carrying propositional information more succinctly than sentences. If they are non-propositional, then understanding their content, or the role they play, is a separate challenge.17 The question of what images can represent and what we can see in them has been taken up from a number of different angles.
Megan Delehanty has examined the epistemic benefits of videomicroscopy in light of claims that it aids in seeing causation.18 Work by James Brown, Barwise and Etchemendy, and Edward Tufte has examined visual representation in terms of the ease of grasping complicated data;19 logical languages, concepts, and proofs;20 and mathematical objects.21 Often the idea of visually representing abstract or unobservable entities is problematic for nominalists or empiricists, who must find different ways of explaining these representations.22 On this question, Kitcher and Varzi argue that pictures have been underrated and can express an infinite number of sentences, many of which will be true.23 Griesemer compares path diagrams and equations in path analysis, and argues that the diagrams cannot be reduced to equations.24 Ron Giere discusses non-linguistic representations in terms of models. He takes images, physical models, diagrams, and other such visual representations to play a mediating role with theories. He examines this mediation in terms of analogy, where features of representations map features of the world. Representations are models in the sense that they provide an interpretation of the world; they do this by being structurally similar in selective ways. This similarity allows scientists to make inferences about the world from the representation.25 Laura Perini argues that visual representations can be truth bearers without being reduced to linguistic symbols, and that they have a unique role to play in scientific arguments.26 For the most part, philosophers take seriously the way visual representations are used by scientists. The lessons they draw from examining representations in context can be affected by a number of different factors, the primary one of which is how they are treating representation. The philosophical literature on visual representation, and specifically on pictorial representation or depiction, is quite extensive and sophisticated. Theories of depiction vary in approach, but all emphasize the distinction between pictorial and other kinds of representation. This is important since a picture, in some context, could represent something else non-pictorially – for instance, the 1968 picture of the earth from space represents the earth pictorially, and a turning point in popular conceptions of environmentalism only symbolically. The question of how pictures represent was initially cast in terms of resemblance; pictures depict their subjects insofar as they resemble (are visually similar to) them. Understanding depiction in terms of resemblance raises problems because there are many ways in which two things can be said to resemble each other.

17 See for example James R Brown, "Proofs and Pictures," British Journal for the Philosophy of Science 48 (1997): 161–180. Brown examines the role of visual representations in mathematical proofs.
18 Megan Delehanty, "Perceiving Causation Via Videomicroscopy," Philosophy of Science 74 (2007): 996–1006.
19 Edward Tufte, Visual Explanations (Cheshire, Connecticut: Graphics Press, 1997).
20 J Barwise and J Etchemendy, "Visual Information and Valid Reasoning," in Visualisation in Teaching and Learning Mathematics, ed. W Zimmerman and S Cunningham (Washington: MAA, 1991): 9–24.
21 Brown, "Proofs and Pictures," 161–180.
22 Otávio Bueno, "Representation at the Nanoscale," Philosophy of Science 73 (2006): 617–628.
23 Philip Kitcher and Achille Varzi, "Some Pictures Are Worth 2^ℵ0 Sentences," Philosophy 75 (2000): 377–381.
24 James Griesemer, "Must Scientific Diagrams be Eliminable?" Biology and Philosophy 6 (1991): 155–180.
25 Ronald Giere, "How Models are Used to Represent Reality," Philosophy of Science 71 (2004): 742–752.
26 Laura Perini, "The Truth in Pictures," Philosophy of Science 72 (2005): 262–285; Laura Perini, "Visual Representation and Confirmation," Philosophy of Science 72 (2005): 913–926.
For example, a charcoal sketch of a man resembles a page of mathematical proofs in that both are flat surfaces covered in marks. Furthermore, this resemblance does not entail that one thing represents the other. Resemblance, unlike representation, is a symmetrical relation. Much of the contemporary literature on depiction has been a response to such approaches, attempting to capture and clarify the intuitions behind resemblance accounts. Given the difficulties in explaining depiction in terms of resemblance, one approach has been to analyse the relation between representational systems – rather than between a token picture and its subject. In structural accounts, pictures, like other kinds of representations, are symbols which denote only within a system with principles of correlation linking the symbols with a set of extensions. What defines how a token picture represents is the way that it relates to other representations within a system.27 Structural accounts of representation take systems of representations to be pictorial or not in relation to other systems. A token representation P is a picture of S if it represents S within a system of representation that fulfills a number of formal, structural criteria. What makes the representational relationship between two objects a pictorial one is that the representational system meets those criteria. Pictorial, linguistic, and other kinds of representational systems are defined by their structural differences. In different systems of representation, governed by different principles of correlation, the same symbol may come to represent any number of different objects. The structural account allows for the delineation of systems without regard to similarity or resemblance. It is meant to explain how pictures represent without relying on how they resemble their subjects.28 An often-cited issue with structural accounts is the emphasis placed on convention.
This has been taken to neglect the centrality of vision and visual similarity to pictorial, but not to linguistic or graphical, representation. Another approach in the depiction literature has been to treat pictorial representation as strongly connected to human visual capabilities. In perceptual accounts of depiction, vision takes centre stage. Pictures have the content they do because of what we see in them: a picture of a horse is of a horse because we see one in the picture. There have been two main perceptual approaches to depiction. Recognition accounts take pictures to represent by allowing us to recognize their subjects. Once we have grasped the subject of the picture, we can ask about similarity or what features of the image give rise to this experience.29 Experienced resemblance accounts aim to clarify the resemblance relationship between a picture and its subject in a way that explains what is unique about pictorial representation. Another approach is to accept that there may be numerous ways in which pictures might resemble their subjects and to emphasize the similarity between pictorial and ordinary visual experience. The resemblance to be understood is limited to resemblance in experience. We experience part of a picture as being similar, in some way, to how we experience its subject. One suggestion is that the resemblance holds between the picture's subject and the visual field.

27 John Kulvicki, On Images: Their Structure and Content (Oxford: Oxford University Press, 2006).
28 Ibid. Kulvicki also has a very interesting discussion here of graphs, charts, and other figures. It is developed more in John Kulvicki, "Knowing with Images: Medium and Message," Philosophy of Science 77 (2010): 295–313.
Malcolm Budd suggests that there is an isomorphism between a picture and the visual field someone would have were they to see the depicted scene from that point of view.30 Another suggestion has been that pictures resemble their subjects in outline shape. Outline shape is a technical term denoting the solid angle subtended by an object at a point. If we take any point at some distance from an object, and imagine lines running from the object to that point, the solid angle subtended by the object is its outline shape at that point. Outline shape is also a property that can be shared by two- and three-dimensional objects, since both would subtend the same angles and share geometrical shape from a certain point of view.31 29 Flint Schier, Deeper Into Pictures: An Essay on Pictorial Representation (Cambridge: Cambridge University Press, 1986). Dominic Lopes, Understanding Pictures (Oxford: Oxford University Press, 1996). 30 Malcolm Budd, “On Looking at a Picture” in Aesthetic Essays (Oxford: Oxford University Press, 2008), 185–215. On this view, when we experience a picture of a horse as of a horse, the relevant way in which the surface marks and a horse resemble each other is in outline shape. More akin to the functional view is the idea that pictorial representation takes advantage of the same visual capacities as ordinary vision. Recognitional accounts of depiction explain the experience of similarity between pictures and their subjects by appeal to their triggering the same recognitional capacities. Our ability to grasp the content of pictures, to recognize that P is a picture of a horse and not of President Obama, takes primacy over explaining what the resemblance is between the picture surface and the object depicted. The question of denotation, besides raising issues in the realism/anti-realism debate, drives a number of other discussions in the literature on images in science.
This becomes evident in the role that conventions are given in particular discussions, and can be influenced by different theories of representation. Perini, for instance, draws on Goodman’s account of representation, as do Kitcher and Varzi. Goodman’s theory is a structuralist account that explains representational systems as symbol systems, with principles of correlation linking the symbols with a set of extensions, and utilizing both syntactic and semantic conventions. From this perspective, the role of conventions becomes a guiding concern in explaining visual representations. This is useful in explaining interpretation and differences between various kinds of visual representations. Work by Perini has explored the ways different systems have to be understood as attributing features to their subjects, and how these affect how different figures are used in scientific arguments, explanations, and inferences.32 Explanations, and explananda, look different from other views of representation. Work by Letitia Meynell draws on Kendall Walton’s explanation of representation in terms of make-believe. 31 Robert Hopkins, Picture, Image and Experience (Cambridge: Cambridge University Press, 1998), 50–62. 32 Perini, “Truth in Pictures,” and “Confirmation,” see also: Laura Perini, “Explanation in Two Dimensions: Diagrams and Biological Explanation,” Biology and Philosophy 20 (2005): 257–269. One of her aims is to explain how Feynman diagrams represent things we cannot see, and which may not exist. Following Walton, Meynell describes Feynman diagrams as ‘props,’ whose role is to prescribe imaginings within games of make-believe. She argues that Goodman’s account is unhelpful in explaining Feynman diagrams, in part because the emphasis on denotation suggests a realist interpretation of unobservables. Rather than conventions and principles of correlation, the Waltonian view draws on the kind of rules that define and constitute games.
A game of make-believe has rules, or principles of generation, that generate fictional truths within that game. Feynman diagrams prescribe that we imagine certain things about particles and processes, and so generate some fictional truths about them, because of the rules of the Feynman diagram game. As representations they are depictive, which is to say that they are pictorial representations because of their specifically visual nature. Visual representation of unobservable entities and processes is problematic, more so if one wants to give an account of representation in terms of vision. Meynell overcomes this issue by linking Feynman diagrams to bubble chamber photographs and interpretive diagrams. They prescribe that we imagine seeing certain scenes in bubble chambers.33 The variety of visual representations in science, and the different roles that they might be thought to play, suggests the need for some kind of taxonomy of visual representations in science. In a paper examining representational conventions in lithic illustration, Dominic Lopes suggests that a full taxonomy of images in science would include image type, imaging task, and context of use.34 Image types are differentiated into systems if they are produced differently; this includes representing different determinables. Lopes calls the “Galison–Topper hypothesis” the version of the history of images in science on which machine representation is used for representing particulars, while hand drawing is used to represent types.35 He argues that this version of the history is incomplete. 33 Letitia Meynell, “Why Feynman Diagrams Represent,” International Studies in the Philosophy of Science 22 (2008): 39–59. 34 Dominic Lopes, “Drawing in a Social Science: Lithic Illustration,” Perspectives on Science 17 (2009): 5–25. 35 Ibid. Lopes’ account of the role of lithic illustration in archaeology examines the selectivity of this representational system.
The rules for lithic illustration make it the best way of representing stone tools where the task of imaging is not only to depict the tool but to depict its intentional, artefactual properties. Photographs cannot depict the direction or force of blows; nor can photographs select for the intentional properties, and they capture geological and other properties not of interest. This selectivity requires the trained judgement of humans. Lithic drawings challenge the Galison–Topper hypothesis because they are hand drawings of particulars that cannot be replaced by machine representations. An important feature that comes out of this is that human skills in interpreting the object are important for the kind of information available in the representation. There are reasons, then, to take ideas of representation and the informativeness or value of a representational system to be tied to image type, task, and context. The conventions of the system of representation are such that the kind of counterfactual dependency we associate with machine imaging can be maintained in the representation of the properties of the artefacts. Imaging tasks are defined, in large part, by the needs of specific practices. Though the discovery of x-rays was accidental, x-ray imaging was quickly taken up for certain tasks for which it seemed suited. Being able to see bones in an x-ray is helpful if we want to know if bones are broken, and knowing if bones are broken is helpful in making decisions about medical treatment. Other cases of medical imaging arise from different needs and the specific powers and limitations of different imaging types. Medical images raise a number of concerns that are perhaps orthogonal to those of other visual representations used in science. A major difference is due, perhaps, to the pragmatic nature of medicine, wherein the emphasis is often on what works rather than why, and decisions sometimes have to be made without certainty.
If an emergency room doctor has a patient with a possible stroke, she will send that patient for an MRI or a CT before opening up the skull for exploratory surgery. These images are data; they are also uniquely action-guiding. While all visual representations require interpretation in terms of their visible features, medical images seem more closely tied to vision. For this reason, it seems important to consider how vision has been taken up in discussions of depiction. For many non-linguistic representations, vision is central to both how we understand them and what we understand them to represent. Even those who examine scientific representation in terms of interpretation and inference pay little attention to the role of these images in our visual experience. Both EEG and MRI can represent the brain and both are used for particular purposes, but seeing a brain in an MRI also seems to allow us to have a direct visual experience of the brain and not only of a pattern of electrical activity. In the case of medical imaging we seem to have visual experiences of recognizable objects when we see the images, and this seems important to both how they represent and how they are used. Imaging in medicine seems more closely related, in some ways, to laparoscopy than it does to charts and graphs. One thesis of this dissertation is that medical imaging should be explained in terms of interpretation, and that interpretation of these images involves our visual experience of objects. A satisfying explanation of medical imaging has to do two things, then. First, it has to say something about vision in image producing technologies. Second, it has to say something about use. On this second point there is a literature in philosophy that is helpful in explaining how imaging is used, and that is the literature, limited though it might be, on instrumentally aided perception.
Taking up this point, we have a way of explaining the relation between the first and second explananda; a second thesis of this dissertation is that medical imaging should be analysed in terms of technologies for instrumentally aided perception of the body.

Initial Concerns and Limitations

There are two main challenges to the seemingly straightforward claim that medical imaging technologies allow us to peer into the body. The first is the charge that any instrumentally aided perception is fundamentally different from unaided perception, that vision mediated by instruments loses its status as a source of direct knowledge. A second problem concerns the nature of the instrumentation. One might accept instrumentally aided perception, yet deny that pictures can be perceptual instruments. What is interesting about medical imaging technologies – from x-ray to MRI and 4-D ultrasound – is that they are used when the tissue in question is not visible in a straightforward way. Magnetic resonance values, tissue density, and echogenic properties lie outside of what the human visual system can detect – we detect them only as they are represented in images. Reconciling the technology-based creation and detection of these measures of tissue with the visual experiences that the images support is a central issue in understanding imaging. There are two key elements in these arguments against instrumentally aided perception. The first concerns the entities that furnish the world and our theoretical commitment to them. The distinction between unproblematic entities of everyday phenomenal experience and those whose existence is theoretical is often drawn precisely along the lines of what we can observe. Brains, bones, and babies are all perfectly respectable observables – unlike particles or mitochondria – so the problem does not seem to be whether our belief in them is warranted.
The other side of these arguments differentiates between the epistemic status of naked eye and instrument-aided perception, based on the nature of human perception and its limitations. Bas van Fraassen argues that it does not even make sense to say that we see through an electron microscope, since the idea of sight using electrons instead of photons is outside of our epistemic community.36 The limitations of human vision are defined by the physics and biology of our bodies and the world that we have evolved in. Given the way humans are biologically situated, our visual systems do not detect (or make sense of) echogenic or x-ray attenuation properties. While our success using these images needs to be explained, these limitations suggest that that explanation cannot be in terms of vision. Using pictures to see presents additional problems. If we think that vision requires a certain causal or experiential relation to the objects of the world, such as being in the immediate presence of something, we might deny that pictures grant visual access. Ordinary perception and picture perception are different. A picture is a series of marks, a graphical surface whose features are such that they bear a representational relationship to an object. While to some this might be intuitively visual, to others it is not. Even if one grants that pictorial representation should be understood visually, one need not allow that this provides visual access beyond the marked surface. This issue becomes more acute when we examine the marked surfaces as representations of non-visual measurements of tissue. Visual depiction of visual properties is one matter, but visual depiction of non-visual properties is another. Unlike more workaday visual aids such as eyeglasses, microscopes, and telescopes, imaging technologies do not work by using magnifying lenses to enhance our eyes’ ability to resolve the photons bouncing off something.
Instead they use a range of measures of tissue – echogenic properties, the absorption of x-rays, magnetic properties of protons, cerebral blood flow, among others – to create images in which we seem to see bodily tissues and features. The question becomes how a visual representation of these properties can be used in place of actually being able to see into the body. Can images, viewed in the usual way, but generated without the use of photons, act as visual prosthetics? Can our visual capabilities be so extended that this could be considered a kind of vision? 36 Bas van Fraassen, “Empiricism in the Philosophy of Science,” in Images of Science: Essays on Realism and Empiricism with a Reply by Bas van Fraassen, ed. Paul Churchland and Clifford Hooker (Chicago: University of Chicago Press, 1985), 245–308. Rather than merely extending our eyesight, these imaging technologies allow us to use other measures where photons cannot be used. The third thesis of this dissertation is that imaging technologies are visual prosthetics that extend our perceptual capacities. In sum, I will argue that medical imaging technologies produce marked surfaces for the purpose of seeing-in – the perceptual experience that arises from picture perception – and that in so doing they extend our visual capacities such that we are able to use the images to see novel features of the body. This characterization is the only one that supports our use of these images in instrumentally aided perception into the body, and so explains all the roles imaging plays in medical practice.

Methodology and a Look Ahead

Given what needs to be balanced to give a philosophical account of medical imaging, I have taken a particular approach to the topic. Because I take the above visual constraint – call it an empiricist constraint – seriously, more has to be said about a) the limitations of perception, and b) what it means to say that it is extended.
A visual technology cannot create new abilities (unless perhaps it does so by altering the brain); rather, I think that we develop ways to exploit the visual skills that we have and extend them along certain axes. The account I want to give of imaging requires some historical situating; the story of imaging is, in part, a story about learning or developing ways to exploit our visual capacities. The visual is not everything; there is also an important story to tell about technologies. I take Patrick Maynard’s analysis of photography as a starting point; a lesson to be drawn from his examination is that photographs as products cannot be understood without photography as process.37 Similarly, medical images cannot be understood separately from imaging technologies. 37 Patrick Maynard, The Engine of Visualization: Thinking Through Photography (Cornell: Cornell University Press, 1997). There are enormous differences among these image producing technologies, which make it a challenge to give an account of imaging across the board. So while I will be attempting to do just that, there is a limit to the number of technologies one can discuss clearly and cogently in a document of this size. My discussion focuses on MRI and fMRI imaging of the brain and on ultrasonography in obstetrics; these are developed in case studies detailing specific aspects of imaging. The technologies I will be discussing are ones that capture a broad range of the features of imaging and the uses to which it is put. An obvious omission here is an extended discussion of any imaging technology involving x-rays. This is due in part to a desire to fill a gap – very little has been written about nuclear technologies like MRI and fMRI, and the contrast between the two technologies is interesting. The case studies also perform two important descriptive tasks. They examine how imaging is used in specific fields, and why.
Each case study details a visual issue within the context(s) of use of the technology, and interrogates the tasks and demands made of that technology to resolve the issue. So the case studies are meant to establish that, given these criteria, we treat images as means to see inside the body and that doing so is of great pragmatic value within that context. That we actually see inside the body, that we are really seeing a particular brain when we see it in an MRI, has to be supported independently. It is in this sense that it is important to closely consider how the technologies produce images that serve the functions detailed in the practices. The next question concerns how these images are used as evidence and how we should talk about this. The general approach in philosophy of science is to talk about evidence in terms of propositions. This kind of approach does not seem to capture how imaging is used. This becomes salient when we see the variety of ways that images, or representations, are made, and the variety of demands that are made of representations. Interpretation of images in medicine seems to demand greater rigour than interpretation in art; at stake is someone’s health and someone else’s livelihood. These, we see, are criteria that also go beyond the distinction between the semantics of images and the social context of use. The connections between these offer ways of understanding images more generally and of tracking their epistemic role in science.

2. Historical and Technical Overview

From Probes to Signals to Pictures

As the technologies I will be focussing on are what Bettyann Kevles calls “the daughter technologies of X-ray,” some discussion of x-ray is required of an account that treats them in this lineage.38 Wilhelm Roentgen’s discovery of x-rays was announced in December 1895.
This was mere weeks after he had first noticed that the cathode ray tube he was working with emitted rays that could penetrate some tissues, but not others, and that the shadows they cast could be captured on a photographic plate.39 The first x-ray image produced was of his wife’s hand, clearly showing the bones of her hand and a signet ring she was wearing, as seen in Figure 2.1.40 38 Kevles, Naked to the Bone, 3; also see William H. Oldendorf, The Quest for an Image of Brain (New York: Raven Press, 1980); Jose van Dijck, The Transparent Body: A Cultural Analysis of Medical Imaging (Seattle: University of Washington Press, 2007); Anthony Brinton Wolbarst, Looking Within (Berkeley: University of California Press, 1999). 39 Oldendorf, Quest, 7. 40 Ibid.

Figure 2.1: X-ray Image of a Hand, Wilhelm Roentgen, 1895: public domain.

Within a month of Roentgen’s announcement the first roentgenograms, as x-ray shadow images were called, of the head were produced.41 The desire to image the brain, to be able to bypass the skull in a non-surgical way to gain information about this mysterious and delicate organ, drove much of the medical development of imaging in the 20th century.42 X-ray images were quickly taken up for medical purposes; their ability to visualize bones, lungs, and other tissue made them revolutionary to medical practice. For the first time physicians had a non-surgical means to see inside the body. A number of different fields took up imaging in hopes of resolving medical debates and improving the care of their patients. X-ray images had many issues, from the dangers of radiation exposure to the difficulties of interpreting shadow images, but they taught the world an important lesson. While light reflecting off the surfaces of objects creates visual appearances, other kinds of radiation could be used to pass through the body to create images of what previously could not be seen. They created a desire for increased visibility.
Imaging technologies have several things in common across the board. Most important is that they rely on differences in tissues to produce the data for the images.43 Wolbarst, a radiologist, describes this relation: they share a fundamental communality of approach: they create medical images by following and recording, by some means, the progress of suitable probes that are attempting to pass through a patient’s body. The body must be partially, but only partially, transparent to the probes. If the probes slip right through bones and organs without interacting with them, like light through a pane of clear glass, no differences among the tissues can be visualized. Different probes, different interactions with the tissues, and different means of detecting the probes give rise to different images conveying different types of clinical information.44 41 Ibid., 9. 42 Ibid., 7. 43 Wolbarst, Looking Within, 8. 44 Ibid. I will follow Jim Bogen in calling the tissue characteristic measured the biological indicator.45 Different probes indicate different characteristics of tissue. In x-ray imaging this is something like tissue density; in ultrasound it is acoustic properties. The probe/biological indicator interaction is what is measured and recorded by the technology. I will call this the signal. The signal carries information about the tissue by carrying information about the tissue’s interaction with the probe. The usefulness of the signal data depends on the task to which it is put. For an imaging system to be useful for a task it must “be able to display the specific, distinctive aspects of a patient’s anatomy or physiology that are causing a problem and be sensitive enough to pick up even very faint signs of it.”46 In x-ray images signal information is recorded as luminosity values, where increased signal is mapped as increased luminosity.
In contemporary x-ray systems this is recorded and displayed on a monitor, while older versions used plates prepared with light sensitive material; the rays that passed through tissue affected photographic plates much as light does. Areas of low tissue density appear dark on these slides, while areas of greater tissue density appear whiter. Original x-ray images were discussed as shadows because they were literally the patterns of density cast on the photographic slide by the selective attenuation of x-rays by tissue. The signal in this case is x-ray attenuation; areas of high signal are whiter and areas of low signal are darker.47 All of the imaging technologies discussed here utilize this relationship between signal values and luminosity values. We cannot see these signals, but we can measure them in terms of luminosity values. In contemporary imaging, luminosity values per pixel represent signal values per voxel of tissue. For this reason I am going to go slightly outside of standard usage and refer to this relationship as a representational system: first, because the fact that this mapping relationship is representational turns out to be very important; and second, because it holds across the various modalities. 45 Jim Bogen, “Epistemological Custard Pies from Functional Brain Imaging,” Philosophy of Science 69, Supplement: Proceedings of the 2000 Biennial Meeting of the Philosophy of Science Association (2002): S59–S71. 46 Wolbarst, Looking Within, 7. 47 These plates were like negatives, so when they were printed, as in Figure 2.1, the areas of high signal appear dark. Later x-ray films and digital monitors do not do this, and in contemporary x-ray images high signal appears brighter, as discussed. I will be discussing the representational system that holds for the descendant technologies of x-ray as the signal→luminosity representational system.
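The mapping just described – signal value per voxel rendered as luminosity value per pixel, with higher signal shown brighter – can be illustrated with a minimal sketch. This is not any scanner vendor's actual pipeline; the function name and the sample attenuation values are illustrative assumptions.

```python
def signal_to_luminosity(signal_grid, lo=None, hi=None):
    """Map a 2-D grid of signal values to 0-255 grey-scale luminosity.

    signal_grid: rows of numeric signal values (e.g. x-ray attenuation
    per voxel). lo/hi set the display window: values at or below lo
    render black (0), values at or above hi render white (255).
    """
    flat = [v for row in signal_grid for v in row]
    lo = min(flat) if lo is None else lo
    hi = max(flat) if hi is None else hi
    span = (hi - lo) or 1  # avoid division by zero on uniform signal
    return [
        [round(255 * min(max((v - lo) / span, 0.0), 1.0)) for v in row]
        for row in signal_grid
    ]

# Hypothetical attenuation values: soft tissue (low) next to bone (high).
signal = [[0.2, 0.2, 0.9],
          [0.2, 0.9, 0.9]]
print(signal_to_luminosity(signal))  # high-signal (bone) voxels map to 255
```

The same windowing idea applies across the modalities discussed below: what varies between ultrasound, CT, and MRI is the signal being mapped, not the signal→luminosity convention itself.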
Often representational systems are discussed in terms of kinds of visual representations; maps, pictures, graphs, and models might all be different representational systems, and that is fine. The signal→luminosity system also falls into this category, and I will attempt to make my reasoning for that clear as we go along. Imaging types within the signal→luminosity representational system are defined by their signals, and signals provide information about determinate properties of materials. This is why ultrasound, MRI, CT, fMRI, and PET are all different imaging modalities. Moreover, different signals are useful for different imaging tasks. Roentgenograms turned out not to be very useful for imaging heads. X-rays do not penetrate the skull well, and while they are sensitive to bone, sensitivity to bone is not useful when we want to see inside the head. The experiments with x-rays suggested that other ways could be found of creating images. The problems encountered with x-rays – difficulty of interpretation on one side, burns and radiation poisoning on the other – increased the urgency.

Ultrasound

The biological indicator used in ultrasound is the acoustic properties of tissue. Different materials have different acoustic properties, so when the probe – sound – is introduced we can record how long it takes to bounce off the tissue; the signal recorded is echogenicity. Ultrasound works according to basic acoustical principles. Waves of sound are produced in a transducer; the transducer is placed on the skin and a narrow beam of pulses of very high frequency sound energy, 1 to 10 MHz, is directed into the body and swept back and forth. These waves travel in approximately straight lines until they hit a medium with different density and sound velocity.48 The waves then reflect or refract off the medium; they either bounce off or are partially absorbed by it.
The echo information is picked up by the transducer as signal, with the time of return of the echo being proportional to the depth of the interface that produced it.49 This interface between media is of particular concern to ultrasound and is how it manages to have good spatial resolution for imaging the shape of areas of different tissue. Echo intensity depends on the acoustic characteristics of the material on the two sides of the interface.50

History of Ultrasound

Like the brain, the occupied uterus was an early target for imaging, especially in hopes of finding an early way of confirming pregnancy.51 The problems with radiation exposure put limitations on the use of x-ray based modalities in obstetrics, creating a gap filled by ultrasonography. Ultrasound has its origins in the 1877 discovery of piezoelectricity by Pierre and Jacques Curie, in which mechanically distorted crystals produce electrical potentials – and, by the inverse effect, electrical potentials produce sound energy.52 Applications of this discovery to acoustic ranging were worked on early as ways to visualize the shapes of things under water. The first working system was developed around 1914, in part by Reginald Fessenden, a Canadian expatriate living in the United States inspired by the sinking of the Titanic, as a way of detecting the shapes of submerged icebergs. Fessenden’s system included the same basic components as contemporary ultrasound, with an electromagnetic coil oscillator that produced low frequency sound pulses and a receiver to capture the echo in order to determine the distance of the object the sound waves bounced off. 48 Oldendorf, Quest, 51. 49 R Blackwell, “The Basic Physical Principles of Real-time Ultrasound,” in Real-time Ultrasound in Obstetrics, ed. MJ Bennett and S Campbell (Oxford: Blackwell Scientific Publications, 1980), 1–27. 50 Wolbarst, Looking Within, 68. 51 Kevles, Naked to the Bone, 230. 52 Joseph Woo, A Brief History of Ultrasound in Obstetrics and Gynaecology, http://www.ob- (accessed Jan 10, 2009). Fessenden’s early sonar system could detect an iceberg 2 km away, but with poor spatial resolution.53 Sound travels at a constant speed in a uniform medium such as water or air, and this makes it a useful and reliable probe. Because of this, sound can be used to make measurements to determine distances between, as well as shapes of, different materials.54 The need for such measurements, and a machine to make them, became more pressing in World War I, when submarines were used for the first time and proved so problematic to the Allied forces.55 French physicist Pierre Langevin discovered that by running alternating current through a quartz transducer the crystals resonated with the alternating field, creating ultrasound.56 The real drive for SONAR, and funding for research and development, came during World War II, helped along by the discovery of ferroelectric materials, such as some kinds of ceramics, which allowed for more powerful transducers to be built which could record echoes. Between the wars, in the Soviet Union, developments in ultrasound technology were used in metal flaw detection, which exploited the acoustic differences of materials of different densities. The first attempt to use ultrasound diagnostically to image the human body was made in 1937 by brothers Karl and Friedrich Dussik, one a neurologist and the other a physicist. The goal, of course, was to image the brain. Their images, called ventriculograms, were meant to be images of the ventricles.57 The images, however, turned out to be artefacts of the image making process; the skull interfered with the imaging of underlying tissues, and between this failure and the onset of World War II, research into imaging of the human body with ultrasound came to an early end.58 53 Ibid. 54 Oldendorf, Quest, 50. 55 Ibid. 56 Kevles, Naked to the Bone, 232. 57 Ibid., 234. 58 Ibid., 234.
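The pulse-echo ranging principle underlying both sonar and medical ultrasound – echo return time proportional to the depth of the reflecting interface – reduces to d = c·t/2, the factor of two coming from the round trip. The sketch below assumes the standard soft-tissue approximation of 1540 m/s; the echo times are illustrative, not readings from any actual scanner.

```python
# Pulse-echo ranging: recover a reflector's depth from the round-trip
# time of its echo, d = c * t / 2 (out to the interface and back).

SPEED_IN_SOFT_TISSUE = 1540.0  # m/s, standard clinical approximation

def depth_from_echo(round_trip_seconds, c=SPEED_IN_SOFT_TISSUE):
    """Depth in metres of the interface that produced an echo."""
    return c * round_trip_seconds / 2.0

# Illustrative echo times for interfaces at increasing depths.
for t in (13e-6, 65e-6, 130e-6):
    print(f"echo after {t * 1e6:.0f} us -> depth {depth_from_echo(t) * 100:.1f} cm")
```

The same calculation with the speed of sound in seawater, rather than tissue, is what let Fessenden's system range icebergs; only the constant changes.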
World War II left behind surplus SONAR and metal flaw detection machines, which became useful to the development of ultrasound as a medical technology.59 The first B-mode scanner was built by American Douglas Howry in 1952 from old flaw detection machines. The first recorded echogram of tissue was published by Wild and Reid in their 1952 article “Application of Echo-ranging Techniques to the Determination of Structure of Biological Tissue.” This article concerns an apparatus developed to create two-dimensional pictures of tissues.60 In discussing the new apparatus and method of imaging they draw an analogy with needle biopsy: (Initial method) analogous to needle biopsy [this method is] designed to give a two-dimensional picture such as would be obtained by adding up the information from a series of needle biopsies taken in one plane x of a given tissue.61 This analogy plays a role in how they discuss the differences between the one-dimensional use of ultrasonography and the two-dimensional use, which they present as more theoretically than practically possible. While the one-dimensional use of ultrasonography could determine whether there were differences between two separately sampled tissue sites, the two-dimensional images show the outline of an area of different tissue in a certain plane. Their apparatus was similar in many ways to what we now think of as ultrasound machines: a piezoelectric crystal was set up to transmit ultrasonic pulses at 15 MHz and receive echoes, which would then generate an electric charge. These charges were amplified in such a way that they “modify a beam of electrons sweeping back and forth on the face of a television screen at such a rate as to take advantage of persistence of vision.”62 These states of the television screen could then be photographed for posterity.
While they considered their development to be more theoretical than practical, since the images they presented were quite unclear – a fact they blamed on their own inexperience with the apparatus – this was the beginning of the use of ultrasound for imaging tumours.63 B-mode (for brightness) remains more or less standard in ultrasound. It uses the signal → luminosity representational system on a two-dimensional plane; signal value is represented by luminosity value. 59 Ibid., 235. 60 J Wild and J Reid, “Application of Echo-ranging Techniques to the Determination of Structure of Biological Tissue,” Nature 115 (1952): 226–230. 61 Ibid., 226. 62 Ibid., 227.
Ultrasound in Obstetrics
By 1956 Wild and Reid had examined 116 breast tumours with their scanners.64 The use of ultrasound did not really become widespread until it was taken up by obstetrics, which had a particular set of needs for which this signal was particularly well suited. Contemporary sonography involves the production of real-time images of slices of tissue in the body. Perhaps the best known use of ultrasound is in obstetrics, where the pie-shaped images are familiar glances in utero at developing foetuses that figure in much popular media concerning pregnancy. The very best sonograms clearly show heads, limbs, fingers, and the beating hearts of foetuses, and even in unclear images some of these features are visible. Ultrasound is routine in obstetric medicine, used to confirm pregnancy and to establish gestational age. It is invaluable for detecting molar and ectopic pregnancies, which can be fatal, and for monitoring pregnancies with multiples, which also carry increased risks. The use of ultrasound in obstetric practice began in Glasgow with Dr. Ian Donald. Ultrasound is well suited to foetal imaging because of the watery environment in which the foetus develops. Seeing what was going on inside a pregnant woman’s body was also increasingly regarded as a pressing need.
Maternal mortality was quite high, and deliveries could be complicated by any number of factors, including babies in atypical positions, multiple births, or problems with the placenta. Knowing what was going on in utero could be medically beneficial in preparing for difficult deliveries or foreseeing potential medical problems.65 63 Ibid., 230. 64 Joseph Woo, A Brief History of Ultrasound in Obstetrics and Gynaecology. Donald first experimented with surplus metal flaw detectors on various tumours, cysts and other tissue types. In 1958 he published a paper, “Investigation of Abdominal Masses by Pulsed Ultrasound,” detailing the use of ultrasound in determining that a mass was an ovarian cyst and not cancer.66 By 1959 Donald and his colleagues had built early prototypes and discovered how to use ultrasound to measure foetal head size and to use that measurement to determine gestational age. This was significant because until then the main way of determining gestational age was through radiographs of the length of the foetal thighbone, which were harmful to the mother and the foetus.67 Later, crown-rump length became, and remains, an important measure of foetal age. The first mention of ultrasound in obstetrics textbooks was in 1970.68 By 1971 ultrasound could be used to assess gestational age from 8 weeks, to confirm multiple gestations, and to locate the placenta and diagnose difficulties such as placenta previa (which can cause fatal haemorrhaging).69 By the mid-1970s grey scale (a luminosity scale, instead of black and white) was being used to represent signal strength, resulting in greater resolution and a greater ability to discriminate between tissue types.70 The development of real-time imaging at rates above 14 frames per second was made possible by the computerization of ultrasound in the late 1970s and early 1980s.71 This allowed obstetricians to measure foetal heart rate and to see movement. Ultrasound is currently used in every aspect of obstetrics.
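The grey-scale representation mentioned above, in which signal strength is shown as luminosity, can be illustrated with a toy linear mapping. This sketch is illustrative only; real scan converters apply logarithmic compression and interpolation, and the function and values here are invented.

```python
def to_grey_scale(amplitudes, levels=256):
    """Map echo amplitudes linearly onto grey-scale luminosity values.
    Real B-mode processing uses logarithmic compression; this linear
    version only illustrates the signal -> luminosity convention."""
    lo, hi = min(amplitudes), max(amplitudes)
    span = (hi - lo) or 1.0  # avoid division by zero for flat signals
    return [round((a - lo) / span * (levels - 1)) for a in amplitudes]

# Strong reflectors (tissue interfaces) render bright; weak echoes dark.
print(to_grey_scale([0.0, 0.5, 1.0]))  # [0, 128, 255]
```

On a grey scale of 256 levels, the strongest echo in a frame maps to white, the weakest to black, and intermediate echoes to intermediate luminosities, which is what allows different tissue types to be discriminated.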
The most recent development in ultrasonography has been 3-D visualization, which produces volume-rendered reconstruction images of foetuses from ultrasound information. 65 Ibid. 66 Ian Donald et al., “Investigation of Abdominal Masses by Pulsed Ultrasound,” Lancet 1 (1958): 1188–1195. 67 Joan Lynn Arehart, “Sounding out the Womb,” Science News, December 25 1971, 424–425. 68 Kevles, Naked to the Bone, 243. 69 Arehart, “Sounding out the Womb,” 424. 70 EE Carlsen, “Grey Scale Ultrasound,” Journal of Clinical Ultrasound 1 (1973): 193. 71 Kevles, Naked to the Bone, 244. These images are often treated as novelty “first baby pictures” but have important clinical value. They are being used to confirm facial malformations that were once difficult to image, such as cleft palate, and other markers of chromosomal disorders such as Trisomy 21 and hydrops.72 This imaging technique constructs voxel-based volumetric data sets by placing each two-dimensional slice in its correct place.73 Smoothing of the data and various surface rendering techniques are used to examine different features of the foetus or embryo, particularly malformations and anomalous development.74
Image Quality
Ultrasound images show luminosity values for areas where different tissues interface; this creates a kind of luminosity map of these interfaces in a slice of the body, so the technique is not useful for areas with very similar tissue density. There are five established criteria for image quality in obstetric ultrasonography.
These are: 1) Detail or spatial resolution, “The ability to visualize minute anatomic detail.”75 2) Resolution of tissue texture and contrast; as one textbook points out, “Each tissue type ideally should have a different appearance from other tissues…The ability to perceive these differences depends on ‘contrast resolution’.”76 This pertains to the ability to clearly and easily differentiate between tissue types in an image. 3) Sensitivity, or the ability to visualize detail at depth – this is particularly important with heavier patients.77 4) Low noise; the same textbook describes this feature: Tissues, such as liquor amnii and urine, that do not produce echoes should appear on the display as echo-free areas. Spurious echoes, termed “noise”, are annoying and may cause misdiagnosis by the inexperienced operator.78 72 Roger Pierson, “Three-dimensional Ultrasonography of the Embryo and Fetus,” in Textbook of Fetal Ultrasound, ed. R Jaffe and T Bui (New York: Parthenon Books, 1999), 317–342. 73 Ibid., 319. 74 Eberhard Merz, “3D Ultrasound,” in Textbook of Fetal Abnormalities, ed. Peter Twining, Josephine McHugo and David Pilling (London: Elsevier, 2006), 483–494. 75 Z Friedman and R Jaffe, “Physics of Real-time Ultrasound Imaging and Doppler,” Textbook of Fetal Ultrasound, ed. R Jaffe and T Bui (New York: Parthenon Books, 1999), 1–22. 76 R Blackwell, Diagnostic Ultrasound in Obstetrics (London: Elsevier, 1993), 3. 77 Friedman and Jaffe, “Physics,” 3. 78 Blackwell, Diagnostic Ultrasound, 3.
Non-echogenic objects should appear “black,” as spurious echoes resulting from a variety of artefacts can make these areas appear echogenic.79 5) Consistency: the above should be uniform throughout the image.80 All contemporary machines allow beam width and pulse length for both transmission and reception to be adjusted to individual situations.81 Because ultrasonography is used as a routine diagnostic technology, textbooks on foetal ultrasonography tend to focus on the many things that go wrong in many places in utero. Often, sonograms of foetuses with particular malformations or congenital anomalies are presented next to photographs of the same foetuses after removal. This practice emphasizes the physiological features that suggest a problem, and how those might appear in two dimensions.
Magnetic Resonance Imaging
The brain has long presented a problem for the medical profession. Unlike other organs, which can yield information to touch, palpation, or, at the extreme, surgical examination, the brain is encased in the skull and susceptible to damage. It is then easy to understand why it was only a month after Roentgen’s 1895 publication announcing the discovery of x-rays that the first radiograms of the human head were done. Driven by the success of x-rays, techniques were developed to overcome the limitations of x-ray imaging and increase visualization of the brain. Early shadow radiography clearly showed differences between air, water, and bone, but its capacity for delineating structures in soft tissue was extremely limited. Techniques for increasing the visibility of brain structures included pneumoencephalography, a procedure whereby air was injected into the ventricles of the brain to displace cerebrospinal fluid, thereby making the ventricles visible in outline.82 The problem was that this procedure was excruciatingly 79 Friedman and Jaffe, “Physics,” 3. 80 Ibid. 81 Ibid. 82 Oldendorf, Quest, 22.
painful to patients while yielding little information about the structure or function of the brain. Another development was angiography, a procedure that involved injecting a radiodense solution into the blood vessels of the brain. Blood is only 2% more radiodense than either cerebrospinal fluid or brain tissue, so injecting a solution of greater radiodensity into the bloodstream allowed for images of the blood flow in the brain.83 While angiography was begun on cadavers within months of Roentgen’s announcement, injections into humans that yielded medically useful images did not occur for 30 years. This is because the radiodense solutions used, such as strontium bromide, were extremely toxic. After these attempts to make use of x-ray imaging proved to show aspects of brain structure or function that were of limited clinical value, further developments focused on slice-based representations to simplify structural analysis.84
Technical Overview
Magnetic resonance imaging is an imaging technology based around induced changes to hydrogen protons in the human body, and measurement of the properties of these protons in order to map out their spatial locations. It is more complicated than other technologies because there is no simple probe, but rather a process that creates signal within the body. Nuclear magnetic resonance involves complicated physics, so the description here will only be able to scratch the surface of all the factors involved, many of which are still being investigated. Hydrogen is useful for imaging in the context of the human body because water is present in all tissues of the body, and because hydrogen nuclei have only one proton.85 Any nucleus with unpaired protons or neutrons will have a magnetic moment, which is the intrinsic strength of the magnetic field of that nucleus.86 Nuclei also have a characteristic spin, or frequency at which they rotate, which is tied to the production of magnetism. In MRI the patient enters an MRI 83 Ibid., 25.
84 Ibid. 85 Westbrook et al., MRI in Practice, 3. 86 Ibid. machine, where the body is introduced to a strong electromagnetic field produced by the machine. This first field is called the principal field, B0, and is generally between 0.1 and 1.5 Tesla (T). Magnets up to 14 T have been built, and ultra-high-field magnets up to 8 T are in use in some research centers, although they have not yet been accepted into clinical practice. The purpose of such high-field magnets is to image sodium, carbon, oxygen, and other atoms.87 Normally nuclei point randomly in any direction in the body, but in a field of this strength the protons line up with the direction of the magnetic field, like compass needles – either parallel to the field if they are spin up, or antiparallel to the field if they are spin down.88 When the principal field is applied, the atomic nuclei begin to spin around the field. Introducing the magnetic field causes changes in the energy states of the protons; some become more energized and enter the higher-energy spin-down position, while most stay in the lower-energy spin-up position. MRI is based around what is called the Net Magnetization Vector, which is a relationship between the number of protons with spin-up and spin-down properties and the net magnetic moment of hydrogen.89 Radio frequency excitation pulses are then applied in a sequence, and the magnetic moments of the nuclei begin to wobble around the introduced field in a process called precession.
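The precessional behaviour just described is what later makes spatial localization possible, because the precessional frequency is proportional to field strength. The short calculation below uses the standard gyromagnetic ratio for hydrogen (about 42.58 MHz per tesla), a textbook constant assumed here rather than taken from the sources cited.

```python
# Gyromagnetic ratio of hydrogen, ~42.58 MHz per tesla: an assumed
# standard constant, not a figure from the texts cited in this chapter.
GYROMAGNETIC_RATIO_MHZ_PER_T = 42.58

def larmor_frequency_mhz(field_tesla):
    """Precessional (Larmor) frequency of hydrogen protons, in MHz,
    for a magnetic field of the given strength in tesla. The frequency
    scales linearly with field strength."""
    return GYROMAGNETIC_RATIO_MHZ_PER_T * field_tesla

# At a typical clinical field of 1.5 T, hydrogen precesses at roughly
# 63.9 MHz; at an ultra-high research field of 8 T, at roughly 340 MHz.
print(round(larmor_frequency_mhz(1.5), 2))  # 63.87
print(round(larmor_frequency_mhz(8.0), 2))  # 340.64
```

Because the frequency depends linearly on field strength, nuclei sitting at different points of a gradient field precess at slightly different frequencies, which is the basis of the localization discussed below.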
Because the magnetic moment of hydrogen is known, the resonant frequency of hydrogen in different magnetic field strengths can be calculated from the Larmor equation, using the gyromagnetic ratio of hydrogen, which is a constant.90 Resonance is achieved when the frequency of the applied pulse equals the precessional frequency of the nucleus.91 There are a number of different sequences, many of them pre-programmed into MRI machines, which apply different radio frequency pulses in different configurations of 90º and 180º pulses transverse to the principal field.92 87 Beth Orenstein, “Ultra High Field MRI: the Pull of Big Magnets,” Radiology Today 7, February 2006, 10. 88 Westbrook et al., MRI in Practice, 5. 89 Ibid., 7. 90 Ibid., 10. 91 Ibid. Once the pulse ends, receiver coils pick up signals from the tissue. The repetition time of the pulses (TR, as in Time to Repeat) and the echo time (TE, as in Time to Echo: the time between the end of the pulse and the first signal) determine the weighting of the signal and what kind of image will be produced.93 There are a number of different measurements in MRI that provide different kinds of information about the tissues imaged – and serve as contrasts in the imaging process. Proton or spin density can be measured, but other spin relaxation times are more typically used to define the parameters of an MRI image. While there are now a variety of sequences used and measurements made in MRI, there are two main ones. The first is called T1 spin relaxation (or spin-lattice relaxation), and the second is T2 relaxation (or spin-spin relaxation). T1 is the time it takes for 63 percent of the nuclei to achieve equilibrium after the radio frequency pulse is switched off. That is, starting from Mz = 0 immediately after the pulse, T1 characterizes the time it takes for the longitudinal magnetization to return towards equilibrium.
T1 for pure hydrogen is 4 seconds in a 1.5 T field; that is the time it takes 63% of the nuclei to achieve equilibrium.94 In brain imaging, it is significant that T1 is different for fat, soft tissue and cerebrospinal fluid – the differences in the relaxation times of these tissues will make them appear different in the image. T2 measures the decay of the signal from the resonant state as 63% of the population of nuclei achieves equilibrium. It is described by the time it takes for the magnetization in the transverse X and Y planes to decay away as the system returns towards M0.95 These relaxation times are useful for imaging because different kinds of tissues have different relaxation times. Tumours, for example, have a higher relaxation time than regular tissue.96 The same tissue can appear bright in a T1 weighted image and dark in a T2 weighted image, and vice versa. 92 William Oldendorf and William Oldendorf Jr., Basics of Magnetic Resonance Imaging (Boston: Martinus Nijhoff, 1988), 73. 93 Westbrook et al., MRI in Practice, 25. 94 Oldendorf and Oldendorf, Basics, 78. 95 Westbrook et al., MRI in Practice, 27. 96 Raymond Damadian, “Tumour Detection by Nuclear Magnetic Resonance,” Science 171 (1971): 1151–1153. Areas with long T1 appear dark and areas of short T1 appear bright in a T1 weighted image. Areas of short T2 appear dark and those of longer T2 appear bright on a T2 weighted scan. This makes the different kinds of images better for particular purposes: T1 weighted images are better for anatomical detail, while T2 weighted images are better for showing differences between normal tissue and tumours.97 Lesions show up as dark on T1 weighted scans, while they appear bright on T2 weighted scans.98 If you spin a compass needle with your finger, it wobbles around and then returns to rest; this is similar to what happens in MRI. When the magnetic field is interrupted (you let go of the compass needle), the nuclei return to their original position.
During excitation the nuclei absorb some of the extra energy, so more nuclei will be in the higher energy state; when the pulse ends, this energy is released as they relax. Because the resonant frequency of nuclei depends upon the strength of the magnetic field, the different resonant frequencies in a gradient field can help to localize the nuclei within that field. To explain this localization more clearly: the principal magnetic field follows a Z-axis that runs head to toe along the body, and this field has a gradient – it is stronger on one side than on the other. To obtain slice information, radio wave excitation pulses are applied on a transverse X-Y axis perpendicular to the principal field at 90° and 180°.99 This provides localized spatial information about the spin properties of hydrogen nuclei across a slice of tissue, based on their position in the gradient field. The spin echo pulse sequence described here is the most common in MRI imaging. The technical details of differences in pulse sequences are beyond the scope of this work. The application of all the gradients determines the slice to be imaged. It produces a frequency shift along one axis of the slice (the number of times the magnetic moments cross the receiver) and a phase shift (position along the precessional path) along the other, so that the position of the signal in the slice can be identified.100 97 Westbrook et al., MRI in Practice, 27. 98 Oldendorf and Oldendorf, Basics, 128. 99 Ibid., 73. The time between these excitation pulses per slice is the repetition time, TR, which is part of the frequency measurement in the scan.
K-space and Fourier Transforms
Signal information is recorded in arrays in the computer in what is called K-space.101 The information is recorded in radians per cm, that is, in terms of spatial frequencies.102 K-space is a collection of lines103 which are filled according to gradients.
In a 256 phase matrix (256 x 256) the highest line will be +128 and the lowest line will be −128.104 The gradient determines which line will be filled with data as the scan progresses. When the K-space is filled, each line will contain as many data points as there are lines, so the grid is even.105 K-space is information space but is not itself an image; each data point contains information about phase and frequency for the entire scan. The centre of K-space holds high-signal, low-resolution information, while the outside holds low-signal, high-resolution information.106 In order to produce an image, Fast Fourier Transforms convert the frequency and phase information at each data point into amplitude, which is then correlated with brightness on the grey scale, and this luminosity value is assigned to a pixel on an image matrix the same size as the K-space.107 The balance between signal intensity and resolution is important for image quality. Resolution matters, but because there can be no resolution of features smaller than the pixel size, there is a point where it has to be balanced against signal information.108 100 Westbrook et al., MRI in Practice, 81. 101 Ibid., 82. 102 Ibid. 103 Spiral K-space filling techniques are also used but these are more common with Echo planar pulse sequences and fMRI. 104 Westbrook et al., MRI in Practice, 82–86. 105 Ibid., 86. 106 Ibid., 94. 107 Ibid., 90. 108 Ibid., 88.
A Brief History of MRI
As the history of MRI is discussed in great detail in one of the case studies, the overview here will be both brief and technical. The history of MRI officially began with the co-discovery of the magnetic resonance (nuclear magnetic resonance) signal by Bloch and Purcell in 1946.109 Bloch was working on properties of liquids at Stanford and Purcell’s team was working on solids at Harvard, so neither was aware of the work the other was doing.
Bloch heard about the research coming out of Harvard weeks after his discovery of the magnetic resonance signal, and the results of both groups were published in the January 1946 edition of Physical Review.110 What they discovered was that when atoms were placed in a magnetic field, and the field was then removed, they gave off energy which could then be recorded. This is the basis of the MRI signal. The magnetic resonance signal was exploited in different ways: examining the different relaxation times of chemically different materials, in what was called magnetic resonance spectroscopy, was a way of determining the chemical properties of those materials. Erwin Hahn discovered the spin-echo phenomenon, which improved the information obtained about nuclear relaxation times.111 Richard Ernst was the first to use Fourier analysis to extract information about the amplitude of signals, although he did not use it for imaging.112 The use of nuclear magnetic resonance for medical purposes was concurrently developed by a number of people. Raymond Damadian, an American physician, performed a study in which he found that the T1 relaxation time of tumours was significantly higher than that of normal tissue.113 He suggested ways in which nuclear magnetic resonance could be used medically in order to detect cancerous tissue. He saw this in terms of being able to detect the presence of 109 Erwin Hahn, “Felix Bloch and Magnetic Resonance,” Bulletin of Magnetic Resonance 7 (1984): 1. 110 Ibid., 3. 111 Richard Ernst, “Ernst, Richard R.: The Success Story of Fourier Transforms in MRI,” Encyclopedia of Magnetic Resonance, ed. R. K. Harris and R. E. Wasylishen, John Wiley: Chichester. DOI: 10.1002/9780470034590.emrhp0051. (published March 15 2007), 1. 112 Ibid., 4. 113 Raymond Damadian, “Tumour Detection by Nuclear Magnetic Resonance,” Science 171 (1971): 1151–1153.
Paul Lauterbur, a chemist, began working on how nuclear magnetic resonance could be harnessed as an imaging technology. Physicist Peter Mansfield was also working on imaging, using magnetic resonance to do crystallography.114 Lauterbur’s first published paper on the subject, “Image Formation by Induced Local Interaction; Examples from NMR,” opens with a discussion worth quoting in full: An image of an object may be defined as a graphical representation of the spatial distribution of one or more of its properties. Image formation usually requires that the object interact with a matter or radiation field characterized by a wavelength comparable to or smaller than the smallest features to be distinguished, so that the region of interaction may be restricted and a resolved image generated. This limitation on the wavelength of the field may be removed, and a new class of images generated, by taking advantage of induced local interactions.115 Lauterbur called his new images zeugmatograms and suggested that the process by which they were produced be called zeugmatography, from the Greek word for “that which is used for joining.”116 His work revolved around using gradient fields of different and higher magnetic strength, introduced perpendicular to the principal field, to knock the nuclei out of phase so that measurements could be made of how quickly they returned to a stable state.117 This process is close to how contemporary MRI works. Lauterbur used the same kind of back projection technique for image reconstruction that was used in CT. More about this later. He recorded T1 information from four projections and then calculated it in a 20 x 20 square image matrix; his representation was made by shading in from the contours provided by the reconstruction algorithm.118 By contrast, contemporary MRI imaging uses a 512 x 512 matrix. 114 Kelly Joyce, Magnetic Appeal: MRI and the Myth of Transparency (Ithaca: Cornell University Press, 2008), 30.
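The back projection reconstruction that Lauterbur borrowed from CT can be illustrated on a tiny grid. The sketch below is a deliberately simplified, unfiltered version using only two orthogonal projections; a real reconstruction uses many angles and a filtering step, and the numbers here are invented.

```python
def back_project(row_sums, col_sums):
    """Unfiltered back projection from two orthogonal projections:
    each cell accumulates the projection values of the row and column
    passing through it. Real reconstructions use many angles and a
    filtering step; this only shows the principle."""
    return [[r + c for c in col_sums] for r in row_sums]

# A 4x4 "object" with a single bright voxel at row 1, column 2
# projects to these row and column sums:
row_sums = [0, 1, 0, 0]
col_sums = [0, 0, 1, 0]

reconstruction = back_project(row_sums, col_sums)
# The brightest cell in the back projection sits at the voxel's true
# position, though smeared along both projection lines.
brightest = max(
    ((i, j) for i in range(4) for j in range(4)),
    key=lambda ij: reconstruction[ij[0]][ij[1]],
)
print(brightest)  # (1, 2)
```

Smearing each projection back along its line concentrates intensity where the lines cross, which is why adding more projection angles sharpens the reconstructed contours that Lauterbur shaded in by hand.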
115 Paul Lauterbur, “Image Formation by Induced Local Interaction; Examples from NMR,” Nature 242 (1973): 190–191. 116 Ibid., 190. 117 Ibid. 118 Ibid. During the same period, British physicist Peter Mansfield was experimenting with the use of nuclear magnetic resonance for diffraction studies in solids.119 Mansfield was also producing images of a sort from nuclear magnetic resonance data but was not pursuing imaging in particular as a way of presenting or collecting data. He began to pursue imaging more directly after he was introduced to Lauterbur’s paper, and so a kind of race began for human body scanning.120 At this time, quite a lot was up in the air. From our historically advantageous position, the clear images of tissue that MRI produces seem almost inevitable, but in the early 1970s it wasn’t even clear if a magnet could be made big enough to scan a body. The first MRI scanner that could fit a human body was built by Damadian in 1977. It was called “Indomitable” after the seven years of work that his lab had put into it.121 This did not yet settle any controversies, since Damadian was using a different style of scanning than the others. He had envisioned the role of nuclear magnetic resonance in cancer detection, but did not pursue imaging the way that Lauterbur did, and the gradient approach ended up being the one accepted (and the one that later won Lauterbur and Mansfield the Nobel Prize). Moreover, this only sparked a number of new controversies once there was technology capable of utilizing magnetic resonance for medical imaging. How the images would look, who would use them, and who controlled the technology as it moved into the medical field became sites of controversy. Magnets are built to obtain as homogeneous a strength as possible across the magnetic field. They are either closed or (more recently) open scanners, consisting of gradient coil magnets.
Because the gradient coils are switched rapidly during each slice selection, they vibrate, giving MRIs a characteristic “machine gun” sound which is generally considered unpleasant.122 Some companies, such as Fonar, have started to create machines that are less claustrophobia-inducing and less noisy.123 119 Joyce, Magnetic Appeal, 32. 120 Ibid., 34. 121 Sonny Kleinfield, A Machine Called Indomitable (New York: Times Books, 1985), 12. 122 Oldendorf and Oldendorf, Basics, 80. MRI provides non-invasive, safe, and high-resolution images of anatomical features of the body. It is also an extremely flexible imaging technology, as slices can be taken along any plane, although the most common remain the coronal, axial, and sagittal planes.
Image Quality and Use
Good MRI images have a high signal-to-noise ratio, good contrast, and a lack of artefacts. Some features of image quality, such as tissue contrast, depend upon the task, and these are mostly decided by how the images are weighted. T2 weighted images have less detail about white matter and grey matter in the brain, but make tumours easier to see. Artefacts are image features that look like signal but are not. They can range from acceptable to completely unacceptable, depending on how much they interfere with the quality of the picture. Some pulse sequences are more prone to artefacts than others – fast scanning techniques and fat suppression sequences tend to have more artefacts. Artefacts can also result from physical problems with the magnet, thermal noise (as in T2*), or movement. Movement can be particularly problematic in MRI, and scanners are generally built both with movement-suppressing physical structures (such as head restraints and bite bars) and with movement-correcting software. A great deal of the technical research on MRI, since its beginning, has been on artefact reduction. Learning to read MRI images is done alongside learning anatomy in medical school.
Case studies where diagnosis is made using a particular imaging modality are used to guide students in developing descriptive language around the images and in using imaging in the course of diagnosis.124 MRI scans can be used to detect lesions in brain tissue, such as brain haemorrhage or tumours, but they are also used diagnostically in different ways. For example, MRI is used to diagnose Multiple Sclerosis. While symptomatology guides the decision to make MRI images, changes in the location of lesions over multiple MRIs performed at different times are evidence of the progression of the disease.125 123 Fonar 360 web site (accessed January 20, 2009). 124 M Tintore et al., “New Diagnostic Criteria for Multiple Sclerosis,” Neurology 60 (2003): 27–30. Similarly, in diagnosing Alzheimer’s, MRIs have proved useful in establishing abnormal parameters for the size of the hippocampus that can be telling of the course of the disease.126 So MRI can be useful in learning to distinguish normal from abnormal anatomical features of the brain and in using those distinctions in diagnostic assessments.
Functional Imaging with MRI
Functional MRI is a later development of MRI that continues to be used as a way of imaging functional activity in the brain. fMRI utilizes the fact that deoxygenated haemoglobin is more strongly magnetic than oxygenated haemoglobin to produce a signal correlated with brain blood flow. Paradoxically, metabolic activity is followed by a compensatory increase in oxygenation after 1–2 seconds.127 These features are the basis of the BOLD contrast signal measured in fMRI.128 In 1990, Ogawa and his team, working at AT&T Bell Labs, discovered that deoxygenated haemoglobin could act as a contrast agent in T2* weighted MRI images. In their initial theoretical work on imaging with BOLD, they correlated the BOLD signal with increased demand for metabolic activity.129 Deoxyhemoglobin is paramagnetic, which means that it can act as a contrast agent in MRI scans.
It produces inhomogeneity in T2* weighted images, where it generates a lower signal; this means that the T2* signal intensity increases during the compensatory spike in oxygenation that follows metabolic activity. Brain tissue relaxation time is influenced by the oxygenation of blood due to T2* effects. T2* is the time it takes for the transverse magnetization to decay to 37% of its original magnitude. It is characterized by principal magnetic field inhomogeneity and loss of transverse magnetization at a rate greater than T2. 125 Ibid., 27. 126 T Stoub et al., “Hippocampal Disconnection Contributes to Memory Dysfunction in Individuals at Risk for Alzheimer’s,” Proceedings of the National Academy of Sciences of the United States of America 103, no. 26 (2006): 10041–10045. 127 GK Aguirre and M D’Esposito, “Experimental Design for Brain MRI,” in Functional MRI, ed. C Moonen and P Bandetti (Berlin: Springer-Verlag, 2004): 366–380. 128 S Ogawa et al., “Brain Magnetic Resonance Imaging with Contrast Dependent on Blood Oxygenation,” Proceedings of the National Academy of Sciences of the United States of America 87, no. 24 (1990): 9868–9872. 129 Ibid., 9869. It is caused by local inhomogeneity in the magnetic field arising from paramagnetic interference, uneven magnetic fields, and molecular interactions. The increase in the BOLD signal is correlated with an increase in blood flow to an area, which in turn is correlated with increased metabolic and neuronal activity. Because fMRI measures changes in the BOLD signal, images are taken every 1 to 3 seconds in order to compare levels of signal over time. They are recorded in slices, with voxels of between 3 and 4 mm per side, whose signals are recorded in K-space and imaged in much the same way as with MRI. The pulse sequence is echo planar instead of spin echo; it uses a single radio frequency pulse to capture all of the information needed for the image in a single reading.
For this reason imaging a slice takes 100 ms, as opposed to the longer times for the spin echo pulse sequence described above.130 A result of this is a significant loss of resolution along the phase gradient axis. fMRI is considered to have very good spatial but not very good temporal resolution of signal compared to other measures of brain activity such as EEG. Another significant difference from MRI is in how the images are used.
Statistical Issues in fMRI
fMRI has a high rate of noise over time, which makes it difficult (at least using BOLD) to study slow changes. It is also susceptible to a range of artefacts. Many are the same as artefacts in MRI, but there are additional problems with fMRI that arise from the particularities of the imaging tasks. In a standard fMRI study, thousands of images are produced as raw data, sometimes hundreds per subject.131 Many studies, especially those examining regional activation, compare the results from numerous subjects in an attempt to correlate localized brain activity with mental states, to establish normal patterns of activation, and to study disorders such as strokes.132 130 Westbrook et al., MRI in Practice, 128. 131 Marcus Raichle, “Behind the Scenes of Functional Brain Imaging: A Historical and Physiological Perspective,” Proceedings of the National Academy of Sciences of the United States of America 95 (1998): 765–772. The signal time course can then be correlated with voxels in the region of interest, and the differences averaged to obtain statistical results from the group and to eliminate outliers.
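The statistical comparison described above can be sketched for a single voxel. This is an invented toy example, not a description of any particular study's analysis; real fMRI analyses model the haemodynamic response and correct for multiple comparisons across thousands of voxels.

```python
from statistics import mean, stdev

def two_sample_t(task, control):
    """Two-sample t statistic (equal-variance form) comparing a voxel's
    BOLD signal during task blocks against control blocks. A sketch of
    the principle only: real fMRI analysis fits a model of the
    haemodynamic response and corrects for multiple comparisons."""
    n1, n2 = len(task), len(control)
    pooled_var = (((n1 - 1) * stdev(task) ** 2 +
                   (n2 - 1) * stdev(control) ** 2) / (n1 + n2 - 2))
    se = (pooled_var * (1 / n1 + 1 / n2)) ** 0.5
    return (mean(task) - mean(control)) / se

# Invented signal intensities for one voxel (arbitrary units):
task_blocks = [103.1, 102.8, 103.5, 103.0, 102.9]
control_blocks = [100.2, 100.0, 99.8, 100.3, 100.1]

t = two_sample_t(task_blocks, control_blocks)
print(t > 2.0)  # a large t marks the voxel as "active" -> True
```

Repeating a comparison of this kind for every voxel is what yields the thresholded activation maps described in the text, and is also why the multiple-comparisons problem looms so large in fMRI statistics.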
Often these regions of interest, or activation blobs, are labelled according to a neuroanatomical coordinate system such as Talairach coordinates.133 These coordinate systems work by situating anatomical structures in the brain according to an “atlas.” In the Talairach system the anterior and posterior commissures are used to divide the brain into 12 volumes in order to locate specific structures in images. The atlas uses histological slices of a post-mortem brain as a reference in developing the coordinate system into what is known as a brain space. This feature of fMRI and problems with the Talairach atlas will be discussed later in the fMRI case study.134 Imaging with fMRI cannot be separated from the kinds of tasks used to determine metabolic activity. The first fMRI experiment measured BOLD contrast changes in the visual cortex in response to light stimulation and darkness.135 The increase in metabolism in visual cortex area V1 was known, through PET studies, to be significant in this kind of condition.136 The findings of the experimenters replicated the findings of PET studies, but without the use of radioactive isotopes as contrast agents.137 Functional imaging pioneer Raichle suggests that it was in fact the involvement of cognitive psychologists in the 1980s that brought functional imaging into center stage as a research tool.138 The change was in developing experimental designs to isolate the signal for a particular task.
132 PM Matthews and P Jezzard, “Functional Magnetic Resonance Imaging,” Journal of Neurology, Neurosurgery and Psychiatry 75 (2004): 6–12.
133 J Talairach and P Tournoux, Co-planar Stereotaxic Atlas of the Human Brain: 3-Dimensional Proportional System – an Approach to Cerebral Imaging (New York: Thieme Medical Publishers, 1988).
134 Toga et al., “High Resolution Anatomy from in situ Human Brain,” NeuroImage 1 (1994): 334–344.
This involved imaging the brain during a task
state and during a control state, and this is still the prototypical study with fMRI.139 The images can be compared against each other to examine differences in the region of interest between the two states, or the two task states can be presented graphically as a curve.140 The effects of the task signal (light shining in the eyes) can only be measured as compared to the control signal (darkness). In the first fMRI study, both images and graphs are presented; because many images are produced during a study, only some can be reproduced in the article.141 Functional imaging is often used in different contexts than structural imaging. Until recently it has more often been used in cognitive neuroscience and cognitive psychology research than diagnostically by physicians.142 In the development of PET and fMRI the input of cognitive psychology was methodologically invaluable in designing experiments to isolate particular cognitive activities so that the functional activity of the brain could be imaged.143 A host of problems have recently been raised about the misuse of fMRI, from attempts to image voter preference in US swing states to ethical concerns over the use of fMRI as a lie detector.144 These issues have been around since fMRI was invented, but there do seem to be interesting changes in the use of the technology that do not all raise the same problems.
135 Kwong et al., “Dynamic Magnetic Resonance Imaging of Human Brain Activity During Primary Sensory Stimulation,” Proceedings of the National Academy of Sciences of the United States of America 89, no. 12 (1992): 5675–5679.
136 Kwong et al., “Dynamic MRI,” 5675.
137 Ibid.
138 Raichle, “Behind the Scenes,” 766.
139 Ibid.
140 Aguirre and D’Esposito, “Experimental Design,” 374.
141 Kwong et al., “Dynamic MRI,” 5676.
142 Aguirre and D’Esposito, “Experimental Design,” 378.
143 Raichle, “Behind the Scenes,” 780.
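The prototypical task-versus-control design just described can be sketched as a simple subtraction on a block-design time course. The block lengths, signal levels, and noise below are invented for illustration only:

```python
import numpy as np

rng = np.random.default_rng(1)

# One voxel's signal over 60 images, alternating 10-image blocks of
# control (darkness) and task (light), with a small BOLD increase on task.
block = np.repeat([0, 1, 0, 1, 0, 1], 10)           # 0 = control, 1 = task
signal = 100.0 + 2.0 * block + rng.normal(0, 0.5, 60)

task_mean = signal[block == 1].mean()
control_mean = signal[block == 0].mean()

# The task effect is only meaningful relative to the control condition.
print(round(task_mean - control_mean, 1))
```

The subtraction of the two condition means is the simplest form of the comparison the text describes: the task signal (light) is measured only against the control signal (darkness).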
144 Greg Miller, “Growing Pains for fMRI,” Science, June 2008, 1412–1414.

3. Representation and Vision

In the last chapter I examined the representational system shared by these medical imaging technologies. In the signal → luminosity representational system, determinate information from a voxel of tissue is translated into luminosity values per pixel in an image. I also examined the interdisciplinary technological development that lies behind medical images, as well as some of the imaging tasks these instruments were developed to solve. The survey in the last chapter also allowed for a consideration of how these images are used in particular contexts. I hope it showed well enough that the visual aspect of imaging is its most significant; the value of medical imaging, within medicine, seems to lie in the images allowing us to “see” inside the body. There is a pair of entwined representational ideals that can be drawn from the histories and the co-development of the imaging technologies with imaging tasks. The different probes, and the different signals they produce by interacting with biological indicators, are medically significant and of interest in two ways. First, there is the idea of signal as a material measure of determinate properties. We cannot see how dense tissue is, or what its acoustic properties are; however, we can measure these features using a probe, and represent them using luminosity values. In some ways this is not terribly different from a Western blot, where different hues represent different levels of protein. On the one hand, medical images are an array of luminosity values which provide information, on a voxel-to-pixel basis, about a slice of tissue. On the other hand, these luminosity patterns are also pictures. Here I mean pictures in the rather ordinary sense of images that look like something. We are able to have visual experiences of foetuses when we see them in ultrasound images.
Furthermore, our seeming to see foetuses in the images does not seem to be purely epiphenomenal; our ability to see things in images seems central to their development and their use. These are exactly the two sides that must be reconciled in an account of medical imaging. When we look at medical images we see a two-dimensional array of luminosity values; however, we also have a visual experience as of brain slices, placenta, or bones. This may also be true of digital photographs of these objects. The difference is that photography, in so far as it is a measuring device, measures light: ambient light, and light reflecting off a surface. Seeing visual objects in images requires explanation: a theory of pictorial experience. Sometimes a theory of visual experience is part of a theory of depiction, but it does not have to be. Medical imaging makes different demands. The images elicit visual experience, like photographs; unlike photographs, we also need to understand the luminosity values as representing tissue properties that are not generally visible to us. I am beginning with the hypothesis that if we want to explain the role of vision in imaging technologies we should begin by examining the experiential side of imaging. I have briefly examined the use of imaging technologies in measuring and representing bodies; I have also examined the way they have been developed in medical contexts. Now this must be tied to vision: we must examine what it means to say that we see bodies in the images. In this chapter, I take up a discussion from the literature in philosophy of art on the nature of pictorial experience. I will use this discussion as a way to first analyze, and eventually reconcile, the two competing ways in which images represent: as measures of signal value, and as pictures sustaining object-presenting experiences. The most important outcome of this is to begin to define what visual capacity medical imaging extends, and how it does so.
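The signal → luminosity system described above, one measured value per voxel rendered as one grey level per pixel, can be sketched as a simple linear window mapping. The window bounds and the toy signal values here are arbitrary assumptions, not the mapping of any particular scanner:

```python
import numpy as np

def signal_to_luminosity(signal, low, high):
    """Linearly map signal values in [low, high] to 8-bit grey levels."""
    clipped = np.clip(signal, low, high)
    return np.round((clipped - low) / (high - low) * 255).astype(np.uint8)

# A toy 2x3 "slice" of measured signal values (arbitrary units).
slice_signal = np.array([[0.0, 250.0, 500.0],
                         [750.0, 1000.0, 1250.0]])

print(signal_to_luminosity(slice_signal, low=0.0, high=1000.0))
```

Different choices of window select which tissue properties become visible as contrast, which is part of why the same signal data can yield quite different-looking pictures.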
I argue that medical images should be thought of as vehicles for seeing-in.

Seeing-in

When I was a child my family would drive, a couple of times a summer, to our cottage on a lake in the Muskoka region of Ontario. My twin and I would watch rapt out the windows of the car, waiting for the first appearance of the area’s lakes and granite cliffs. Inevitably, having children’s sense of time, we jumped the gun on our identification of lakes and mountains. Every year one of us would point out the window and ask our parents which lake that was in the distance, pointing out what our parents would have to explain were just cloud formations that looked like lakes and mountains. This was both disappointing and fascinating, and we would pass a huge amount of time trying to see the clouds as clouds and then trying to see them as mountains again. This phenomenon is called seeing-in by Richard Wollheim and is widely discussed in the philosophical literature on the experience of representational art.145 Seeing-in is thought to be a natural perceptual capacity that precedes representation and is considered commonplace. Humans have the ability to see lakes and mountains in clouds, to see faces in rock formations, to see landscapes in damp walls, or to see faces in grilled cheese sandwiches, as in the famous Virgin Mary sighting. Wollheim differentiates between seeing-in and seeing-as: we see clouds as turtles, but we see northern Ontario landscapes in clouds.146 It is not just the ability to see one thing as another, but to see one object whose surface elicits a visual experience as of another, or in whose surface features we recognize a scene or object. This does not seem to require training. In an anecdotal experiment, psychologist John Kennedy kept his child away from pictures until the age of two and the child still had no problem with picture perception.147 Neither are humans the only species that has this ability.
Horses and pigeons seem to be able to see things in pictures as well.148
145 Richard Wollheim, Art and Its Objects, 2nd ed. (Cambridge: Cambridge University Press, 1980).
146 Ibid., 224–225.
147 John Kennedy, A Psychology of Picture Perception (London: Jossey-Bass Publishers, 1974).
148 Joel Fagot, ed., Picture Perception in Animals (Philadelphia: Psychology Press, 2000).
Humans are, however, the only species to have put this visual peculiarity to intentional work. We are the only ones who develop ways of creating differentiated surfaces to elicit visual experiences of subjects that are not in front of us. Divine intervention aside, there are numerous more mundane ways in which this phenomenon has been exploited by humans. Wollheim implicated it in our ability to see things in representational pictures such as paintings and drawings. Others have extended seeing-in to explain representational photographs and film. There is no reason to think that seeing-in in crafted images is a separate ability from seeing-in in grilled cheese sandwiches. Intuitively, the difference is normative. There are standards of correctness for seeing things in our crafted images. When I see a photograph of Obama there is something that makes it correct that I see Obama in the picture and not his wife. While we might see faces in clouds and figures in rocks, there are no standards of correctness for those perceptions. Seeing a man in a differentiated surface does not make it a representation. A marked surface pictorially represents a man if we see a man in the marks and some standard of correctness tells us that it is right that we see a man there.
The intention of the artist in the case of handmade pictures, and perhaps the causal history in the case of photography, gives us standards of correctness to see some object or scene in that surface.149
149 For a discussion of the artist’s intentions see Wollheim, Art and Its Objects, 207.

Components of Pictorial Experience

There is wide agreement that representational pictures are mimetic: they are somehow like the things that they represent. This is what we mean when we say a portrait looks just like someone. The problem is in explaining what this visual experience amounts to, and how it relates to our ordinary visual experience of objects. What does it mean for a flat, painted surface to look just like a person? In order to begin an analysis of pictorial experience it is important to make some important distinctions concerning pictures. First, pictures are two-dimensional marked surfaces. Some pictures, the ones that are representational, also have subjects. The subject of the picture is the object or scene that it is correct to see in the picture. The design of the picture is the series of marks on the surface that give rise to the experience of the subject. Some surface marks will, and some will not, be part of the design and so play a role in our experience of the subject and what the picture says about it. The content of a picture is what it says about, or attributes to, its subject. This is different from the content of our experience of the picture, since the content of our experience could have all sorts of things in it not related to the picture. If I am drunk or half-asleep the content of my experience of the picture will be affected by this.150 Wollheim thought that there was a phenomenological incommensurability between seeing-in and straightforward seeing.
Some accounts of pictorial experience discuss seeing-in as though it were just the same as seeing face to face, as if pictures manipulate our visual systems in such a way that they trigger visual responses. So-called illusion accounts explain pictorial experience by appeal to the response of our visual systems to certain visual cues. Unlike real visual illusions, however, pictures do not normally lead us to believe that we are seeing what they depict. So we might think that pictures fool our eyes when we see them as pictures, but that we are also able to see their designs and not be taken in. There are cases of trompe l’oeil pictures, and it can be disconcerting to have a visual experience you took to be veridical turn out to be a cleverly rendered image. I once took a paste-up of a giant electrical switch under a Vancouver bridge to be a giant electrical switch, despite the unlikeliness of that (see Figure 3.1). My belief in what was before my eyes overrode background knowledge.
150 Dominic Lopes, Sight and Sensibility (Oxford: Oxford University Press, 2006), 25.
Approached in admittedly non-ideal lighting, the trompe l’oeil effect was so cleverly done that it took standing at an angle where I could see that it was a flat, printed surface before I realized that it was a paste-up. However, once one has seen through the illusion, as it were, one is unlikely to be taken in by it again. Illusion accounts do not entail that one has false beliefs, but rather that the visual system is taken in by particular cues that artists have learned to exploit over the years. While this kind of account seems as if it explains some of our experiences with pictures, it does not explain them all. A problem with this sort of view is that if we try to see the marked surface as the design, instead of viewing it illusionistically, we cannot see the subject. If we view the subject, the design ceases to be a design.
The illusion account then loses its ability to explain how the marked surface can give rise to a subject-presenting experience.
Figure 3.1: Cameraman Light Switch Paste-up. Photograph courtesy of Andrew Ferguson.
There are two sides to be considered here. The first is that pictures of things are often used to stand in for the things themselves; in this way seeing something in a picture is taken to be like seeing it face to face. Many psychological tests use pictures for object recognition,151 for sorting tasks, or for perception tests. Functional imaging studies of perception almost inevitably use small screens to display pictures of objects.152 We also know that people who suffer from visual agnosia and are incapable of recognizing objects are often also incapable of seeing (or drawing) things in pictures.153 This gives us reason to think that there is a lot of similarity between seeing something face to face and seeing something in a picture. Despite the fact that seeing something in a picture is somehow similar to seeing that thing straightforwardly, there is no reason to draw either of the following two conclusions: a) that seeing object O in picture P is experientially the same as seeing O straightforwardly, or b) that seeing O in P is informationally the same as seeing O straightforwardly. While the illusion account does a good job explaining the development of Renaissance naturalism, it is less able to explain even some of the cases discussed above, such as line drawings of objects used in psychological tests. The experience of seeing-in is not the same as “straightforward” seeing,154 so the next question is how it differs and what that tells us about the nature of seeing in pictures.

Twofoldness

Wollheim takes seeing-in to be a single perceptual experience of a surface, but with two aspects or folds to its phenomenology. On the one fold we see the marks that make up the designed surface of the picture, which is called the configurational aspect.
On the other we see the subject, and this is the recognitional aspect. If we are looking at a Degas painting of ballerinas we see the swooshes, gobs, and scribbles of paint that make up the surface, and we see part of this surface as giving rise to the experience of a ballerina tying her shoe. This doubling of vision between seeing-in and seeing the marked surface as the design is meant to be distinctive of our visual experience when we experience pictures.155
151 J Mazer, “Object Recognition: Seeing us Seeing Shapes,” Current Biology 10 (2000): R668–R670.
152 Vincent P. Clark, “fMRI Study of Face Perception and Memory Using Random Stimulus Sequences,” The Journal of Neurophysiology 79 (1998): 3257–3265.
153 Martha Farah, The Cognitive Neuroscience of Vision, Fundamentals of Cognitive Neuroscience, 3 (Malden, Massachusetts: Blackwell Publishers, 2000).
154 Wollheim, Art and Its Objects, 208.
Twofoldness is meant to account for the difference between straightforward perception and picture perception. When we perceive things in pictures, our experience has this twofold phenomenology; when we straightforwardly perceive things the phenomenology is different. More than just phenomenology differs, and the differences may hold for every kind of system by which we represent things. Line drawings represent the outline shapes of things; cave paintings exploit features of rocks and shading in order to depict the form and volume of animals; photographs capture the surface reflectance properties of a three-dimensional scene in a two-dimensional space. The pictures’ marked surfaces are such that we see object O in the design of picture P. Taking twofoldness as distinctive of picture perception aims to account for the relationship between experience of the design and experience of the subject in visual experience.
Wollheim thought that a philosophical explanation of how a differentiated surface could give rise to the experience of seeing the subject was not available, but current philosophical accounts demand more. Wollheim’s account is strongest when discussing certain kinds of pictures for which simultaneously seeing the design and the subject is important; where our experience of the subject in the design is part of the content of the painting. Part of appreciating the works of Turner, Cézanne, Monet, Renoir, and other modernist painters involves seeing how the design and surface features of the paintings give rise to the image and play a role in determining the content of the picture. This can be seen in Figure 3.2, where the intense colours and the bold lines contribute to our sense of motion and life in both the surface and the water lilies.
155 Richard Wollheim, Painting as an Art (Princeton: Princeton University Press, 1987), 53.
Figure 3.2: Nymphéas, Claude Monet, oil on canvas, 1897–98. Public domain.
This seems to be less the case with other kinds of images, especially those images whose main purpose is something other than to be aesthetically appreciated. Furthermore, while some pictures, notably some art pictures, need to be understood as twofold, many pictures do not seem to give rise to this experience as a central way of perceiving or interpreting them.156
156 Dominic Lopes, Understanding Pictures (Oxford: Oxford University Press, 1996), 43–51.
Some have argued that Wollheim’s twofoldness version of seeing-in attempts to do two different things. On the one hand, it is meant to explain picture perception, and on the other, it is meant to explain picture appreciation.157 The ways in which picture appreciation has been tied to seeing-in does a disservice to our attempts to understand the very notion of seeing-in.
A problem has been the separation of aesthetic from non-aesthetic properties, where representational properties can sometimes be lumped in with aesthetic properties. But seeing-in on its own does not have to imply aesthetic evaluation of pictures as vehicles for seeing-in. Wollheim requires both sides of the twofold phenomenology for picture perception. We are capable of focusing on one or the other fold, but to lose sight of the surface and merely see the ballerina, or to lose sight of the ballerina and merely see the surface, is to engage in a different perceptual activity.158 When we just see the design, we lose the ballerina. If we just see the ballerina, we are back to an illusion account. Wollheim also took the distinctiveness of pictorial perception to be (partially) constitutive of a theory of pictures. He controversially claimed that trompe l’oeil images, discussed above, were not pictures because they did not elicit pictorial experience. This aspect of the account has been offered as a challenge to twofoldness by Malcolm Budd.159 Wollheim argues that this is a distinctive phenomenology of picture perception, and that it is incommensurate with face-to-face seeing. Budd argues that twofoldness is undercharacterized; that there is nothing that grounds the twofold phenomenology in such a way that it can remain either a single experience or distinctive of picture perception.160 If either of the folds is minimized, Budd argues, we are left either with nothing, or with two experiences, neither of which is distinctive of picture perception. The configurational fold becomes seeing face to face, and the recognitional fold becomes visualization, a kind of mental construction.161 This suggests that twofoldness cannot be two folds of visual experience.
157 Bence Nanay, “Is Twofoldness Really Necessary for Representational Seeing?” British Journal of Aesthetics 45 (2005): 248–257.
158 Wollheim, Painting as an Art, 53.
159 Malcolm Budd, Aesthetic Essays (Oxford: Oxford University Press, 2008), 185.
160 Ibid., 200.
161 Ibid.
If we were to accept twofoldness as what is distinctive about pictorial experience, then we must either accept that the relationship between our awareness of the surface and awareness of the subject is mysterious, or accept that an explanation of pictorial experience has to explain how the design is like the subject or how it gives rise to the subject. Unless we accept the idea that the two aspects of twofoldness are not both visual, one approach might be to elucidate the relationship between the design and the subject; that is to say, to attempt to explain how the design resembles its subject. Experienced resemblance theories take on this project by limiting the resemblance to a viewpoint. This has been handled in two different ways: resemblances can either be internal to the viewing subject, or they can be external, objective relations. Christopher Peacocke takes the shape property of objects as experienced, what he calls silhouette shape, to resemble internal representations of those objects, or to be similar to how we would experience them in the visual field.162 Budd argues for visual field isomorphism between the subject in the painting and the subject seen face to face. Robert Hopkins’ experienced resemblance in outline shape provides an external relation between the design of a picture and our visual experience of its subject: we experience the marked surface as resembling the subject in outline shape. While all of these experienced resemblance accounts elucidate the issues around seeing-in and visual similarity in explaining depiction, Hopkins’ account defends the twofoldness condition. It is the marked surface, as design, that we experience as resembling the subject in outline shape; the picture depicts what it does in virtue of this resemblance.
In some ways, it might seem as though a twofold phenomenology would be the best way to explain our visual experiences of medical images. It is appealing to say that we see the pattern of luminosity values as the design of the image, and that we see the subject in virtue of this. The depictive role of the luminosity values could then be explained in terms of eliciting a visual experience as of a brain by bearing some visual resemblance to it. This, in turn, could ground the claim that the signal → luminosity representational system is one that produces marked surfaces for the purpose of seeing-in: it does so by producing resemblances. Unfortunately, this is not without its problems. The first issue, which I will set aside for the moment, is that it is still not clear where the resemblance would lie. If we take the signal values as medically important measures of tissue, we would want to say that the picture is experienced as resembling the subject (in outline or some other external and objective shape) where the subject is understood in terms of the signal values. We do not, however, have any experience of brains-seen-by-proton-density other than our experience of these in MRI. I will return to this problem later in this chapter. A second issue is more complicated. If our experience is twofold between design seeing and subject seeing, then we see the patterns of luminosity as part of our experience of the subject. Seeing the luminosity patterns as design has them playing a representational role in depiction, by being experienced as similar to the subject in some way. This leaves no clear way to explain the fact that, while resembling the subject in some way, they are also providing information about it. Neither is it clear that the same parts of the marked surface in virtue of which we see the subject will be the most relevant from a medical perspective.
162 Christopher Peacocke, “Depiction,” Philosophical Review 96 (1987): 383–410.
If our experience of the subject is partly determined by features of the design, as in Nymphéas, this could help ease the tension. Medical image seeing-in could entail that we do not see the luminosity patterns as a differentiated surface, but that we see them as signal values as part of seeing them as design. The luminosity value of each pixel represents the signal value from a particular voxel of tissue, and this should remain central to how these images represent their subjects. Collapsing luminosity values as signal values into the design creates another set of problems. Twofoldness alone does not capture the representational content that we ascribe to the images; however, taking the luminosity values to represent signal values as part of taking them to resemble the subject quickly undermines the priority of the visual experience. In order to capture this second aspect of imaging, our experience would have to be threefold: the patterned surface as representing signal values, the patterned surface as design, and the visual experience. It is not clear how the luminosity values could represent signal values as part of the visual experience of the subject in this case. While this might not be impossible, neither is it very helpful. The strength of a twofold phenomenology for picture perception is that it ties mimetic experience to features of the marked surface. Twofoldness seems to capture our visual experience of some pictures, and some of our picture appreciation seems to demand a twofold experience. It does not, however, seem to cover all cases. When the marked surface itself represents properties of the subject, independent of its being seen as design, then we cannot appeal to this explanation without major revision. Even if twofoldness could explain our seeing brains in MRI images, it does not help explain the difference between seeing a brain in T1, T2, or even CT.
All of these technologies produce images in which we can see brains, but they each depict specific properties of the brain, and twofoldness does not adequately explain our experience of these properties. The images are not merely of bodies; they are of bodies presented via the measurement of biological indicators appropriate for a significant task.

A Pluralistic Account of Seeing-in

An alternative to the twofoldness account of seeing-in is a pluralistic account of seeing-in.163 The pluralistic account accommodates our experiences of twofoldness yet does not make twofoldness necessary for seeing-in. Trompe l’oeil pictures are often taken to be definitive examples of pictures in a realist style, like Michelangelo’s Sistine Chapel, and ought to be explained by an account of pictorial experience. Furthermore, these pictures can be experienced as illusionistic even once we have understood what we are seeing; illusion does not have to imply delusion.164
163 Lopes, Sight and Sensibility, 39–45.
If seeing-in is an experience of “doubled vision,”165 it does not follow that the configurational fold has to occur with design seeing. It could be that trompe l’oeil images are best explained in terms of seeing-in doubled with seeing the marked surface. While this means accepting that some of our experiences of pictures are illusionistic, this is not a prima facie reason to reject the view. Because the pluralistic account of seeing-in neither entails a particular theory of depiction nor claims all pictorial experience is illusionistic, it avoids some of the afflictions of earlier illusion accounts. The pluralistic account differentiates between two different kinds of seeing that could double with seeing-in: surface seeing and design seeing. In design seeing the two folds of our experience are seeing the image depicted and seeing the design of the surface that gives rise to it. In many cases of picture perception we are not, nor do we need to be, aware of the design properties as design.
We can simultaneously have a visual experience of the subject and see the two-dimensional, coloured, and textured surface of the picture without the need to see that surface as design.166 This has the explanatory advantage that many naturalistic pictures, such as Dutch master paintings and photographs, are not experienced as twofold either. Our experience of seeing-in in these pictures has a double aspect: we know we are seeing a picture, yet we do not experience them as twofold. When seeing-in is doubled with design seeing, the experience is twofold. When seeing-in is doubled with surface seeing the experience can be illusionistic, but it does not have to be. There are different ways in which the experience of seeing-in can be tied to the picture surface. If trompe l’oeil images trick us into ignoring their surfaces, then this is pure illusion without doubled seeing. Illusionistic pictures separate the experience of seeing-in from either design seeing or surface seeing. This does not, however, mean that there is no design or that we cannot ever see it.
164 Ibid., 30.
165 Ibid., 28–45.
166 Ibid., 31.
We can examine the marked surface to locate design features even if, in doing so, our experience ceases to be illusionistic. If this were not the case it would be difficult to learn about different styles of visual representation, let alone draw them. A great pleasure of trompe l’oeil images is being able to view them from different positions: to move beyond the fixed perspective to see the interplay between surface marks, design, and subject. In the paste-up in Figure 3.1, the angle of the shadows and highlights around the edges of the light switch gives it a three-dimensional appearance from a certain perspective. To see the surface as the design is, in part, to see how the picture works: how the giant-light-switch-presenting experience is tied to the picture. What makes these cases worthy of our scrutiny is how unusual they are.
Much of our ordinary viewing of pictures neither demands nor rewards such close attention to our pictorial experience. The experience I have of seeing my dog Miguel in a picture does not seem to include seeing how the colours combine to represent him. I see the shiny coloured surface of the paper, and I also see Miguel. Once I try to see Miguel in the design I lose sight of him. The Miguel presenting experience the picture elicits is not compatible with seeing the surface as design, but is instead only compatible with seeing it as a surface. Snapshots and other kinds of naturalistic representations are not necessarily twofold, and neither are they illusionistic. The pluralistic account of seeing-in maintains the distinction between picture seeing and seeing face to face: we are not deluded by pictorial illusions. It also allows more freedom in understanding pictorial resemblance by separating pictorial experience from depiction. Doubled with design seeing, seeing-in is twofold. Independent of doubled seeing, seeing-in is illusionistic. Lopes develops several other possibilities for pictorial experience along these axes. Illusionistic seeing-in doubled with design seeing could be “actual.” This might capture our experience of something that is what it depicts, such as Jasper Johns’ Target.167 Non-illusionistic seeing-in is naturalistic when it is doubled with surface seeing. Between these four possible arrangements is a fifth category of seeing-in that captures our visual experience of non-illusionistic pictures such as cubist portraits or picture puzzles. These are images whose surfaces can appear chaotic or even abstract until one sees what the image depicts. In order to see the surface as design, one has to first see the subject. After that the design is easily seen.
167 Ibid., 40.
Seeing-in is not doubled with design, but with “pseudo design.” These are surface markings that seem like aspects of the design but are not part of what makes the picture depict what it does. Perceptual experience is not twofold here but “pseudo-twofold” – the subject is seen in the markings, but it is necessary to see the subject prior to the marks being seen as part of the design.168 An upshot of the pluralistic account of seeing-in is that different pictures, or even different areas of the same picture, can give rise to pictorial experience in different ways.169 This preserves seeing-in as a visual capacity while giving it greater explanatory breadth. Pictorial experience includes a visual experience as of the subject doubled with the marked surface unless it is trompe l’oeil. On a pluralistic account this can be twofold, illusionistic, pseudo-twofold, or, maybe, actual. Unlike twofold accounts, this leaves room for different ways of explaining similarity between pictorial seeing and seeing face to face. Without the limitations on picture perception imposed by a twofold view, the task of explaining representation and vision in medical imaging becomes less daunting.
168 Ibid., 40–42.
169 Ibid., 45.

Resemblance and Visual Experience

Pictorial experience and interpretation are linked. In many cases, our grasp of pictorial content is automatic or seems that way. A quick glance at a painting is often enough to tell us what it is about. Conversely, other paintings are valued for the way that increased scrutiny yields increased payoff in visual experience. Looking closely at the painting The Garden of Earthly Delights by Hieronymus Bosch in Figure 3.3, we discover that there are many little scenes and tiny figures that are not immediately evident.
Figure 3.3: Garden of Earthly Delights, Hieronymus Bosch, 1500–1505, public domain

Pictures do not merely elicit visual experiences the way rocks and clouds do; they also sustain those experiences as long as we are looking at the picture. Some pictures reward careful attention with rich visual experiences, while others show the barest appearance of their subject. Different kinds of pictures can serve different purposes. The same picture can serve different purposes in different contexts. The same photograph could be used for tender reminiscence or for identification at the border. Our visual skills are executed in many ways. A child can learn to identify freesias by seeing them in her garden or by seeing them in pictures. If someone tells her “that is a freesia” she will also learn what to call them. She might instead look for a similar flower in an illustrated field guide, recognize a freesia there, and learn from the book that her two similar flower presenting experiences were of freesias. This is part of what makes it appealing that there is some kind of connection between our experience of things and our experience of them in pictures. The knowledge that we bring to bear on images is visual. Much of our visual knowledge comes from experience, from repeated exposure to the visual appearances of things. But pictorial experience is compatible with the fact that some pictures, or some appearances, are more or less familiar. Drawings in linear perspective are very familiar to us, but it can take a while to learn how to interpret axonometric projections with their unfamiliar perspective. In two-point linear perspective, parallel lines are represented by non-parallel lines receding to a vanishing point on the horizon. In axonometric perspective, parallel lines are represented by parallel lines. In both cases, once we understand the representational systems, the lines appear parallel.
Despite this, linear perspective is generally easier to interpret than axonometric.

Visual Limitations and Picture Perception

Visual concepts concern our knowledge of the visual appearance of things. We might not know that a freesia is a freesia, or even a flower, but we could nevertheless be able to bring the flowering plant in front of us under a visual concept. We know the look of a freesia as a visual object. Lopes discusses the knowledge required for picture perception in terms of our recognitional capacities: “The knowledge requirement is met if seeing-in includes the exercise of a visual concept of the depicted object, where a visual concept of O is an ability to reliably identify O by its visual appearance in varying circumstances.”170 We acquire visual concepts by seeing things face to face or by seeing them in pictures. Most of the time pictures represent visible things as visible by representing them as having visual properties like those we experience them as having (fictional characters aside). The extent and order of resemblance deemed acceptable will depend on the purpose to which a picture is put. This can be an important point for pictures held to strict epistemic standards, like medical images. A passport photo has to capture features that make the subject identifiable around the world: your face must not be occluded and your expression must be neutral. A photo might be a nice picture, perhaps capturing your joie-de-vivre, without having the identifying features required for a passport. Furthermore, resemblance alone does not suffice. A passport photo is expected to be caused by the subject and not their identical but more photogenic twin. Similarly, a painted portrait of me may fail as a portrait if it does not resemble me at all while succeeding as a painting of a woman. The degree and type of resemblance that is considered acceptable depends upon the imaging task and how the picture will be used.
170 Ibid., 46.
What matters is that we are able to deploy concepts correctly and with fine enough grain for the image to serve its purpose. Two standard views on the limits of how pictures can visually represent their subjects are that they can only represent objects with appearances, and that they must represent those objects as having at least some of their visible features, at least to the extent that the picture does not cease to be informative for a particular task.171 Here we return to the question raised and set aside a few pages back concerning pictorial experience and medical images. Our visual knowledge is knowledge of the appearances of things, and our visual concepts are developed in the context of the idiosyncrasies of human visual systems. While pigeons seem to be able to see in some pictures,172 it is unlikely that they would be able to see in all of the kinds of systems of representation that we do. Likewise, we can imagine a race of aliens who only see in magnetic resonance and for whom time rather than depth counted as a third dimension. In order to capture their perceptions of a particular height-length-time scene, they use a special gun to transfer the random proton spin times of the various aspects of their field of vision onto a sheet of specialized putty that will spin at the same frequencies as the objects they measure in their environment. Any other alien can then copy this representation by copying the spin frequencies. This system of representation would be pictorial to aliens, but it would not be pictorial to us. We would see nothing in it. I have said that we need to take these visual limitations seriously. It is one thing to discuss medical images as vehicles for seeing-in, but we also need a principled way of discussing what it is right to see in these images. Pictures have often been derided for merely presenting the appearances of things.
171 Ibid.
172 Fagot, Picture Perception in Animals, 1–71.
Of course this reflects the issue of the changing surface appearances of things over time, contrasted with understanding the thing itself. To some, appearances capture non-essential elements of things: moments whose representativeness can be taken out of context. To others, appearances and the systems of knowledge we build around them are all that we can access. In medical imaging it is not clear that we can have visual concepts of the objects depicted in the images. Despite eliciting visual experiences, medical imaging does not seem to deal in appearances but in measurement. While brain and bone presenting experiences are connected to the patterned surfaces of the images, it is not yet clear what relationship holds between the two varieties of representation. The mapping from signal per voxel to luminosity value per pixel is itself understandable both as a representation and as an instrumental output. It is the connection between mapping and a correct interpretation of what the picture is attributing to its subject that remains out of reach. While we might have visual concepts of the brain and of foetal morphology, they are defined in terms of how things appear to us and not how things appear in T1 weighting or by their echogenic properties.

Being vehicles of seeing-in is what differentiates the imaging technologies that I am talking about from other kinds of medical tests such as EEG. It puts them in line with some representational pictures. Medical images elicit and sustain body presenting experiences, but that is not all they do. Explaining both how these images are used and how they sustain scrutiny requires that we understand what role the signal plays in attributing features to the body seen in the images. The relationship between signal representation and pictorial representation seems to be what defines these technologies not only as visual technologies, but as technologies that could extend our visual abilities.
If imaging can extend our vision, the capacity through which it does that is seeing-in. In the next section I want to give a more specific view of the visual processes these images extend. I will do that in the context of a theory of visual prosthesis.

4. Technology as Visual Prosthesis

In the last chapter we examined issues with the claim that medical images are vehicles for seeing-in and found that the body presenting experiences they elicit are difficult to reconcile with them also being representations of signal values. The question is how to reconcile these two ways of representing in a way that adequately explains these technologies. Contemporary discussion of representation in science by philosophers of science often concerns understanding representation in a more general way than in the last chapter. Concerns that arise in the philosophy of science surround the role of representation, and representations, in science. In general, this discussion tends to emphasize representation in use rather than in “specifiable relationships between representor and represented.”173 What do we use a representation R to represent? Who uses R? For what purpose is R used? Ronald Giere lays this out in a discussion of the relationality of scientific representation:

If we think of representation as a relationship, it should be a relationship with more than two components. One component should be the agents, the scientists who do the representing. Because scientists are intentional agents with goals and purposes I propose explicitly to provide a space for purposes in my understanding of representational practices in science. So we are looking at a relationship with roughly the following form: S uses X to represent W for purposes P.174

At the beginning I said that one explanandum of medical imaging was that medical images act like visual prostheses – they allow us to, in some sense, see things that are not accessible to our ordinary perception.
So following Giere there are two separate claims we might make about imaging as a representational practice. On the one hand representing could be understood generally, or in a specific community; on the other hand, we can see it as intentional and individual.
173 Bas van Fraassen, Scientific Representation: Paradoxes of Perspective (Oxford: Oxford University Press, 2008), 189.
174 Ronald Giere, “Visual Models and Scientific Judgment,” in Picturing Knowledge: Historical and Philosophical Problems Concerning the Use of Art in Science, ed. Brian Baigrie (Toronto: University of Toronto Press, 1996).
First, we might say “trained medical professionals use imaging technologies to represent the body in order to perform visual acts as if they were able to see the body.” This seems to fit the format and the basics of imaging as representational. This fit is lost when we examine particular agents and images. Consider, “Jamie uses MRI image I to represent Chris’s brain B, for the purpose of checking whether her symptoms are caused by a tumour.” The second context, when we are actually describing the purposeful use of an image, does not seem to benefit from a discussion of representation. It would be more natural to say, “Jamie uses I for the purpose of examining B, to see whether Chris’s symptoms are caused by a tumour” or “Jamie uses I for the purpose of representing B as tumoured, or not.” This gets closer to the formula, if we accept tumoured as a predicate. An issue with this is that often medical imaging is not done for such specific purposes. The variety of uses of medical images seems to undermine the value of specifying purpose in order to understand them as representations. While a univocal account of representation seems a virtuous pursuit, there are many things to take into account. In the narrower realm of depiction we saw that explaining a picture as a representation and explaining how it represents its subject are separate questions.
The addition of intentional purposes would not clear the field. The difficulties encountered in the last chapter are perhaps not specific to medical imaging, but may be endemic to understanding representation at all. A difficulty seems to hinge on whether philosophers are talking about representations and their relationship to their subjects, or about representation as a scientific practice. How I represents B demands a different explanation than asking how or why X can come to stand for Y. In Scientific Representation van Fraassen examines the question of use more specifically. In the quotation above, Giere seems to be talking about the act of representing. Van Fraassen is discussing specific representations and their content. The term “use” does not encompass only one kind of intentional act; it covers many. Van Fraassen says,

This term “use” can assimilate “make” and “take”: the caricaturist made the caricature to depict Mrs. Thatcher as draconian while I, seeing the caricature, take it to depict Mrs. Thatcher as draconian, and display it to you so as to depict her to you as draconian.175

One of the points that is significant here is the way in which the representation R represents some entity E as F. In the case above it is “draconian” being predicated of Mrs. Thatcher. The content of representations is often thought about in terms of what they predicate of, say about, or attribute to their subjects. If we take representation in use in the way described, though, there is an unsettling sense in which anything gains representational content by being used to predicate something of some thing.
Van Fraassen seems to accept this point; he takes as the heart of his account that “There is no representation except in the sense that some things are used, made, or taken, to represent some things as thus or so.”176 If we accept this kind of functional or interpretive view both in terms of representation in science and in terms of representations, there is a problem. This kind of account does not offer an explanation of how I represents B as tumoured, or as tumoured in a certain area. The problem is that it does not say in virtue of what I does this. While it is important to examine purpose and use in any account of representation, this does not explain the following: that an MRI of a brain can be used or taken to represent B as tumoured because it is a representation of B. Particular images have to be understood as image types. They represent as products of image-producing technologies.
175 van Fraassen, Scientific Representation, 21.
176 Ibid., 23.

Technology as an Amplifier

Individual medical images do not help to explain medical imaging. Instead they need to be understood as the products of image-producing technologies. This is a lesson of Patrick Maynard’s analysis of photography, another versatile imaging technology. Photographs
There is always a context to consider: technological entrenchment in our social practices incurs a host of “contextual supports”. If we think of bicycles as extending our powers to move around, this is only the case if there are roads to cycle on, people who make, sell and repair cycles, and perhaps even the desire to increase our speed.179 Technologies can also alter the nature of the things we do. Consider the following claim from a surgeon quoted in an ethnographic study of the inclusion of images in patient records:

There’s not a single GI case we do these days without it [the imaging system]. It makes a tremendous difference. We know what’s there, [like] the tumour size. We can plan the operation. We can decide not to operate if [it’s a] marginal case. [When we don’t have x-rays, we] repeat them, or postpone surgery, or do something ridiculous.180

Technology as an Amplifier of Vision

The purpose of this chapter is to explain and ground the claim that technology is an amplifier of our ability to do things, and to tie imaging technology to the body in a way that sustains the view that our visual abilities are extended by these technologies, so that we can start from seeing-in and go on to ground the usefulness of the technologies in vision. I will follow Maynard’s analysis of photography insofar as imaging technology is in an important sense very like photography – it is a technology whose purpose is the creation of two-dimensional marked surfaces.
177 Maynard, Engine of Visualization, 35.
178 Ibid., 75.
179 Kai Hahlweg and Clifford Hooker, ed., Issues in Evolutionary Epistemology (New York: State University of New York Press, 1989), 137.
180 B. Kaplan, “Objectification and Negotiation in Interpreting Clinical Images: Implications for Computer Based Patient Records,” Artificial Intelligence in Medicine 7 (1995): 443.
In the rest of this chapter I develop and defend the claim that medical imaging technologies are primarily instruments for creating marked surfaces for the purpose of eliciting and sustaining visual experiences. These technologically marked surfaces are vehicles for seeing-in that allow us to perform procedures, rule out or make diagnoses, and in other ways extend our abilities within medicine. The way in which medical imaging technologies amplify our powers has to be grounded in the visual. We see bones, brains, and joints in images, but we do so in laparoscopy and photographic medical technologies as well. It seems as if a large number of technologies amplify our medical powers to do things by extending our ability to see inside the body. Yet this does not capture what is unique about the descendants of x-ray. Laparoscopy and other photographic technologies use the usual biological indicators of vision. I will say “surface reflectance properties,” but it should not matter what object properties we think contribute to our ability to see them. Laparoscopy shines light into the body and records the colours, textures, lines, and edges of the tissue there on a monitor that the physician or surgeon uses to guide her examination. When ultrasound is used this way, the biological indicator is echogenicity and not surface reflectance. So more needs to be said to differentiate how what is seen in ultrasound, x-ray, and MRI is unlike ordinary vision and is not something we could see any other way. Given the specificity of our perceptual systems, what does it mean to extend them? What makes something a prosthesis?

Prostheses as Extensions of the Body

The terms prosthesis and prosthetic can be difficult to avoid in discussions of technology. This is especially so if we are discussing technologies as amplifiers of our abilities.
“Prosthetic” is used to describe an important relationship we have with technology: that technologies can extend the capabilities and limits of our minds and bodies. Analysing technologies as prostheses is a way of mediating between the body and technology, of understanding how the body is implicated by the technology.181 In its medical sense, “prosthetic” was introduced in 1703 to mean “replacement of a missing limb with an artificial one.”182 Our use of prostheses goes back further; a prosthetic toe, artificial limbs, and wigs date back to ancient times. “Prosthesis” has adopted additional meanings beyond merely replacing missing limbs with artificial ones. Digging sticks, pocket knives, airplanes, and houses have all been discussed as prostheses. How literally this is taken varies based on how the body is viewed and on what aspects of it are implicated in the prosthetic relationship. Taking our bodies as deficient in terms of defence against the elements, we can view clothing and homes as prosthetic.183 Taking the entire world as our environmental niche, we can see airplanes as prosthetics for breathing and for flying.184 Discussions of prostheses are often tied into narratives of how the body, and identity as grounded in the body, are constituted, which goes back to the psychoanalytic tradition. Sigmund Freud (who lived with his own prosthetic jaw) discussed consciousness as a prosthetic addition to the body. His ideal of man as a “prosthetic God” putting on “auxiliary organs” implicates both the physical and psychological as being capable of disability as well as extension.185 Lou Andreas-Salomé, another psychoanalyst, thought of the body as a prosthetic of consciousness. What it means to be human and how humanity is constituted in body and consciousness is tied in with concerns of amendment, addition, and embodiment. Our identity as human (or
Our identity as human (or 181Sarah  Jain, “The Prosthetic Imagination: Enabling and Disabling the Prosthesis Trope,” Science, Technology, and Human Values 24 (1999): 31–54. 182 Marquard Smith and Joanne Morra, The Prosthetic Impulse (Cambridge, Mass: MIT press, 2006). 183 Mark Wigley, “Prosthetic Theory: the Disciplining of Architecture,” Assemblage 15 (1991): 6–39. 184 Ibid., 6. 185 Sigmund Freud, Civilization and its Discontents (New York: W.W. Norton and Co, 2005), 76. 71 posthuman) along with the normativity of ability and social discourse surrounding able and disabled bodies is taken to be challenged by prostheses. Mark Wigley casts architecture as “human-limb objects” and as “technological extensions of the body that are neither natural nor cultural.”186 Wigley’s analysis of architecture stems from a very particular view of what architecture, as artificial, is a prosthetic of. It is worth examining his description in full. Prosthetic architecture becomes a surrogate body “intended to second the person as such”; in this it recalls Freud’s claim, in Civilization and its Discontents, that, like the other technologies of communication and perception – the aircraft, the telescope, the photographic camera, the telephone, and writing – the dwelling is a prosthetic extension, an “auxiliary organ,” but one worn as a substitute for the woman’s body, “the first lodging”.187 If architecture is a prosthetic, a surrogate body as Wigley describes, this presents the human body as incomplete. In this view prosthetics are abundant. Technology can be anything that mediates between perceived psychological inadequacies and the world, no matter however metaphorical the connection to our biology. Marshall McLuhan’s analysis of media as “extensions of man” is more of a metaphorical sense of the body as well.188 If a wheel is an extension of a foot we might ask both what and how it extends. Perhaps the function of mobility? 
Yet function alone does not differentiate between artificial legs, wheels, wheelchairs, bicycles, and other tools that might functionally extend mobility. The explanatory force of the prosthetic trope, in terms of understanding the specific mediation between our body and a technology, is lost. A worry we might have about this discourse is that it casts the entire body as defective, thus undermining analysis of ability and disability. As Jain asks, “How do body-prosthesis relays transform individual bodies as well as entire social notions about what a properly “functioning” physical body might be?”189 Overgeneralization of the trope is problematic. If any tool (perhaps even any artefact) can be classed as prosthetic, then this analysis, in terms of how deficiencies are constructed in social discourse and technologically resolved, loses its power. If we do not have a starting point of ability, it becomes difficult to say of any technology how it is prosthetic. Is it because of its functions, its structures? It cannot be decided when the body is simultaneously being amended and re-constituted by use of prosthetics. The idea that there is no way of discussing the body as biological, evolved, and with species-specific abilities forces the entire question of the mediation between body and technology to the realm of fantasy. If we do take the body and its abilities as a fixed point from which to begin a discussion, the best tack is to take the trope as meaningful and examine the mediation between body and technology. My interest is not in examining how prostheses constitute the body. It is in investigating how technology needs our bodies to be useful, and how we develop technologies that take advantage of our abilities.
186 Wigley, “Prosthetic Theory,” 6.
187 Ibid., 15.
188 Marshall McLuhan, Understanding Media: The Extensions of Man (Cambridge: MIT Press, 1994).
There seem to be several overlapping categories of prostheses in the literature: they can be cosmetic, structural, or functional. Prostheses can be either of structures such as limbs, or of functions such as vision or hearing. These categories are not mutually exclusive: an artificial hand is mainly structural in that it restores something in place of the missing limb, and it also restores some abilities or functions of a hand. Prostheses can extend function if they enable abilities beyond what a hand could do, such as improving grip strength. A hand prosthetic could also be cosmetic if it is designed to look like a hand, and could in fact be nicer looking. “Cosmetic” here is not meant to suggest vanity in any way: glass eyes are cosmetic in looking like real eyes, but they could also be structural prosthetics by filling the eye socket and reducing social stigma attached to missing body parts. Prosthetics can replace, improve, and extend.
189 Jain, “Prosthetic Imagination,” 39.
Now the differences here can be very slippery. An artificial leg replaces a missing leg, which would seem to be structural, but in doing so it restores part of the function of the leg (mobility). Some artificial legs are designed to have knees and ankles, which allow for increased functionality of the leg as a leg as well as structural similarity. Other artificial legs are designed to enhance function by minimizing structural similarities to actual legs. The Cheetah Flex-Feet made by Ossur, worn by South African sprinter Oscar Pistorius, are an example. The Cheetahs are flexible carbon blades with grip on the bottom for traction: they are made specifically for sprinting, so they take advantage of kinaesthetic features of the body used in that activity but look nothing like legs at all.
They apparently reduce the amount of energy needed while sprinting, and for this reason they came under scrutiny in 2007 when Pistorius was competing internationally in a sprinting event with able-bodied runners. There was controversy over whether the artificial legs granted him an unfair advantage over other runners, given the functional improvements. It was ultimately decided that he would be allowed to compete.190 If prostheses are considered only to be restorative or compensatory, then anything that is improved on is cast as a disability. Separating out the particulars of the use of “prosthesis” is not meant to downplay the importance of these discussions, but rather to analyse ways that technology is dependent on features of our bodies. A prosthetic that extends what we can do, either by utilizing structural or functional aspects of the body, should not be assumed to transform those abilities into disabilities. To do so is to take a rather dim view of the situated and embodied human experience. What might be a disability in one situation could be an ability in another. Most of our tools and technologies do extend our abilities in some way; they are developed for and used in specific situations where a specific ability is required. Prosthetic devices take advantage of our bodily peculiarities and harness them in order to extend our abilities. Our development of technologies is needs-based and action-oriented.
190 The Mirror web site for-olympics-115875-20420240/. (Accessed Oct. 06 2009).
Despite its many problems, the prosthetics analysis is useful because of how it connects technology to the body, not just in terms of structures of the body but also the functional aspects that are pushed by the action-oriented requirements of our interactions with and knowledge of the world. Being situated and physically embodied as we are does not mean that we cannot exploit our abilities in order to do things better, more efficiently, or at all.
I argue that, in discussions of the mediation between imaging technology and human visual capacities, it is useful to treat these technologies as visual prostheses because they a) allow us to see objects in ways we could not otherwise see them by b) taking advantage of the visual abilities that we have.

Prostheses as Extensions of Vision

Two important issues arise when we turn to prosthetics of vision: 1) how should we understand the mediation between our bodies and technology, and 2) can vision be extended, and what are its limits? Many different items and technologies are claimed to be, in some way, prostheses of vision: eyeglasses restore or improve our vision; glass eyes replace a missing body part; and new computer/brain interface devices plug directly into the visual cortex. Eyeglasses restore visual function to a level of normality developed over years of optometry by adjusting ground glass or plastic lenses, while other visual prostheses take on the role of the eye in providing information to the brain. Eyeglasses are prostheses that take advantage of structural features of the eyes; they channel light in a way similar to the cornea and lens of the eye and in doing so can improve our vision. If a machine is designed to stimulate the visual cortex in a way that allows one to navigate the world and distinguish between objects, it restores some of the functions of vision, but not by mimicking structural features of the body. Patterns of brain stimulation and light striking our retinas are not informationally identical even if the function restored is similar. Such a machine would take advantage of functional features of human vision, such as providing us with visual experiences.
Beyond these restorative prostheses, some technologies seem to extend our visual capacities. Common examples include periscopes, microscopes, and telescopes.
They extend capacity not just within the range of normal visual field or normal visual acuity, but seem to allow us visual access to things that we could not see before. Scopes make things visible by bringing information to our eyes through physical and mechanical processes. Special lenses or mirrors yield information that is similar enough to what we get from ordinary vision that we can perform many of the same actions. Light microscopes, too, are structural prosthetics. I say they "seem to" extend capacity because this point is debated. What philosophical discussion there is of instruments as prosthetics of vision examines whether or not we actually see in these cases. If "prosthesis" is apt in these cases, and instruments or technologies can extend our perceptual abilities in more than a metaphorical sense, then they need to be tied in to our visual perception as we are biologically and evolutionarily situated. The debate can be cast as whether our visual perception is something that can be extended. There are several different ways in which vision might be extended. It can be extended by allowing us to see objects that were previously visually unavailable to us, as microscopes do. It can be extended by allowing us to see better, or with more acuity, than we could before: the visual objects are the same but we are able to draw more (different or new?) information from them. Vision could also be extended by allowing us to see novel features of known objects. In all of these cases the "things", the objects of perception in question, could be ordinary everyday objects or they could be unusual objects. I think that vision can be extended in all of these ways. There are also, however, reasons to think that vision, or indeed any perception, cannot be extended at all.
That isn’t Seeing: Challenge One – Antirealism and Entity Realism

Ian Hacking’s paper “Do We See Through a Microscope?” draws out the problematic nature of seeing through even low-powered microscopes. It turns out that some biologists think that the kind of seeing we do through a microscope is of a different kind from ordinary seeing. Consider the quotation that Hacking takes from Gage’s The Microscope, an American textbook in microscopy, attributed to a president of the Royal Microscopical Society:
It is demonstrated that microscopic vision is sui generis. There is and can be no comparison between microscopic and macroscopic vision. The images of minute objects are not delineated microscopically by means of the ordinary laws of refraction; they are not dioptical results, but depend entirely on the laws of diffraction.191
A central concern for extending vision is whether any mediated vision is sui generis, and as such something that has to be understood differently from ordinary visual perception. A slippery slope argument is often invoked here: we see through windows, mirrors, eyeglasses, and binoculars, so why not through a microscope? One answer is that the cut-off is which objects we are able to observe with the naked eye. The passage from a magnifying glass to even a low-powered microscope is the passage from what we might be able to observe with the eye unaided to what we could not see except with instruments.192 This suggests that there is a cut-off between naked-eye and instrument-aided perception that puts these into different genera. If this is right, it is difficult to talk about extending vision: the things that we can see are the things that we can see, and that is all there is to it. There are, however, a number of reasons for rejecting the claim that these are different kinds of vision. A point raised by Hacking is that “observation” is an active perception term.
By using and working with microscopes we learn not how to see through microscopes but how to see with them. We learn about the functioning of microscopes by making and developing microscopic technology. This is in part evidenced by our ability to interfere with the technology and with the objects that we perceive under a microscope. Microscopic vision requires learning to see in a particular way – part of this is determining, with practice, what we see that is an artefact of the technology and what is on the slide. We need theory to make microscopes and other imaging technologies, but using them requires practice, not theoretical knowledge.193
Hacking’s arguments are set against those presented by Bas van Fraassen around the problem of unobservables. There has been a traditional distinction in philosophy of science between observational and theoretical terms. Observational terms have been those that can be read off the world, while theoretical terms have meaning only in the context of a particular theory.194 Observational terms name entities that are available to all of us: ordinary objects, entities, and scenes. Theoretical terms refer to things not directly visible: to those entities, such as quarks, whose existence is defined within scientific theories rather than through observation. This is not a thesis about our eyesight but a thesis about our epistemic situation. Observation grounds scientific claims by referring to the objects of the world, and if we are to be empiricists then our theoretical claims are meant to be grounded in observation. Claims not so grounded have to find different epistemic footing for their truth. Van Fraassen rejects the distinction between observational and theoretical terms, and indeed between theoretical and non-theoretical terms, and focuses instead on the distinction between observable and unobservable entities.
191 Ian Hacking, Representing and Intervening (Cambridge: Cambridge University Press, 1983), 187.
192 Ibid., 189.
“Observable” is meant to capture something specific. It does not exclude things that have not been observed, or that cannot be observed because they no longer exist, such as dinosaurs. Even things that cannot be observed because they do not exist can still be observable:
The term ‘observable’ classifies putative entities (entities which may or may not exist). A flying horse is observable—that is why we are so sure that there aren’t any—and the number seventeen is not. There is supposed to be a correlate classification of human acts: an unaided act of perception, for instance is an observation. A calculation of the mass of a particle from the deflection of its trajectory in a known force field, is not an observation of that mass.195
Acts of observation are those which are neither instrumentally nor conceptually mediated. Instruments can be used for detection only: a cloud chamber can be said to detect atoms, but detection has different epistemic characteristics than observation. Hacking’s “entity realism” (a realism about specific entities, rather than about theories) contrasts with the general antirealism of van Fraassen’s constructive empiricism. To van Fraassen, extending our visual capacities in the sense discussed above makes no sense. Telescopic vision fits with his view of empiricism because of the nature of the things that we see through telescopes: could we travel through space, we could see the planets with our naked eyes. Microscopic vision, on the other hand, can yield observation claims, but about what is seen in the microscope image, not about the entities represented there. Because of this we should remain agnostic about the entities the theory posits.
193 Ibid.
194 Bas van Fraassen, The Scientific Image (Oxford: Oxford University Press, 1980).
Another theory could support the same observed situations without those entities: theories positing unobservable entities can be empirically adequate and fit the phenomena, but only what is observable is the phenomenon our theories explain.196 Empirically adequate claims are about the image created by the microscope, not about the entities seen through the microscope. Our minds and conceptual capabilities are developed within an “epistemic community” which is bounded by our physiological and evolved capabilities.197 The microscopic scale does not fit with the epistemic community we have developed through our evolved responses to the world around us. Humans are not the sorts of things that can see the objects rendered by an electron microscope: we see the visible image while the entities remain unobservable. Electron microscopes create new phenomena – electron micrographs – whose appearance needs to be accounted for in our scientific theories. Claims that we make about electrons need to be modified to fit the appearance of these images, but we do not thereby see microscopic entities using electrons.198
Hacking argues that we do not (and should not) support claims from vision in isolation but in terms of what our technology, science, and vision together tell us there is. We develop technologies based on knowledge that we already have. Hacking examines technologies in use within scientific contexts and argues that seeing is not something that can be separated out from the scientific practice surrounding the instruments. This approach is not just about our relationship to purported objects. It concerns the role of our perceptual systems, our past knowledge, and our technologies in our scientific practice, and in how we understand technology’s place in our cognitive and epistemic relationship to the world. Van Fraassen takes this issue on directly in a recent monograph on scientific representation.
195 Ibid., 15.
196 Ibid., 7–20.
197 Ibid.
There he argues that what is represented in images (and the microscopic is no different here) is not features of the represented; images are not “windows onto the invisible world” but rather “engines of creation”, themselves new phenomena which our theories need to explain.199 Rather than being a visual process of discovery, instrument-based representation is creative – it creates phenomena. Van Fraassen says,
whether we extend the meanings of words like “see”, “perceive”, “observe” to “see through a microscope” and the like, or whether we refuse this extended usage (as I do), does not affect our sense of novelty. In either case, there is a significant extension, of some sort, to empirical inquiry.200
Van Fraassen endorses the importance of microscopic or other instrumental representations for science, but this extension of empirical inquiry is not because they extend our vision. Instead he thinks they extend our capacities for measuring and also create new phenomena. All observation is a kind of measurement to van Fraassen, the outcome of which he calls appearances. The appearance of our ordinary observable entities is the outcome of measurement of the environment by our visual systems. For any other system of representation the outcome will be a different appearance, based on the measurements used and the relationship between the measuring and the entity.
Whether or not one subscribes to the idea that these devices extend the range of our vision, it is indisputable that they serve for the systematic creation of new phenomena— new phenomena that must also be saved by our theories, suffice to refute theories to be discarded, and serve to gather empirical information.201
Appearances can be empirically useful, and most representations in science have a specific purpose within use.
198 Churchland and Hooker, Images of Science, 342.
199 van Fraassen, Scientific Representation, 99–100.
200 Ibid., 96.
Even once embedded, representations remain products of the relationship of use described at the beginning of this chapter: someone S uses representation R to represent some entity E as F (where F is some predicate). Images, to van Fraassen, can reflect appearances of things the way paintings or photographs do, but they can also be public hallucinations like microscopic images, reflections, or rainbows. Artificially produced phenomena provide data about an entity by providing a view of it, where a view is understood as our experience of the relationship between the measured and the measurement outcome.202

The New Phenomena Hypothesis

Let’s call the view that representations in science should be thought of as ‘engines of creation’ the New Phenomena Hypothesis. Two related but separate arguments support this hypothesis. The first argument I’ll call the Argument from Artifice; the second the Argument from Representation. The gist of the Argument from Artifice is as follows: 1) in preparing objects for observation or imaging we alter them; 2) this alteration is such that it creates new objects; 3) these are novel phenomena that have never been seen before. Michael Lynch gives an argument of this sort when he argues that part of the work in a lab is in rendering objects docile in order to render them visible.203 Tissue prepared for microscopic analysis is manipulated, stained, and plated in order for it to be visible. What is seen, analysed, recorded, and diagrammed on the basis of these images is an object that has been disciplined by the demands of scientific visibility, that has been rendered visible for the demands of scientific practices but that is not the natural object.
201 Ibid., 100.
202 Ibid., 99–100.
203 Michael Lynch, “Discipline and the Material Form of Images: An Analysis of Scientific Visibility,” Social Studies of Science 15 (1985): 37–66.
We do not see axons with electron microscopes except under the strict regimes of slicing and preparation that the machine requires. In this way we are not seeing the objects except as they are changed into new phenomena. The Argument from Representation is the one van Fraassen makes. He argues that in creating scientific representations we are creating new phenomena that serve to extend our empirical enquiry but not our vision. Some of his arguments will sound familiar from the discussion thus far. His emphasis is on how these images are used in representing an object in a way that maintains a selective resemblance between the object and the representation. Images are measurement outcomes produced by the use of X to represent Y as Z (or as thus and so). Van Fraassen discusses linear perspective as a development that measures scenes in terms of appearances: instead of lines known to be parallel being represented as parallel, they are shown as receding to a point, which is how they appear to us. With other kinds of representations, the selectivity involved might have nothing to do with what is observable to us; this view neither demands nor allows that our vision be extended. We see the surface markings of images. These patterns carry the resemblance relationships to certain other objects, even if the resemblances themselves are not visual (as we might think luminosity values do not visually resemble signal values). Yet if representations extend our abilities to do things, and they are visual representations, it seems as if they must be exploiting some aspect of our visual systems and amplifying it. There is a possibility here. Instead of vision being extended, we might think that representation extends our ability to extract new information using our usual naked-eye vision. Representational systems could exploit our pattern recognition skills, and so extend our vision in that sense without compromising our epistemic limitations.
This is indeed an exciting result, one that might allow us to explain medical imaging without making radical claims about what we can see. Unfortunately it is also incomplete: not only for the images we are trying to explain but for many others. A problem discussed earlier was the unhelpfulness of lumping all representation together and trying to treat it as a single phenomenon. Within a scientific context, treating representation as univocal (at least in terms of explanation) makes it difficult to assess whether there are important differences that come from different tools being used in different contexts. While van Fraassen examines some differences in image types, that is not his major concern. His interest is in images as measurement outcomes reflecting selective resemblances between two things. Van Fraassen examines a large number of ways in which we should consider selective resemblance; the resemblance he wants to discuss is always between the object, scene, process, or entity and the image. He argues that instead of treating representations as if they were windows through which we look at the world, we should treat them as generating and presenting new phenomena. We should deal with representations as things which we look at, instead of thinking of them as things we look through. This is to say that van Fraassen’s view limits our understanding of representation to the surface of the image. Unless we are dealing with visual representations of visual things, the patterns are never seen as design but only as patterns: like graphs rather than pictures. There are times when images show things that we cannot see otherwise, and those things are artefacts of the technology. Double exposures of photographs, horses’ feet blurred by race cameras, and some digitally manipulated images are examples of this.
While there are some contexts in which these phenomena are of great interest, our interest in understanding the image is often in the object depicted and not in these properties. We might be interested in why the horses’ feet appear blurred in the picture and in the measurements behind it, but more often we are interested in which horse got her foot or nose across the finish line first. To see that in the image, we must often learn to interpret around artefacts of different processes and not, as van Fraassen argues, to understand all of the design features as artefacts of the technology.204 Images created through the mapping of magnetic resonance properties from voxels of tissue to luminosity values of pixels on a screen may be a novel phenomenon, and we might even think of a brain so pictured as a new phenomenon; but our interest in these images is not merely an interest in their patterns of luminosity values. This is what van Fraassen misses. The phenomenology of picture perception is not a binary between illusion and inference. As we saw, many pictures fall on a range of doubled vision in between. Medical images may not be windows through which we look, but neither are they mere surfaces to be interpreted as outcomes of measurement. Seeing in medical images is not phenomenologically indistinguishable from seeing their subjects, but neither are they like Western blots. Our interest in these technologies is not just an interest in their representational systems: we are not interested in them as patterns, but in those patterns as design. It is not enough to treat them as measurement outcomes without relating that to visual interpretation. Before we can understand or interpret the representational system/design features of MRI we have to be able to see those features as belonging to specific areas of the brain.
If increased signal intensity in an area is diagnostically relevant, this requires seeing brains in the images: being able to recognize and identify anatomical features, and being able to navigate our way spatially through territory that is familiar not only as an artefact of the technology but from knowledge of anatomy. The New Phenomena Hypothesis does not provide a way of closing off the idea that these images extend our perception. As with microscopes, it could be that we do not see ‘through’ images the way we see ‘through’ a window but rather use them to see – we see with them. We mobilize a lot of background theory and knowledge in making and using these representations. Seeing, whether aided by microscopes, telescopes, mirrors, or ultrasound, is still seeing; we might be seeing things in a different manner, but that does not make it a sui generis kind of visual activity. Not only observation but vision in general is an active process of obtaining information, specific sorts of information that we use for a variety of ends, and not a passive process of mere optical contact.

That isn’t Seeing: Challenge Two – Transparency and Mediated Perception

Seeing-in is a relationship that we have to some pictures (whether or not we think that depiction rides on it). Beyond the question of whether or not our visual experiences as of subjects are supported by the marked surfaces of images, there is the question of the relationship between pictures and their subjects – whether we literally see the subjects when we see them in pictures, or whether we see them only in the design. The Transparency Thesis, first articulated by Kendall Walton, holds that we see through at least some pictures.205 That is, in those pictures we are able literally to see the subjects when we see the pictures. Walton offers this account as a challenge that takes the form of a slippery slope argument.
204 van Fraassen, Scientific Representation, 141–160.
It is unproblematic that we see through windows, eyeglasses, and binoculars. We use mirrors and periscopes to see around corners, telescopes to see things that are too far away, and microscopes to see things too small to be seen with the naked eye. It isn’t much of a stretch to say that security guards see things on closed-circuit cameras, or that we see live telecast events on television. If we accept these we could perhaps accept that we see delayed broadcasts of events, or our own families in old home movies. And if we are willing to come this far, why not accept that we see through photographs, or even through paintings and drawings? Walton ends the slippery slope at photographs and other mechanically produced images; others extend it to all pictures.206
Arguments for transparency tend to share certain features. The first is that pictures stand in an appropriate relation to vision: given some version of the causal theory of perception, pictures are such that they can carry the causal chain from object or scene to visual experience. A second feature is counterfactual dependency; for something to play the appropriate role in the causal chain of vision requires that it would appear different were the scene different. We see through pictures because they produce visual experiences that are counterfactually dependent on the visible properties of objects, such that if the scene had been different, the picture would have been different. A third feature is belief independence. Were someone’s visual experiences controlled or mediated by another’s beliefs, then that is not seeing – if my optic nerve is severed from my eyes and then stimulated by an experimental neuroscientist to produce visual experiences based on what he sees, this does not restore my vision; any counterfactual dependency is on the doctor’s beliefs about the world.
205 Kendall Walton, “Transparent Pictures: On the Nature of Photographic Realism,” Critical Inquiry 11 (1984): 246–77.
206 Ibid.
The final feature concerns the preservation of “real similarity relations”. In a picture, a horse is more likely to be mistaken for a donkey than for a house. ‘Horse’ and ‘house’ are similar in English, but not pictorially.207 A kind of camera that produced accurate descriptions of the scenes before it would not allow us thereby to see those scenes: even if it maintained counterfactual dependence and belief independence, the production of words would not preserve real similarity relations between the scene and the description. Imaging technologies, such as some electron microscopes, ultrasound, magnetic resonance imaging, and x-ray based modalities, fit these criteria. They allow us to see into bodies and to see things that we otherwise cannot see. They also give us a reason to re-examine the validity of the transparency hypothesis. The challenge Walton presents is to find a principled reason why the natural stopping point on the slippery slope should be before photographs. Walton argues that handmade pictures do not stand in the counterfactual, belief-independent relation to the depicted. As he points out, he is not saying that there is no significant sense of “see” that picks out seeing through windows and glasses but not through pictures; he is saying that there is a sense of “see” that includes those things as well as seeing through photographs (but not handmade pictures).208 So the slippery slope ends at mechanically produced images.
In his recent paper “Opaque Pictures” Berys Gaut argues that there is a principled reason to stop the slippery slope before it gets to any pictures. We do not see through pictures because “see” captures a natural kind that includes seeing with the naked eye and with some instruments, but not seeing through pictures. All pictures, mechanically produced or handmade, are opaque. If we are willing to extend the term to include pictures, then our term “see” fails to capture any useful natural kind distinction.
207 Ibid.
Gaut says,
So here is a response to that challenge: the ordinary sense of “seeing” does capture a clear distinction, and indeed one that is a natural kind distinction. The idea of seeing involves that of being in unmediated or direct contact with an object. Mediation here is understood in natural kind terms. For us, constituted biologically as we are, we see an object only if rays of light pass uninterruptedly from it to our eyes.209
One of Gaut’s concerns is that the conditions under which pictures can be thought of as transparent recast seeing in unacceptable ways. A picture or process could meet the requirements described above without seeming like seeing at all. He presents as an example robotic gorillas designed as perfect replicas of real gorillas living at a distance. The movements and gestures of the robotic gorillas are counterfactually dependent on the real gorillas; their movements and appearance are belief-independent and preserve real similarity relations. In this case, Gaut says, we do not think that we see the real gorillas through the robotic ones. If we found out we were not watching the real gorillas, we would be disappointed.210 Intuitively Gaut seems correct: we do not see the real gorillas through the robots.
208 Kendall Walton, Marvelous Images (Oxford: Oxford University Press, 2008), 111–112.
209 Berys Gaut, “Opaque Pictures,” Revue Internationale de Philosophie 62 (2008): 381–96.
210 Ibid., 392.

Appeal to Viewing Practices

A problem with the transparency debate is that, too often, it comes down to clashing intuitions. Walton claims to find it intuitive that photographs are transparent; his detractors do not. A stalemate of this sort makes it worth examining whether the claim is being cast in a problematic way. Gaut’s hard line on the question of transparency helps to recast the issue. Instead of asking when we are or are not seeing, we should ask what kinds of mediation our vision is treated as surviving.
I want to do this by offering some cases from scientific practice where we undoubtedly see pictures but also treat them as mediated perception. Gaut accepts that we see through optical telescopes because rays of light pass unmediated between the object and our eyes. Yet, as more technologies become computerized, their displays are often on screen rather than through traditional eyepieces. If I am looking through a telescope (or a microscope for that matter) and I see something interesting that I want to share with my lab mates, I might change the view from the eyepiece I am looking through to a screen display. What I saw, in this sense, becomes public; it is displayed on the screen for all of us to examine. In this case, it does not make sense to say tout court that I (or my lab mates) cease to see the cosmos, or even that I am suddenly seeing something different. While there may be a sense in which it is right to say “now I am seeing through the scope, and now we are seeing a representation of what we would see through the scope”, this difference is not one that plays a role in scientific practice. Rather, the screen extends our practices by allowing us to talk about the scenes in front of the scope. A television camera operator who looks at scenes on a display screen rather than through the viewfinder, merely for matters of framing or getting a good shot of something, hardly notices if he suddenly sees only a representation: that he was seeing a scene, and is now seeing a naturalistic representation. Even if Gaut cannot accept that we see through television, it seems strange to say that the change in technology from viewfinder to display screen poses a grave challenge to the ordinary notion of “see.” If using new technology radically altered our visual access to things, it seems that it could hardly play the role it does in our visual practices. Mediation might slightly alter (and sometimes improve) our experience, but that does not make it a difference in kind.
This challenge increases if we consider laparoscopy. To think that we use a camera for the purpose of seeing images of the body, without thinking that those images also allow us visual access inside the body, problematizes the practice unnecessarily. The sheer number of surgical procedures now done laparoscopically suggests that some mediation, some interruption, is adequately accepted in our visual practices. Gaut’s criterion is just too strict. Our kind determinations have to be based on our language as well as on our actions, and it could be that both are more flexible than Gaut allows. It could be that our usual sense of seeing works well enough to capture a general idea of what seeing is, without being specific enough to accommodate the more technical cases and the ways that we are able to extend our visual abilities. In a scientific context, images are often the only way we have of seeing objects. Mediation does not change the activity we are performing or the cognitive resources we are using when we are concerned with seeking specific information or performing procedures. In trying to make a principled distinction between cases where we see and do not see, Gaut’s argument is too restrictive: it rules out cases where we would ordinarily talk about seeing. Rather than taking “see” as marking a natural kind distinction, perhaps we should consider it a success term for a certain kind of visual act within a larger cluster of such acts.

Argument from Aspection

“Seeing” has been taken to be a success term, basic to visual perception, with the form “I see x” and the opposite being something like “I cannot see x”. Usually this is unproblematic: I can see the tree outside my window, but I cannot see Grouse Mountain because it is occluded by houses. I believe that some of the particular confusions detailed above stem from a failure to properly appreciate vision as an intentional activity itself.
This is to suggest that when visual perception is considered as an active process, simple “seeing” might not be definitive of any natural kind. Vision consists in different perceptual acts aimed at acquiring specific kinds of information. Paul Ziff calls these “acts of aspection”:211

to contemplate a painting is to perform one act of aspection; to scan it is to perform another; to study, observe, survey, inspect, examine, scrutinize etc… are still other acts of aspection…I survey a Tintoretto while I scan an H. Bosch. Thus I step back to look at the Tintoretto, up to look at the Bosch. Different actions are involved. Do you drink brandy in the way that you drink beer?212

Frank Sibley also investigated differences among visual acts. Sibley divides them into quest and scrutiny verbs: one set concerns seeking particular information, the other the investigation of minute features of something.213 Scanning, scrutinizing, peering, inspecting, staring, and watching are all different visual acts, requiring different kinds of attention and different kinds of actions towards the visual world, and they are definitive of perceptual learning and expertise. While Ziff discusses these acts in terms of the different aesthetic demands of different styles of art works, the same holds for our acquisition of information from the world. We scan for familiar faces; we scrutinize a lover’s face for signs that they are lying. When we scan a crowded parking lot looking for a car, or a crowd looking for a friend, we are searching the visual environment for specific features, overlooking the wealth of visual detail not relevant to our search. This is true even if there is no specific object for which we are looking. Scanning a horizon we might be seeking to register motion, or dark areas that suggest predators or visiting friends – scanning can be successful even if nothing particular is searched for or found. Scrutinizing is the same.
I successfully scrutinize my skin for flaws even if I find no flaws. It is close and careful visual attention to something. Scrutinizing is overlooked in philosophical discussions of science and medicine, though I take it to be one of the usual modes of visual attention. Inspecting involves taking in the features of something, sometimes with a view to normativity, sometimes as a catalogue. We inspect an injury; inspection is not geared to any particular end. There is nothing specific that we are looking for, except perhaps anything we ought to notice. We gaze at something when we focus our attention steadily on it for a period of time. Gazing can have emotion behind it; it can be kind, loving, interested, or intimidating. We stare when we focus our attention on something with surprise, anger, or other emotions. Different intentional acts can underlie visual acts, and we both bring to and take from these acts different kinds of information. “See” is a success term which generally means success in making visual contact. If visual attention is a central condition for seeing (which it could be, since non-epistemic seeing remains controversial), then there is reason to consider specific perceptual acts as having their own conditions for success. Visual perception is often treated as if it were a simple and indivisible whole, as if successfully making contact were all there is to vision. If vision is instead treated as a cluster of actions mobilizing different resources of vision and attention for acquiring information from the world, it becomes less clear that there is any single basic visual success condition.
211 Paul Ziff, Antiaesthetics: An Appreciation of the Cow With the Subtile Nose (Boston: D. Reidel, 1984).
212 Paul Ziff, “Reasons in Art Criticism,” Philosophical Turnings: Essays in Conceptual Appreciation (Ithaca: Cornell University Press, 1966), 71.
213 Frank Sibley, “Seeking, Scrutinizing, and Seeing,” Mind 64 (1955): 455–478.
Undoubtedly we can observe things that we cannot see on Gaut’s condition. Hidden video cameras, and even night-vision cameras, are sometimes the only way that we can observe the behaviour of some creatures. One would think that if “see” were basic to or definitive of the kind, none of our other visual acts would be possible in its absence. Can we watch, scrutinize, or notice that which we cannot see? This is an ill-formed question. We often watch for things we may never see. We also use many different visual act terms as if we were able to see through pictures. For example, if I watch the Oscars I might well concede that I am not, literally, seeing the Oscars. I might notice someone at the Oscars, or that someone cried while giving her acceptance speech. I might scrutinize someone’s face to see if they have had plastic surgery. It becomes unwieldy to relay all of these things and still accept that we are not seeing them. “See” loses its real power as definitive of a natural kind if we are able to perform all kinds of other visual acts even in its absence. Moreover, our acts of aspection are often guided by other kinds of knowledge. To consider the content of perception merely propositionally is to neglect the very different avenues by which information is sought and acquired, and the different forms it can take. If I am knowledgeable about 17th-century Dutch painting, this gives me reason to scrutinize a newly discovered masterpiece in order to establish its authenticity. In doing so, I may perceive anomalies that determine that it is by Van Meegeren the faker rather than Vermeer the Dutch master. In inspecting a rash, my doctor may be able to determine things about my health that are not available to me no matter how much or how closely I look: the content of our visual experiences could be different.
Counterfactual dependency, causal connection, and belief independence (even together) lack the experiential aspect, the “real similarity relation” that defines the experience as visual. It seems as though “real similarity” is a relation that would hold between an object and its representation. Given the problems with similarity and its breadth discussed earlier, we have reason to examine whether the similarity is an experienced similarity.214 This is a more promising approach, but again there are challenges: given the variety of visual acts, how can we say that there is only one specifically visual set? Or only a few?215 What we are able to see in a picture differs between people, and what it is right to see in a picture changes between contexts. To a doctor who has seen hundreds of rashes, a rash’s being dangerous or not may be visually salient. Repeated looking, looking in different ways, and learning things about what is seen affect what becomes visible to us. We visually learn through action, use, and engagement. We learn how different kinds of representations and instruments (from microscopes to cameras, from photographs to drawings) allow us to visually acquire information about our environment. We do not just peer; we interfere, and bring all of our knowledge to bear in our visual experience when it matters to us.216 These points will be taken up in the next section. There are cases when we treat seeing something in a picture as if we were seeing the object and cases when we do not, and the difference may have to do with how we are using the images. There are times when we need to attend to an image as a representation, as van Fraassen says we do. Then there are times when we need to act as if we are visually examining the object imaged, and the image is treated as presenting rather than representing the object.
214 Lopes, Understanding Pictures, 188–190.
215 Ibid.
With the rise of imaging and visualization, it is important to examine how these technologies mediate between our visual abilities and the objects that we see in their displays. The question should be less whether these can extend our visual capacities than how we use them as technologies which extend our visual capacities, and how they can be improved. For this reason, I do not accept that imaging technologies are only visual prostheses in the representational sense described by Gaut and van Fraassen. Rather, I will argue, they extend our abilities to see things in part by producing transparent images. Medical imaging technologies then allow us to see things we literally could not see before, and seeing these things grounds all of the other things we can then do. Moreover, they do this without requiring any visual apparatus beyond what we already have.
216 Hacking, Representing, 141.

5. Representation and Real Similarity

The discussion above explores two perspectives on how we should understand the limitations of vision and whether, or to what extent, it can be extended. In both arguments there are limits to vision that affect how we should consider visual representations. If we were to accept these limits, images or representations should be dealt with by understanding and interpreting the surface features of the image and their genesis. Van Fraassen is right to consider images as novel productions in scientific practice; these are marked surfaces of new measuring technologies that generate novel signals. If we do not see through pictures yet we see in pictures, our experience is of what is on the marked surface and ends there. In this chapter I will argue that while we see-in medical images, these images also have features of transparency. Furthermore, examining these features allows us to explain how they extend our vision.
I agree that we have biological limitations on our vision; we cannot discern more than a certain level of difference in luminosity values, nor can we see without aids things that are too small, of the wrong wavelength, or too spatiotemporally distant from us. I maintain, however, that we are able to extend our perceptual capacities using representations. No medical imaging system uses a signal that is visible to us in the usual way: this is why we have created imaging technologies in the first place. Our seeming to see is just what needs to be explained. What we do see is the marked surface. A defence of the claims made about extending vision requires examining design and surface seeing and the role they play in image interpretation and use. Medical imaging technologies are measuring technologies that create marked surfaces, so to understand them as representations requires that we understand the design, its representational relationship to the tissue, and the role design seeing plays in image interpretation. What is novel in terms of measuring is the signal, which is to say that signal is important, even of primary importance, for an account of medical images. My claim has been that we can extend the range of visible things by taking advantage of these limitations and by marking surfaces in such a way that we can see things we couldn’t see before, including properties of things we couldn’t see before. Because of this we can make judgements, take actions, and have knowledge of previously unseen things. Neither should we think of this kind of seeing as sui generis; it is an actual extension of what we can see. We create pictures in order to have visual experiences of objects, and these experiences go beyond the surface of an image. The claim from van Fraassen that representation involves specific resemblance between target and representation does not do justice to our use of these images, and the claim for the opacity of pictures is too strong.
Of course, there are limitations on what we see in these pictures just as there are limits to what we can see, but if I am right that these are visual prosthetic technologies, what we can see in these images outstrips what we can ordinarily see. Remember that seeing-in is a kind of doubled vision that is either illusionistic or not. When it is illusionistic it is separated from design seeing and from surface seeing. None of the images discussed are illusionistic. If it is twofold it doubles subject seeing with design seeing: the subject is seen in the parts of the marked surface that give rise to it. If it is naturalistic, subject seeing is doubled with surface seeing, but the surface patterns are not seen as design. If it is pseudo-twofold, subject seeing is doubled with pseudo design seeing: design seeing follows subject seeing. In other cases, as discussed in the Novel Phenomena Hypothesis section, surface seeing can be design seeing separate from subject seeing. In these cases we see the surface patterns as design – as bearing resemblance relations to what is represented – separate from the visual experience of the subject. This covers cases such as graphs, charts, and perhaps images of things that do not have visual properties. In such cases it is signal seeing, and not subject seeing, that is important to understanding the images.

Signal, Luminosity and Data

Medical imaging technologies create new signals which are informative measurements of tissue. The instruments collect signal data from the tissue in order to produce images that are used in practice. In describing the signal→luminosity representational system used in medical imaging, I mentioned that there is a counterfactually dependent causal chain between the biological indicator of tissue and luminosity value – if the tissue were different, the luminosity value within a system would be different.
Imaging maps signal from a voxel of tissue to the luminosity of a pixel; thus the pattern of luminosity values is a map of the pattern of signal in the tissue. If we accept that representation involves resemblance, the main resemblance or relation at the level of signal data seems to be that higher signal values are brighter. Increased brightness means increased signal in the voxel of tissue. No one would claim that the likeness between signal and luminosity is a visual likeness; apart from how we see them in images, there is nothing these signals look like to us. If we want to talk about signal value as like luminosity value, that likeness has to be cashed out in terms of intensity – greater signal intensity means greater intensity of luminosity. This is a way in which signal→luminosity differs in medical imaging from its use in photographic technologies. In black and white photographs the signal is something like spectral reflectance – the relation between surface reflectance properties of objects and light. This signal is spatially mapped out from a scene onto a two-dimensional plate, film, or screen. Because photography captures many of the same object properties that our visual systems detect, areas that we would experience as brighter are of higher luminosity in a photograph. There is a kind of similarity in our experience between areas that we would perceive as bright in a scene and areas that are mapped as luminous in a photograph. This could explain why photographs are naturalistic: our experience of their subjects is separate from our experience of the surface as design. The appearance of things in terms of their brightness is similar in photographs and in ordinary viewing. Our eyes and photographic technologies detect or measure similar things and represent them as similar. This is not the case with medical imaging.
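The voxel-to-pixel mapping just described can be put in a small sketch. This is my own toy illustration, not any scanner’s actual reconstruction pipeline: the signal values are invented, and linear rescaling to an 8-bit grayscale range is only one simple display choice among many. What the sketch shows is the structure of the signal→luminosity system: higher signal in a voxel yields a brighter pixel, and the spatial pattern of signal in the tissue is preserved as a pattern of luminosity.

```python
# Toy sketch of a signal -> luminosity mapping (illustrative only).
# Each voxel's signal value (arbitrary units) is linearly rescaled to a
# pixel luminosity in 0..255, so higher signal means a brighter pixel and
# the spatial pattern of signal is preserved in the pattern of luminosity.

def signal_to_luminosity(signal_rows):
    """Map a 2-D grid of voxel signal values to 0-255 pixel luminosities."""
    values = [v for row in signal_rows for v in row]
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1  # avoid division by zero for perfectly uniform tissue
    return [[round(255 * (v - lo) / span) for v in row] for row in signal_rows]

# A 2x3 "slice" with one voxel of markedly higher signal than its neighbours.
slice_signal = [[10.0, 12.0, 11.0],
                [10.5, 40.0, 11.5]]
pixels = signal_to_luminosity(slice_signal)  # the high-signal voxel maps to 255
```

Note that the counterfactual dependence discussed above is built into such a function: had the tissue, and so the signal in a voxel, been different, the luminosity of the corresponding pixel would differ as well.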
The fact that we see bone as white and that it appears white in images does not mean our experience of white in the image is tracking whiteness in the object. Whatever we think colours are, they require the reflectance of visible-spectrum light off objects and are likely our way of tracking surface reflectance properties. Luminosity values track signal values in a similar way, but there is no way that this is a perceptual similarity, as there is nothing it is like for us to see a particular value of signal except as it appears in images. A difference in luminosity means a difference in signal, but signal itself is novel and does not track anything that our vision does. It does not track visual properties of objects. There is a further complication in this. The relationship between signal value and luminosity value is not diagnostically meaningful on a pixel-by-pixel basis. Luminosity represents level of signal, but the measurement of any pixel is only relevant in relation to other pixels or voxels around it. Ultrasound and MRI were both originally developed for one-dimensional detection and were found to be uninformative for medical purposes. Signal value per pixel is not useful for evaluating tissue for diagnostic purposes. For other materials and other material indicators there may be a useful one-dimensional relationship between signal and material, as in magnetic resonance spectroscopy, but tissue is too heterogeneous.217 The important representational relationship cannot merely be in the mapping from voxel to pixel. In medical imaging luminosity values are representationally important in their spatial relationships. Luminosity values form the design of the image, but this does not entail that we see the subject there.
217 Kevles, Naked to the Bone, 159.
If we think that medical imaging can extend our perception, but that the visuality of medical imaging needs to be explained in terms of the surface of images, then the place to start is with the similarity between tissue patterns in the body and luminosity patterns on the image surface. Because of the mapping from voxel to pixel there is an actual similarity or resemblance between patterns of biological indicator dispersal in the two. Of course, this resemblance holds whether we use luminosity values, numerical values, different colours, or different shapes as the output: these are not visual properties. However, there does seem to be something about luminosity that makes the signal→luminosity system a useful way of displaying data and that makes it seem as if we are looking into the body. The extent to which imaging technologies are amplifiers of our visual capacities has to do with the appearances of the marked surfaces produced. The only information that can be useful in such systems, if they are to serve as communicative systems, is information we can extract. Representational systems are not useful if we cannot reliably extract patterns from them. If, in order to use an imaging system, we had to discriminate between line lengths, colour differences, or subtle spatial differences that exceeded our capabilities, it would not be very useful. Indiscernible differences cannot make a difference in the meaning or the content of the images we make.

Design Seeing and Information in Graphical Representation

How then does the representation of signal in the surface patterns of these images extend what we can see in a way that could underlie how technologies amplify our abilities? We might think that by examining them as surface patterns we can become aware of a resemblance between the design properties and the object – even if this is not a visual resemblance.
If we think that our seeming to see brains in MRIs is constrained to seeing a brain in the design of the image, then we need to examine how design features might correspond to the measured features of the brains imaged. This is to put seeing-in as a secondary feature, and to focus not on the surface marks as sustaining an experience, but on the surface marks as representing features of the subject. It is signal seeing rather than subject seeing. Examining a surface marked in the way described above allows us to perform new cognitive and other tasks by taking advantage of our ability to extract information from the design. In a comprehensive work on graphical representation, Manfredo Massironi lists three ways a graphic mark can be used to show some aspect of reality: 1) correspondence of edges; 2) geometric correspondence; or 3) visual organization.218 In this case edges include textures and cracks as well as internal and outline shape edges. The lines, shading, and cross-hatching that define the planes of a figure may not correspond to the object – human bodies do not have dark outlines except in amateur figure drawings – but they correspond to our experience of edges, shadows, and visual texture.219 The second condition is meant to capture the link between Euclidean geometry and the behaviour of visible forms and geometric figures. If we accept van Fraassen’s characterization of representation as measuring, many of our pictorial representations in linear perspective are measurements of the appearance of space in vision.220 Our graphical surface markings correspond to the way our visual systems detect geometric patterns within the visual field. The markings share in this geometric relationship.
218 Manfredo Massironi, The Psychology of Graphic Images: Seeing, Drawing, Communicating (Mahwah, NJ: Lawrence Erlbaum Associates, 2002), 142.
219 Ibid.
220 van Fraassen, Scientific Representation, 180.
Visualization of Data: the Tufte Cases

Visual organization can be seen as the product of a long history of trial and error in artists’ work. This includes how we classify things together in groups, perceive things as different, and perceive in gestalts. For example, work done by Edward Tufte on visualization takes advantage of visual organization. Statistical relations in data, or between groups of data, are easier to grasp when presented in some ways rather than others. When presenting quantitative data visually, some manners of presentation obscure information while others enhance the informativeness of the data. For example, Charles Minard’s chart of the Napoleonic campaign to Russia (Figure 5.1) captures the number of soldiers heading for Russia (in beige), the casualties along the way, the numbers leaving Russia (in black), the distances travelled, and the temperatures.

Figure 5.1: Charles Minard, Napoleon’s Russian Campaign (public domain)

This is clearly a case of developing tools that capitalize on our visual abilities, in this case visual organization. It is also the case that such data visualization techniques can allow us to see patterns, connections, and relations in the data that we could not see before. We are able to see the data in these images in ways we could not were it presented in other graphical formats. That is to say that we can extract information about the data from the image that was not (perhaps could not be) available were the data presented differently. This is important for a number of reasons, from pedagogy and the public presentation of data to analysis tasks such as surveying trends and identifying outliers. These principles are used in understanding human/computer interfaces and visual analytics, as well as in imaging.
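The claim that some manners of presentation enhance, while others distort, the informativeness of the same data can also be put in a toy sketch. The numbers, bin edges, and colour choices below are invented for illustration; the point is only that two nearly identical values stay nearly identical under a continuous grayscale encoding, while a binned colour encoding can push them into different categories, making them look more dissimilar than they are.

```python
# Hypothetical illustration of how display choices affect comparisons.
# to_gray: continuous grayscale encoding over an assumed 0..100 data range.
# to_colour_bins: a crude three-colour lookup with invented bin edges.

def to_gray(v, lo=0.0, hi=100.0):
    """Encode value v as an 8-bit gray level over the range [lo, hi]."""
    return round(255 * (v - lo) / (hi - lo))

def to_colour_bins(v):
    """Encode value v as one of three colour categories."""
    if v < 25:
        return "blue"
    if v < 50:
        return "green"
    return "red"

a, b = 49.0, 51.0  # two nearly identical values straddling a bin edge
# In grayscale their encodings differ by only a few levels out of 255;
# in the binned colour map they land in different categories entirely.
```

The design choice, then, is not merely decorative: the same two data points can be made to look alike or unlike depending on which visual channel carries them.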
For example, grayscale is often used in imaging when finer comparisons must be made within an image; using colours to correspond with differences in the biological indicators can make areas seem more dissimilar than they are.221 A key consideration here is what we take the representational system to be. In the Minard image above, different colours represent the direction of the flow of soldiers, length represents distance, and width represents numbers. The correspondence in this case is symbolic yet not arbitrary: these correspondences were chosen for the ease with which they present the encoded information. The same is true of pie charts, bar graphs, and other familiar kinds of graphical representations. These kinds of representations can allow us to see new relations between the data imaged. The data is presented in ways that make salient certain of its gross features. In these cases, which I will call Tufte cases, data is presented in a way that exploits our visual organizational abilities for the sake of communicative clarity.222 Not only are we able to extract the data more clearly, but it is easier to understand and facilitates further inferences about the relationships between the data points plotted. In the image above, line width represents the number of soldiers in the army, so the narrowing of the lines clearly shows the seriousness of the reduction in troops over the campaign; line colour tracks direction of movement, and as this is a difference in kind (advance, retreat) the change of colour makes it plain. Other aspects of visual organization could include gestalt features such as figure and ground, things being grouped together or distinct.223
221 Joyce, Magnetic Appeal, 47.
222 Edward Tufte, Envisioning Information (Cheshire, CT: Graphics Press, 1990).

It could be that medical imaging should also be understood as taking advantage of our visual organizational skills: imaging technologies produce images whose patterns of signal exploit our visual organizational capacities. Higher levels of signal appear as white and are foregrounded over the lower levels of signal. The surfaces are scanned for areas of difference in signal that are indicative of problems in tissue. The patterns of tissue dispersal in the image have a geometric correspondence to the patterns of tissue dispersal in the brain, and areas of higher signal in a certain kind of tissue stand out as importantly different or significant. This also helps to explain some features of image interpretation. There are particular patterns of signal that radiologists or sonographers learn to look for during tasks. These signs are evidence of certain states of the body, disease states or not, and being able to recognize signs and what they could mean is an important aspect of learning image interpretation in the medical context. These signs, furthermore, are the appearances of bodily states in imaging modalities – appearances which are novel creations of the imaging technologies. In radiograph interpretation especially, there are hundreds of signs that radiologists must learn to recognize as part of learning diagnostic techniques and performing differential diagnosis. The bamboo spine sign, for instance, is a sign of ankylosing spondylitis, a fusing of vertebrae. In radiographs a spine with this condition has the appearance of a strand of bamboo, and textbooks even display images of bamboo beside images of the radiograph in order to emphasize the resemblance.224 The hamburger sign and turtle sign in ultrasound are signs for female and male genitalia respectively.
For diagnosis or for gender evaluation we might think that the appearance of these patterns of signal in the images is evidence that design seeing is indeed what is important to medical imaging – but that this is separate from subject seeing. In medical images the appearances that matter are patterns on the surface, separate from our visual experience of the subjects. The main way that medical images represent, in this case, is not pictorial, since the subjects do not have any visual properties. The signal produces these characteristic appearances, and so experts are able to make inferences from the appearance of these patterns to certain diseases. These conditions are seen in the images in that they have certain appearances in imaging. To use these images diagnostically, then, is to see certain things in the images. What is seen in the images are these characteristic patterns, which represent the appearances of certain bodily states in different imaging modalities. Design seeing gives rise to subject seeing, and the subjects are seen in the design, but the subjects are not the objects of ordinary perception; they are bodily states as represented by the technology. If this is the case then signal seeing could be adequate for explaining how our vision is extended in a way that underlies our other abilities. Imaging takes advantage of our abilities of visual organization – we are able to see things we could not see before because these are appearances that never previously existed. Recognition of these patterns of signal allows us to infer various bodily states from the appearance of the image, without the assumption that we use these images as windows to peer into the body or see non-visual properties of objects when we see the subjects of pictures.
223 Ibid.
224 Michael E. Mulligan, Classic Radiologic Signs: An Atlas and History (New York: Parthenon Publishing, 1997), 6–7.
Yet there is an intuitive difference between how an EEG and an MRI represent the brain. We can learn to see and diagnose hypsarrhythmia, a specific pattern of brain waves characteristic of certain kinds of epilepsy, because we can learn to recognize its particular pattern of lines in EEG. EEG seems to have the kind of counterfactual dependency that imaging has, and to record functional activity – electrical patterns – in the brain. Yet saying that we see patterns of electrical activity of brains in EEG seems different from saying that we see functional activity in an fMRI, and the difference seems to be a perceptual one. It seems as if in the EEG we see electrical activity only as patterns of lines in the EEG output, whereas the power of fMRI seems to be that we see what is going on in the brain. EEG extends our vision because we can come to visually discern these patterns and make diagnoses based on visual information, yet it does so in a way that does not go beyond the surface of the page.

Signal→Luminosity→Anatomy

There are two factors to consider that should give us pause before accepting this view. One is that design seeing does not entail attending to, or awareness of, the representational aspect of the design. Some of the surface features that are important in medical images are important as design features, and some are important as representations of signal. Recognizing radiologic signs seems to be a case of signal seeing: of attending, in the case of radiographs, to patterns of higher tissue attenuation. In order for these patterns to be salient, however, they need to be localized into the context of a body. Before anyone could ever recognize the tree-in-bud sign in a chest radiograph, there had to be an understanding of the anatomic features of the chest cavity and lungs, and the ability to differentiate between important and unimportant features.
In order to determine foetal sex, the sonographer has to be able to navigate her view of the foetus to make sure she has the right parts of its body in view. The appearances of these bodily states presuppose the ability to visually navigate the body. That is to say, the ability to recognize the signal features as meaningful presupposes recognition of the body. The appearances captured in medical imaging are useful because of their similarity to the way the body appears to us – not under a specific description via the signal, but based on ordinary visual anatomical knowledge of the kind that comes from having ordinary visual contact with, and ordinary visual knowledge of, these perfectly observable entities. In MRI reports, radiologists will often talk of ‘increased signal intensity’ in an area. This sounds as if what is being discussed is the representational system and design features of the image. Take the following sentences from a radiology report:

a) “As noted previously, there are numerous foci of increased T2/FLAIR signal in the cerebral white matter bilaterally.”225

b) “A new lesion is now seen to involve the right medullary pyramid and olive (image 59, and 60, series 4). Maximum dimension of this new lesion is approximately 6 mm.”226

Sentence a) seems to be remarking on the luminosity values as markers of signal, on increased signal as represented in the image. Sentence b), on the other hand, directly discusses areas of the patient’s brain and what is seen there (a lesion). The area with a lesion is identified and the lesion can be measured approximately; changes in lesion size and position can be tracked, as in the diagnostic criteria for MS. Even in a), where the description is in terms of signal intensity, the radiologist remarks on what tissue type shows evidence of increased signal. The luminosity values as markers of signal are diagnostically relevant in a): they are perceptibly different from the values of the cerebral white matter.
Cerebral white matter has a standard appearance in any magnetic resonance sequence, and areas appearing darker or brighter are taken to be areas where the tissue is different. The importance of luminosity values is not in specific signal values per voxel, but in the uniformity of luminosity values for different tissue types. When tissue is different the signal will be different, and so will the luminosity value. Tissue that is deviant in signal is of potential diagnostic value. Both a) and b) above also make different diagnostically relevant claims about specific brain areas. The interest is not merely in differences in tissue, but also in differences in tissue in specific anatomical regions of the brain. The structural anatomy of the brain is different for everyone, and can be part of what is diagnostically important. Seeing that the midline of a brain has been pushed because of a space occupying lesion of some kind can explain neurological problems. The diagnostic significance of signal comes from being able to make distinctions about its significance in particular tissues. In this case, subject seeing must come before signal seeing in order to see the signal in the correct anatomical area. This is why the signal→luminosity representational system is useful when numerical values are not: using this system allows for representation of anatomical features as they are useful to us, which is to say as they appear to us in ordinary face to face vision. There are similarities between the appearance of brains in MRI and brains seen face to face, similarities in our visual experience of these things, which is to say in their appearance to us. These appearances are not merely a new phenomenon; rather, our experiences and knowledge of the appearances of bodies are exploited by imaging technologies.

225 Personal report on being diagnosed with MS, Smith family web site (accessed November 25, 2009).
226 Ibid.
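The signal→luminosity system just described can be sketched computationally. The sketch below is purely illustrative: the signal values, window bounds, and deviation cut-off are invented, not drawn from any real scanner or protocol. It shows how each voxel's signal can be linearly windowed into an 8-bit grey level, so that tissue with uniform signal receives uniform luminosity and a voxel with deviant signal stands out against its tissue type.

```python
def signal_to_grey(signal, lo, hi):
    """Linearly window a signal value into an 8-bit grey level.

    lo and hi are hypothetical display-window bounds: signal at or
    below lo maps to black (0), at or above hi to white (255).
    """
    clamped = max(lo, min(hi, signal))
    return round(255 * (clamped - lo) / (hi - lo))

# A toy one-dimensional "slice": uniform white-matter signal with one
# deviant voxel. These numbers are invented for illustration.
slice_signal = [400, 405, 398, 402, 750, 401]
grey = [signal_to_grey(s, lo=0, hi=1000) for s in slice_signal]

# The deviant voxel is the one whose grey level departs markedly from
# the typical (median) grey level of the tissue.
typical = sorted(grey)[len(grey) // 2]
deviant = [i for i, g in enumerate(grey) if abs(g - typical) > 50]
```

The chapter's point survives in the sketch: the grey levels matter not as absolute numbers but comparatively, as uniformity within a tissue type and salient departure from it.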
Imaging technologies extend our visual abilities by producing marked surfaces in a way that preserves real similarity relationships both in a mapping relationship and in experience. They produce images that are transparent for the features we need in order to visually navigate the anatomy of the body, and so to take advantage of the signal data in determining diagnostically useful information about the tissue. Subject seeing is important to understanding medical imaging. Likely it is mixed between naturalistic, twofold, and pseudo-twofold. These systems are also transparent.

Novel Views and Real Similarity

So what are the real similarity relations that images maintain between the objects imaged and our experience of them? In the case of medical imaging it is tempting to confuse the actual similarity between patterns of biological indicator in the object and the patterns of tissue in the image with the real similarity Walton describes. However, as was said before, these could be represented in different ways that maintained that relationship without eliciting visual experiences as of the object in us. The real similarity relations are those that play a role in our ability to have visual experiences of the patterns as tissue differences, and so to be able to recognize anatomical features, assess the normality of anatomic structures, localize and identify pathologies, and thereby perform diagnoses. In the case of medical imaging, these may differ between modalities, but they will include features whose usual visibility is in terms of tissue differences – shape and spatial arrangements that are part of our visual experience and knowledge of anatomy. This, I think, is what makes the daughter technologies of x-ray different from other imaging technologies, even those that are also visual prostheses. The difference begins with the use of probes to interact with biological indicators in tissue in order to produce a signal.
This signal tracks differences in tissue that, when displayed as luminosity values, create images which are transparent, at least partly. This separates them from other image types such as EEG, which create recognizable patterns and allow for localization of signal but do so in a non-visual way. When the first CT scans were created, Hounsfield and Ambrose were so excited about their images that they showed them to a number of radiologists, wanting to share their breakthrough imaging of the brain with others. To their dismay, the radiologists they showed the images to did not share their excitement: to these radiologists the images were just meaningless blobs. To their credit, Hounsfield and Ambrose then showed their images to neurologists, who were blown away by them. It turned out that radiologists at the time were unfamiliar with the brain and unfamiliar with seeing objects in cross section. The radiologists could not decipher the images, while the neurologists, familiar with both the brain and seeing the brain in slices, were able to correctly interpret the images.227 That neurologists didn’t require training to recognize features of the brain in CT suggests that it is not so different from the cross sections of actual brains that they were accustomed to seeing. The technology allowed them to see these things in living brains and not only in cadavers. While anecdotes of themselves are not conclusive, this one suggests some conceptual elements that are at play when we look at images. Seeing unfamiliar objects in unfamiliar views does not immediately yield the kind of visual clarity that may have been suggested above. This in itself is not inconsistent with pictures being perceptual. To someone with no training in viewing anything in cross section, a physical slice of a brain will fail to be informative.

227 Wolbarst, Looking Within, 94.
With such specific information requirements as are found in medicine, prior knowledge plays a more important role than it might with perception of scenes and objects in photographs. Seeing-in is exploited because it allows us to have visual experiences as of objects presented as featured in some way. We are able to have experiences as of seeing objects with features that those objects enjoy. In the usual case, these are visual properties that objects enjoy, but, in the case of imaging, the features are not properties that are normally visible. It is important to keep the content of pictures separate from the content of pictorial experience, because the two can come apart. I can misperceive a picture due to bad lighting, blurry vision, or almost any other reason. This does not mean that the picture somehow shows the world as blurry. Seeing-in is important for distinguishing the content of pictures from the content of other kinds of visual representations, in science and elsewhere. Seeing a brain in an MRI, we see the brain as having certain folds and contours, we see the relationship between white matter, grey matter, and cerebro-spinal fluid, and we see whether or not there are lesions. From this a suitable viewer can determine whether or not a brain is normal.228 Here we can explain the intuition that there is a difference between EEG and fMRI. In an EEG we see the electrical activity of the brain without seeing it as a property of the brain. The spikes and dips of EEG representation accurately measure electrical activity, and do so through counterfactual dependency between graphic marks and brain activity, but they do so without these features being part of a visual experience as of seeing a brain. I have already discussed ways in which seeing-in is not like everyday seeing.

228 More will be said about this in the next chapter and in the case study on MRI.
For one, we rarely believe that the object of our perceptual experience is there in front of us (although in trompe l’oeil cases this is possible). Furthermore, pictures generally resemble, in some sense, what they are of. This is not to say that this resemblance is adequate for a theory of depiction, but it is a salient perceptual fact about experience and one that is ordinarily explained by the visual features of both pictures and their subjects. In seeing-in we are often simultaneously aware of the design or surface features of the picture and of what is there represented. Yet I am arguing that seeing things in images is a way of seeing. Visual prosthetics, in the way I am developing the idea, are ways of hooking us up with the world perceptually. For this reason other aspects of everyday visual perception need to be examined. In ordinary perception the visual world is a nearly inexhaustible source of information. Different systems of representation abstract out and represent different features depending on what they need to communicate. Naturalistic paintings represent colour, edge, relative size, shape, shadows, and a number of other visual features from a particular perspective. Axonometric drawings represent edge, shape, and other features from a particular perspective different from that of linear perspective. Goodman’s claim that realism is a matter of familiarity is right in the sense that we are accustomed to a visual world where we identify objects based on certain sets of features and are rewarded in our visual endeavours with detailed information. Consider photography. We would not say that very high resolution digital photographs are more realistic than ones which are of slightly lower resolution. In fact, quite the opposite is true. There is something disconcerting about seeing pores, tiny hairs, skin textures in high resolution images of people. 
Not only do we not normally see these details in representations, we almost never see them face to face. They are above and beyond the demands we have for naturalism, since our demands end at a certain point of visual familiarity. This is not to say that these are not important visual properties of an object, they might be, but they are not criteria in ordinary appearance. We are able to identify and re-identify particular objects and kinds of objects by their visual properties, and learning to distinguish things from each other, even having very fine discrimination skills, is part of perception. Imaging likewise licenses a variety of visual acts. A glance at an MRI of a brain may be enough to draw some conclusion, or it may require intense scrutiny and attention to specific structures. A brain with an enormous space occupying tumour will, on certain views, be evident even to a non-expert with some knowledge of the brain. Images can be scanned, scrutinized, and inspected to gain more information, not only about the image itself, but about the object that was imaged. An image may be scanned and found to be normal (where this undoubtedly differs depending on the context), or may be scrutinized to find a lesion or other pathology. Different visual acts will be appropriate in different contexts; seeing when something is normal and knowing when to examine an image more closely comes from extensive experience looking at things in images.229 Most visual experience is tied to our perception of ordinary objects. What we see in pictures are usually just those objects with more or less just the properties we ordinarily experience them as having. We see surfaces and the surface properties of things, such as their shape, colours, outline shapes, and textures. Our visual experiences of visual objects have their content rather directly. We see trees as having particular shapes, being particular shades of green, and we can distinguish them from other trees.
Similarly, when we see trees in pictures they normally have particular shapes and particular colours as well – a pine tree may be represented as red, but not as rounded and deciduous. There are limits to misrepresentation. Imaging allows us to see ordinary objects, and features of those objects, even though we do not see them in the same way. We also see, in ultrasound, that cysts are lower in echogenicity than surrounding tissue, and that cerebrospinal fluid has slower magnetic resonance relaxation rates than cortical matter. This is, in some ways, no different from what we would say about black and white photographs: some areas are brighter, some are darker, and this is because they reflect more or less of certain kinds of light. The difference is that these are not questions we generally ask about photographs. What we see are tissue types, organs, follicles, specific anatomical structures; we see shape, motion (in ultrasound), and some more abstract properties such as density and normality/abnormality. It is about these things that we can make claims about the properties of biological indicators, and not merely about places in the surface array of the image. Imaging is not just in the business of introducing us to new kinds of objects for perception, nor can we make discriminations about echogenicity or other biological indicators except comparatively (and given the adjustments available on machines there are no ways to make such determinations across the board). The medical language and practice around imaging is not a language or practice of biological indicators: radiologists and doctors speak in terms of lesions, nodules, and abnormalities. They talk about tissue and body parts. How these are used is examined in the case studies.

229 Jacob Beutel et al., Handbook of Medical Imaging, vol. 1 (Bellingham, WA: International Society for Optical Engineers, 2000), 870.
6. Case Studies

Introduction

The following case studies each take up a particular imaging modality and explore it as a technology for producing vehicles for seeing-in. Each case study centres on a contemporary or historical debate that challenges or questions the visuality of the images and what they represent. Each debate brings with it its own assumptions about, and challenges to, the role of imaging in scientific and medical practices. For this reason the case studies range over a broad number of approaches to images in the literature, from historical methodologies in history and philosophy of science to more ethnographic and anthropological approaches. Scientific praxis is at the centre of all of these methodologies, and so discussions of imaging tasks, imaging contexts, and medical use supply material for explanations of both expert medical vision and more normative claims about how imaging should be considered.

6.1. Using Magnetic Resonance for Imaging

The greatest impact of MRI has been its ability to create startlingly clear images of soft tissue, which has led to its widespread use as a diagnostic tool across nearly all fields of medicine. The contemporary MRI images that are widely distributed are clear, high resolution, anatomical images of brains and other body parts, rendered in grey scale, which look like black and white photographs of the inside of the human body. MRI measures differences in levels of signal between tissue types and displays these as luminosity values to produce anatomic images – images of the appearance of internal structures. This imaging task allows expert viewers to differentiate between normal and damaged tissue and to recognize numerous pathological features for diagnosis. Image production now seems to be an inevitable part of an imaging technology, yet in the histories of all of CT, MRI, and PET there is debate over the role of imagistic representation in the clinical and research use of these technologies.
The contemporary appearance of MRI is taken for granted, but it is contingent; the output of early scanners was contested along a number of parameters. Early MRI scans were brightly coloured and sometimes had number arrays printed over the image; the machines were built to produce these kinds of images. The history of magnetic resonance imaging is, in part, a history of the debate over the technology by different groups and of the role of picture interpretation in medicine. There has been some analysis of these debates in science studies emphasizing the multidisciplinary use of imaging and the differences in language, professional vision, and representational practices between scientific disciplines. These discussions of the imaging modality do not examine in depth the kind of imaging practice and experimentation needed for MRI to develop as it has. The history of MRI is often recounted as a history of scientific rivalries: the quest for big money from funding agencies and the race to perform a human body scan. It was also a process of developing a new way of making images, a search for a suitable imaging task for the technology, and a new way of thinking of images in medicine. It might have been that the data from these nuclear magnetic resonance scans was presented as numerical arrays rather than images. The fact that MRI images now look the way they do will be historically situated. I will proceed by analyzing the history of the debates over how the data from magnetic resonance scanners should be presented and how the pictures should look. In particular I want to extend this discussion by examining the epistemic issues at stake in the debate and the repercussions this has had for scientific knowledge.

Prelude to Imaging – the Importance of Numbers

There are multiple factors at play in the development of MRI.
As with any technology, these factors include: the social pressures supporting it, the individuals and groups involved in its development, where the funding for the project was coming from, and why the project was considered important.230 With MRI the two biggest breakthroughs were the decision to experiment with measuring magnetic resonance of biological tissue and the invention of a way to use this information to create images. The context in which these breakthroughs were made is nevertheless important in shaping the way that the technology developed. Magnetic resonance was being exploited by chemists and by physicists as a way of measuring the chemical and structural properties of both liquids and solids; magnetic resonance rates were read in order to take advantage of the differences between materials. Magnetic resonance spectrography, as this is called, allows scientists to determine the composition of solutions (or solids) and gain information about the physical and chemical structure of molecules by the different resonance rates of the magnetic nuclei. (For details about magnetic resonance I refer you back to the technical details of MRI section of Chapter 1.) Magnetic resonance spectrography has since become very important for mining as well as other applications. What really spurred the use of magnetic resonance into the medical arena was the push to find a method for the early detection and diagnosis of cancer.231 United States President Richard Nixon announced his ‘War on Cancer’ in 1971, the same year that the first publications using magnetic resonance to examine tumours were published.232 Nixon’s State of the Union address that year not only announced greater funding for cancer research, but urged scientists to rally to find a cure for cancer in five years.

230 Joyce, Magnetic Appeal, 12.
Nixon’s words in that speech set the tone for the drive to find a cure for cancer and also reflect the fear of cancer in the populace.233

I will also ask for an appropriation of an extra $100 million to launch an intensive campaign to find a cure for cancer, and I will ask later for whatever additional funds can effectively be used. The time has come in America when the same kind of concentrated effort that split the atom and took man to the moon should be turned toward conquering this dread disease. Let us make a total national commitment to achieve this goal.234

The allusion to the ‘Big Science’ of the development of the atom bomb and the race for the moon landing captures the idea of what can be achieved if scientists band together with enough money. It also suggests a technological solution: scientists would be given money for results, rather than for searching out preventatives such as the nutritional and environmental factors which are now studied. The ‘War on Cancer’, the enormous funding it issued for cancer research, and the fear of cancer it reflects are all important in the history of MRI. In particular, the ‘War on Cancer’ encouraged more scientists who were using magnetic resonance to turn their attention to biology.235 Early experimentation with magnetic resonance and tumours suggested that it could play a role in the goal of conquering cancer. Raymond Damadian, a New York physician, had worked on electrolyte levels of cells as a postgraduate. He had been using nuclear magnetic resonance as a way to measure positive electrolytes in living cells since 1969.

231 Ibid.
232 Joyce, Magnetic Appeal, 28.
233 Ibid.
234 Richard Nixon: Annual Message to the Congress on the State of the Union, 1971, Presidency Project website (accessed April 17, 2010).
235 Joyce, Magnetic Appeal, 28.
Following up on Gilbert Ling’s suggestion that structural changes in cancerous tissue had to do with its use of water, Damadian began analyzing the nuclear magnetic resonance values of hydrogen in tumours compared to normal tissue. He found that the rate for one parameter of measuring tissue, termed T1, was significantly different in tumour tissue.236 His first publication on this, in 1971, claimed that one could examine the resonance rates in order to determine spectrographically the presence of cancerous tissue in samples.237 In 1972 Damadian got a small grant from the National Cancer Institute and put together his first design for a full body scanner using measurement of nuclear magnetic resonance values of hydrogen.238 Damadian’s full body scanner was designed to read the magnetic resonance values of hydrogen throughout the body and to present the values as a graph on a screen. Physicists and chemists had used magnetic resonance spectrography in this way for years, so there was reason to hope that magnetic resonance rate quantification would play an equally important role in medicine, especially in the elusive search for a way of detecting cancer.239 Damadian’s early findings were replicated, and these results really spurred scientists to pursue biological research using magnetic resonance with the idea that the differences in magnetic resonance rates between kinds of tissue could be used as a way of detecting cancer. The National Cancer Institute took the research seriously enough to fund Damadian, D.P. Hollis, and another team on a two-year contract to explore magnetic resonance and cancer.240 While Damadian was attempting to create a malignancy index of the magnetic resonance of cancer types, Hollis and others were finding the correlations between nuclear magnetic resonance data and cancer less than overwhelming and were exploring different territory. This quantification – the specific numerical data read off tissue samples – played an interesting role at the beginning of imaging, in particular in guiding the development of the technology and the imaging tasks to which the new technology could be put. These findings spurred more research into magnetic resonance in biology, including Lauterbur’s research, which will be discussed below.

236 Sonny Kleinfield, A Machine Called Indomitable (New York: Times Books, 1985), 31–37.
237 R Damadian, “Tumour Detection by Magnetic Resonance,” Science 171 (1971): 1151–1153.
238 Kleinfield, Indomitable, 120.
239 Ibid., 7.
240 Ibid., 121.

Prelude to Imaging – the Importance of Pictures

Lauterbur didn’t initially write about medical benefits in his 1973 note in Nature detailing his discovery of zeugmatography; he wrote about a new way of creating images. In fact, he had to amend the paper with a sentence about potential medical uses for the journal to accept it, as the benefits were not obvious. Lauterbur, however, was thinking about zeugmatography in medical terms. Damadian seemed to be onto something with his use of nuclear magnetic resonance to biopsy tissue taken from patients, but Lauterbur was squeamish about taking chunks of flesh out of people for analysis. He was advancing the new zeugmatographic technique as a way of applying Damadian’s insight, of doing a chemical analysis of the interior of objects by figuring out where exactly the magnetic resonance signal was coming from.241 Instead of removing samples of tissue from different areas for testing, zeugmatography could spatially locate the malignant tissue. The spatial location was meant to be used alongside chemical analysis of the tissues, not instead of it. The discovery of a way of making images from magnetic resonance was not yet a discovery of what imaging task the technology could be applied to. There were a number of different groups working on issues related to imaging, notably one led by Peter Mansfield (who along with Lauterbur won the Nobel Prize in Medicine for MRI in 2003) in Nottingham.
Mansfield had independently discovered a technique similar to Lauterbur’s, though his interest was initially in crystallography rather than imaging tissue. When Mansfield and his team presented their findings at a conference, everyone asked if they were familiar with Lauterbur’s zeugmatography.242 As Mansfield was working in physics and Lauterbur was working in chemistry, they had managed to miss one another’s work. Mansfield decided to drop the crystallography angle and to begin working on the physical and chemical magnetic resonance rates of biological tissues. The finding that using gradient magnetic fields to record magnetic resonance data could allow for a reconstruction of a spatial display of the data set off a “race” to find applications for the technique. With so many promising discoveries, there was great excitement about the possibilities for magnetic resonance in biological and medical contexts. Histories of MRI often describe a race towards the first human MRI scan.243 This race involved Damadian, Lauterbur, and Mansfield, as well as another group in Nottingham led by Andrews, and one in Aberdeen led by Mallard. In interviews with Kleinfield both Lauterbur and Damadian called the push to human imaging a race.244 Yet to describe it this way makes it seem as if they were racing towards a goal that was perfectly clear, which was not the case. Everyone wanted to do scans of human tissue, but they also wanted the scans to be clinically useful and safe. By this time CT was already becoming widespread. While x-ray attenuation values were considered important as biological indicators for visualization, the stigma of cancers and radiation poisoning still hung over them. An important and less discussed aspect of the development of MRI was the race to find an imaging task appropriate to the technique that had been developed, a way of using the nuclear magnetic resonance signal as an important biological indicator in medicine.

241 Ibid., 58–59.
242 Kevles, Naked to the Bone, 243.
243 Joyce, Magnetic Appeal, 6.
244 Kleinfield, Indomitable, 74.

Search for an Imaging Task

In the mid-1970s no one could have predicted how useful MRI would become. Researchers had not yet found the particular strength of the technology, and this is reflected in both the appearance of the output of nuclear magnetic resonance scans and the way they are discussed. Some of this ambivalence was because these early magnets were so small. Until 1977, when Damadian’s group built a 10 cm magnet, magnets could only accommodate objects up to about 1 cm.245 Although Lauterbur had performed an in vivo scan on a small clam in 1974, Damadian was considered something of a crackpot at this time for wanting to put humans into a magnet.246 For one, the magnets were tiny and no one could imagine one with a bore big enough to contain a human. Furthermore, even though magnetic fields were presumed safe, there was no information on their safety for living creatures – most experiments were being done on excised tissue – and the fear of radiation was still significant. Computed Tomography (CT) had been introduced in 1973, so the idea of doing scans in slices was in the air. This didn’t greatly impact the originators of MRI at the beginning, as most of them worked in areas far removed from radiology. Working in chemistry, Lauterbur knew so little about what was going on in radiology that when he figured out how to extract information from field gradients and reconstruct an image he thought he had invented a new field of mathematics. More important than images to those working on the technologies was the quantifiable numerical information of nuclear magnetic resonance rates.
In a note on the topic Lauterbur writes, “In addition to studies of chemical systems, biological applications, such as the differentiation between fat and other tissues in organisms, are under investigation.”247 How images could be made diagnostically or clinically useful was not yet evident. CT at the time was based on averaging the attenuation values of tissue at the various voxel points, and the machines produced both images and numerical output. The idea that nuclear magnetic resonance could similarly provide useful quantifiable information about tissue types, but without the radiation, was what was driving much of the industry around biological nuclear magnetic resonance. This is evident in the kinds of experiments that were being done and also in how the images were being made. Lauterbur was the first to produce an in vivo image, in 1973. It was of a clam, one of the few animals small enough and still enough to fit into a magnet. Lauterbur also produced an image of the thorax of a mouse in 1974. The first in vivo scan of human tissue was in 1977, when Peter Mansfield scanned his own finger without negative effects. This image was made using a line scan technique developed by Mansfield’s team that allowed scans to be made in 5 minutes and that opened up more possibilities for human scanning. Damadian’s paper detailing the first human body scans in 1978 shows the ambivalence about using nuclear magnetic resonance for imaging.

245 R Damadian et al., “NMR in Cancer: XVI: FONAR Image of the Live Human Body,” Physiological Chemistry and Physics 9 (1977): 97.
246 Kleinfield, Indomitable, 73.
247 Paul Lauterbur et al., “Zeugmatographic High Resolution Nuclear Magnetic Resonance Spectroscopy; Images of Chemical Inhomogeneity Within Macroscopic Objects,” Journal of the American Chemical Society 23 (1975): 6866–68.
He discusses the images in terms of the chemical nature of organs, tying them into the usefulness of other biochemical tests in medicine rather than comparing them to CT or x-ray. Quite boldly, he states:

The practice of medicine today is largely rooted in the anatomic descriptions of Vesalius and his intellectual successors who advanced the anatomic data base of diagnosis and treatment from the gross to microscopic description. Thus, common medical diagnoses such as cirrhosis of the liver, glomerulonephritis, Hodgkins sarcoma etc. connote alterations in the microscopic architecture of the diseased organs. The intuitive driving force behind the clinical application of biochemistry, however, has been the prospect of one day converting the practice of medicine from an anatomic to a chemical footing… The non-invasive determination of the chemistry of diseased organs and tumours in humans imposed requirements that could not be met by the existing nuclear magnetic resonance technology.248

Besides stressing the importance of the technology he developed, this shows that to Damadian the imaging task emphasized the chemical nature of the signal. He considered images to be maps of chemical properties of the body, whose importance lay in what could be determined about that chemistry. A 1980 review by Mallard’s group in Aberdeen suggests that nuclear magnetic resonance imaging would be limited to abnormalities of fluid formation.249 Even at this date they thought imaging should be done alongside quantitative studies of specific tissues in order to determine specific values for different kinds of tissue.

248 R Damadian, “Field-focusing Nuclear Magnetic Resonance; The Formation of Chemical Scans in Man,” Naturwissenschaften 65 (1978): 250.
Mallard’s group didn’t think nuclear magnetic resonance imaging would have the spatial resolution of CT, and saw only a limited use for it in breast and liver tumours, where the difference between tumour and surrounding tissue was highest.250 Even into the 1980s research continued on the differences in magnetic resonance signal between malignant tumours, benign tumours, and normal tissue, looking for correlations between tissue type and magnetic resonance rates that could be of diagnostic benefit. If a correspondence between tissue type and magnetic resonance response rates had been found, the emphasis would have shifted not to imaging but to one-dimensional recording and display of numerical information. If chemical information about different tissue types had been reliable for diagnostic purposes as numbers, magnetic resonance could then have served as a kind of non-invasive biopsy to test for cancer, more akin to spectrography than imaging. This would have altered the role of images, image interpretation, and the place of magnetic resonance in medical practice. In fact, it turned out that there is far too much variation amongst tissues and across different bodies for any such correlation to be drawn. The uncertainty about the imaging task to which nuclear magnetic resonance could be set is reflected in the appearance of the images around this time. Most were in colour, some included numerical arrays, and others included drawings of the slice of tissue alongside the image. The variation in imaging tasks across experiments and between labs was exacerbated by the concurrent development of scanning technology itself.

249 J Mallard et al., “In Vivo N.M.R. Imaging in Medicine: the Aberdeen Approach, both Physical and Biological,” Philosophical Transactions of the Royal Society of London, Series B Biological Sciences 289 (1980): 519–33.
250 Ibid., 530.
There was not yet consensus on what information would be relevant, let alone how to extract that information from tissue. Over the course of the experiments and the technological development that underlay this discovery, the use of gradient fields and image reconstruction techniques was taking off. Experimental imaging systems were being developed and early images made of different kinds of tissue. There was no standardized way of creating or interpreting images. All were cross-sectional, following Lauterbur and Mansfield’s initial imaging techniques, but the ways of displaying the resultant information were different. The imaging task is closely connected with the appearance of images.

Imaging with Nuclear Magnetic Resonance

By the mid 1970s, researchers had begun to understand that nuclear magnetic resonance had unexpected uses beyond spectrography. As Maudsley points out in his retrospective,

By the mid 70’s it was clear that nuclear magnetic resonance could not only be used as a probe into molecular structure but it could also be used to map macroscopic structure within large objects. It was also clear that the typical research topics of the physics department were going to dramatically change.251

251 A.A. Maudsley, “Early Development of Line-scan Nuclear Magnetic Resonance Imaging,” Magnetic Resonance Materials in Physics, Biology and Medicine 9 (1999): 100–102.

As hopes for nuclear magnetic resonance as a cancer detector waned, its other strengths were being explored. This marks the sorting out of the imaging task of MRI: the spatial representation of slices of tissue, enabling the determination of tissue types and differences. Around this time imaging was becoming standard in medicine; CT and ultrasound were playing a growing role in clinical practice as ways of looking inside the body. This diagnostic context is important because it helped to cement the specific imaging task of MRI and also the adoption of
the technology into radiology departments, which in turn impacted the appearance of the images. MRI historian Joyce describes the context,

By the end of the 70’s, the research teams had arrived at a working consensus about the name of the technology, the representation of the data, and the machine design. The common name for the technology was nuclear magnetic resonance imaging, or NMR. The display of the data included printouts of multicolour images and arrays of numbers, and the machines were designed to produce this output. The chemists, physicists, and research physicians who worked on this technique drew on professional knowledge and exchanges with each other to stabilize the features of NMR imaging.252

An aspect of the standardization of imaging was developing representational and interpretive conventions for these new kinds of images. While the negotiation of the imaging task was going on, the appearance of nuclear magnetic resonance images was not what it is today. The relationship between imaging task and the appearance of the images is fundamental to understanding the development of MRI, especially as it became incorporated into a diagnostic context. The images were produced with colour correlating with levels of signal (mostly proton density at this point), so that the images were brightly coloured, unlike the grey scale images we are familiar with. Numbers representing proton density were printed in each pixel array, so that the quantified data was easily accessible.
The colour system for nuclear magnetic resonance images was detailed in 1977 by Holland and Bottomley, and was used by early imagers (including Damadian and Mansfield) into the 1980s.253 At this point the images were being discussed by researchers as “proton density maps” whose referents were the spatial distribution of free hydrogen protons in objects.254 Holland and Bottomley argued that “Optimum visual resolution can therefore be achieved by using a colour display system,”255 since monochromatic displays were considered to make it more difficult to decipher the differences. Others accepted their arguments from the human visual system,

Coloured images so produced present a considerable advance over black and white displays for two reasons. Firstly, the human eye is far more sensitive to colour contrast than to grayscale variations, and secondly, the system’s resolution is enhanced because the number of colours that can be produced is equal to the third power of the maximum number of intensity levels that the oscilloscope can display.256

The use of colour took advantage of the abilities of the new display technology. Levels of signal corresponded with different colours, and the images then demarcated up to eight different areas of colour (black, blue, magenta, cyan, green, red, yellow, and white). This correspondence suggests two things. First, that the important interpretive point was taken to be distinguishing between different discrete levels of signal, and there is no doubt that colour made this easier. As we saw in the discussion of Tufte cases in the last chapter, colour differences facilitate seeing differences in kind.

252 Joyce, Magnetic Appeal, 36.
253 G N Holland and P Bottomley, “A Colour Display Technique for Nuclear Magnetic Resonance Imaging,” Journal of Physics E: Scientific Instruments 10 (1977): 714–16.
254 Ibid., 716.
255 Ibid., 714.
Second, that the chemists and physicists working on the technology were not yet interested in anatomical appearances. The emphasis was still on imaging as producing chemical scans and not on anatomical pictures. The imaging task was not seeing differences in soft tissue, but clearly differentiating relevant levels of signal. The display of numbers in nuclear magnetic resonance images reflects the desire for specificity in imaging, in line with the practices of the chemists and physicists who were developing the technology. Associating themselves with the field of nuclear medicine, they did not see themselves as making pictures like those associated with radiology and x-ray. Unlike radiologists, nuclear magnetic resonance researchers did not deal with image interpretation. Nuclear magnetic resonance was not an x-ray modality and was developed outside of a radiological context, so there was no real reason to have the images look like x-ray images or like CT. Furthermore, there were professional differences involved. The perceived difference between what radiologists do and what could be done with MRI is emphasized by Lauterbur in an interview with Joyce,

People in nuclear medicine, I heard say around that time, said that radiologists could not be trusted with nuclear magnetic resonance imaging. It was too complicated for them. People in nuclear medicine were used to thinking about chemistry and complex physics while radiologists just looked at fuzzy pictures.257

256 Holland and Bottomley, “Colour Display,” 716.

Comparison with Computed Tomography

Both CT and MRI were born out of the increase in computational power and digitization in the postwar era. Their development and adoption into the medical mainstream would not have been possible without the work of people doing mathematical reconstructions of images in both astronomy and molecular biology. This work on Fourier (and fast Fourier) transforms was also pushed along by advances in computing.
The slow speed and low memory of 1960s computers necessitated finding image reconstruction techniques that performed better and produced fewer artefacts. CT and MRI both take advantage of the massive amount of mathematics that can be performed by computers. Neither of these techniques would have been possible without the computational power to run multiple simultaneous algorithms.258 The strength of CT (over other x-ray based imaging modalities) comes from this reconstruction. CT, which became more popular as MRI was being developed, is an important point of comparison when looking at the development of the appearance and use of images in diagnoses. Issues of representation and interpretation arise in the history of CT while it remained firmly in the realm of radiology, being based on x-ray.259 These issues introduce some of the challenges presented by vision and interpretive conventions within a discipline, what I have been discussing as professional vision. The original CT images of the brain, for instance, made no sense to radiologists of the time. However accustomed they might have been to “looking at fuzzy pictures,” they were not used to looking at fuzzy pictures of slices of the brain.260 In their two papers introducing CT, Godfrey Hounsfield and James Ambrose do not merely explain the technology261 but also discuss its clinical applications.262 They stress that the information from CT can be presented in three different ways: as a grey scale display on the monitor, as a Polaroid photograph of the monitor, and as a printout of the numerical array of attenuation numbers.263 These introductory papers show images produced in these ways, as well as CT images compared to both drawings and photographs of the brain.264 Attempts to image the brain had proven difficult, as has been discussed.

257 Joyce, Magnetic Appeal, 37.
258 Kevles, Naked to the Bone, 146–150.
259 Ibid.
When the possibility arose with CT of creating, for the first time, real radiograms of the brain, it immediately took off. Machines were built with the dual capacity to print out numerical representations and imagistic ones. In standard x-ray radiograms the luminosity of an area of the image represents an area of higher absorption by the tissue: the x-rays either pass through the tissue to the plate or detector, or are absorbed by the tissue. Because the x-rays have to pass through different tissues – skin, muscle, organs, bones – before they are detected, standard radiograms only display the mean absorption of the tissue as represented by luminosity, the ‘shadows’ discussed earlier. This can make discrimination between some tissues, and especially soft tissues, difficult. CT, on the other hand, takes thousands of volumetric measurements that are then used to reconstruct a spatial representation of absorption, allowing for greater resolution and clarity of soft tissue. CT was different from other kinds of radiograms in a number of ways.265

260 Joseph Dumit, Picturing Personhood: Brain Scans and Biomedical Identity (Princeton: Princeton University Press, 2004), 58.
261 GN Hounsfield, “Computerised Transverse Axial Scanning (tomography) Part 1,” British Journal of Radiology 46 (1973): 1016–1022.
262 J Ambrose, “Computerised Transverse Axial Scanning (tomography) Part 2,” British Journal of Radiology 46 (1973): 1023–1047.
263 It is customary to refer to these together rather than separately; henceforth they will be Hounsfield and Ambrose, CT Axial 1 and 2.
264 Hounsfield and Ambrose, “CT Axial 1 and 2,” 1021–1036.
265 GN Hounsfield, “Computed Medical Imaging,” Nobel Lecture, December 8, 1979.

The absorption values of these volumetric units (voxels) measured in CT can also be printed out as quantitative data, which traditional x-ray modalities cannot present.
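The contrast drawn here, between a radiogram that records only the mean absorption along each ray and CT's recovery of per-voxel values, can be sketched with a toy attenuation grid. This is an illustration of the principle only: the grid and its numbers are made up, and real CT reconstruction uses filtered back-projection over many projection angles rather than the simple row averages shown.

```python
# Toy illustration: a plain radiogram collapses attenuation along each ray,
# while CT reports per-voxel values. The 3x3 "tissue" grid is invented.
tissue = [
    [0.2, 0.2, 0.2],
    [0.2, 0.9, 0.2],  # a dense lesion hidden in the middle row
    [0.2, 0.2, 0.2],
]

def radiogram(grid):
    """Mean absorption along each horizontal ray -- one number per row,
    which is all a conventional x-ray projection records."""
    return [sum(row) / len(row) for row in grid]

# The projection smears the lesion across its whole ray: the middle row
# averages to about 0.43, only modestly above the 0.2 of the other rows.
# A CT slice, by contrast, reports the voxel values themselves, so the
# 0.9 lesion stands out sharply against its 0.2 surroundings.
```

The design point is the one the text makes: averaging along the beam path is why soft-tissue discrimination is hard in plain radiograms, and volumetric reconstruction is why CT escapes that limit.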
Attenuation numbers were only being used in CT and not in other x-ray based technologies, where the average of tissue attenuation values was represented by luminosity and “shadows” on the radiogram. There are several reasons for this. First, unlike with normal radiograms, the values were computationally reconstructed, so numerical values were available for the first time. These Hounsfield numbers were on a scale of 0 to 1000, with water chosen for 0 and dense bone chosen for 1000.266 Second, since the tissue of the brain had never been x-rayed, there was some uncertainty about the appearance of different kinds of lesions. In x-ray based modalities, attenuation values are the signal of tissue density and are quite stable. Tissue density is also fairly well understood, so seeing an area of tissue as brighter – and therefore denser – is usually diagnostically relevant. Radiologists come to recognize the density of different lesions in different kinds of tissue. The tissues of the brain were not known in this way, and since most of the brain is soft tissue it was hoped that the numerical values would provide a finer diagnostic measurement. Hounsfield and Ambrose discuss this in their papers,

While no really distinctive pattern has emerged from our study of numerical absorption coefficient values in low density lesions, it should be possible in the majority of cases to differentiate tumours from degenerative and other non-neoplastic lesions. The pattern of tissue involvement, the tendency to necrosis and cyst formation in malignant tumours and the displacement and distortion of identifiable structures such as the ventricle system and the pineal body may aid in identifying the nature of an abnormality.267

Even as they studied the numerical values their technique produced, Hounsfield and Ambrose recognized the role that the knowledge of radiologists, both of anatomy and of pathology, would play in the interpretation of images.
They understood that radiologists would use signal values to make determinations about tissue density within particular structures. Calcified tissue is a marker of malignancy that shows up as areas of high signal because it is very dense. The many images of different types of brain lesions that Hounsfield and Ambrose presented in their papers captured the usefulness of CT – for showing the anatomy of the brain when it was normal, and for making diagnoses when it was not. CT was rapidly adopted into medical practice, which provided enormous economic incentive for MRI to find a place in the diagnostic realm.268

266 Ibid., 1016–1020.
267 Ibid., 1030–1034.

From Nuclear Magnetic Resonance to Magnetic Resonance Imaging

By the time commercial MRI machines were being built in the early 1980s, MRI was purported to be able to do everything that CT could: “spot tumours… save lives through rapid diagnosis of organ trauma, fractures, internal bleeding, masses and swelling.”269 The strength of MRI at first was that it could create images without exposure to ionizing radiation, which was known to be problematic. It also turned out to be superior for soft tissue visualization. While numerical display in both MRI and CT through the 1970s shows the hopes still in place for the value of the molecular analysis of tissue, the colour displays had more to do with how and what the images were to represent. Work by Mansfield and others seemed to show that colour display allowed for greater visibility than black and white. This is only the case, however, if what one needs to see is differences in levels of signal. Colours may be easier to discriminate than luminosity values, but this is true only where areas of difference are important to spot. On the other hand, the use of colour can make some subtle differences seem more significant than they are, which has proved to be problematic in other imaging modalities such as Positron Emission Tomography and fMRI.
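The double-edged character of colour display can be made concrete. A discrete colour lookup table of the kind Holland and Bottomley described quantizes the signal into a handful of bins, one colour per bin. Here is a minimal sketch, assuming a signal normalized to [0, 1] and using the eight colours named earlier; the low-to-high ordering of the palette is my own illustrative assumption, not the original assignment.

```python
# Sketch of a discrete colour lookup table for displaying signal levels.
# The eight colours follow those named in the text; their ordering from
# low to high signal is an illustrative assumption.
PALETTE = ["black", "blue", "magenta", "cyan", "green", "red", "yellow", "white"]

def colour_for_signal(signal, palette=PALETTE):
    """Map a normalized signal value in [0, 1] to one of the discrete colours."""
    if not 0.0 <= signal <= 1.0:
        raise ValueError("signal must be normalized to [0, 1]")
    # Quantize into len(palette) equal-width bins; 1.0 falls in the top bin.
    index = min(int(signal * len(palette)), len(palette) - 1)
    return palette[index]
```

On this scheme signals of 0.05 and 0.15 fall in different bins and display as categorically different colours (black versus blue), even though the underlying difference is small. The jump at each bin boundary is what made discrete signal differences easy to spot, and equally what could make a subtle, possibly insignificant difference look severe.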
As technological developments pushed imaging along, imaging studies by different groups invented new ways of scanning tissue (field focusing, line scan). In these studies they found that different scanning sequences, and different measurements of signal (proton density, T1 and T2), were better for imaging different aspects of tissue. As the imaging system itself was developed, new roads were made into defining the imaging tasks appropriate for the modality in the medical context. The faster line scan technique was developed, and after Ernst developed new ways of reconstructing images, imaging became faster and more detailed. Mansfield and Maudsley, and Pykett and Mansfield, began to experiment with – and get results from – scanning techniques that emphasized the differences between soft tissues. Pykett and Mansfield enjoyed success in differentiating between fat, tendon, and muscle tissue.270 Their success in this imaging task demonstrates the power of discovering which protocols were effective for imaging and also how best to display the data. Researchers found that they were able to determine differences using colour display which were not evident using black and white. This raises several issues. First is the expectation of researchers in different fields in terms of the appearance of images, and second is what they expected from the imaging task itself. We see some of this in the claim from Pykett and Mansfield’s paper that,

It is interesting to note that water content of various skeletal muscle tissues, as reported in the literature (Kiricuta and Simplaceanu 1975), is not normally differentiated. There is, therefore, no a priori reason to expect such differentiation in nuclear magnetic resonance images.271

There was debate between those who thought that the imagistic presentation of data was important and those who thought displaying the numerical data was more informative.

268 Kevles, Naked to the Bone, 165.
269 Ibid., 243.
In part this is an issue of what was considered to be evidence in the two groups. To those working in more quantified sciences such as chemistry and physics, the kind of specificity of information given by the numerical data was a better fit. To those used to working in radiology, image interpretation was more common and the numbers did not mean very much. Specialists in nuclear medicine, with backgrounds in physics and chemistry, were more accustomed to dealing with numerically presented information about biochemical differences in tissue and thought the new nuclear magnetic resonance technology should be based around matrices of numbers.

270 Ian Pykett and Peter Mansfield, “A Line Scan Image Study of a Tumorous Rat Leg by Nuclear Magnetic Resonance,” Physics in Medicine and Biology 23 (1978): 961–67.
271 Ibid., 966.

At this point in their development, nuclear magnetic resonance images were beginning to be shown compared with photographs and drawings. These comparisons draw out the visuality of the nuclear magnetic resonance images. They show what is being represented as well as how it compares in appearance to other visual representations. These studies are still investigatory, examining what tissues record high and low signal intensity and how best to make use of this with biological systems. For instance, without knowing something about how things can be represented it would be difficult to see directions in which resolution could be improved. A 1980 discussion of nuclear magnetic resonance imaging examines the question of what is displayed in the image.272 Here the image is treated as more than a 2-D display of signal at specific points in the object – it is more than a chemical map. Andrew describes his team’s images as spatial representations of the nuclear magnetic resonance signal but also compares these representations to other visual representations. Nuclear magnetic resonance images of a lemon are compared with photographs of a slice of lemon.
An image of a slice of a rabbit’s head is presented alongside a line drawing of the same head with different aspects of the tissue labelled. An image through the wrist of one of the researchers is compared with a photograph of a wrist bone. This is the beginning of MRI’s use of tissue difference to show anatomic structure. The shift between thinking of the technology as a chemical scan and as a tissue scan is what matters here; chemicals, unlike tissues, do not have any appearances to us. The shift to considering MRI as a tissue scanning technology had, therefore, to do with what could be represented in the images. The clear appearance of recognizable structures of the body, in slices, became the goal of imaging over the presentation of slices of chemical information. Both interpretive conventions and the use of the technology were altered with the decision that tissue, rather than chemical, imaging was to be the useful task for the technology.

272 Raymond Andrew, “Nuclear Magnetic Resonance Imaging of Intact Biological Systems,” Philosophical Transactions of the Royal Society of London, Series B Biological Sciences 289 (1980): 471–81.

The Move Into the Diagnostic Arena

This so-called ‘turf war’ between radiologists and those in nuclear medicine also underlay the change of name from nuclear magnetic resonance to magnetic resonance and eventually MRI. While it is popularly accepted that dropping the word ‘Nuclear’ from the beginning of nuclear magnetic resonance was meant to allay public fears associated with radiation, part of it also came from the desire of radiologists not to have the new technology too closely associated with nuclear medicine. The debate is not only over the numbers or the pictures; it is also a debate in professional vision, the interpretive practices standard to different kinds of expertise. Quantification and precision are important to the chemists and the physicists, while the radiologists’ approach is historically more visual.
Picture interpretation, and learning to both view and diagnose bodies within a pictorial representational scheme, are part of radiological practice. This was finally settled with the failure of quantification to provide medically useful information and the incorporation of MRI into radiology departments. This in turn is best explained by the discovery that the magnetic resonance signal could show spatial information about tissue type, which allows for a clear picture of soft tissue to be created. By 1979, when Hounsfield and Cormack won the Nobel Prize for CT, it was well established in clinical practice, and imaging was building steam as a diagnostic tool. There is some suggestion that the appearance of MRI was informed by the appearance of CT images, but there is another shift going on as well that also has central epistemic import for the direction of medicine.273 As has been discussed, Lauterbur’s first paper on nuclear magnetic resonance imaging was a paper on a new way of creating pictures by inducing local changes. Photographs, and other mechanically produced pictures like x-rays, traditionally use the properties of light in order to capture properties of objects on photosensitive paper. The step to using monitors was not terribly different. The density of tissue in x-ray is represented by luminosity: brighter areas represent denser tissue, because more x-ray is stopped by the tissue and less gets through to the paper or screen. So Lauterbur’s new technology was interesting in creating spatial representations of properties of objects not by the reflection or absorption of photons, but by measuring differences among hydrogen protons, which are all through the body. The decision to use grey scale in MRI seems to be in line with the convention that levels of signal are represented by luminosity, and to make its appearance consistent with x-ray.

273 Joyce, Magnetic Appeal, 128.
As a representational scheme, this makes use of the same convention as x-ray or black and white photography. But MRI did not translate into grey scale so simply, as researchers struggled to find an appropriate appearance for the images and an imaging task appropriate for clinical use of the technology. As imaging became more important than the quantified data itself, the numbers became more hidden. It has been suggested that this shift occurred because of the customs of radiologists, but also because, while numbers were important in spectrography and other applications, when it came to diagnostic and clinical practice they were not. A balance was struck between the interpretation of images and the importance of the numbers in themselves. The biological indicators came to be seen more as a way of allowing us to see into the body, and less as medically useful in themselves. As the images and image interpretation became more central, some specificity of the data was lost, but the specificity lost was not medically useful; while other protons are examined for their spectrographic significance, the usefulness of the magnetic resonance rates of hydrogen protons lies in the relative differences between tissue types in the body. To put this down merely to professional vision is to disregard some important things about the representational scheme in use in grey scale imaging. While brains and bones have appearances to us, their tissue attenuation of x-ray or their magnetic resonance rates do not appear any way to us. To say that the grey scale representation is more naturalistic is, in one sense, not meaningful. On the other hand, the claim is often made that grey scale images seem more naturalistic or realistic because they are easier to interpret. This on its own does not seem satisfactory, because numerical values might have been easier to interpret than pictorial representation had they proved to be reliable determiners of tissue.
So while it may be true that there is no natural connection between magnetic resonance signal and either luminosity or colour, the shift from colour to grey scale in MRI tells us some significant things about the imaging modality and about imaging itself. The process of experimentation with visuality was a process of discovering that MRI could create transparent images, and also that it could create very clear and legible images, which allowed for greater medical impact. As researchers began to use more imagistic presentations of the magnetic resonance rates, they found that the spatial array of properties in grey scale allowed for detailed views of soft tissue. As MRI began to be used diagnostically, it was found to be useful for what physicians could see in the images – the anatomy and any pathologies – and so the colour was dropped. By using grey scale, anatomical structures could be seen when differences in signal from different tissues were displayed as differences in luminosity values and not colour. Of course, there is variation across MRI images and imagers that has to do with the age and quality of the machine used, the experience of the technician, and the familiarity of radiologists with the body. For the sake of making claims about the technology, however, we must discuss what can be done with it. All images will be assumed to be done well and interpreted correctly. There is something about grey scale representation that is easier to interpret and more informative. Here is a claim: the representational scheme wherein signal is represented by luminosity is one that is closely connected with human vision; it is easier to interpret because it is a transparent system. In the cases of x-ray, MRI, and other technologies the spatial organization of signal is such that we are able to see the relevant features of the object imaged. For MRI this is soft tissue.
That is to say that the signal→luminosity representational scheme is one that preserves those real similarity relations that are important for pictorial transparency. Viewed in grey scale, we are able to examine the represented features (in MRI these are local differences in tissue type) by viewing the image much as we would were we viewing the object itself. Colour display actually inhibits transparency. One of the things that makes MRI so useful diagnostically is the ability to give a clear view of structural properties of the body. This includes being able to differentiate between different tissue types, which can be spatially located within the slices, so practitioners can then see whether there are abnormalities or problems present in the part imaged. For instance, tumours and torn muscles or ligaments are clearly visible in MRI scans. If the image itself provides data about the body, by allowing us to see through to the object imaged, then discussion of the biological indicator itself as data confuses the issue. Images can be data, show data, or do both; but MRI is best considered as an imaging technology which itself provides data. If we consider the role of colour in MRI and what makes it seem wrong, I think it is because of what it confuses. Not only does it make boundaries between regions artificially severe, but by being an arbitrary marker of specific values of the signal, colour interferes with the ability to see the body in MRI images. Black and white photography also utilizes this representational system – coloured things are represented by a spatial display of luminosity values – and we are able to see many different features of the objects or scene because of this. It allows us to have a visual experience like seeing the scene face to face, minus colour. There is counterfactual dependency between the signal (light) and the luminosity values recorded on the photographic print.
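The claim that a signal→luminosity scheme preserves real similarity relations, while an arbitrary colour code need not, can be put in toy computational terms. In the sketch below (the numbers are invented for illustration), both mappings assign each signal level a unique display value, so both are counterfactually dependent on the signal; but only the monotone grey mapping keeps similar signals looking similar.

```python
levels = list(range(16))  # discrete signal levels, low to high

# Grey scale display: a monotone mapping, so nearby signals get nearby greys.
grey = {s: 17 * s for s in levels}  # 0, 17, 34, ..., 255

# Arbitrary colour code: a fixed scramble (7*s mod 16). Each signal level
# still gets a unique display value -- counterfactual dependence holds --
# but neighbouring signals can land far apart on the display scale.
arbitrary = {s: 17 * ((7 * s) % 16) for s in levels}

def max_neighbour_jump(mapping):
    """Largest display difference between adjacent signal levels."""
    return max(abs(mapping[s + 1] - mapping[s]) for s in levels[:-1])

# grey moves by one small step (17) between adjacent signals; the scrambled
# code jumps by as much as 153 between signals that are nearly identical.
```

The point of the contrast: injectivity alone gives a reliable record, but it is the *order-preserving* character of the luminosity mapping that lets small differences in the object show up as small differences in the image, which is what the transparency claim turns on.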
Colour photography has (more or less) counterfactual dependence between the spectral values recorded and the colour of the images. Colour preserves a real similarity relation in colour photography, but in the case of MRI colour did not preserve this. One of the problems with colour being used representationally in MRI is that it does not preserve any real similarity – colour blocks transparency by adding an arbitrary signifier to the system. This is problematic when interpretation of the image is visual. Imagine a photographic system where, instead of colours corresponding in the usual way (e.g. things that appear green face to face appear green in pictures), we were to randomly assign colours to various places on the spectrum of wavelength. This is a system that would still be counterfactually dependent, but it would not preserve real similarity. This system would, in one sense, carry more information than one which did not incorporate spectral information at all, but in carrying this information it loses something important: our ability to easily make use of it as a representation, an ability that comes from its being transparent.

6.2. Ultrasound: Interfering for Greater Visibility

The development of ultrasonography as a medical tool led to many advances in obstetrics as well as in related sciences and fields of medicine. Indeed much of the modernization of obstetric and perinatal care and knowledge of foetal development is tied to the use of ultrasound. The very best sonograms clearly show heads, limbs, fingers, and the beating hearts of foetuses, and even in unclear images some of these features are visible.
Ultrasound is routine in obstetric medicine and is used to confirm pregnancy, to establish gestational age, to check for molar, ectopic, or multiple pregnancies, and to observe foetal health.274 Moreover, ultrasound exams during pregnancy are one of the few medical procedures that are eagerly anticipated,275 and from which people take mementos to share with their family.276

Ultrasound works according to basic acoustical principles. Waves of sound are produced in a transducer and travel in approximately straight lines until they hit a medium with different density and sound velocity. The waves then reflect or refract off the medium – either bouncing off it, or being partially absorbed by it. This interface between media is of particular concern to ultrasound and is how it achieves good spatial resolution. Ultrasound takes advantage of the different acoustic properties of tissue, and specifically the interfaces between types of tissue, so it is very useful for visualizing the shapes of things where shape is defined by tissues with different acoustic properties. The transducer is placed on the skin and a narrow beam of pulses of very high frequency sound energy (1 to 10 MHz) is directed into the body and swept back and forth; the time of return of the echo is proportional to

274 Jane Alty et al., Practical Ultrasound (London: Royal Society of Medicine Press, 2006) 1–5.
275 Jeanette Burtbaw, “Obstetric Sonography – That’s Entertainment?” Journal of Diagnostic Medical Sonography 20 (2004): 444–448.
276 I have heard several stories of people keeping their kidney stones as mementos, and of eager internists offering their clients endoscopic images; these are generally not anticipated keepsakes of passing kidney stones or having endoscopic exams.
how deep the interface is that produced it.277 The echo signal intensity depends on the acoustic characteristics of the material on the two sides of the interface.278 Signal information is then translated into luminosity on a monitor, allowing, first of all, for structures, organs, and features of the object to be seen, and second, for the relative echogenicity of tissues to be noted. The resolution and real time imaging power of ultrasound make it possible to perform procedures using the technology. Structures can be visualized on the monitor, and needles or other tools can then be seen in relation to the region of interest, using the image to guide the procedure.

While traditional B-Mode (2-D) ultrasound has been incredibly useful, it also has some limitations.279 The beam from an ultrasound transducer can only capture a two-dimensional sample of tissue, and this means that the image displayed will always be particularly limited. The signal information carried to the monitor is from a thin slice of tissue, but this information is carried in real time, following the movement of the transducer wielded by the operator. While any image only shows a small piece of the puzzle, overall ultrasound is capable of capturing information about the entire foetus. The limitation, however, is with the way this information is displayed. Because only a thin slice can be shown at a time, ultrasound is not useful for imaging small objects such as foetal ears. Nor can it display entire structures of interest, such as the spine, rib cage, or face of a developing foetus.280 The material limitations of ultrasound have, however, been overcome in remarkable ways by those who needed the technology to perform procedures, make diagnoses, and reassure parents. More recent developments in ultrasonography include 3-D and 4-D ultrasound. 3-D ultrasound “freezes” slice data and compiles it to form a volumetric record.
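The echo-timing relation described above – the depth of the reflecting interface is proportional to the echo's time of return – can be sketched in a few lines. The speed-of-sound figure used here is the standard textbook average for soft tissue; real scanners calibrate this and correct for attenuation.

```python
# A minimal sketch of the echo-timing principle: the depth of the interface
# that produced an echo is proportional to the echo's time of return.
# Assumes the textbook average speed of sound in soft tissue (~1540 m/s).

SPEED_OF_SOUND_TISSUE = 1540.0  # metres per second, approximate

def interface_depth(echo_return_time_s):
    """Depth of the reflecting interface: the pulse travels there and back,
    so the one-way distance is half the round-trip distance."""
    return SPEED_OF_SOUND_TISSUE * echo_return_time_s / 2.0

# An echo returning after 100 microseconds implies an interface ~7.7 cm deep.
depth_m = interface_depth(100e-6)
print(f"{depth_m * 100:.1f} cm")  # prints "7.7 cm"
```

Sweeping the beam and plotting, for each line of sight, the depth and intensity of the returning echoes is what produces the familiar B-mode slice.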
277 Jane Alty et al., Practical Ultrasound, 1–5.
278 Wolbarst, Looking Within, 68.
279 A-mode ultrasound was one-dimensional, providing echogenic properties at a point.
280 Asim Kurjak et al., “How Useful is 3D and 4D Ultrasound in Perinatal Medicine?” Journal of Perinatal Medicine 35 (2007): 10–27.

The data can be volume rendered (taking voxel data and turning it into viewable 2-D images on the monitor) in a number of different modes. Basic modes include: surface smooth, surface texture, transparency maximum, transparency minimum, x-ray mode, and light mode. Two of these can be combined in one rendering. Each mode takes different aspects of the original data (for example, x-ray mode captures the mean grey value for each pixel on the screen), including initial grey values, and uses it to create a selective data set in the 3-D volume. Ray casting algorithms are then used to map different points onto the 2-D screen in terms of light and dark.281

The most popular modes for foetal imaging are the light and surface modes. These images are easier to read because they are more like photographs, realist paintings, or well developed computer animations.282 They represent surface features of the object of interest, including facial planes, limb details, and gestures, as if using sound in order to see what would normally require light. That is to say, they capture the visual appearance of the subject in a way that is more like the appearance seen face to face. The similarity of these images to more familiar images may also make them easier to understand. 4-D ultrasound is 3-D images with the added dimension of time. They look like short, slightly jerky, 3-D animations in which foetuses can be seen blinking, stretching, yawning, and moving around in other ways.
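As a rough schematic of how ray casting reduces a volume to screen pixels, the sketch below reduces the grey values sampled along a single ray according to a mode-specific rule. Only the x-ray rule (mean grey value) is taken from the description above; the rules given for the transparency modes are my own plausible glosses, and actual vendor algorithms are more sophisticated.

```python
# Schematic ray casting: each screen pixel is produced by sampling the grey
# values along one ray through the 3-D voxel volume and reducing them with a
# mode-specific rule. Mode names follow the text; the exact reductions for
# the transparency modes are illustrative assumptions, not vendor algorithms.

def render_pixel(ray_samples, mode):
    """Reduce the grey values sampled along one ray to a single pixel value."""
    if mode == "x-ray":                 # mean grey value along the ray
        return sum(ray_samples) / len(ray_samples)
    if mode == "transparency-maximum":  # brightest sample dominates
        return max(ray_samples)
    if mode == "transparency-minimum":  # darkest sample dominates
        return min(ray_samples)
    raise ValueError(f"unknown mode: {mode}")

# One ray passing through fluid (dark), tissue (mid), and bone (bright):
ray = [5, 5, 120, 130, 240, 110]
print(render_pixel(ray, "x-ray"))                # mean grey value, ~101.7
print(render_pixel(ray, "transparency-maximum"))
```

Running one such reduction per screen pixel, over a grid of rays, yields the rendered 2-D image; which features survive the rendering depends entirely on which rule is chosen.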
The development of 3-D and 4-D imaging in ultrasound has led to a surge of questions about, and research into, the usefulness of these new techniques in obstetrics: whether they can make up for limitations of B-mode ultrasound, or are mostly procedures for the entertainment of parents. On the one hand, the images are widely appealing to non-experts because they provide more easily decipherable images of the foetus. On the other, they are often treated only as “pretty” images, appealing to the sonographically uninitiated, but informationally shallow. This study will examine the development of 3-D imaging and its consequent implementation in obstetric practice in the context of debates over its usefulness. The extent to which the images play a role in medical visibility and in the greater context of obstetric practice will be considered.

281 General Electric Medical: “Online CME courses; the basics of 3-D and 4-D Ultrasound,” General Electric Medical, (accessed January 2010).
282 Essentially, they are well developed computer animations based on recorded biological indicators, ray tracing programs, and some representational standards. This will be discussed at greater length in a few pages.

For Entertainment Purposes Only?

Three main reasons that 3 and 4-D ultrasound might be considered technologies for entertainment are a) how they are used, b) what they represent, or c) how they represent; all of these are related. How the technologies, as image producing technologies, are used is important in determining the role they play within a medical or scientific context. If there is little or no benefit within this context, it is possible that the images should not be considered according to the demands made of medical and scientific images. What they represent is important: the objects and features of objects that are represented in part determine what their role can be, given medical or scientific needs and interests.
How they represent determines which features will be extractable, as well as what can be done with the technology.

One of the main ways that 3 and 4-D ultrasound have been used, and how they are most familiar generally, is by private clinics that create videos and images of developing foetuses in utero. 3 and 4-D ultrasound imaging is sometimes done as part of a package including clinical reports, but is often done separately from clinical obstetric exams. These are often called ‘boutique imaging’ clinics and are only rarely connected with the routine medical exams of obstetrics. Normally ultrasound is only done twice during a pregnancy: once to establish gestational age and confirm pregnancy, and a second time to check for gross abnormalities in the second trimester. The purpose of boutique imaging scans seems to be the production of keepsake images and for parents to be able to see the developing foetus. The term “entertainment” comes from legal waivers that patients sign stressing that the exam is for their enjoyment and entertainment only, and not for medical purposes. It has gone beyond this, however, with physicians and sonographers discussing the importance of entertainment value in even diagnostic ultrasound exams.

Diagnostic and Non-Diagnostic Uses of Ultrasound

Diagnostic and non-diagnostic uses of ultrasound are defined in discourses of medical expertise. The Society of Diagnostic Medical Sonography defines diagnostic imaging as that which is: requested by a physician, performed by a sonographer, and interpreted by a physician.283 The SDMS issued a statement of indictment against non-diagnostic imaging:

Because the service is provided for entertainment purposes only, it is considered nondiagnostic.
The use of two-dimensional (2D), three-dimensional (3D) or four-dimensional (4D) ultrasound to only view the fetus, obtain a picture of the fetus or determine the fetal gender without a medical indication is inappropriate and, in the view of the Society of Diagnostic Medical Sonography (SDMS), contrary to responsible medical practice.284

The American Food and Drug Administration furthermore calls imaging “for keepsake images” an unapproved use of a medical instrument with the potential to be prohibited by law.285 The American Institute of Ultrasound in Medicine uses much the same language as the SDMS, with both also issuing a claim about the safety of ultrasound and the responsible use of the technology in medical practice:

Although there are no confirmed biological effects on patients caused by exposure from present diagnostic ultrasound instruments, the possibility exists that such biological effects may be identified in the future. Thus ultrasound should be used in a prudent manner to provide medical benefit to the patient.286

Diagnostic imaging is imaging done at the recommendation of a medical professional – physicians, nurse practitioners, and midwives are mentioned – as part of a plan of medical care and for the medical benefit of the patient. Most 3 and 4-D ultrasound falls outside of this, since the purpose of it seems to be for the patient and her family to see the foetus and so be entertained. We can call this the satisfaction of curiosity – what does she look like? What is he doing in there?

283 Burtbaw, “That’s Entertainment,” 444.
284 Ibid.
285 Burtbaw, “That’s Entertainment,” 445.
286 Ibid.
Imaging for these purposes uses the same technologies that produce medical images, but without the same context of image use – they are for entertainment purposes only, as the satisfaction of curiosity is not considered to be a psychosocial defect that requires medical treatment.287 While there is nothing wrong with this purpose, it is not medical – other than checking for certain landmarks for abnormalities, and whether movements are normal, the best marker of foetal health is the mother’s familiarity with and continued monitoring of foetal movement.288 There is an added risk that problems, such as foetal abnormalities, might be evident in boutique imaging, where women would not have the adequate support and counselling that they would have in a medical environment.289

B-mode ultrasound images can be difficult to read by lay people, and this is a complaint that many women have with their standard medical scans.290 First of all, most people are unfamiliar with seeing objects in slices; the black and white moving images on a standard ultrasound capture only partial sections of the foetus at a time, so there are fewer familiar landmarks. Second, standard ultrasound shows the foetus according to the relative echogenicity of its tissues rather than surface properties. The representational scheme does capture many features similar to our usual visual experiences, allowing us to recognize fingers, limbs, and faces, but it does so from unfamiliar perspectives. Skin is not usually seen in B-mode ultrasound, just as hearts and other organs are not usually seen in ordinary vision. Third, most people are

287 Frank Chervenak and Laurence McCullough, “An Ethical Critique of Boutique Foetal Imaging: A Case for the Medicalization of Foetal Imaging,” American Journal of Obstetrics and Gynecology 192 (2005): 32.
288 Conversation with Kelly Gray, Clinical Nurse Specialist for Perinatal and NICU at Royal Columbian Hospital.
289 Chervenak and McCullough, “Ethical Critique,” 33.
unfamiliar with the morphological changes associated with foetal development or with the anatomy of the uterus. So in ultrasound people are seeing unfamiliar things in unfamiliar ways (see Figure 6.1).

290 S. Simonsen et al., “The Complexity of Foetal Imaging,” American Journal of Obstetrics and Gynecology 112 (2008): 1371.

Figure 6.1: Two-dimensional Ultrasound at 12 Weeks Gestation. Image courtesy of Kelly Gray.

3-D and 4-D ultrasound use the same signal information as B-mode to create images, but the data is processed differently and presented differently. That is to say, it could be considered that informationally the procedures are the same, if by this we consider only signal data. In reconstructing the data volumetrically, 3 and 4-D ultrasound transform the mapping between signal and luminosity value into a different representational scheme that more closely mirrors data modelling on the one hand, and surface reflectance modalities like photography on the other. Rather than mapping signal intensity values as luminosity values, these values are processed by algorithms that track different features of the signal in the volumetric data set. Surface rendering works by taking advantage of the interface between areas of high and low echogenicity – such as a foetus in amniotic fluid. In the 3-D data set, the ray casting algorithm passes through the volume until it locates an area with a sharp change in echogenicity, in this case the boundary with the amniotic fluid. This is similar to measuring hundreds of points around the foetus and then plotting them into a three dimensional volume. From this volume, images can be generated that show the surface features, or the density features, of the object. The image generated is one that captures features of the visual appearance of the foetus in a way that B-mode does not (see Figure 6.2).
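The surface-rendering step described above – marching along each ray until a sharp change in echogenicity is found – can be sketched as follows. The threshold value is illustrative only; repeating the search over a grid of rays yields something like the "hundreds of points around the foetus" mentioned in the text.

```python
# A simplified sketch of surface detection in 3-D ultrasound: march along a
# ray through the volume and stop at the first sharp jump in echogenicity
# (e.g. the amniotic-fluid/skin interface). The jump threshold is an
# illustrative assumption, not a value from any actual scanner.

def find_surface(ray_samples, jump_threshold=50):
    """Return the index (depth step) of the first sharp echogenicity jump,
    or None if the ray never crosses such an interface."""
    for i in range(1, len(ray_samples)):
        if abs(ray_samples[i] - ray_samples[i - 1]) >= jump_threshold:
            return i
    return None

# Dark amniotic fluid (~5) followed by echogenic foetal tissue (~140):
ray = [5, 6, 4, 140, 150, 145]
print(find_surface(ray))  # prints 3: the surface lies three steps along the ray
```

Collecting one such depth per ray gives a depth map of the foetal surface, which can then be shaded and displayed as a rendered 3-D image.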
While B-mode images allow for visual experiences that are similar to those we have of actual anatomy, they require a great deal of knowledge, both in terms of the anatomical appearance of things seen (such as the developmental morphology of embryos and foetuses) and the appearance of things in ultrasound.

Figure 6.2: Three-dimensional Ultrasound at 20 Weeks Gestation. Image courtesy of Kelly Gray.

If we think of the relationship between signal and design features as the representational system of the modality, this representational system plays a role in determining both what images can show (B-mode ultrasound does not represent surface properties of objects) and how they represent what they represent (B-mode ultrasound represents areas of higher echogenicity as brighter). B-mode ultrasound allows us to see the foetus by representing its different echogenic properties as lighter and darker, in a pattern that allows viewers to determine organs, length, shape features, and the number of fingers and toes, among other things. Representational systems, then, are extremely important for the tasks to which images are put in scientific and other contexts. In 3-D ultrasound, the representational system presents aggregated signal data, as processed by algorithms, as volumetric renderings of shapes of bordered tissue. The interest in most cases, of course, is the external visual appearance of the foetus and its features so defined. The question is whether seeing things in this way is beneficial, for what purpose, and to whom.

Problems with Pictures

There remains an entrenched idea that the kinds of pictures used by doctors and scientists for encounters with non-experts are different than those used by scientific professionals.
This is explored in an ethnographic study conducted by Lynch and Edgerton of astronomers and their use of images.291 Lynch and Edgerton studied aesthetic considerations in astronomical images, where aesthetic considerations are those that concern the appearance of the image. One of the things they found is that different images were valued for different purposes – some for “public” purposes and some for “scientific” purposes – with distinctions mainly around the use of colour, and the images being understood as quantitative versus qualitative.292 Images used by non-experts are meant to look nice and be easy to understand, generally at the expense of scientific usefulness or rigour.293 Generally the best images – that is, the clearest, most visually compelling, or ones that most vividly demonstrate a point – are chosen, and are then beautified to be presented to a non-scientific audience.294 This beautification can include adding colour, as in the cases of electron micrographs, fMRI, or astronomical images, and combining pictures together, among other things. The kinds of pictures that have garnered wide public attention on the covers of scientific magazines and in news releases about medical research are also examples of this; they are extraordinary pictures, different from the ordinary pictures of scientific work, and serve a different purpose. Different contexts of use demand different things from the images. In the case of astronomy, the scientific pictures are monochromatic images that astronomers use in their day to day work.

291 M. Lynch and S. Edgerton, “Aesthetics and Digital Image Processing: Representational Craft in Contemporary Astronomy,” in Picturing Power: Visual Depiction and Social Relations, Sociological Review Monograph, ed. Gordon Fyfe and John Law (London: Routledge, 1992), 184–220.
292 Ibid., 191–195.
293 Ibid., 191–195.
294 Dumit, Picturing Personhood, 45–60.
These images are treated as data about the objects of interest and play a central role in the work getting done. The monochromatic grey scale images used in day to day astronomical practice are considered by scientists to be more ‘realistic’ even though there is nothing that the subjects of these images look like, in a conventional sense, since they measure light that is outside of the visible spectrum.295 “Realism” here seems to play the role of images as measurements; the scientific pictures are thought to be somehow more objective or closer to life, even though this does not concern appearances (there are none).

295 Lynch and Edgerton, “Astronomy,” 199.

Figure 6.3: Crab Nebula from the Hubble Space Telescope. Source: Wikimedia Commons, public domain.

The purpose of scientific images for the public is not measurement, but rather to play a role in the communication of key ideas, to summarize research findings, or to inspire (see Figure 6.3). These images are crafted to make them look a certain way, visually compelling, and these “aesthetic” changes were thought to render them useless for scientific work. The above image is a composite of 24 images taken over three years, with false colours added to demarcate elements released in the explosion. “Pretty pictures” are the domain of media, funding agencies, publication, parents, and other non-experts.296 Of course, the images in both cases are crafted in the sense that a lot of attention has been paid to their appearance and the technology that produces them. The idea seems to be that further processing of images with additional elements such as false colours interferes with their “realism”: the images are no longer measurements of the heavens, but graphical objects. As with 3 and 4-D ultrasound, they are for entertainment purposes only.

296 Ibid.
Pictures and Appearance

The worry seems to be that the appearance of images, if they are to be considered quantifiable, should be directly caused by the entity imaged, instead of scientists judging what the image should look like. This worry over pictures and the roles they play in science can be considered in light of Daston and Galison’s exploration of images in science and objectivity.297 One view of objectivity discussed by Daston and Galison is mechanical objectivity. On this view images are created as measurement outcomes in which nature inscribes itself in a way that limits the subjective input or perspective of the scientist. Mechanical objectivity was thought to reduce interpretation of, and therefore bias towards, phenomena. On Daston and Galison’s analysis, part of our contemporary distrust of the epistemological import of images comes from the rise of structuralism combined with the study of perception and psychology, which further widened the gulf between objective reality and subjective experience. The emphasis on individual differences in perceptual experience, especially of properties like colour, made subjective reports seem less truthful as the ideal shifted to measurement of invariant properties. Studies in photography seemed to show the extent to which the perception of scientists was theory laden – scientists saw what they wanted to see.298 They perceived perfection where there was none to see; photographs captured the imperfection of particulars undeniably. If nature could inscribe itself, it could do so independent of the beliefs of individual scientists. This changed the way that pictures were used. One change was attempting to get nature to write itself, and another was the presentation of multiple images of particulars.

297 Lorraine Daston and Peter Galison, Objectivity (Boston: MIT Press, 2007).
298 Ibid., 338.
Linguistic, logical, and mathematical systems of representation aimed to strip away the subjectivity of perception completely and to get to the invariant structural core of things, regardless of individual beliefs or species specific perspectives. Another view of objectivity, trained judgement, incorporates expertise and expert knowledge into the notion of objectivity. In terms of images, trained judgement brings the interpretive capabilities and expertise of scientists back into the realm of science, making routine interventions and interpretation of images part of the epistemology of science. In the early 20th century there was a rise in the trust of scientists such that their judgement no longer introduced an element of subjectivity; rather, their expertise ensures that interpretation is correct. The understanding and comprehensive knowledge that scientists bring to bear on their subjects allows them to make judgements. While photographs may capture in great detail the features of a cell under a microscope, for example, drawings do a better job at representing the features as features of the cell. Whether this is accomplished through drawing or intervening with photographs, the judgement of scientists in producing images is no longer thought to contaminate the objectivity of pictures.299 Through training and expert knowledge scientists learn how to extract features from images and to fit them into judgements. The production and correct interpretation of images, and the subjects of the images, is central to their playing a role in science at all. Altering images to make them pleasing to look at is not considered to be scientifically or objectively valuable as an aim. The astronomers in Lynch and Edgerton’s study seem to be thinking of realism in their images in terms of mechanical objectivity, as representing measurable invariant features, while the false colour pictures lack this. Aesthetic changes can

299 Ibid.
only affect the image as a more or less appealing surface and not as a measurement outcome. The realism attributed to the monochromatic images seems to reflect their being unaltered, closer to the way they were inscribed by the heavens. The false colour images are interpreted; they have the mark of the scientists and their theories and values crafted into them in the form of the images chosen and how they are made to look. If scientific images are thought of in terms of mechanical objectivity, and objectivity, in this sense, is an important virtue in the science, the images that lack this feature are not successful within the same context. Images that are not objective can play an auxiliary role to science, but not a scientific one.

I say there is a tension because image processing is a necessary part of computer based imaging systems – those used in medicine as much as science. Monochromatic images are, in one sense, equally crafted for their appearance and therefore exhibit as much concern for the aesthetic as the false colour images do.300 While context of use (scientific work versus public relations) is important in discussing images, this alone does not explain the perceived difference between these image types. The emphasis on image appearance is, likewise, not enough to act as an explanation that ranges across different image types and different sciences. The expertise of scientists remains important in considering how the images are crafted for matters of interpretation and accessing the data. While it might seem as if the difference can be explained in terms of interference with the data, this alone is not the case. Sometimes more interference is necessary for greater visibility, as in the discussion of trained judgement. The issue then is when interference does, and when it does not, fit with the demands a science has for the images it will use in a particular context. We should more often ask why the pictures look the way they do.
Some “aesthetic” choices allow for ease of interpretation, as was discussed in the Tufte example earlier, and some do not. This is of course relative to who is looking and what they are using the images for.

300 Lynch and Edgerton, “Astronomy,” 205.

On this point, the notions of trained judgement and expert vision are important for understanding perceived differences in image types. False colour images are meant to be simplified and interpreted, rather than being data themselves. While they might be products of trained judgement, they do not require expertise to view. The suggestion with 3/4-D ultrasound is that making images easier to interpret for non-experts is likewise not of scientific benefit; that the interference, in the form of volume rendering, with ultrasound data is this kind of simplification and interpretation. These representations demand less expertise in looking, and simplify the data in a way that undermines its usefulness to experts. 3 and 4-D ultrasound images are thought to be for patients, not their doctors. They fall out of the category of medical images by losing purpose for diagnosis and monitoring. Like the images discussed by Lynch and Edgerton, the 3-D images and 4-D shorts are nice to look at, but are in a different realm.

While issues of ultrasound exposure safety and the unavailability of counselling services at boutique imaging clinics are real issues, they alone do not explain the vitriol against non-diagnostic imaging. If ultrasound exposure is a problem, it is a problem across obstetric use. Ultrasound is used to scan diabetics past 30 weeks, to reassure women with concerns about their pregnancies, continually during some deliveries, and for many artificial reproductive procedures.301 To draw from the above discussion, we can extract three issues at stake in debates over 3-D ultrasound. First, that the image is somehow less objective. Second, the appearances of the objects that images capture are not of medical benefit.
Third, non-experts are not trained in the appropriate way to make judgements from what they see.

Curiosity about foetal development can be satisfied by reading a book; curiosity about your own developing foetus, if you are pregnant, cannot. A study by Simonsen, Branch, and Rose found that the top three reasons given for women seeking out ultrasound were: 1) meeting the baby, 2) having a visual confirmation of the pregnancy, and 3) assurance of foetal well being.302 While ultrasound is not the only or the best assessment of foetal well being, its very visuality, and the sense of being in visual contact with the foetus, seem to contribute to the popularity of ultrasound even if it is not medically recommended. 3 and 4-D ultrasound are generally more appealing to look at than B-mode images, at least for the audience of such images in obstetrics. Simonsen, Branch, and Rose’s study looked at women accessing entertainment imaging adjunct to their medically appointed ultrasounds and found that 53% of women who had had the 3-D ultrasound (in this case at boutique clinics) were dissatisfied with their medical ultrasound, compared to 13% of those who had not; one of the main complaints was poor image quality.303 The lack of time spent by sonographers or physicians going over the 2-D image and pointing out anatomical landmarks to help women interpret images was another complaint.304 Wanting keepsake images and wanting to know the gender of the foetus were also cited as reasons to access these services. Being able to observe the foetus, having images, and knowing the gender are among the features that have been described both as non-diagnostic and as entertainment features of ultrasound.

301 Burtbaw, “That’s Entertainment,” 445.
302 Simonsen et al., “Complexity,” 1373.
There are current debates in the literature on the effect of patient pressures on clinicians to add this entertainment value to medical exams.305 As Simonsen, Branch, and Rose point out,

Devoting extra time to the ultrasound session for the sake of entertainment and providing keepsake images generates costs. If these costs are not directly transferred to the patient, they are borne by the offices that perform these procedures. In addition, physicians must insist that the systematic evaluation of the fetus is paramount; entertainment ultrasonography must not distract from the quality of the medical study.306

Women and their medical practitioners seem to have different desires and expectations from foetal ultrasound. And the desires of women that concern knowledge that is either connected to their own vision (meeting the baby) or that is not seen as medically relevant are cast as entertainment. 3 and 4-D ultrasound seems more equipped to provide this even if it were done as part of a medical exam; how it represents the foetus seems to add the entertainment value by producing images that are easier to interpret and more desirable for non-experts. Medical vision includes measuring the foetus for what are considered relevant features; non-expert vision includes observing the foetus for entertainment and curiosity. But this is far too simple. Women’s curiosity about their own bodies and developing foetuses is not like curiosity about their physician’s children or how successful their office is. Women’s curiosity about their pregnancies is a medical curiosity in that it is a desire to know how their own body is changing and how the pregnancy is developing. People generally know their blood type, even without understanding antigens and without this knowledge being clinically significant, since blood typing is always done before surgery anyway. Curiosity about blood type is still a medical curiosity.

303 Ibid., 1372.
304 Ibid.
305 Ibid., 1342.
306 Ibid.
In British Columbia it has been routine for physicians not to disclose the gender of the foetus unless it was medically necessary (as in the case of sex linked genetic abnormalities), even though it is information routinely gathered during exams. This has now changed so that there is a $50 charge that women must pay for this information. The rationale for this fee is that the information is not medically relevant, and so is not covered by provincial insurance.307 Questions about what knowledge counts as medically relevant, and to whom, become issues concerning expert vision and the values implicit in it.

307 CBC News, CBC website: ultrasound-gender-fee.html (accessed May 11, 2010).

Better Baby Pictures

The ‘aesthetic’ elements of 3 and 4-D ultrasound include using fixed point lighting in the rendering of the three dimensional shapes. This means that facial planes, limbs, ears, and other surface features appear shadowed as if there were a light shining on them. Of course, this is not accurate, since light is not involved in ultrasound at all. Representing objects in this way makes it easier to interpret the three-dimensionality of the objects rendered on a screen, in this case the foetus. Shadows are markers that allow us to identify subtle curves and shapes on a face or a body. Ray casting algorithms calculate the precise curves and distances of the foetus and represent them as lighter or darker depending on their distance from the screen. Computer modelling, from video games to scene reconstruction in forestry, uses these representational conventions because they contribute to the ‘realism’ of the scene, much the way that shading does in life drawing. Like drawing, these images are based on appearances. The properties we are concerned with in a drawing are those that are carried by light and reflected off the surfaces of the objects we see around us.
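The fixed point lighting convention can be illustrated with the standard Lambert (cosine) shading rule from computer graphics: a surface patch is brighter the more directly it faces the imagined light. This is a generic graphics sketch, not the particular algorithm any ultrasound vendor uses; it simply shows how shadows can be computed for a shape that was never actually lit.

```python
# A sketch of fixed-point lighting via Lambertian shading: once the surface
# geometry is known, brightness is computed as if a light were shining on it.
# Ultrasound involves no light at all; the shading is a representational
# convention added to make the rendered shape easier to read.

import math

def lambert_shade(surface_normal, light_dir):
    """Brightness in 0..1: the cosine of the angle between the surface
    normal and the direction to the light, clamped at zero for facets
    that face away from the light (they fall into shadow)."""
    dot = sum(n * l for n, l in zip(surface_normal, light_dir))
    norm = math.sqrt(sum(n * n for n in surface_normal))
    lnorm = math.sqrt(sum(l * l for l in light_dir))
    return max(0.0, dot / (norm * lnorm))

# A facet facing the light is fully bright; one turned side-on is dark.
print(lambert_shade((0, 0, 1), (0, 0, 1)))  # prints 1.0
print(lambert_shade((1, 0, 0), (0, 0, 1)))  # prints 0.0
```

Applying this rule across the rendered foetal surface produces exactly the shadowed facial planes and limbs the text describes, from a light source that exists only in the representational scheme.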
The transmission is between visual properties presented three-dimensionally and two-dimensionally, which requires training and attention but still uses light. In ultrasound, sound is used to measure features of the object, which are then represented as luminosity values. Translating this signal→luminosity representational scheme into one that is meant to mimic the appearance of surface reflectance to us and exploit known visual cues for depth means that standard ultrasound signs are no longer available. Yet despite tracking outside form in a way that captures appearance, these images are still mechanically produced and causally counterfactually dependent. The information carried preserves counterfactual dependence between signal and luminosity values while also preserving similarity relations in terms of spatial distribution of biological indicators. This counterfactual dependence may be what appears like mechanical objectivity, but the system of encoding also preserves real similarity relations in terms of appearance features, allowing us to see the foetus through the image. It is unclear whether the volume rendering scheme preserves this. The move from echogenic properties to volume rendering suggests a fairly significant translation of information, and considered this way, it’s easy to see why people using ultrasound might think that 3 and 4-D images were less scientific because the images are less transparent. However, it turns out that these images do play a unique role in obstetrics and have allowed for increases in scientific knowledge in this field. 3 and 4-D ultrasound are not just pretty pictures. They are transparent for different features than B-mode ultrasound, and implicate visual knowledge in a different way.
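The depth-cue shading described above can be illustrated with a toy sketch. This is an assumption-laden illustration, not any vendor’s rendering algorithm: the depth map, the notional light position, and the function name are all invented for the example. The point it makes is the one in the text: luminosity in the rendered image is computed from echo-derived geometry, with a “light” that ultrasound itself never involves.

```python
# Illustrative sketch only (hypothetical names and values, not a clinical
# volume-rendering implementation): mapping echo-derived surface depths to
# grey values, so that points nearer the screen (and the notional fixed
# light) render brighter, mimicking shading cues from life drawing.

def shade_surface(depths, light_depth=0.0, ambient=0.2):
    """Map each surface point's distance from the screen to a grey value
    in [0, 1]: smaller depth (closer to the notional light) = brighter."""
    max_depth = max(max(row) for row in depths) or 1.0
    shaded = []
    for row in depths:
        shaded.append([
            ambient + (1.0 - ambient) * (1.0 - (d - light_depth) / max_depth)
            for d in row
        ])
    return shaded

# A toy 2x3 "depth map" of a curved surface: 0.0 = closest to the screen.
depth_map = [[0.0, 0.2, 0.6],
             [0.1, 0.4, 0.8]]
grey = shade_surface(depth_map)
# Nearer points come out brighter than farther ones, producing the
# shadowed-relief look that makes the rendered foetus easy to interpret.
assert grey[0][0] > grey[0][2] > grey[1][2]
```

The counterfactual dependence discussed in this section survives the sketch: change a depth value (the echogenic measurement) and the corresponding grey value changes with it.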
Benefits of 3/4-D Ultrasound

Despite the issues raised above, it turns out that 3 and 4-D ultrasound imaging do have a role beyond entertainment – they have added value to the science of obstetrics, and this added value is explained mainly by the very features that initially made them seem problematic. Their benefits are not only to non-experts who want to be able to see for themselves what is going on in their bodies, but for experts who are able to use this aspect of ultrasound vision to extend the reaches of medical and scientific practice. We can do more because we can see more. It turns out that there is more to 3 and 4-D ultrasound than just attractive pictures and the satisfaction of curiosity. Parents who receive 3-D images rather than 2-D images have been found to share the images with more than twice as many people (median 27.5 versus 11).308 This suggests two things. One is that the images are considered to be widely legible by non-experts – why send indeterminate images to your friends and family? Second, sharing pictures strengthens a woman’s support system. Those including more people in their support system often have fewer of the emotional and social problems that can arise with having a newborn. A review by Ji et al. found that women who had 3-D ultrasound were more likely than those receiving 2-D images (82% versus 39%) to form mental images of their foetus.309 This suggests a greater ability to visualize the foetus as their developing child. Seeing what the foetus is doing and being able to watch some of its behaviours (kicking, sucking the umbilical cord, touching its face) have emotional benefits for parents. “Meeting the baby” makes it less of a stranger. 308 EK Ji et al., “Effects of Ultrasound on Maternal-Foetal Bonding: A Comparison of Two and Three-dimensional Imaging,” Ultrasound in Obstetrics and Gynecology 25 (2005): 473. 309 Ibid.
Developing concepts of the foetus has been linked to greater maternal-foetal bonding and being more emotionally prepared to deal with a newborn.310 In turn, greater maternal-foetal bonding has been shown to improve the care that is taken for both the health of the mother and the growing foetus. This includes less drinking and drug use, greater adherence to healthy eating, taking prenatal vitamins, and other physician-recommended health care. Women with greater support networks during pregnancy have also been found to have a lower incidence of post-partum depression.311 In cases where problems are found, 3 and 4-D imaging can provide visual confirmation: being able to see the congenital problem can make these negative outcomes easier for parents to accept. “The 3D reconstruction of foetal morphology and the presentation of realistic photographic images to the parents enable better counseling and thus lead to the parents’ acceptance of some unfavorable situations in foetal development.”312 This is a boon both for parents, who may previously have been devastated to find out about issues at the birth of their child and spent a lot of time and money trying to decide what to do, and for physicians who may have to put plans in place for immediate treatment. What seems like entertainment actually does have medical value and can also have benefits for professionals. Better adherence to recommendations can lead to better obstetric and perinatal outcomes. A caveat is that there still need to be practices in place governing when these techniques are most effective. 3-D ultrasound excels for foetal face examination, but findings show that these images during the first trimester are ‘counterproductive’ to encouraging bonding as “the image 310 Kurjak et al., “How Useful is 3D and 4D Ultrasound,” 19–20. 311 Ibid. 312 Ibid.
appears to be strange and it can create a distorted image of their child, which will not reinforce the affective bonds.”313 The fact that 3-D ultrasound is better able to visualize foetal morphology, both in the face and the body, can prepare not only parents but also doctors in the case of negative outcomes. Many chromosomal and developmental problems have typical visual forms which are more easily viewed in 3-D. These images are being used to confirm once difficult-to-image facial malformations, such as cleft palate, and other markers of chromosomal disorders such as Trisomy 18 and hydrops.314 For physicians, this can include being aware of the extent of problems and being able to have surgical teams available at the birth of the baby, prepared to step in with life-saving measures. An example is cleft palate, which can cause extensive difficulties breathing and swallowing. Cleft palates can be visualized from the front of the face in surface mode as well as in reverse mode, which allows doctors to see inside the face cavities in order to determine the extent of the damage.315 Another limitation of 2-D ultrasound is that it is difficult to see entire forms at once. Foetal ears can be an important marker of certain disorders, but have not previously been available for doctors to measure.316 Foetal ears are a problem in 2-D ultrasound since a) they are small and curved and b) the entire ear cannot be shown in a single plane. Seeing and being able to measure the entire ear (e.g. its structure) is what matters, especially in testing for Down syndrome, where discrepancies in ear development can be used as evidence of the syndrome.317 3-D imaging also has strengths in other areas. Kurjak et al.’s review of 438 papers on the topic found benefits with 3-D imaging in the ability to rotate images, which allows for better determination of multiple pregnancies and potential problems developing in multiples, such as 313 Kurjak et al., “How Useful is 3D and 4D Ultrasound,” 2.
314 Roger Pierson, “Three-dimensional Ultrasonography of the Embryo and Fetus,” in Textbook of Foetal Ultrasound, ed. R Jaffe and T Bui (Nashville: Parthenon, 1999), 317–326. 315 Kurjak et al., “How Useful is 3D and 4D Ultrasound,” 11–12. 316 Ibid., 13. 317 Ibid. vanishing twin syndrome, conjoined twins and twin-to-twin transfusion syndrome.318 Other strengths included visualizing the entire spine, which can be used to measure foetal lung capacity as well as spinal problems; and visualizing the placenta, umbilical cord length, and the foetal nasal bone. The 3-D technique allows a better description of “absent,” “hypoplastic,” and “unilateral absent” nasal bones in fetuses with Down syndrome.319 All of these play important roles in the diagnostic context, where medical expertise is aided by the greater ability to visually access the ultrasound information using volume rendering. Outside of a diagnostic context, there are additional ways in which 3 and 4-D ultrasound technology can extend the scope of practice. 4-D imaging can be used to guide procedures and has so far been found to have some benefits over traditional real-time ultrasound: Needle guidance is currently obtainable with the introduction of 4DUS. This technology has potential for synchronized visualization of two or three perpendicular (orthogonal) planes of view in real time. Imaging a needle tip in two or three such planes with this system permits for accurate needle position with the potential elimination of lateralization occurring with 2-D scanning.320 A surprising area where it particularly excels, by overcoming difficulties with the 2-D display, is foetal behaviour. Here the technology literally allows for the study of events that could not previously be observed.
Foetal behaviour can be studied in utero, followed by neonatal check-up, in comparison studies examining the continuity of behaviour from foetus to neonate.321 Studies of foetal facial expression could not be done using 2-D display: foetal behaviour could be studied, but it was handicapped by the display, since when data is displayed in planes the entire face cannot be visible at once. 3 and 4-D display have allowed researchers to study the expressions of foetuses and the continuity of these behaviours with post-partum expressions.322 Notable studies of handedness capture arm movements, grasping, and thumb sucking in utero that could not previously be seen. This behaviour can then be compared with the behaviour of neonates.323 Behaviour and expression are important markers not just for understanding what the foetus is doing at different developmental stages, but also for understanding when different behaviours begin and where they come from. For example, studies of handedness using 3-D and 4-D display have shown preference as early as 12 weeks. This is significant because it is before cortical development, and handedness has long been thought to be controlled by the cortex.324 Furthermore, these studies have allowed researchers to learn things that were previously unknown about foetal behaviour in utero. For instance, studies using 4-D ultrasound have found that, in contrast with previous wisdom, foetal defecation is a normal part of foetal behaviour. 318 Ibid., 11. 319 Ibid., 4. 320 Ibid., 12. 321 A Kurjak et al., “Behavioral Pattern Continuity from Prenatal to Postnatal Life: A Study by Four-dimensional (4D) Ultrasonography,” Journal of Perinatal Medicine 32 (2004): 346.
Studies have examined this behaviour over the duration of a pregnancy, attempting to understand the role it plays in the amniotic environment.325 Since it had been thought that any foetal defecation in utero was damaging to the development of the foetus, this is a finding that is important in revising obstetric understanding of early human development. In the studies mentioned above, the imaging itself has been treated as an auxiliary question. The assumption is that foetal behaviour can be observed using these imaging techniques, that the foetus itself is seen in the images. The questions in the studies emphasize how foetuses behave, with no emphasis on the technique of ultrasound itself. This is not the creation of a new phenomenon which is then studied; rather, it is treated as a new way to view a previously unperceivable phenomenon. Reviewing how it is used and the role it plays in the science of obstetrics, we see that ultrasound is treated as a visual prosthesis, one which allows us to watch, observe, and study the behaviour of foetuses in order to learn things that were inaccessible prior to the advent of this technology. This usefulness comes not despite the representational scheme’s making things look like solid three-dimensional objects, but because of it. 322 Ibid. 323 A Kurjak et al., “The Antenatal Development of Fetal Behavioral Patterns Assessed by Four-dimensional Sonography,” Journal of Maternal Fetal and Neonatal Medicine 17 (2005): 401. 324 A Kurjak et al., “The Role of 4D Sonography in the Neurological Assessment of Early Human Development,” Ultrasound Review Obstetrics and Gynecology 4 (2004): 148. 325 CL Ramon y Cajal and RO Martinez, “Prenatal Observation of Fetal Defecation Using Four-Dimensional Ultrasonography,” Ultrasound in Obstetrics and Gynecology 26 (2005): 794.
The increased interference with data in the case of ultrasound seems, then, to augment rather than inhibit the visibility of the images, thereby allowing them in turn to increase the scope of obstetric practice. The fact that ultrasound is used to increase visibility in obstetrics, for both experts and non-experts, demands a re-examination of the interference in ultrasound images, in this case the role of volume rendering. That the images are used to see things that were not previously visible – both aspects of objects in still images, and behaviours in real-time images – is supported by the use of the images in the science, so the question remains what supports this use. For one, volume rendering presents things as if they were visible: as if we were looking at three-dimensional objects on a flat screen, rather than the usual two-dimensional slice. This not only makes objects easier to interpret, because our experience of them is more akin to familiar views of anatomy, but allows for different and novel ways of examining them (such as reverse and x-ray mode). The basic physical working of x-ray is a kind of algorithm whereby the luminosity is calculated from the accumulated attenuation values of all of the tissues in its path; by comparison, volume rendering algorithms in ultrasound perform similar functions, but with greater specificity. The algorithms used in volume rendering have been designed to maintain counterfactual dependency, in order to be information-preserving over a larger data set. They have also been designed to increase the visibility of certain features of the object that are carried in the data, by making the object easier to interpret or more lifelike.
For example, surface smoothing averages voxel values with those next to them, which can help to control for movement artefacts and can make for clearer images of their objects.326 Algorithms are like measuring devices, taking both the echogenic data and the grey-scale information in order to create images. They become part of the causal transfer of information between the object and the image we perceive. Even if, in some cases such as surface smoothing, some data is lost, this is no different from cameras that adjust for light conditions to correct colour. The worry is that this incapacitates the transfer of real similarity relations, but it does not. 3-D ultrasound is accessible to more people because it is more transparent. Rather than capitalizing on some appearance features, with signal information then being valuable for making diagnostic distinctions within visible anatomical structures, the benefits come from the application of anatomical knowledge in the ordinary visible domain. In all of the modes, the image would be different were the object different in its shape, size, or other features that can be determined by measuring its echogenic properties. Being able to manipulate the presentation of the features makes 3-D rendering useful for broader applications. This kind of three-dimensional rendering is becoming more and more common across different fields of medical practice: 3-D reconstruction is used in CT, MRI, PET, fMRI and ultrasound for imaging as well as for performing procedures, including planning and performing surgeries. One of the strengths of this kind of visualization is that any plane of the object can be examined, which overcomes past limitations of technologies such as PET and CT, which can normally only show slices in a single plane. What is interesting about it is the extent to which it bypasses the traditional distinctions between expert and non-expert vision.
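The surface-smoothing operation described earlier in this section can be sketched in miniature. This is a hypothetical illustration (the function name and values are invented, and real systems smooth over three-dimensional voxel neighbourhoods rather than a single row): each voxel value is replaced by the average of itself and its neighbours, damping artefacts at the cost of some fine detail, which is the trade-off the text describes.

```python
# Illustrative sketch only: neighbour-averaging ("surface smoothing") as a
# one-dimensional moving average over a row of voxel intensity values.
# Real implementations operate over 3-D voxel neighbourhoods.

def smooth_voxels(values, radius=1):
    """Replace each voxel value with the mean of itself and its
    neighbours within `radius` (window is truncated at the edges)."""
    out = []
    for i in range(len(values)):
        window = values[max(0, i - radius): i + radius + 1]
        out.append(sum(window) / len(window))
    return out

row = [10, 10, 90, 10, 10]      # a single noisy spike, e.g. a movement artefact
smoothed = smooth_voxels(row)   # the spike is spread out and damped
assert max(smoothed) < 90       # the artefact no longer dominates the image
```

The sketch also makes the epistemic point concrete: the output still depends counterfactually on the input values, even though some local detail (the exact height of the spike) is lost, much as a camera’s colour correction loses some raw sensor data.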
Non-experts are able to see more, and to find the images more informative, because more of their regular visual knowledge is implicated. Knowledge of the relevance of signal intensity in specific locations requires specialized knowledge, both of anatomy and of the appearance of things within an imaging system (e.g. that a brighter spot in the kidney could be tissue damage). Changing image appearance in order to implicate a wider range of possible visual knowledge makes images more useful in a wider range of applications, yet presents serious challenges to the role of trained judgement and expert vision. It also reflects the intense human reliance on vision for science and medicine, and the desire to overcome the limitations of our visual access to those things we cannot normally see. The balance between expert and non-expert vision is central to what we are to make of images in our visual culture. 326 Pierson, “Three-Dimensional,” 320.

6.3. Seeing the Mind with Functional Imaging

The imaging technologies discussed so far perform so-called structural imaging; they are used to image stable anatomical structures of the objects scanned. What we see in the images is fairly obvious, in one sense; the bones, brains, and babies seen in the images are the concern of imaging tasks as well as the object of the images. Medical imaging uses x-rays, radio waves, and magnetic resonance or sound instead of light, and what we are able to see are those properties accessible by that signal. Functional imaging, such as fMRI and PET, is used to visualize brain function. Admittedly, the difference between “structural” and “functional” in imaging can be a “forced dichotomy” that can lead to “oversimplification” and misuse of the terms.327 For one, MRI and fMRI are done using the same machine but with different protocols.
When examining brain imaging, however, there are a number of reasons, including different histories, to accept the gross distinction between structural and functional imaging. One significant difference is that functional imaging of the brain is time-based; it tracks changes in signal over time, rather than using the signal to track similarities in tissue. fMRI is a method of examining the brain while the subject is performing a cognitive task, and the images reflect what is happening as well as where. Another difference is that the significant information used to produce the image is not a direct measure of signal intensity. Instead, two separate cognitive tasks are performed and then subtracted in order to isolate a task-relevant signal. There is some debate in the literature over just what function is represented in functional MRI. That is, there is debate over the correlation between the mental acts tested and the brain activity detected by the scanner – whether what is imaged is brain function or mental function. 327 Scott Small, “Neurobiological Correlates of Imaging,” in Functional MRI: Applications in Clinical Neurology and Psychiatry, ed. Mark D’Esposito (Oxford: Informa Healthcare, 2006): 1–8. While it is commonly written up as brain “activation,” this is no less problematic. Functional MRI tracks subtle changes in blood oxygenation while the person in the scanner is performing a task, and this acts as a contrast agent that allows blood flow to be imaged by the magnet. This is the basis of the BOLD (blood oxygenation level dependent) signal. How the signal is interpreted, analysed, and studied, as well as how it is used to make arguments about brain function and mental activity, has been a source of contention.
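The subtraction logic described above can be made concrete with a toy sketch. This is an assumption-laden illustration, not a real analysis pipeline: the numbers are invented, and actual fMRI analysis applies statistical tests across many trials and subjects rather than a single subtraction and threshold. What the sketch shows is the basic point in the text: the image is not a direct measure of signal intensity, but a thresholded difference between two conditions.

```python
# Illustrative sketch only (toy numbers, hypothetical function name):
# signal recorded during a task condition minus signal recorded during a
# control condition isolates a task-relevant difference, which is then
# thresholded to decide which voxels count as "active" in the final map.

def subtraction_map(task, control, threshold=2.0):
    """Return per-voxel differences and a boolean 'activation' mask."""
    diff = [t - c for t, c in zip(task, control)]
    mask = [d > threshold for d in diff]
    return diff, mask

# Toy BOLD-like mean signals for five voxels under each condition.
task_signal    = [100.2, 101.5, 104.8, 100.9, 100.1]
control_signal = [100.0, 101.0, 100.3, 100.5, 100.0]

diff, active = subtraction_map(task_signal, control_signal)
# Only the third voxel clears the threshold, so only it would be coloured
# in the published image; everything else disappears from view.
assert active == [False, False, True, False, False]
```

Even this toy version shows why the images are not snapshots of a brain “lighting up”: what is displayed depends on a chosen control task and a chosen threshold, both of which are invisible in the final picture.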
The descriptions of fMRI, both as a brain imaging technology and as a visual technology, seem to differ across research and clinical contexts of use, as well as between different researchers and in public discourses including media and science journalism. fMRI is described as showing mental activity, and also as showing metabolic activity in the brain.328 These issues will be discussed in detail below, as they affect our understanding of how we should interpret fMRI images and what they can be used to see. These differing ways of discussing fMRI create confusion over what the technology is and the role it plays as evidence in different arguments and across different contexts. There is a public perception of fMRI images, which can be traced through media reporting, as visual representations of mind. This naive view is at odds with researchers’ denial of the importance of the visual component of fMRI over its use as a study of mental function, or their emphasis on the visual while denying the connection to the mental. The naive view seems to affect how fMRI is treated by both experts and non-experts. The contexts in which the technology is being used and in which the images are being discussed are often important in this discussion. Sorting through these differing views of fMRI is important for understanding the content of these images. 328 Marcus Raichle, “Behind the Scenes of Functional Brain Imaging: A Historical and Physiological Perspective,” Proceedings of the National Academy of Sciences of the United States of America 95 (1998): 765–772.

Structure and Function in Understanding the Brain

The most prevalent use of fMRI is in brain mapping, that is, in performing experiments in order to determine connections between specific mental activities (e.g. seeing a face compared to other mental acts such as seeing other stimuli) and functional areas of the brain.
Most fMRI images that appear outside of professional journals, the ones that most of us are exposed to, are images produced from these brain mapping experiments. Since the first development of functional imaging techniques for the brain, there has been a great deal of controversy over what these images can do and the role they can play in various medical and scientific practices. The images, in turn, have been discussed according to a number of different pictorial motifs. The analogy with cartography when researchers are engaged in functional mapping is especially prevalent. The idea of mapping suggests geographically distinct regions, as well as the idea of the brain as uncharted territory, and so already imports a set of beliefs about the nature of the connection between mental acts and the physical brain. When imaging the brain, structural MRI uses relaxation times of the magnetic resonance signal in order to create images of the brain, taking advantage of differences of signal intensity across tissue types – white matter, grey matter, cerebrospinal fluid – to show both recognizable anatomical structures in the brain and any abnormalities in tissue. Attempts to create anatomical images of the brain go back to the beginning of x-ray imaging, when it was found that x-rays do not penetrate the skull adequately to make clear images.329 The development of CT (computed tomography) is tied very much to this desire to visualize the brain, while the history of MRI was much more closely linked to the search for a way to detect tumours in the human body, as has been discussed. 329 Oldendorf, Quest, 7. While fMRI is done on the same machine as an MRI, the history of functional imaging is different from that of structural imaging, both in terms of who the main users of the technology are and the kind of research and scientific contexts that have preceded it.
Attempts to gain scientific information about the working brain are tied to theories of the modularity of brain function underlying specific mental activities. This has led to a drive to localize specific mental activities at specific sites in the brain. The link between the mind, and the brain as the material seat of the mind, goes back to Galen in Roman times.330 The attempt to define the functional architecture of the brain goes back to the mid-1800s. Research by Pierre Paul Broca found that people with aphasia (a speech deficit) had a lesion in a specific area of the brain (now called Broca’s area) which was responsible for the control of speech. Richard Caton explored the sensory regions of the brains of animals in 1877, and Munk found the visual cortex to be in the occipital lobes of the brain.331 Unusual cases of personality change due to brain damage helped to ground the idea that specific brain areas control specific mental faculties. Some areas of the brain are functionally well established through lesion studies and animal experiments, including single neuron studies. The visual cortex, for example, has been extensively studied for years. The auditory and motor cortices are likewise well established.332 The Holy Grail for brain mappers is higher-order cognitive states – memory, preferences, thoughts, and beliefs, and the connection with behaviour. It was not mere embodiment sought in the brain, but the mind.333 The above-mentioned findings occurred in a context where specific mental faculties were being associated with specific brain areas – as in phrenology. Phrenology was the mapping of mental or personal traits onto the topography of the head, which was thought to be consistent with that of the brain.334 330 William Uttal, The New Phrenology (Cambridge: MIT Press, 2001), 6–9. 331 Uttal, Phrenology, 10–11. 332 Ibid. 333 Ibid. Figure 6.4: Phrenological Chart of the Faculties. Source: Wikimedia Commons, public domain.
Phrenological maps show very specific faculties that reflect the values and attitudes of the time period, as can be seen in Figure 6.4. Mental faculties such as hope, veneration, and suavity have a place in the map along with circulation, time, and language. While phrenology was found to be scientifically suspect, the idea of taxonomies of mental faculties finding a home in brain localization has continued on. 334 Uttal, Phrenology, 102–09. Positron Emission Tomography (PET) was the first functional imaging used to study the brain. Developed at the same time as CT and MRI, PET uses radioactive isotopes to study metabolic activity in the brain. For this reason, it is used slightly differently from fMRI, which is non-invasive. PET studies in the 1980s, however, caught the interest of cognitive psychologists. These studies brought together, for the first time, physicists, neurologists, and psychologists to study the working brain and to devise tasks in order to isolate particular mental activities. Without these inventive ways of isolating particular mental acts, and subtracting them from the overall activity in the brain, current functional imaging practices would not be available.335 The following quote is from a researcher working with functional imaging who is discussing the appeal of visual images: Well, I think it does touch a lot of people’s imaginations. The idea that you can quote-unquote ‘see’ what a brain is doing during a particular task, that does intrigue people. So apart from what you can learn scientifically, about where glucose is being used in a brain, yes, it does have a sort of man-in-the-street appeal: “Wow, that is a brain at work, and I can see how it’s different when it thinks about an animal and when it thinks about a mathematics problem”.336 We see here the difference between “learning scientifically” and the kinds of things that can be noticed by non-experts.
As with other technologies there is a continuum between what experts see and what non-experts see, but in fMRI the range of expert and non-expert vision and interpretation is more varied. Perhaps more so than in any other kind of imaging, fMRI captures the imagination and reflects our desires to investigate, to measure, and to see into ourselves. 335 Raichle, “Behind the Scenes,” 765–772. 336 Newsweek science editor Sharon Begley on PET, in J Dumit, Picturing Personhood: Brain Scans and Biomedical Identity (Princeton: Princeton University Press, 2004), 22–23.

The Naive View of Functional Imaging

Functional brain imaging has popular appeal. The difficulty of taking something as technologically complex and interdisciplinary as cognitive neuroscience and fMRI research and publicizing it is that it is inevitably simplified. This simplification affects public perception of the images: how they should be visually interpreted and what they are evidence of. There is a kind of standard idea that fMRI imaging supports functional localization by showing the neural correlates of mental activity. There is what I will call the naive view of fMRI, which is propagated by how fMRI is presented in media, glossed over in introductory texts, and presented as evidence in some psychological and philosophical arguments as well as in court.337 I’m calling it the naive view not to say that those who uphold this view are unsophisticated, but rather because it falls in line with common sense or folk theories about both pictures and brains. This could also be called the folk view for how it reflects folk epistemological ideas about the brain, the mind, and perception. The naive view is meant to capture an understanding of the abilities of fMRI as an imaging technology. I take it as a view about what fMRI images represent, or how they should be interpreted. There are a number of factors that underlie the view.
A review by Racine and others identified three trends in how fMRI is reported in the media that are thought to reflect how people take fMRI: what they called neuro-realism, neuro-essentialism and neuro-policy.338 Neuro-realism makes a phenomenon uncritically real, objective, or effective. This includes the idea that fMRI serves as validation or invalidation of our ordinary conceptions of things. An example cited in the Racine et al. study is the headline “Fat really does bring pleasure” for a story on fMRI studies of pleasure centres in the brain.339 In terms of imaging, this reflects the idea that images can be visual proof of a view. It further implies that the brain phenomena are real phenomena that can be imaged. Images can be visual proof because they visually show a real phenomenon. Neuro-essentialism is the equation of subjectivity or personal identity with the brain. This follows from neuro-realism – that seeing representations of someone’s brain states is a way of seeing deeper truths about them. Headlines such as “FMRI knows your secrets” and “The brain can’t lie: brain scans reveal how you think and feel and even how you might behave. No wonder the CIA and big businesses are interested” reinforce the view that fMRI is a kind of mind reading, so that getting more information about brain states will allow us to better understand ourselves. The question of neuro-essentialism has been explored at length by Dumit, who examines both its negative and its positive sides. On the one hand it seems to define limiting and inescapable ontological kinds that diminish personal responsibility. 337 E Racine et al., “fMRI in the Public Eye,” Nature Reviews Neuroscience 6 (2005): 159–164. 338 Racine et al., “fMRI in the Public Eye,” 161. 339 Ibid.
Part of this is reducing individual differences to brain differences – as though the difference between gamblers and non-gamblers is something that can be read off the brain. On the other hand it can help reduce personal stigma for things like depression. In either case it seems as if we do not know enough about neurology or about the interaction of nature and nurture to make very strong claims about these issues.340 According to the naive view, fMRI is a way to look into the mind by showing what areas of the brain are active during different tasks, and can be used to trace abnormalities.341 What is assumed to be seen in the images is brain activation correlated with mental tasks. The content of fMRI images is thought to include the mental acts associated with brain activity. Conceived of in this way, fMRI images are images of mental activity in the brain, where the easy identity between mental states and brain states is taken for granted.342 Reporting on fMRI generally reflects this. Consider the following recent reporting on fMRI used on patients in a persistent vegetative state: British and Belgian researchers used a brain scanner called functional magnetic resonance imaging to show the man, who suffered a severe traumatic brain injury in a road accident in 2003, was able to think “yes” or “no” answers to questions by willfully changing his brain activity. Experts say the result means all patients in coma-like states should be reassessed and it may change the way they are cared for in future. 340 Racine et al., “fMRI in the Public Eye,” 162. See also Dumit, Picturing Personhood, 170–186. 341 McCabe et al., “Seeing is Believing: The Effect of Brain Images on Judgments of Scientific Reasoning,” Cognition 107 (2008): 343–352. 342 This assumption of mind/brain identity is also discussed in Uttal, Phrenology, 2–5.
After detecting signs of awareness, the doctors scanned the man’s brain while he was asked to say “yes” or “no” to questions such as “Is your father’s name Thomas?” The results showed that by changing his brain activity, the man communicated his answer. “We were astonished when we saw the results of the patient’s scan and that he was able to correctly answer the questions that were asked by simply changing his thoughts,” said Adrian Owen, co-author of the study from the Medical Research Council. “Not only did these scans tell us that the patient was not in a vegetative state but, more importantly, for the first time in five years it provided the patient with a way of communicating his thoughts to the outside world.” […] The fMRI method used can decipher the brain’s answers to questions in healthy people with 100 percent accuracy, but it has never been tried before in patients unable to move or speak.343 The article discusses the findings in broad terms that reinforce a particular view of what is possible with fMRI, and give little room for a critique of the methods of the experiment. It mentions, first, that the man could willfully change his brain activity and could communicate with doctors by this method. From the last paragraph quoted above, we are led to infer that this method (which isn’t discussed at all) is evidence of the findings because of studies with conscious people. There is also, implicitly, the suggestion that there are clear delineations between yes and no mental states in human brain states; that we can somehow see “yes” states and “no” states in the brain. A further complication is that the study was done “after detecting signs of awareness”; what this consisted of is not stated. Given the information above, there is no clear answer to how one might detect awareness using brain scans. 343 Reuters Science News, Reuters web site, (accessed March, 2009).
No mention is made of the fact that fMRI is subtractive: that data from one set of responses is subtracted from another to find the difference. Instead the article discusses ‘seeing’ the results and what the scans ‘tell us,’ which implies both a directness of the evidence and the authority of scientific testimony. Another news article on the same study presents a different angle: instead of the breakthrough being the communication itself, it is the technique used for “mind reading.” By helping doctors accurately ascertain whether a patient is mentally responsive, the technique “will change patient care, improve our diagnostics and avoid useless treatment,” Belgian neuropsychologist Audrey Vanhaudenhuyse told AFP. The car crash victim had not responded to any external stimuli, leading others to conclude he was in a vegetative state. When the British-Belgian team examined him, they found small signs of consciousness. “But we couldn’t communicate with him,” Vanhaudenhuyse said. The team had already examined a number of healthy people using fMRI, viewing which parts of their brains reacted when asked to respond to simple “yes” or “no” questions by imagining playing tennis or walking around a room. They then placed the patient into an MRI. “We asked him the same things, to imagine he was playing tennis to respond to questions. And he succeeded,” said Vanhaudenhuyse. “That confirms he is not in a vegetative state.”344 The image that accompanies the text tells a different story, however. The accompanying text states that different parts of the brain “light up” during different tasks, and these two task states are also represented in different colours, which makes them read as different in kind. Again, the article suggests that fMRI scanning is not statistical, and it reinforces the view that there are stable, consistent, and known patterns of brain activity associated with different activities.
The original study details the activities, how they were established using normal controls (a group of 16), and what the statistical findings were. The journal article that sparked this discussion clearly delineates the different things that were being done with the experiment. 344 Yahoo Science News, Yahoo web site, (accessed March 2009). The visualization task states are well defined, the communication tasks are explained in terms of the control subjects, and the findings are cautiously discussed in relation to both the literature on these kinds of experiments, and practices around diagnosis of persistent vegetative states.345 Media reporting can help shape our perceptions of functional imaging through presentation of the studies as if the distinct brain areas and functional architecture were well established rather than partial and debated. These kinds of studies are popular in the media, however, which contributes to the view that fMRI is a way of simply observing brain activity, and to the view that areas of the brain can be isolated that correspond with specific mental states. Recent reporting of fMRI used as a lie detector or in place of cinema focus groups shows this as well.346 In a recent Wired magazine article on neurocinema (research being done by a company that also does fMRI lie detection) the interviewer and researcher easily discuss fMRI detecting emotions that subjects do not know they are feeling, seeing fear in the brain, and using this information to see what parts of movies are really scary and what parts are boring.347 The suggestion from the researcher is that fMRI is better than focus groups, because it judges emotion more accurately than the reports of participants.348 These examples show the prevalence of neuro-realism and neuro-essentialism as ideals, and also some of the implications of accepting these for the interpretation of images.
In part, both the media interest in fMRI and the naïve view are driven by the compelling images presented in these studies. Images themselves are central to the language of the above discussions and to what makes them seem so compelling – the idea that we can use this imaging technology to see into the working brain. The clarity of fMRI images presented in public arenas (rather than used in research) makes them very appealing, as researchers recognize. 345 Martin M Monti et al., “Willful Modulation of Brain Activity in Disorders of Consciousness,” New England Journal of Medicine 362 (2010): 579–589. 346 Curtis Silver, “Neurocinema Aims to Change the Way Movies are Made,” Wired Magazine website, (accessed March 2009). 347 Ibid. 348 Ibid. For the most part it would seem as if the naïve view of fMRI is adequate to a general and non-specialist understanding of the role of functional imaging in science. As long as the science itself is done well, a failure to fully understand the technology should not create severe problems, any more than a failure to properly understand the technology of photography hinders our ability to understand documentary or photojournalism when those things are done well. Recently, however, there have been a number of discussions about the compelling visual quality of fMRI images, and the evidentiary role they play in research and in public arenas where this “man-on-the-street” appeal becomes problematic.
Views of fMRI in Use
The appearance of functional images in court cases has driven some to question whether brain images are misleading, or prejudicial to jurors.349 Court cases are good theatres for understanding public perceptions of brain imaging, because they involve non-experts looking at images and determining their significance for claims made about people, criminal responsibility, and the presentation of mental traits and capacities.350 This becomes especially important as things like fMRI lie detection begin to show up in court cases.351 Findings from mock trials show that juries presented with photographs of a crime scene, even when those photographs are neutral in terms of evidence, are more likely to convict. Photography, as a medium, has a high evidentiary status, and this seems to affect jurors more than the content of the photographs. A study by McCabe and others found that people rank bad arguments as making more sense if they are accompanied by brain images rather than bar graphs or no images. 349 Walter Sinnott-Armstrong et al., “Brain Images as Legal Evidence,” Episteme (2008): 359–373. 350 Ibid. 351 The Law and Neuroscience Project newsletter, issue 4, 2009, pg 3–4. Stanford Law web site, (accessed March 2009). They also found that subjects were less critical of studies reported in the media when they contained brain images.352 Clearly images have an important role in how people consider arguments and evidence. One suggestion is that jurors are treating brain images much as they treat photography. This is to say that people mistakenly take the way brain images represent to be comparable to the way photographs represent, both in terms of how they are to interpret these images and in the kind of object presented in the image. Roskies argues that fMRI images are naïvely thought to present a brain at work.
That is, they are thought to be pictures of brains at work in the way a photograph of a brain is a picture of a brain.353 A part of the naïve view of fMRI imaging could be this interpretive question: how do people without background training in the making and interpretation of medical images interpret the images, compared to those who use them across different fields? If it is the case that people are interpreting fMRI images in the same way as photographs (Roskies treats this as a hypothesis to explain other data rather than something she argues for, and stresses that it is really an empirical question), then the images could well be treated as if their pictorial content were accessible in the same way, and so be given the same evidentiary weight as photography. This is to say they may be treated as if fMRI images were really a way of seeing (in a straightforward sense) brain activation in response to mental tasks, or the mind at work in the brain, in accordance with neuro-realism.354 The question of whether functional brain images are problematic as evidence is intimately tied up with people’s understanding of the technology and how the images are constructed. I would argue that, if this is the case, it is because fMRI images are understood according to the naïve view. Of course, we could also talk about a naïve view of photography that is at play in the kinds of assumptions that underlie such a comparison. 352 McCabe et al., “Seeing is Believing,” 345. 353 Adina Roskies, “Are Neuroimages Like Photographs of the Brain?” Philosophy of Science 74 (2007): 860–872. 354 Racine et al., “fMRI in the Public Eye,” 159. Researchers are cautious about the role that images play in their work and the epistemic weight they should be given.
Studies by Beaulieu have examined the attitudes of brain map researchers towards fMRI and PET images and have found several common themes, among them that they are doing quantitative work rather than making ‘pretty pictures,’355 and that their work makes concrete what was previously only theoretical.356 Unlike with other kinds of imaging, in fMRI there is an attempt by some people using the technology to separate measurement information from the visual. This, however, is at odds with the naïve view, where the content is treated as if it were visual, and also with the dissemination of images through publications and in research.357 fMRI is not exactly what the naïve view thinks it is. The content of the images, and what must be brought to bear in their interpretation, differs between the two groups. On the naïve view, fMRI is a powerful imaging technology capable of showing activity in the brain through images; and while this is true, it leaves room for misinterpretation about the role the images themselves play in the technology. Images seem to be what carries a great deal of epistemic weight, but the fMRI images that are publicly shown are not produced in the same signal→luminosity kind of representational system as other kinds of imaging technologies. What fMRI images are of, and what fMRI can be evidence of, has to be considered by examining how the everyday fMRI images we see are constructed, what they are and what they are not, and the actual role they play in various scientific contexts. 355 Beaulieu, “Pictures and Truth,” 62. 356 Ibid., 64. 357 Ibid., 79. MRI measures a variety of state changes of hydrogen protons in tissue after inducing local changes, using several different kinds of scanning protocols to capture these changes. fMRI operates in a similar way: in fact, fMRI can often be done on the same machine as MRI, but this does not mean that the visual content of the images is comparable. fMRI measures
differences between oxygenated and deoxygenated haemoglobin; it can do so because the increased magnetic signal in deoxygenated haemoglobin acts as a contrast agent in T2* scanning protocols. The BOLD signal is based on what is known as the “haemodynamic response”: the upsurge of oxygenated blood to areas of higher metabolic activity in order to replenish deoxygenated blood.358 For this reason, BOLD is not a direct way of measuring neural activity, as is sometimes thought. Rather, it indirectly tracks a biological response that is correlated with increased neural activity, though the relationship between actual neural activity and haemodynamic response is not yet well understood.359 Unlike MRI, which represents tissue properties that remain stable over time, fMRI records dynamic changes in the brain following the flow of blood. In order to track these changes, fMRI images need to be taken very quickly. So while MRI images may take a few minutes to make, fMRI images are taken every few seconds. The result of the speed at which fMRI images are taken is that raw fMRI images do not look like the pretty images presented in magazines and journal publications. They do not have the detailed resolution that MRI images do, and few anatomical structures of the brain are recognizable. fMRI images likewise do not have the coloured areas seen in published images. Of course, T2* can be used in MRI, but what differentiates fMRI, what makes it a functional rather than a structural imaging technology, is how the images are produced. The functional use of this imaging requires measuring changes in signal over time and across tasks to be significant. This means that a single image from a single scan from a single person cannot tell a researcher with any specificity where brain activity was occurring. The signal is not visibly localizable in single scans, nor is it significant.
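The comparative, task-based logic described here can be illustrated schematically. The following toy sketch is pure illustration (the voxel values, the threshold, and the function names are invented, and no actual fMRI analysis package works this simply): signal from two task states is compared voxel by voxel, and only statistically large differences count as ‘activation.’

```python
# A toy sketch of the subtractive logic described above: signal from two task
# states is compared voxel by voxel, and only statistically large differences
# survive. All numbers are invented for illustration.
import math
from statistics import mean, stdev

def voxel_t(task_a, task_b):
    """Two-sample t statistic for one voxel's signal across repeated scans."""
    pooled = math.sqrt(stdev(task_a) ** 2 / len(task_a) +
                       stdev(task_b) ** 2 / len(task_b))
    return (mean(task_a) - mean(task_b)) / pooled

# Each pair of lists: repeated BOLD measurements at one voxel during scans
# of task A and task B (hypothetical values).
voxels = {
    "voxel_1": ([102, 104, 103, 105], [98, 97, 99, 98]),    # responsive
    "voxel_2": ([100, 101, 99, 100], [100, 99, 101, 100]),  # unresponsive
}

THRESHOLD = 3.0  # a significance cutoff, chosen by the researcher
active = {v for v, (a, b) in voxels.items() if voxel_t(a, b) > THRESHOLD}
print(active)  # {'voxel_1'}
```

Even in this caricature, the point in the surrounding discussion is visible: what counts as ‘active’ depends entirely on the comparison across task states and on a threshold chosen by the researcher; nothing in a single scan is significant on its own.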
Furthermore, the entire brain is active, in a sense, all of the time, so in order to measure relevant activity fMRI research makes use of tasks performed in the scanner which are meant to isolate a specific correlate between mental activity and brain activity. For instance, one might have someone open their eyes and then close them. Or, as in the experiment discussed above, imagine performing a particular activity and then rest. 358 Small, “Neural Correlates,” 2–3. 359 Ibid. No single picture within this set of images will be particularly informative even in terms of signal, because what matters is changes over time and across states related to tasks. In order to ensure that increases or decreases of signal are coming from the same area of the brain, and for this to be interesting to a cognitive neurologist, the activity must be mapped onto an anatomical picture. Research neurologists and cognitive neurologists (like the chemists and physicists discussed in the history of MRI) do not like looking at fuzzy pictures. For this reason any region of interest can be recalled and time-based information from that region presented graphically as a chart. In a simplified version, if a researcher were using a single subject for their experiment, they could map the patterns of activation onto an MRI of that person’s brain. Real-time fMRI seems to do this. Localizing signal according to anatomical structures is important. First, because many of the experiments utilizing fMRI are aimed at determining what specific brain areas are utilized during a task, and those brain areas have, traditionally, been structurally distinguished. Second, even if brain regions are being functionally determined, as they are in some recent studies, it is important to be able to then locate these areas within the anatomical structure of the brain. To do either of these a clear image of the brain, showing its anatomy, is important.
In order to create an image of what is going on over time in a brain, researchers need to overlay the BOLD signal information on top of an MRI or other image of a brain that can serve as a guide to where increased (or decreased) signal is coming from.
Atlas Brains and the Map Analogy
Of course, brain mapping is not the only context of use for fMRI, but it is the one with which most people are most familiar. In most cases there are many subjects in fMRI experiments, and their data is compiled together in order to statistically measure differences between the two task states. Since every brain is different, the data sets must be stretched and warped to fit them into the standardized atlas brain. One of the earliest brain atlases was Talairach-Tournoux, which was derived from photographs of cross sections of a single brain.360 Certain anatomical landmarks were then used to plot out a three-dimensional brain space that can be used to plot functional data sets onto the brain, and that is used to define brain structures of interest such as the gyri and sulci of the cerebral cortex, or larger areas such as the temporal lobes. This brain space is taken to be a quantitative space of measurement, much like the Cartesian coordinate system used in certain kinds of perspective.361 A problem with the Talairach-Tournoux atlas has been that it does not adequately stand for all brains, and so problematizes what information is carried by the biological indicator in any given test. The brain used for the Talairach-Tournoux atlas comes from a woman whose brain was 10% smaller than normal, and only one hemisphere was used, the other being a mirror image of that half. Nevertheless, Talairach space is commonly used in fMRI studies (and software) in order to localize data into a space determined by particular anatomical landmarks.
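The ‘stretching and warping’ into a standardized atlas space can be pictured, in miniature, as a rescaling of coordinates. The following sketch is deliberately crude (the coordinates and bounds are invented, and real atlas normalization involves far more complex, often nonlinear, transformations): each axis of a subject’s brain is linearly rescaled so that its extremes line up with the atlas’s.

```python
# A toy sketch (invented numbers, not the real Talairach transform) of the
# "stretch and warp" step: each axis of a subject's brain is linearly rescaled
# so that its extent lines up with the atlas's extent.
def to_atlas(point, subject_bounds, atlas_bounds):
    """Rescale one (x, y, z) coordinate from subject space into atlas space."""
    out = []
    for p, (s_min, s_max), (a_min, a_max) in zip(point, subject_bounds, atlas_bounds):
        fraction = (p - s_min) / (s_max - s_min)        # position along the subject axis
        out.append(a_min + fraction * (a_max - a_min))  # same fraction along the atlas axis
    return tuple(out)

# A subject brain whose axes span 0-90, 0-108, 0-72 mm (invented), mapped into
# an atlas whose axes span 0-100, 0-120, 0-80 mm (also invented).
subject = [(0, 90), (0, 108), (0, 72)]
atlas = [(0, 100), (0, 120), (0, 80)]
print(to_atlas((45, 54, 36), subject, atlas))  # (50.0, 60.0, 40.0)
```

The sketch also makes the worry above concrete: once every brain is stretched to fit the same template, a coordinate in atlas space no longer corresponds to exactly the same tissue in every subject.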
An alternative to Talairach is the newer Montreal Neurological Institute or MNI atlas, a compendium of a number of different brains averaged together.362 The current international standard adopted by the International Consortium of Brain Mapping (ICBM) is the ICBM152, which is based on the MNI template. For this atlas, 241 brains were scanned and then manually examined and scaled according to anatomical features described in the Talairach atlas, and 305 normal MRI scans were then averaged to match up with this. The MNI152 uses 152 brains averaged together and normalized to the 305 template.363 360 J Talairach and P Tournoux, Co-Planar Stereotaxic Atlas of the Human Brain: 3-Dimensional Proportional System – An Approach to Cerebral Imaging (New York: Thieme Medical Publishing, 1988). 361 Beaulieu, “Pictures and Truth,” 68. 362 M Brett, “The MNI Brain and the Talairach Atlas,” Medical Research Council, Cognition and Brain Sciences Unit Technical Report, 1999, available at: (Accessed March 08 2010). The MNI template is much larger than Talairach, though, and there is still no real way of consistently comparing the two spaces. These anatomical atlases are used by brain mappers to ‘plot’ their data within the anatomically defined brain space. Using this brain space, regions of interest can be isolated and statistical information can be pulled out to be presented as a graph, as is commonly done. The information from these brain mapping experiments can then be considered numerically in terms of levels of signal on the x,y,z axis, or as colour-coded images. When images are made, statistically relevant differences between the two data sets are found, and that information is mapped onto an image of a brain and colour-coded according to pre-determined levels. Usually red codes high levels of signal, yellow and green medium levels of signal, and blue low levels of signal.
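This colour-coding convention can be made concrete with a small sketch. The function below is hypothetical (the cutoff values are invented, not drawn from any real software): statistical values are binned into colours, red for the highest, blue for the lowest significant levels, and anything below the significance threshold is left uncoloured so that the anatomical image shows through.

```python
# A hedged illustration (invented cutoffs, not any real neuroimaging API) of
# the colour-coding convention described above: significant signal levels are
# binned into colours, red for the highest and blue for the lowest.
def colour_for(t_value, significance=2.0):
    """Map a voxel's statistical value to an overlay colour, or None if it
    falls below the significance threshold and stays uncoloured."""
    if t_value < significance:
        return None      # below threshold: the anatomical MRI shows through
    if t_value >= 8.0:
        return "red"     # highest signal differences
    if t_value >= 5.0:
        return "orange"
    if t_value >= 3.5:
        return "yellow"
    if t_value >= 2.5:
        return "green"
    return "blue"        # lowest significant differences

print([colour_for(t) for t in (1.0, 2.6, 9.0)])  # [None, 'green', 'red']
```

The point made in the surrounding discussion is visible even here: moving a single cutoff changes which areas read as different in kind, without any change in the underlying data.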
The data produced in brain mapping experiments is necessarily statistical, subtractive, and comparative, and it is these data sets that cannot be produced without the use of brain atlases.
Real-time fMRI
Of course, fMRI is not only used for brain mapping. Recently, fMRI has moved into a number of different areas and has been used in diagnostic, therapeutic, surgical, and commercial contexts.364 In large part, this has been the result of the development of software allowing for fMRI analysis to be done in real time.365 Rather than gathering information from many subjects in a study, real-time fMRI is used to analyse the patterns of BOLD signal in single individuals while they are in the scanner. An upshot of this is that the tasks can be altered in response to observed patterns of response.366 Real-time fMRI is used for research as well. Depending on the research paradigms, or the context, the signal data from the fMRI can be translated into Talairach (or MNI) brain space or can be manually located on the brain of the individual in the scanner. 363 W Chau and A McIntosh, “The Talairach Coordinate of a Point in the MNI Space: How to Interpret It,” NeuroImage 25 (2005): 408–416. 364 Ibid. 365 Nikolaus Weiskopf et al., “Real-time Functional Magnetic Resonance Imaging: Methods and Applications,” Magnetic Resonance Imaging 25 (2007): 989–1003. 366 SM Laconte et al., “Real Time fMRI Using Brain State Classification,” Human Brain Mapping 28 (2007): 1033–1044. This is especially important in surgical applications where individual functional areas are important to planning brain surgery. fMRI is a versatile technology for examining the human brain at work. It is also a difficult and contested technology, as we shall see.
Pictures and Quantification
Having examined the naïve view and then the construction of pictures, it is interesting to consider what anthropologists have found about researchers’ perspectives on the technology they use.
Here is a segment of an interview from Beaulieu with a senior fMRI researcher explaining the process of acquiring fMRI data sets: Particularly now with image editing and enhancing, you can manipulate images, and ultimately, one should be able to track everything back down to tables of numbers, which are locations of activations in Talairach space, it’s the sort of concept that underneath a glossy exterior, there’s a strong skeleton, that refers back to [inaudible] that is what gives some coherence and credibility.367 The functional data set used for fMRI images is a series of numbers and coordinates, usually collected from a large group and smoothed to reduce noise and outliers, that captures measurable and significant differences between two task states. Here is a different researcher: Now, is that ever the point of any of these studies, is what the pictures look like? Not really. [it can show signal over noise]… But beyond that what they’re talking about is what the mental operation was that they used, what the paradigm was, what trick they used to isolate this out; they may show you a picture to prove that a function that you thought was unitary is really two. And it’s two because it has different cognitive characteristics, different reaction times, or, and it’s two spatially, and I’ll show you a picture of it and it’s here and here. So the point is for you to look at that and to admire that it’s laid out over space? No, the point is so you can see that there’s not a lot of overlap, that it’s two discrete areas and so two processing areas.368 367 Beaulieu, “Pictures and Truth,” 61. 368 Beaulieu, “Pictures and Truth,” 58–59. On the one hand this denies the pictorial by denying that the appearance of the images plays a significant role in the studies: the studies are really about the tasks developed to isolate cognitive activities. On the other hand, he seems to rely on the pictorial presentation in describing how studies can “show” differences.
This study suggests that fMRI researchers themselves think of the images as secondary to the data – the images present data, but are not themselves data. The images are merely a way of presenting the data in public, and can sometimes be a very problematic way of doing so. The idea that fMRI is quantitative and not pictorial reflects similar issues with representation in science: the idea that numbers, not pictures, are the domain of science and scientific objectivity. They are the ‘strong skeleton’ of scientific work. Once again, there is an association of the pictorial with beauty and admiration, which reflects the idea that images can capture the imagination but cannot be a serious part of the scientific endeavour. Consider the following from a senior researcher: But I think that for a field to grow up – and everybody thinks the same, I don’t think I’m giving you views that are different from anybody else, in the best places, I don’t think I’m putting forward views that are the slightest bit heretical – it becomes important that it become quantitative, science doesn’t just deal with pictures, it deals with counting things, graphing things, plotting things.369 In opposition to the naïve view, these interviews suggest that to (at least some of) those involved in the field the images are merely another way of presenting hard quantitative data, as if pictures themselves cannot be a way of measuring, graphing, or plotting. The visual content of the images is taken to be adjunct to the data. While the researchers see the studies as the main important work they are doing, the production of images is what is considered important elsewhere. There is a tension here, however, since the researchers treat fMRI as if it were allowing them to look into the brains of subjects while at the same time treating the images as secondary. The images themselves are seen as data by the public, while the role of images as data is less valued by those making the brain scans. 369 Ibid.
This perhaps has to do with a different understanding not only of the role the images play, but of what the images represent and can be visual evidence of. As Beaulieu suggests, however, the truth has to lie somewhere between the research being purely quantitative and it being purely visual. MRI images, as has been discussed, represent magnetic resonance signal as luminosity. Contrast media can be used to increase signal from certain kinds of tissue, so that they stand out more in relation to each other. fMRI uses de-oxygenated haemoglobin as a contrast agent and uses different, faster scanning protocols in order to receive increased signal from areas of higher de-oxygenated haemoglobin. The signal→luminosity representational scheme is again the one in use here, basically the same as in MRI. What is different in fMRI is that colour is used to represent areas of signal whose strength is higher in a statistically significant way than that from the surrounding tissue. If we recall the history of MRI, colour was initially used to represent areas of different levels of signal until it was found to be more difficult to interpret (at least to those who used the images in practice) than using luminosity values. In MRI images, luminosity values (in a pixel) play several roles: they show level of signal, they delineate the spatial position of that signal within a slice, and they help to display the anatomical structures of the brain in terms of differences in tissue type. Tissue type cannot be read off the luminosity values on a pixel-by-pixel basis. Rather, the relative signal differences from different types of tissue allow for a grey scale pattern in which we can visually distinguish the different kinds of tissue. Colour is used to represent areas of the brain with higher levels of signal over time, which is thought to suggest the use of that area in processing a particular task.
The colour system generally used represents the highest measures of signal as red and the lowest as blue, with those in between being represented in descending order as orange, yellow, and green. Colour plays a multiple role here. It shows intensity of signal in the differences between the tasks and tells us where that signal was. It does not represent time or duration. Colour is used to represent the specifically functional in fMRI. The pictorial features of the kind of fMRI images we often see are provided by the MRI. According to Tufte, colour can play four different roles in displaying information in images: it can label (acting as a noun), it can measure or show quantity, it can represent or imitate reality, or it can enliven or add beauty to a display.370 In the images where he thinks colour is best used it plays multiple roles (even all of the above roles), making information clearly accessible, with the beauty of images being part of this. In fMRI, colour is used as a measure, and to add beauty to the image. It plays multiple roles and yet does not seem to track the kinds of real similarity relations that MRI does – both because of the indirectness of the measure and because colour makes areas appear vastly more different than they might actually be. This is more important when those colours are furthermore used to define kind differences not only in terms of levels of brain activity, but in arguments concerning different kinds of brains imaged: We do not have to suspect the accuracy of the underlying experiments to recognize that the visual appearance of “graphically” different brain types is produced, in part, by a choice to visualize the data as very different in colour.371 The decisions made about how images are coloured determine how visually apparent the differences between kinds of brains are.
Because of the way that colour differences are read as differences in kind, an area represented as red in one image and green in another can make differences in signal intensity seem more or less significant than they are. The way two different brain activation patterns are coloured can control what is read into the image. Questions concerning neuro-realism or neuro-essentialism are closely related to the way that statistical data are presented in coloured blobs of activation patterns. 370 Tufte, Envisioning Information, 81. 371 Dumit, Picturing Personhood, 17.
Interpreting Pictures and Statistics
What makes functional images so powerful is the combination of the functional mapping and the clear detailed MRI images. What is important to understand about these images visually is that there are two different measurements of signal in these images: there are two different representational systems mixed together. If MRI images are, representationally, like x-ray or ultrasound, fMRI images are more like atlas maps: like data sets imposed over aerial photographs. Sorting out the content of fMRI images as two-dimensional representations requires examining how we are to interpret these images. In a structural MRI image of the brain (or other body part) what we see is the tissue that caused the image and that is represented there. Ordinarily, MRI is a transparent medium. It stands in an appropriately visual causal chain to perception, is belief independent, and the signal→luminosity representation ensures that it maintains real similarity relations of the properties in question. MRI is usually in the business of representing particular brains and bodies – and the one at the scanned end of the causal chain is the one that we see. MRI depicts the brain and shows its different tissue types. To interpret MRI images requires knowledge of the brain, if that is what one is looking at.
In a diagnostic context, contextual knowledge can include knowing a patient’s history and symptoms, and past diagnoses, but the way of understanding the image is in terms of scrutinizing the image as if one were scrutinizing the brain for diagnostically relevant areas either in the anatomy or in the signal. The imaging system ensures counterfactual dependency between signal recorded in the brain and the luminosity values presented in the image – likewise, there is this relation between kinds of tissue and levels of signal. This makes the recording of signal in the images a causal relation independent of the beliefs of those making the images. The MRI images used in many fMRI images, on the other hand, depict a particular brain that is being used to stand for all brains (and this has been found problematic), or are abstractions from many brains that are used to represent the brain. This also occurs with photography: images of token particulars can be put to the task of standing generally for the type represented. A photograph of a crow can stand, say in a dictionary, for the type ‘crow.’ As discussed earlier, this exemplification is often how we initially learn about the visual appearance of things. Our visual recognition and categorization skills can be developed through seeing objects themselves or objects in images and learning their typical features. If we want to learn about the appearance and features of ordinary adult crows, an image of an albino crow will not do. Albino crows, being white, do not stand for the class Corvus brachyrhynchos, which are normally black. Even though some variation in colour is found in crows, grey or whitish colouring is considered to be a variation and not what is standard of the appearance of crows. For this reason, zoological and botanical guides will often use paintings or drawings instead of photographs as representations to teach standard features of appearance.
Within the context of saying what the definitive visual features of a class are, handmade images can be crafted to capture those features without adding extra features that might be idiosyncratic to a particular or that might be products of the research context. In fMRI we do not see a single source brain and its features the way we do in an ordinary structural MRI. When this is considered problematic, it is not because an image of a particular is standing for a class. The problem revolves around how well that particular represents the class in terms of the features that are salient in a particular context. In the case of fMRI and brain mapping, the problem is whether general claims can be made about brains and their functional features from the studies. The problem is not whether an MRI can stand for the type ‘brain’, because that depends on the demands of the context, but what the functional data can represent. In terms of the content of images, the question is whether they show patterns of activation for particular tasks as types, whether we can see patterns of activation for types. There are a number of problems with functional imaging as a visual technology. fMRI is not transparent the way that MRI is. Even if the relationship between colour and level of relative signal were one that preserved real similarity relationships (which is uncertain), the threshold levels set for significance determine what colour a level of signal will be assigned, and both the task selected and the researcher’s interpretation of significant values will affect how the significance of signal will be understood. While we might say that there is the appropriate kind of causal relationship between physical signal and the colours assigned to it, this is not yet to say that there is any such causal relationship between level of signal and the particular mental event being studied. 
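The dependence of the coloured image on the chosen significance threshold can be made concrete with a small sketch. This is a minimal illustration only, not the pipeline of any actual analysis package; the two-bin colour scheme, the voxel t-values, and the thresholds are all invented for the example.

```python
import numpy as np

def colour_map(t_values, threshold):
    """Bin voxel statistics into colours; sub-threshold voxels stay uncoloured.

    The threshold is a free parameter: raising it shrinks the visible
    'activation blobs' without any change in the underlying data.
    """
    colours = np.full(t_values.shape, "none", dtype=object)
    colours[(t_values >= threshold) & (t_values < threshold + 2)] = "red"
    colours[t_values >= threshold + 2] = "yellow"
    return colours

t = np.array([1.8, 2.4, 3.1, 5.6])   # illustrative voxel t-statistics
low = colour_map(t, threshold=2.0)   # three voxels coloured
high = colour_map(t, threshold=3.0)  # only two voxels coloured
```

The same four numbers yield a larger or smaller "blob" depending only on where the significance cut-off is placed, which is the sense in which the threshold, rather than the signal alone, determines what the image shows.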
What this implies is that there are extra levels of inference involved in interpreting fMRI images that are not involved in interpreting other medical images. This is why researchers stress the importance of the experimental conditions. Inferences from pictures to claims about the mind, or even the brain, depend upon the tasks that were used to acquire the data. Such inferences are warranted only by the relevant differences between conditions and by the degree to which the experiment is able to isolate specific areas that are well established – for example, through lesion studies – as being connected to particular mental states. Researchers have been able to do inverse retinotopy experiments in which they can tell, by looking at patterns of activation, what people in the scanner are looking at with a high degree of accuracy.372 Being able to infer the visual stimulus by looking at activation patterns, however, is not yet to say what it is about vision that makes that specific pattern typical for a stimulus – there is still a great deal that needs to be understood about the brain and about vision before the relationship between patterns of activation and particular brain (and especially mental) states can be established. 372 Bertrand Thirion et al., “Inverse Retinotopy: Inferring the Visual Content of Images from Brain Activation Patterns,” NeuroImage 33 (2006): 1104–1116. Part of the problem with interpreting fMRI, and part of why it seems visually appealing, is that it is a mixed representational system – a transparent image overlaid with data presented in a different representational system: a token brain standing for every brain, with a statistical and idealized data set over top. 
Jim Bogen suggests that fMRI images should be regarded as visual representations of idealized brains, more in line with scientific models than with pictures.373 There is something that seems correct about this position, yet it is also at odds with the idea that mental states are somehow part of the content of fMRI images. fMRI images can be seen as images of mental states only when there is a perfect correlation between the mental state in question and the brain state in question. There is, of course, no such correlation – this is why Bogen says they are idealized brains and not minds. These might be images of idealized brain states during a particular task, but that does not yet tell us what that brain state can represent. In many cases of imaging, to reason from the image is an acceptable way to justify answers to the question being investigated in the experiment. fMRI is often used this way, but these scans do not seem capable of deciding between competing theories of cognitive psychology – the same pattern of activation may be evidence for two different interpretations where mental events are under investigation and competing understandings of the functional areas of the brain are at stake.374 There remain basic underdetermination problems with the claims that can be supported by the results of particular imaging procedures. Can ‘type’ factor into the causal chain of vision? Many people think that visual experience represents types, at least in terms of the kind of thing something is.375 One of the very interesting features of functional imaging, both fMRI and PET, is that they could be ways 373 Jim Bogen, “Epistemological Custard Pies,” S21. 374 Christopher Mole et al., “Faces and Brains: the Limitations of Brain Scanning in Cognitive Science,” Philosophical Psychology 20 (2007): 197–207. 
375 Susanna Siegel, “What Properties are Represented in Perception?” in Perceptual Experience, ed. Tamar Szabo Gendler and John Hawthorne (Oxford: Oxford University Press, 2006). of creating mechanically produced images of types. The problem is not whether this is possible, but whether the types themselves might be spurious and constructed through manipulation of the data. A related problem has to do with what kinds of brains can be represented. For example, can an MRI be of a schizophrenic brain through averaging? One might think that one could average out signal data from schizophrenics and see what is particular about their brains. The problem is that behaviourally and cognitively determined afflictions are not known to have a particular neurological or biological source. The same symptoms that we describe and diagnose as schizophrenia may have multiple sources. Currently researchers can look for similarities in schizophrenic brains and cross-check them with symptoms, but no meaningful picture of a schizophrenic brain can be produced. Using fMRI we might find similarities between the brains of schizophrenics without having conclusively established that there is a biological basis for the various groups of conditions that lead someone to be diagnosed as schizophrenic.376 Of course there is nothing in the technology to stop it from representing a schizophrenic brain, or a gambler’s brain, or a depressed brain if there are measurable functional differences in each of these cases that can differentiate between these states. The issue is that without being able to control for all the other kinds of contingencies that could exist as correlations, we will not know when we have created such a picture. 
This is part of the problem with neuro-essentialism: if we measure the brains of a large number of gamblers and find differences, we still cannot say that we have a representation of a gambler’s brain that could be usefully differentiated from a non-gambler’s brain. It could be that the patterns we see in a gambler’s brain underlie patterns of emotional response that are also evident in non-gamblers who engage in other risk-taking activities. In a less political case, the same kind of situation has come up for attempts to support a functionally described ‘face recognition’ area of the brain. The use of different colours to signify different levels of signal, the use of underlying MRI images, and the visual interpretation of this can make the images seem like more concrete evidence within an experimental context than they really are. 376 Dumit, Picturing Personhood, 144. Mole et al.’s discussion of brain scanning studies of the separate processing of face perception examines the purported directness of fMRI evidence for cognitive neurological claims. For example, claims made by Kanwisher that faces are processed differently than other stimuli (that they have a specialized area, the Fusiform Face Area) have been challenged by Gauthier with alternative explanations that invoke other cognitive resources rather than just a special area for faces – these include the suggestion that faces interest us as particulars while other objects interest us as sorts of objects, or that we bring more expertise to looking at faces. Studies show a particular brain area as having increased levels of signal during facial perception, which is why it was called the Fusiform Face Area. However, this area is also active in studies on visual familiarity, so the data might reflect familiarity with a stimulus. Another challenge comes from newer multivariate processing, which analyses not only gross signal from areas of the brain but interactions between brain areas. 
This has had some significant breakthroughs in understanding brain processing in terms of networks of activation. The assessment of what is going on with processing, and of the relation of the brain states to the mental states judged to be studied by research tasks, reinforces the disconnect between fMRI images and the idea that they are images of mental states. We might be able to see an (idealized) brain at work; but we are not capable of seeing what it is at work doing. The Rhetorical Power of Pictures The combination of coloured functional data and MRI yields a powerful image. These images are powerful not only in the clarity and detail they present, but also because they carry with them the authority of mechanically produced images. Whether or not mechanically produced images really are uniquely belief independent, they are culturally recognized as authoritative, as is evidenced by photographs, and not drawings, being used in court. The visual authority of the mechanical seems to be at play in the naïve view of fMRI. This explains the kind of realism described in the naïve view and seems to underlie Roskies’ hypothesis that fMRI images are being treated much like photographs. The idea of the mechanical being used to capture something at a general level is also powerful. The idea that mechanical objectivity can be harnessed to represent features of something as amorphous as the mind captures the imagination of both scientists and lay people, both in terms of there being a ‘gestalt’ and in terms of quantification. The tension between these two reflects not only ways of interpreting but also ideals of science and how, and what, it can represent. On the one hand, images are seen to have evidentiary certainty because they come from a scientific field. On the other, images and image interpretation are viewed with some mistrust when measured against the ideal of quantification. 
Understanding the content of functional images – how they are made, how they are interpreted, and how they should be interpreted – is important to understanding their centrality in science. When it comes to brain mapping or functional localization uses of fMRI, it is clear that the important issue is what experimental criteria are used. The fact that pictures themselves are often used as evidence without discussion of experimental procedures exacerbates the naïve view of fMRI and creates inflated ideas of what precisely can be seen in fMRI and therefore what can be done with the technology. The certainty with which claims are sometimes made about what can be seen or discerned about functional architecture in the brain using fMRI also exacerbates this problem. Seeing is taken to entail believing. fMRI has more mystique even than photographs in that it allows us visual access to previously unseen things. The purported object of fMRI images is not merely the brain but the brain at work and perhaps even our very selves. fMRI not only satisfies our curiosity about what the brain is doing, but is used in answering some of our biggest scientific questions. Of course fMRI is not like a photograph. Rather, fMRI draws its appearance of mechanical authority from the MRI that the functional data is mapped onto. It seems on the naïve view as if it presents a mechanically produced image of a brain at work because the two different data sets are not recognized as entirely separate systems of measurement. The anatomical detail visible in an MRI carries into the interpretation of the fMRI image, making it seem as if we are seeing through the image both to a brain and to its function. It is like praising a map that superimposes data over an aerial photograph simply because of the detail of the photograph. The two systems need to be considered separately. 
This conflation is problematic because it can overshadow some of the very real benefits and problems of fMRI and the interpretation of fMRI experiments, which have to do specifically with the experimental set-up and with how the statistical data are treated. fMRI images can represent (pictorially) a brain at work, an idealized brain in brain state (S¹ minus S²), but we are still far from saying whether it represents some mental state M. In fMRI, visual knowledge of the brain and its functioning cannot be brought to bear in interpreting the images without full knowledge of what exactly the states are that the images do represent. Without this background knowledge we cannot even say what it is that we are seeing, except a brain and coloured blobs. fMRI (and other functional imaging such as PET) differs from other kinds of imaging in that it is generally used in a research rather than in a diagnostic context. The standards of scientific research demand the use of large groups and the analysis of large data sets, which are just the things that are at odds with the imagistic presentation (if it is being thought of as related to photography). Most medical imaging is done for diagnosis, where the information sought and dealt with is from a single particular brain. If we compare fMRI images in research, as discussed above, with fMRI in diagnosis, some different questions arise that give us another way to consider fMRI. Imaging in diagnostic and therapeutic contexts usually includes making discriminations between what is normal and what is abnormal, but it can also be used to see progress in healing. fMRI is often used to check on brain function in the aftermath of a stroke.377 In this case, however, it is not being used to measure mental acts but instead to measure states in the brain, where this is understood in terms of blood flow or the brain having any functional activity after damage.  
A benefit of real time fMRI is that the operator can alter task parameters as they go in order to further investigate neural associations – this can be with behaviours such as speech, or other activities affected by stroke damage.378 These images do not, however, claim to represent mental states. In this context (some) researchers are more likely to discuss brain activity in terms of metabolic actions within the brain.379 Although the BOLD signal has come under some fire, and is not the only measure that can be used for metabolic activity,380 it has been useful in planning surgeries and tracking rehabilitation despite being such an indirect measure of neural activity.381 In real time imaging a specific brain is represented, and that brain is represented in a brain state defined by tasks – but it is not being put to work as representing the mental acts in question. Again, context determines what is presented as the pictorial content of the images and how interpretation should take place. As with other imaging technologies, this examination of fMRI should emphasize the importance of professional vision in image interpretation. fMRI is a particularly interesting technology in that it has garnered such wide public distribution and has the potential to be extremely persuasive in determining public views of brains, minds, and what science can determine about them. The “neuro-policy” trend found by Racine and his collaborators shows 377 Geoffrey K. Aguirre, “Interpretation of Clinical Functional Neuroimaging Studies,” in Functional MRI: Applications in Clinical Neurology and Psychiatry, ed. D’Esposito (Oxford: Informa Healthcare, 2006): 9–24. 378 Aguirre, “Interpretation of Clinical Functional Neuroimaging Studies,” 20. 379 Ibid., 12. 380 Yevgeniy B. Sirotin and Aniruddha Das, “Anticipatory Haemodynamic Signals in Sensory Cortex not Predicted by Local Neuronal Activity,” Nature 457 (2009): 475–479. 381 Aguirre, “Interpretation of Clinical Functional Neuroimaging Studies,” 253. 
not only the use of neuroscience to define policy but the desire to use functional imaging studies in this way.382 This is important given how widespread the naïve view seems to be, and the realism and certainty attributed by it to fMRI technology. As more issues arise, like using neuro-imaging for lie detection, it is important to differentiate between acceptable and unacceptable visual interpretation of these images. Researchers’ suspicion of images may be overblown in favour of the numbers – because visually recognizable patterns of activation are important – but neither is the kind of visual training and interpretation fMRI demands disseminated in our visual culture. Just as geographers and police may be able to see things in scenes, or attribute specific meanings to them, in a way that non-specialists do not,383 so do we need to understand specifically what fMRI images can be said to attribute to the brain and then what they can be said to attribute to the mind. 382 Racine et al., “fMRI in the Public Eye,” 160. 383 Charles Goodwin, “Professional Vision,” American Anthropologist 96 (1994): 606–633. 7.   Medical Imaging and Expert Vision That medical images are sometimes transparent does not mean one does not need to learn how to see objects in them. Indeed, one feature that becomes central in the case studies is the importance of professional or expert vision. Seeing three-dimensional objects in slices is accepted as a way of seeing things that is of diagnostic and medical importance, but it is not an everyday way of seeing the body. While my lack of knowledge about brains would not prevent me from seeing one if it were in front of me, it might prevent me from seeing one in a brain scan. Some planes of view of the brain are much more familiar than others.384 Knowledge plays an important role in imaging. 
While you might be in visual contact with a brain when you see it in a picture without knowing that you are seeing a brain, in an important sense you aren’t then seeing a brain. Your visual experience is causally connected to a brain, but your visual experience does not represent a brain. As an important part of medical imaging is seeing-in, we need to explain the role that knowledge plays both in seeing-in pictures generally and in seeing-in a certain kind of picture. Seeing Through Different Systems Seeing a slice of brain in a T1 weighted MRI is different from seeing the same slice of brain in a T2 weighted MRI. In the first, bone appears white and cerebrospinal fluid appears dark, while in the latter this is reversed. High levels of signal appear white in both, but the different signals allow different relationships between tissues to be seen. This is comparable to seeing things in either black and white or in negative black and white. There is something a little spacey about seeing shadows on a body as white, or about seeing cerebrospinal fluid as white. Part of this is because brightness is often a visual cue to interpret something as highlighted, as figure over ground, or as closer. It is not impossible to see things in these images, and in fact we figure them out pretty quickly. 384 Examining expert vision here is not to take the place of an epistemic virtue such as Daston and Galison’s trained judgement. Understood in the sense I will be discussing, expert vision is a separate idea. Although it might be invoked in trained judgement, it would also have a role in truth to nature. Field naturalists depend on expert vision to differentiate different kinds of birds, for example, and to make fine discriminations between apparently similar species. 
Negative images look strange because the luminosity values of the design are reversed: rather than higher signal (in this case brightness) being represented as higher luminosity, as in a photograph, negative images represent higher levels of signal as lower levels of luminosity. In the negative view, features that would be experienced as bright were we to see the scene face to face are seen as dark. While the image still encodes the same information, it is not the same to us. Brightness is preserved in the first case but not in the second. We get to experience what something might look like if we saw in reverse, but we are able to see less in the picture because it has fewer real similarity relations: it is less transparent. Our knowledge of the appearance of things allows us to correctly interpret an image despite its different design features. T1 images are the best for anatomical detail because T1 represents the body closest to the way that anatomical structures are usually seen. Anatomical features that are obvious in T1 might be obscured in another sequence because they appear more similar in luminosity or because the represented tissue contrast obscures usually apparent features. Negative images are not very common generally, but in MRI imaging they play a central role. The T2 weighting makes it easier to see tumours compared with normal tissue because tumours appear bright on these scans and are more clearly differentiated from surrounding tissue. They appear as figures more easily. In a sense, this might also be compared with our familiarity with detection in other imaging modalities – in x-ray based modalities tumours generally appear as white because, due to calcification, they attenuate more x-rays than normal soft tissue. Other imaging sequences in MRI are useful for other diagnostic purposes. A great deal of research remains to be done in establishing the best imaging sequences for different imaging tasks. 
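The point that a negative preserves the encoded information while reversing the appearance can be put arithmetically. The sketch below assumes a simple 8-bit greyscale image; the sample pixel values are invented for illustration.

```python
import numpy as np

def negative(image):
    """Reverse 8-bit luminosity: the highest signal becomes the darkest pixel.

    The mapping is one-to-one, so the negative encodes exactly the same
    information, even though the usual brightness cues are inverted.
    """
    return 255 - image

slice_values = np.array([[0, 64], [192, 255]], dtype=np.uint8)
inverted = negative(slice_values)  # [[255, 191], [63, 0]]
# Applying the inversion twice recovers the original image exactly:
assert np.array_equal(negative(inverted), slice_values)
```

Nothing is lost in the inversion, which is why a radiologist can in principle read the same anatomy from either polarity; what changes is only how well the display matches our face-to-face brightness cues.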
Likewise, in ultrasound the 2-D mode can make tissue appear bright, and so closer to us, when it is not. A benefit of 3-D ultrasound is that when we are interested in surface edges of the subject we do not have to interpret around competing areas of high signal. There are many ways that knowledge can and does play into what we are able to see in images, and what we want from an imaging task will affect the choice of system. This also concerns who needs to see what, since different contexts can demand very different visual and conceptual skills. For this reason, images have to be able to support a hierarchy of perceptual acts or interpretations, from the very basic to the extremely involved. Visual Concepts and Expert Vision Expert vision has not been widely studied in psychology, but it has many features that are important for understanding visual concepts and the kinds of distinctions that can be made based on visual examination of something. If you and I are standing in the local park watching a wide variety of shapes and sizes of dogs run around, we will both have visual experiences that represent each of these as furry moving things (unless my hairless dog Miguel were there), or as various sizes, shapes, and colours in various locations against a background of greenery. As a dog novice, I categorize them all as dogs; the similarities between these different things are such that they all fall under that particular concept. You, however, are an expert on dogs and are able to tell what kind of dog each of them is. I cannot differentiate between a Boston Terrier and a French Bulldog, while there is an obvious difference to you. You have visual concepts of dog breeds that I lack. I can learn how to reliably separate the Boston Terriers from the French Bulldogs in two ways. First, you could tell me what to pay attention to in the appearance of the dog in order to properly categorize it. 
The jaw width, skull shape, and chest, for example, are all different in these two breeds. I could then apply these rules and determine whether the dog I am looking at is a Boston Terrier or a French Bulldog. Another way I could learn to reliably distinguish these two breeds is to have you point out different exemplars of each. After a while I could develop a visual heuristic, a kind of gestalt appearance of each breed that allows me to categorize it correctly. The rule-following case is more associated with novice viewers, and the latter more with how expert vision works, but the interesting point is that our classification of things into different categories depends on which visual features we are discriminating. The interplay between perception and conceptualization is important in how we understand visual objects. Any object, such as a dog, can be categorized at various levels of abstraction, for example: white thing, animal, mammal, dog, terrier, West Highland white terrier, Tybalt (my Westie). There seem to be different goals involved in categorizing objects that might compete with each other. At the park we generalize across a number of different visual features to see all of those moving things as dogs and to differentiate them from the humans and other animals in the park. If, on the other hand, I am trying to identify which Westie is Tybalt from a group of other Westies, I am involved in discriminating between (very) similar objects. I am attending to specific visual features of each Westie and looking for the features that belong uniquely to my dog while ignoring those that are similar to all Westies. Generalizing across visual features and discriminating between them are different ways of attending to aspects of the visual world, and this is all based on gathering patterns of visual information and using visual concepts. Categorization happens at different levels of a hierarchy. ‘Dog’ is considered a basic, or entry-level, category. 
‘Terrier’, ‘Westie’, and ‘Tybalt’ are all subordinate categories, while ‘Mammal’ and ‘Animal’ are super-ordinate categories.385 One would think that one could not have a visual concept of dog without that concept also carrying the nested super-ordinate information that a dog is an animal and a mammal, but this is not the case. Our basic concepts are the ones that are most salient, and on traditional models of mental representation subordinate categorization requires additional perceptual processing, while super-ordinate categorization requires further semantic processing.386 This aspect of visual categorization shows how relevant the visual aspects of something are for categorizing it. Neither ‘animal’ nor ‘mammal’ is a category that is defined visually: there are no specific visual features that would force an object into either category. Furthermore, one cannot see an exemplar of a mammal or animal without seeing some particular kind of mammal or animal. Whether we think of category formation as being exemplar or schema based, these super-ordinate categories are not entirely visually defined. Expert vision is interesting because, for experts, subordinate categories do seem to be visually defined and tracked by visual concepts. Experts categorize things into subordinate categories as quickly as they do basic categories, and more quickly than super-ordinate categories.387 Studies of dog experts have found that categories such as ‘terrier’ are more quickly brought to mind than ‘dog’ or ‘mammal.’388 The view that this processing is semantic has been challenged. One could, for example, become a dog-discerning expert and be able to group dogs by similarity without having words attached to the different categories. Another aspect of expert vision is its automaticity. In novice viewers who are learning about something, some kind of explicit rule following seems to occur. 385 Thomas Palmeri and Isabel Gauthier, “Visual Object Understanding,” Nature Reviews Neuroscience 5 (2004): 291–304. 
In order to see whether a particular tree is a Douglas fir or a cedar, because they have very similar overall features, one can look at bark and branching pattern. For some people who are experts, a Douglas fir just looks like a Douglas fir. Studies have found that alongside this is what is called a ‘holistic processing effect’, which is the inability to ignore areas that are not relevant to the categorization.389 Certain features become visually salient when we become perceptual experts that are not salient when we are novices. 386 Palmeri and Gauthier, “Visual Object Understanding,” 299. 387 Ibid., 300. 388 Jim Tanaka et al., “Object Categories and Expertise: Is the Basic Level in the Eye of the Beholder?” Cognitive Psychology 23 (1991): 457–482. 389 Palmeri and Gauthier, “Visual Object Understanding,” 300–301. Expert categorization into subordinate categories seems to become a recognitional task, not one that requires inferences from the visible features of objects. Whether this means that it is being processed as identity (like when I identify Tybalt) or not is an active question in the literature, because expert vision seems to utilize the same brain areas as face recognition. The exact mechanisms for this are interesting but are not relevant here. On the one hand, two things that look the same to me, like cedars and Douglas firs, may look very different to someone with more expertise. On the other hand, two things that do not look the same to me may appear similar to someone with expertise. That these can be visual differences seems to me to be necessary to explain perceptual expertise and to support the various ways that visual differences and similarities enter into judgements that we make about what we are seeing. Depending on the visual task, we are mustering different visual skills. Expert Vision in Radiology One of the main areas where studies of expert vision are conducted and thought to have practical application is radiology. 
A study by Harold Kundel and Calvin Nodine of expert viewers compared with lay viewers found that, contrary to how radiological interpretation is often taught, radiologists and lay people alike describe images in terms of their meaning, that is, what they see in them.390 The viewers were presented with a number of pictures and told that each picture was recognizable and that it could be described in a sentence. The study used a hidden object picture of a cow’s head as well as a number of different medical images chosen so as to be familiar to radiologists but unfamiliar to lay people. Kundel and Nodine examined differences in eye saccades between those who recognized objects in the images and those who did not, and compared them to what they had previously hypothesized were the ‘meaningful’ aspects of the image. What they found was that the eye saccades of those who recognized the objects pictured went immediately to the relevant area, while those of the people who did not recognize the objects ranged over a number of irrelevant surface features. Only two viewers recognized the cow’s head in the image, and those who did not recognize it (including all of the radiologists) gave descriptions of it in terms of frames of reference – an abstract image – or invented possible objects. The radiologists described it as “an abstract picture of a fish,” “something from the ceiling of the Sistine Chapel,” and “meaningless blobs very much like a whale.”391 The differences between the descriptions given by the laypeople and the radiologists of the medical images were also interesting. 390 Harold Kundel and Calvin Nodine, “A Visual Concept Shapes Image Perception,” Radiology 146 (1983): 363–368. 
The laypeople described the medical images in terms of frames of reference: for a sonogram, “an aerial photograph,” “an ultrasound of some body part,” and for a retrograde pyelogram, “an x-ray of the abdomen,” while the radiologists described the images in diagnostically meaningful terms: “longitudinal ultrasound scan showing a dilated common duct from a pancreatic mass” or “Big kidneys, slightly stretched calyces, polycystic disease.”392 Their analysis of the descriptions showed that people who recognized the objects in the images made statements in terms of the meaning of the images rather than descriptions of the image or the frame of reference, while the non-recognizers tended to make statements about the frame of reference, i.e. what kind of image it was. Kundel and Nodine interpreted their findings to demonstrate the importance of a visual concept for recognition of diagnostically important objects in radiology.393 Their studies of medical image perception present a new dimension to research in the field. Research in medical imaging has often focused on psychophysics for improving image quality, on finding the balance between contrast and clarity, and on other features of the images. In turn, radiological teaching had often focussed on signs and on pattern recognition of design features rather than on the more psychological aspects. Kundel and Nodine’s suggestion was that rather than being taught pattern recognition, trainee radiologists should be exposed to more images so they could develop visual concepts of the appearance of things.394 Knowledge of the visual appearances of anatomical structures, the ability to recognize abnormalities, and knowledge of conditions liable to cause such appearances are all part of the expert vision associated with medical image interpretation. 391 Kundel and Nodine, “Visual Concept,” 365. 392 Ibid. 393 Ibid., 368. 
Pictorial Content and Evidence

The content of pictures – what they attribute to their subjects – is often in line with what we can see in them. On the one hand, the pictorial content of medical images can be described simply in terms of the space, shape, and location of tissue. On the other hand, the interpretation of these features involves enormously complex recognitional, categorical, and inferential acts. Consider the kind of clinical claims that are made about what is seen in an MRI image:

magnetic resonance imaging shows a late subacute to chronic hematoma as a space-occupying lesion in the right posterior fossa. The hematoma shows a large medial subacute component and a small lateral chronic component.[395]

Is "having a late subacute to chronic hematoma," or less specifically "having a space-occupying lesion in the right posterior fossa," something that the image attributes to this slice of brain? Is it part of the content of the image in the same way that "having curly hair" is something a picture of Joshua Johnston may correctly attribute to him? Far less knowledge is required for us to be able to see Joshua's hair as curly, or to see that the picture attributes having curly hair to Joshua. That we can come to know the appearance of a hematoma in the right posterior fossa by seeing it as a space-occupying lesion in a specific area of the brain in the MRI is a little counterintuitive. A great deal of specialist knowledge is required to have such a perceptual experience. It also entails that the contents of a picture are determined by the most detailed and expert description of what the picture attributes to its subject. This, however, seems to be the case.

[394] Ibid.
[395] J. Astekar et al., "Brain, MRI Appearance of Hemorrhage: Multimedia," Medscape eMedicine, Radiology specialty section (accessed April 17, 2010).
In the picture of Joshua, the full contents of the picture could include all attributions a knowledgeable viewer could make based on any visually discriminable properties that can be determined by visual inspection of the image. If, for example, someone were able to determine something about the state of Joshua's health based on the way his hair curls, then that would be something the picture might attribute to Joshua, and it would be among the contents. Of course, grasping the entire content of the picture requires detailed visual discrimination abilities and visual concepts that most people do not possess. This suggests a nested hierarchy of contents, where the super-ordinate determinable properties that are visually discriminable in a picture define its ultimate contents but carry with them a host of other properties. A photograph of an elm tree afflicted with Dutch elm disease cannot help but depict an elm, a particular pattern of branching and leaf distribution, a tree. If I know nothing about tree varieties and how diseases affect their appearance, I will still be able to grasp some of the contents of the picture, such as that it is a tree with a particular pattern of branches and leaves. What is interesting about this is that it can explain the different contexts in which grasping certain contents of pictures is useful. If I want to see what the new neighbourhood I am moving into looks like, what its general visual appearance is, I can use a photograph for that. If I also want to see what kind of flora there is, I could use the same photograph, even if it meant looking through a field guide to plants to identify the ones I see in the picture. If I became more expert in the visual discrimination of those plants, I could perhaps also visually determine whether they were healthy or unhealthy. I have argued that the contents of pictures are defined by what could be seen in them through visual inspection and scrutiny.
The limit has to do with what we can determine based on the visual features presented. This is to say that the content of a picture can outstrip what is seen in it by any particular viewer. From there, we also make many other inferences which are not visual. I might infer from the state of the healthy plants in my new neighbourhood that it is generally sunny and quiet, with good soil and few cars, by working backwards from what I know about healthy plants of a certain sort through the factors that generally affect the health of plants. Seeing healthy plants in the picture becomes evidence for the other claims that I want to make. But I do not see those things; rather, I use what I do see as evidence. Determining what it is possible for, not to mention expected of, a viewer to be able to see in any image has become an issue in imaging. Research into medical image perception has identified a number of kinds of errors and has attempted to devise models and suggestions for correcting them. I will return to this point about image perception after discussing how images and image interpretation can play evidentiary roles. The contents of pictures underlie two kinds of evidence. One is what we can attribute to the subject based on the visual features discriminable in the picture; this is equivalent to what can possibly be seen in the picture. We can also use these determinations, this perceptual information, as the basis for further inferences. Reasoning with images is reasoning from visual information, from the content of pictures, and not only from the picture surface or from the representational system. Here are some examples contrasting these points, the most straightforward one first.

1a) A sonographer and a pregnant woman are looking at an ultrasound image of the woman's uterus. They both want to know the sex of the baby. The sonographer moves the transducer around on the woman's abdomen until she has a clear view of the genitalia.
She sees the genitalia in the image of the foetus and, in recognizing female genitalia, sees that it is a girl and so informs the mother.

1b) A sonographer and a pregnant woman are looking at an ultrasound image of the woman's uterus. They both want to know the sex of the baby. The sonographer moves the transducer around on the woman's abdomen until she has a clear view of the genitalia. They are both able to see the genitalia in the image and, in recognizing female genitalia, see that it is a girl.

In these two examples the goal is specific, the imaging task is clearly defined, and it is solved using visual features of the object. The sonographer has to move the transducer around until she has a good view. Since ultrasound is a real-time technology, the sonographer can examine the image and respond to what she sees there until she has an answer. Difficulties in determining the sex of a foetus in utero come from difficulties in getting a good view, which in turn depends on the position of the foetus.

2) A twenty-year-old man develops severe pain in his lower leg while in military training. His physician suspects a stress fracture and orders an MRI with FLAIR. The physician looks at the MRI images and sees that there are areas of higher signal intensity in the FLAIR image around the painful area. This is what he is looking for, because it is the normal appearance of stress fractures in FLAIR. He checks through images of other slices to be certain of his diagnosis, and once it is settled he tells the man he has a stress fracture and recommends treatment.

These are all fairly simple ways in which images are used in medical reasoning, based on being able to recognize states of the body by seeing them in the image. In these cases both the goal of the imaging and the answer are clear. In other cases neither questions nor answers are so simple, but such cases can show us different features of medical reasoning.
Here is a report from an ethnographic study examining how images are used in a hospital. The researchers followed a physician on his rounds and kept track of his use of imaging in his normal practice.

For the first patient, even before the history and diagnosis were stated, the radiologist had the films up on the light box. He looked at the films and immediately said that the patient had pneumonia. He then took the films down. The physician being observed asked if the heart was hiding the pneumonia, which the researcher understood to mean that he had not been sure of his own reading of the film. The radiologist put the films back up on the light box and showed the physician the evidence of pneumonia. For another patient, the radiologist asked the physician what he wanted to know before giving a reading. They read and discussed these films. As the radiologist was taking the films down from the light box, the physician asked him more questions: "Is there any evidence of obstruction? Of shortness of breath?" The radiologist answered, took the films down, then looked at the one he was holding in his hand, and modified his answer when he remembered something he had been told about the patient's history. For a third patient, the radiologist said that the patient's condition was bilateral, according to the CT scan, but that he was reluctant to "call it" on the basis of the scan and felt "very insecure" about making a specific diagnosis.[396]

These kinds of studies reinforce two ideas. First, they show the difference between imaging and other sorts of tests: understanding the meaning of the appearance of the image in a way sufficient to make medical diagnoses requires a great deal of background knowledge about the appearance of anatomy, about the imaging technique, and about the patient being examined. Second, they show the kind of negotiation that goes on around images in medicine.
There is still a great deal of inference that happens between seeing something in a scan and determining its medical significance. Often there is an underdetermination problem, where the appearance of something in the scan could be evidence for two or more possible conditions. Radiographers and others using and interpreting images often need to make differential diagnoses based on what they see in the scan. In such cases the imaging may count as equal evidence for different conditions, and a final diagnosis will require gathering other evidence from the patient history, from other examinations, and from other tests. In other cases what is seen in the image might be unclear. In the case described above, the radiologist, with more experience, immediately recognized pneumonia where the physician had not seen it. Or the image might be ambiguous. A great deal of research, especially in radiology, has gone into making clear images and finding the best angles for imaging. As Gunderman points out in an article challenging typical ideals in radiological practice:

In many cases, key pieces to the diagnostic puzzle are not on the image and can be elicited only by investigating the patient's medical history and physical examination findings, laboratory results, and even the results of other imaging studies.[397]

He is making two points: one is that the idea in radiology education that the signs radiologists learn to identify should be diagnostic evidence on their own is almost never borne out; the other is that imaging findings are only part of the knowledge applicable in diagnostic imaging. Image quality is also relative to task: a T1 MRI may be more detailed in terms of anatomy, but a less detailed FLAIR image better shows the kind of tissue difference needed to see a stress fracture.

[396] Kaplan, "Objectification and Negotiation," 448.
In planar (slice-based) technologies, where the object of interest is not the slice of tissue that is the subject of any given image but the whole body part, single images are generally not diagnostically useful in themselves. MRI produces hundreds of images during a scan, and the viewer does not look at just one. Images can be displayed in a 'tile' format with twelve on a film, or in 'stack' mode on a monitor.[398] This allows the viewer to examine separate slices next to each other in order to better understand what they are seeing. Furthermore, the diagnostic task will determine which weightings are used: usually, comparing T1 with other sequences increases certainty in making diagnostic decisions. Images from different sequences or from different sections can feed back into our understanding of what we are seeing in the images. Once a region of interest is established, seeing multiple views of it can help us understand what we are seeing if it is ambiguous, or help us be more certain about what we are seeing. Understanding the size and shape of a lesion can be important for identifying it, and is also necessary for making decisions about tumour volumes in planning for surgery.

[397] M. Gunderman, "The Tyranny of Accuracy in Radiologic Education," Radiology 222 (2002): 297–300.
[398] F.L. Jacobson, "Paradigms of Perception in Clinical Practice," Journal of the American College of Radiology 3 (2006): 441–445.

Expert Vision and Contestation

The question of professional vision, as it arises in the case studies, concerns the more specific question of who sees what. A sonographer can see more in an ultrasound than a lay person can, for two reasons: first, her knowledge of foetal development and morphology; second, her familiarity with the appearance of things in ultrasound. She has both expertise in the interpretation of images within the representational system and knowledge of the object imaged.
3-D ultrasound is contested partly because interpretation of the images is more accessible to more people, people who lack the contextual knowledge of foetal development, morphology, and the appearance of pathology. This places limitations on the kinds of inferences that can be drawn about the state of the pregnancy given what is seen in the images. Contestation of professional vision is part of the history of imaging. The installation of MRI in radiology departments, and the fact that radiologists are often the ones who write MRI reports, shows this. There are currently debates between radiologists and cardiologists over MRI technology.[399] Part of this concerns prestige and claims to expensive equipment, but part of it raises important issues about knowledge of objects and how that affects vision.[400] The radiologists claim expertise in image interpretation (familiarity with the signal), whereas the cardiologists claim more anatomical knowledge and knowledge of pathology.[401] As imaging becomes more widespread and more central to medical practice, there is bound to be even more of this contestation. Medical specialization requires in-depth knowledge of specific areas of anatomy and function. Neurologists simply know more about the brain than general radiologists do, and with new imaging technologies requiring less sign identification than past ones, it seems as if the weight is shifting towards knowledge of anatomy. This is worth pointing out, although the debate is far from settled, and in many cases radiologists are being trained in more and more specific fields of specialization. As it stands, medical practice using images tends to include a lot of negotiation between specialists.

[399] Joyce, Magnetic Appeal, 158.
[400] Ibid.
[401] David Levin and Vijay Rao, "Turf Wars in Radiology: Should it be Radiologists or Cardiologists who do Cardiac Imaging?" Journal of the American College of Radiology 2 (2005): 749–752.
Transparency and Publicity

An important point needs to be made here about medical contexts. The specificity of knowledge required in medical image interpretation, along with the repercussions of misdiagnosis, makes negotiation important. Studies of lesion detection and diagnosis in medical images indicate that even those who are experienced with images in a particular modality make perceptual errors, which account for about 30% of missed diagnoses in radiology. This matters especially because failing to see a lesion can result in a failure to diagnose, and failures to diagnose constitute 60% of malpractice claims.[402] In addition to making sure images are clear (e.g. that patients are well positioned with regard to what needs to be imaged, or that the right kind of imaging is being used), detailed patient histories and looking at images with other people are among the suggestions for reducing perceptual errors in interpreting images.[403] A benefit of imaging is that the same image can be seen and discussed by different people who bring different expertise to the negotiation. Medicine is for the most part practiced by teams, so the importance of making images, and not just reports, available to everyone makes sense. This seems to be happening with the Picture Archiving and Communication Systems being developed in hospitals, which make images available to all treating physicians.[404] The content of medical images is exceptionally rich, and the knowledge needed for interpretation so specialized, that correct interpretation of medical images is difficult. The specific appearance of stress fractures in different imaging modalities is discernible, but discerning it requires great familiarity with these specific appearances.

[402] Leonard Berlin, "Malpractice Issues in Radiology: Perceptual Errors," American Journal of Radiology 167 (1996): 125–128.
[403] Ibid., 128.
[404] Kaplan, "Objectification and Negotiation," 445.
As with other kinds of expert vision at an extremely high level of discernment, there will be times when it is not obvious what is being seen. This is true of visual art experts trying to tell a real work from a fake, or of wine experts trying to decide where a wine is from. The difference with medical imaging is that the expectations are higher. Once an imaging technology is thought of as a scan or a test, it seems far less interpretive than it is. And while Computer Assisted Diagnostic systems are being developed and tested, none has been found to have the same level of accuracy as experts.[405]

[405] E.J. Potchen, "Prospects for Progress in Diagnostic Imaging," Journal of Internal Medicine 247 (2000): 411–424.

8. Conclusion: Instrumentally Aided Perception

I have argued that imaging technologies are visual prosthetics – they extend our powers of vision, and in doing so allow us to do other things. I have further argued that while all medical imaging technologies produce images, some are also transparent. In the last two chapters I argued that medical imaging technologies produce images which allow us to have visual experiences as of things we cannot otherwise see, and that some of these images are such that we also have visual contact with those things. The case studies of ultrasound, MRI, and fMRI examined ways in which images are used to see inside the body, thereby extending some of our other abilities. We have descriptive cases of imaging being used to see inside the body and theoretical reasons for believing that we actually do see inside the body. The question remains how this plays out in our endeavours and what can be said about it philosophically. In this section I argue that the main use of medical imaging is for instrumentally aided perception.
In terms of imaging tasks, the over-arching task – the one that explains why medical imaging systems are developed the way they are – is instrumentally aided perception of the body. In the philosophy of science a distinction is sometimes drawn between naked-eye perception and perception done with instruments. To many people this may seem like a difference that does not make a difference; for others it is very divisive.[406] Usually the point is made concerning the epistemic status of the entities within a theory: quarks, electrons, and the like. This was discussed in chapter four, where we saw some possible challenges to the idea that our vision could be extended. I have argued that our perception can be extended into the body by using images, and also extended to seeing features of tissue we could not previously see, such as inflammation. The empiricist challenge to instrumentally aided perception is not usually a challenge to our perception of ordinary objects of visual knowledge. I am sure that for van Fraassen foetuses are not theoretical entities merely because we cannot observe them in utero. The challenge, if there is one, comes from the idea that imaging (probably photography as well as any medical imaging system) is merely a way of measuring one limited set of features of a phenomenon; imaging creates a new phenomenon, the object presented as the outcome of a particular measuring system. In this section I am going to attempt to do away with this objection. On this view, our scientific theories must explain the observable phenomena, and are empirically adequate when they do. So contemporary versions of constructive empiricism generally cast scientific theories as models of the world that are empirically adequate when they 'preserve the phenomena.' Instrumentally aided perception, then, is often cast in terms of detection rather than observation. Seeing with the naked eye is given epistemic priority over seeing with instruments.

[406] van Fraassen, The Scientific Image, 8.
The strength of this claim lies in the way observability is indexed to the human epistemic community. The idea that our perceptual abilities define the limits of our epistemic abilities was discussed earlier, and it is one that I think is, for the most part, right. The problem with the constructive empiricist position is not the claim that our perceptual abilities define our epistemic abilities, but rather its understanding of what our perceptual abilities are. Van Fraassen opposes the perceptual to the conceptual: a cave man could observe a tennis ball, he claims, but not observe that it was a tennis ball.[407] He argues that this should be the case with all observation. Observation is meant to be a claim about our pure perceptual experience, free from our conceptual processing of that experience. The distinction between 'seeing' and 'seeing that' is meant to be an epistemic one: that yellow, round, fist-sized object is what we see; its being a tennis ball calls on a lot more conceptual resources. In this he seems to agree with the logical empiricist notion that there can be, if not a pure observation language (since van Fraassen thinks that all of our language is theory-laden), at least theory-free observation that can be the basis for our scientific theories; that there can be some way to observe x, where x can be used as a basis for an empirically adequate scientific theory.[408] The view that naked-eye observation plays a special epistemic role seems to depend on this claim about perception.

[407] van Fraassen, The Scientific Image, 7–9.

Theory and Observation

Theory-neutral observation seems to be important to ground knowledge. To question whether, or the extent to which, perception is theory-laden is, in part, to question the relation between our perception and our conceptualization. This is important for a number of reasons, one of which is how we mean to use perceptual experience as evidence for scientific claims.
How we could get from noticing a coloured, round object to observation statements rich enough to describe a world of entities, scenes, interactions, causes, and effects – or even to justify such everyday claims as "I observe that the cat is on the mat" – has always been a problem for empiricism. The logical empiricist mapping of sense data from the visual field onto observational predicates to construct observation sentences was problematic both as a theory of perception and as a way to ground observational claims.[409] The distinctness of, and interaction between, perception and conceptualization has been debated in psychology as well as in philosophy, and for a very long time the two were treated as separate processes and studied as though visual representations were unaffected by knowledge or goals.[410] One distinction, for example, contrasted object recognition, which has been thought to be perceptual, with perceptual categorization, which was thought to be conceptual and inferential. Visual perception has traditionally been thought to create the representational input which was taken up by a conceptual system that then identified or categorized objects.[411] Moreover, these various acts were thought to be represented differently: in psychological studies of perception, this amounts to models of object recognition that emphasize object identity, and models of perceptual categorization that often simplify the representation of objects in order to explain categorization.[412] In philosophy, a sharp distinction is sometimes drawn between perception and cognition, keeping perceptual processing separate from inferences, beliefs, and knowledge.

[408] Ibid.
[409] Norwood Hanson, Patterns of Discovery: An Inquiry into the Conceptual Foundations of Science (Cambridge: Cambridge University Press, 1958).
[410] Palmeri and Gauthier, "Visual Object Understanding," 292.
This is meant to account for illusion and so to justify perception as a basis for knowledge and belief.[413] The strong theory-ladenness claim would be that our theories, scientific or otherwise, influence our visual experiences – that our perceptual experience is relative to our theories, with no experience being veridical except within a theory. A weaker claim, which I think is correct, is that there is a great deal of interaction between looking and knowing, and that our goals and our knowledge infiltrate our perception to an extent. Often we do not have a perceptual experience from which we make inferences about what we are seeing; rather, our perceptual experience just represents things to us in a certain way. How our visual experience represents something to us can depend on our visual concepts of it. Even basic tasks such as discerning a Cadillac from a Volkswagen rely on our having visual concepts formed through looking at examples of these things. This is to say that our vision is theory-laden in a particular way: all of our visual concepts inform the others, and our perceptions are laden with our knowledge of the appearances of things in the visual world. It is not a problem for perception that it functions this way, but a virtue. In the discussion of the Novel Phenomena Hypothesis we saw that van Fraassen emphasizes that the images themselves are observable objects, and while this is true it cannot be the entire story. Our interest in imaging cannot just be in the creation of marked surfaces. I have argued that the best way to explain this comes from discussions of seeing-in: our interest in creating these marked surfaces has to do with thereby creating vehicles for seeing-in, where the main resemblance is between our face-to-face experiences with objects and our experiences as of those objects seen in images.

[411] Ibid.
[412] Ibid., 292–295.
[413] Jerry Fodor, "Observation Reconsidered," Philosophy of Science 51 (1984): 23–43.
Van Fraassen's discussion of resemblance would attempt to force an awkward fit between the design features of images and the luminosity patterns which are counterfactually dependent on the signal data. This puts pressure on the signal and on the luminosity patterns not only to bear a resemblance relationship to the object imaged, but also somehow to serve as an explanation for our increased abilities using imaging. While there are times when the representational system itself is of interest to us, as well as imaging tasks that engage mainly design seeing, this does not account for all of our abilities. I think an account of imaging that did not treat it as perceptual and experiential would be difficult to formulate. Berys Gaut argues that this kind of instrumentally aided perception is not seeing: that seeing should be defined as the unbroken transmission of light from an object to our eyes. I argued that his account was too strong, that seeing in this sense ruled out too much of our use of instrumentally aided perception with pictures. Of course instrumentally aided perception is not the same as seeing something face to face; seeing through a picture is not an illusion. It seems that if we need to make distinctions in this area, they should be between face-to-face seeing and instrumentally aided seeing, rather than between face-to-face seeing and pictorial seeing, or between naked-eye seeing and instrument-aided seeing. For one thing, this allows us to emphasize how and where instrument-aided seeing differs from face-to-face seeing, and to emphasize the pragmatic, epistemic, or inferential features that differ between the two and also between different kinds of instruments. It also allows us to make an important distinction about our use of instruments as perceptual aids. Just because a picture, or a picture-making process, is transparent, it does not automatically mean that it is being used to see the object or scene presented.
Often when we look at photographs, their transparency is overshadowed by other, perhaps aesthetic, considerations. The transparency of the image in such cases may not importantly contribute to what is aesthetically interesting or beautiful about the photograph, although it may. Some of Jeff Wall's work involves arranging real spaces and models into compositions that, in the large-scale lightboxes he produces, use those spaces and models to recreate historical scenes in a way that reflects art-historical elements of composition and picture space. Yet this cannot be the entire story, in part because of how images are in fact used in scientific practice. Many things which can or could be seen with the naked eye are also seen using instruments, and without thereby creating new phenomena. In ordinary scientific practice, pictures – time-lapse videos of plant growth, for example – are made because it is inconvenient and boring to sit and watch a flower open or a plant grow. While it seems in principle possible to observe such an event, it is of such duration that it is better viewed using instruments, especially if we are interested in carefully examining some aspect of it. Vision cannot be rewound and replayed, or slowed down or sped up. While there is a sense in which this does create a new phenomenon – a recording that is watched in lieu of a plant – it does not seem that this new phenomenon has properties such that it should not also be understood as a new way of observing a known phenomenon, with the understanding that the temporal aspects are altered. I think the following discussion from Hooker on technology in science offers a point of illumination:

On the one side there has been an increasingly refined critique of natural sensory perception for its limitations, biases, and imperfections (illusions etc.).
Technology and theory were essential here; witness the camera and optics for detecting perspectival bias, infrared and x-ray technologies and backing theory for the use of the non-visible electromagnetic spectrum. On the other side there is the development of extensions to and substitutes for the senses, for example telescopes, microscopes, micrometers and x-ray photography. These allow us to confine use of our senses to those narrow circumstances where they work best.[414]

We design our instruments to make use of our perceptual capacities, and we build them and learn to use them within a particular epistemic community, both as humans and more specifically as scientists. We cannot subtract our entire world of knowledge, theory, and visual experience from our instrumentation; instruments are built within just that field, and to fill in its gaps. When we look at the surface of images, what we are interested in is the experience of seeing an object, and more particularly in what so seeing the object tells us about it. I mentioned above that I think of instrumentally aided perception as an imaging task. This is because when the goal of imaging is to see something, it is to be interested in the visual perception of that thing and not in its design. We use images to see inside the body when it is otherwise inaccessible and when we can bring our visual experiential knowledge to bear on what we then see. Instrumentally aided perception should not be contrasted with naked-eye seeing in the realm of possible visual experiences, but should instead be contrasted with face-to-face seeing. In this way, we can use instruments to aid our perception of things that we cannot see face to face but of which we still have visual knowledge. We see photographs face to face, but use them as tools to see other objects. We see ultrasound images face to face, but see through them to the very foetus which we cannot see, cannot diagnose, cannot measure or assess in a face-to-face manner.
This is not a point that will necessarily make an anti-realist happy, but it should clarify some of the epistemic issues that arise if we accept that the only way of seeing things is with the naked eye.

414 Kai Hahlweg and Clifford Hooker, Issues in Evolutionary Epistemology (New York: State University of New York Press, 1989), 137.

