THE NEUROSCIENCE OF MOVEMENT, TIME AND SPACE: AN ARTS EDUCATIONAL STUDY OF THE EMBODIED BRAIN

by

ANNE-MARIE R. LAMONDE
B.P.E., The University of Calgary, 1990
B.Ed., The University of Calgary, 1991
M.A., The University of British Columbia, 2002

A THESIS SUBMITTED IN PARTIAL FULFILMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY in THE FACULTY OF GRADUATE STUDIES (Curriculum Studies)

THE UNIVERSITY OF BRITISH COLUMBIA (Vancouver)

April 2011

© Anne-Marie R. LaMonde, 2011

ABSTRACT

This thesis is an exploration of the contributions of contemporary theories in film and literacy, with the purpose of understanding how those theories inform an arts-based researcher in education. Additionally, further insights are drawn from the cognitive, social, and neurosciences with the purpose of broadening the scope of understanding across the multiple disciplines in which film and literacy education is found. By engaging in a wide exploration across multiple fields of knowledge, this thesis shows the extent to which the general belief in the incommensurability of the arts, philosophy, and the cognitive, social, and neurosciences has had a negative impact on education. It is believed, however, that knowledge gained through the study of contemporary theories in film and literacy, founded as they are upon the philosophical, psychological, and sociological, may achieve greater clarity and insight when framed within the scope of advanced studies in the neurosciences. Interwoven with autobiographical accounts, explorations in the theoretical and the experimental lead to a renewed understanding of film, arts, and literacy pedagogy. Finally, it is believed that understanding the convergence of the brain's cognitive, emotional, and sensorimotor functions, together with the primacy of movement, is pivotal to understanding the complex issues of brain-body-mind that range from consciousness to learning.
TABLE OF CONTENTS

Abstract
Table of Contents
List of Figures
Acknowledgements

CHAPTER ONE  Introduction and overview
1.1 Overture: A prologue to the themes
1.2 Investigative prelude

CHAPTER TWO  Literature review and discussion
2.1 Situating the purpose, research questions and concerns
2.2 A deeper look into the school's philosophy
2.3 A surprising find: the puzzling field of film literacy
2.4 Surveying the situation in education
2.5 Motion pictures: the event that changed the world
2.6 Graduate researchers and the emergence of a collective
2.7 Film arts as literacy, communications, and technology
2.8 The rise of film research in education: in search of the expert
2.9 The dizzying effects of revolution
2.10 Searching for a new direction
2.11 Reflecting on the urgent call to action
2.12 The knowledge digital ethnographic experts presently offer
2.13 The knowledge communication experts presently offer
2.14 Questions and concerns that continue to haunt arts educators

CHAPTER THREE  Methodology
3.1 Modes of inquiry
3.2 A deeper look into brain research
3.3 Consciousness and the flow of images
3.4 Another perspective on the idea of image
3.5 The brain, images, and film
3.6 The flow of movement and arts-based education
3.7 Sensory dispositions and image formation
3.8 The somatic marker hypothesis
3.9 Darwin's insight into art
3.10 The arts as concrete and symbolic inventory
3.11 Understanding film in education
3.12 The convergence of specialists and generalists
3.13 Dismantling the mirrored dissention
3.14 Nature and nurture: The struggle continues
3.15 Language as grammar, syntax and semantics
3.16 Near and far: the spatial side of the part-whole dichotomy
3.17 Spatial reasoning: its impact on research methodologies
3.18 Emotions and cognition: the importance of things felt and the matter-of-fact

CHAPTER FOUR  Analysis
4.1 In search of meaning: cognition, perception and language
4.2 Language and thought as equal, independent, or deterministic
4.3 Observations of a Deaf student negotiating music concepts
4.4 Is there such a thing as semantics in music?
4.5 The impact of language and cognition theories on pedagogy
4.6 A cognitive music theory: neither language nor transcendent
4.7 Rethinking imitation, meaning and understanding
4.8 Creative reasoning: a cognitive-perceptive act of logic
4.9 Universals as determining language and cognition
4.10 A cognitive and emotional register
4.11 The semantic brain: in search of the language of thought
4.12 The flip side: mirror neurons
4.13 The discovery of mirror neurons shifts perception studies
4.14 Hypothesizing the function of mirror neurons
4.15 Mirror neurons beyond imitation: learning new action patterns
4.16 Hypothesizing the relationship between mirror neurons and conceptual semantics
4.17 Fussy verbs prove to be connected to reality
4.18 Mirror neurons in action

CHAPTER FIVE  Synthesis
5.1 Ethical chasm: an experiment in film pedagogy and research methodology
5.2 The grammar of film: a temporal-spatial logic
5.3 Test audiences: film reception and interpretation
5.4 The experiment proves to be both a success and failure
5.5 Experiences not abstracted from our sensory experiences
5.6 Building capacity: the brain that changes through pedagogy

REFERENCES

LIST OF FIGURES

Figure 1. Palindrome set-up

ACKNOWLEDGEMENTS

My deepest gratitude goes to my thesis supervisor, Dr. Peter Gouzouasis, for the many years of intellectual and artistic exchange, as well as his continuous support throughout. His confidence in my investigative process gave me the much-appreciated space and time to plunge into the unknown and bring to the surface treasures from the deep. I am also deeply indebted to my two quintessential committee advisors, Dr. Carl Leggo and Dr. Shelley Hymel. Without their support, insights, challenges, encouragement, artistry, and dedication, I would not have been able to see this project through. I would also like to thank Dr. Joy Butler, Dr. Janet Jamieson, and Dr. Celeste Snowber for their probing questions and incisive anecdotes that set the tone during the examination period, which will continue to reverberate in me as an openness toward multi-disciplinary research. I am also grateful for the financial support, along with the support of the faculty and staff of the Department of Curriculum Studies at UBC. The department's commitment to graduate students has been outstanding. I thank each of my children, Natalie, Marci, and Gregory, for their intelligence, their penetrating views, and their marvelous contributions to my work and studies. I deeply appreciate my parents, John and Paulette, for their unconditional love, support, and encouragement throughout the years, which have been a constant guide and motivational force. There are numerous students and friends who have been fundamental to my personal and professional growth. I will be forever grateful for their unbounded creativity and the stories they shared. Though there have been too many over the years to name them all, they will remain unforgettable; their presence will be a thread of continuity through my life.
Finally, I thank Marc Retailleau, whose companionship, love, faith, artistry, and uncompromising process toward a higher purpose and consciousness watered the seeds of knowledge and pressed me forward to find my voice.

CHAPTER ONE

Reasoning and education, though we are willing to put our trust in them, can hardly be powerful enough to lead us to action, unless besides we exercise and form our soul by experience to the way we want it to go; otherwise, when it comes to the time for action, it will undoubtedly find itself at a loss (Montaigne, 1958, p. 267).

Overture: A prologue to the themes

To give a sense of the work herein, I consider the importance of the title of this thesis: The neuroscience of movement, time and space: An arts educational study of the embodied brain. Carefully chosen for its ambiguities, implications, denotations, connotations and context, the title may mean everything and nothing. As a colleague of mine said jokingly when she first heard it, "You lost me at neuroscience." Setting humor aside, her response was not surprising. At first glance, the term neuroscience carries an aura of scientific complexity and appears steeped in reductionism far removed from the demands of classroom practice or the kind of research that has been the hallmark of education, which for good reason has been rooted in philosophy, psychology, the social sciences, and the arts. Though I gratefully acknowledge those areas of research, which continue to bring the human condition to light, my choice to devote time to the neurosciences seemed logical given our collective need to understand perception, cognition and emotion. Moreover, the discoveries in neuroscience appeared as a new means to confront the dualistic thinking that has persisted since the time of the Greeks. With a span of over a hundred years of observation and study, the field of neuroscience has led me to challenging and astonishing new insights into the embodied brain. It is without question that research is dependent upon lived experiences, or more precisely the objects of our fascination, that lead us to deeper investigations, for as Walt Whitman (2006) once wrote,

There was a child went forth every day; and the first object he looked upon, that object he became; and that object became part of him for the day, or a certain part of the day, or for many years or stretching cycles of years.

To those who know me well, it is clear that movement has been my object of fascination since childhood, wherein I trained and practiced in dance, gymnastics, sports, music, acting, speech arts, and filmmaking. Through flights of fancy, through applied skills, through composition, teaching, and study, my lifelong practice in the movement arts made the object of movement a part of me for many years and stretching cycles of years. Over the past five years, as my awareness grew with respect to the pivotal nature of movement in my collective experiences, I became increasingly frustrated with theories I encountered in philosophy, psychology, the social sciences, and the arts, which submitted captivating descriptions but failed to offer adequate explanations. My recourse to brain research was a leap of faith that, beyond descriptions, I would find the explanatory. Indeed, through neuroscience I discovered the primacy of movement in the processes of perception, cognition and emotion, as well as the arts. And with great delight, the cognitive sciences offered convincing theories on the primacy of movement in language and cognition.
My interest in the study of the brain and body, in effect, began over twenty-five years ago while studying dance education. However, it was while apprenticing to become a teacher that I happened upon a collection of case histories recounted by humanist and neurologist Oliver Sacks, which awakened my passion. As the brain's peculiar makeup was described through telltale stories of neurological disorder, I could not help but be struck by wonder. Yet it was but a few years ago that I learned about the brain's holistic flow in creating what neurologist Alexander Luria (1972) called kinetic melodies. I was struck first by the interconnected and integrative nature of the brain, whose coordination resembles an orchestral and movement ensemble, and second by its extraordinary plasticity, which veritably mirrors the creative and flexible nature of linguistic and artistic expressions. As I gained knowledge of the brain's complex neural communication networks, which create mind through image spaces and dispositions that enable both a core and extended consciousness, I was driven to retrace dualisms in search of theories that hold the halves inseparable and whole. As an artist and arts educator, I have felt the debates between nature/nurture, perception/cognition, reason/emotion, and, of course, mind/body to be mostly disingenuous. I imagine many academics and researchers have experienced battle fatigue over the debates that have formed in our collective Western minds to describe and explain the world in part. But having held faith in the corollary between arts and sciences, I readily leapt into the unknown, believing that, if not now, one day the study of the brain may reveal the whole we all seek. In addition, I felt a certain kinship with neuroscience, which has until recently remained on the fringes of knowledge for several hundred years, quietly operating through the observation and intuition of artists and scientists. The advances in technology that, over several centuries, allowed biology, physics, and chemistry to make important discoveries were not yet available to the neurosciences. Up until fifteen years ago, we had not yet found adequate means to peer into the living brain. But with the invention of photography and the development of over a hundred years of filmic images, which soon led to the invention of magnetic and digital imaging, it was only a matter of time before the neurosciences would leap to the fore. Nonetheless, through what began as a pastime of reading keen and sensitive portrayals of unique individuals, Oliver Sacks led me down the rabbit hole into the wonderland of neurology. Sacks introduced me to Russian neurologist and linguist Alexander Luria (1972), whose incisive account of a wounded soldier in Man with a shattered world influenced many generations of neuroscientists to follow. Hence, this thesis draws insights and theories from the recent generation of neuroscientists, namely, Antonio Damasio, V.S. Ramachandran, Rizzolatti and Sinigaglia, and Milner and Goodale, to name a few. I also lean on the insights of cognitive linguists, whose collaboration with neuroscience began long ago with Wernicke and Broca's discoveries of the brain's speech centers upon studying stroke victims in the late 1800s (Pinker, 2008). In any case, apart from neuroscience, it is the word movement that holds utmost importance. As such, I include in the title the two interrelated and indivisible contexts within which movement flows, namely, time and space.
Time and space are infinitely whole, flowing, and continuous. And though our human intellect can imagine and express infinity in poetic and rational ways, our lived experience tells us that our finite intelligence understands time and space best when they are divided, discontinuous, and examined in part. We understand and express time and space as finite instances that appear in the memorable past, the phenomenal present, and the imagined future. Time and space are then and now, near and far, sudden and sustained, fast and slow, bounded and unbounded. Time and space, being directional and containing shape, mass, and weight, position us by providing the figure and ground upon which we are able to interact in a finite reality. This interaction ensures our ability to adapt to the present by drawing from the past, by observing causal patterns in order to infer, predict, plan and create artifacts for the future, and to bring to pass what Alfred N. Whitehead (1938) described as the 'importance of things felt' and 'the matter-of-fact.' None of this would we be able to do without movement, for it is by the flow of movement that we attend to and recall the objects of our existence. If this view of movement appears confusing, one may find clarity in reflecting that it is by the flow of movement that our brain takes note of a before and after, wherein we sense that some thing has moved from here to there or from then to now. It is by movement that we are able to reconstruct the worlds we sense within and without our bodies, to attend to and retain in memory our lived reality, and by movement that we reach toward or pull away from what is imminently in our best interest. It is through movement, through the flow of electrical impulses and chemistry in the brain, a flow that moves freely in and out of the cortical and sub-cortical systems, perceiving and deploying vital information to our bodies and to our emotional and cognitive centers, that the human mind is created. It is this continuous flow that is 'the stuff of our universe.' The rest of the title speaks to the fact that, far from being an expert in neurology, I am an artist and arts educator desiring to participate in a world where language and reason hold political and economic primacy. Methodologically, I was encouraged to follow in the footsteps of neurologists whose use of autobiographies (what neurologist V.S. Ramachandran calls 'the n of 1') is crucial to our understanding of the embodied brain. Beyond the importance of the autobiographical as method, to truly understand the self, as Socrates once declared, positions us to better understand what lies outside of us. It is upon film and language that I chose to focus my research because of their structural resemblance in grammar, syntax, and semantics. Since film and language are also profoundly acoustic and kinetic, I was able to draw from my years of experience in music and dance as complementary arts. Because dance respects a very specific form and style of movement, I chose to generalize elements of movement through the work of Rudolf von Laban, who classified essential aspects across a spectrum of motion from the pedestrian and occupational to the highly skilled forms of artistry. Likewise, music is analyzed for its structure and through developmental learning theories, rather than for its emotional valence in filmed works.
My film understanding, which began in childhood as a means to learn a second language, spans my experiences as a student in cinematic studies and film theory, as filmmaker, and film educator. The data for analysis includes a short film, which I produced for an educational conference to delineate the concept of near/far through the subject of a dancer, along with a short film produced by an undergraduate student whose playful elements of visual and acoustic time and space were made intelligible through examining the theory of spatial reasoning, such as near/far, put forward by neuroscientists Rizzolatti and Sinigaglia (2008). I also analyze a choreography, entitled Palindrome, which demonstrates the theory of “action understanding,” as also put forward by neurologists Rizzolatti and Sinigaglia (2008), and expounded upon by Milner and Goodale (2006), in relation to the manner by which the sensorimotor systems operate conjointly to interpret transitive and intransitive gestures for the purpose of imitating, deciding, planning and taking action. The theory of “action understanding” was aided by the discovery of mirror neurons, which was put forward by V.S. Ramachandran and associates (1998). Temporal/spatial reasoning, action understanding and mirror neurons are just a few of the several areas I draw from the neurosciences to better understand the processes and products of film, language, music, and dance. By drawing on the strengths and shortcomings of theories in the cognitive sciences and developmental psychology, I also analyze my experience in teaching a Deaf student in a music 7 context. By turning to cognitive and neurosciences, I attempt to explain the extraordinary capacities of a brain to understand music irrespective of possessing a sense of hearing. One key piece of evidence I chose from which to analyze and draw final conclusions, to which I dedicate the last chapter, was a short film I produced for my faculty’s former dean, Dr. Robert J. Tierney (2008). Though the film appeared unrelated to my thesis at its initial undertaking, its experimental nature was the impetus that changed the course of this thesis. It is upon this final analysis that I have concluded on the importance of film as research methodology and classroom practice. The making of the film commissioned by Tierney (2001-2001), was based on an article he published in the Journal of Adolescent Literacy, originally entitled: An ethical chasm: Jurisprudence, jurisdiction and the literacy profession. Written principally to identify the political and legal forces behind literacy education, it speaks to the primacy of literacy education to foster the ability to reason, most often termed ‘critical thinking.’ What arose in my mind as I pondered Tierney’s text was, what type of reasoning would affect behaviors and attitudes we are desperate to foster in schools—such as fairness, empathy, acceptance, and cooperation—and do literacy approaches, as those are implemented today in our schools, target that kind of reasoning? Tierney’s initial writing for the article took poetic license from a literary work, namely a courtroom drama depicted in David Guterson’s (1995), novel Snow falling on cedars, which set in motion his intent to express complex terms through a literary lens—one could say an experiment in its own right. He then re-contextualized the article as a Reader’s Theatre piece for educational audiences, followed by the production of a short film drama, which I directed, edited, and scored. 
Since my sensibilities as an arts educator have been to foster freedom of thought, agency and democracy—tenets upheld by the artistic creed—it was serendipitous that Tierney's article was founded upon those principles through a literacy lens. Unquestionably, I was drawn to this experiment both rationally and passionately because of its consciousness-raising medium and message—for instance, as a work of art revealing the world of literacy education, a world that utilizes the arts for expression. Clearly, as McLuhan (1963) once noted, within a medium lies another medium. To identify my research questions, I was mindful of knowledge areas that today demand special attention in education, namely, (1) film and video, by virtue of an unprecedented era that is flooded with digital images, (2) literacy, by virtue of its perceived primacy in educational and political spheres, and (3) neuroscience, by virtue of a perceived incommensurability with classroom practice. I sought to know, therefore, how contemporary theories in film and literacy inform an arts-based researcher in education. I wondered what further insight could be drawn from studies in the cognitive, social, and neurosciences, and whether the knowledge gained from those preceding questions could enhance our understanding of arts and literacy so that positive change can be effected in classroom pedagogy. In my review of literacy, I note its political and economic valence. Literacy is a weighty word, which was coined in 1883 and first used politically to raise funds in 1886 to address the appalling human conditions believed to have been caused by low literacy rates in the state of Louisiana. I note also the shift in the definition of and advocacy for literacy as digital media gained momentum. By 1996, The New London Group, an international group of literacy researchers, began to redefine and advocate for the kinds of literacy arising in a new digital economy beyond the printed word. I note also the various intentions behind research in new media as film progressed through its various forms. Media literacy sprang from a concern with the shaping of young minds, influenced by images rooted in political and economic agendas. Media arts, by contrast, sprang from an advocacy for the uses of new forms of communication and technology in schools, not the least of which include filmmaking and video production. Notably, filmmaking was introduced into the classroom as early as 1939 by a group of English teachers. Prior to this new form of literacy, as it was called, studies were conducted on film's so-called deleterious effects in the lives of children and youth. Those studies were published as early as 1928 through the Payne Fund, and in 1929 through the extensive qualitative study of behaviorist Alice Miller Mitchell. In terms of entering the school curriculum, filmmaking has a sketchy past. Over a period of more than a hundred years, it has appeared in and disappeared from classrooms approximately every forty years, an indication that, despite some initial interest whenever new technologies were invented to facilitate production, the making of films never developed a convincing rationale to remain a permanent fixture in schools.
I note that from as early as 1928 to our present day, an essential rhetoric on film in education has remained the purview of media, literacy, communications and film experts whose agenda is to direct film viewing and production for the purpose of steering an undiscerning public toward good citizenry, which the New London Group (1996) deemed to promote the good life and an equitable society. Whereas film studies steered clear of directing film viewing or production, its specialization has challenged superficial viewing and processes by amateurs, including those in education. Those historical movements were founded on the perceived urgency of the times and a positioning of a new breed of experts. What is clear is that with each new consumer trend, educators and film experts resist or embrace the changing times out of a sense of urgency or a feeling that something must be done to either stop the flow or ride the wave. Notably, film theory and studies, which sprang from a passion for the cinema since the Lumière brothers' invention of the movie camera in 1895, aligned itself with twentieth century research and theoretical paradigms. Beginning as a phenomenological enterprise among amateur film enthusiasts, who exchanged the ontological and epistemological virtues of the film image in new film journals, its study and theory arose as an expert field in academia by the 1950s. Film studies have leaned heavily on the theories prevalent in philosophy, structural linguistics, psychology, semiotics, cognitive linguistics and the social sciences. Each enterprise has appeared like a Russian nesting doll, applying twentieth century grand theories to its particular object of study in film. In reviewing the whole of the literature, it became increasingly apparent that a vital piece of knowledge was missing, namely, understanding the image and how the human mind interprets the world through what Damasio (1999) termed "movies-in-the-brain." Despite the availability of hundreds of theories describing the image, an adequate explanation of how the image forms in the brain and for what purpose it forms has eluded educational researchers and film theorists. Neuroscience explains that it is our sensory-somatic system that 'translates' incoming signals into images, the purpose of which appears to be to regulate a 'mindful' relationship with the world. Yet what connects images to judging and valuing the world? Damasio (1999) offers a telling proposal.

Emotions of all shades eventually help connect homeostatic regulation and survival 'values' to numerous events and objects in our autobiographical experience. Emotions are inseparable from the idea of reward or punishment, of pleasure or pain, of approach or withdrawal, of personal advantage or disadvantage. Inevitably, emotions are inseparable from the idea of good and evil (pp. 54-55).

It is thus a constant flow of images that gives rise to "the sense of self in the act of knowing" and ultimately constitutes the seat of consciousness (p. 19). Through my analysis of the short film experiment, namely Ethical Chasm, I conclude that it is both a failure and a success. Its success was in driving a passionate relationship with the making of the film and its content, which led me to the discovery of vital information for education. The failure of the film experiment, which I sensed and felt before I could analyze it, was brought to the fore once I understood the orchestration of the brain's integrative systems.
Neuroscience, which is just beginning to unpack how the brain-body creates mind, and how the mind subsequently constructs meaning in language, film, music and all other manner of human expression, is now unequivocally proving the intimacy between subject/object and between nature/nurture. Film and language are technologies that extend our bodies and senses. Film and language are constructed to mirror or give semblance to the perceptual, cognitive, and emotional fabric of our lived realities, which are indubitably rooted in movement, time and space. Every nuance of movement, as noted by expert and novice alike, will be processed by the brain and given value in our minds, which value must accord with our surroundings. Human development and the artifacts we produce, therefore, are processes of nature and nurture, subject and object, which, as Chomsky noted, draw from simple, finite elements to create infinite expressions. As a script, Ethical Chasm is dependent upon complex dialogue to clearly delineate the issues founded in literacy education, but its message becomes lost in the medium of motion photography and sound. Without special attention to neurological principles (for instance, "action understanding," the ability to perceive and attend to movement, which helps to identify motives and intent that lead to deciding on a plan of action), the visual/aural medium portrays little more than the feeling that one is watching a lot of hand waving, the importance of which can be guessed at through location, namely a courtroom, and facial expressions, without the details. But the vital message is lost in translation, precisely because film is not at its best when it depends on language to express meaning. It would have been best to leave it as a Reader's Theatre or even a radio play. For the film to convey its message, many more images and actions would have had to be shot and edited. Nonetheless, the success in experimenting with film in research and pedagogy offers one a unique opportunity to study images that form within and without the brain, to engage in the autobiographical, and to participate in the shaping of consciousness, which many literacy educators call "critical thinking." In an era that, for the first time, provides novices and experts alike with the means to produce and distribute films worldwide, educators may come to view film as vital to raising consciousness. Art, the root meaning of which is 'to fit together,' is indeed the most salient means of bridging the perceptual, cognitive, and emotional coherence of the brain. Because literacy education's current tenets lie beyond the boundaries of the printed word, it may deliver on its promises to emancipate the learner if the movement arts are valued as vital, unique and equal in the quest to foster the development of the child. To what end do film arts, and the arts generally, lead us? First, by experimenting with film and the arts beyond merely accommodating cognitive deficits or enhancing interests, we may begin to re-evaluate non-verbal reasoning, whose expression is inevitably entwined with higher order thinking. Second, by experimenting with the arts, whose emotional quality is inextricable from our ability to interpret, predict, and plan our actions, educators may be entreated to carefully attend to the pedagogy of emotion alongside cognition.
Finally, judging from the narrative and experimental videos my elementary, secondary, and undergraduate learners have produced over the years through my instruction, I am convinced that filmic images and sound offer one of the most accessible means for constructing the 13 autobiographical and for changing cognitive and emotional neural patterns that may lead future generations toward higher consciousness. Investigative prelude Over fifteen years ago, while working predominantly as an arts educator with a music emphasis in the public school system, I ventured into the classroom with a strong desire to undertake the art of filmmaking. There had been plenty of educational reasons for taking up such a project. The first had centered on the ‘images’ of democracy and agency that had arose in my mind while working with inner-city youth who I imagined would be able to tell their stories through documentary film. A few years later, those reasons had centered on the ‘image’ of our time that postulated a ‘new knowledge society.’ There was, of course, the publication of the ‘manifesto’ on new literacies by The New London Group (1996), which collective of educators in literacy education strongly upheld the tenets of democracy and personal agency. But it was a description by Linda Darling-Hammond (1997), which I clearly remember gave importance to the times in which I had entered as an educator. The new basics demanded by today’s knowledge society require that all students be able to meet requirements previously reserved for only the ‘talented tenth.’ They must learn to: understand and use complex materials; communicate clearly and persuasively; plan and organize their own work; access and use resources; solve sophisticated mathematical and scientific problems; create new ideas and products, and use new technologies in all of these pursuits (p. 5). Generally speaking, images are not fully understood. The manner in which they arise as ‘artifacts’ in the mind, the fact that images are not solely visual, the manner in which they are stored in memory, recalled, and later ‘represented’ or expressed through varied modalities. Moreover, it is not fully understood why images often seem to return full circle, which phenomenon resembles what Nietzsche called, the eternal return. Darling-Hammond could have been speaking of another generation, in a time when ‘technologies’ or ‘machines’ were pushing the limits of our expectations of mental processes. 14 For instance, in a 1950 speech to the American Psychological Association (APA), J.P. Guilford’s (1987) urgent plea for cognitive scientists to devote their energies to studying creativity was based on two powerful images. With the advancement of computers or what Guilford called “thinking machines,” which he “expected to make man’s brain relatively useless,” he imaged “an industrial revolution that will pale into insignificance the first industrial revolution” (p. 36). Out of necessity, therefore, there was a growing need to “develop an economic order in which sufficient employment and wage would still be available, which would require creative thinking of an unusual order and speed” (p. 36). The second image was much darker in mood, namely, that “the only economic value of brains left would be in the creative thinking of which they are capable. Presumably, there would still be need for human brains to operate the machines and to invent better ones” (p. 36). 
I have no reason to doubt that Guilford’s address to the American Psychological Association was an echo of an image that rippled throughout society. Certainly Darling- Hammond (1997) was echoing an image already rendered by The New London Group (1996) who began their manifesto by reporting on a different kind of learner. As if to confirm Guilford’s prophetic view of a future generation, new definitive pronouncements were made such as one by Gayle Long (1997), “Teachers today are seeing a new kind of student enter their classrooms. Many children sat at a computer for the first time shortly after they received their first pair of shoes. They’re the Nintendo generation or the screenagers—the first to grow up with personal computers, video games and the Internet. They expect material to be presented to them in a creative and challenging way and are eager to experiment with innovations in technology” (p. 17). Nothing could have been more motivating to consider presenting film arts than the image of experimenting with innovations in technology. 15 Yet there were other reasons for venturing on such an ambitious project. The fact is that teaching artistic processes and products had always brought several conflicting issues to the fore. The first conflict was the issue of ensuring the right balance of ‘direct instruction’ for building skills with ‘open-ended’ compositional projects designed to foster creative processes and products. Direct instruction in an arts program could be viewed as ‘imitative’ for the purpose of building sensory-somatic memory (i.e., motor skills and conceptual awareness). But it could also be thought of as ‘image flooding,’ a useful notion to describe the numerous artistic activities or models to which learners are exposed. A thoughtfully structured, developmentally cognitive program of arts, which would ‘flood’ the learning with good modeling, had long been considered by artists and arts educators to foster creativity (H’Doubler, 1940; Orff & Walter, 1963). Along with ‘image flooding’ was the notion that young people create under the constraints of open-ended tasks, i.e., enabling constraints. As a performing artist, my improvisational and compositional skills in music, theatre, and dance were developed largely through frameworks that sufficiently constrained the magnitude of possibilities into a reasonable sphere of potentiality. Rather than beginning with a blank slate (i.e., tabula rasa), the arts educators I was fortunate to encounter as an apprentice encouraged playful investigations through an abundance of mental images framed by the limits of movement, time and space. Relying on my own apprenticeship as a means to creative thinking, I had already pursued this approach with learners long before entering the public school system and I wished to take more careful note of the results as a teacher-researcher. To my great surprise, I would later encounter a neuropsychological approach to enabling learning or relearning of disabled limbs or senses (e.g., stroke victims) whereby physical constraints are used, not unlike the constraints used in artistic contexts (Doidge, 2007). The essence of ‘learning’ may be understood as an innate evolutionary tendency for living organisms 16 to adapt to new environments. 
Notwithstanding, from a neuroscience perspective, higher order learning represents a creative process inherent of higher order brains, such that primates and humans possess, which may be challenged by new sensorimotor inputs (i.e., inputs to both motor and sensory areas of the brain). Quite literally, the brain creates neural maps according to sensorimotor inputs since birth and is capable of changing those maps with use or disuse (i.e., new inputs) until death (Doidge, 2007). But an understanding of the brain as possessing neural plasticity from birth to death has taken science more than a century to accept as a fact (Damasio, 1999; Doidge, 2007). As such, the Taub Therapy Clinic, which practices constraint-induced movement therapy (CI), is but one example of an enabling constraint in physical contexts. Taub’s therapy has unequivocally demonstrated that the brain may be challenged to overcome ‘learned behavior’ due to neural disuse (e.g., paralyzed limbs, phantom limb pain, loss of speech, etc.) by changing its neural maps (Doidge, 2007). Enabling constraints, therefore, are limits that ‘force’ the brain to seek alternate pathways and must be viewed as nothing less than enabling creativity or, essentially, the ability to learn. The second conflicting issue was in balancing the evaluative processes that were aimed not simply toward skill assessments but also creativity. Faced with the research that had been published on criterion-referenced assessments and rubric-based evaluations, I was more than curious. My view of assessment criteria and evaluation ‘rubrics’ came strictly from competitive arenas (e.g., music and speech festivals). I saw nothing wrong with preparing students for participation in festivals, any more than preparing students for science fairs (e.g., Odyssey of the Mind). I did, however, balk somewhat at the notion that an arts program would either lean more toward competition or try to fit creative processes and products into a ‘competitive’ framework. 17 In all honesty, evaluating creativity had always been far from simple by virtue of the fact that creativity is yet to be fully explained whether by creators or researchers. This explanatory gap exists despite six decades of exhaustive research on creativity set off by J.P. Guilford’s seminal address whereby he stated, “the neglect of this subject by psychologists is appalling” (Guilford, 1987, p. 34). The problem with creativity is not unlike other phenomena that take place in brain processes, such as consciousness, reason, emotion, thought, and images. The problem is exasperated by the fact that ‘the brain’ is literally trying to peer into ‘the brain’ to find an answer to its existence. Fortunately, advanced technologies in the past several years have achieved a level of sophistication that can truly enable us to peer inside the human brain. Therein lies some hope that studies involving creativity will be advanced. Of course, as with the birth of any passionate endeavor in the classroom, the real reason came down to the fact that there had been a succession of personal experiences in film that had led up to my new motivation and drive. And subsequent to implementing film arts, there was a succession of experiences that arose from the classroom that raised my curiosity and desire to investigate the phenomena I observed. 
In an educational setting, where it is generally accepted that learning is dependent on meaningful connections with the ‘objects’ upon which we come into contact, it may seem unnecessary to contemplate this experiential factor further. For clearly, there is nothing very remarkable about the fact that my artistic experiences prior to entering the classroom and after implementing arts instruction was the impetus for this current study. Certainly it would not surprise an arts educator to learn that my experiences led me to wonder in what manner we interact with film as a creative motion art that parallels areas of learning for which I sought explanatory approaches, i.e., language, music, drama, and dance. Nor would it surprise an educator, faced with countless conflicted learning theories, that I would be excited at the prospect of an instructional area that was, so to speak, ‘virgin ground.’ This new 18 ‘technologically’ based curriculum seemed to offer the opportunity to discover pedagogical principles not yet implemented or studied, unencumbered with instructional ‘best practices.’ Nonetheless, from an educational research standpoint, personal experiences would appear to merely serve to situate the study or position the researcher’s ‘subjectivity’ within an acceptable range of the scholarly. This factor of experience, however, requires a far deeper contemplation if we are to penetrate the problem of brain-mind-body that currently plagues educators who possess a profound interest in removing barriers to learning through general practice. Notions of experience, in fact, are as relevant to the educator as it is to the neuroscientist, since each are faced with understanding both the general and particular manner by which individuals apprehend their world. In educational circles, a particular means of knowing has engendered the notion of ‘individualized learning,’ whereas ‘best practices’ are viewed as general approaches across populations. Nonetheless, since the subjective experience is relevant to both areas of study (i.e., how an individual experiences the world directly), one could go so far as to suggest that education and neuroscience are dependent on phenomenal encounters whereby the subjective experience poses the greatest challenge to understanding the brain-body-mind complex. The subjective experience may be characterized as particular knowledge (i.e., images, thoughts and feelings) gained directly through the sensorimotor, which is difficult to assess directly through objective means (e.g., observation and testing). And what is not easily tested is not easily generalized to a population. If such a challenge were overcome, both fields would achieve their ultimate goal, namely, to alleviate suffering by helping individuals to reach personal fulfillment and happiness. Both education and neuroscience and their practices are inexorably, if unwittingly, connected by virtue of their core fascination with the development of the brain: its capacity for 19 learning and memory, as well as its relationship with mind, body, and the ‘objects’ with which the brain comes in contact. And paradoxically, both fields are faced by an insoluble gap that exists between the ‘objective’ and rational apprehension of phenomena (i.e., facts gained through testable means) and a purely subjective experience of knowing (i.e., the subject as knower). 
On the one hand, from the standpoint of ‘objectivity’ (earnestly sought after in science, philosophy, or education), experience tends to be equated with the purely ‘subjective,’ which presents itself as problematic. Daniel Dennett (1991) asserts that consciousness as subjective experience possesses four properties that present an epistemological impasse, namely, (1) the ineffable, which cannot be communicated; (2) the intrinsic, which exists independent of any external facts; (3) the private, which has no interpersonal means of comparison; and (4) the directly apprehensible in consciousness, which is that one knows they are doing the knowing. Understandably, the purely subjective experience, which in philosophical circles is called qualia and describes the subjective quality of conscious experience, poses a considerable problem for those seeking a comprehensive knowledge of phenomena in light of the particulars that prove to be difficult to explain by facts of a scientific, social, or cultural nature. It is possible, for instance, to describe the experience of an entire community as perceiving the world only in hues of black and white, as Oliver Sacks (1996) recounted in his book entitled, The island of the color blind. Other facts may be added, such as the dimensions of culture that arise from this perceptual anomaly or the scientific facts with respect to perception itself. One may begin to use a range of metaphors and simulations to try to imagine what it would be like to live in a world without color, as Gary Ross’s 1998 film Pleasantville depicted. Yet this gap between fact and experience is ever present with no clear means to bridge the distance between knowing and being, or between core and extended consciousness. For the 20 educator and neuroscientist, there continues to be what Vygotsky (1962) called a dialectic leap that has yet to be bridged. By the same token, education, philosophy, psychology, and science paradoxically thrive on the endless peculiarities that draw one’s attention, while simultaneously seeking universal principles that fit across diverse populations. When scrutinized further, this paradox appears pricklier in light of social and cultural objects that are shared and communicated between humans who possess vastly different experiences. Few researchers would deny that the ‘objects’ we encounter in this world act on the brain’s development, save for those who are staunchly positioned as extreme ‘nativists’ and, hence, view human nature as determined by genetic or innate factors. Conversely, save for those who are extreme ‘social constructionists,’ few would deny innate factors also prevail on the brain’s development. New evidence in the neurosciences shows that innate factors are the only explanation for certain mental dispositions observed in pre-linguistic infants. This evidence has redefined what we believed to be pre-linguistic stages of reasoning, which were once explained in the past as arising solely from environmental factors over longer periods of maturation. Through cleverly designed studies, the belief that children’s value judgments and abstract reasoning arise by socialization has been subsequently challenged (Hamlin et al., 2007; Newman et al., 2008; vanMarle & Wynn, 2005; Wynn, 2008). That challenge is also being made to the causes of ‘mental illness’ or neurological disorders, such as disturbed body images (e.g., schizophrenia, phantom limb, anorexia, etc.). 
What was once believed to occur solely by virtue of traumatic experiences is being revisited through new studies. The example of the ‘phantom limb’ syndrome is a case in point. One would think logically that the ‘phantom’ feelings would have been derived by having once had a limb and, while no longer sending signals to the brain, appears to remain mysteriously ‘in the mind.’ 21 Yet neurologists have discovered that this same phenomenon exists among those born without limbs. Clearly, our ‘experience’ of possessing a body is in sharp contradiction with the network of neurons that are designated to receive bodily signals if persons born without limbs are able to ‘feel’ their limbs (Damasio, 1999). Thus, those who overlook what is universally present at one’s birth, i.e., innate to humans, as essential to the interactivity of brain-body-mind are also prone to ignore innate structures that interact relationally with social and cultural objects. The distinction between what we know to be ‘universal’ or ‘particular,’ the epistemological distance between knowing that is ‘experiential’ versus ‘theoretical’ compounds our confusion of determining ‘innate’ versus ‘learned’ factors. Knowledge that is just beginning to come to light ought to unite the research in science, arts, philosophy, psychology, and education. As it stands, however, there appear to be many educators who view educational matters, which focus is principally tied to the particular in social and cultural contexts, as incompatible with the neurosciences or any ‘reductive’ view that attempts to ‘generalize’ complex systems of learning (Varma, McCandliss & Shwartz, 2008). According to Varma et al., despite that the mind is of interest to both the educator and neuroscientist, what makes educational research at odds with neurological research is the strongly held view that, “neuroscience methods do not provide access to important educational considerations such as context; localizing different aspects of cognition to different brain networks does not inform educational practice” and “reductionism is inappropriate” (p. 141). Educators, of course, have made many assumptions regarding cognitive science, which began in psychology from whence developmental theory was drawn, particularly conceptualized by Piaget. Accordingly, Loris Malaguzzi (1993a), founder of the Reggio Emilia early childhood programs in Italy, put these assumptions into perspective. With a simple-minded greed, we educators have tried too often to extract from Piaget’s psychology things that he did not consider at all usable in education. He 22 would wonder what use teachers could possibly have for his theories of stages, conservation of matter, and so on. Now we can see clearly how Piaget’s constructivism isolates the child. As a result we look critically at these aspects: the undervaluation of the adult’s role in promoting cognitive development; the marginal attention to social interaction and to memory, as opposed to inference; the distance interposed between thought and language; the lock-step linearity of development in constructivism; the way that cognitive, affective, and moral development are treated as separate, parallel tracks; the overemphasis on structured stages, egocentrism, and classificatory skills; the lack of recognition for partial competencies; the overwhelming importance given to logicomathematical thought; and the overuse of paradigms from the biological and physical sciences (p. 76). 
Those educators who have ‘overused paradigms from the biological and physical sciences’ have subsequently underestimated the contributions of neuroscience. Literature dating back to the nineteenth century illustrates that the cognitive sciences have drawn many insights from the neurosciences (Luria, 1972, 1976, 1982; Sacks, 1971, 1973; Vygotsky, 1962). Additionally, the notion that cognitive science contextualizes educational concerns while neuroscience is far removed from educational contexts is illusory. Malaguzzi (1993b) is clear on this point when he stated, “Piaget warned us that a decision must be made about whether to teach schemes and structures directly or to present the child with rich problem-solving situations in which the active child learns from them in the course of exploration. The objective of education is to increase possibilities for the child to invent and discover. Words should not be used as a shortcut to knowledge” (p. 77). What is apparent is that social science research, upon which a vast number of educational theories have come to rely, has generally held the view that experimentation removed from classroom contexts is not as reliable as direct observation in field studies. Finally, the view that neuroscience practices reductionism is an unfortunate image that is wide spread among educators, which most often is directed at all scientific endeavors. A categorical dismissal of ‘reductive’ logic ignores the medical, scientific, and technological advances that have been made 23 precisely because of our ability to examine the smallest parts from the whole and relate the parts to the whole. Abstract thinking, which reduces the whole to its component parts, has largely been responsible for much of human invention. Siegfried Kracauer (1960) spoke of a “relativistic reduction” in relation to the kind of theories that have risen from the social sciences and of postmodern philosophies. As he noted, the differences between one kind of reduction and another are simple a matter of degrees between them. Along with progressive social mobility, the large scale flow of information, so greatly facilitated by the media of mass communications, makes people realize that everything can be viewed from more than one angle and that theirs is not the only way of life which has a title to recognition (p. 293). The confusion as to what constitutes reduction from a theoretical standpoint in the social sciences is readily apparent. When one reads accounts regarding new theories in communication studies, for instance, the fundamental apprehension of what either the terms abstraction or theory seems lacking. Henry Jenkins (1999) states, “In general, the need to create theory one can use, the merger of humanities and engineering approaches, is producing a different style of scholarship from the more abstract theories that have dominated media studies in recent decades” (p. 241). The idea that the concept of theory can be modified by the adjective “abstract,” demonstrates a lack of precision in thinking. In any case, I could easily attribute the interest I suddenly acquired to bringing film arts into the classroom as arriving purely by cultural and social influences, the kind articulated by countless social science theorists. I could dismiss innate and universal contributions to my early attention to moving images or the manner in which my working memory was of importance to interpreting images in general. 
I could postulate that the birth of those experiences was solely rooted in my family’s passion for the cinema. And that it was this passion, which had effectively 24 provided the ground for feelings of deep respect and admiration, perhaps even obsession, for moving images at an early age. But that is not the direction I will choose in the present thesis. Certainly neuroscientist Antonio Damasio (1999) was clear in stating with respect to inducers of emotion, “regardless of the degree of biological presetting of the emotional machinery, development and culture have much to say regarding the final product” (p. 57). The cinema had been my family’s first and most important portal to North American culture and the English language upon immigrating to the Canadian West. And in effect, I am able to provide historical precedents for laying the foundation of a purely socio-cultural phenomenon that impacted my subjective, emotional viewpoint toward the cinema. Again Damasio clearly points out that these socio-cultural phenomena shape “what constitutes an adequate inducer of a given emotion” along with “aspects of the expression of emotion” and, finally, “the cognition and behavior which follows the deployment of emotion” (p. 57). Prior to the outbreak of the Second World War, while France had a reputable and prodigious cinematic tradition of its own, the ban of American films during the war years left the French with nothing more than war propaganda films. Left also with the poorly made Hollywood imitations from Germany, there were a mere few hundred German approved French films. All such cinematic constraints had left a gaping cultural need. Needless to say, five years of restriction meant that the French appetite for film was primed for new images. Not able to compete with the release of hundreds of Hollywood films during the post-war years, the American cinema turned into a fevered pastime across the nation, not the least of which had affected my mother. By the time my parents had made the decision to emigrate, possibly influenced by so many images, my brothers and I had developed our palate for American images, not the least of which was due to private screenings of Hollywood films that my grandfather had obtained. 25 In terms of style and form, Hollywood films contrasted sharply with the French New Wave cinema that had risen in the wake of political and social upheaval in a post-war Europe— those ‘foreign’ films I was only able to discover as a young adult. During this period of re- growth, a critical division arose between those whose concern with the political standpoint of film came to view the cinema as commercially-produced studio entertainment and those whose belief in the “camera as pen” came to view film as an auteur art. The latter staunchly defended the director’s artistic vision as it was first articulated by Andre Bazin and Alexandre Astruc, encouraged through Henri Langlois’ Cinemateque Française, and later articulated in the Cahiers du Cinema by the wave of new cinephiles. As the exuberant young critics ventured into filmmaking, their productions attempted to counter film’s classic forms and manifestly shun the typical Hollywood formula. Avant-garde films by François Truffaut, Jean-Luc Godard, Éric Rohmer, Claude Chabrol, and Jacques Rivette, however, were certainly not the kind of movies my parents found acceptable for children. 
Although my mother was a true cinephile in every sense, her strict Catholic upbringing led her to find the ‘grotesque’ realism and confusing narrative highly objectionable and contrary to her core values. Additionally, my mother had been raised on classic cinema and, after five years of wartime privation and degradation, nothing in the New Wave corresponded with her intense need to dream of ‘better living.’ Since Disney films, alongside the Western, Comedy, Suspense, Epic, or Musical, fit within classic cinema, my parents, having by then settled in Canada, encouraged our weekly movie attendance. This often meant watching as many as three to four films per week, including at least one evening spent as a family bundled up in the car to watch double-billed movies at the local Drive-In. As a family, we were relentless in our pursuit of acquiring the language, values, and culture of our adopted nation. While the films we watched were predominantly made by Hollywood, we were undisturbed by differences that may have existed between the two nations. A distinction was surely present, which we began to note as an economic and political gap. But in the end, that distinction did not deter us from our cultural and linguistic mission for, in any case, it seemed to us that the films closely paralleled the culture, values, and language of life in the Alberta prairies.

At a young age, I was not conscious of the complexity involved in grasping the ‘foreign’ films that, naturally enough, were shown without subtitles once we were in Canada. Many years later, however, I made a rather startling discovery regarding the complexity of filmic ‘language’ while living abroad in Costa Rica and learning to speak Spanish as an adult. Although I will return to this discovery in more detail later in the present thesis, suffice it to say that of all the contexts in which I found myself attentive to native Spanish speakers—in many of which I was proficient in understanding, including conversations, newspapers, magazines, books, television shows, and radio broadcasts—it was only in the context of films that the Spanish language eluded me entirely. Yet, an inability to apprehend the dialogue did not apparently diminish my interest in the films I watched. Because the ‘wordless’ narratives of those early film-going years remain so vivid in my mind, I was led to the thought that as a child I was seemingly able to interpret the images sufficiently to create meaning for myself. Without question, this disparity between then and now has led me to wonder how my brain interpreted the images that flooded my visual and aural senses. What exactly was interpreted through the cinematic image and how did this interpretation come about?

As this passion for the cinema grew, I pursued ‘artistic’ experiences throughout my childhood and adolescence by engaging in movement and narrative arts. With great pleasure, I devoted long hours to the study of dance, music, speech and theatre arts. While all of those activities may be viewed in terms of what Susanne Langer (1953) expressed as “swallowed up” by film arts (p. 412), the correlation between movement and images did not reach my understanding until many years later. In fact, this correlation did not become apparent even during my next foray into the cinema, which was largely theoretical and undertaken as a first-year undergraduate enrolled in a program of Theatre and Cinematic Arts.
Therein rests the distinction between theory and practice, it took many more years of practice as an artist to make any real sense of the theory. Of the many film and theatre courses I was enrolled in, one entailed the viewing and critiquing of a series of classic and experimental films produced in the first half of cinematic history. Naturally enough, those images made an indelible impression and our discussions in class also heightened my pleasure of watching ‘experimental’ and ‘foreign’ movies. The carefully selected films from countries around the world, written and directed by pioneering film artists, formed the premise of our philosophical discussions and essays. I recall feeling very worldly and smart, especially when the films were in French—which always invited deeper discussions that went beyond the subtitles and visual images. Additionally, that experience proved useful several years later in my bid for a regular FM radio spot as a film critic. In those days, I gratefully used my radio press pass three to four times a week to attend feature films without having to purchase a box office ticket. In the darkened theatre, I would blindly scribble down notes, which I would then write up as a review in the form of a radio script. The ‘script’ allowed me to express my point of view in an intelligent manner while maintaining a conversational tone that appeared ‘ad lib.’ I was rather pleased that all those years of studying speech arts, watching movies, and thinking analytically on popular matters was providing a pleasurable and heady hobby (if not a future career). Fortunately, my pathway took a different turn in light of the ‘impractical’ nature of being a ‘film critic’ at a time when movie critics, such as, Roger Ebert and Gene Siskel, were 28 popularizing film criticism. Recently graduated with a degree in Dance Education, I sensibly pursued further studies in the field of education that still favored my artistic interests, all the while still engaged in personal artistic and film outlets. Hence, upon graduating with my second degree, having stepped into the classroom first as a kindergarten and elementary arts educator with a focus in music, then a secondary language, technology, and music educator, it was practically inevitable that I would take up several more film-based ventures. The first was producing and hosting a television talk show on a Calgary local cable network (1992-94), and the other was joining the Calgary Society of Independent Filmmakers. While the former gave me a whole new perspective on the production of the television image and its multi-camera form, the latter allowed me to explore my own filmmaking interests as a director and editor, along with my acting interests, which led to being cast as lead and principal actor on several Canadian Feature films (i.e., The Unspoken, 1995; Tearful, Fearful, 1996). Both experiences inspired me to bring filmic experiences into the classroom to develop skills in storytelling, as well as learning to manipulate visual and sound images and symbols—all of which I reasoned were legitimate ‘educational’ ventures. Clearly, I was socialized into a filmic context from a very early age. But what was it that solicited my attention toward moving images? Was it merely the passion of learning a new language, exposure to filmic works, the study and further pursuits of music, dance, and drama? Or was there something about moving images or images, for that matter, that primes what is uniquely human in quality and capacity? 
Today, I have become keenly aware of the role that movement—weight, flow, time, and space—plays in the manner in which my brain senses the world, interprets the stimuli, and reacts to sensory images. In so doing, I have become keenly aware of the importance of the movement arts across all domains of thought. It is also clear that 29 the manner whereby images are expressed in space and time through harmonized processes of memory, attention, and emotion, are necessary for the rise of an autobiographical self. According to Damasio (1999), it is without question that “memory, intelligent inferences, and language are critical to the generation of an autobiographical self and the process of extended consciousness” (p. 18). As part of higher order cognition, the autobiographical self is linked traditionally to the “idea of identity and corresponds to a nontransient collection of unique facts and ways of being which characterize a person” (p. 17). An autobiographical self, as anyone who is close to someone who suffers from a mental illness will attest, is fundamental to one’s sense of wellbeing. The overwhelming evidence in the neurosciences, as well as in my own personal life, leads me to believe that our primary goal as educators is to understand and utilize every means available that help foster a healthy autobiographical self, which is also the seat of consciousness. Nonetheless, it is not in higher order cognition wherein core consciousness and the emergent sense of the core self is found. Rather the core self, which is a “transient entity, ceaselessly recreated for each and every object with which the brain interacts” (Damasio, 1999, p. 17) begins with the “unvarnished sense of our individual organism in the act of knowing” (p. 125). Core consciousness, which is not unique to humans alone, “provides the organism with a sense of self about one moment—now—and about one place—here. The scope of core consciousness is the here and now” (p. 16). Time, space, and movement are critical to core consciousness or the sense of the core self, which in turn is critical to the emergence of both the autobiographical self and the processes of extended consciousness. Extended consciousness wholly depends on images and feelings as they arise over time, space and movement, which could never be arrived at without core consciousness or the innate, universal emotions (Damasio, 1999). 30 As increasing numbers of scientists, social scientists, and philosophers seek to understand consciousness, many in the neurosciences are finding evidence of a complex brain-body-mind connection, whereby the autobiographical self, which emerges from core consciousness, necessarily does so because of the somatic-sensory images (e.g., visual, auditory, olfactory, neural, visceral, etc.) attended to in the mind and stored in memory. While core consciousness “is separable from other cognitive processes,” nevertheless it exercises considerable “influence” on cognition. Remove all image-making capabilities and consciousness would be effectively abolished “because consciousness operates on images” (Damasio, 1999, p. 123). From this viewpoint, we may wonder how we “have a sense of self in the act of knowing” (p. 168). Damasio expounds further by submitting the following hypothesis. 
Core consciousness occurs when the brain’s representation devices generate an imaged, nonverbal account of how the organism’s own state is affected by the organism’s processing of an object, and when this process enhances the image of the causative object, thus placing it saliently in a spatial and temporal context. The hypothesis outlines two component mechanisms: the generation of the imaged nonverbal account of the object-organism relationship—which is the source of the sense of self in the act of knowing—and the enhancement of the images of an object (p. 169). This framework would thus place my engagement in the movement arts, richly endowed with images, as having been instrumental in developing an autobiographical self, which in turn was key to gaining deeper insight into notions of what is an image. Clearly, the study herein, which relies heavily on the subjective and experiential, faces scholarly refutation on the basis that any personal insight will remain particular, and lacking repeated trials, must therefore remain unconfirmed. Yet, as the widely respected neuroscientist V.S. Ramachandran (2006) claimed during an interview with Roger Bingham on The science studio, both the subjective and experiential is critical to understanding the brain-mind-body problem. Moreover, the ‘n’ of one, that is to say the analysis of ‘data’ arising from a single case, even that of an ‘autobiographical’ 31 case, is inevitable in light of perplexing phenomena that have yet to be studied under repeated trials. Hence, to further understand what constitutes an image and its role in the brain-mind- body complex, it has been through the subsequent application of what I learned while teaching in a classroom context whereby I gained important insights. Moreover, as the ground of classroom experimentation, my own continued endeavors in filmic activities, juxtaposed with the study of science, philosophy, cognitive linguistics, cultural and film theories, to name a few, became part of the equation to arrive at the sum of knowledge regarding moving images. That is to say, the knowledge of what constitutes an image ran parallel to my experiences both in and out of the classroom, and thus became possible to theorize. Any further confirmation would naturally depend on continued studies. Notwithstanding, I am buoyed by the startling fact that scientific discoveries have often been historically reliant on the ‘n’ of one, namely a single case, including that which is autobiographical (Ramachandran, 2006). In sum, my desire for consonance with subjectivity, pedagogy, and theory sent me in search of a resonant idea. This is not to say that I ventured inductively from the particular to the general, rather that the particular was analyzed by searching a wide body of knowledge and overlaying models of inferences that appeared to fit observations. The idea I was most interested in was one that would resound with my autobiography, which is a body of evidence derived from processes and products (i.e., events and objects). This resonant idea had to be empirically anchored and theoretically balanced on the issue of both human nature and nurture relative to knowledge; to emotion, perception, and cognition; and to symbol systems, such as language, film and music. 
Alarmingly at first, as I examined the branches of knowledge on my way through the forest of disciplines that dotted the landscape filled with notions on film, music, language, 32 technology, and literacy education, I became overwhelmed by the tangle of competing theories. It was disconcerting that my experiences as an artist, educator, and researcher found no theoretical agreement between the social sciences, sciences, and arts, despite that I could sense this congruency, this harmony, in myself. For that matter, despite a metaphoric alliance and obvious connection between artistic modalities, there was little cohesion between the arts, which also perturbed my sense of unity as a multi-disciplinary artist. It was of great relief, therefore, to eventually come upon a body of knowledge that does not privilege nature or nurture, which resonates with my experiences and holds great potential for building a foundational rapport between all the areas so frequently kept separate by specialization. While there remain many unsolved riddles with respect to the nature of being human, the scientific studies that have led to hypothesizing on the brain-body-mind problem have afforded me a new perspective into my artistic and educational concerns. Embarking on this journey of knowledge and understanding, therefore, I have posed the following research questions. 1. How do contemporary theories in film and literacy inform an arts-based researcher in education? 2. What further insight is drawn from studies in the cognitive, social, and neurosciences? 3. How can the knowledge gained from the preceding exploration in research questions one and two, enhance our understanding of arts and literacy so that change is positively affected in classroom pedagogy? 33 CHAPTER TWO Consciousness begins when brains acquire the power, the simple power I must add, of telling a story without words, the story that there is a life ticking away in an organism, and that the states of the living organism, within body bounds, are continuously being altered by encounters with objects or events in its environment, or, for that matter, by thoughts and by internal adjustments of the life process. Consciousness emerges when this primordial story—the story of an object causally changing the state of the body—can be told using universal nonverbal vocabulary of body signals (Damasio, 1999, p. 31). Situating the purpose, research questions, and concerns It was near the end of a hot dry summer in the mid nineteen-nineties when I had discovered that schools are wonderful repositories for historical artifacts. With just a few days before the commencement of another year of creative arts projects in a small elementary school in Calgary, I had been rooting around in neglected storage spaces looking for a reel-to-reel film projector and audio deck. There were several notions I had hoped to put into action. First, having in my possession several film and audio reels, I wanted to give the students an opportunity to play with and compare analogue technologies, which were near relics, with the latest image and sound digital technologies. My purpose was meant to shed light on the relationship between the medium and the message. Before embarking on a series of projects involving new technologies, I felt I needed to connect media between then and now, demonstrating continuity between creative expressions through the use of technologies. In other words, I favored Marshall McLuhan’s view (1963) that technologies are extensions of mental processes. 
In other words, their relational position with the brain-mind-body meant that all media depend on human actions dependent on mental processes. I wanted to begin with something visibly transformed, yet familiar in its ‘objective’ (i.e., to produce images and sound). Within the range of ‘objects’ used by artists to create visual and sound images, I had wanted to draw out the relationship between artist and object, e.g., musician and instrument, filmmaker and camera, dancer and body, etc. Without fully being able to 34 articulate my plan, looking back, it is clear that my overall objective was to observe what Damasio (1999) deemed as the “unified mental pattern that brings together the object and the self” (p. 11). Second, I wanted to explore the constructivist frameworks our school had been promoting beyond what I had done the previous year. From a pragmatic viewpoint conceived by John Dewey (1958), constructivism was viewed in our school as a student-centered, self- discovery, and direct-experience approach. While constructivism has often been associated with social-constructivism insofar as most experiences take place in a ‘social’ sphere, we were principally interested in the cognitive construction of meaning. We were convinced that we could isolate the ‘object’ and individual from social factors in order to gain a conceptual view of the acquisition of knowledge and skills. The year before, I had borrowed the idea of building ‘atelier-based’ centers of learning, which are the hallmark of the Reggio Emilia schools in Italy. Founded by Loris Malaguzzi, Reggio Emilia schools practice constructivist and transformative learning (Edwards, Gandini, & Forman, 1993). Not easy to define or explain fully, those terms were in process of being unpacked in our school through dialogue and practice, such as the outcome of a successfully implemented series of ‘ateliers’ for children aged 6 to 12. The ateliers were in-class activities that were partly self-directed and partly apprenticed. They included, to name a few: painting and sculpting; performance and composition in music, dance, and drama; animation and creative writing; theatre construction; and puppetry. The activities ranged from ‘low tech’ (e.g., pop up books, dramatic sketches, and dioramas) to ‘high tech’ (e.g., video stop action and digital animation through the use of hypertext programs). Examples of the animation programs included, Hypercard Studio and turtle vector graphics, i.e., LOGO, originally designed by Seymour Papert (1980). 35 Officially as the school music teacher, I was kept busy with school concerts, assemblies, and performances for the first half of the year. Nonetheless, I was part of a school that valued creative means for “building learner capacity, knowledge, relationships, and community,” which defined our school mission statement. As the school was filled with gifted children, knowledgeable teachers, and professional parents ranging from sciences to arts, I had envisioned the ateliers as the ideal ‘living’ laboratory for observing an ensemble of artistic and intellectual practices operating on multiple levels of abilities and interests. The ateliers were successful and although I was able to document much of the process, I was far from understanding how to analyze the disparate ‘data,’ much less synthesize the work that had transpired. For as Alfred North Whitehead (1938) expressed, “We experience more than we can analyze. 
For we experience the universe, and we analyze in our consciousness a minute selection of its details” (p. 121). Nonetheless, the ‘laboratory’ laid the foundation for another project that appeared to me to hold a more ‘unifying’ objective among participants, namely, the art of film production. Teaching film arts in the classroom was not unfamiliar to me, as I had previously introduced a documentary film module to inner-city youth in a middle school. In large part due to being an active member of The Calgary Society of Independent Filmmakers (CSIF) and the age of my students, I had been able to borrow sophisticated equipment from the generous support of CSIF for a short lapse of time. The situation differed at University Elementary School (renamed University School in 2007). The school had purchased sufficient equipment for getting a ‘film’ project off the ground. This included: three S-VHS cameras and tripods, a bank of Macintosh computers with arts-based software programs (Avid Cinema and Adobe Premier), a piano synthesizer that connected via MIDI (Musical Instrument Digital Interface) and computer music software, along with the latest in digital photography. With the school so equipped, it was plain that I had ample technologies to engage upper elementary students between the ages of 9 36 and 12. Having already managed such a complex multidisciplinary setup the previous year, I could already envision grouping the children into film teams that would allow me to address differences in learning by staggering the stages of activities. This was because filmmaking naturally lends itself to multi-stage learning due to the interdisciplinary and independent needs of production. Importantly, we plunged into a ‘new knowledge economy’ implementing all of the latest technologies with our school’s entry into virtual learning (i.e., the Internet), following Tim Berners-Lee’s invention of HTTP (hypertext transfer protocol), HTML (hypertext mark-up language) and WWW (World Wide Web). To situate the school in this new economy further, the new project I had envisioned came merely two years after the development of some major on- line innovations. For instance, in 1995 web pages were dynamically represented with font, type, and layout styles (i.e., CSS or cascading style sheets), which included graphics and icons. The web design language HTML 2.0, as it was called, merged with photo protocols set out by the Joint Photographic Experts Group (JPEG), which being part of the ISO (International Standards Organization) enabled photographs to be imbedded into web pages. The upshot of all those changes to the Internet was that web pages could now support full-blown JPEG images. This new event had us staring with delight and anticipation at the computer screen as colorful photographic images slowly crept into view. It gave us the feeling of watching an image emerge from its chemical bath or Polaroid sensitive paper as the image arose from blurred pixilated squares to a focused and clear representation, which seemed to ‘materialize’ from a ghostly realm into the tangible. The new digital photographic images were more colorful and ‘sophisticated’ in imagery than the ‘geometric’ vector graphics inherent of computer programs to date. Moreover, the photographs could also be printed (albeit with poorer quality than when printed from film negatives due to the low quality printers and papers), which 37 made the whole set-up seem like a veritable ‘do-it-yourself’ photo print shop. 
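To make concrete what this change meant in practice, the following sketch is offered purely as an illustration (the file names and page content are hypothetical and are not drawn from the school project described here): a few lines of Python that write out the kind of minimal, mid-1990s-style web page in which a JPEG photograph could be embedded directly in the markup.

# Illustrative sketch only: writes a minimal, mid-1990s-style HTML page that
# embeds a JPEG photograph. File names and page content are hypothetical.

MINIMAL_PAGE = """<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<HTML>
<HEAD>
  <TITLE>Class Film Project</TITLE>
</HEAD>
<BODY>
  <H1>Our Film Project</H1>
  <P>A still photograph from this week's shoot.</P>
  <!-- The IMG element is what allowed a JPEG photograph to appear
       directly inside a web page rather than as a separate download. -->
  <IMG SRC="class_photo.jpg" ALT="Students operating an S-VHS camera">
</BODY>
</HTML>
"""

def write_page(path: str = "index.html") -> None:
    """Write the hypothetical page to disk so it can be opened in a browser."""
    with open(path, "w", encoding="ascii") as handle:
        handle.write(MINIMAL_PAGE)

if __name__ == "__main__":
    write_page()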
Despite that we still did not possess digital video cameras, the change that most affected us was the fact that moving images could be transferred from analogue to digital and back again. We were also on our way to ‘do-it-yourself’ film production and distribution. A deeper look into the school’s philosophy and the questions it raised The administration and teaching staff at University (Elementary) School were part of a historical visionary educational project (the school having been inaugurated in 1968 as a ‘laboratory’ of learning for pre-service teachers and postgraduate researchers). For this reason, most of the teachers were in possession of a postgraduate degree and actively involved in research. The driving philosophies of our school were mostly gleaned from learning theories in the cognitive sciences, such as the stage theories of Piaget, multiple intelligence theory from the work of Howard Gardner (1983), but also postmodern curriculum theories as proposed by William Doll (1993). By and large, these theories were rooted in the cognitive and aimed toward pushing intellectual boundaries. The only theories I recall that were missing from our staff discussions were social and cultural theories, e.g., critical theory, gender and feminist theories, etc. Thus, curriculum and pedagogy was made up of a system of complex concepts arising from the latest findings in educational research. The practical realization of some of the theoretical frameworks included multi-aged groupings, i.e., placing students in family groupings combining three grades into one class; team teaching; creative arts applications across multiple subject areas; and collaborative, negotiated, and generative curriculum. The latter invited students and parents to participate in curriculum development through large town hall meetings. There was also a focus on what we believed to be transformative learning, which included many concepts drawn from postmodern curriculum theory (Doll, 1989; Pinar, 1995), some of which 38 had been gleaned from mathematics and physics, such as theories of chaos and complexity. In addition, our administrators brought forward studies in brain research that gave us a ‘picture’ of the mental operations of the cerebral cortices, i.e., ‘higher order thinking.’ In essence, the school was awash in some of the most current educational theories in an established laboratory setting that looked principally at cognition. In preparation for the six-month unit of study, therefore, I was eager to test some of the many theories that drove our discussions. With transformative and constructivist learning in mind, the objective was first to explore and play with the new technologies, to view and critique many video images, to deliberate on varied film techniques and then to create short narratives that explored a topic students were studying in other subject areas. With a dedicated view of cognitive sciences, the teaching staff placed a strong focus on the conceptual and high-order thinking (i.e., the analytical, synthetic and evaluative). General concepts, we euphemistically called ‘big ideas,’ were viewed as ‘mind-expanding’ tools that allowed one to build intellectual capacity because of the factor of connectivity with diverse notions, topics, subjects, and ideas. The transformational learning would come through multi-disciplinary activities infused with creative applications that brought individual and ‘objects’ into close encounters. 
By bringing universal concepts and particular ‘objects’ together, we hypothesized that learners would bridge knowledge and skills that would bring rational meaning to the whole. By rational I am referring solely to conceptual processes, because nowhere in our discussions did we center on emotional and social aspects directly. Instead, the two were viewed as ‘by-products’ of a richly endowed intellectual environment with ‘social’ frameworks. Naturally, the ‘missing’ discourse prevented us from probing deeper into socio-cultural or emotional aspects on the whole. Notwithstanding, given some of the deep emotionally laden conflicts that arose between staff members and also between staff members and parents in relation to the implementation of some approaches, the question was raised as to whether it was sufficient to focus entirely on cognitive development. The concerns that were heatedly expressed inevitably forced me to ponder such matters.

A surprising find: The puzzling field of film literacy

After some time rummaging in the school’s storage spaces, I finally found a 16mm reel-to-reel film projector and, to my surprise, a box of old newspaper clippings, several 35mm film reels, and a folder of teacher notes. The yellowed clippings were from the mid-seventies and informed me that a teacher had once boldly introduced our school to cinematic arts. I was impressed, to say the least. While I had made a hobby of shooting short films with a Super 8 camera or video whenever summer rolled around, 35mm film projects seemed that much more sophisticated (not to mention expensive to operate, purchase, and print). Moreover, one of the articles applauded the educational initiative taken to introduce Alberta students to a new film curriculum, joining the active British Columbia school districts. It was not surprising that BC had already launched a film curriculum in schools since it was third in line as a film Mecca in the Canadian film industry (behind Ontario and Quebec). Encouraged by the teacher’s mention of several useful filmmaking books (including several from BC), I was fortunate to track these down in our school library, where they had been tucked away under communication arts. The books gave a general overview of filmmaking and, in themselves, were historically interesting. Mostly, they contained the techniques and approaches to filming, lighting, editing, and sound, as well as photographic settings, film handling, printing, and projection, not unlike the commercial and highly technical step-by-step video production books then in current use. Just as the latter texts would eventually become (they have since been updated to accommodate ‘digital’ video production and projection), the technical concepts outlined in the former film texts were generally outmoded by our latest technologies—as were the ‘teaching’ approaches that accompanied the concepts and techniques. In truth, they were not manuals or even textbooks so much as dictionaries filled with terms and definitions. Several books were of particular interest as these were written for children and contained a few photographic principles that had been a staple throughout cinematic history—and are likely to remain so (Listone & McIntosh, 1970; Lowndes, 1968). Such things as camera angles, panning and zooming, lighting for mood, and other types of common film elements were explained with illustrations. Sound engineering also had a role, which was useful to me since I wished to continue activities using MIDI in my music classroom.
Naturally enough, like most instructional texts on the arts (as Aristotle famously showed), the children’s books were focused on the technical and practical, giving little notion of the poetics of film arts. Heartened by the fact that there was some mention of the artistic connections to traditional art expressions the children had already experienced, especially photography, I seized the opportunity for designing integrated learning. Connections to traditional forms of expression were of particular significance to me, first because they fit with our school mission statement: collaborative learning that is interconnected, interdependent and generative. But more importantly, as a multi-disciplinary artist, I found that the framework fit with my personal experiences in the arts. Finally, some books mentioned the “above-the-line” specialties in film—actor, director, photographer, and editor—whose glamour inevitably outweighed the “below-the-line” specialties (such as lighting and sound engineers, set design and decorators, costumers, continuity person, script manager, etc.). I remember feeling that this view tends to cast filmmaking as the province of a talented few rather than the intentional, thoughtful efforts of many. In this sense, I was prepared to broaden the bias toward an appreciation of ‘artistry’ in general. Not only would filmmaking exemplify collaborative work, but it would also demonstrate the capacity to bring together a shared vision, diverse abilities, and purpose in a community of teaching and learning. Moreover, tucked in the back of my mind, I had retained the notion that filmmaking was an auteur art (not unlike creative writing). This notion was based more on my filmmaking experiences than on the film theories that would later cross my path. Since, in the majority of instances, I had also written, directed, and edited the films I had produced, I was biased toward seeing myself as having authored the works.

Despite the limitations of the teaching aids and scant notes that accompanied the film texts, I was humbled to learn that an enterprising teacher had pioneered a unit of film study in the same space some twenty years before my “innovative” filmmaking venture (as it was later acknowledged by the Canadian Teachers’ Federation). I remember thinking, “What a coincidence! Could he have been a kindred spirit?” I also remember thinking how enjoyable it would be to interview him, or any students, now adults, to cull their memories of such a unique experience. Undertaking my film project had appeared to me as a new way of learning, of seeing the world, of generating and constructing one’s own knowledge, of understanding self and the world. I realized, however, that those images had simply come full circle some twenty years later. I also realized that this intrepid teacher and his students had felt the spark of creativity not unlike the “Langley Schools Music Project” (1976-77). I imagined that those educators viewed their projects as giving birth to a new curriculum and pedagogy. Though I was unable to locate him, his phantom presence prodded my nascent film curriculum. What I have slowly come to understand fifteen years later is that phantoms had been haunting our school long before the twenty-year gap between our two units of study began.
Some of those phantoms had been partly responsible for a short-lived film curriculum that had been introduced in our school, despite that the news article had lauded it as being a step toward a ‘new literacy.’ This idea of a new literacy (though I was not clear what that meant precisely) was my 42 general intent toward constructing such a project in the first place. Naturally, I was surprised to see that between then and now, its presence had vanished for such a long period of time. For a brief instant, that vanishing made me hesitate. As an artist, tucked in the back of my mind lived the notion of ‘legacy.’ A sense that what we as artists do will leave a lasting footprint on culture and society, not necessarily out of a sense of achieving ‘immortality’ (even if this factors for many) but out of a sense of causing an ‘evolution’ of thought that would shift society ever closer to ‘higher consciousness.’ Generally speaking, it is safe to say that all educators think of their work as a form of legacy to the next generation. And as an arts educator, therefore, my notion of ‘legacy’ was founded in the ‘transformational’ outcome of learning in and through the arts. I held firmly to the feeling that arts education as a form of ‘literacy education’ was an emancipation project, which I had interpreted as a means of knowing acquired so thoroughly as to be a way of knowing that would allow one to counter ‘dominant’ voices in whatever form they arose. For me, the arts were another kind of ‘language’ that certainly required an ability to ‘decode’ and ‘encode’ artistic works (i.e., knowledge and skills). By its thorough learning and application, the arts gave one the liberty to choose a way of thought or action no less than by the written language. This view of arts education was heavily influenced by my discovery of the philosophical writings of Elliot Eisner (1997), Maxine Green (1995), and Langer (1942). But their philosophical views on arts education would not have resonated quite as strongly if I had not been struck from an early age with literature that impacted on my understanding of social justice, such as the writing of Antoine de Saint-Exupéry and works by Dickens, Dostoevsky, Molière, Soljenitsin, Steinbeck and Shakespeare. In addition to literature, I was also drawn to artists who had pioneered an art form in an act directed against the tyranny of social or academic views, such as Beethoven, Isadora Duncan (pioneer of American modern dance), Charles Chaplin, and 43 Stanislavski. It is difficult to pinpoint where my feelings toward the arts as an artist and arts educator began and ended as the two were embroiled in a common sentiment. I was thus faced with a dilemma with respect to the ‘legacy’ I imagined could ensue from such a project. My experience in teaching music, drama, and dance in the schooling system had led me to sense that as far as ‘literacy’ was understood, it was drama that held a position of merit among language educators. It is not hard to understand this place of privilege since written and spoken language go hand in hand with classic theatre. And while ‘notation’ has held importance in music, this form of ‘literacy’ was easily confused with a mathematical symbol system, a subject that came second to language. Dance, of course, is the most ephemeral of the three since its form is ‘mute’ and without notation (save for the short-lived Laban notation, film has been the only means of preserving and archiving dance). 
For these reasons, I focused on creative drama to better articulate with notions of literacy being explored in schools. Hence, student projects tended toward combining drama and music or drama and dance. Because drama and language arts education were thoroughly entwined, as my own Canadian schooling experience told me, it seemed natural that drama was a pivotal art form for arts educators interested in issues of literacy. As far as I was concerned, film and drama were so closely affiliated through language that I saw no reason why film education would not become a subject as thoroughly entrenched as drama. Naturally, my thought was that I could ‘bootstrap’ music and dance as part of the ‘package’ of arts employed in film. But evidently that was not the case. With only a subjective experience as my means of measurement, I had no theoretical means by which to bring music and dance into the discourse on language and literacy education any more than film arts. Dance, music, or physical education, for all intents and purposes, were fleeting ‘movement-time’ arts, unable to match the conceptual sophistication of thought that comes from language itself in the act of reading and writing. That said, film appeared to have all the elements of a ‘written text,’ and seemed the likeliest to enter the realm of the conceptual sophistication of thought. Why had film not reached this status in an educational setting?

This nagging question seemed to bring up the worry of offering a subject that carried no lasting societal value. With a hollow feeling, arts education seemed to echo the fleeting, ephemeral traces of art processes that momentarily brighten the lives of the indifferent, the bored, or the down and out (i.e., art therapy). Worse still, following a backlash against the politicization of the arts as cultural transmitter or form of ‘critical pedagogy,’ many arts educators resorted to saying, “Enough! Let’s do arts for arts’ sake.” As sympathetic as I have been to art therapies or the cry of “arts for arts’ sake,” those avenues did not open the door to intellectual participation in a literacy movement that we all felt had been gaining momentum since the word “literacy” was first coined in 1883, according to the Oxford English Dictionary. Having just crawled out from under Reagan’s educational “Back to Basics” policies, which affected us in Canada as much as in the United States, I longed for a new theoretical foundation for artistic experiences. To some, it would appear that the arts in education are by nature aesthetic, cultural, and social, operating wholly within a semiotic framework. As important as that framework may be to understanding a part of the arts, a semiotic framework is said to operate at odds with cognitive and neuroscience theoretical frameworks (Petric, 2001). In the tension, therefore, between semiotics and cognitive science, we end up with an ongoing nature versus nurture argument that may not advance our understanding of the whole of arts, why we do art, or how the arts are equally important to language as “alternate forms of representation” (Eisner, 1997). For my own sense of self-worth, the answer to those questions was an imperative. My entire concept of self was wrapped up in a lifetime invested in the study of performing arts, which deeply connected to my conceptual understandings of multiple subject areas, alongside agency and democracy.
Any theoretical gap distanced the ‘importance of things felt’ from ‘the matter-of-fact.’ I had no desire to follow schools of thought in music or dance that tried to force a pre-de Saussurean semantic view of sound and movement that fit into a linguistic paradigm “concerned with the diachronic study of the signified” (Colin, 1995, p. 103). That is to say, the meaning of language was dependent entirely on temporal contexts. On the other hand, I was not quite ready to embrace de Saussurean semiotics either, i.e., language as a formal system independent of the temporal-spatial production and comprehension. No matter what my mental processes consisted of, I was not ready to accept that a mode or a gesture had conceptual and definitional meaning any more than a color or ink blot; neither was I ready to accept that sound and gesture are mere social signifiers that ‘point’ us toward meanings that are purely contextual. In search of more than socio-cultural meanings, it was natural to want to turn to cognitive sciences. Of course, the desire or need to tap into the cognitive aspect of the arts, flies directly into a wall of defense as an attack on the aesthetic or the ephemeral. As a dancer, I would be disingenuous if I pretended to dance to enrich my mind. I dance for joy and as far as I can tell, Whitehead (1938) was one of the rare philosophers who counted “joy” as having importance and an end onto itself. Joy is different from pleasure—I feel that too when I dance as I also feel my muscles aching and burning, which has often placed dancers, especially ballet dancers, in the strange realm of enjoying ‘pain’. Joy is a feeling of elation that lingers for many days or years without regret. If there were ever a perfectly good reason to exist, I believe it would be to feel joy. And as someone whose first object in life was to dance, it is possible that this lasting feeling of joy that I have felt my entire life when dancing (especially ballet) may have come partly as a result of the nature of movement and partly as a result of a fascination with movement. Now the question raised was: what is movement? 46 The fact remains that the arts have only tended toward explorations of aesthetic, cultural, psychological, sociological, pragmatic, and semiotic, whereby the arts are deconstructed through isolating their parts into single frames in order to render meaning from a rational viewpoint. That is to say, the whole is rarely taken into consideration particularly with regard to movement, which plays an essential role in film, dance, and music. The study of movement is complex due to the corollary of changes in time and space, which change necessitates a comprehension of forces (as motives and intent) and the ensuing effects. And although the study of society and culture also undertakes the study of historical movements, given our short life span, we are continuously reduced to looking at ‘moments in time and space.’ In a film context, analysis consists of looking at still frames rather than the psychic integrity of movement. There is absolutely nothing wrong with studying the arts as moments in time and space, just as there is nothing inherently wrong with studying the arts from rational frameworks. But it is clear that all such views are partial. 
To move toward a fuller understanding, the arts must be examined beyond rational theories and must include the study of emotions without solely relying on cognitive, social, and psychological theories that have tended toward ‘rational’ accounts of emotional phenomena (Damasio, 1999; Kivy, 1997; Plantinga & Smith, 1999; Smith, 2003). Essentially, in our preoccupations with mind, we had been primarily focused on the media that allow analyses and critical capacity (i.e., clarity of thought). Thus we had acknowledged the media of language, technology, and arts, but each with their own particular means. We were able to describe, define, categorize, order and identify systemic rules of varied media. We could compare the mind’s processes to machines and information processing (i.e., input, output, storage, and logic). Thus we were able to mechanize systems that are the ‘medium’ of thought, as easily as we have mechanized the system that is thought, as Descartes did so effortlessly. But the medium that is thought, understood not as machine but as an organism, and as separate from language, technology and arts, is as difficult to tease away from the latter as it is to tease brain from body or body from mind. What was missing is what Whitehead (1938) eloquently expressed: “There is no such independent item in actuality as ‘mere concept.’ The concept is always clothed with emotion, that is to say, with hope or fear, or with hatred, or with eager aspiration, or with the pleasure of analysis” (p. 167). Poetry, of course, demands that we consider humans holistically and that we do not tease such things apart, as can be noted in the words of William Butler Yeats.

Labour is blossoming or dancing where
The body is not bruised to pleasure soul.
Nor beauty born out of its own despair,
Nor blear-eyed wisdom out of midnight oil.
O chestnut-tree, great-rooted blossomer,
Are you the leaf, the blossom or the bole?
O body swayed to music, O brightening glance,
How can we know the dancer from the dance?

Indeed, as a dancer I want to immerse, nay lose myself, in the emotional content of movement filled with joy, fear, anger, sadness, and surprise. To anyone with a deep devotion and sensitivity to aesthetics or to the emotional content of the arts, it seems like a mean and picky thing to try to tease apart the poetics of art from the artist. But herein we are at a loss as arts educators to explain what we know in body and soul. Yet, it seemed to me that to understand artistry, as a measure of both thought and emotion, clarity must be sought after. As I pondered semiotics, for instance, I wondered how one could speak of signification that is aesthetic, psychological, social, cultural, and pragmatic—all of which is attached to emotion—without giving some attention to the brain, which is the medium that leads to thought or rather, as Damasio (1999) points out, to images. It could very well be that images, upon which we create meaning, are deeply connected to emotions, whose very meaning is ‘motion outward,’ in the same manner as language (Pinker, 2008). Borrowing David Rodowick’s (2008) thoughts on the study of film and cinema, the movement arts (i.e., dance, film, music) have suffered from particular investigations as “a rather vulgar philosophical empiricism,” and whatever was gained, “in their signifying processes and in their social and historical contexts…lost a possible knowledge of a generalizable theory” (p. 388).
Then again “in an effort to become more scientific, theory risks, sadly, becoming more conservative and reductionist” (p. 389). From the preceding perspective on film theory, it is difficult not to feel a win-loss polemic. But I did not believe that the root of the problem in understanding the movement arts (especially film) lay in competing theories. I saw the root of the problem beginning with the limited manner in which movement arts are studied as still and lifeless structures, with parts possessing so many ‘signifiers’ that the whole could be thought to be understood solely by its parts. This class of signifiers, by Rodowick’s (2008) account, is a “codified system that nonetheless escapes notation” and is thus aligned with Metz’s description of film, namely, “an imaginary signifier.” Since, on a philosophical level, the notation of any movement art (including music) is more or less ‘imaginary,’ it is the “experience of the imaginary signifier [that] is something of a psychological constant” (p. 389). Put another way, the movement arts (which include spoken word) are thought to differ from written language by virtue of their ‘ephemeral’ qualities and must be experienced to be known, that is to say experienced in time and space to be felt as ‘real.’ It is the quality of ‘utterances’ as ‘images’ in sound, film, and dance forms that we can speak of as qualia, a term philosophers use to describe the purely subjective experience of a thing (e.g., the redness of red). Confusingly, the arts also allow some codification and standardization that continually bump up against the problem of knowing a priori solely by notational or representational means. This is especially true of language, but is also true of all the arts. How does one explain a purely subjective experience to one who claims to know by means of rational thought? Short of finding a means to resolve this paradox, one has a feeling that a view of film as the ‘imaginary signifier’ falls short of a full understanding of what constitutes an image, whether sound, visual, kinetic, haptic (i.e., touch), or visceral. However one chooses to understand what an ‘image’ is, we are also faced with the problem of understanding ‘representation,’ and ontologically speaking that ultimately leads us back to the problem of ‘Reality.’

While many images drifted through my mind as I pondered film education, I could not help but sense the phantoms that haunted my elementary school—and, more precisely, that haunt the halls of all institutions of learning. Their traces remain as the memories of those who lived and experienced, observed and thought, invented and experimented and, finally, wrote and immortalized. We appear haunted by so many images that arise by poetic means, yet most often immortalized by words. Artistic artifacts that remain (the film reel left behind by the teacher, which I was never able to view since I lacked a 35mm film projector) are mostly made ‘meaningful’ by the written word that ‘fills in’ what has gone missing (the time, the culture, the artist’s intent). Without words to accompany the presence of artistic artifacts, it is all too easy for those images to simply vanish from insight. Of course, one could say as much about literature. What would remain of literature without the endless written reviews, musings, and critiques? What of film, I wondered. Does it not also possess all the qualities of immortalizing ideas?
Finally, why has education come to treat all digital activity as the seat of learning in such a short historical period of time, whereas film and arts education broadly speaking, despite their longevity, have not enjoyed such a seat at the table of educational basics? Why has educational research focused so much attention on the one but not the other? The only way I could begin to unravel those mysteries was to take stock of education’s film history.

Surveying the situation in education

When surveying film in education, what became clear from the outset was the fact that there were complex histories of film research that emerged from diverse disciplines of thought. First, because of a lack of specialization in film curriculum and pedagogy, educational film research has periodically arisen in domains with an interest in language, literacy, communication, media, sociology, psychology and policy. This aspect of cross-disciplinary action in film educational research has blurred the boundaries of the study of film as a whole. In other words, there has been no central framework that would be considered the hub of research activity on film in schools of education (i.e., Faculties of Education). Second, independent of the schools of education, whose focus has been principally directed toward public education at large, there exists a complex history of film studies within universities. Some of the research has been and continues to be centralized in Faculties of Film, while other work is found under the broader umbrella of Liberal Arts or Communication Studies (the latter having the most influence in education). Film research, therefore, is as widely distributed across multi-disciplinary areas in the whole of universities as educational film research is distributed across faculties of education. Interestingly, there have been many more educational researchers who have embraced some of the field theories and methods established in the disciplines of film studies than the other way around. While it is possible there are some who have penetrated this barrier, I did not come across educational film theorists, cited or otherwise, within a large body of research situated solely in academic film journals. Third, both educational film research and contemporary film studies have drawn their theoretical and methodological frameworks from broader fields of study in philosophy, linguistics, psychology, and sociology (Casetti, 1999). Thus it was clear from the start that a rather daunting task lay before me with respect to reviewing salient literature and drawing out a cogent critique that has an impact on arts educational concerns. As the worlds of research unfolded like Russian nesting dolls, each one profoundly mirroring the other, the entire universe began to look like Charlie Kaufman’s 2008 film Synecdoche, a film that visually portrayed a small theatre, within a larger theatre, within the theatre of life. The film tried to render a view of chaos and complexity, but its mirrored portraits are perhaps why it was not entirely successful among all critics. I discovered, however, a useful metaphor to help illustrate this chaotic and complex network of research activities. A pithy explanation of the emergence and evolution of the field of film scholarship by scholar Francesco Casetti (1999) has broad applications.
From amateur interest to specialization to internationalization, Casetti’s description of film study from start to present is exceptionally useful to explain the emergence and evolution of any new field of study. Generally speaking, at the beginning of emergent fields of research, borrowing an idea from Bakhtin (1981), we find a “carnival” of people engaged in the discovery of new wonders after a point of rupture from an ‘ordered’ world “marking the entry into the field of a new paradigm,” (Casetti, 1999, p. 55). Furthermore, with respect to this cinematic ‘carnival,’ Casetti points out, “the debate about the new medium appeared to be open to anyone” (p. 8). True to form, a review of early film literature reveals that contributors to the ‘new’ field of film came from diverse backgrounds. From journalists to artists, from social critics to philosophers, psychologists, anthropologists and scientists, contributions to film ‘theory’ included such luminaries as Freud, Bergson, Adorno, Levinas, Merleau-Ponty, Barthes, Hugo Münsterberg, Rudolf Arnheim (who coined the term, 7th art), Béla Belázs (librettist for musician Zoltán Kodály), and Siegfried Kracauer (who worked alongside Walter Benjamin). 52 From those early strivings to explain film, one can begin to surmise what set off the ‘carnival’ of thought. The rupture of the ordered and ‘flawless’ transition between what was then and what is now, as McLuhan (1963, 1967) tried to explain, comes about when an artifact shifts from the ‘background’ to the ‘foreground’ of attention (e.g., motion photography). This change of focus renders a feeling of rupture in continuous space-time. When introduced into society, an ‘object’ or artifact that alters the flow of movement that has, up to that point, been sensed as a constant will inevitably dramatically alter the way people feel and think. This phenomenon is most often experienced in a music context. In the case of motion photography, a new ‘mirror’ on the world shifted the images that are were mere “movies-in-the brain,” as Damasio (1999) described the workings of the mind to a visibly external ‘reality.’ In other words, when moving ‘images’ past and present are split apart, largely due to the introduction of a new medium, a subjective sense of continuity is no longer sewed together seamlessly. As a phenomenon shifts from novelty to a cultural staple (in the case of cinema, widely accepted as an art form), “theory was no longer produced by private clubs, animated by enthusiastic amateurs, but research groups and pressure groups that became the meeting point of professionals working in the field…there were no longer only film schools, but universities” (p. 8). As the ‘products’ of innovation gain momentum, new governing agents begin to shape policies and economies. This rite of passage is apparent in the histories of arts and sciences in general. In the case of film arts, governing sectors evolved to oversee artists, production, and distribution of ‘goods and services’ (e.g., studio systems and national film boards). In film studies, those who gained theoretical authority included university film faculties, a few exceptional film centers of high repute, and various journals or film magazines that regularly published scholarly works. The political and economic developments involving film works, 53 while appearing tangential to the intellectual forces, are critical, yet often overlooked by those unfamiliar with the terrain. 
Although this particular aspect of film arts merits greater attention in light of the impact of political and economic forces on consumers, researchers, and the artists themselves, there is not enough space to provide a detailed account herein. At any rate, between widespread acceptance and the rite of passage from carnival to academy, there is a move toward specialization. Specialization, according to Casetti (1999), operates on "three different levels" (p. 8). First, there is the "separation between theoretical and ordinary language," which, for the most part, creates a necessary step between 'amateur' and 'expert.' Casetti elaborates that in film study, "We moved from a common lexicon, scarcely marked by any technical terms, to a real jargon, full of words that defy immediate decoding, at least some of which were borrowed from other fields" (p. 8). In the new scholarship of film study, lingo was necessary to distinguish the new discipline of "filmology," according to Casetti, by "explicitly proposing a new vocabulary (filmophanic, profilmic, diegetic, etc.), just as semiotics and psychoanalysis would become exemplary in the 1960s and 1970s by tending toward private lexemes (syntagmatic, icon, suture, etc.)" (p. 9). As with any field (academic or otherwise), either in its initial stage of development or in its projects of renewal (which we also call 'reforms'), a new lexicon is very nearly a rite of passage. As many who enter the field of education have discovered, learning educational jargon is the difficult first step in becoming a bona fide teacher or educational researcher. Next, there is "a separation between theory and criticism" (Casetti, 1999, p. 9). Between those who systematize and those who interpret, rather than "a mutually enriching interchange that makes theory into a sort of conscience of criticism, we observe an increasing mutual indifference. The categories by one group do not rely immediately on the discourse by the other" (p. 9). In fact, this systematization was discussed at length in the works of Whitehead (1938) in relation to philosophy, mathematics, and science. Thus, much as has occurred throughout the history of intellectual works, the debate that ought to lead to a "conscience of criticism" merely invites a debacle emphasizing the distance between experts rather than the proximity of a "mutually enriching interchange" (Casetti, 1999, p. 9). And finally arrives the moment of which all practitioners, from one end of the knowledge spectrum to the other, are made sharply and painfully aware: "there is a separation between theory and practice" (p. 9). Whether in the fields of arts or education, the mutuality of practitioner-scholar disappears. Even arts-based researchers are forced to disown the very art processes and products that led them to the academy in the first place (except when those are dressed up as scholarly). To use Casetti's words in comparing film arts to pedagogical arts, "the critic" (i.e., scholar) is "both a mentor and a prophet," while "the filmmaker" (i.e., practitioner) is "in the guise of a witness and explorer" (p. 9).
And just as is so often the case between teacher-practitioner and educator-scholar, "in their place appears a theatre of incommunicability in which the theorists dream" of an educational system (i.e., cinema) that "does not exist, and yet continues to be proposed," while the practitioners (i.e., filmmakers) make the classrooms (i.e., films) they "want or are able to make, not paying much heed to the suggestions they are given" (p. 9). Making matters more complex, whereas prototypal forms created at a local level may be studied with confidence due to small and undiluted concentrations (feeling some control over the narrow 'influences'), as soon as local forms are accepted and specialized, they enter international territories. In the 21st century, the speed of globalization (i.e., internationalization) has reached unprecedented velocities. Once internationalized, there comes a point of 'saturation' when forms (old and new) become an indistinguishable blur as they begin to mingle, meld, blend, hybridize, merge, fuse, infuse, overlap, and mix. The study of subjects, be they music, film, education or otherwise, is open to a much, much larger field of specialized operation, which generally invites analytical and scientific methods that bring order from chaos. Yet, with so many fields of study now blanketing the world, one begins to wonder, just how many particulars can a limited world ponder and generate? More precisely, how many particulars are needed to find a pattern of general understanding? As the preceding section demonstrates in its analogous and metaphoric description, one is able to survey the landscape through dynamic images. In this case, it is the history of fields of knowledge (i.e., disciplines) that move from an ambiguity of practice (the amateur, the carnival) toward methods of description and explanation (the expert, the academy). Studies in cognitive science suggest that our linguistic faculty, which, in addition to expressing events, actions and states of being, saturates language with "implicit metaphors like events are objects and time is space," is linked to our conceptual faculty's "ability to frame an event in alternative ways" (Pinker, 2007, pp. 4, 6). So aside from depicting movements of history, Pinker also brings us face to face with a troubling question: "[If] language is supposed to give us a way to communicate who did what to whom, how can it ever do that if two people can look at the same event and make different assignments of the who, the what, and the whom?" (p. 52). Clearly my reading of Casetti's historical account of film studies presented itself as analogous to the fields of knowledge and practice with which I have been engaged, not the least of which is arts education. The capacity that we have as human beings to look at events (or objects) in different frameworks demonstrates a "cognitive flexibility," which, according to Pinker, "is in many ways a blessing." Nonetheless, Pinker also points out that "in figuring out how language works, it is something of a curse" (p. 52). The same 'curse' may be said of film arts. Professor James R. Elkins, from the College of Law at West Virginia University, who teaches a course on the interpretation and critique of 'lawyer films,' posted on his course website, under a section on film theory, an essay by PhD candidate Stephen Rowley (2008) entitled, Is film theory bullshit?
The essay starts by positing a dilemma frequently encountered by beginning undergraduates critiquing films, namely, how to respond to those who view the 'interpretation' of films as a type of 'free-for-all' whereby films may be made to "mean anything." In my view, the answer to this dilemma does not rest upon the curse of cognitive flexibility that Pinker (2007) eloquently describes. Rather, the solution rests in examining the cause of cognitive flexibility. To know that we are capable of "flipping the frame," as Pinker suggests, such as shifting from seeing the 'old lady' to the 'young lady' in this and other famous dual-perceptual images, is one thing. To understand why we are able to flip the frame, why we are able to 'see' things from multiple perspectives, is another. To bring further insight into our human capacity for cognitive flexibility, therefore, one may begin with an account of historical events. In the retelling of such histories, a pattern emerges that points toward a human capacity not yet fully explained in the cognitive and social sciences. In other words, the following, which traces the historical accounts of contemporary theories in film and literacy education, brings us closer to understanding the events and will demonstrate, upon inspection, that they are rife with contradictory theories.
Motion pictures: The event that changed the world
It would thus appear to have been seventy-five years after its invention that film production veritably burst into the school curriculum (Listone & McIntosh, 1970). Geller and Kula (1969), for instance, who cited a survey conducted by the American Film Institute (AFI), explained that "the incredible growth of film is seen not only on the university level," where it was "discovered that 5,300 students are now preparing for a career in film production, film scholarship, or film teaching," but also "within some 22,000 U.S. secondary and elementary school systems," where "there is an increasing interest and activity centered on film that literally boggles the mind" (p. 98). To which the authors add, "the stacks of mail arriving at AFI daily announcing new film programs and requesting film materials and recommendations lend additional proof, if any is needed, that film is in" (p. 98). Notwithstanding Geller and Kula's exuberance, as English teacher Eleanor Child (1939) attested, it was as early as 1937 that an NCTE (National Council of Teachers of English) survey showed that making films had become a means of "making school work more vital, practical, and appealing" (p. 706). And in her article, Child went on to explain the practicalities of beginning a production program in the school. Needless to say, discovering there had been pioneers involved in school film production as early as the 1930s has not yet ceased to amaze me. Clearly, some forty years after film's invention, a group of English teachers had been busily engaged in film production; another thirty-some years elapsed before there was a new wave of production initiated by arts educators and non-profit arts organizations, and an additional thirty-six years went by before film production would enjoy the ubiquitous, sweeping, and unprecedented position it holds today in schools and community projects worldwide. A mere one hundred and fifteen years passed between the introduction of film into society and the moment the average individual could produce, publish, and distribute a film worldwide—a feat that has outdone mass print technologies, which took over five hundred years to accomplish the same!
This historical perspective leaves much to ponder. Thus film production had found its way into North American schools as early as the 1930s in large part through the use of standard 8mm film cameras, which had just been invented. Another wave of film production in the 1970s came about with the introduction of Super 8 cameras, which apparently, as I had discovered in my school archives, went so far as teachers using traditional 16mm or 35mm film cameras (a project undoubtedly financed by my school’s 58 wealthy community). Then, into the 1980s, video production became widely introduced in schools when video-8 and Hi-8 video formats became commercially available. While film viewing for educational purposes and critiquing (i.e., media literacy) has been education’s main staple—rising further in popularity by 1984, when large quantities of films were transferred to analogue video rendering them economical and practical for teachers (Cox, 1984)—it would appear that with each new development in camera technologies, film production also seemed to increase (Buckingham, 1990). This has evidently been due to the fact that with each camera innovation, filming became more cost effective and required less technological know-how. Additionally, as far as producing, viewing or critiquing film, video made the costly and cumbersome use of reel-to-reel projectors unnecessary. Though in principle, film critiquing also benefited from technological shifts, as was evident when media literacy took off in the Eighties once films were transferred to video, making it possible to rapidly rewind and review a segment several times over. Notably, the ‘freeze frame’ technique for film analysis commonly used by Jacques Aumont (1996) was not yet available on video format, as analogue video did not hold the image clearly. Since the projector did not hold a frame at all (stopping a projector on an image was not possible without melting the film), film analysis only benefited from multiple viewings in analogue video (rewind/fast forward). To analyze a single frame, therefore, it was necessary to put the film through a manual editor such as the Moviola or a flatbed that resembles a reel-to-reel audio editing device. To this day, the freeze frame technique for analysis, which Aumont (1996) suggested paralleled the analysis of written texts (where one may pause on a phrase), is one that is questioned since moving pictures, unlike words, filmstrips and slides, are viewed in motion, making ‘motion’ an 59 important component of understanding film (Casetti, 1999). Once again, film critique entered the realm of rational, reductionist thought by studying still images as parts of the moving whole. At any rate, the principle of innovation holds today, but at an increasing speed and by quantum strides. I suspect that the focus on production at the turn of the 21st century, which has changed educational interest from mere ‘viewing’ to ‘creating’ knowledge with images and sound, as Buckingham (2007) and others have noted, came about through the introduction of digital cameras along with digital post-production technologies that take filmmakers beyond merely capturing film (i.e., on digital 8 or mini-DV) toward non-linear editing, archiving, and distributing (i.e., sharing on the Internet). 
What we now have in our power, and what differs from all other periods of camera or film format innovation since cinema's invention, is the capacity to create fully with images and sound, which is not merely a shift from linear to non-linear editing (like the evolution from editing on a typewriter to a word processor). From concept to photography, from photography to editing, and from editing to distributing or sharing, the average individual now can be a full-fledged producer and distributor. Thus, today's filmmaker, as filmmaker Rodriguez (1998, 2004) epitomized, is a self-contained, self-taught, self-mastered, self-distributing production studio with access to all the necessary elements, including fully orchestrated sounds in the public domain that can be re-mastered to create a high-quality soundtrack, and a free distribution center that takes minutes to upload and share (e.g., YouTube, Vimeo). And even if there are hundreds of thousands of images that are not much to look at and subjects that are just as banal, this new age of sharing images globally was made possible by the invention of YouTube (2005). In many ways, that new age parallels the 'age of Daguerre,' beginning in the nineteenth century, whose photographic invention in turn began the 'age of the Lumières,' whose filmic invention sparked the twentieth century filled with motion pictures. In any case, one can easily see why a film production curriculum, as an artful process, had a tendency to just fade away in education until the 21st century. If a young filmmaker could not edit, orchestrate, or distribute, the film experience would remain a wonderful and unforgettable process, but to the teacher, it would be an unfinished work not easily evaluated or defended in an age of accountability where the objectivity of criterion-referenced assessments and standardized evaluations is given education's highest value. Despite the importance that process plays in a learning environment, educators continue to be firmly anchored to a productive end largely due to the accountability factor, all the more realizable in today's digital context. This is not to say that film or video has faded away in education—on the contrary, the viewing of films and videos continues to be plentiful in various subject areas as part of the learning process (e.g., to clarify and illustrate meanings). I am specifying, however, that viewing and making are two separate cognitive functions, which is precisely what we know of reading and writing (see footnote).1 Put another way, it is almost impossible to objectively determine the emotional and mental growth, intellectual capacity, and cognitive achievement of a learner without rational evidence. And despite the importance of filmic texts, which specialists across multiple disciplines have intuited since film's inception, school film viewing and production had been left in a status of novelty or an enriching pastime that made school more palatable, yet easily shelved and forgotten when more pressing ideas would take hold—until such time as technology changed the nature of viewing and production. 1 Canadian author Howard Engel (2007) has written a memoir, The Man Who Forgot How to Read, highlighting how a stroke left him utterly unable to read even though he could still write, demonstrating what neurologists have come to see as two separate cognitive functions.
At any rate, until such time as society had been fully saturated with images, produced by children and youth, as well as by activists, politicians, corporations, social pundits, religious organizations, and journalists, many educators could not predict the unfolding events attributed to film's invention. Today, educators have sounded the call for action, and the political motives span a range of individuals anxious to rein in what seems like a runaway phenomenon. A phenomenon that resembles a gigantic tidal wave set in motion by undercurrents of activity not well understood is an event that demands explanation.
Graduate researchers and the emergence of a collective
Thus far, this historical overview helped me to uncover the sense of 'urgency' that has motivated and driven educators to research new digital works comprised mainly of images (i.e., visual and auditory). But a more detailed historical view has helped me unearth many more important issues. As time progressed, a worrisome thought arose from my readings on historical film research within and without education. I wondered how I would join my experiences with broader theoretical understandings of film in an educational context. While I was located in the department of curriculum and pedagogy, as a multi-disciplinary educator and researcher, the first thing I sought was where to situate my investigation. The second pursuit was finding a means to transform my experiences into scholarship. To cross the threshold of experience, from making films and teaching with new digital film technologies into film scholarship in an educational context, required a meaningful goal. As the famous modal jazz piece by Miles Davis goes, I needed to find the 'So What' of my investigation. When I had embarked on my filmmaking unit, fifteen years earlier, I did so with an awareness of many issues in education that menaced my sensibilities toward agency and creativity. As mentioned, that elementary school film experience had been my second foray into educational filmmaking. The first had been at a time when I was teaching various subjects in an 'inner city school' filled with youth challenged by social and learning issues. As has been common in many school districts dealing with difficult behavioral concerns that often lead to tragic events, that inner city school was modeled on behavioral psychology. The result of such an approach in an educational context, while useful in managing behavior, left no door open to explore other, less invasive and coercive, approaches to learning. This impasse conflicted with Maxine Greene's words that haunted my mind: "A teacher in search of [his/her] own freedom may be the only kind of teacher who can arouse young persons to go in search of their own" (Greene, 1998, as cited in Ayers & Miller, 1998). When I eventually transferred to a school in a community that was economically well off, I was surprised to discover that agency and creativity were the founding principles driving the school philosophy. The school's administrative efforts to create a collaborative teaching and learning environment based on contemporary learning theories made a significant impact on me. First, by comparing the disparity between the two school environments, I was sensitive to matters pertaining to 'accessibility' (e.g., social, technological, and economic constraints).
Second, by comparing contemporary learning theories and practice to behavioral theories and practice, neither of which were fully satisfactory, I was left to wonder whether there were theories and practices not yet examined in light of the changing times in technological advancement. Those thoughts, which led me to return to complete graduate work on creativity and technology, opened the door to explore classroom instruction in three domains of expertise: music education, physical education, and literacy education. During my graduate studies and beyond (a period that stretches over a decade), I experimented with variant theoretical perspectives that underpin curriculum and pedagogy both in the context of instructing teacher candidates (i.e., pre-service teachers) and youth who have been part of several university-led 63 research projects on video production and literacy. Therein I sought to find purpose in theory and practice. In part, my investigations began to take shape with an unexpected venture, which began when I was asked to videotape a Reader’s Theatre performance of a well-known Broadway play and movie, The Laramie Project. The project, supported by the Dean of Education, was directed and performed by education research graduates and professors. It was put into production following a public school district’s debacle over closing down a high school production of the play during rehearsals. Instead of embracing artistic works that could open positive debate around difficult social issues, it would appear that school districts facing social and political constraints are not yet ready to dismantle barriers for those working in the area of social justice and equity. Shortly after the theatre presentation, I became part of a group of research graduates keen on continuing ‘performance research.’ We formed a group that came to be known as The Collective and met regularly to discuss projects that would allow us to ply our various talents in music, theatre, dance, scripting, and other arts. The play had motivated us to form a performance research group for several reasons. First, the play not only brought to the fore the tragic events around homophobia, it also, ironically, brought forward issues of censorship and prejudice that still continue to plague our communities of learning. The incident led to a distinct feeling that the political and economic forces surrounding literacy were at the heart of educational concerns. In other words, issues of accessibility (harking back to my experiences with troubled youth), which are politically and economically motivated, went beyond mere ‘classroom practice’ and we were suddenly buoyed 64 up with the emotional sensation of having landed squarely into the title, Pedagogy of the oppressed, of Brazilian literacy activist Paulo Freire’s (1970) seminal work.2 Second, inspired by the response to the Reader’s Theatre performed on campus by members of the faculty, forming a graduate research group, made up primarily of performing artists, offered us a challenge to bring performance-based research to the fore. This form of research was finding new footing in our faculty in large part due to research efforts by various members of the faculty (e.g., Fells, 2002, 2004; Gouzouasis et al., 2007; Springgay et al., 2008; Sinner et al., 2007) and had had precedents set in faculty-sponsored conferences. 
Thus, not long after The Collective was formed, we were offered a second Reader's Theatre performance opportunity, one that coincided with the politics and policies underpinning adolescent literacy. The proposal made for an ideal experiment. This time we were going to 'perform' the content of an article written by Dean R. J. Tierney (2001-2002), first published in the Journal of Adolescent Literacy under the title An ethical chasm: Jurisprudence, jurisdiction and the literacy profession. The original article by Tierney had already deviated from the usual journal offerings, insofar as it had been written, in part, as a courtroom drama and was easily 'performable.' Having been inspired by David Guterson's (1995) novel, Snow Falling on Cedars, Tierney had seized the opportunity to render a rather dry, technical article into a lively narrative that would allow him to capture some of his experiences 'behind closed doors' as a language and literacy researcher. Several members of The Collective condensed the dramatic courtroom sections of the article, considerably shortening the text into a script to be performed during Dean Tierney's keynote address at Simon Fraser University's 40th anniversary, March 4, 2006. A year later, I revised the script into a 'shooting script,' upon being given the green light to turn it into a movie. I cast most of the actors who had parts in the original Reader's Theatre, save for the roles I had played, which I turned over to others in order to direct, and I added a few more actors to fill the parts that had been doubled on stage. With my affiliation to the film industry, I was able to secure two professional film actors in a cast of thirteen, and a cameraman who had worked on several television and film crews as a Steadicam operator (i.e., handheld camera). The film took three full days to shoot and more than a year to complete the editing, with a second edit after its première with cast and crew. At any rate, once I had made the decision to embark on this filmmaking challenge, I entered a dimension of investigation that I had hardly expected given the genesis of this project. What I found myself engaged in is what Whitehead (1938) described as "the basis of democracy," which "is the common fact of value-experience, as constituting the essential nature of each pulsation of actuality. Everything has some value for itself, for others, and for the whole" (p. 151). While still operating under the assumption that I was investigating performance-based research with an emphasis on the relationship between arts and literacy, which then shifted to film-based research (an intersection of film, arts, and literacy), I began to examine the body of work I had undertaken under the light of this politically charged world of 'literacy education.' Under which assumptions had I been trusting theoretical constructs? The question that remained uppermost in my mind hinged on whether the theories founded in educational research on arts, language, and literacy held any cogency with those found in the discipline of film studies.
2 Freire's work centers on the idea that a pedagogy of oppression is buried in the varied 'texts' written and produced by a dominant culture and society (i.e., textbooks, novels, television, works of art, and so on), which maintains and reinforces the segregation and inequality (i.e., oppression) of minorities and the disadvantaged.
I also wondered whether those collective theories could explain the fundamental transformations that have occurred each time I undertook film arts in the classroom or beyond. The purpose for my investigation, which had remained opaque, began to reveal itself in the course of sifting through the literature during the very processes of teaching film arts, filming, and editing.
Film arts as literacy, communications, and technology
Initially, film in education rose predominantly out of communication studies as the 'modes of communication' (i.e., radio, film and television) began to intensify and shift the way in which we understood the shaping of society (Smith, 2003). Thus, the study of 'film literacy' by the mid-1980s settled within several curricular areas of interest, such as 'media arts,' under the direction of arts educators; 'media literacy,' typically under the purview of sociologists and communication arts educators; and 'multiple literacies,' as a component of language and literacy disciplines, which extends the research, teaching and learning of the written word (Eisner, 2001; Jenkins, 2006; New London Group, 1996). However, long before the writing of the multiliteracies 'manifesto' by The New London Group (1996), which espoused the virtues of a burgeoning new literacy in "multimedia technologies," digital images and sound had already been part of technology studies, visual arts, and music curricula for several decades. This history has been explored by a number of researchers (e.g., Ely, 1992; Madeja, 1993; Moore, 1991; Papert, 1980; Rheingold, 1985; Roland, 1990; Slawson, 1993). Although arts and technology have held a particular interest in film arts throughout their history, it has been a collective concern with literacy that has driven research movements in recent times. As an arts educator with interests that range from philosophy to cognitive science, I found it necessary to survey what literacy implies and why it has become the guiding principle behind the current educational policies raised in Tierney's (2001-2002) article. According to the New London Group (1996), the principle behind literacy "is to ensure that all students benefit from learning in ways that allow them to participate fully in public, community, and economic life" (p. 60). Turning to education journals, I traced some of the economic and political forces behind issues of literacy as early as 1886, when the National Education Association reported the following on the low literacy rate in the State of Louisiana. President William Preston Johnson of Tulane University, Louisiana, in his paper on education in his own state, spoke of Louisiana as lowest in the scale of literacy, only forty-nine per cent of its population being able to read and write. He pleaded for the national aid proposed by the Blair bill. There was, however, in his paper, nothing to offset the arguments that have been urged against the bill. It is hard for a close student to see how the mere lavish outlay of money is greatly to overcome conditions which money can only indirectly and remotely affect (p. 92). Clearly, early advocates of literacy became inextricably tied to the political and economic forces of the time. Surveying today's legislation on literacy in education—for instance, the policies introduced by the Clinton and Bush administrations—it is plain to see how far the forces of politics and economics reach.
To this end, Tierney (2009) expressed, “The control of literacy carries enormous political clout as well as economic advantage whether the profit be book sales, curriculum control or tenure” (p. 278). Additionally, Tierney pointed out that this political clout and economic advantage is wrought by the “power of certain groups to lobby for legislation to ensure certain pedagogical approaches” (p. 280). As I explored the body of literature published in education journals related to theories and practice of film or video, it therefore came as no surprise that the vast majority were written with literacy in mind. It is difficult to disagree with the tenets of the New London Group (1996) who sought to emancipate students from a “literacy pedagogy [that] has traditionally meant teaching and learning to read and write in page-bound, official, standard forms of the national language” (p. 60-61). Who best to put forward an argument of this kind than those whose professional expertise begins with the teaching of reading and writing? In effect, by advocating for a new literacy, the tenets of democracy would be vigorously upheld, and the New London Group has been able to make a convincing plea for new comprehensive pedagogical directions. 68 Digital curriculum and pedagogy in education, which include film arts, grew from quantitative communication research (Currie, 1999; Smith, 2003). Ultimately this research was supported and sanctioned by proponents of literacy as an emancipation project. For as the two questions and stated concerns that follow demonstrate, the New London Group (1996) felt compelled in light of a changing society to re-conceptualize literacy. How do we ensure that differences of culture, language and gender are not barriers to educational success? And what are the implications of these differences for literacy pedagogy (p. 61)? The main areas of common or complementary concern included the pedagogical tension between immersion and explicit models of teaching; the challenge of cultural and linguistic diversity; the newly prominent modes and technologies of communication; and the changing text usage in restructured workplaces. Our main concern was the question of life chances as it relates to the broader moral and cultural order of literacy pedagogy (p. 62). By way of clarifying what is meant by pedagogy, there are several ways in which this term may be employed. What the New London Group expressed as the tension between immersion and explicit models of teaching is frequently viewed as the difference between what educators refer to as experiential learning versus direct instruction (Dewey, 1958). The movement toward experiential or immersion learning led to constructivist pedagogies, with some theoretical frameworks drawn from social constructivism (Brooks & Brooks, 1993; Plantinga & Smith, 1999). Notably, pedagogy does not merely refer to teaching approaches and methods of instruction but also refers to ‘textual’ pedagogies whereby knowledge is drawn from cultural entities, such as arts, politics, and economics and their ‘texts’ (i.e., written, aural, and visual artifacts). This view led the movement toward critical pedagogies, which method of ‘decoding’ textual meanings were intended to emancipate the learner through drawing awareness of the implicit knowledge systems that shape positioning, subjectivity, and identity (Freire, 2005). The 69 New London Group addressed all three kinds of pedagogies, adding a fourth perspective referred to as transformative pedagogy. 
Four components of pedagogy are suggested: Situated Practice, which draws on the experience of meaning-making in lifeworlds, the public realm, and workplaces; Overt Instruction, through which students develop an explicit metalanguage of Design; Critical Framing, which interprets the social context and purpose of Designs of meaning; and Transformed Practice, in which students, as meaning-makers, become Designers of social futures (p. 65). One may begin to suspect that the policies governing arts and technology education have not had the same political and economic clout as literacy. First, one may consider the contribution of arts and technology in the heightened speed and breadth by which we have become, as previously expressed, “globalized societies” creating “the multifarious cultures that interrelate and the plurality of texts that circulate.” Both arts and technology have been viewed as complicit in propagating an image soaked society through television, video, Internet, and film. Despite tensions that may arise between arts and technology researchers on the principle of aesthetics (Gouzouasis, 2005, 2006), it is impossible to not see that today’s digital expressions make arts and technology political and economic allies. From one viewpoint, technology contributes to what Donna Haraway sees as a ‘cyborg’ pedagogy, whereby humans are not merely comparable to machines but are in fact becoming a hybrid of human and machine. For the artist and humanist, technology dangerously crosses the border of what it means to be human (Garoian & Gaudelius, 2001). From another perspective artists who “perform resistance in the digital age” may be considered disingenuous in their artistic critique given the centuries long association they have had with multiple technologies (Gouzouasis, 2006). Second, controversies surrounding the ‘digital divide’ caused by economic inequality between school districts and communities have included the prohibitive costs in hardware, software, and the training and hiring of specialists with skills and knowledge requisite of 70 information and communication technologies (e.g., programming). While arts and technology educators argued for political and economic backing that would serve the interests of advancing new technologies and preserving dwindling arts programs, literacy educators argued for political and economic backing that would provide for “a good life and an equitable society” (New London Group, 1996, p. 67). Thus technology and arts educators have often been viewed as promoting pedagogies despite inequalities, whereas literacy educators (buoyed by the field of communications) have been viewed as more closely allied to empowerment. This latter view was helped by the fact that ‘reading and writing’ has long been an emancipation goal aimed toward the whole of society. In consideration, certain aspects of arts and technology education retain the unfavorable view of being exclusionary and elitist with technology garnering the hostile view that proliferates comparisons between humans and machines, evidenced as early as J.P Guilford’s 1950 (1987) address and, most recently, in light of what is perceived as ‘cyborg’ pedagogy. In any case, the unrelenting speed of technological developments in the past four years has created a heightened sense of urgency toward digital literacy. 
Since the 2005 launching of the video streaming on-line program, YouTube, which currently uploads 200,000 new videos per day (Wesch, 2008), educators across all disciplines have hastened to research the digital video phenomenon and to initiate literacy pedagogies in video viewing and making (Jenkins, 2006). In other words, film works have become a renewed point of interest for many educators due to a number of factors, not the least being the introduction and advancement of technologies used to manipulate images. Importantly, I do not speak of images as merely visual. For the most part auditory, visual, visceral, and kinetic images have saturated our brains for over a century as those have emanated from big screen theatres to miniaturized cellular telephones and from social to individual spaces re-conceptualized to fit a technological age. Image producing technologies, 71 such as, cameras and MIDI instruments, along with computer hardware and software for projecting, manipulating, scanning, animating, recording, composing, and editing, have become increasingly popular in the classroom as digital technologies continue to expand (Jenkins, 2006). How are we to understand images and their affect? Taking another look into history, new movements in literacy were on the horizon with the advent of photography and radio in the late nineteenth century, the introduction of which led to the development of educational interests in cinema, television, and video (e.g., Allen, 1940; Child, 1939; Ginsberg, 1940; Gray, 1940; Smith, 1942; Mitchell, 1929). Over one hundred years later, film continues to be a central focus of literacy and arts of our times by virtue of the fact that films, in their multiple genres and technological formats, flood our environment with images and continue to be thought to possess qualities of language and art that serve to construct and transform dialogic, technological, scientific, social, cultural, political, and economic spaces (e.g., Allen & Smith, 1997; Bakhtin, 1981; Baudrillard, 1981, 1998; Braudy & Cohen, 2004; Miller & Stam, 2000; Rogers & Schofield, 2005). One can think of film literacy, generally speaking, as the critical and expressive ways students ‘read’ and ‘write’ using filmic processes (i.e., aural, visual, kinesthetic images), which is as relevant to literacy educators as it is to communication researchers rooted in sociological and cultural perspectives (Buckingham, 1990; Sefton-Green, 2006). Yet film arts also invoke the artistic processes that arise when students are immersed in subjective aesthetic experiences. Those experiences raise the question on the workings of an embodied brain (Eisner, 1981, 2001; Gouzouasis, 2006). A theoretical view of the embodied brain, from a neuroscience perspective, is one that carries importance if we are to deepen our understanding of what constitutes an image beyond socio-cultural foundations. Understanding images in relation to the workings of an 72 embodied brain through neuroscience frameworks is an area of education that has yet to receive full consideration. 
The rise of film research in education: In search of the expert
With interests and perspectives stemming from the end of the nineteenth century leading the way (interests one might argue continue to hold currency today), mass media innovations, beginning with photography, gave rise to the study of what arts educator Elliot Eisner (1997) has termed "alternate forms of representation." Those alternate forms, in contrast to written language, were not solely limited to the study of media as art, as the more than one hundred years of research on film shows. In fact, that research is well documented, with many historical and contemporary journal articles archived in JSTOR. Additionally, there is a vast collection of books detailing historical accounts of the psychological and social science initiatives arising from pedagogical concerns, with the majority tied to education. Moreover, there is a wide collection on cultural film studies, which emerged from anthropology and critical theory. Those collections of works from the social sciences sit apart from an even greater collection of magazines, journals, and books specialized and dedicated to understanding film as an art and a language. The comparison with language is contained in works on cinema from the earliest classic film studies to the most recent studies, in what has been called the "second wave" of semiotics, generative linguistics, cognition, and pragmatics (Casetti, 1999). Generally speaking, since film studies were widely disseminated to the public, one may find in all the collections—from one discipline to the next—ideas that have migrated and crossed over the mostly porous boundaries. Beginning as early as 1909, with the publication of Nickelodeon, "America's leading journal of motography," studies centered on "entertainment, education, science, and advertising" (Grieveson & Wasson, 2008). By 1913, Emilie Altenloh, a doctoral student in sociology at the University of Heidelberg, completed her dissertation on the sociology of cinema, and by 1916, Hugo Münsterberg had published his seminal work on the film viewer, entitled The film: A psychological study (Grieveson & Wasson, 2008). Those theories, coupled with "the study of propaganda emerging in the early 1920s," were rooted in "the particular conceptions of subjectivity, social order, and media effects," whereby the study of propaganda was "connected to the pressing imperative to understand the management of opinion in mass democracies" (p. 14). According to Grieveson (2008), given the enormous influx of people moving from rural to urban centers and the extraordinary innovations in mass communication systems at the end of the nineteenth century, many agreed that "democracy is governable only on the basis of a knowledge of the opinion of the masses" (p. 15). Walter Lippmann, Pulitzer Prize-winning intellectual and political commentator, for instance, contended that "people's thoughts were increasingly shaped by the agencies of mass communication, which molded a society's knowledge and appealed only to 'stereotypes' and beliefs rooted in myths, dreams, traditions, and personal wishes, thereby 'manufacturing consent' and problematizing the sustainability of democracy" (p. 15). Moreover, Lippmann argued that what was needed "was a scholarly elite to assess and interpret objectively the potentially dangerous public opinion and to work through organizations of independent experts to make 'the unseen facts intelligible to those who have to make decisions' " (p. 15).
His views on the scholarly elite, which were held in common among the intellectual and governing classes, were realized within the new fields of psychology and anthropology—giving rise to social psychology and social sciences now interested in the “social behavior” of the mass public. As Grieveson (2008) further elaborates, the rise in empirical methods of investigation, which gave way to measuring “attitudes” and “opinions” believed by many to be the cause of human actions and a “critical component of managing behavior,” established quantitative and 74 qualitative studies that showed “an assessment of people’s mental attitudes could be useful not only for commercial purposes but also for ensuring the sustainability of democracy and of social order” (p. 15, italics added). It was in this frame of mind that “philosophical concerns with mass publics, opinions and mimesis” (concerns, one may recall, founded in Ancient Greece) “were made empirical” (p. 15). In a review of a rather chilling work of Grieveson and others, such as Mark Anderson (2008), one discovers that the new psychology and sociology of that era, which introduced the notions of Freud and Münsterberg on ‘mimetic relations, dreams, and hypnosis,’ along with the concerns of “social control” felt endemically as part of governmental dispositions, “became the central issue animating a sense of urgency about studying cinema” (Grieveson, p. 11, italics added). As far as the study of film was concerned, Grieveson notes disturbingly, Identifying potential disorder with the goal of instilling social order was a primary impulse underpinning these studies. Accounts claimed that the audiences for nickelodeons were predominantly children, immigrants, or women—all groups regarded as particularly prone to mimetic tendencies, as we have seen, because of their unstable location as self-aware/governing subjects. The reform journal Outlook typically commented, ‘Undeveloped people, those in transitional stages and children are deeply affected’ by moving pictures. Initial studies of cinema often posited the direct impact of moving pictures on the behavior of audiences and thus on what the social reformer Jane Addams called their ‘working moral codes’ (p. 11). Despite the direction empirical study had taken at that time, Grieveson also points out that there were counter forces acting as means of resistance to such popular views among the intellectual elite. Beginning with new discoveries in sciences and social sciences that rejected positivist research, this new perspective resulted in a necessary resistance to extreme ideologies. This resistance led to critical and field theories still in operation today. Nonetheless, despite Münsterberg’s directorial position with the psychological laboratory at Harvard, it was the University of Chicago, then working toward an elite class of film experts, 75 which veritably established the film scholar. Operating under the ‘new’ empirical ethos in the humanities (particularly among social scientists and psychologists), the University was the center of empirical film study, whereby such notable film ‘scholars’ rose to public prominence such as, among others, the behaviorists John Watson and Karl Lashley, communications guru Edgar Dale, sociology professor Ernest Burgess and, most notably, two members of the Chicago School of sociology, Frederick Thrasher and Robert Park. 
As Anderson (2008) explains, the University of Chicago was poised to utilize "adapted biological concepts of growth and decay to describe the rapid rise of the modern industrial city," stemming from the fact that "the city was both a natural environment and research laboratory where the social scientist might observe and record the forces of organization and disorganization that led to continual social change" (p. 41). Elaborating, Anderson offers more insight. Social disorganization was seen as an ordinary part of social formation since it is often necessary that older social relations be broken down so that new relations might form; however, if the rate of growth is too rapid, then social and personal disorganization can easily give rise to social ills such as delinquency, poverty, crime, suicide and disease. Thus social problems should be understood as disequilibrium and degeneration in the social organism (p. 41). While the differing notion of disequilibrium in a physical science framework requires more detailed examination, suffice it to say that social science had drawn from biological theory a concept whereby a measure of disequilibrium was viewed as tolerable only so long as it did not drift toward maximum entropy (total disorder), since the high disorder that causes chaos in society is considered intolerable (and possibly tragic). The history of those laboring under such views, which Grieveson (2008) and Anderson (2008) carefully detail, is long and complex. Although only briefly touched on herein, several important educational outcomes that stem from this view need further clarification. Operating under the 1928 committee funded by philanthropist Frances Payne Bingham Bolton, the Reverend William Short endeavored to conduct a "nationwide study to determine the degree of influence and effect of films upon children and adolescents and ultimately lobby for more stringent forms of legalized social control over the film industry" (Grieveson, p. 17). Short met in Chicago with social scientists Jane Addams and Alice Miller Mitchell, professor in the Chicago School of Education Werrett Wallace Charters, psychologist Louis Thurstone and the School of Sociology's Robert Ezra Park. According to Grieveson, "Together, the scholars gathered worked in the disciplines of sociology, psychology, social psychology and education; the innovation of the study of cinema grew from the disciplinary imperatives to understand individuals, social groups, and the educability of both the individual and social group" (p. 17). Among them, Park was notably enthusiastic about the Payne-funded committee, for his goal was to delineate the causes of human behavior and delinquency, as was also the goal of Short, whose intention was to study "cinema as a component of collective behavior and its impact on the creation of delinquency" (p. 20). Thus, the Payne Fund Studies were conducted in the years that followed; although the thirteen studies were eventually published, they were not as widely read as had been anticipated (Grieveson, 2008). In light of this, the studies were then compiled and published in a popularized 1934 version by committee chairman W.W. Charters and journalist Henry Forman in Our movie made children; they were re-edited and re-released in 1996 under the title Children and the movies: Media influence and the Payne Fund controversy (Jowett, Jarvie, & Fuller, 1996). The studies themselves continue to have some currency in our times.
Notwithstanding the historical analysis conducted on the social, political, educational, and scientific movements influencing researchers at that time, while reading any one of those early studies one can easily deduce that research interest also coincided, then as now, with changes in mass media innovations. In particular, film played a major role. Media shifted from photography to silent film with orchestrated sounds, then to films with sound on vinyl recordings. From live radio to live television, media also shifted to analogue audio-visual taping, then on to digital audio and video at a fraction of the cost of previous production. Emergent technologies in motion photography and recorded sound have thus motivated and driven various groups of researchers for more than a century at different junctures. As mentioned, those early studies came up against resistance. The Chicago Motion Picture Commission (CMPC), for its part, had been assembled to consider various points of view at weekly hearings held between late 1918 and May 1919. The CMPC had been adamant that empirical verification of the effects of motion pictures be produced and, thus, sponsored a survey conducted by Ernest Burgess, "co-editor of the influential textbook, Introduction to Sociology, to quantify the effects of motion pictures on school children" (Grieveson, pp. 15-16). But it came down to the fact that the collection of studies, on the whole, "had a muted impact on the continuing study of cinema" (p. 22). Grieveson speculates that the reasons for the "limited impact" had to do with the Production Code then being written by the Motion Picture Producers and Distributors of America (MPPDA). In the code the political goals of the studies were partly realized or at least deflected…The MPPDA also actively sought to undermine the validity of the studies when the organization seized upon a critique of the studies' methodology and findings articulated by the philosopher Mortimer Adler in his 1937 book Art and prudence. The organization not only promoted Adler's critique but also commissioned Raymond Moley to write a popularized summary of it. Moreover, as one begins to take note in the following addendum, the constant technological changes influenced the direction of political leadership, a factor that continues today as we grapple with issues of government control and access (e.g., Google in China). One other important potential reason for the eclipse of the Payne Fund Studies…cinema itself became less centrally important to practices of governance in line with the increased importance of other media, starting with radio and later with television (pp. 22-23). Although the MPPDA exercised a degree of force in opposing the centrality of those studies in the "new and as yet unformed discipline of film studies" (p. 22), it should come as no surprise to anyone in education that the Payne Fund Studies had some "impact on pedagogical practices in high schools and universities" (p. 22). One can only conjecture as to why those kinds of studies seem to impact education more forcibly than the fields that study media. Clearly, education has much broader concerns that occlude the importance of understanding the medium of communication, not the least of which is an understanding of human behavior. I can imagine the effect such empirically studied viewpoints had on the public, let alone teachers.
What I suspect is that the studies, based on views largely generated by teachers who were part of those studies, carried an air of grave importance. Teachers considered movies to have a negative impact on "classroom" behavior as "moving pictures induced in young girls the 'vampire attitude,' taught young boys 'boy bandit games,' and stopped children from becoming 'good citizens' " (Grieveson, p. 16). As a teacher and educator in a teacher education program, I can attest to the sensitivity to 'classroom behavior.' This has been an ongoing theme in schools of teacher education, probably since the start of compulsory education and possibly due to the task of managing large numbers of children at once. In turn, this 'behavioral' concern has caused many to wonder about the 'artificiality' and 'coerciveness' found within a classroom that inevitably produce uncharacteristic behaviors in children, as was most notably observed by philosophers John Dewey and Alfred North Whitehead. Finally, the Payne Fund, as mentioned, "went on to support research in radio," and, thus, "communication studies emerged here…aligned with the social sciences then coalescing into a discipline in the 1940s when the term 'communication research' first became apparent" (Grieveson, p. 23). Privately or governmentally funded, the field of communication studies became established in universities in the 1950s, and although film was thought to fall under this new field, in fact film study gave way to the humanities. That change came about when film was viewed increasingly as a category of 'art,' largely thanks to initiatives to show and preserve film arts by the Museum of Modern Art (MOMA) and such tenacious individuals as Henri Langlois, who was considered instrumental in the preservation of vast quantities of films, particularly during WWII, when many films were in danger of being destroyed by enemy forces. Having been obsolesced by the study of radio and television, the study of film texts was left to the expertise of the film enthusiast interested in the art of filmmaking. Film studies became an unbridled 'invisible college' that seemed to operate outside the tenets of communications. As I witnessed in my early years of teaching, however, with the development of video, then computers, a new wave of interest by the 1980s put film texts and film study back under the communications' microscope and, naturally enough, education's. As mentioned, I had found filmmaking books in the communication arts section of our school library—a section that did not include, for instance, traditional arts, but rather many books concerning technologies utilized in the art of communication, namely, radio, television, and film (interestingly, books on photography were shelved under visual arts). Thus, under the communication arts section, I also came across a book by Considine and Haley (1992) that developed a film and television curriculum. In their preface, the authors explained that TV and film audiences either perceive and process at a shallow level or have a deep understanding depending on their level of media literacy—a term frequently employed by diverse experts—though what this kind of literacy entailed (beyond the notion of critical viewing and responding) was not explained further. And at that time, as no doubt visual artists reading this today would, I shared the concern as to how a communication arts specialist can lay claim to understanding the image without some recourse to experts in the arts or neuroscience.
At any rate, in keeping with their research interests and background, the authors did not describe film as a form of art or language, nor did they offer any detail regarding what an image is; rather, everything was explained through a communication model (making the shelving more obvious, while not explaining the distinction). This work was, more or less, a model that descended from the ideas of Harold Innis and Marshall McLuhan, thus more specifically tied to ‘media studies.’ More curiously, the ideas expressed in the authors’ preface gave way to the view of what Adorno and Horkheimer called “the culture industry.” Nothing led me to consider that their pedagogy had much to do with the European Marxist intellectuals of the 1930s, which descended from the Frankfurt school’s luminaries such as Habermas, and their American counterparts, Kracauer, Lowenthal and Marcuse, nor that of the American traditions of cultural critique. Yet, their premises, I was to discover, were deeply aligned with the research tradition known as critical theory—a tradition vastly different from empirical social sciences. It seems logical that, if one aims toward critical thinking, some aspect of critical theory will filter the perspective. Indeed, Considine (2009) recently stipulated, “Media literacy can be an empowering pedagogy to protect students from potential media manipulation while also preparing them for the responsibilities of citizenship” (p. 66). Since the notion of “empowerment” and “protection from manipulation” has long been the general aim of critical theory, even if unstated, it is fair to suggest that some hidden connection exists between the two. Critical theory, as a form of resistance, had a broad mandate, not only to critique research based on normative values, such as those underpinning the work of psychologists and sociologists, but also to align itself with critiques of mass production that advanced Marxist materialist views. The evidence for this suggestion lies in the makeup of the broad field of communication studies. Considine and Haley (1992), for instance, found support for their ideas as part of the vast, overlapping and, some say, all-encompassing field of Communication Studies. While not the most authoritative source, Wikipedia describes communication studies as “part of both the social sciences and the humanities, drawing heavily on fields such as sociology, psychology, anthropology, political science, and economics as well as rhetoric, literary studies, linguistics, and semiotics.” And as broad as that may seem, “the field can incorporate and overlap with the work of other disciplines as well, however, including engineering, architecture, mathematics, computer science, gender and sexuality studies.” Despite this less-than-reliable information source, one can verify that the National Communication Association (NCA) list of disciplines is aligned with the preceding insofar as the list includes: Communication & Technology, Critical-Cultural, Health, Intercultural-International, Interpersonal-Small Group, Mass Communication, Organizational, Political, and Rhetorical.
The International Communication Association (ICA), meanwhile, includes, among others: Communication History; Communication Law and Policy; Ethnicity and Race in Communication; Feminist Scholarship; Gay, Lesbian, Bisexual and Transgender Studies; Global Communication and Social Change; Information Systems; Journalism Studies; Instructional/Developmental Communication; Language and Social Interaction; Organizational Communication; Philosophy of Communication; Political Communication; Popular Communication; Public Relations; and Visual Communication Studies. In short, anything that might be considered a matter of communication. This diversity within the field of Communication Studies may explain the overlap of critical theory in film criticism, for instance, as expressed by communication arts specialist Henry Jenkins, critical theorists Henry Giroux and Peter McLaren, anthropologists Bruno Latour and Marcus Banks, and feminist theorists Laura Mulvey, Kaja Silverman, Teresa de Lauretis and Barbara Creed, to name a few. Upon reflection, there is a feeling of the same alarmist views in the latter critical frameworks for film study as in the earlier empirical ones regarding the deleterious effects of films on audiences—which, to put it politely, smells the same, while theoretically it is situated in a different pile. In other words, critical theorists have fought to reveal the forces that “worked to implement social control by reproducing normative subjectivity and so effectively enslaving people and making totalitarianism possible” (Grieveson, 2008, p. 23)—and in some sense or another, both groups of experts, empirical and critical, have labored as resisters against an advancing sinister force. To be able to discern sinister forces—those that destroy the life and dignity of human beings—requires more than one kind of critique. Unfortunately, for many kinds of resisters, ideologies may at times blind one’s capacity to make such discernment fully conscious. If empirical studies were conducted as a means to measure the deleterious effects of film contributing to the ‘delinquent’ behavior of individuals and societies, critical theories, by contrast, sprang from the well of a materialist view on the deleterious effects of film as a means of social control, whereby society was viewed as controlled principally by consumption and production. Whereas the first sought to control and shape society into ‘good citizens,’ the latter hoped to free society from the pernicious control of the market (i.e., capitalism). In other words, both rational positions frame the individual and society as fragile beings. Unable to make critical decisions, individuals or whole societies are governed by their “emotional, psychic, mimetic, and delusional” (Grieveson, 2008, p. 23) impulses, which lead them to excesses in consumption and thereby drive them to enslavement by the forces of production (e.g., an unregulated, free market). Fundamentally, both theories stem from observing and speculating on the forces at play as society moved into mass reproduction and the perilous twentieth century filled with mass killings. As always, this rational and social view has positioned itself without consideration of the emotional and subjective. This privileging of the rational over the emotional has largely played out in religious contexts as well.
For individuals like Reverend Short, I suspect it is neither social nor material forces that govern behavior and hinder ‘good citizenry’; rather, it is the pernicious forces of ‘evil’ undermining the moral fiber of individuals (including the feeble emotional register that pervades human thought), an entirely different force perhaps bearing horns, tail, and pitchfork. Above all, there is an ongoing complicity between religious and empirical views that see ‘reason’ as necessary to govern human passion if we are to rise above the folly of nature. The two distinct paradigms of resistance to the advancing changes in society at the start of the twentieth century (i.e., to positivism) came to be known as post-positivism and critical theory. Those paradigms simply illustrate the iterations of a familiar conflict that plagued society within different ages, e.g., within Ancient Greece or within the Enlightenment. But it is also in evidence across the ages: between Ancient Greece and the Roman Empire, between the Renaissance and the Enlightenment, between the Industrial Revolution and Postmodernism. In essence this may be viewed as Hegelian in nature (i.e., thesis, antithesis, synthesis) and as reflecting the numerous ‘revolutions’ (as the word implies) that simply go round and round. Like variations on a theme but played out in a shorter riff (i.e., decades versus centuries), one observes that, throughout the 20th century, the realist-idealist or empirical-rational positions, along with the distinctive methods of induction-deduction, have created the age-old tensions that continue to invoke both ontological and epistemological debates. In fact, this iteration is an important point to be made with respect to the neglect of the emotional, which rarely factors into such critiques of the social. Although an exploration of the emotional is one that requires a more thoughtful review, before leaving this section on the history of educational film research, and in keeping with the crescendo of impressions I experienced while reviewing the literature, several more observations are worth discussing.

The dizzying effects of revolution: An imperative for creating experts

With each novel change in mass communication technologies, first appearing in the late nineteenth century, the public was offered entertaining and artistic diversions at an economical rate (compared to live art exhibits and performances) and, by consequence, new media gained vast acceptance in society. Mark Anderson (2008) argues that “the cultural ascendancy of the modern human sciences coincide with the rise of mass culture” (p. 39). More importantly, this was not merely a societal change but one felt as a rapid succession of large-scale gestalt shifts (or the speed at which the magnitude and totalizing effect of the audio-visual could be compared with print technology), which makes a ‘dizzying’ impression or, as quoted earlier, a feeling of “disequilibrium.” Is it possible that modern human sciences might not have devoted so much intensity to the study of film if researchers had not themselves felt disoriented or imbalanced by the flux of images? The irony, of course, is that ‘rapid’ change is something we experience naturally as children, which may well be why the spread of new media is taken up by those whose inclination is to adapt to change (until such time as an adult perspective brings a ‘steady’ rate).
At any rate, the many discussions I had had over the years with colleagues, some expressing a certain reluctance or distaste for technology and others embracing technological change, led me to believe that the tension I felt from teachers resisting new technologies came about because of rapid change. That kind of thinking appears to be common sense; it cannot, however, be so simple. While mass culture was arguably the primary impetus for the large body of empirical studies on the impact of movies on society, which then gave way to critical studies, Anderson ventures that “before any aesthetic, psychological, economic or sociological inquiries into motion pictures could properly converge to form a unique area of scholarship, an important institutional condition had to be fulfilled: the creation of the media expert” (p. 38, italics added). In other words, a change in society often pushes a group of experts to the fore as a means to assess and evaluate the phenomenon using objective means. Often, but not always, it is left to that group of experts to resolve issues of deep social concern. Invariably, the creation of the expert is a prickly matter for artists. Personally, I have always found expertise in the arts confusing, since it was clear that ‘expertise’ is first and foremost derived from experience—at least as it implies fluency. To be fluent suggests a conditioned and skilled reflex stored in memory after long practice, one that does not require labored thought in order to perform or produce an effect. Fluency is a type of rapid ‘dexterity’ and range of ‘vocabulary’ not found in the novice, for instance, when playing an instrument or speaking a language. Thus, fluency implies rapid perception, comprehension, and performance. Without question, fluency has long been argued to differ from theoretical knowledge, i.e., knowing which elements are necessary versus applying those elements. In this respect, the art critic has a rather long and contentious history with the artist, as do art historians, art scholars, and art journalists. My sensibility toward artistic expertise lay somewhere between my years of practice and performance and those who supported and understood my intentions. While I certainly consulted the expert views of those who critiqued artistic works (at times to deflect a negative audience response), I was also prone to consult friendly audiences (usually to deflect the expert view). The ‘expert’ evaluating my artistry, therefore, seemed to vacillate with the winds of my ego, which grew or shrank depending on the actual value offered by friends (and, having spent many years in the midst of large numbers of artists, I suspect this is true for most). But none of the preceding emotional content served to reconcile the difference between tacit and conceptual knowledge. In other words, the expert remains controversial in light of the insoluble problem between the subjective and objective viewpoints. Or, more precisely, the problem rests in the differences between the rational and the experiential. Second, what remains quite puzzling is the issue of emotional investment. I have often wondered whether an expert was supportive of or resistant to the medium on which they were focused (as many have puzzled over the role of the art critic). From my background, art critics, art historians, art scholars, and art journalists largely tend to be emotionally invested in art appreciation, while cultural critics appear to swing in either direction (a position not easily defined).
Arguably, criticism may be seen as an art form; yet those who do not practice, or have not mastered fluency in, the art they are critiquing are in danger of producing a detached, rational impression that differs vastly from the subjective ones experienced by the artists themselves. This unfortunate state of affairs lends critical interpretations an air of pedantry with some mean-spiritedness attached. Of course, this brings up other issues pertinent to expertise, not the least of which is our subjective preferences. From a linguistic perspective, we are all experts in our maternal tongue, rarely making the kind of grammatical or syntactical errors of foreign speakers. Nonetheless, we are less than expert in trying to explain the structures of language, making us seem rather naïve in our use of language. Expertise, it would seem, requires both a profound subjective experience, richly endowed with emotion, and an objective rational understanding that can pull apart the elements of a complex system without losing sight of our emotional salience. It is not clear that either the language or the film expert emerged from such a holistic center. What is more likely is that the emergent experts in film arts were more or less ‘technicians’ favoring film’s function as language or social mediator. As Kracauer (1960) aptly noted, “The pervasive growth of technology has given birth to an army of technicians trained to supply and service the innumerable contrivances without which modern civilization cannot be imagined…the essence of all of them is tantamount to their function” (p. 292). He contrasted the ‘army of technicians’ with the artist since “artists have a way of sensing and baring states of mind of which the rest of us are only dimly aware” (p. 294). This statement aligns itself with McLuhan’s (1988) declaration that “artists are the antennae” of society (p. 6). From these statements I can only surmise that the difference between the expert technician and the artist is precisely what ‘machines’ are incapable of possessing, namely, emotional awareness. By technician, I am also speaking of the merely ‘rational,’ which Damasio (1999) showed through compelling evidence to be no less than a ‘technique’ or device for decision-making (such as the ability to perform a cost-benefit analysis). Individuals who have diminished emotional capacity due to brain lesions but are otherwise completely rational have been shown to be stymied by decisions requiring emotional valence (i.e., values). Why this issue of expertise, or what one may call relational authority, is important extends to the problem an artist researcher faces when discerning between the varied theories that underpin film scholarship. That discernment is not just a matter of degree; it is central to an artist turned arts educational researcher. Whose expert view should the artist lean upon to help frame their understanding once turned art educator and researcher? Which expert can best support and deepen what one tacitly knows as an artist and teacher, gained through the subjective experiencing of art and teaching? The problem rests partly on whether analyses based on theories of a ‘technical nature,’ which are often challenged by subjective experiences, are able to reach concomitance, and partly on the intent behind the expert, which can so easily be governed by ideology (i.e., a utopian view).
At any rate, the media and film expert, according to Anderson (2008), emerged out of university scholarship and was “founded upon the emergent authority of the human sciences…namely anthropology, psychology, sociology—disciplines whose application to practical tasks discursively produced various forms of modern expertise as so many sets of power relations, e.g., anthropologist/native, psychologist/patient, sociologist/deviant” (p. 39). Reading from the list, I wonder whether scholar/artist could not also be added. The forces at play may well fit with Foucault’s view of “modern scientific disciplines” as “sites of power” (Anderson, 2008, p. 39) but, as noted, this is not limited to the scientific disciplines; it applies to empirical disciplines at large. To complicate the problem of authority, while perusing the mission statements expressed by varying organizations on media literacy, in Canada as in the United States, it is the intent behind the expert that leaves one unsettled. It is difficult to determine which view prevails as a thread of continuity between early thinkers and which view deviates from that course of thinking and action. The Canadian Center for Media Literacy (CML), which is also part of the Alliance for a Media Literate America (AMLA), for instance, has established its authority with the following statement: To become a successful student, responsible citizen, productive worker, or competent and conscientious consumer, individuals need to develop expertise with the increasingly sophisticated information and entertainment media that address us on a multi-sensory level, affecting the way we think, feel and behave. As if in response to some form of criticism, they append the following ‘disclaimer’ that media literacy is not anti-media: Finally, while media literacy does raise critical questions about the impact of media and technology, it is not an anti-media movement. Rather, it represents a coalition of concerned individuals and organizations, including educators, faith-based groups, health care providers, and citizen and consumer groups, who seek a more enlightened way of understanding our media environment. Coinciding with the mission statement is a statement in a section entitled Values education, which expresses the following: The mass media are an ideal resource for the discussion of moral dilemmas, the development of moral reasoning, and the use of techniques such as values clarification. Dialogical reasoning, which has been described as an important part of critical thinking, can play a significant role in discussions of topics such as the pros and cons of the mass media, government control of media, censorship, advertising, and the moral values identified in popular television and films. Consult the bibliography in the ministry's resource guide Personal and Societal Values (Toronto: Ministry of Education, Ontario, 1983) for further information on values education. Based on the preceding, projects aimed toward media literacy contain important ideas, yet they do not clarify who the expert is that is conducting research, influencing or making policies, directing educational curriculum, and forming pedagogical practices in response to our concern with images and sound. To a particularly keen observer of ideology, some clues are embedded in the language used to describe their aim. For instance, a ‘values education’ statement carries very strong implications that normative values based on ‘moral’ grounds are being upheld.
That a ‘values education’ is founded upon moral concerns may not come as a surprise given the long history of religious thinkers with a deep commitment to educating the young. The trend of religious leaders in media education began by the 1920s with the aforementioned Reverend Short and was followed in the 1960s by Father John Culkin, who was appointed by Marshall McLuhan as a fellow at the University of Toronto's Centre for Culture and Technology, an appointment McLuhan announced by stating he had “obtained the services of John Culkin, the film Jesuit, who is known throughout the world among film-makers and teachers.” And, finally, there is the current Director of the Jesuit Communication Project in Toronto, Father John J. Pungente, who “continues his main work of promoting Media Education across Canada.” Indeed, there have been many with religious convictions in Canada and the US laboring to promote film and media literacy, which has led me to wonder what may have been, and continues to be, their underlying motive. Moreover, whether they are supportive of or resistant to film and new media in general, what steps have they taken to reach out to scientific research for further understanding? Given the conflict between religions and sciences, it is troubling to think that there is a very real possibility that science has not been part of the discourse and policies that concern the mass production of images, their latest developments, and their impact on the individual. Simply put, decision makers and the public require more than rhetoric or convictions that lean heavily on moral grounds. Finally, given the influence that religion has had on the field of media literacy in research and pedagogy, ought there to be concern? In light of the long association between religion and education throughout history, it is tempting to view values education as solely the province of the religiously motivated, which inevitably raises concern in an era of the separation of state and religion. To be fair, however, moral reasoning is the capacity to judge and assign responsibility for actions taken—our own as much as another’s—which is not merely a religious concern. Moral reasoning, it may be argued, is part of being a responsible and judging human (Arendt, 2003). And the fact that religious and educational institutions have been the principal vehicles for delivering a moral-based education is perhaps something neuropsychology can bring to light without the century-long polemics. The question that remains unanswered, however, concerns the distinctions within moral reasoning and whether there is room for a broader discourse. Not every concerned investigator in media literacy or communication arts has held a notable religious background. For instance, communications guru Edgar Dale, who published How to appreciate motion pictures (1933), the best-selling volume to come out of the Payne Fund Studies, “pursued a program of film education in concert with Ohio State University and the National Council of Teachers of English.” In general, he appeared enthusiastic toward film after “commending a canon of approved movies such as the adaptations of A tale of two cities, Great expectations, A midsummer night’s dream, Anne of Green Gables, and so on” (Grieveson, 2008, p. 22). That sort of selective curriculum, however, has troubled film theorists (as much as it has literary theorists), since their aim was to study all films in order to deepen our understanding of the breadth and depth of human nature.
Not merely works that uphold moral values, that is, but works that ‘teach’ us something about the nature of being human. The fact remains that educational film programs (i.e., viewing and critiquing) “were conceived as a way of destroying the mimetic effects of cinema,” made possible by the list that Dale (1933) promoted, following from the “model of spectatorship advanced by the Payne Studies as a whole” (p. 22). Setting aside the notion of ‘mimetic effects,’ one can begin to surmise that Dale was governed by a moral code based on normative values and ‘common’ folk wisdom.

Searching for a new direction

Why should any of the foregoing matter? In light of the inexhaustible body of work on film studies found in books, trade magazines, and academic journals—in libraries and on the Internet—an historical perspective on the motives and drives behind film research and study clearly demonstrates the importance of the subject of film. For complex social, psychological, and cultural reasons, film is not simply a fascination, but a cause for profound concern. Nonetheless, in light of the diversity of theories, which span more than a hundred years with no explanatory whole, many theorists have abandoned any hope of achieving a grand theory of film (Smith, 2003). Whatever may be said of film today, many theorists are cautiously treading where once they spoke with definite assurance. This caution can be sensed across many fields of research, largely because the naive belief we once held, that science and technology would solve many complex human issues, has been thwarted by escalating concerns that have turned global and epidemic. Given the numerous technological calamities—principally rooted in physics, chemistry, and biology—we are slowly coming to the view that technology is best directed toward problems of logic, not problems with the kind of complexity that makes up humans, cultures, and societies. As one sifts through the literature on film, one finds complex human themes that have been cycled through theories in psychology, sociology, politics, and economics—all the ways in which humans have been entangled throughout history. The range of those complex themes—representation, meaning, subject formation, identification, ideology, agency, subjectivity, and authority—has been theorized anew under what appear as new conditions (e.g., modern or postmodern). Yet the same underlying motives and drives permeate the new frameworks as they did those of the generation previous, namely, an emotionally laden logic imbued with values of ‘good’ and ‘bad.’ It is this ‘loop of logic’ that led Latour (1993) to make the claim that “we have never been modern.” Though he did not base this view on the study of emotions and values, Latour had a point. Nonetheless, without understanding the emotions and values embedded in logic, we are bound to create an eternal loop of reasoning from which there is no escape. What we end up with are interminable descriptions of universal themes retold in particular ways. Of course, the descriptive always reaches a state of exhaustion before an explanation that would lead to a solution is ventured. In the area of film studies, an explanation began when Christian Metz (1974) ventured to compare film with language. Dependent on emerging studies in structural and cognitive linguistics, Metz’s efforts to explain film works were foiled by new film forms linked to new technologies in sound and photography.
In essence, Metz’s theories underwent half a century of critique and, although his work made a tremendous impact in film studies, his ideas were both lauded and discredited in light of the many descriptive examples of new film forms. As in the case of the novel, explanations of film always faced an exception to the rule (Casetti, 1999). Had Metz read Bakhtin’s (1981) view of the novel, which has in common with film a form that is always in flux, he might have been able to envision the difficulty he was bound to encounter. The salience of technology is not merely an element of film works. Cosmology, mechanical physics, chemistry, and biology relied on two thousand years of mere descriptive understanding, with explanations deferred to philosophy. These were given explanatory depth through technologies capable of seeing macro- and micro-elements. Neuroscience has also benefited from evolutions in technology. Given the limited manner in which the brain could be studied, descriptions in neuroscience seemed to be all that could be ventured for several hundred years, with explanations and solutions deferred to philosophy and psychology. Today, neurological explanations are being posited that could lead to real solutions because, as Ramachandran (2004) expressed, scanning technologies—leading to the design of better experiments—are giving us access to what lies inside the ‘black box.’ Through the countless efforts to describe phenomena in film, many have made speculative, rational explanations only to be displaced by the next, more eloquent interpretation. Quasi-experiments in psychology and sociology have lent theoretical frameworks to film studies that tend toward isolating elements from the whole or reducing and simplifying complex mental processes and social interactions. In turn, those theories have been both simplified and conflated through film analyses (Smith, 2003). In film works, rather than the ‘brain peering into the brain,’ as the frontier of neurology appears to lead us, we appear to have something made by the brain, which is yet entirely removed from the brain—something purely technological or artistic. Or is it? In truth, theories and analyses in film studies reflect areas of human concern that have changed little over the centuries and that we have had little capacity to resolve (e.g., mimetic effects, identification, and subject formation). What remains is not better film analysis or a better way to penetrate technology, but to understand consciousness and the brain-mind-body problem, because technology is a human expression of inquiry. Technology has brought us ever closer to the age-old, self-conscious question: Why are we here? Though not yet answerable, understanding consciousness and exploring mental processes from a holistic perspective (i.e., brain-mind-body) has taken our concern in new directions. Largely this is due to the fact that technologies are helping us to peer further into the brain as they have done into the heavens (Damasio, 1999; Ramachandran, 1998; Sacks, 2010). The embodied brain has appeared to us as a black box, which has prevented us from understanding its workings. Yet we are growing closer to capturing mental processes and, in so doing, to understanding human cognition, emotion, motivation, drives, and intent (Grodal, 2009).
Reflecting on the urgent call to action

Of the individuals worldwide now fully intoxicated with inexhaustible images, none have become so thoroughly drunk as children and youth, whose estimated time spent in front of the television, Internet, and PDAs (personal digital assistants) has lengthened and intensified beyond anyone’s expectations at the turn of the 21st century (Kaiser Family Foundation Study, 2010). This increased duration and intensity is difficult to explain in rational terms, making it increasingly difficult to curb the latest advances in imagery. Yet, through anecdotal reporting, one notes the concerns that have been raised as we move into the latest technological invention in 3-D images. Not able to make conclusive statements as to the cause of such mental ‘disturbances,’ we are faced with only one option, namely, to reconsider what constitutes an image and its relationship with our brain-body. Putting this latest phenomenon into context makes our present world seem rather unique in the history of humankind. Our world is experiencing images in a manner unprecedented in human history, with innovations being introduced at an accelerated tempo and with no sign of slowing down. From personal experience, I have seen that even in remote, impoverished regions of the world, audio-visual media are accessed via community resources with links to larger metropolises, made possible by infrastructures in transportation, optic cables, and satellite. The idea of ‘remote locations’ has been forever altered. For anyone who aims toward ensuring a safe and healthy environment for children, there is much to consider. In what direction should this dramatic change take those whose concerns lie with children and youth as they grow, study, and mature into adulthood? Certainly interest in the education of the child is as varied as the fields of concentration aimed toward understanding the many facets that make us human. And every field hopes to contribute in meaningful ways to understanding the pedagogical consequences for the psychological, emotional, social, intellectual, physical, and spiritual development of the child or youth as such. But the enormity of assembling such diverse views, with the hope of finding our way through the tide of initiatives in order to best serve our children and youth, seems almost inconceivable. It also feels that when we rush decisions, we may cause further harm rather than good. In the ensemble of theories and methodologies, one may wonder what avenues can help position and guide educational researchers to better understand the course of knowledge in coming years with a little more circumspection. Is there time to be circumspect in such a sped-up environment? Clearly, film’s impact on children has been on people’s minds since as early as 1909 (Grieveson & Wasson, 2008). Without wishing to overstate the points outlined thus far, the following brief account concludes that a century has not been sufficient to bring us to conscious terms with the feeling of urgency that motivates and drives our research and policies. When Alice Miller Mitchell (1929) published her lengthy qualitative study, replete with statistics on children’s film “behaviors and attitudes,” it drew the attention of educators, sociologists, and psychologists with an interest in child behavior and dispositions, which, in turn, prompted several published reviews. Naturally enough, each specialist took a different perspective on the study’s conclusion.
Some were ‘positive’ in their critique. One pair of authors thought Mitchell’s study showed undeniably that “All classes preferred movies to reading, but the delinquents more especially” (Bernard & Bernard, 1930, p. 127). One author considered that the study “shows conclusively that attendance at motion-picture exhibitions is a regular experience with the vast majority of city children” (Freeman, 1930, p. 636). Others were significantly less positive. One attacked the methodology and conclusions, suggesting instead that “This book is really a statistical collection of opinions…the interpretive sections all grow out of more or less common assumptions made by social workers, juvenile judges, recreation directors, and school authorities about the deleterious effects of the motion picture upon conduct” (Young, 1930, p. 307). Yet another indicated that “What the data actually show is that all of the children are alike in preferring play to movies” (Peters, 1930, p. 207). Within such varied perspectives one can begin to detect a familiar debate that centered then, as it does today, on the problem of epistemology and methodology.

The knowledge digital ethnographic experts presently offer

To update the debate that took place under a sense of urgency at the turn of the twentieth century, the following puts matters into a current context. In the words of digital ethnographer and YouTube specialist Michael Wesch (2009), “This new media environment can be enormously disruptive to our current teaching methods and philosophies” (p. 1). Thus it would appear that our concerns carry the same threads as the early empirical film studies. But in what manner are new media environments disruptive? Are they contributing to youthful ‘delinquency’ as film was thought to do in Mitchell’s day? According to Wesch, this new virtual world, made up of images, sound, and written text, is disrupting traditional schooling with its “physical structures” designed for transmitting information, its implacable “social structures” in the form of “standardized testing” that evaluates the degree to which information has been acquired, and its “cognitive structures,” which have shifted a traditional understanding of space-time (pp. 1-2). Given Einstein’s concern with the indexical now and the enduring problems dividing physics from the human experience, one wonders just how our view of space-time has been altered, and whether Wesch can provide us with the answer. In a strange turn of events, however, according to Wesch, the ‘delinquents’ of today are researchers (i.e., faculty members) who wish to “subvert the system.” It is interesting how ‘delinquency’ has been recast. In Wesch’s view, it is not the youth who display behavior deviating from the norm; rather, it is those scholars who also attempt to instruct through these new technologies. Understandably, the ubiquity of images and sound, as these exploded on the scene at the start of the 21st century, makes it difficult for many educators to sustain learning through ‘old’ traditional approaches. Perhaps there is a dual message in Wesch’s analysis. Perhaps his thoughts follow the sequence of causation whereby media have disrupted youth, who in turn now disrupt the system of education. In any case, taking his argument at face value, the disruption of which he speaks appears to be between traditional and new forms of teaching and learning. By consequence, he is part of the new vanguard of scholars repositioned to promote new media and, along with it, new approaches to research and pedagogy.
But one does not get the sense that this promotion is in favor of a new generation finding their voice and generating critical views. Wesch’s ethnographic work, particularly on the development of YouTube, suggests that something of the old ‘authoritarian’ spirit remains. For instance, Wesch is not unlike the intellectual elite of the past who saw the rapid change in our environment as requiring experts poised to mediate or manage (perhaps control) the learning of children and youth. We are thus immediately thrown back to a time when it was thought necessary to create a particular kind of expert, namely the film or media scholar, able to interpret the physical, social, and mental change and to translate it for those unable to make the shift. To an arts educator more inclined to explore collaborative processes with learners that lead to critical and creative thinking, that expert presence becomes a misplaced interpretation of the role of the pedagogue. Thus, it is interesting to note that, then as now, ethnographers are positioned at the front lines to offer us convincing views, for instance, Wesch’s popular 2007 YouTube video, The machine is us/ing us. There is something disconcerting, perhaps even disingenuous, about a digital ethnographer using digital video to persuade us. Curiously, my university students have picked out a subtext in his videos that counters his main text—one that they feel is manipulative (the author, that is, not the medium as Wesch had hoped to show). I do not wish to suggest Wesch has consciously set out to promote his ideas through devious means; rather, it is clear that the use of video carries ethical implications in a research context aimed at defining and explaining the medium of video. I would expect no less meta-awareness in a written context. What would be the point behind the study of language if not to acquire an understanding of its ethical use in society? In anthropology, ethical concerns of this kind date back to the uses of photography and documentary film—a history that made many anthropologists wary of tampering with or ‘staging’ visual and aural data (Barnouw, 1993). Aside from ethics, what is curious is that my students gave a critical ‘reading’ of the video independent of any comments from me. Some researchers would say that their competency to read a video is a result of highly developed language skills. In this statement one would assume that language is the optic through which to understand film. Yet, according to my personal observations, whether the viewers are university students or youth with ‘low’ literacy rates, all viewers demonstrate the ability to ‘read’ video critically. As this phenomenon was illustrated so well at the end of Rouch’s 1960 film, Chronicle of a Summer, a broad swath of individuals are able to critically ‘read’ moving images (Rouch, 2003). This well-known fact continues to elude film theorists and researchers. Due to the type of resistance shown by first-year undergraduate students toward accepting film analyses as they are presented in the classroom, many film educators, analysts, and theorists will contend that the depth of reading depends on the level of filmic knowledge. But that kind of bias only reinforces an authoritative view that cannot easily account for creative or logical conclusions made by ‘naïve’ readers. Moreover, the capacity to read ‘virtual’ moving images appears to entail certain ‘competencies’ similar to language, which point to cognitive processes.
Those mental processes, however, leave out the ‘social’ processes involved in communication, since neither filmic nor written texts allow a communicative ‘back-and-forth.’ This confusing state of affairs calls into question the capacity of both film ethnographers and cognitive film theorists, as experts, to explain social and structural phenomena as occurring in a film context.

The knowledge communication experts presently offer

Besides ethnographers, there are other experts on the front lines of study offering slightly different views. Media and communication theorist Henry Jenkins (2006) outlines “three concerns [that] suggest the need for policy and pedagogical intervention.” The first of these current concerns is what he calls the “participation gap,” whereby all youth are not fully prepared for “participation in the world of tomorrow.” The second is “the transparency problem,” which “challenges young people…to see clearly the ways that media shape perceptions of the world.” And the third is “an ethics challenge,” which is felt in the “breakdown of traditional forms of professional training and socialization that might prepare young people for their increasing public roles as media makers and community participants” (p. 3). What Jenkins and Wesch both seem to suggest, and share in principle, is that there exists a gap between the youth now actively engaged in image and sound production (as is readily apparent on YouTube) and traditional print-bound educators. And both are poised to suggest that media shape perception (whether cognitively, emotionally, or socially). Yet Jenkins extends his argument to include an ethical challenge to youth becoming full members of society, which allows them to “articulate their understanding of how media shapes perceptions, and has been socialized into the emerging ethical standards that should shape their practices as media makers” (p. 4). Here Jenkins argues that youth, being not yet full members of society (presumably politically and economically), require expert intervention to help them articulate how media shape perception as well as how they are socialized, or ought to be, according to new ethical standards emerging in society. Those principles, as far as I can determine, disfavor neither youth nor new media. It is perhaps a truism that a digital gap exists between learner and teacher, though I suspect this is not particularly unique to our time, despite what is seen as the ‘digital shift’ as depicted in the popular Shift happens series on YouTube. From a linguistics viewpoint, the video may be an ideal example of film’s referential-indexical capacity, for its content begs the question: is the content speaking through a first-person singular and, if so, who is doing the talking? This is an important question to pose if one is to try to deconstruct what Jenkins and Wesch mean by the “digital native” in contrast with the “digital non-native.” And which voice, whether native or non-native, is in authority? But the language used to describe the ‘generation gap,’ such as ‘analogue’ and ‘digital,’ creates confusion as to what has specifically caused the gap in the first place, since there is a lack of precision in the use of the two adjectives.
In other words, calling today’s generation “digital natives” confounds how “digital” media impact today’s generation differently from those preceding it, given the binary nature of the written language versus the analogue nature of natural language or the traditional arts (such as music, visual art, and dance). As linguists and media specialists have noted, the written word is a linear, sequential digital medium (i.e., in that written letters are the smallest logical units representing linguistic sounds)—precisely as are computer languages. The confusion appears to rest in the fact that “digital” new media produce ‘holographic’ projections that imitate traditional analogue systems (e.g., natural language, images, sounds, and movement), which are multi-directional or non-linear. The first word processing system that allowed us to produce non-linear text by moving text around (as Wesch’s video illustrates) is simply a medium imitating what we have always been able to produce through the spoken word and other art forms, namely, syntactic flexibility. Certainly the traditional forms of visual art, music, and dance have always possessed the temporal-spatial qualities that are now found at the touch of a mouse (or fingertip). What, therefore, can be said to have altered our analogue processes through digital new media? Philosophers, linguists, neurologists, and cognitive scientists, having described the written word as a digital abstraction, provide compelling ideas and facts regarding the relationship of the written word with thought. To suggest that youth today are “digital natives” ultimately overlooks certain kinds of knowledge, oversimplifies the cause, and complicates the phenomenon we are witnessing today. Ultimately, this creates an unnecessary tension where there need not be one. In other words, the issue of “digital natives” may be nothing more than a red herring that serves to divide us conveniently into two groupings (the technologically adroit and the non-adroit), ultimately distracting us from reflecting more deeply on the matter. While more could be said on this issue, one final thought sums up the foregoing. Whatever aspect of the human being one views new media to affect, whether cognitive, social, emotional, physical, or spiritual, media (signs, symbols, objects, etc.) have predominantly been viewed as shaping or forming human beings in some fashion. The form, in other words, is informing the human. Neither Jenkins nor Wesch clearly expresses that all media, as extensions of thought, both inform us and are, in turn, deformed by us. Although Wesch shows the ease with which we are able to move text around on a virtual page or shift from one image to another seamlessly, the sleight of hand seems to underscore the medium’s hold over us rather than acknowledging the creative capacity to capture ideas on video—something that machines have never been shown to possess. There is something uncannily Hollywood in a position that ventures that the “machine is us/ing us.” On the other hand, perhaps I am not doing the field of cybernetics due justice (if this is Wesch’s position). Cybernetics (which is also a branch of communication studies) has certainly given food for thought, not the least of which is that offered by Donna Haraway (1991) in her seminal essay, A cyborg manifesto. I cannot hide the fact, however, that Haraway’s view of essentialism and naturalism is not one that I share.
Despite my deep admiration for all those who have resisted and continue to resist normative values, I disagree with her claim that “We are all chimeras, theorized and fabricated hybrids of machine and organism; in short, we are cyborgs” (p. 150). I disagree on the grounds that humans are entirely organic in nature. To confuse the fact that our use of technology, including language, informs our brains with the imaginative view of Roddenberry’s cyborgs is rather like comparing our brain-body-mind to clocks or machines. Our entirely organic impulse to create, which is our human evolutionary inheritance, is one that needs careful consideration in the face of the many forces that continue to do harm to our lives and dignity. Nowhere then or now has the evolution of human beings been the result of machine/organ mutation, except in Hollywood films. Finally, this lopsided view of media seems to position vulnerable segments of society (e.g., youth, the artist, the ‘non-expert’) as either ‘victims’ or ‘champions’ of new media (depending on one’s expert view); and it ultimately casts new media as the primary and unstoppable force that, according to Wesch, is rendering traditional schooling insignificant, which is not much different from the kind of view experts offered in Mitchell’s day. What is confusing to an educator or an artist is the dichotomy between ‘traditional’ and ‘new.’ Once again, our research or schooling problems may lie in the fact that educational researchers and teachers overlook compelling ideas, oversimplify a cause, and complicate the issue through a lack of precision in the use of the terms traditional and new. Before we begin to name, define, and explain the relationships between words and ideas, we must, at the very least, begin to clarify their meanings as embodied images. In the end, pedagogical intervention, as it is given importance through scholarly research, is done for the purpose of curbing the speed of change and redirecting youth’s viewing and production in order to “help students acquire the skills they need to become full participants in our society” (Jenkins, 2006, p. 4). There is nothing particularly objectionable about concentrating on mass literacy, which has been part of the democratic process since compulsory education began, and for these reasons researchers such as Wesch and Jenkins actively advocate for new technologies, new research programs, and instructional improvement as these are felt across the community. Yet intervening with policies with only fragments of information at hand seems unwise. Despite the push, experts are cautious, even when their claims are vociferous. For instance, Jenkins states clearly that while he advocates strongly for new media use in the classroom, he does not “dismiss the very real concerns that the [Kaiser Family Foundation Study] raises” regarding the “amount of time young people spend on screen media” (p. 11). He admits that, to some degree, the report is not to be discredited since media have not merely disrupted traditional forms of education but also, as it would appear, the lives of youth themselves (though this disruption has been fuzzily explained). Clearly, advocates today are moving ahead with propositions and policies before any confirmed evidence from research has been reached, which in many ways indicates that action is being undertaken with a sense of urgency before sufficient observations and facts can be gathered.
I can well imagine how the public would react to either science or medicine if hypotheses were left unconfirmed. Then as now, research and policies are being conducted in a state of ‘emergency’ in response to our lives being disrupted by a type of disequilibrium that is poorly understood. And clearly, the urgency to address the so-called disruption stems from less-than-scientific notions surrounding what is being ‘disrupted,’ an urgency that compels experts to propose a solution. For Jenkins, the solution to ‘disruptiveness’ rests in doing away with a “laissez faire” attitude toward youthful viewing and production. That is to say, the “laissez faire approach” in pedagogy and research “assumes children, on their own, can develop the ethical norms needed to cope with a complex and diverse social environment” (p. 12). From this perspective, as the concern shifted from “screen time” that shaped youthful viewers to “screen time” that now shapes youthful producers, the urgency shifted from restoring (or maintaining) social and cultural norms, a measure taken to protect youth from self and society (i.e., delinquency), to engaging youth in “critical dialogues that help them to articulate more fully their intuitive understandings of these experiences.” This engagement is apparently meant to enable them to “participate in social and cultural norms” as these are expressed politically, socially, and economically (pp. 10-12). This convoluted argument leads us to note that in Mitchell’s (1929) time the urgency felt toward research and policy was valued as a means to “protect” youth and the public, while in our time it is valued as a means to be “pro-active.” Despite the clever semantic turn from passive to active, there is still a sense that education or youth (or both) are being jeopardized. There is something doubtful, however, in this new active view, as optimistic and noble as it may seem, for as Buckingham (2000) argues, “By and large, young people are not defined by society as political subjects, let alone as political agents. Even in the areas of social life that affect and concern them to a much greater extent than adults—most notably education—political debate is conducted almost entirely over their heads” (pp. 218-219). Wesch’s view of the problem is more pragmatic. The issue of the Internet as a social network, which inevitably includes the sharing of videos on YouTube, is a matter of succumbing to the old adage: if you can’t beat ‘em, join ‘em! Or, as Wesch expresses it, “With total and constant access to their entire network of friends, we might as well be walking into the food court…and trying to hold their attention” (p. 4). In Wesch’s words, this new educational battlefield will be best served if we “work with students [to] leverage the networked information environment in ways that will help them achieve the knowledge-ability we hope for them” (p. 4). It is plain that Jenkins and Wesch have now become the new experts (Jenkins is even touted as the ‘new McLuhan’) and support new technologies. They are also the ‘new’ interpreters of a phenomenon that is well over a century old. In some ways, they appear to be operating within familiar frameworks. In other words, their aims do not venture far from those of past experts with concerns in education. There is a need to intervene in the education of youth, whether to protect them or to actively prepare them for full participation in society.
Then as now, issues of agency continue to be a factor, and the urgency of the times sets in motion a familiar refrain: a mere description of the times. What continues to be lacking is an explanation that would help us to find solutions to the disruption that threatens our sense of wellness. Under such historical optics, one is tempted to revisit Francis Bacon (1561-1626) and reconsider his proposed Idols of the Tribe, or false notions; Idols of the Cave, or opinion and personal interpretation; Idols of the Market-place, or the uses of language and arts; and Idols of the Theatre, or reliance on Aristotle’s syllogisms. Bacon’s lawyerly instincts led him to distrust both the methods of the Scholastics and the science of his day, which was filled with conclusions that leapt from particular observations to indefensible general axioms. His shrewd observation of epistemological and methodological problems, despite what we have come to note as his shortcomings, appears to have set the stage for the modern sciences. But it is not Bacon we need to revisit; rather, it would appear that individuals like William James and Alfred North Whitehead could offer us greater insight into the emotional register that biases our rational views.

Questions and concerns that continue to haunt arts educators

In truth, as far as my concerns went fifteen years ago, the phantoms of the past did not seem to be of any importance to me as I planned my film production unit. First, because artists, like scientists, are always tackling new technological innovations, whose necessity is the ‘mother of invention’ with which to create or investigate. And, second, I did not perceive new media as being any more disruptive than the panoply of ‘traditional’ arts I endeavored to teach. For better or worse, I was part of a long history of artists who have considered the arts as sites of empowerment through expression and self-reflection. And more often than not (since arts educators tend to think as artists), whether in schools or universities, arts educators also view their work as disruptive to social, political, and economic forces within and without education (O’Donague, 2009). Hence, in those days, alongside being part of a school whose vision was to disrupt standard forms of curriculum and pedagogy, which served to support my film project as an arts educator, I was also part of a long line of conscientious artists whose artworks were meant to unsettle normative values across socio-cultural structures, including political and economic values held by the artists themselves. It simply felt, to me, that the times were ideal for developing a unit of study in filmmaking, despite the fact that the medium was still ten years from achieving full production and distribution capacity. At any rate, none of what I was endeavoring then gave me a clue as to where it would lead me today. It certainly gave me no sense of the history of film studies in education. In general, artists whose intent is to disrupt normative values begin to reassess their works when a measure of success places them squarely back within the boundaries of socio-cultural, political, and economic norms. If they do not reassess, or cannot get past the comfortable position of now being part of the norm, their disruptive inclination is effectively dampened—and, worse, may be subject to parody.
This cyclical pattern in the world of the arts continues to generate much discourse, for throughout art’s history, when the once ‘unsettling’ art form is fully embraced by society (often with a generation lag), social, economic, and political forces enter to take over to reap the benefits—an impediment to the artist’s desire to ‘move on.’ There is always money to be made and politics to be dealt with, once art or technology becomes part of an economic exchange. As to the successful artist, who is not seduced by success? At any rate, what was once viewed as ‘disturbing,’ soon becomes an institution and the artist innovator either becomes a living display in the museums of the mind (e.g., Michael Jackson) or obsolesces entirely as new artists create variations on a theme. Of course, nostalgia, which means ‘to return to the hearth,’ inevitably keeps the memory, and the fans alive and faithful. Filled with such ideals, the kind I see enter the teacher education program year after year, there is a fervor and hope one may disrupt socio-cultural norms through artistic endeavor, perhaps unsettle curriculum and pedagogy in the process. For me, the art of filmmaking has been simply another medium through which to exercise autonomy of thought and composition. In whatever art form, that is simply the realization of the human creative impulse—one that carries a degree of importance, an aspect of the immanent, and a promise to transcend the finite that is felt in the quotidian, the ritual, the mundane, and the matter-of-fact (Whitehead, 1938). While it certainly was not notoriety or financial gain, which threatened to throw me back into the ‘norm,’ there was one major difference between my work fifteen years ago and my work today, which indubitably has caused me to reassess my work as an artist and arts educator. Film works can now be fully realized from the initial concept to screening and distribution, which 109 means that young voices today have within their power the medium of film as never before—as do all of us in positions of authority—at a pace of production that is staggering. And, for what it is worth, students, artists, researchers, and educators (along with production industries and the political forces that surround us) all face a time when normalization and disequilibrium seems to be cycling faster than ever before. Due to the rollover of the ‘new,’ we are very nearly fatigued by ‘innovation’ and change, i.e., unsettling ideas meant to overthrow old thoughts and action. What was ‘disruptive’ yesterday, today hardly causes an eyebrow to arch—everything is ‘old school.’ That could be more disconcerting overall to an artist-educator than the sense of urgency to intervene in the education of youth. Of course, unless ‘old school’ is in, like the return of the vinyl and the immortalization of Elvis. Perhaps what is needed today is an antidote to speed. At this point, anything traditional is looking more and more appealing. On the other hand, one look at nostalgia and one soon begins to wonder whether it is not part of our trouble. Does our nostalgia temporarily slow down the rate of change or does nostalgia create our anxiety toward change? It is perhaps nostalgia that creates a desire to preserve tradition, name new alliances ‘tribes’ or prompt men like Levi- Strauss to call traditional peoples ‘noble savages.’ Nostalgia appears to prompt us to draw from tradition in order to enhance fashion and architecture. 
Nostalgia invites the tourist to purchase costly tribal souvenirs (now only made for tourism), it romances the filmgoer with epoch histories, and the television viewer with stories from ‘mom and pop.’ Rather than nostalgia, we need a true antidote to the rate of change and the biases that arise from its dizzying acceleration. And for what it is worth, the Dean of Education’s film project, which was to capture the ‘reality’ behind literacy policies, has given me reason to believe that film is an integral part of an educational antidote to the effects of change as it has impacted on our efforts in the classroom. A detailed account and analysis of film processes and 110 products in the third and fourth chapter, including the Dean’s commissioned film, will provide a more thorough explanation. At any rate, this chapter has been written to describe some of the forces that have directed film studies arising from psychology and the social sciences as those flourished in the United States. Additionally, film continues to be studied as a part of the larger investigation on digital media in both Canada and the United States. Time and space has constrained me from including the significant role that the National Film Board of Canada (NFB) has played in Canadian film education, which briefly, began after WWII under the direction of British documentary filmmaker, John Grierson (Evans, 2005). Prior to that time, the early Canadian ‘utilitarian’ film projects, mostly sponsored by the government, contributed in shaping a pragmatic view toward film, which was, according to the founder of the Film Studies Association of Canada, Seth Feldman (2011), “associated with education, propaganda, and advertising” (in Canadian Encyclopedia). Aside from the renowned NFB artistic film projects, such as those of animator Norman McLaren, which were shown principally in film departments, film art houses and art galleries, that ‘utilitarian’ view allowed the purchase and building of Canadian movie theatres through American financing. By consequence, Canadians grew up watching Hollywood movies. And to date, Canadian films are poorly received by Canadians, with the exception of the province of Quebec, whose spotty but plucky ventures into narrative feature length films has garnered the support of a much wider film audience after the release of Denys Arcand’s films, Le declin de l’empire américain (1986) and Jesus de Montréal (1989). Also missing are the various histories of educational film studies, in particular the documentary films of sociologist Edgar Morin and film anthropologist Jean Rouch (2003), which have impacted on film’s pedagogical uses. And while the contributions of Morin and Rouch hold 111 great importance in the overall study of film, which includes my personal understanding, nevertheless, a study specific to documentary film and other ‘reality based’ film works, which could be examined further in relation to the mind-body-brain problem, is a topic that deserves its own focus. In sum, by the start of the 1980s, having been shaken by new empirical studies in the sciences, cognitive sciences, and social sciences, as well as rational studies in critical, feminist, and postmodern theories, we emerged from out of the behavioral frame in which hangs Mitchell’s eighty-year old study (1929). What remains out of reach in our time will be the framework utilized by generations as they ponder their ‘holographic’ context, while looking back on our sophomoric ‘digital’ age eight decades from now. 
Yet that new framework appears to be forming as we investigate the final frontier into the nature of being human: the brain. As with anyone trying to imagine what the future will be like for generations to come, part of the answer rests in understanding how the brain forms images and why images of the past and the future are critical to creative processes and to forming an identity. Another part includes the serious contemplation of consciousness and the role of emotions, which are inexorably attached to objects that move in time and space. Because of their intrinsic alliance with consciousness and emotions, it is important to revisit the senses and their relationship with the brain’s cerebral processes. Ultimately, the movement arts, especially film, music, and dance, inasmuch as memory and attention allow, significantly factor in the processes that make up our somatic-sensory cortices—which inevitably are the deciding distinction between humans and animals. Through theories and findings in neuroscience, which harmonize with social, cultural, cognitive, and psychological propositions, an alternate view of the movement arts (i.e., film, dance, music, and language) is made possible. For this reason, the next chapter will explore contributions offered by brain researchers.

CHAPTER THREE

I suspect consciousness prevailed in evolution because knowing the feelings caused by emotions was so indispensable for the art of life, and because the art of life has been such a success in the history of nature (Damasio, 1999, p. 31).

Modes of inquiry: Insights from philosophy, social, cognitive, and neurosciences

Birth is unquestionably our first conscious encounter with the limits of a finite space and time. A mere inkling of our unique mortal self, in the first two weeks of gestation we began with neither a brain nor a nervous system; nevertheless, we rapidly developed in utero as a being engineered by perfect genetic timing. From a rather primitive neural tube, which forms into the central nervous system (CNS), i.e., the brain and spinal cord, the CNS divides by the fourth week of gestation into the forebrain, midbrain, and hindbrain. One week later, the forebrain and hindbrain divide into the sections of the brain that represent, respectively, the newest and oldest developments in the history of human evolution: the telencephalon (i.e., cerebral hemispheres) and diencephalon (i.e., thalamus, hypothalamus, and pineal gland), and the metencephalon (i.e., pons and cerebellum) and myelencephalon (i.e., medulla oblongata). In a short period of time, those important brain structures are primed with the neuronal pathways to weave sensory-somatic input into what will be a complex narrative of non-verbal images. The meanings of those images will be held by our attention, in our working memory, and through a cascade of emotions. The sum of those will give rise to seamless consciousness in wakefulness and realms of surrealism in slumber. The midbrain, which does not divide, holds an important role outside of our conscious knowledge, for it regulates many of our life forces, such as visual and auditory reflexes, sleep and wakefulness, concentration, heart rate, blood pressure, balance, and posture (Nedivi, 2003; Ratey, 2001). Thus midbrain, hindbrain, and forebrain are the divisions of an orchestrated entity whose finely tuned members are called neurons, all poised to let loose an electrifying symphony of cascading changes at the command of various signals.
The heart of the symphony contains an emotional register that is attuned to sensory ‘frequencies,’ which “send commands to other regions of the brain and to most everywhere in the body proper” (Damasio, 1999, p. 67). Through two separate routes, one to the bloodstream and the other to neural pathways, the signals will act on “other neurons or on muscular fibers or on organs. The result of these coordinated chemical and neural commands: a global change in the state of the organism” (p. 67). The knowing self, that is, the autobiographical self, is wholly dependent on the harmonic and synchronic images that map signals from within the brain, which in turn are induced interiorly and exteriorly of the body. This unique human consciousness is indispensable for it “generates the knowledge that images exist within the individual who forms them” (p. 24). The capacity to bring to mind images of the past in which we played a central role is essential to bring to mind images of the future in which we will have a salient role to play. This autobiographical memory—past and future—is both the spiritual and material substance of the creative process. Thus, “consciousness is an indispensable ingredient of the creative human mind” (p. 28). Safely harbored in the womb, the wonder of life—forged by the genetic pathways of our evolutionary past—gives way to a cascade of images that will guide and mold a unique existence. Though not yet fully understood by science, once born, our new environment plays a critical part in the next stage of our development (Damasio, 1999, 2003). In other words, our development in uterus was timed by the unfolding of biological actions that grew in concert with our mother’s body. Provided her body was finely tuned, free of harmful substances and mental stress, we matured in a harmonious state of movement within a stable spatial-temporal sphere. When finally we rushed into the world, we learned to adjust to the pattern of our breath and 114 manage the spaces between feedings and touch, which after satisfying our hunger and sense of wellbeing, was and still is primordial to our survival (Ratey, 2001). This entry into the world, designed by our genetic molecular pathways, gave rise to the important network of billions of cortical neurons, shaped and linked by input activity that will in turn map patterns of images. According to Damasio (1999), those networks contain the image and dispositional space that create the mind, whereby the image space “is that in which images of all sensory types occur explicitly” (p. 331). Of consequence, some of those images are experienced consciously, while others will remain outside of our realm of knowing. By the same token, the dispositional space “contain the knowledge base and the mechanisms with which images can be constructed from recall, with which movements can be generated, and with which the processing of images can be facilitated” (p. 331). Importantly, the two spaces, partly inherited from molecular pathways, partly learned through input activity, are inextricably linked. While the contents of the image space can be known directly; the dispositional space, which is “always” out of conscious range, lies dormant in anticipation to come to life. And although dispositions can never be known directly, albeit observed in their resultant action, their role as the “record of potentialities,” Damasio (1999) explains further, is essential to consciousness. 
All of our memory, inherited from evolution and available at birth, or acquired through learning thereafter, in short, all of our memory of things, of properties of things, of persons and places, of events and relationships, of skills, of biological regulations, you name it, exists in dispositional form, waiting to become an explicit image or action. Dispositions are not words. They are abstract records of potentialities. Words or signs, which can signify any entity or event or relationship, along with the rules with which we put words and signs together also exist as dispositions and come to life as images and action (p. 332).

Since birth, each cortical neuron, designed with an action potential (i.e., an electrical discharge conducted along the neuron’s axon), moves life-dependent chemicals carrying the vital ‘information,’ which Damasio describes in the above quote, throughout the brain. Those action potentials fire electrical impulses in response to input activities and, in turn, contain the details of our inner and outer surroundings. The specific details from birth onward are crucial both to the overall brain circuitry and to the maps formed in memory as a pattern of events and objects. With the exception of vision, whose range of sight is mechanically limited, our global senses (e.g., olfactory, auditory, visceral, and kinetic) leave no information unattended. The vast input of information through our global senses, which lie in close proximity to each other, would dangerously tax the brain should all possible stimuli reach the ‘knowing’ centers of the cerebral cortices. For this reason, evolution has taken many precautionary steps. For instance, the brain is divided into independently organized hemispheres, unnecessary neurons are ‘pruned’ back according to the specific needs of the organism, and information is carefully screened and relayed by a specific area of the brain called the thalamus (Ratey, 2001). Moreover, each sensory mechanism (e.g., eye, ear, skin), despite a complex design, is little more than an ‘antenna’ that sends signals to the brain to be interpreted by more parsimonious areas of the cerebral cortex and cerebellum. For instance, almost half of the brain (i.e., the back half) is devoted to an elegant visual system that possesses a simple means for selecting signals. Although the input end (i.e., the retina) is a “pixel-based device,” “we do not see in pixels” (Hayes, 1999). Rather, as the world is made of edges that define the shapes we see, the cells in the primary visual cortex (i.e., V1) exhibit orientation selectivity, detecting edges of light, a discovery made by Nobel Laureates Hubel and Wiesel (1962, 1974). From simple edge detection, a rather remarkable feat of pattern recognition is enabled throughout the visual system, and we are able to mentally construct and store in memory the shapes that surround us alongside all of their qualities. Just as remarkable is the ingenuity of some visual artists to use this ability to represent objects and persons with only a few strokes of a paintbrush, still managing to capture the essence or qualia. Given the tremendous relief that could be given to those who suffer from mental disorders, it is important from a medical perspective to understand the pathways that govern the brain and body proper.
It is equally important to glean from the facts that are made known by sophisticated technologies and well-designed clinical experiments the manner in which mind could arise from such a wondrous act of nature’s engineering for the simple fact of relieving much suffering. From birth, the mind most certainly emerged through a complex network of systems responding and expressing in perfect attunement. Yet that scientific view is not one that is shared by all who have theorized on the mind or mental processes in search of the ‘good’ life, one that presumably ought to bring a measure of happiness to individuals and societies. From Plato to Descartes and beyond, there have been many who have committed the error imagining a “little person” that governs mental processes, i.e., the homunculus fallacy (Damasio, 2003). Descartes is best remembered for his contention that mental processes are governed mysteriously by an immaterial soul or conscious homunculus that cannot be explained by natural science. And despite modern objections to Cartesian dualism, many theories of mind or mental processes succumb inadvertently to the homunculus fallacy. Instances in philosophy and cognitive science include computational theories of mind whereby it is believed that innate structures (i.e., modules) provide the instructional ‘code’ for the ‘language of thought,’ which are a set of ‘rule-bound’ operations or algorithms that enable mental processes from mental imagery to language. Two researchers who have adopted that particular position are Noam Chomsky (1968, 2000) and Jerry Fodor (2000, 2008). In essence, the homunculus fallacy leads to an endless regress as one contemplates who or what governs the ‘little person,’ module or structure and each one in turn. 117 How the mind emerges from an embodied brain (Damasio, 2003; Sacks, 2010), however, finds its most compelling evidence in the neurosciences. Despite the findings in neurosciences that have moved beyond the descriptive to the explanatory, which have led to the discovery of real solutions to debilitating mental conditions such as ‘phantom limbs’ (Ramachandran, 1998), the work in neuroscience is still being debated among philosophers and cognitive scientists such as Fodor (2010). This debate is in large part due to viewing neuronal explanations as nothing more than reductionism, which is perhaps the fear Whitehead (1938) called, of “explaining away” the complex nature of humans and the conditions that surround them. Reductionist perspectives are viewed as far from fully expressing the uniqueness of human creativity and consciousness. From my perspective as an educator, however, by ignoring neuronal activity and the coordinating systems of the brain, we are left with few means other than ‘metaphor’ to come to grips with mental processes that we struggle to apprehend in practice. On a more critical note, metaphors are unlikely to provide us with accurate information that is necessary for finding viable solutions to perplexing mental phenomena such as schizophrenia, attention deficit, and other debilitating disorders that strike children and youth in our school system. Clearly, to understand the circuitry of the brain on a microscopic scale while puzzling over unusual neurological phenomena is the sort of preparation one needs for developing creative insights and subsequently new opportunities for important discoveries (Ramachandran, 2003). 
For instance, as ascending stimuli first travel up the brain stem, all sensory information with the exception of smell must travel through the thalamus, which is part of the limbic system and sits just on top of the brain stem. Descending signals also reenter the thalamus before moving to the body proper, save for a small portion that enters directly through the pons to the cerebellum (the oldest brain system, which controls the smooth, precise, and coordinated movements of the skeletal muscles). The limbic system is the primary center for emotion formation and processing, as well as for learning and memory. It is a collection of deep structures and nuclei that also includes the cingulate gyrus, the basal ganglia, the hippocampus (which handles complex cognitive processing and memory formation and storage), and the amygdala (which handles complex emotional responses). The synchrony of ascending and descending signals, from the cortices to the body proper and back, harmoniously operates all aspects of conscious and unconscious activity. What can we gain from understanding this symphony of electro-chemical firings? As film theorist Torben Grodal (2009), who studies emotion in film, has expressed, we would be utopian to think that a complete map of brain processes, given the complexity of neuronal activity, is forthcoming any time soon. Despite our relatively ‘primitive’ view of all such processes, however, the question of what we can gain from studying the brain can only be fully answered the further we investigate neuronal networks and coordinating systems. That pursuit has thus far revealed information that contradicts our beliefs and practices in mental health and educational settings accustomed to predicting outcomes solely on the basis of behavior. Although it would appear that neuroscientists are the only ones qualified to pose research questions and design appropriate experimental trials, how are scientists able to actively pursue all such mysteries relative to the brain-body-mind if not confronted with ‘normal,’ typical processes and activities in artistic and educative settings? Surely the potential for knowledge exchange is of great importance.

A deeper look into brain research

Of all the nuclei, the thalamus has the most important role to play in processing and relaying information to the interpretive centers of the sensory and motor cortices. It is also the key to maintaining our state of wakefulness and level of consciousness. Any disruption or damage to the thalamus would cause us to fall into a coma. At first glance, knowing the role the thalamus plays between conscious and unconscious states is perhaps more useful to the anesthesiologist than the artist, educator, or consciousness researcher. Yet understanding the role of the thalamus is of significance not merely to the medical practitioner. The thalamus is an important cognitive connector, and any interruption to its functions would cause undue harm. It sifts through stimuli before relaying them to the interpretive centers of the brain (i.e., cerebral cortices and subcortical regions), and back to the skeletal, visceral, and muscular front lines where the work must be innervated and performed. Its intimacy with all parts of the brain and body proper, therefore, makes its role that much more important to study carefully and to pay attention to the outcomes of those studies (Magnotta et al., 2008).
Anatomically, the “higher-order cortices,” which are like an “ocean around the islands of early sensory cortices and motor cortices” hold dispositions that are “implicit records of knowledge” (Damasio, 1999, p. 333). From the thalamus, therefore, incoming information is relayed to our evolutionarily advanced cerebrum, made up of a rich, multi-layered cerebral cortex containing primary sensory and motor cortices, and includes the limbic system. When activated, dispositional circuits signal to adjacent circuits, which in turn cause images or actions to arise from somewhere else in the brain. Just as swiftly as it enters, therefore, information flows back down the brain stem, outward to our body. The orchestration of incoming and outgoing signaling is conducted with impeccable timing by the thalamus. The thalamus, in other words, which includes the “interrelation of signals, control of brain activities in disparate areas, and relay of signals,” is undeniably “indispensable for consciousness” (Damasio, 1999, p. 333). Of major importance since birth, images and dispositions are comprised of primal emotions, which served to express to our caregivers happiness, sadness, fear, surprise, motives, and drives. Extraordinarily, nature took great strides to ensure we were equipped with all that was necessary to face the world for the next few years of life and beyond. Emotion, which 120 literally means motion outward, is about action. As such, Damasio (1999) explains that emotion is a collection of automated actions aimed at a specific effect, which importance is the regulation of life. Innate and universal, emotions followed a long evolutionary and ontogenetic pathway to allow a “rapid intelligence” among all living creatures—from the simplest neuronal beings to the most advanced. The evolution of emotion, from sense to feeling, held the key to our human ability to learn what it is that we feel, which we are so adept in doing through reason. Learning what we feel, which requires an extended conscious awareness of our emotions, leads to higher order intelligence that unfolds cognitively and socially over much longer periods of time. It would appear that emotions are the rapid firing of action potentials that move muscles automatically, whereas feelings arise in deliberation, i.e., contemplation and discourse. By studying the distinction between emotions and feelings, we can begin to understand the interconnection between emotion and cognition, without which our human nature could not be accounted for. For most of the twentieth century, as Damasio (1999, 2003) recounts, the study of emotions was viewed in academia as stepping back into nonscientific waters and few researchers took the risk of facing the disapproval of their peers. Several exceptions include the distinguished scientist, Paul Ekman, whose study of facial expressions spans forty years. Nonetheless, Ekman (2008) himself admits to having approached his work on emotion while steeped in the tradition of cognitive behavioral science. The study of the mind has been the indelible mark of a little more than a century of intellectual fervor in philosophy and the newer epistemologies of psychology and neurology. Yet the intense focus on reason obsolesced any insights on the mental processes of emotions, which had been proposed early on by philosophers like William James and Alfred North Whitehead. 
Fortunately, over the last fifteen years, new studies in the neurosciences have provided compelling evidence of the importance of emotions in the development of reason (e.g., planning and decision making). This movement was begun largely through the efforts of Antonio and Hannah Damasio (1988) and others, such as Gerald Edelman (2000) and Francis Crick (1994). Whether some educational researchers wish to accept it or not, it was at birth that our brain, with its legion of modulators on full emotional alert, worked in concert with our sensory-somatic networks imbued by reason (Damasio, 1999). Through serial, parallel, and massive numbers of actions, the brain and body proper are the epitome of a complex communication system that connects our inner and outer worlds. As the brain creates images from intact senses, the mind measures the valence of images and signals as values of good and bad or pleasant and painful. We attach those values to changes in location, in pressure and temperature, in sound and light, and in odors and tastes. Within days of birth, we learn to discern such spatial-temporal aspects as shapes, tone, pitch, timbre, color, and texture, to which values are instantly attached. Clearly it is our sensory-somatic system that ‘translates’ incoming signals into images, the purpose of which appears to be to regulate a ‘mindful’ relationship with the world, but what connects images to values? Damasio (1999) offers a telling proposal.

Emotions of all shades eventually help connect homeostatic regulation and survival ‘values’ to numerous events and objects in our autobiographical experience. Emotions are inseparable from the idea of reward or punishment, of pleasure or pain, of approach or withdrawal, of personal advantage or disadvantage. Inevitably, emotions are inseparable from the idea of good and evil (pp. 54-55).

Here then is the embodied brain in action: we intuitively gravitate toward, or shy away from, the objects that enter our new universe. Though we cannot bring to mind with any clarity the ocean of images that flowed in and out of our consciousness when our body first made contact with visual, tactile, and acoustic space—any more than we can recall how we responded to the myriad of objects and events we encountered—it is quite certain that shortly after birth we became mindfully engaged ‘in the act of knowing’ self and the world (Damasio, 1999). The brain and body proper orchestrated our proprietary knowledge in space and time (Pinker, 2007).

Consciousness and the flow of images

It is thus a constant flow of images, which Damasio (1999) termed the “movie-in-the-brain,” and the dispositions, which give rise to “the sense of self in the act of knowing,” that ultimately constitute the seat of consciousness (p. 19).

Consciousness allows feelings to be known and thus promotes the impact of emotion internally, allows emotion to permeate the thought process through the agency of feeling. Eventually, consciousness allows any object to be known—the ‘object of emotion’ and any other object—and, in so doing, enhances the organism’s ability to respond adaptively, mindful of the needs of the organism in question. Emotion is devoted to an organism’s survival, and so is consciousness (Damasio, 1999, p. 56).

Despite lacking the fullness of knowledge, it is possible to hypothesize two qualitatively different forms of consciousness from a neuroscience standpoint.
Positing mind as having at least two significant neurological states, Damasio (1999) describes one as being core and the other as extended consciousness. Though we commonly speak of consciousness as a state of awareness (e.g., memory) or attention, there is much more to consciousness than either of those two necessary aspects. In truth, the complexity of consciousness creates a large enough problem for study and reflection to continue across many scientific and non-scientific disciplines. Accepting the possibility of at least two neurological states of consciousness, if core consciousness were to exist (inevitably directed by the thalamus), the evolution of at least one innate entity appears undeniably necessary, namely, emotion. And for extended consciousness to exist, which is found foremost in humans, it is probable that we had to possess at the very least, what Darwin called, “an instinct to acquire art” (in Pinker, 2007). That instinct, which leads us ingeniously to everything from technologies to language and the arts, is notably distinct among humans. 123 Importantly, however, for extended consciousness to arise at all, an intact and fully functioning core consciousness must be present. That a core consciousness exists independent of the mind ‘knowing’ it exists, arises from studying cases of brain disorders such as blind-sight—a condition whereby a person’s blindness, due to lesions to the primary visual cortex, is yet able to predict object location, despite their insistence that they see nothing at all (Damasio, 1999; Gelder, 2010). Other examples include the case of a man named David, whom Damasio (1999) described as having a total loss of “working memory,” sometimes improperly referred to as short-term memory. Despite being totally incapable of learning anything new, David was still able to evaluate novel experiences as good or bad, pleasant or unpleasant, which persuaded him to either move toward or step away from social situations and ultimately demonstrated an intact core consciousness though his extended consciousness was severely compromised. According to Damasio (1999), one problem of consciousness “is the problem of how we get a movie-in-the-brain” (p. 9). That is to say, there is an explanatory gap between the brain’s creation of images, and the movie that is formed, “provided we realize that in this rough metaphor, the movie has as many sensory tracks as our nervous system has sensory portals— sight, sound, taste, and olfaction, touch, inner senses, and so on” (p. 9). The second problem of consciousness raised by Damasio is the question of how images “are sensed as the unmistakable mental property of an automatic owner who…is an observer, a perceiver, a knower, a thinker and a potential actor” (pp. 10-11). Clearly, movement and the flow of images play a central role in the development of both states of consciousness. At one time noted and carefully documented by observers like Darwin (1890), A.R. Luria (1976), and Piaget (1970), a careful survey of newborns and infants is now being carried out experimentally in cognitive and neurosciences (Hamlin et al., 2007; Newman et al., 2008; Wynn, 2008). The sum of the findings provides us with evidence that a constant flow 124 of movement, which begins early in development, is fully integral to life itself. With the advent of sophisticated new technologies, the result of movement can also be physically observed, captured, analyzed, and mapped at micro and macro levels (Wynn, 2008). 
While yet unable to watch the actual flow of movement in the brain, we may surmise that movement is captured within an image space, which prompts the “movie-in-the-brain,” and within a dispositional space, which prompts “a sense of self in the act of knowing” (Damasio, 1999). The flow of movement in image and dispositional spaces, simply put, allows me to know that it is my thoughts and images that are being expressed into words; I am able to see that it is my hands and my fingers that are moving across the keyboard typing my thoughts and images.

Another perspective on the idea of image

Naturally, one may wonder what an image is from a neurological standpoint rather than, say, from a semiotic or cultural one, which researchers from various fields of study also entertain (e.g., Deleuze, 1986, 1989; Evans & Hall, 1999). Damasio (1999) states that an image is “a mental pattern in any of the sensory modalities, e.g., a sound image, a tactile image, the image of a state of wellbeing” (p. 9) and arguably “the currency of our minds” (p. 319). A biological purpose suggested for images is to convey “aspects of the physical characteristics of the object” (p. 9) within and without the brain, i.e., derived from the body proper and externally, along with our responses to all such objects, which include the “web of relationships of that object among other objects” (p. 9). Hence, whether concrete or abstract, objects situated within or without the brain-body activate a flow of images represented in the brain as smells, tastes, sounds, haptics, movements, space, time, and edges of light. While the use of the word representation may connote a facsimile, the fidelity of which is an exact copy, Damasio employs representation to simply mean a “pattern that is consistently related to something” (p. 320). Aptly, therefore, a representation could be any representation stereotyped semantically, such as musical notes on a staff, which as an aural representation is a sound image. Ultimately the “problem with the term representation is not its ambiguity,” so states Damasio, “since everyone can guess what it means” (p. 320). Rather it is the “implication that, somehow, the mental image or the neural pattern represents, in mind and in brain, with some degree of fidelity, the object to which the representation refers” (p. 320). Far from that view, however, “whatever the fidelity may be, neural patterns and the corresponding mental images are as much creations of the brain as they are products of the external reality that prompts their creation” (p. 320). Insofar as reality is concerned, Damasio maintains that rather than objects being a mental copy or facsimile, “the structure and properties in the image” with which we end up are brain constructions producing images as we are engaged to form neural patterns “according to the organism’s design.” Notwithstanding the brain’s creation of such images, “the object is real, the interactions are real, and the images are as real as anything can be” (p. 321). Damasio’s view of reality brings full circle a notion that has long been debated on philosophical and religious grounds since Plato, which was that reality is wholly unavailable to those who use their senses. Quite the opposite, Damasio points out that a neurological “perspective has important implications for how we conceive the world that surrounds us” (p. 199). The implications begin with our notions of reality and extend to our sense of truth inasmuch as Cartesian dualism has remained fundamentally unchanged.
Such notions having remained fixtures of discourse over time, neuroscience is now poised to challenge truth statements based on notions of ‘realism’ made at every historical juncture, a challenge that today sits between the real and the virtual in a society immersed in filmic images.

The brain, images, and film

Clearly the implications of reality go beyond philosophical propositions when we begin to take into account that “the neural patterns and the corresponding images of the objects and events outside the brain are creations of the brain related to the reality that prompts their creation rather than passive mirror images reflecting that reality” (Damasio, 1999, p. 199). We may thus begin the process of unmasking the polemics surrounding mind-body dualities. First, we may do so through assembling the wide panoply of facts arising in neuroscience and in the diverse body of knowledge from adjacent fields. Second, we may utilize those facts to dismantle ‘old intuitions’ founded upon arguments that rely heavily on abstractions and theories drawn from specialized ‘objective’ views, whether scientific, cognitive, or social. Taking apart film’s basis of realism as explored by theorists Kracauer and Bazin, or taking into consideration the basis of realism in language as Steven Pinker (2007) explained in his work entitled The stuff of thought, requires several frameworks of understanding for film and language. It requires a precise view of the brain-body, which is awash in synchronous stimuli that produce a harmonized sense of reality, relative to the psychological, sociological, cultural, and cognitive processes that consciousness allows. Damasio (1999) underscores those implications by taking us through a cogent view of the production of images “based on changes that occurred in our organisms, in the body and in the brain, as the physical structure of that particular object interacts with the body” (p. 199). Using music as an example, Damasio points out that what we interpret as sounds are the “patterns that are visual, auditory, [and] motor—related to movements made in order to see and hear,” and the sound patterns are the emotional patterns that are the result of the “person playing, to know the music is being played, and to characteristics of the music itself” (p. 199). This is an important point to make, since sounds, not solely interpreted by the auditory cortex, are made ‘intelligible’ through the assistance of the visual, motor, haptic, and other sensory-somatic cortices (Ratey, 2001) and forcefully given values relative to an emotional register. The fact that sounds are reliant on visual or motor cues is well known among Foley experts (i.e., sound effect engineers) in film, and the emotional valuing is well known to those who compose and mix orchestrated film music (Branigan, 1997). Multiple vision experiments with primates have demonstrated that the image that inevitably forms in the mind corresponds with neural patterns, which are constructed “according to the brain’s own rules for a brief period of time in the multiple sensory and motor regions of the brain” (Damasio, 1999, p. 199). A metaphor Damasio utilizes to understand the figurative “building blocks” in the brain is a comparison with a room full of Lego, which would then allow any number of ‘objects’ (i.e., images) to be built. That there is an interaction between the object and the neural properties is undeniable, as neuroimaging has shown.
Importantly, both the object and the neural patterns that form according to the “organism’s design” are real, as are the images that arise in the mind. Thus, “the neural pattern attributed to a certain object is constructed according to the menu of correspondences by selecting and assembling the appropriate tokens” (p. 200). It is not surprising that the menu, the selection process, and assemblage give rise to similar images in our collective minds because of similar biologically designed brains (i.e., humans). The differences in images between humans, however, can be attributed to “models we expect,” according to our experiences and largely due to attention (i.e., salience) and memory (Ratey, 2001, p. 93). To illustrate the preceding, taking music once more as an example, the listener’s degree of experience with music (e.g., trained or untrained) will influence their ‘expected model’ of sound (i.e., images stored in memory) and their attentive selection or degree of salience. An assemblage of all the elements present in the brain-body-mind will then create a unique neural 128 map that, while similar to another, will contain sufficient differences as to make it entirely unique and subjective. Those elements assembled include: the models of expectation, the selectivity or salience of certain aspects, the sensory-somatic images that assist the auditory cortex, and the emotionally charged images and dispositions (Damasio, 2003; Ratey, 2001). The differences heard between listeners is not just a matter of degree it is “why we can accept, without protest, the conventional idea that each of us has formed in our minds the reflected picture of some particular thing. In reality we did not” (Damasio, 1999 p. 230). For social creatures, there is much to be gained from recording and conveying the register of emotions that evolve from primary to secondary, which the latter Damasio calls feelings or ‘social’ emotions. But adding to the complexity that surrounds the preceding discussion on reality is the challenge that is inherent of words to convey images present in the mind of the writer as embodied knower—whom Elliot Eisner (2006) called, “connoisseur”—with images that will form in the mind of the lector whose experience inevitably differs. If one wishes to exchange ideas among disparate connoisseurs through a written medium, the challenge is in finding words that best articulates what one knows that meets a spectrum of experiences. To capture the process of embodied knowing, therefore, one must invent clever ways to overcome the limitations of symbolic logic. In other words, to convey mental processes, one cannot rely on technical terms alone but must rely on the human capacity for rich metaphors (Marcus, 2008). This is precisely what Einstein said of his mental processes, which he called “muscular” as the primary stage of knowing, while “conventional words or other signs have to be sought for laboriously only in a secondary stage” (in Damasio, 1994, p. 107). Hence, the ideas I have introduced thus far are dependent on the logic of images shared between the writer and lector. To arrive at a point of shared images in a written context, a great many words are needed to express meaning. The old adage, ‘a picture is worth a thousand words’ 129 can also refer to a ‘picture’ of sound or movement or any other sensory-somatic medium one chooses. 
Because words and strings of words to make phrases contain ambiguities, implications, denotations, connotations and social-cultural biases, we were obliged to invent ways to come to some precision of meaning—metaphors are just one of those human inventions (Jackendoff, 2002; Lakoff & Johnson, 1980; Pinker, 2007). But metaphor is not the only invention. As any mathematician will attest, conjunctions such as the terms, AND, BUT, NOT, and OR can be used to suggest what meaning a word or symbol is not or what it parallels (i.e., synonyms). This foundational principle of logic, governs the semantics of many sign systems (e.g., language, mathematics, syllogisms, computer programming, search engines, etc.). The price that we pay for the clever use of symbols to convey meaning risks total communication break down. But given the explanatory benefits, somewhere along our evolutionary path we inevitably chose the risky business of developing untold languages. In any case, whether we utilize the term ‘representation’ or prefer to use the terms, ‘mental map’ or ‘mental pattern’ to describe mental processes, all terms point to the fact previously stated that “the neural patterns and the corresponding images of the objects and events outside the brain are creations of the brain.” To the philosopher who may be impregnated with Kantian views, ‘sensed’ objects are dubbed as phenomena (i.e., perceived objects), whereas that which arises in the mind not physically sensed as noumena, in relation to the Greek ‘nous’ denoting ‘mind’ (Kant, 1998). Kant’s phenomena/noumena proposition was an attempt to resolve the Cartesian duality of mind/body, which unwittingly painted Kant into a semantic corner because of the inherent difficulty with ‘object-ness.’ Kant wished to distinguish phenomena or sensed ‘objects’ (i.e., appearance of things to our senses) from noumena (i.e., concepts arising in the mind) as ‘non-objects.’ But once the knower is aware of a concept that 130 has taken shape in the mind, it possesses a ‘thing-ness’ and, as such, is now an object or event to be grasped and passed on to others. Even so, we find it difficult to explain an object’s quality, which may be referred to as qualia and speaks to the thing-ness of an object, i.e., the redness of red. And despite all the energy that has been spent on devising systems of evaluation for creative outcomes in schools and universities, it is the issue of qualia that vexes educators and students alike. While neuroscience, in collaboration with many diverse fields of social and scientific knowledge, have made quantum leaps insofar as resolving the mind-body duality, including doing away with the notion of the homunculus, we are collectively still far from being able to resolve the issue of qualia. Notwithstanding, objects may be as physically concrete as a chair, a sound, a color, or as fleeting and abstract as an event or state of being good, which Damasio adds simply is “as diverse as a person, a place, a melody, a toothache, a state of bliss” (p. 9). Damasio (1999) attempts to clear up all such matters with the following succinct summary, which inevitably brings to mind William James’ notion of the “stream of consciousness.” Images in all modalities “depict” processes and entities of all kinds, concrete as well as abstract. Images also “depict” the physical properties of entities and, sometimes sketchily, sometimes not, the spatial and temporal relationships among entities, as well as their actions. 
In short, the process we come to know as mind when mental images become ours as a result of consciousness is a continuous flow of images many of which turn out to be logically interrelated. The flow moves forward in time, speedily or slowly, orderly or jumpily, and on occasion it moves along not just one sequence but several. Sometimes the sequences are concurrent, sometimes convergent and divergent, sometimes they are superposed. Thought is an acceptable word to denote such a flow of images (p. 318).

In all likelihood, Marshall McLuhan would have concurred with this view, for as Marshall and Eric McLuhan (1988) postulated in their “Laws of Media,” any object, whether human or natural artifact, concrete or abstract, obeys a movement logic that begins and ends in attention and memory.

The flow of movement and arts-based education

Of great importance to my work is an understanding of the flow of movement and its emotional register. To begin, both the cerebral cortices and the cerebellum responsible for gross and fine motor movement are directly linked to cognitive processes. New evidence in the neurosciences shows that “movement is crucial to every other brain function, including memory, emotion, language, and learning” (Ratey, 2001, p. 148). In fact, “several studies have linked language production with complex motor skills, indicating that the two functions share neural networks” (p. 272). In speech production, for instance, it is believed that movement is linked to the “sequencing area at the root of human language” (p. 272) as well as to sequencing the thoughts needed to plan, deliberate, and ponder acts, thus “translating thoughts into deed” (p. 148). Movement is also fundamental to developing a sense of steady beat and thus everything to do with rhythm (Gouzouasis, 1991, 1992). Despite the historical reluctance to equate humans with animals, in large part due to the differences between action and thought, where the former was believed to be a ‘lower’ brain function by comparison to the latter ‘higher’ brain function, studies indicate that the frontal cortex, anterior cingulate, basal ganglia, and the dentate nucleus in the cerebellum are motor centers that are critical to higher order thinking (Ratey, 2001). With well over a century of study on neurological disorders, we have gained a historical perspective on the effect of lesions in otherwise ‘normal’ brains due to tumors, viruses, strokes, accidental brain injuries, and birth anomalies (Damasio, 1994; Sacks, 1995). As such, recent studies of the brain are closing the gap on the brain-body-mind division that has plagued philosophers at least since Descartes. Heightened by the most advanced technologies used to study the brain and by improved experimental testing that measures more precisely the state of mental activity, rather than the general testing conducted in cognitive trials, we have been given a window into the magnitude of movement and its correlate with emotion (Damasio, 1994; 2003). For instance, based on observations of the behavior of unicellular organisms reacting to movement, simple experiments using a “chip” moving across a computer screen have been conducted with children and adults. The experiments show an “anthropomorphizing” effect linked to movement speed, iterations per unit of time, and style of movement.
In this instance, “jagged fast movement will appear ‘angry,’ harmonious but explosive jumps will look ‘joyous,’ recoiling motions will look ‘fearful.’ A video that depicts several geometric shapes moving about at different rates and holding varied relationships reliably elicits attributions of emotional state from normal adults and even children” (Damasio, 1994, p. 70). Quite interestingly, an experiment showing simple geometric shapes that either help or hinder the action of another shape elicited an emotional response in infants as early as six months of age. When asked to choose between the helper and the inhibitor shape, infants chose the helper shape regardless of color, size, or form in every instance (Wynn, 2008; Hamlin, Wynn & Bloom, 2007). As the brain stem swiftly deploys incoming signals synchronously to the thalamus and on through the layers of cerebral cortices across both hemispheres, whether we take note or not, the brain employs purely organic means to regulate the balance of life. In this instance, the balance of life demands that values be awarded to actions, which in turn will facilitate social bonding. As Damasio (1999) clearly outlines, “emotions are part of the bioregulatory devices with which we come equipped to survive. That is why Darwin was able to catalog the emotional expressions of so many species and find consistency in those expressions, and that is why, in different parts of the world and across different cultures, emotions are so easily recognized” (p. 53). Although the “precise composition and dynamics of the emotional responses” manifest in humans depend upon a “unique development and environment,” there is a growing body of evidence showing that “emotional responses are the result of a long history of evolutionary fine-tuning” (p. 53). In other words, from out of the many evolutionary measures, biological regulation depended on an emotional register finely tuned to react to activity within and without the body. Without the aid of an emotional register communicating impulses along neural channels throughout the body and brain, organisms would be effectively compromised in their capacity to make appropriate life-saving decisions, e.g., appropriate social bonds. To illustrate this point in more human terms, Damasio (1999) recounts the story of a young female patient whose life had been frequently compromised for lack of any fear. A scan of this young woman’s brain revealed that both amygdalae (in the left and right temporal lobes) were “almost entirely calcified” (p. 62). The calcification did not affect her ability to learn new facts, nor did it hinder her social capacity, contrary to what had been previously theorized regarding the role of the amygdalae. In effect, the opposite was true: she had a keen intellect and a social, affectionate nature. Despite her affable and spirited nature, her behavior would not be considered to fall within the norm. In short, “she made friends easily, formed romantic attachments without difficulty,” but unfortunately, “had often been taken advantage of by those she trusted” (p. 64). It was her lack of fear, and of the anger that often follows a fearful incident, which allowed positive emotions to “dominate her life,” yet also prevented her from discerning any danger whatsoever. In this perpetual state of bliss, she could not judge harmful actions in the past that would protect her from harm in the future.
The most curious outcome of her lack of fear came from the fact that, while possessing a “remarkable gift for drawing,” she was unable to reproduce faces that showed fear any more than she could summon this emotion to her own face, despite being able to draw or mimic the other primary emotions.

Noticeable at human birth are the universal and innate (i.e., inborn, existing from birth) movements, which were carefully woven into our genetic design by evolution. These early movements, referred to as ‘schemes’ by Piaget (1970), comprise our primary reflexes, namely rooting, sucking, and grasping, along with the startle response and the tonic neck reflex, of which only the startle reflex continues to be present throughout one’s life span. Without those movement reflexes, not only would our survival be compromised, but so too would our cognitive and emotional development be greatly hindered (Ratey, 2001). The images that are produced from signals entering the early sensory cortices (e.g., visual, auditory, visceral, etc.), therefore, include signals from reflexive movements that cause disturbances only when diminished or absent. The register of primary emotions, as reflexive movement outward, can no more be controlled than any other reflex. But, like all reflexes, secondary movements, like secondary or social emotions, can be learned (Damasio, 1999; Ratey, 2001). Individuals who possess diminished emotional capacity, in fact, have been able to educate what emotions they do possess through simple movements or actions, allowing them to adapt more readily to social environments. For instance, two well-known individuals with Asperger Syndrome, Temple Grandin (1995) and Daniel Tammet (2007), have been able to share, through autobiographical accounts, their personal experiments linking objects, images, and movement.

Sensory dispositions and image formation

Showing signs of development early in gestation, we are at birth in possession of dispositions, or dispositional representations, relative to each sensory-somatic organic structure that renders the sense of movement, touch, sight, sound, smell, and taste. As such, Damasio (1999) has posited that dispositional representations are contained in dispositional spaces with a “neural counterpart” whereby both “images and actions can be generated, rather than holding or displaying the explicit patterns manifest in images or actions themselves” (p. 219). For each dispositional representation to reach its potential (a disposition retrieved in memory leads to the formation of images), movement is indispensable, allowing the early sensory cortices to interpret changes in space and time. The dispositional spaces and their correspondent representations are potentials, which Damasio has described as resembling the town of Brigadoon: a place suspended in space and time with ‘inhabitants’ waiting for an action to release the contents. The dispositions are innately primed to encounter the senses that in turn fill the image spaces with critical neural information, which the cerebral cortices correspondingly interpret on the planes of both core and extended consciousness.
Without neural counterparts for dispositional spaces, there could be no neural counterpart for image spaces and the formation of images; conversely, without primary sensory stimulus, dispositions would fail to be ‘awakened.’ While images “of all sensory types occur explicitly,” dispositions always remain implicit and can only be understood in terms of the results or states of being, e.g., racing of the heart or movement of the hand (Damasio, 1999). For instance, touch is interpreted by the dispositional spaces as pressure and temperature, whether it is sensed internally or externally. Words and images that form, such as “pinching,” “tightening,” or “burning,” are frequently used to describe pressure and temperature whether the source comes from within or without the body. The pain value we may then attribute to those sensory images depends upon several factors. Erroneously, we have thought that the skin and viscera possess “pain receptors,” yet studies in neuroscience tell us that no such thing exists (Nedivi, 2003). Instead, studies show that our ability to isolate the source of the stimulus and its correspondent “pain” is made possible by sight, sound, and adjacent sensory cortices, which Damasio (1999) termed “convergence zones,” without which we would only retain the general sensation of pressure and temperature. This important point also speaks to the pain or pleasure that one experiences in the act of viewing or listening. Neither the occipital nor the temporal lobes possess receptors for pain or pleasure, yet we are able to detect those values in the body as much as in the mind, for instance, in our skin when it conveys such knowledge with goose bumps, shivers, or tingles.

The somatic marker hypothesis

All sensory stimuli, such as sound amplitude (intensity or loudness) and timbre (tone color), which are interpreted early at birth, are awarded values of pain and pleasure in the mind. Pain and pleasure are markers that contribute to knowing our state of wellbeing (i.e., a state of alarm in danger or calm in safety). Each will give us a sense of direction, a movement toward or withdrawal from our primary caregivers whose care contributes to our emotional growth. Whether the stimulus is light, sound, or motion, we are primed early in infancy to attach a value to the varying ways inducers (i.e., objects) interact with our bodies. For instance, sound is detected as early as five months in gestation and its impact, which corresponds to our dispositional spaces, will play an important role in sound receptivity and differentiation in speech (Bench & Parker, 1971; Lecanuet, 1996; Lecanuet & Schaal, 1996). Though limited to a narrow sphere of conscious awareness at birth, our bodies (i.e., soma) inevitably ‘mark’ sound as painful/pleasant or as good/bad. As with sound, we are also impressed by visual stimuli. Even with the limited visual distance due to retinal focusing at birth, we are able to detect shape, color, contour, spatial dimensions, and the complex topographical design that allows facial recognition. Prior to and well within the first month of birth, therefore, infants show an extraordinary sensory-somatic discernment in which movement, space, and time factor significantly. Spatial patterns are made evident by an infant’s capacity to attend to object count (one, two, many), tone (i.e., vocal pitch), direction, and location. Time patterns are also notably discerned by infant attention to rhythm and syntax (i.e., beginning, middle, and end).
Clearly, nature equipped us with remarkable sensory abilities to detect characteristics of sound, sight, touch, smell, and taste, so that our senses, barring any genetic or accidental disruption, enable us to engage with objects very early in development, contributing to our developing consciousness (Ratey, 2001). All the early sensory cortices, “the areas of cerebral cortex located in and around the arrival point of visual, auditory, and other sensory signals,” deploy a vast number of images to associative cortices synchronically throughout the brain (Damasio, 1999). This synchrony of images is like the cacophonous tuning of orchestral members seated throughout the central nervous system. Notwithstanding, this cacophony will find order under the direction of the thalamus to produce a symphony of images, the unity of which enters harmonic memory. During those early exploratory months, attention and memory ensure that the sequence of actions, or the orchestration of neuronal firings—as many as 40 quadrillion in concert—produces indelible images (Damasio, 1999). For most of us, our grasp of space and time, as these are constituted in finite ways, is sensed by the body in motion, whose forces and causal connections take hold in the mind the instant we experience life on earth. As humans, nature rewards us with working memory and attention—the ability to focus on specific stimuli and to image the past in milliseconds, thereby quickly planning a future action. As it turns out, small amounts of memory and attention are sufficient, from one-celled to complex organisms, to ‘sense’ impending danger or reward and to take precautionary steps. And conditioning, despite the ‘ugly’ connotation the term has carried since Skinner and Pavlov, is still part of nature’s bag of tools to keep us from lingering too long over making a life-saving decision. As any wilderness guide, athlete, dancer, or racetrack driver will attest, life’s timing is highly dependent on one’s conditioning—the speed at which we process information and the capacity to hold important information in our memory. The capacity of the smallest to the largest organism to detect danger or reward, to move toward or away from objects situated in space and moving in time at a reasonable speed, is essential to survival. Indeed, our ability to move toward rewards or away from danger comes with the ability to ‘mark’ objects, events, and emotions as either giving pleasure or pain. This ability is the evolutionary force of our existence as it has been observed and explained by such insightful observers as Spinoza, William James, and Charles Darwin (Damasio, 2003). Throughout the ages, there have been many who consider the ‘primitive’ behavior observed in nature to be a form of intelligence (Narby, 2005; Wilson, 1998). No one denies, of course, that the behavior of simple organisms all the way to primates is both like and unlike human intelligence. Yet the behavior of approaching or retreating from a stimulus taps into a mechanism once thought to be solely ‘animalistic.’ In humans, it was viewed as buried deep within subcortical processes in the most ‘primitive’ parts of the brain. For most of the 20th century, in fact, cognitive science tried to show that intelligence rests uniquely in the cerebral cortex and is able to exploit emotions and feelings at whim on rational, analytical terms. Behavior that was based in ‘emotional’ content was viewed as ‘subhuman’ or ‘irrational,’ since no one thought that emotions assist in reasoning and decision-making.
But that view is incorrect in light of neurological evidence. An individual who possesses a diminished emotional capacity, while still demonstrating a high intellect and retaining the ability to reason (e.g., weigh costs and benefits), will nonetheless flounder when it comes to making a decision (Damasio, 1999). Without the emotional dispositional spaces, it is as though the rudder has been removed from the ship, disabling its personal stewardship. Damasio (1994) points out that emotional structures are “first and foremost about the body,” which as “preorganized mechanisms” were designed specifically to influence cognitive structures (p. 159). Thus, while single-cell organisms have no brain to speak of and higher primates possess a brain that comes very close to ours, all nonetheless possess a ‘body’ in need of care and attention. Even if they do not ‘know’ this to be the case consciously, all living bodies are designed to overcome such limitations. Damasio (1999) goes on to say that in human anatomy, “it is apparent that emotion is played out under the control of both subcortical and neocortical structures…more important, feelings are just as cognitive as any other perceptual image, and just as dependent on cerebral-cortex processing as any other image” (p. 159). What is true for humans is generally true for all cellular organisms, namely, that “feelings let us mind the body attentively” (p. 159). And although we do not generally endow single-cell organisms with ‘feelings,’ we can safely describe their behavior as ‘feeling’ the environment, even if they have no mechanism with which to know they are doing so. Thus, the terms pain and pleasure, or good and bad, values given to objects and events encountered in life, are merely ‘markers’ indicating movement toward or withdrawal from stimuli, observed even in simple organisms. Some who study emotions, such as film theorist Grodal (2009), have shied away from employing the terms pain and pleasure. This reluctance, Grodal explains, is due to the manner in which humans take pleasure in viewing films filled with horror or tragedy. The ‘pleasure’ derived from watching horror and tragedy in theatre was thought to be a ‘cathartic’ experience, an idea that dates back to Aristotle’s belief that tragedy was a means to ‘cleanse’ negative emotions. This Aristotelian view has been supported throughout the ages by leading psychologists and theorists of film and theatre (e.g., Bertolt Brecht). Despite catchy song titles like James Brown’s I feel good, the terms good/bad or pain/pleasure are not emotions in the true sense. Rather, they are qualities that ‘mark’ a range of primary emotions (e.g., fear, anger, sadness, or happiness), which extends to a broader range of feelings. In general, those terms are useful descriptors when linked to emotional registers but are able to ‘flip’ around in meaning (as when ‘bad’ means ‘good’) by virtue of social linguistic flexibility. I suspect that any aversion to the terms pain and pleasure may be rooted in a social distaste for the countless psychological perspectives that stem from Freud’s sexually laden theories. In any case, whether those are the terms one chooses to use or not, in the end, the inducer or force that causes individuals to draw toward or withdraw from stimuli, as a response to unique bodily needs, will make the decision based on some marker (i.e., a rubric) like good/bad or pain/pleasure.
Ultimately, to be able to fully understand the dispositions, images, emotions, and values that converge and are necessary aspects of human behavior, it is important to move beyond the polemical views founded in cultural, social, and psychological theories. Damasio (1994) conceived of a somatic marker hypothesis precisely because we are able to ponder the values of pain/pleasure or good/bad and the role that those ‘markers’ play in decision-making. “It is perhaps accurate to say that the purpose of reasoning is deciding and that the essence of deciding is selecting a response option, that is, choosing a nonverbal action, a word, a sentence, or some combination thereof, among the many possible at the moment, in connection with a given situation” (p. 165). Just as assessment and evaluation are often interchangeable in a teaching context due to their interwoven nature, though possessing very different meanings and actions, so too are reasoning and deciding. What reasoning and deciding imply is that some form of logic must be underway and, generally speaking, must tap into working memory and attention. Not surprisingly, however, little is said about the necessary aspect of emotion and feeling involved in the ‘decision-making’ side of reasoning. But when considering the emotional register needed to decide on what to reason about, the pithy saying by Philip Johnson-Laird suddenly makes more sense: “In order to decide, judge; in order to judge, reason; in order to reason, decide (what to reason about)” (Johnson-Laird, as cited in Damasio, 1994, pp. 165-166).

Darwin’s insight into art

To comprehend all such nuanced matters, unlike other living organisms, one must possess an extended consciousness. In short, it is from a human perspective that such nuances may be learned, studied, and pondered. As Whitehead (1938) once declared, it is an assemblage of studies and ideas across space and time that ultimately brings greater clarity. For instance, throughout our human history—between the abstract notions founded in the philosophical and scientific—our earliest subjective encounter with movement, space, and time, which neither limit nor escape our existence, has been expressed in countless ways through film, literature, sociology, psychology, politics, economics, law, and other artful objects. Aside from accumulating many means for expressing our experiences, what is clear is that the field of our finite existence, integral with dispositions, images, emotions, and values since birth, appears to have set in motion what Darwin expressed as an “instinct to acquire an art” (in Pinker, 2007). Darwin’s astute remark is founded upon an organic impulse that is primitively expressed throughout nature. That impulse is an instinct for “making things fit together,” a meaning that is the origin of the Latin word art and lives on in such words as articulate, article, artifact, and artisan. This impulse to make things fit together is both a natural occurrence in nature and an observable human trait that appears at an early age. Whether it is fitting together objects or parts of an object, fitting images with feelings, or fitting oneself to life’s conditions, there exists a natural instinct for organization and unity. Indeed, after considerable time spent observing nature, as biologist Edward O. Wilson (1998) and many others have done, one may conclude that all organisms possess an instinct for making things fit.
While we share this impulse for organization and unity with all living organisms, and all are biologically organized in and of themselves, what is unique to humans is a universal sensibility to the flow of images set upon a backdrop of emotion in a unified field of existence (Damasio, 1999). It is perhaps this instinct that drove a group of Canadian west coast arts educators (Irwin & deCosson, 2004) to coin the term a/r/tography as an umbrella concept to express the fitting together of their lives as artist, teacher, and researcher. This universal instinct to fit things together extends to the making of tools, which many species, from crows to primates, have been shown to do. Many would point out, however, that human artifacts do more than merely fit the facts. Human artifacts are the work of an artisan who is emotionally attuned to beauty and are an expression of beauty. For this reason, the word art, as David Bohm (1998) points out, “has come to mean mainly to fit, in an aesthetic and emotional sense.” This view of beauty inevitably fits the sense that first apprehends any art form at its surface, namely sight in the visual arts, hearing in the sound arts, and the somatic-motor sense in the movement arts, to which emotions are ascribed. But beauty is neither an emotion nor merely an aesthetic marker, since the word beauty originally meant “to fit in every sense.” In truth, for an object to possess beauty, one would need to have a clear understanding of the converging dispositions, images, emotions, and values that fit together in every sense. By adhering to partial meanings, humans reinforce their separateness both from nature and from each other. By drawing from a fuller meaning, however, one begins to encounter the wholeness of thought as it is outwardly manifested through a constant flow of images or stream of consciousness. When we maintain a separation between art and technology, we neglect the whole that is present in each, which includes values and emotions, and end up with a focus on what Alfred Whitehead (1938) called the matter-of-fact. Separating us from other life forms but uniting us as humans, therefore, is our handiwork, at times viewed as art, at times as technology, a word that derives from the Greek tèchne, meaning “art” (Aristotle, 1941; Gouzouasis, 2006). Without dismissing the value that comes from focusing on “one thing or another, as the occasion demands,” whenever “one is engaged in breaking the field of awareness into disjoint parts,” there is no question that “deep unity can no longer be perceived” (Bohm, 1998, p. 62). But when we acknowledge that human artifacts, as McLuhan attempted, are the work of an artisan, we refocus our attention on a unique somatic-sensory flow as a whole with images and emotion. Every living being on earth shares the same existential framework just as surely as we are carbon-based, for, as Einstein proposed, all beings are part of “a field, which is an unbroken and undivided whole” (Bohm, 1998). We thus owe our unique form of consciousness to this integral makeup that is immanent, which also grants us the ability to exploit our reality beyond an instinct for survival. One may wonder how an instinct for making things fit together arises from an unbroken and undivided whole. From whence does a sense of separateness arise?
Human thought, free from neurological anomaly, assures me that I will never experience time as a sloth, estimate the size and dimension of objects as would an ant, relate to motion and stillness as a hummingbird, or respond like a fly caught in a spider’s web. Notwithstanding, I am able to liberally use such images to express my experiences metaphorically and metonymically, with reasonable assurance that others of my kind, who have likewise never lived as a sloth, ant, hummingbird, or fly, will understand what I intend. Thus, while I am quite certain that I share the backdrop of this existence with other forms of life, I can exercise an ability to spin a tale out of my imagination with absolute certainty that my perspective will not be countered by a single sloth, ant, hummingbird, or spider. On the one hand, reflecting in that manner invites us to respect the diversity of life forms, knowing of our human advantage to think and speak in such ways; on the other, it leads us to marvel at our ability to communicate experiences in countless ways, despite inviting contradiction from others of our kind whose encounters with life differ so greatly as to be a source of confusion. As far as I am able to detect through my observations of nature, an organism that negotiates movement, space, and time is contradicted by the whole of nature itself, whose forces agree only because of an elegant fit. Our human capacity to negotiate, as defined in the Oxford English Dictionary, allows us “to communicate or confer (with another or others) for the purpose of arranging some matter by mutual agreement; to discuss a matter with a view to some compromise or settlement.” If we contrast human communication with that of other organisms, it is clear that since the dawn of humankind, language has been a great advantage to humans in being able to negotiate above and beyond the forces of nature. Vygotsky (1962) was most certainly correct in making clear the distinction. As much as we may intuit that animals possess a communication instinct, at times defending their expressions as something like language, humans do more with language than the “spread of affect,” for “a frightened goose suddenly aware of danger and rousing the whole flock with its cries does not tell the others what it has seen but rather contaminates them with its fear” (pp. 6-7). On the other hand, an instinct to communicate at all is not something to be taken lightly. Thus, the spreading of fear due to impending danger or some other affect is common among all organisms, and this fear is communicated out of preservation of the whole. Yet humans are endowed with a capacity to go beyond basic expressions to communicate a shared understanding of who did what to whom in the perfect past, specious present, and conditional future (Pinker, 2007). It is in this negotiated space and time that we are thus able to counter forces with, in the dictionary’s words, “a view to some compromise or settlement.” Often, however, that compromise is not with another human being, but with the self in an attempt to make sense of emotions that contradict the images and feelings that arise in our lived conditions.

The remarkable thing about language is that it universally emerges in humans from birth (Nathani, Ertmer & Stark, 2006) and appears in sentence form approximately two years later.
While our introduction to negotiated movement, space, and time in those first two years is filled with sounds (or in the case of the Deaf, gestures) uttered and heard (or seen), language is mostly unintelligible as far as symbols go. It does not become clear until such time as we develop “a knack for communicating information about who did what to whom by modulating the sounds [or gestures] we make when we exhale [or move our limbs]” (Pinker, 1994, p. 5). Elaborating further, Pinker (1994) describes language as “a complex, specialized skill, which develops in the child spontaneously, without conscious effort or formal instruction, is deployed without awareness of its underlying logic, is qualitatively the same in every individual, and is distinct from more general abilities to process information or behave intelligently” (pp. 4-5). Thus, what is common among humans is an instinct to acquire a ‘techne’ (i.e., art) that is also complex and specialized. When it comes to the kind of contradictions that arise through language, namely differences in what we image, we have at our disposal abundant evidence, through repeated encounters with others of our kind, that we share the physical aspects of movement, time, and space. We also share the numerous social forces, causes, and effects as these are sensed physically, emotionally, and psychologically. From this abundant source of information, which we tap into every time we engage in conversation, we may be assured that humans all over this earth have experienced the arrow of time that moves forward in one direction, three-dimensional space, motion at a human pace, and constant change. As to change and the ensuing effects, we may also be assured that without prompting, humans will infer and hypothesize a cause. It goes without saying that psychology, in addition to being a field of knowledge with an insatiable need to find cause, is a field where researchers have demonstrated that causal thinking arises even before language appears in the course of one’s life (Wynn, 2008). Though humans image in unique and varying ways and are thus able to express in imaginative and abstract terms notions of infinite space and time, of endless stillness, and of forces without cause, no one on earth has physically experienced infinity. Thus from birth, we characteristically build our relationship to movement, time, space, forces, causes, and effects with a physical adroitness. It is only much later that we are able to generalize our experiences as symbolic variations on a theme and, by consequence, extend our thinking to include generalizations of phenomena not experienced, as in the ‘thought experiments’ of Einstein and other physical theorists. The capacity to generalize or to simplify our experiences is a form of reduction that precedes symbolic thinking. For as anthropological linguist Edward Sapir (1949) once asserted, “The world of our experiences must be enormously simplified and generalized before it is possible to make a symbolic inventory of all our experiences of things and relations” (p. 12).

The arts as concrete and symbolic inventory

Permeating the arts are variations of movement, time, and space, and expressions of force, cause, and effect, precisely because our physical reality is the genesis of universal themes, both concrete and abstract. Because our human brain evolved in such a way, generalizations become elaborate grammars (i.e., patterns) that evolve into contradictory systems of thought.
The versatility of such grammars allows us to zoom in and out, crop, narrow, widen, overlap, recombine, blend, slide, flip, and rotate the semantic content. It does not much matter with which mental images we play—for instance, visual, linguistic, or sound images—the result always brings variations on life’s theme. In other words, from a finite and shared reality, the human intellect produces infinite and oft times contradictory notions, which nonetheless proceed from our universal experience of a body occupying space, progressing forward in time, moving according to variable forces, and being causally connected. Given our human capacity for thought that produces so many variations in art, language, religion, culture, society, and politics, many will balk at the view that concepts of movement, time, space, forces, and causality are anything but learned. Many, for that matter, will question the very notion of universal realities in light of social and cultural variations, with the obvious exception, of course, of birth and death. There are many instances whereby behavioral investigators viewed variations in cultural artifacts as being particular to social groups, artifacts that were thus an expression of learned behavior. For the first half of the twentieth century, for instance, it was thought that the acquisition of language was entirely learned, until such time as Chomsky (1957, 1972, 2002) was able to show that a universal grammar exists, which indicates an innate human capacity for language. Other instances include the observations of cultural anthropologists who noted variations in facial expressions from one culture to the other, which rendered the impression that emotions were also entirely learned. It was only when cognitive scientist Paul Ekman (2003) gathered sufficient data worldwide that the universality of the primary emotions, namely happiness, sadness, fear, anger, disgust, and surprise, was overwhelmingly demonstrated. Damasio (1994) describes many studies in neuroscience that corroborate Ekman’s conclusions. For instance, a genuine smile is caused by emotions activated in the anterior cingulate, which regulates involuntary muscles—a discovery made by Norman Geschwind (1974). When asked to smile, patients suffering from partial facial paralysis are incapable of smiling symmetrically. Rather, their smile is lopsided because it is not prompted by genuine feelings. The fake smile is what Geschwind called “pyramidal,” since it is caused by actions in the motor cortex and its pyramidal tract, which control voluntary motion through peripheral nerves (Damasio, p. 142). Thus, if genuine joy is felt, a patient’s smile will involuntarily spread from ear to ear; if not, the face will sag on the paralyzed side. Damasio (1994) points out that Darwin was the first to observe facial differences “of genuine and make-believe emotions” in a book entitled The expression of the emotions in man and animals, published in 1872 (p. 142). But a decade prior to Darwin’s insights, Guillaume-Benjamin Duchenne had discovered the type of musculature and the control that was necessary to form a genuine smile. He noted that a “smile of real joy required the combined involuntary contraction of the zygomatic major and orbicularis oculi,” the latter of which cannot be activated willfully (Damasio, p. 142). Though we may be able to control our expressions, as anthropologists were quick to note, our innate emotional register is not controllable.
And the orbicularis oculi is just the kind of mechanism that gives away what Duchenne called “the sweet emotions of the soul” (p. 142). The ‘fake’ or ‘pyramidal’ smile is often what we see in photographs, which leads us to sense, consciously or not, that something dishonest is being portrayed. Our ‘unconscious’ ability to spot disingenuous expressions or the ‘lying’ voice or face was brought forward in a story by Oliver Sacks (1985) entitled The president’s speech, as told in The man who mistook his wife for a hat. Two groups, one with global aphasia and one with tonal agnosia, watched a president give a speech on television. Those suffering from global aphasia, which prevents their understanding of the meaning of words spoken, were convinced that the president was lying. The other group, bearing tonal agnosia, which prevents their comprehension of tone, gesture, and other non-linguistic expressions, considered the speech flawed in logic and concealing something. It is true that while an actor must learn how to manipulate face, gestures, and tone to render their emotions believable, the honesty of their expressions is also dependent on an emotional register that must underscore the act. What may be at play is the movement of expression, for as Ekman (2003) has shown, smiling or frowning produces feelings of happiness or sadness—demonstrating that motion impacts our emotional register. Thus, the gift that anthropology bestowed upon us throughout the 20th century is a view of the infinite variations of human expressions. To a degree, the numerous and varied images anthropologists have presented us of their observations have been as artful as any storyteller might render—written, photographed, or filmed. For many years, anthropologists have recounted tales of strange and exotic places, which have held us in wonder and surprise. As with any story, however, those who listen through the filter of lived experiences are apt to discern reality from the romance that forms between the storyteller and their subject. More importantly, since the world through technology has now been turned into the ‘global village’ McLuhan (1963) predicted, it is only fitting that anthropology would lose its intellectual authority. The increasing evolution of social consciousness, aided by a flood of images few anthropologists can keep up with, has once more brought science to the fore. Why science? Science differs from anthropology and art, for its tenet, ever since Galileo, has been not to uphold romantic views but to test the ground of observation. The fact that science strives toward experimentation and repeatability, however, does not lessen its artistry. Indeed, given the infinite figures of the mind, would truth resonate if it did not stand upon a universal ground that fits in every sense? One does not need to be a neuroscientist working with aphasics to realize our capacity to ‘read’ truthful expressions. The many times we have witnessed animals, children, and persons tuned by experience respond to an emotional register that betrays the words being spoken are proof enough that we possess a tacit understanding (Gladwell, 2005). But it takes scientific observation to confirm what we think we know.

Anthropology, however, is not the only field that has been slowly eroded by lived encounters made possible by new technologies of travel and communication.
The fields of psychoanalysis, cognitive science, and developmental psychology have also been challenged by repeated studies and by a new neurological field that studies the brain with technologies able to penetrate the landscape of what appears to be science’s final frontier. Whatever tenets still hold together psychoanalysis and cognitive science, we have become increasingly aware of the tenuous claims they make in light of more and more evidence. A person who wishes to amputate a limb because it feels extraneous can rightfully object to a Freudian Oedipal diagnosis, as was the case of a man seen by neurologist Ramachandran (Colapinto, 2009). A highly creative music composer, an eloquent speaker, a genius of color and numbers, each possessing an extremely low score on general intelligence tests, may still lay claim to their extraordinary mental gifts despite the discrepancies. It is clear that psychoanalysts and cognitive scientists continue to struggle to explain phenomena from a purely structural and rational view (Damasio, 1999; Ramachandran, 2008; Sacks, 1995). Nonetheless, it is important to know and understand the many contributions made by those researchers rooted in psychological and sociological fields of investigation. Without the deep and penetrating questions that have been posed throughout the past few centuries, we would not have fashioned new tools or reached the insights we have today. We are, of course, far from answering all the questions and equally as far from posing the ‘right’ questions. But many researchers hold the belief that visiting the past enables us to envision the future. It is for that reason that many theories are worth dusting off and revisiting, if only to put to rest those that continue to dog our understanding and actions. Because of his influence in education, Piaget’s theories are ones that need to be revisited. Piaget made a convincing argument that children progress through several stages of “cognitive maturation” beginning with a sensorimotor stage (logically speaking, the sensory-somatic stage) whereby notions of movement, time, space, force, and causality shift from childish imaginings into formal concepts, each encoded, conserved, and decoded in appropriate ways. Nowhere, for personal reasons, did Piaget consider the basis of emotion in cognitive maturation. Formally, time is composed of direction, space is composed of mass and matter, and movement implies a force, a cause, and an effect, and each may be quantified and measured. Our capacity for logical formulations suggests that concepts, such as time-space and movement, are rooted in sophisticated human thought that does away with anything as ‘deterministic’ as instinct. Accordingly, formal thinking must be learned, for as Newtonian physics and the physics of the twentieth century have shown, the notion that we possess an instinctive grasp of movement, time, and space is neither determinate nor indeterminate—it is both. In such rational formulations, however, what lies amiss is our emotional apprehension of time, space, and movement. If this gap were trivial, would we spend as much effort to detach our thoughts from emotions in a formal setting? What some refer to as ‘instinct,’ Piaget (1970) essentially called “image-using intuition” or “perceptual intuition as opposed to rational intuition” (p. x). He theorized that instincts, which could be observed in the sensory-somatic and pre-operational stages, are transformed through operational thinking.
According to Piaget (1970), “the gradual passage from intuitive thinking, still tied to the information of the senses, towards operational thinking, which forms the basis of reasoning itself, may be studied in the light of particularly simple examples” (p. ix). Whereas animals and infants lack a rational view, children demonstrate that they develop rational thinking that overrides “perceptual intuitions.” The ability to reason formally was the first step in overturning animistic views—or what many saw as mystic views that determined socio-cultural behaviors contrary to the ‘true’ laws of nature. But as Vygotsky demonstrated, this rational thinking only comes by formal means. In other words, rational thinking is indeed learned.

Maintaining a cognitive perspective whereby humans progress through distinct intuitions, i.e., perceptual and rational, Piaget’s developmental theory supported the view of the superiority of reason over instinct, or of cognition over emotion. When following up his earlier work on time with that of movement and speed, Piaget (1970) contended that the “three basic concepts are interdependent” (p. ix). As such, he explained that “as in previous works on the child’s conception of number, quantity and time, we shall take the term operations, in a limited and well-defined sense, to mean actions or transformations at once reversible and capable of forming systematic wholes” (p. x). Here was an instance whereby Piaget neglected the irreversibility of time in thermodynamics as explained by Prigogine (1984). Nonetheless, according to Piaget, “the essential problem which we shall study in relation to movement and speed is the passage from image-using or perceptual intuition to the forming of operational systems” (p. x). Piaget did not hide the fact that his “emphasis will be exclusively on this operational aspect of development,” but neither did he offer to clarify his stance on conception and perception. Indubitably, Piaget conducted his observations while under certain philosophical influences that continued to separate the two and to class them hierarchically. That instinct is so frequently and unfortunately viewed as deterministic stems from a distinct feeling of being more than animal in rational and expressive ways—a picture reinforced by experiments conducted in psychology by the behaviorists John Watson, B.F. Skinner, and Pavlov. Quite naturally, the image of a dog salivating at the sound of a bell hardly inspires a view of humankind that fits with a soul divinely endowed with language, arts, intelligence, and the capacity to discern beauty from the vulgar, good from bad, right from wrong. As a form of response to stimulus, instinct was darkly associated with stereotyped, uncontrollable, determinate, and predictable behavior—a kind of automaton. On the one hand, that may very well be what an instinct conveys since, according to Pinker (1994), a language instinct “conveys the idea that people know how to talk in more or less the sense that spiders know how to spin webs” (p. 5). Thus we expect that spiders do spin webs uncontrollably, determinately, and predictably without ever learning this feat. On the other hand, a language instinct, like a web-spinning instinct, would suggest that all humans are born with an instinct to communicate “information about who did what to whom by modulating the sounds we make when we exhale” (Pinker, 1994, p. 5).
And, without exception, those whose sound capacity is limited by genetic or accidental fault will find sound symbol substitutes, such as hands and fingers—a sound image substituted with motion and sight. Anyone who has raised a child from birth whose neurological framework falls within the norm will attest to the child’s stereotyped, uncontrollable, determinate, and predictable behavior for making sounds in an effort to communicate. To argue that language is instinctual (i.e., innate), therefore, contradicts the view of language as a ‘cultural invention.’ And while it appears that human activities such as aggregating in social groups are the perfect medium for cultivating language, even in unusual instances where humans have been born and raised in isolation, rudimentary language has still been found to be present (Marschark & Everhart, 1997; Siple, 1997). Language as an instinct, according to Pinker (1994), “is not a manifestation of a general capacity to use symbols: a three-year-old…is a grammatical genius, but is quite incompetent at the visual arts, religious iconography, traffic signs, and the other staples of the semiotics curriculum” (p. 5). Rather, an instinct suggests that its symbolic genesis is the genetic molecular pathways that set the necessary foundation to give rise to neuronal networks responding to incoming actions arising from internal and external objects and events, all of which carry dispositional representations that must be brought to life under very specific conditions.

When evolutionary psychologist Peeters (1996) concluded, “Differences in forms of thinking do not correspond with education as such, or with different cultures, but with different forms of activity” (p. 180), he was pointing precisely to the movement toward and away from objects and events in our lives that would shape neuronal networks, form image and dispositional spaces, and elicit emotional responses and outward expressions. In short, schooling may very well be an activity that corresponds to differences in thinking, but in all likelihood, the differences take hold much earlier in life, in all of our engagement with self and surroundings. Tulvist (1991) expanded this notion as follows.

If the school method of solving syllogistic problems represents a generally higher stage in the development of thinking that makes it possible to solve any problem better, then it would certainly be retained by individuals who had attended school and it might even be subject to further development. Absence of this method under traditional environmental conditions, on the other hand, indicates that we are dealing with a specific method of thinking that is functionally appropriate to solving specifically school or scientific problems and does not have a functional significance in the types of activity that do not require application of scientific information and the solution of corresponding problems.

Understanding film in education

Analogous to Tulvist’s viewpoint and borrowing his words, it is perhaps true that the method of film studies throughout the 20th century is an indication of the “specific method of thinking” that fails to address the “types of activities” that arise in a school setting. As educators, we are at a loss to know how the theories that arise from specialized film studies can benefit us in a classroom context.
Yet, what we venture to gain from exploring the entire range of film studies is an historical perspective that links directly to the spectrum of research across all domains of knowledge throughout the twentieth century and beyond. Insights gained from studying the body of knowledge in one domain can help to illuminate perspectives drawn from another framed by common paradigms of thought. Each research paradigm that shaped film studies paralleled the paradigm under which studies in philosophy, science, arts, psychology, and sociology emerged at large. And each study was built on the insights of the preceding or of parallel studies. From the ontological film theorists, we gain a view of reality that must be re-examined in light of what we know and hypothesize today. From psychological and sociological film theories we may begin to look anew at the contributions and limits of each in light of new developments in the neurosciences. The same is true of structural, cognitive, and semiotic film theories that operate on rational grounds independent of the confluence of emotions (Bordwell & Carroll, 1996; Buckland, 2000; Casetti, 1999; Grodal, 2009). In short, despite the evolution of studies in film research, we continue to face a gap in knowledge that addresses both the objects of film (i.e., images, movement, time, space, and emotion) and our relationship with film as process and product in a teaching and learning context. In an educational setting, this epistemological gap constrains us from fully engaging in filmic works and from answering questions and resolving concerns, which have arisen since new forms of literacy were thrust into society at the turn of the 21st century. It is at this juncture that the autobiographical has the potential to shed light on what remains amiss. For although objective and formal methods have undoubtedly pushed film studies through varying paradigms of research, we have inevitably become overwhelmed by the multiplicity of theories. The fact that autobiographies have been written across many fields of knowledge, which have challenged current theories, is a sign that this method of investigation has deep value. The fact that neuroscientists were among the first in the sciences to write autobiographical accounts and among the first to use those accounts to research beyond mere description indicates yet another vital method toward the acquisition of new knowledge (Pinker, 2007; Ramachandran, 2008; Sacks, 2009).

The convergence of specialists and generalists in neuroscience, education, and film

The stuff of this universe, then, is ultimately mind-stuff. What we recognize as the material universe, the universe of space and time and elementary particles and energies, is then an avatar, the materialization of primal mind. In that sense there is no waiting for consciousness to arise. It is there always. What we wait for in the evolution of life is only the culminating event, the emergence of creatures that in their self-awareness can articulate consciousness, can give it a voice and being also social creatures, can embody it in culture in technology, art and science (Wald, 1984, p. 74).

Since the invention of positron emission tomography (PET) and functional magnetic resonance imaging (fMRI), advanced brain-imaging tools that have revolutionized our view of the brain, we have entered a new era of inquiry into the nature of mind and consciousness.
Though we have benefited medically and scientifically from advanced technologies and new experimental designs over a fifteen-year span, it has only been in the last five years that we have come to possess paradigm-shifting insights readied for educational use. Inevitably, the excitement of being able to finally peer inside the proverbial ‘black box’ is tempered with cautionary views. Perhaps what we are waiting for, as Wald expresses in the quote above, “is only the culminating event.” It goes without saying that no matter how compelling the evidence, fields of research are never without battles fought over explanatory grounds. As the neurosciences have become increasingly diverse and specialized, those at the frontier of brain research are challenged by the separation of theory from practice, not unlike any field of research. As the ‘brain’ that created ‘mind’ is now attempting to peer back into its origins, we feel a rising dissension between a subjective and an objective viewpoint. Yet can there be any real doubt that in subject and object we find a kinetic counterpoint awash in a temporal and spatial dimension to synchronize in perfect harmony? From ‘amateur’ to ‘expert’ qua generalist to specialist, the study of the nervous system reaches as far back as 4,000 BCE, according to the University of Washington’s education website entitled Milestones in Neuroscience Research (retrieved November 2010). Despite such a formidable history of efforts to understand the brain’s system, with salient strivings some 400 years ago, we generally consider modern neuroscience to have begun with the pioneering works of the prominent neurologists Paul Broca, John Hughlings Jackson, and Carl Wernicke in the mid-nineteenth century (Damasio, 1994). Although their findings generated new philosophical and psychological inquiry, it was Broca’s (1865) and Wernicke’s (1874) contributions to ‘locating’ speech production and its impairment (i.e., aphasia) that provided the impetus for refocusing scientific inquiry on the relationship between language and thought (Damasio, 1994). Several years prior to Broca and Wernicke officially announcing their discoveries, Phineas Gage, an unfortunate railway worker, had survived an explosion that caused an iron spike to enter up through his cheek, pierce the base of his skull, tear across the front of his brain, and exit out the top of his head (Damasio, 1994). Despite his ‘miraculous’ recovery in the hands of a competent physician by the name of Dr. John Harlow, Gage suffered a mental injury that produced a complete and radical transformation of his personality. Gage’s unusual mishap made headline news, but the story soon faded from scientific awareness and became peripheral to the exciting new discoveries surrounding the location of speech in the brain. According to Damasio (1994), the obsolescence of Dr. Harlow’s account of Gage occurred for two principal reasons. The first, he postulated, was that “even if a philosophical bent allowed one to think of the brain as the basis for the mind, it was difficult to accept the view that something as close to the human soul as ethical judgment, or as culture-bound as social conduct, might depend significantly on a specific region of the brain” (pp. 20-21). And the second was that, as attention turned to Broca’s (1865) and Wernicke’s (1874) findings, the physician Harlow, by comparison, appeared an amateur.
To borrow once more a term from educational arts researcher Elliott Eisner (2008) used to delineate ‘artists,’ Harlow’s account of Gage’s life had been recounted from the perspective of a ‘connoisseur.’ As captivating in its description as it had been, it lacked a theoretical framework with explanatory insights. In many respects, one could say that descriptive accounts lacking explanatory insights have plagued arts-based educational research and educational research in general. But of the two credible reasons for researchers ignoring Gage’s injury and the catastrophic outcome to his personality, I suspect that the former philosophical view maintained the latter bias. By the time neuroscience had gained sufficient numbers of experts in its own right at the turn of the 20th century, not much had changed with respect to a worldview of human nature, making cognition and speech a far more suitable topic than the study of the brain that presumed to explain the mind. Naturally, there were other urgent political and ideological concerns that resulted in two world wars. Thus, much of modern science in the first half of the 20th century was preoccupied with the new physics of time-space, psychology with studying aberrant behavior, and sociology with critiquing power relations; philosophy was kept busy with the ontology of symbol systems, such as film and language. Many of those concerns kept the study of the brain in the shadows, away from a milieu that championed the objective, rational mind as having made us distinct from animals. Thus, not even salient animal experiments that now and again revealed interesting insights on perception and cognition were paid much attention (Doidge, 2007). Aside from a handful of theorists in the new field of neuroscience, neurologists were mostly relegated to the periphery of mainstream science. This status is borne out by the fact that Gage’s story was not the sole account that was overlooked for many years.

As many as three thousand cases of WWII brain injuries were studied by neurologist A.R. Luria, including a 25-year study of a soldier with a shrapnel injury to the temporal-parietal-occipital juncture. The soldier’s slow recovery was brilliantly recounted in a book Luria (1972) co-authored with his patient, Zasetsky, entitled Man with a shattered world. In view of their extraordinary contribution to society, once more one takes note that the feeling of urgency, which I described in Chapter 2, caused many to turn their attention away from such important findings to focus on matters-of-fact—only for those findings to be retrieved many years later. For this reason, the above quote by George Wald is both fitting and worth contemplating. A biochemist and winner of the Nobel Prize in 1967 for his contributions to the neurophysiology of vision, Wald expressed a sobering thought: the scientific method has yet to solve two puzzles that have preoccupied humans since our earliest writings. After an illustrious career, Wald (1984) concluded, “I have come to the end of my scientific life facing two great problems. Both are rooted in science; and I approach them as only a scientist would. Yet both I believe to be in essence unassimilable as science. That is scarcely to be wondered at, since one involves cosmology, the other consciousness” (p. 69). Wald expressed two key thoughts. First, that researchers begin to understand the limits of their method when, at the end of a career, the greatest puzzles remain unsolved.
And second, that despite the incredible advances of the scientific method, something else is needed that would take us to the edge of understanding the cosmos and consciousness without falling into mysticism. Pondering this thought further, it would seem that it is not methods per se that set limits upon our understanding. Rather, it would appear that the kinds of problems we face universally as humans are framed by circumstances that produce specialized skills and knowledge, which in turn place limits on our ability to (1) resolve a similar problem arising in a different context and (2) recognize an unexpected and unusual outcome as a critical solution to a general problem. Thus, the two epistemological facets, namely the general and the special, inevitably disallow their becoming visible at the same time. It is only when the mind toggles back and forth between the two—as experience allows—that both may be taken into account. On the matter of consciousness, for instance, the problem produces as many different concerns and contexts as there are disciplines and sub-disciplines. For the biologist who possesses an interest in living organisms and their relationship with the environment, consciousness presents itself as a different concern from that of the cognitive scientist who possesses an interest in the mind and its relationship with symbols. Naturally, consciousness is of an entirely different concern to the artist or spiritual leader. The diverse concerns and their contexts present us with a multitude of plausible theoretical positions, whose arguments do not hold true all of the time but “may hold sometimes” (Siple, 1997, p. 31). Or, as a good friend likes to frame this thought, “sometimes but not always.” Reflecting upon his life’s work in philosophy and film, Deleuze (2004) believed that “the encounter between two disciplines doesn’t take place when one begins to reflect on the other, but when one discipline realizes that it has to resolve, for itself and by its own means, a problem similar to one confronted by the other” (p. 367). Wald’s thought, by contrast, suggests that the scientific method contributes valuable insights to numerous fields of science and beyond, but that the specificity of findings does not ensure general usage. Moreover, when it comes to metaphysical concerns, empirical findings prove not to be of any use—or so it would seem. More limiting than the method of inquiry is perhaps a perceived distance between the fields of knowledge, which appear to tackle unique problems particular to their ‘terrain.’ But all fields of knowledge are faced with fundamental concerns common to all, or as Deleuze described it, “the same tremors occur on totally different terrains” (p. 367). For instance, from whichever discipline or theoretical framework one views the senses, anyone curious as to how the senses give rise to actions, i.e., emotions, cognition, representation, or reality, ought to be equally curious as to how attention, memory, interests, drives, and motives give rise to knowing or a sense of knowing self. But none of those research interests can escape the ontological issues that are inherent in the study of consciousness. Thus, a scientist who possesses an interest in how the senses give rise to actions is bound to have much in common with the artist and educator who collectively possess an array of concrete means to tackle one action of concern after another as they occur in context.
From a metaphorical standpoint, what the scientist, artist, and educator practice is the concrete stitching together of the philosophical fabric that has created a patchwork of questions common to all who ponder our existence and reality. Faced with transferring knowledge, from essential qualities to a set of facts whose solutions promise to solve ‘real’ world problems, the scientist, artist, and educator have in common the motive to experiment with all types of constructs that are in perpetual motion and forever bumping into one another. By contrast, the philosopher may only experiment with swatches of ideas as they are expressed throughout society while perched at a comfortable distance. To think deeply on life’s experiences requires a distancing, not to ‘resolve’ problems per se, but rather to continue to probe with pointed questions and put forward ‘possible’ solutions—no matter how farfetched. In trying to distinguish ‘logical’ thinking (i.e., reasoning) from that which is mere ‘common sense,’ the challenge philosophers face is a rational mind that is capable of inverting all manner of logic—from common sense and back again. This mental flexibility is one we witness daily through the syntactic and grammatical nature of language, which the ancient Greeks studied as logos and grammar qua rhetoric. The linguistic challenge, which was wittily exploited by Socrates’ dialectics and reconsidered by Wittgenstein, appears as the first limitation to arriving at the truth. Heidegger (1968, 1971), by contrast, posited that it was ‘thinking’ that set challenges to our ability to arrive at the truth. In the end, perhaps the point of philosophy is not of necessity to arrive at truth but to shift attention and raise interest, not the least of which is the matter of language and thought. By contrast, the challenge experimental researchers and all practitioners face at every turn is in first identifying salient factors, defining their parameters, and selecting which of the possible factors give rise to causes and effects. In other words, the challenge is to select the explanatory elements essential to making statements of truth or, at the very least, to shift attention and raise interest so as to search for such elements. Frequently stymied by the complexity that faces them and overwhelmed by a myriad of potential factors, however, educational practitioners are often obliged to rationalize and make hasty decisions, taking action even in a fog of confusion. For this reason, practitioners would do well to view ‘best practices’ (i.e., methods and approaches) not as a set of standards, but rather as practice that is as good as one is able to apply under the theoretical frameworks, and their limitations, of the times. Action that is taken in a fog of confusion, which reflects both indeterminacy and a social zeitgeist, may not be much of a concern for the artist and philosopher, whose life’s work is as much a creative construction as it is a deconstruction of the phenomena that capture our interest and shift our attention. But it is definitely an ethical and moral challenge for the educator and scientist whose collective work is to find solutions to the problems that vex our mental, emotional, spiritual, and physical health. This includes identifying the causes to allow solutions to be found. The parallel between medical-scientific practitioners and educators, in fact, gives rise to practitioners citing the Hippocratic oath, as was the case for the protagonist in the fictional court case I made into a film.
The ethical concern was as much a part of the inner workings of the narrative written by Tierney (2001-2002) as it was a question that lurked in his mind as an educational researcher and policy maker. The need to seek insights in comparative fields inevitably compels scientists, educators, artists, and philosophers to criticize findings, "not by some right of consideration," but rather in the sense that Deleuze (2004) suggested: "The only true criticism is comparative because any work in a field is itself imbricated within other fields" (p. 367). When Varma, McCandliss and Schwartz (2008) set about writing an article on the challenges that we face bridging education and neuroscience, their intent was to show that despite the differences in methods and research questions, the common concerns overlap sufficiently to render their findings mutually beneficial. While it is true that "neuroscience has little to say about the social construction of inequity, and education as little to say about the hemodynamic response function" (Varma et al., p. 150), each field has something vital to offer to the other. From the standpoint of neuroscience, they outline what is presently underway.

We have seen that neuroscience treats the motivational, cognitive, social and emotional dimensions of learning as integral (Montague et al., 2006). We have seen that neuroscience research sheds light on cross-cultural differences in reading and mathematical reasoning (Siok et al., 2004; Tang et al., 2006). Studies of the neural correlates of experiencing violence in video games are beginning to appear (e.g., Weber, Ritterfeld & Mathiak, 2006). As these examples suggest, the ultimate scope of educational neuroscience is an empirical question (p. 150).

Viewed in this light and from the standpoint of education, it is vital for educational researchers to work alongside neurologists, not only because of the empirical value that comes into play but in view of the fact that the brain is the seat of consciousness. It would appear that a fundamental understanding of the brain is an essential part of teaching. That is not to say that pedagogy undertaken without knowledge of the brain has been unproductive. "Education research," claim Varma et al., "has produced unique insights into the nature of complex cognition and its development—insights that are potentially of foundational importance to future neuroscience research" (p. 148). Finding a means to work together, however, is key to advancing knowledge for the simple reason that neurologists would find it difficult to assimilate the volumes of educational research. Without collaboration, neurologists are as likely to approach learning with a certain naiveté based on romantic notions drawn from personal viewpoints, just as educational researchers are likely to oversimplify complex scientific findings and apply simple solutions to learning contexts. As Varma et al. (2008) suggested, "without collaboration, neuroscientists are at risk of running naïve experiments informed by their personal experiences of how children come to learn content area skills and knowledge" (p. 148). And without question this is also true of well-intentioned educational researchers, who are likely to turn neuroscience into mythic cure-alls in an effort to package the latest curriculum or educational activity designed to foster higher mental processes and achievements.
It is not merely that neuroscientists and educational researchers lack mutual knowledge from insufficient collaboration; both disciplines also lack collaboration and salient information from related fields. Film studies, for instance, which represent a wide range of ontological and epistemological problems relative to perception, identity, representation, symbolic logic, images, and all manner of issues of a socio-cultural and psychological nature, have amassed a large body of knowledge that offers much to the collective concerns of science and education. Thus, just as educational research could prevent neuroscientists from going down the same blind alleys, so too would film research allow educational researchers and neuroscientists to critically examine film as a resource for pedagogy or methodology. In other words, employing critical discernment of film theories, researchers in education and neuroscience could begin to navigate filmic properties as these relate to the classroom or the scope of brain research. To bring neuroscientists and educational researchers into collaborative spaces of investigation in the first place requires the breaking down of a perceived incommensurability. As Varma et al. (2008) explain, the two major roadblocks that impede educational researchers from embracing neuroscience appear to be the proliferation of "neuromyths," and the sense that neuroscience can do little more than substantiate what practitioners have intuitively amassed after years of experience. Of the two, "neuromyths" have had the most serious and negative impact in the areas of language, mathematics, and music education. Those oversimplified views of mythic proportions have resulted in questionable language policies (e.g., multilingual education), allowed many to profit from controversial and costly music programs and learning approaches (e.g., The Mozart Effect™), or driven most to ignore the neurology of a cognitive and emotional basis for the development of numeric understanding (Butterworth & Laurillard, 2010). It is understandable that educators and researchers will question the role of neuroscience if classroom experience provides sufficient data for tackling learning difficulties. On the other hand, given the continued search in education for solutions to challenges in learning, it is unlikely that experience alone has achieved its end. It is also understandable that those invested in public education are vexed by neuroscience myths, which support interest groups and ideologies or fuel commercially produced programs that only benefit a profit-driven private sector. The former perception, which Varma et al. (2008) outlined in their article as being incommensurable due to context, brain localization, reductionism, and philosophy, pales in comparison to the latter, seen as aggressive 'educational' programs aimed at depreciating free public education. Concerns of equity, politics, and economics, particularly in light of the multinational corporate system that fuels mainstream film and technology industries (e.g., Pixar™ films, Apple™ computers and iPhone™), inevitably plague the advancement of film studies. Clearly public educators are vexed by the takeover of filmic and technologic objects that are vying for educational space and time, thus making the study of film or technology, as it relates to education, a rather contentious activity.
In the end, when it comes to the impasse between film studies and education, similar apprehensions with respect to reductionism, philosophy, and myths abound.

Dismantling the mirrored dissention that reigns in film and language studies

Today's studies in film theory are often slotted into three key investigative strands: cognitive linguistics, semiotic/cultural theory, and social/psychological theory. Film methodologies are filled with terms and concepts drawn from their respective umbrella disciplines, and then interwoven in a very specific manner as they relate to filmic elements that draw parallels with language, mind, or society. Understanding the scope of film studies, therefore, has proven to be a challenge for educational researchers lacking disciplinary knowledge beyond the social sciences and psychology, and more so if lacking film knowledge and skills. Thus, educational researchers tend to turn to the more pragmatic fields of communication and media studies, wherein the social, cultural, and semiotic focus is made accessible through palpable and easy-to-understand theory-to-classroom exercises that can be replicated, observed and, more or less, analyzed. Importantly, one cannot exclude feminist and critical film theory, theories that have taken on the challenges of exploring stereotypes and prejudices thought to be reinforced by media influences. Ultimately, the social, cultural, semiotic and psychological perspectives in film study try to identify either what film 'teaches' its audience, 'communicates' to spectators, or how it 'socializes' communities and reinforces psychic stereotypes. Since all of those motives are driven by human concerns that affect the whole of society as well as the individual, their collective research approach tends to be viewed as 'holistic.' By contrast, cognitive film studies are, more or less, perceived as 'atomistic' and overly specialized, which, in the end, makes cognitive theories appear irrelevant to learning contexts. For that reason, educational researchers mostly ignore cognitive film studies, especially as that research program has moved closer to the neurosciences. Nonetheless, those interested in cognition and emotion from a neuroscience perspective may find film studies rather disappointing. Educational researchers, therefore, who may turn to film studies for an understanding of image or sound in relation to thought, would likely discover an insufficient number of studies corroborated by neuroscience (Grodal, 2009). It is clear that the perceived gap between neuroscience and education is as wide as the perceived gap between neuroscience and film for most of the reasons stated above, making a film theory grounded in neuroscience that could assist educators and educational researchers practically inconceivable in the present state of affairs. Insofar as uniting film and neuroscience studies is concerned, therefore, the problems described are exacerbated by the fact that most of film study, whether rational, sociological, psychological or semiotic, practices some form of reductionism that removes itself from the 'real' concerns of practice (Casetti, 1995). This problem is also one that plagues language specialists, and one that film studies have practically emulated to a fault.
In effect, film studies followed the course of language studies as these shifted from semiotics founded in structural linguistics (e.g., de Saussure, 2002) to post-structural viewpoints founded predominantly in social semiotics (e.g., Halliday, 1973; Hodge & Kress, 1988; Kress & van Leeuwen, 2006). When cognitive language theories were catalyzed by Chomsky's theory of universal grammar, film theorists were convinced that therein lay the answer to the dichotomous views that had plagued film studies since Christian Metz framed film as a semiotic question. What was hoped to be resolved was the tension between the ideal and real forms of communication, for instance, between 'coded written' language and 'natural speech' forms, which are part of verbal and non-verbal communication. Not unlike in language studies, when the shift to a more 'scientific' view of film rippled through the film community, it was thought that progress in film studies was finally being made. Most film researchers agree, however, that this is not the case (Grodal, 2009). The result of taking either a rational or a scientific standpoint meant that semiotic and cognitive film studies have, respectively, emulated the reductive methods divorced from practice that structural or post-structural and cognitive linguistics still find fashionable today. Even throwing a pragmatic framework into the mix (e.g., Odin, 1995) has not significantly eased the controversies inherent in the rational-social-cognitive dichotomies. In any case, whichever framework one chooses, including feminist studies and critical discourse analysis, most remain bound to an objective view far removed from experimental and natural learning experiences (Chateau, 1995; Grodal, 2008; Smith, 2003). Chomsky's theory of transformational-generative grammar successfully disrupted the behavioral theories arising from Skinner's studies, which had led many to ignore the innate functions present in cognitive processes. Although Chomsky showed that innate elements were at play in language development, transformational-generative grammar, as it is understood linguistically, does not necessarily extend to a filmic language (i.e., assuming a filmic language exists that is comparable to language itself). In essence, Chomsky revealed the innateness of language upon ascertaining that syntax is infinitely generative and transformative. He demonstrated this by drawing on sentences that show, architecturally, their overall structure using sentence trees with embedded constituents common to all languages (e.g., noun phrase and verb phrase, S-V-O). To date, despite some remarkable attempts, there are no film theories able to successfully propose general principles of mental processes or symbolic logic across all films to the degree that language study has achieved. In a filmic context, therefore, insofar as the search for filmic elements that 'resemble' those identified as paradigmatic (i.e., phonological and morphological) and syntagmatic (i.e., syntactic) in both semiotics and generative grammar is concerned, the universal elements sought in motion photography and the cinema simply do not hold consistently across films (Chateau, 1995; Grodal, 2009; Smith, 2003).
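To make the sentence-tree idea concrete for readers outside linguistics, the embedded noun-phrase and verb-phrase structure described above can be sketched in bracketed notation; the example sentence and labels below are my own minimal illustration rather than an example drawn from the sources cited.

[S [NP The dancer] [VP [V crosses] [NP the stage]]]

Because any noun phrase or verb phrase can itself contain further embedded phrases (e.g., a relative clause inside the noun phrase), the bracketing can be nested indefinitely, which is what is meant by syntax being infinitely generative and transformative; no comparably stable bracketing of shots and sequences has been demonstrated for film.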
Despite Metz's seminal essay that raised the question as to whether cinema is a language-like form of communication (i.e., langue) or a particular language (i.e., langage), the semiotic and cognitive search has yielded insufficient evidence to definitively determine one or the other; in some instances, cinema appears to be neither. For that matter, the hold that Sapir (1949) and Whorf (1943) continue to have in the study of language, namely, linguistic determinism or relativity, also dominates film studies. In short, whether in film or language studies, theorists are still plagued by the manner in which mental processes arise, whether by innate—and therefore instinctive—means, or predominantly under the influence of the environment as it is mediated through symbol systems. Whether nature determines the outcome or nurture fashions the many forms and variations, film theorists have not extricated themselves from a web of theory that language theorists and philosophers of mind have spun for many years.

Nature and nurture: the struggle continues

Despite some very convincing arguments to the contrary, viewing language as an "instinct," as Pinker (1994) proposed, does little to resolve the nature-nurture issue that is inherent in the study of mind, language, or other symbol systems. Though few theorists who study human nature will deny that innate factors inevitably influence the development of the mind as it interacts with the environment, what remains controversial is to what degree the brain creates mind and how this interaction causes extended consciousness to arise. Moreover, when one studies the development of artistic skills and knowledge, especially as those relate to film processes and products, the scope of cognitive research as it presently stands fails to explain how both instinct (i.e., innate capacities) and learning play themselves out in a context that does not make comparisons to language but treats the arts as modalities unto their own. By contrast, social semiotic theories are no more able to address the manner by which the brain, as it receives and processes sensory input, mediates motor output. Whether through images, sound, and movement as those are expressed in a film context or through symbols in language and mathematics, social semiotics can only surmise that 'something' drives learning and output. At best, this interaction appears as an associative, 'extra-linguistic' capacity that is driven by social discourse—another 'language' theory at best (Bakhtin, 1981). Rarely do semioticians examine the brain-mind-body complex to the degree that scientists have explored that which may lead us to understand the 'extra-linguistic.' Though the possibilities abound, generally speaking, the bridging between arts, motion pictures, language, thought, and emotion is still far from being realized, much less understood from the perspective of the brain that creates mind (e.g., Grodal, 2009; Plantinga & Tan, 2007), and even less so in an educational context. Of the forces that make up mind and consciousness, as previously discussed, the study of emotions has been the least sought after among theories in education and beyond. Nonetheless, some educational researchers have taken an interest in the study of emotions, particularly as it became a popular topic the moment Daniel Goleman (1995) published a book on emotional intelligence that reached educational circles.
Most educational researchers, however, will admit to remaining focused on the relationship between cognition and symbols, i.e., language or mathematics, rather than cognition and emotion (Varma et al., 2008). Despite recognizing the importance that emotion has on learning and social development, as some educational psychologists have explored (e.g., Hymel et al., 2010; Schonert-Reichl & Hymel, 2007), most curriculum theorists and classroom teachers remain tied to semiotic, social, and cognitive theories that are largely detached from the study of emotions. Given the social and political mandate for 'deliverables' in education, a mandate that demands written output (i.e., language and math), it is understandable that a preferential focus on the study of speech and writing as core literacy has been the thrust of educational research. And despite the ongoing studies on serious social concerns in school contexts that have caught the attention of the public, such as bullying (Trach, Hymel, Waterhouse & Neale, 2010), literacy continues to remain at the fore of political action taken on school policies (Tierney, 2001-2002). In my view, this is because most educators continue to view literacy (however it may be interpreted) as fundamental to human agency, which, when achieved universally, is believed to be able to change the 'behavioral' problems witnessed in social contexts. Literacy is taken as a collection of approaches that are believed to foster the ability to reason, most often termed 'thinking critically.' The question is, of course, what type of reasoning would change behavior on concerns as serious as bullying, and do literacy approaches target that type of reasoning? By the same token, educational researchers who invest in the study of cognition appear to stop short of exploring output modalities other than speech and writing. Visual and performing arts, for instance, are seen as 'perceptive' and 'affective' activities that do not tap into higher order thinking that aims toward explanatory meanings (Barone, 2003, 2006; Barone & Eisner, 1997; Irwin, 2003). Insofar as poetic expression is concerned (whether in film or in written forms), poetry is frequently viewed as nothing more than descriptive, intuitive, aesthetic, and emotive, thus lacking explanatory means (Leggo, 2001, 2003, 2008). With few exceptions, this tendency places the study of arts and emotions in either social or psychological domains, which inevitably puts them at a distance from cognitive science. As such, the study of signs (i.e., semiotics), which does purport to explore multiple modalities, offers only social theories that do not align with studies in cognitive science. In short, whether in educational research or in arts-based educational research, a focus that still leans upon social, cognitive, or semiotic frameworks occludes the use of any discoveries in neuroscience that would shift researchers toward a holistic understanding.

Language as grammar, syntax and semantics

Since language is quintessentially tied to thought, comparisons to language structures continue to dominate our understanding of film and literacy. Basing all that we understand of cognition on language comparisons, however, effectively preempts any bridging between perception, cognition, and emotion inherent in alternate modes of expression (i.e., the visual and performing arts).
That is due to the fact that while some artistic modalities, such as theatre and film, are viewed as possessing a close relation to language, other modalities, such as music and dance, appear not to be granted one or more aspects of grammar, syntax, or semantics (e.g., Gordon, 1990; Chateau, 1995; H'Doubler, 1940; Langer, 1942). Of the three essential and necessarily interdependent characteristics that make up any language, semantics continues to pose a significant problem—within computer studies, language studies, and the arts—for different reasons. Generally speaking, in the scope of language studies, semantics confounds or detracts from our view of cognition largely due to conflicting theories surrounding word 'meaning,' namely, how it arises, how a word becomes shared within a community, and why it is able to both denote and connote meanings. While semantics holds a key role in language, it is thought to play hardly any role at all within other spatial-temporal modalities, such as instrumental music and dance. In the case of film, 'semantics' fits mostly under semiotic theories of 'natural' language and holds a tenuous position in cognitive film studies that address 'syntagmatic' strings of meanings ('filmic sentences' possessing 'word-like' meanings). Semantics, however, is the crux of what most literacy and language researchers refer to as 'critical thinking' (i.e., higher order thought), which is the ability to understand beyond surface meaning (i.e., literal) toward deep meanings (i.e., figurative, metaphoric, ironic, paradoxical, etc.), and includes Grice's (1975) conversational implicatures (i.e., hidden meanings). Yet meaningfulness is not solely a matter of semantics, since semantic meaning is dependent upon syntax and grammar, as Lewis Carroll cleverly showed. While each part of language may be toyed with when meaning does not aim to be precise (i.e., poetically), meaning nevertheless occurs mainly when all components of language are present (Asher, 2007; Pinker, 2007). Moreover, semantics cannot simply be lexically meaningful (i.e., according to the dictionary definition) but must be placed into context.

The idea that the meanings of words, or more properly the results of the semantic interactions between words, will shift depending on the other elements in the predication or in the larger discourse context [is] in some sense obvious if you look at dictionary entries or think about how words combine with other words in different contexts (Asher, 2007, p. 4).

Neither is it possible for words to be semantically ipsative, i.e., meaningful to self, if one wishes to avoid becoming frustrated in conversation (e.g., the way preverbal children often become). The intent of words is to be shared and understood by an entire community, which renders all spatial-temporal arts, i.e., language and the arts, social activities (Pinker, 2007). Yet, despite the social and communicative aspect of the arts—always seen as somewhat problematic in formal 'texts' of books, movies, plays, or radio, since the reader, viewer or listener is not usually part of a conversation loop—a theorized semantics of the movement arts, such as dance, music, and film, seems to be missing outside of semiotics or philosophy. What many cognitive theorists have tried to do to overcome the problems that surround semantic logic, however, is to address, through scientific investigations, the varied ways by which the mind has been thought to operate.
Asher (2007) gives a succinct description of the leading cognitive theorists on semantics.

What is it to give the meaning of a word? There are a number of answers in the literature on lexical semantics or theories of word meaning. Cognitive semanticists like Talmy, Givon, Victorri and others seem to think that meanings are some sort of picture, graph or representation; and so a lexical theory should be a theory of those pictures. Gardenfors supplies a more abstract version of such a theory in his recent book Conceptual Spaces. Others in a more logical and formal framework like Dowty (but also Jackendoff, Shank, and other researchers in AI) take a specification of lexical meaning to be given in terms of a set of primitives whose meaning can be axiomatized. Still others take a denotational view; the function of a lexical semantics is to specify what is the denotation of the various terms, typically to be modeled within some model-theoretic framework (p. 4).

Some of those theories have found their way into both cognitive and semiotic film analysis. But as far as 'meaning making' goes in a film context, semiotic research actually fails to answer issues around grammar and semantics "characterized by the absence of direct contact between" filmmaker and spectator (Chateau, 1995, p. 36). Semiotic film analysis, moreover, does not adequately reconcile the issue of "filmic signifiers" that belong to "two sensory orders: auditory and visual," with that of movement, i.e., camera, personages, and objects (p. 36). There is also the admission that semiotics does not "know how to interpret the phrase, this is an ungrammatical sequence" (p. 37). Overall, semiotics is unable to identify the cognitive and perceptive processes necessary for filmic interpretation. And although some film theorists have tried to reconcile what semiotic theories fail to do, the study of film as a comparison to either cognitive linguistics or semiotics—the latter dwelling on dialogic (i.e., social linguistic) or communicative functions—continues to stymie our comprehension of how motion pictures and sound express shared meanings. This impasse, however, is not simply due to language comparisons in cognitive linguistics and semiotics. The bias that is present in both research paradigms, in fact, dramatically impacts film studies. Consequently, what Pinker (2007) tried to show by mobilizing the work of many cognitive theorists is that universal principles of movement, time, and space are the foundational 'realities' that govern language structures and presumably all symbol systems (though he does not cover more than the linguistic). But the notion of universals as underpinning language does not sit well with everyone. As long as it is believed that thought is determined or shaped by language, many investigators will continue to resist the notion that innate and convergent components of perception, cognition, and emotion play a significant role in language development and use (or in any other symbol system for that matter). The danger in expressing the idea that mental processes are the 'motor' for language development is that it usually places one perilously in the 'nativist' trap, or any number of traps that put 'nature' in a dominant position over nurture. That said, one need not fall into any of those traps if cognition is not the sole criterion that is judged to possess universals.
In truth, the theory of conceptual semantics, which Pinker (2007) expressed quite convincingly but which did not always meet my needs as an artist, has driven my continued search for neurological evidence. I believed that brain research held the potential to begin to unravel the paradox that produces so many conflicting theories of language and cognition, as generally expressed through distinct and opposing research paradigms. I stand alongside the many who hold hopeful expectations that brain imaging and other experimental data will demonstrate that neither can drive the other (as many have intuited), but that innate and social forces must both be at play—separate yet, at times, interdependent—and, ultimately, in complete co-operation with perception and emotion for either to exist at all. Aside from semiotic and cognitive film theories, however, one last roadblock to making progress in film studies must be mentioned because of its stronghold in some educational circles. This roadblock arises from psychological film research, which is best summarized in a discussion between film theorists Plantinga and Tan (2007). "Psychoanalytic theories of spectatorship are more or less united in their assumption that narratives depend for their affects on the presumed human instinct to restore an earlier state of things. Narratives replay childhood fantasies and elicit psychic conditions that mimic early stages of childhood development" (p. 4). Reviewing several of the major contributors to psychoanalytic film theory, Plantinga and Tan also reveal the limitations of employing a framework whose focus is entirely on the film spectator.

The fundamental motivation of the spectator's experience, then, is to replay the originary fantasies in order to recover the lost plenitude of early childhood, or to master the anxiety and fear arising from later stages of development. For Laura Mulvey, in her early writing at least, film viewing follows the contours of male desire, and film narratives are structured to diffuse the castration anxiety originating in the Oedipal Stage of childhood development. For Linda Williams, the so-called body genres—pornography, horror, and melodrama—have their roots in the originary fantasies of primal seduction, castration and the mystery of sexual difference, and the loss of origin, respectively (p. 4).

As with much of psychoanalytic theory, what constitute primal impulses—retrieved again and again through indirect means, such as film viewing—are the archetypal expressions embedded within works of art, which cannot, it is presumed, be accessible to conscious thought. Despite the interest that any educational researcher may have in unconscious thought, there are few direct means for its exploration except through a behavioral framework, in which motives, drives, and rewards may be observed and replicated under controlled experiments. As I attempted to demonstrate in Chapter 2, because of the inaccessibility of the unconscious, psychoanalytic theories have only been useful in education insofar as behavioral research could frame research questions to arrive at observable variables and outcomes. There is, of course, no way of knowing whether the behavior is due to the unconscious. To briefly summarize, therefore, the comparison between film and language in cognitive or semiotic film studies parallels the seeming incommensurability between views of nature and nurture.
Each worldview has produced distinctive research paradigms and biases, which are not easily harmonized. As film theorist Petric (2001) pointed out when critiquing Buckland's (2000) valiant effort to reconcile the two, "due to their different premises and disciplinary traditions, semiotics and cognitive studies are widely and on the whole justifiably perceived as strange bedfellows" (pp. 1-2). For instance, semiotics has been the founding framework of socio-linguistics, which "has postulated an all-embracing theory of human culture and posited humans as having an indirect relation to their environment, mediated by language and other sign systems" (Petric, 2001, p. 2). Moreover, research domains influenced by semiotic theories—such as anthropology, media studies, communications, cultural, feminist, and critical theory, along with postmodern philosophy—have each in their own way failed to explain the manner by which language provides humans with the distinct feeling of knowing their environment or knowing the self as knowing. Vexed by ontological concerns without scientific explanations, semiotics does little more than name, describe, and classify. By contrast, and not much better, cognitive science "has emphasized the language user's capacity to independently and creatively manipulate the signs as a context-free entity" (Petric, 2001, p. 2). Whereas semiotic theories are not divorced from context (making this an appealing educational framework), cognitive science predominantly studies language independently of one's environment. Therefore, in referring to the opposing traditions that make up the whole of semiotic and cognitive science—traditions underscoring all epistemologies—Petric correctly situates the two disparate film disciplines as incompatible, based on premises that can only in part understand the whole. "While structurally inspired film semiotics positions the viewer conceived of as an ideological subject into filmic meaning that is a result of a system of codes, cognitivists conceive of film viewing as a purely rational activity in which film simply cues the spectator to perform a variety of operations" (pp. 2-3). Even if it were possible to reconcile the two 'ideal' and objectively bound traditions, Pinker (2007) sets the record straight in asserting that "there is a relation of words to a community," which guarantees that "people use language not just to transfer ideas from head to head but to negotiate the kind of relationship they wish to have with their conversational partner" (p. 3). Moreover, while it is generally agreed that there is a relationship between words, thoughts, and reality (even when reality is set aside in favor of the ideal) that are "anchored to things and situations in the world," both theoretical constructs neglect that "words don't just point to things but are saturated with feelings" (p. 3, italics mine). Clearly the unification of the two theories is far from being realized, but even if they were reconciled, there is little hope that the combined whole would make much ontological headway as long as the special grounds upon which each tradition is rooted—ideological and objective—remain abstractions from the whole.
Near and far: the spatial side of the part-whole dichotomy

Whatever motive exists for inspecting a phenomenon in extreme close up, which is essentially a reductive view that 'makes things appear larger than they are,' and whatever challenges a close-up brings to apprehending the whole, we sense in this nearness that we are obstructed from seeing the whole. The inability to capture the whole may not appear problematic to those who sense that the parts contain the 'essence' of the whole. Observing the hands, for instance, may speak volumes of the individual to whom they belong. With this very thought in mind, one student enrolled in a media course I was instructing produced a short film on the topic of teaching as a 'soapbox' activity (i.e., in reference to public speaking that is often political or ideological in content). He chose to film four different instructors' hands as they taught him in varying classes, at first in consideration of their identity in view of his theme. Unexpectedly, he discovered something in his shots, which he reworked in his montage in a way that moved beyond his original 'soapbox' theme. Though he kept the title, he confessed that in watching the playback of the filmed hands, he had seen more communicated about his instructors' teaching 'voices' than if he had chosen to include the actual voices and full body shots. Since he was a musician, he saw an opportunity to compose an instrumental soundtrack in 'rondo' form (a musical form built upon alternating 'repetition and contrast,' e.g., ABACADA), which he synchronized to a repetitive image (part A) and the shots of the hands (parts B, C, D). Aside from the visual and acoustic ensemble, whose outcome was aesthetically pleasing, I was, as one of the instructors, particularly surprised to watch myself from this angle. I recognized immediately that my hands and feet (which he had also filmed in my case) gave away much of my essence as someone who is comfortable in 'using my body' to 'speak'—i.e., the fact that I use my hands confidently and extensively, or that my feet are in constant motion, moving either toward or away from my learners. Thus, this act of zooming in is an attempt to make the invisible visible. Drawing a part of a phenomenon spatially near to us, or moving nearer to the object, certainly requires some confidence (i.e., the object cannot be menacing). And by moving near, even though we become aware that this action excludes 'seeing' the whole of its nature, we are much more likely to 'grasp' it than if it were far from our reach. There must be a purpose for moving near to an object, of course, just as there most certainly is a purpose for moving far from one. In any case, from the first instance in which the near-far dichotomy crossed my awareness, I became intrigued by the perceptive, cognitive and emotional outcomes that spatially near and far objects and actions produce. The first time this spatial aspect captured my attention was while showing several Canadian National Film Board (NFB) shorts to a group of upper elementary school children while priming them for a film unit. The second instance occurred after experimenting with a short film on near and far qualities in dance, which I originally produced as part of a conference presentation (Gouzouasis, LaMonde, Ricketts, Ramsey & Mackie, 2007). In the first instance, the NFB short opened with an extreme close up of an indiscernible object.
As the camera slowly zoomed out, giving us more and more information, the students did one of two things from the start: (1) either they shouted out possible solutions to the mystery, or (2) they sat quietly waiting for more to be revealed. None of the first type guessed that it was an 'egg frying in a pan.' All of the students had to wait until the camera had zoomed back far enough to finally disclose the object and the event. Surprisingly, the group that had been shouting out solutions seemed disappointed, which was made apparent by their groans, "Oh, it's just an egg." The group of learners who had sat quietly, however, was riveted until the last frame of the film. Whereas the lively group anxiously waited for the next film short to be shown, with queries such as, "Are we going to see another?", the quiet group appeared to be interested in the 'meaning' of the film. They asked questions such as, "Who was frying the egg?" "Was the film 'about' making breakfast?" "Were the eggs 'supposed' to make you think they were one thing but turn out to be another?" What raised my curiosity at first were differences in motivation, attention, and elaboration of thought. In recalling this incident, however, I also began to wonder about the emotional responses of both groups of learners, as evidenced in their expressions of 'enjoyment' or 'disappointment' toward the film short, and whether those emotional responses had any impact on their cognitive and perceptive abilities. It was in recalling this experience, therefore, that I was led many years later to create a film short on the spatial-temporal aspects of dance. Originally entitling the film Rapprochement/éloignement (i.e., 'closeness,' 'remoteness')—using French to 'mask' the meaning in an all-English setting—I was interested in utilizing the near-far perspective in film to test an emotional response, which I had imagined would emulate a 'feeling of closeness' versus a 'feeling of remoteness.' As this was a film specifically made for an educational research conference (Gouzouasis et al., 2007), I was also interested in seeing whether I could provoke some thinking around emotional values attached to research dichotomies such as the subjective-objective, part-whole, or general-particular viewpoints. I shot the film as two long continuous 'takes' with the help of a dancer who improvised slow and lyrical movements to the first of the three Gymnopédies compositions by Erik Satie. Having trained as a ballet dancer, I was eager to capture two aspects with the camera. The first was to use a hand-held digital camera and shoot in extreme close-up by 'pushing' the camera toward the dancer and her limbs in mirrored harmony. Rather than a zoom in, which would not necessarily give the viewer the feeling of 'dancing,' I wanted to film the dance as if the viewer were part of the pas de deux. Hence, I thought I would be able to accomplish this perspective by 'dancing' with the camera in hand. It was easy enough to perform with the dancer, as I had improvised many pas de deux in my life. I simply had to handle the camera so that it moved smoothly throughout and take the shots I was most interested in capturing (i.e., wear my director's cap). To improvise in a pas de deux, one dancer anticipates the movements of their partner and carefully follows the line of movement, mirroring it or filling the spaces it creates with complementary shapes.
The difference was that, in this instance, with the music in mind, I followed the line of movement or filled the spaces with a camera as an extension of my line of sight.3 The second aspect was to slowly pull back from the dancer as she drifted into a distant location, still improvising, to create an extreme long shot. To accomplish this take in a single shot, I wanted a 'dolly' shot effect, typically referred to in this manner because it is a moving camera shot that follows, moves toward, or moves away from an object using a 'dolly' on tracks, rather than panning across a scene or zooming (both instances in which the camera may be held stationary on a tripod). Since I did not have a dolly, I simply used my body as steadily as I could and slowly glided backward with the hand-held camera. This camera movement is typically referred to as a steady-cam shot. To achieve the smoothest shots with a hand-held camera, however, the camera is generally held on an extended, spring-mounted mechanical arm in a special rig that fits the camera operator's body. The mechanical arm's suspension allows for smooth, gliding motions as the camera operator moves. This gliding shot is not unlike a dolly on tracks. At any rate, I successfully captured both the extreme close up and the long shot in two continuous takes, which I then edited, applying several special effects. First, I turned the entire film into black and white (removing color as a visual element allows one to focus more precisely on shape and contour). Next I added a 'ghosting' effect to the dancer's body and limbs, which leaves 'traces' of motion as a temporal quality (i.e., seeing the 'in-between' allows one to linger on the past, present and future of motion). I also wanted to repeat certain motions in quick succession, so that, although I had shot the film in two continuous takes, I actually split several frames, copied them, and inserted them in succession. The effect is like a continuous 'playback,' like a needle stuck in a record groove. It has the effect of 'reinforcing' the memory of the movement in replay. Finally, I softened some of the body parts filmed in extreme close up, for instance, the breasts, underarms, buttocks, and groin, by blurring the image slightly with a 'gauze' effect. Although the dancer wore dark tights and a leotard and the movements were slow and lyrical, I did not want the close ups of 'sensitive' body parts to appear 'ugly' (i.e., pornographic); thus, by blurring the image slightly, there was a 'beauty' to both the image and the motion (though sensuality was inevitably also present, since the close ups were of body parts we hardly attend to while watching dance). Unfortunately, since the film was part of a larger performance-based research presentation, wherein I was part of a trio of dancers that performed live in front of the film projected in the background, I was not able to generate the responses or engage in the discussion I had hoped for, any more than the other members of my research team were able to for their 'performance' segments and the theoretical ideas that supported the performances (e.g., notions of Gymnopédagogies, "naked" pedagogy or pedagogy "exposed" through the performing arts).

3 Interestingly, a university professor and performance-based researcher, who had sat watching the entire filming process, exclaimed, "I was mesmerized by the duet and completely lost sight of the fact that you were filming!" In that moment, at least, I knew that I had been 'dancing' while filming.
Possibly, the presentation was too visually and aurally overwhelming for researchers, accustomed to conferences with speakers and slide-show presentations focused on a single key concept, to follow the diverse points of interest. Thus, with such stimuli surrounding the audience, it was difficult to explain the end point of our collective goals. The film's spatial and temporal qualities, which raised cognitive and emotional responses, were made more apparent when I later used it as a lead-in to some movement pedagogy. In this instance, I chose to focus specifically on the concepts of near and far. In the thematic dance vocabulary developed by Rudolf von Laban, spatial qualities such as near and far focus the dancer's attention on an egocentric view of one's bodily extensions (i.e., the arms and legs) and the articulation of those body parts in relation to the core (i.e., the trunk). This vocabulary also allows for an allocentric view (i.e., one that situates the body in relation to one's environment) by heightening the dancer's awareness of their body in relation to other dancers or the space in which they are moving. Both egocentric and allocentric views are an essential part of spatial reasoning, which is necessarily part of cognitive functions (Milner & Goodale, 2006). It goes without saying that what distinguishes our human species is the ability to reason, namely, to apply logic to objects and events (i.e., discern, infer, judge, calculate relationships, find causes, predict, and plan). Because spatial reasoning is a cognitive process that plays a significant role in the survival of all species, it is possible to show that spatial reasoning is a cognitive universal that is remarkably stable in all forms of human symbolic expression. In evolutionary terms, spatial reasoning allows us to calculate distances between objects—"points of physiological space" which, according to Ernst Mach, "are nothing other than the goals of various movements of grabbing, looking, and locomotion" (in Rizzolatti & Sinigaglia, 2008, p. 67). Movement, in effect, is the starting point "from which our body maps the space that surrounds us, and it is due to their goal-directedness that space acquires form for us" (p. 67). While living in Costa Rica I was mesmerized by the accuracy with which monkeys, coatis (similar to raccoons), and squirrels would judge the distance between trees, leaping and landing from branch to branch with the agility of ballet dancers. Moving from one location to another, high above land-locked predators, is clearly an advantage to these animals. It is perhaps with something similar in mind that Poincaré also spoke of spatial reasoning as movement that must be decided upon to defend oneself from the "blows that may strike us," so that through a series of "parries," nature has enabled our ability to protect ourselves from threats (p. 69). To better understand near and far space, however, it is first important to note several roles that the visual and somatosensory neurons play in locating an object or the self in space. To begin, we must be able to calculate the distance between an object and our reach, or between our body and the boundaries that surround us (e.g., walls, edges, or markings on the floor).
The act of reaching toward an object or of moving around in a space without colliding with an obstacle requires a series of brain processes from "coding the spatial relations" to the "transformation of this information into the appropriate motor commands" (Rizzolatti & Sinigaglia, 2008, p. 53). Interestingly, "the visual receptive fields are always located around their respective somatosensory receptive fields" (p. 55). In experiments with monkeys, it was shown that the "same neuron that discharges when we brush a monkey's forearm also becomes active when we move our hand close to the animal's forearm, entering its visual receptive field" (p. 55). As Rizzolatti and Sinigaglia illustrate, we can test this visual and somatosensory capacity by bringing our hand close to the cheek, which will allow us to feel the hand before our "fingers actually touch the skin" (p. 55). In effect, numerous experiments carried out on 'bimodal' neurons show that "in 70% of these neurons, their visual receptive fields are linked to their somatosensory fields, coding spatial stimuli in somatic, as opposed to retinal, coordinates" (p. 58). Bimodal neurons respond strongly to three-dimensional objects and their "receptive fields are coded in somatic coordinates and anchored to various parts of the body" (p. 62). Additionally, "visual stimuli must appear in the spatial region which includes all the objects at arm's length," which Rizzolatti and Sinigaglia named the "peripersonal or near space to distinguish it from the extrapersonal or far space which is beyond the reach of our limbs" (p. 62). Insofar as the 'ocular motor system' is concerned, what is needed is a "coordinate system that can calculate the position of the objects in the surrounding space as a function of the observer, and not of their position on the retina" (p. 63). By 'objects,' we may include all boundaries in space, including walls and obstacles (e.g., chairs, tables, etc.). According to Poincaré, "not only is it necessary to 'discard the idea of a presumed sense of space,' we must also be clear that 'we could not have constructed space if we had not an instrument for measuring it'—an instrument 'to which we refer everything' and 'which we use instinctively,' which is our body" (in Rizzolatti & Sinigaglia, p. 68). Moreover, as Poincaré also expressed, "it is in reference to our own body that we locate exterior objects, and the only special relations of these objects that we can picture to ourselves are their relations with our body" (p. 68). Rizzolatti and Sinigaglia thus summarize near and far as follows.

Taken in the context of evolution, the near/far dichotomy as well as the connections between the motor possibilities of the various parts of the body and the codification modalities of the spatial relations lose much of the mystery that surrounded them at first glance. Space would no longer be represented per se somewhere in the cerebral cortex; its construction would depend on the activity of the neural circuits whose primary function is to organize movements which, albeit through different effectors (hands, mouth, eyes, etc.), ensure interaction with the surroundings, locating possible threats and opportunities (p. 70).

With the help of ultrasound technologies, one can clearly see unborn children engaged in various motor activities as early as the eighth week, such as moving the hand toward the face and, toward the sixth month, sucking the thumb or grasping the foot (Rizzolatti & Sinigaglia, 2008, p. 71).
What this suggests is that before birth, "babies possess motor representations of space," after which "movements become increasingly goal-directed and clearly referred to the space around their body" (p. 71). Near and far, therefore, are represented as that which is within grasp and that which lies outside of one's grasp. It would appear that all which lies outside of our grasp would hold little importance to us, since there would be no means by which to 'reach' an object; but since we are not anchored to any one location, as a sea polyp would be, space becomes a dynamic, rather than a static, entity. Dynamically, we perceive objects as near or far depending on our saccadic rhythms (rapid eye movements) and our body in motion (movement toward or away from objects). Importantly, we are also able to judge moving objects, whose speed is coded in relation to distance (velocity/space), presumably as an advance warning system that allows us time to 'get out of harm's reach' (Rizzolatti & Sinigaglia, 2008). Finally, as soon as we put an instrument in our hands, the instrument is coded as an extension of our arm. This has been tested with individuals with a neural impairment to the visual cortex that disables their ability to perceive near objects normally located in their visual field. Yet, when an instrument, like a pointer, is placed in their hands, those individuals unable to 'perceive' near objects are suddenly able to locate the object as if it had been in their reach all along. This neurological phenomenon makes McLuhan's (1963) view of technology as 'extensions to our bodies and senses' as truthful as ever! When I experimented, therefore, with the short film I had produced in a context designed to focus on near and far space, the responses were as I had predicted somewhat intuitively—without any neurological explanation at hand. The teacher candidates initially commented on the manner in which the film had made them feel. Viewing the dancer in 'extreme close up,' some students commented on the intimacy or sensuality that emanated from the first half of the film, and some commented on the alienating or 'foreign' aspect they sensed in the second half, which showed the dancer in a long shot. To those comments were added feelings of comfort or discomfort: the dancer, when seen at a distance, made them feel comfortable, whereas viewing her up close led to feelings of discomfort. At first, I presumed that these feelings were culturally based, but not necessarily so. Clearly an object that is moving slowly and 'receding' into the distance (as my dynamic steady-camera work entailed) is less threatening than one that is moving nearer toward you (e.g., when I pushed the camera into extreme close up), even if the latter movements are 'lyrical' as opposed to sudden and rapid. The feelings that were expressed, in a dynamic sense, point more to a biological basis for sensing near and far space dynamically than to a cultural one. On the other hand, the feelings of discomfort when the dancer was in a state of intimacy or sensuality could be construed as cultural. Certainly there is nothing inherently uncomfortable or threatening about intimacy and sensuality—unless those states are learned as being socially, culturally or morally unacceptable. Later, I engaged the students in a dance exercise improvised to Laurie Anderson's Born, Never Asked, a piece of music that is mostly instrumental.
The music possesses somewhat of an intense lyricism similar in tempo to the Satie piece I had used in the short film, but with a steady pulse that is sensed as a strong 'pulling' and 'pushing' rhythm (like sawing wood or ringing church bells). Classified by Laban as a movement quality that pairs strong 'weight' with sustained 'time,' in musical terms this quality may be notated as martellato style (i.e., a kind of slow 'hammering' feeling that string instruments may produce to render intensity). The students were asked to improvise movements with their arms or legs whereby they were either 'pushing' or 'pulling' imaginary objects while simultaneously moving dynamically as an ensemble, changing directions while walking or changing levels by crouching or stretching. I had framed the dance piece as a narrative around 'cellular' bodies existing in a confined space (i.e., using marked boundaries), which were 'born into' or set free to move in an open space (i.e., the entire room). The near and far were sensed as the distance created between the students (i.e., bodies pressed close or distanced from each other). A discussion ensued at the end, in which the students expressed sentiments similar to those made toward the short film of the dancer. For some, the 'intimacy' (no one mentioned sensuality) of moving in a limited space, which caused them to nearly brush against each other, provoked feelings of strong discomfort (often expressed with nervous giggles during the improvisation). Conversely, in an effort to avoid any near contact with bodies, many commented on the 'isolating' feeling of moving away from others in the room (some locating spaces as remote as possible). Most students said they felt comfortable with the 'distance' they were now afforded, however, and could tolerate the movements more readily. They also mentioned that being able to judge when a body was moving toward them, which allowed them to move away to avoid 'touching,' felt more 'empowering.' As we continued to elaborate on those feelings, connections were made to their past experiences. One tall student bearing a large frame commented on how she felt like 'an elephant' towering over others and forever bumping into them, which brought back uncomfortable memories of her time spent in Japan. Another student, who was more diminutive in stature, responded that, contrary to feeling like an 'elephant,' he felt a comfortable 'fit' moving either near or far, which brought back satisfying memories of his time spent in Japan, where he felt he had "fit right in to the culture." Both students commented on the similarity between their past experiences and the movement exercise. The comments provided a view of how the students negotiated their space, near and far, what sentiments were provoked, and the manner by which they connected to their overall attunement to sensing and knowing the self in the world. What was of particular interest to me was that the movement experience did not alter their first impressions of the dance film I had shown and, in fact, reinforced the images they had viewed, as many of them referred back and forth between the film and the 'dance' they had just experienced. The outcome deepened the questions I had regarding the interrelation of perception, cognition, and emotion, and deepened my curiosity as to whether images and sound in a filmic context produced brain activities similar to those produced when they experienced the dance.
Further insight on the issue of spatial reasoning is expressed in the following summary by Rizzolatti and Sinigaglia (2008). Objects and space seem therefore to refer to a pragmatic constitution by which the former appear as poles of virtual acts and the latter is defined by the system of relations deployed by these acts and anchored to the various parts of the body. The neural circuits involved are obviously different, just as the typologies of acts that they codify are different. Nevertheless, however distinct they may be, and although they operate in parallel to each other, these processes are modulated by action. The functions of the premotor cortex can only be fully comprehended if it is clearly understood that these areas code goal-centered representations of movement. We do not extend our arm towards an object unless we intend to interact with it, to grasp, or maybe just parry it (p. 77). Spatial reasoning: its impact on research methodologies It is possible to conceive of research as having been shaped as much by our capacity to reason spatially as by formal theories and methods of investigation. Our sensibilities to near and far space could be compared to whole-part sensibilities: the near, in obscuring the view of the whole, provokes a threatening feeling; the far, by contrast, disallows details of objects and events but places us at a comfortable ‘objective’ distance. To illustrate how spatial reasoning may relate to sensibilities toward research methods, Wald concluded at the end of an illustrious career that the way of science insufficiently responds to the whole, which the near view obscures. He intuited that while the scientific method has established many matters-of-fact, our questions and inferences are limited by our ‘grasping’ of objects and events close up, which does not grant us access to that which makes us uniquely and wholly human. Thus, our ability to know the world as it appears to us in close up, which is the scientific method, ironically leaves us with an uncomfortable feeling of having become intimate with parts of ‘real’ objects, all the while removed from the perspective granted by distance. Although scientific researchers, in carefully isolating factors, believe they have achieved ‘objectivity’ in their experimental endeavors, much of science remains bereft of holistic understanding. Clearly it was this bias toward objectivity to which Werner Heisenberg (1958) objected in physics. To what extent, then, have we finally come to an objective description of the world, especially of the atomic world? In classical physics, science started from the belief—or should one say from the illusion—that we could describe the world or at least parts of the world without any reference to ourselves. This again emphasizes a subjective element in the description of atomic events, since the measuring device has been constructed by the observer, and we have to remember that what we observe is not nature in itself but nature exposed to our method of questioning (pp. 55, 58). Just as spatial reasoning evolved from a reference to our bodies, Heisenberg points out that our “measuring devices” are also in reference to ourselves. But it is not only science that suffers from nearsightedness. The same could be said of researchers who analyze phenomena theoretically by idealizing the context.
No matter how convincingly ‘objective’ the techniques for analyses may appear to be, academics always see in part what must be considered as a whole. In film analysis, for instance, a close up on an ideal audience or viewer obscures from the senses much of what needs to be considered of film as a whole in a ‘real’ context. Thus, to be able to see at a distance by pulling away from the object of investigation and looking through the lens of another could be quite useful—perhaps to avoid biasing one’s own discipline or analysis. Beyond sensing that theories (as abstractions from the whole), are limited when it comes to apprehending and explaining phenomena, one can reasonably deduce that reducing the whole to its ‘idealized’ parts, which inevitably constrains the fullness of knowledge, has some advantages. It allows us to make educated guesses as to the ‘essence’ of a thing and provides science with the opportunity to test those hypotheses out. By the same token, we sense that theoretical nearness, which may provide wonderful images, metaphors, and descriptions of various parts, is insufficient to explain the ‘idealized’ whole. Like the story of the blind men who try to describe what they feel though each is holding a part of the elephant, whether rational or 192 empirical, the truth may only be revealed in the toggling back and forth between near and far. In the case of the blind, it is perhaps the assembling of all the data taken ‘up close’ that can eventually lend itself to an ‘objective’ distanced view. Thus, to accept the part-whole dichotomy, we take the action of mentally or physically zooming in for a close up and zooming out for a wide-angle view. Provided we are able to maintain the two disparate views through memory and attention, we benefit from the action of zooming in and out to provide greater information than if one or the other were missing. It does not require a film expert to acknowledge that the capacity the camera has to zoom in for a close- up (not unlike the microscope) or zoom out for a long shot (not unlike the telescope) has changed the way we view the self and the world around us. Emotions and cognition: the importance of things felt and the matter-of-fact The part-whole dichotomy, however, is not the only challenge we face in our endeavor to explain phenomena. First, as self-aware creatures, we are faced with the agonizing quest of understanding the knowing self, i.e., consciousness. And as the mind tries to peer into the brain, there can be no greater mystery than the manner by which the brain creates mind or the mind enables humans to create reality. Philosophic inquiry, which Whitehead (1938) contended ought to pose questions without limits, will continue to provoke our need to make inferences and predictions. And, as it turns out, it is those very inferences and predictions that we continually attempt to test through the sciences. It goes without saying that what cannot be put to the test cannot be stated as facts, but will remain as indeterminate potentialities. As exhilarating as the ‘space’ of indeterminacy may be to some individuals—but not pragmatists like William James whose distaste for tautologies led him to seek teleological and practical ends—throughout history, it is clear that human curiosity does not happily consider the indeterminate or the ambiguous for long. 
Science, therefore, was born of philosophy, and philosophy is born of science, for each is the skeptical engine that drives the other to turn facts on their heads as reason and experimentation allow. One intuits, therefore, that science and philosophy are indispensable for each other. Without the bottomless well of skeptical inquiry from which both philosophy and science drink, science would fall prey to the dreary task of describing, naming, classifying, and systematizing phenomena, events, and causes (unfortunately, the brunt of science education is spent in this manner) and philosophy would do nothing more than take stock of history and idealize the future. It is only when philosophy prods, with the questions “What is this?” and “How do you know?”, that we are forced to reconsider the facts. And it is the scientific method that shakes rational formulations, whose critical methods offer descriptive but not explanatory causes. Unfortunately, what often passes as science turns out to be nothing more than classifying and measuring, and by the same token what often passes as philosophy turns out to be nothing more than ideology and revolt. Whitehead (1938) eloquently stated that philosophy and science work in tandem when the never-ending ‘assemblage’ of the ‘importance of things felt’ is partnered with the ‘matter-of-fact.’ It is clear that philosophers are as interdependent with phenomena as physicists are with atoms, for as Wald (1984) amusingly expressed, “It would be a poor thing to be an atom in a universe without physicists. And physicists are made of atoms. A physicist is the atom’s way of knowing about atoms” (p. 73). Likewise, it would be a poor thing to be a phenomenon in a universe without philosophers. Hence, the importance of philosophy is its capacity to ‘assemble’ phenomena, which were viewed by Whitehead (1938) as driving points of interest that may be ultimately systematized as ‘matters-of-fact.’ But as Whitehead (1938) noted, “we are aware of grading the effectiveness of things about us in proportion to their interest,” so that in “some sense or other, interest always modifies expression” (p. 16). Film history elegantly illustrates the preceding statement, for it is clear, from a purely historical viewpoint, that the driving interest in film during the early years bordered primarily on the ontological, making philosophical views far more salient than scientific inquiry. Later, as science began to gain its stature in society throughout the 20th century, interest modified expressions from the philosophical (e.g., ontology, semiotics) to the matter-of-fact as it was explored across the diverse quasi-scientific disciplines of anthropology, psychology, sociology, and cognitive science. But in this astute observation Whitehead went beyond predicting human tendencies; he closed in on some of the key findings on human nature now emerging through neuroscience. The brain, according to Damasio (2003), appears uniquely designed to attend to drives, interests, rewards, and values for the sake of survival, and those dispositional states in relation to the environment indeed ‘modify expressions.’ For obvious reasons, brain states, i.e., drives, motives, interests, values, and rewards, are of keen importance to education, since without the ensuing actions derived from those brain states, neither life nor academic processes would be realized. Simply put, humans would have little reason to do much of anything at all were they not ‘driven’ to act in some fashion.
It is not necessarily the ‘driving’ forces that cause humans to ‘agonize’ (as much of religion and psychology has claimed); rather, what humans appear to ‘agonize’ over collectively are the limitations found in the forms of expression that are needed to share those brain states with others. What we tacitly and ipsatively understand but wish to share to avoid being isolated is forever being challenged first and foremost by philosophy. Nonetheless, the primary form of philosophic expression is language, and the “permanent difficulty of philosophic discussion [is] that words must be stretched beyond their common meanings” (Whitehead, 1938, p. 16). Notwithstanding the difficulty with which words are able to express or capture our dispositional states, Whitehead (1938) further noted that “the generic aim of process is the attainment of importance, in that species and to that extent which in that instance is possible” (p. 16). Thus, whichever endeavor is undertaken, the processes used to grasp ‘meaningfulness’ appear to gravitate toward the importance of things sensed (that which we apprehend and understand to be important). Once more, Whitehead was aligned with what neuroscience has begun to demonstrate, which is that interest, whose intent is the importance of things felt (i.e., sensed), is deeply intertwined with action. Since interests lead to ‘grading’ importance differently, their relational values are tacitly understood as raising and lowering their values simultaneously. Thus, interest ‘grades’ the importance of things, which inversely raises interest. The interrelatedness of importance, interest, and values pertains to the way those mental processes play out (i.e., act and are acted upon) in varying contexts as innate potentials are triggered and discharged. Whitehead distinguished abstractions, which are thought in part, from the concrete, which is sensed holistically—another way of stating the part-whole dichotomy. Under normal circumstances, barring any mental disorder, he considered conceptual instances (i.e., the matter-of-fact) to be understood only partly in light of cognitive functions. Conversely, under normal circumstances, barring any sensory disorder, he considered perceptual instances (i.e., the importance of things felt) to be understood holistically through sensory, somatic, and motor activity. By those two means of knowing, therefore, there exist “two contrasted ideas which seem inevitably to underlie all width of experience, one of them is the sense of importance, the other is the notion of matter-of-fact” (p. 5). According to Whitehead (1938), while nothing may appear more antithetical than the sense of importance (i.e., whole) set against the matter-of-fact (i.e., abstract), both are completely interdependent. To think otherwise seems almost inconceivable, for how can there exist in part what does not exist as a whole? Or how can facts be produced without pointing toward the importance of a phenomenon under investigation? McLuhan (1963) comes to mind in this instance, insofar as the message, which can be understood solely in part because of the ‘focalizing’ aspect of any ‘figure’ (e.g., the invention of film), cannot exist without the medium (e.g., the intellectual milieu), which is sensed as the whole because of its ‘grounding’ aspect. And the action of ‘looking’ from figure to ground and back again is the only means by which one is able to situate the part from the whole or the whole from the part.
If the back and forth is done in succession and kept sufficiently in memory and attention, the two are understood to belong to each other, even if they cannot be reconciled due to their distinctive aspects. If not, the one will obscure the other—just as McLuhan (1988) predicted in his laws of media. Not surprisingly, therefore, both Whitehead’s and McLuhan’s observations, founded upon exhaustive practice, align remarkably well with discoveries made in the neurosciences. The remarkable aspect of the visual system, in cooperation with the somatosensory, auditory, and motor systems, is that it affords us perceptual, cognitive, and emotional dimensions, that is, the ability to discern the importance of things from the matter-of-fact. As with all paradoxes, however, the words chosen to express opposing but interdependent entities create confusion. As a practicing mathematician, Whitehead (1938) attempted to overcome the limitation of the use of his words by commutative means, a common practice in mathematics. For instance, he suggested that, insofar as humans are concerned, the sense of importance is not able to exist without facts and vice versa, for “we concentrate by reason [as matter-of-fact] of a sense of importance. And when we concentrate [of a sense of importance], we attend to matter-of-fact” (p. 5). Whitehead’s notion of importance points to universal truths, since universals allow us to zoom out from particulars so as to be able to survey the whole. On the other hand, his notion of the matter-of-fact points to abstractions from the whole, since particulars reverse our focus from a universal whole to zoom into its discrete parts. Taking Whitehead’s (1938) thoughts one step further, the importance of things felt, which arises from perceptual phenomena taken in as a whole and which leads one to action, flounders in the reflective, explanatory state. For instance, in moments of stress or danger, the entire being is made alert and readied for action that prompts us to move toward or away from objects or events, rather than reflecting upon how or why one should move. As Churchill once famously declared in wartime, ‘There is a time to think and a time to act, and now is not the time to think!’ Barring any urgent condition that requires our full attention for action, however, it is in the careful investigation of facts that one reduces the overwhelming action of ‘taking in a phenomenon all at once’ and then tries to explain it in detail. History, for instance, always appears to us as a set of facts, far removed from the feeling of urgency from which they arose, while technology, as it arises in the moment, always appears to us as possessing a sense of importance. In other words, when the whole is ‘reduced’ to manageable facts, one may begin to construct a rational, more thoughtful view of what was initially perceived and acted upon rapidly, which happens well enough in historical investigations. That also is the strength of documentary filmmaking, which is to take historical facts and to reconstruct them from a reflective stance. However, even when documentary films are shot spontaneously or ‘on the fly’ (out of a sense of importance) without a plan or script in mind, the matter-of-fact (which is reflected upon) is always constructed in editing. To that extent, one can sense that field investigations in the social sciences have been compared to documentary filmmaking, or vice versa, precisely because the level of interest and importance has been tempered by distance, which allowed the ‘matter-of-fact’ to emerge.
And true to Whitehead’s claim, it is thus the ‘matter-of- fact’ that once more raises the level of interest and importance. 198 The matter-of-fact, as it turns out, is perhaps what is needed to slow down the reaction time when confusion reigns in times that are not perilous, such as I described in Chapter 2. Reflecting on the states of urgency in education, which are perceptions based on the importance of things felt that often lead to hasty actions, I am reminded of the words of a good friend and artist educator who once declared, “We must not look at education as a whole, rather we should consider it minute by minute.” Although his notion was borrowed from filmmaker Lars Von Trier in describing the process of filmmaking, it certainly holds its own in an educational context. It is interesting to note the two spatial-temporal qualities, which the preceding implies: the distance that must be created to gain perspective and the time that is required to delay the reaction. 199 CHAPTER FOUR A handicapped child represents a qualitatively different unique type of development. If a blind or deaf child achieves the same level of development as a normal child then the child with a defect achieves this in another way, by another course, by other means; and, for the pedagogue, it is particularly important to know the uniqueness of the course along which he must lead the child. This uniqueness transforms the minus of the handicap into the plus of compensation (Sacks, 1995, p. xvii) In search of meaning: cognition, perception, and language Despite my lack of formal training in neurology, I developed a rudimentary knowledge of the systems and functions of the brain and body through studies in anatomy and biology that were part of my undergraduate degree in dance education. With a general understanding of the Central and Peripheral Nervous Systems (CNS and PNS), I intuited that the brain demanded some earnest attention if I wanted to guide learning toward perceptive and cognitive fluency in dance. Several years into a public school teaching career, however, I encountered the writings of neurologists Oliver Sacks (1989, 1995, 1996) and A.R. Luria (1976, 1982), which gave me another lens through which to frame my concerns in teaching and learning. Long before I had embarked on units of film studies, I began my public school career in a French immersion school, simultaneously teaching kindergarten, visual arts, and music education to upper elementary students. All of my interests—early childhood education, language, and visual and performing arts—seemed to converge in my first and impressionable year of teaching. I found that Piaget’s theories offered me a lens through which to understand cognitive development. Since I had not yet encountered Edwin Gordon’s music development theory4, which Gouzouasis (1991, 1992, 1993, 1994) interpreted as based on Piaget’s organicist insights 4 Edwin Gordon pioneered a new developmental music learning theory, some of it based on the work of Gagne (1985) but related to Piaget through an organicist world view, which led him to coin the term audiation (rather than the popular term ‘music imagery,’ which was in use). After years of using the term, he argued that ‘imagery’ was of a visual nature, and did not describe the cognitive aspects of sound, namely, patterns of sound that one perceives and conceptualizes, which thus enables one to infer and predict new patterns. 
In other words—from perception to conception, audiation captures the view of ‘thinking musically’ (Gouzouasis, 1998) rather than merely ‘imitating’ or ‘copying’ what is heard. (Pepper, 1942; Overton, 1984), I mostly inferred from Piaget whatever I could apply to the arts. Teaching five-year-olds certainly gives one much to ponder since early childhood programs encourage placing equal value on all facets of learning through learning centers, i.e., cognitive, social, emotional, and physical. And, given my personal interests and background, I found it simple enough to integrate the learning for all the age groups I taught (grades K-6). At any rate, cognitive linguistics was one of the starting points for research as I pondered how children relate to the ‘objects’ of sound, speech, movement, and visual images. Naturally, I tried to connect what I observed while developing ‘literacy’ skills with visual and music education. In many instances, I was confronted with cognitive and sensorimotor ‘handicaps’ that often interfered with learning and reasoning. I was particularly vexed as to why ‘identified’ cognitive or sensorimotor deficits could interfere with emotional and social reasoning. At the same time, I was also surprised by the way those same learners found means to acquire the ability to reason and express through the arts or technologies (e.g., music or filmmaking). Naturally, I was curious to know whether and how cognition, emotion, language, and the arts converged. In effect, it was while reading the above quote by Vygotsky (in Sacks, 1995)—whose work I encountered later—with its sensibility toward the ingenuity of children with sensorimotor deficits, who are able to take alternate pathways to achieve what comes naturally to the ‘normal’ child, that I was struck with a necessary first step toward understanding teaching and learning. Thus, guided by the work of Sacks (1989) and various Deaf researchers (e.g., Padden & Humphries, 1988; Marschark et al., 1997), I was drawn to explore the cognitive and linguistic development of individuals born Deaf.5 5 According to Wikipedia, the capitalization of “D” in deaf is in reference to a culture of deafness, which describes the “social beliefs, behaviors, art, literary traditions, history, values and shared institutions of communities that are affected by deafness and which use sign languages as the main means of communication. When used as a cultural label, the word deaf is often written with a capital D…when used as a label for the audiological condition, it is written with a lower case d.” (http://en.wikipedia.org/wiki/Deaf_culture). The discussion tab on this page gives an interesting perspective on the controversy surrounding the issues of a Deaf culture as viewed by hearing and Deaf communities. This preliminary investigation, however, was also due to three personal experiences with audition. The first experience was with a Deaf six-year-old child enrolled in one of my ballet classes long before becoming a schoolteacher. Her energy and sensibilities to movement fascinated me: first, because despite the fact that she could not hear the music I played, her physical fluency was remarkably beyond that of most of her hearing peers; and second, because she was keenly observant of every movement I made and was highly adroit at imitating me in ways that most of the children could merely ‘approximate.’ The second experience was of my father’s temporary loss of hearing due to a viral infection that affected a part of his temporal lobe. This unusual and puzzling infirmity meant that my father’s loss was not limited to the sounds emitted in his environment, but included a total loss of his musical memory—by all counts, a cognitive impairment.
Of course, as a classically trained musician, my father was quick to notice that he could not call to mind a single melodic phrase or part of a phrase from the extensive repertoire of classical and operatic music he had learned over the years. He could not, for that matter, call to mind any sense of pitch whatsoever. No matter how hard he tried, his mind was completely void of sound. And for a period of three agonizing months, my father was left with a horrible sense of loss and confusion. Though his recovery came too quickly to allow further insights into his sudden illness, I gained a new perspective on sensory loss from my father’s account of the devastating emotional effect of losing all traces of hearing and thinking in sound,6 but also of the ingenuity of his ability to ‘act as if he could still hear,’ leaving most people who knew him ignorant of his loss. 6 Oddly, my father also lost his sense of smell through a different viral infection. The loss lasted seven years before he regained his olfactory sense (which doctors had estimated would not return). His greatest loss, he recalls, was not being able to taste any of his favorite meals. A third experience arose many years later due to an opportune classroom context. As I was starting a new university term with a group of generalist pre-service teachers enrolled in elementary music pedagogy, I was startled by the fact that a Deaf student was part of the cohort, accompanied by signing interpreters. For the first time in my career teaching music, I was challenged to reconsider the notion of ‘thinking musically.’ As I thus began to puzzle over the perceptive-cognitive matter of translating ‘sound’ to ‘symbol,’ along with the corollary between the visual, auditory, and motor systems, I searched for clues to anchor our learning around music within the first few classes together, beyond the typical ‘dance’ activities I always engaged students in. Since I also taught in several other disciplines, including language and media arts, and utilized varied artistic modalities (e.g., music, drama, dance, and film), I was on the lookout for general principles that would ‘translate’ my pedagogical knowledge from one learning context to the next. At the same time, I was motivated to investigate the auditory system and its cortical functions. I gleaned that buried in the lateral sulcus, referred to as the Sylvian fissure, are the primary auditory areas that connect in complex ways with the somatosensory, motor, and visual cortices. What I also discovered was that the superior temporal gyrus, which sits above the superior temporal sulcus, has predominantly auditory functions. Yet the superior temporal sulcus “conceals both higher-order visual areas and polymodal areas in which visual, auditory, and somatic modalities converge” (Rizzolatti & Sinigaglia, 2008). Far from the simple and distinct neural models for sound, vision, and motor functions I studied in my undergraduate years, I learned that audition is a complex system with neural functions that are not localized or discrete.
Rather, dedicated ‘auditory’ neurons are distributed to several other key sensory regions of the brain, which in turn converge or associate with the primary auditory cortex. As is also true of the somatic, visual, and motor cortices, the primary auditory system is as much a part of a complex web of interactions as many of the other primary sensory cortices (Milner & Goodale, 2006; Rizzolatti & Sinigaglia, 2008). Moreover, within the auditory system are the adjacent areas in the superior, lateral, and posterior parts of the temporal lobe involved in higher-order processing, such as speech and semantics. The left temporal lobe, in fact, does much more than perceptual sound processing; its functions include comprehension, naming, verbal memory, and other language facets. Additionally, the ventral side of the temporal lobe is also involved in higher-order visual processing, and deep within the medial temporal lobe rests the hippocampus, responsible for transferring working memory to long-term memory, as well as for the control of spatial memory and behavior (Burgess, Jeffrey & O’Keefe, 1999; Ratey, 2001). Although studying the auditory system seemed like nothing more than a collection of facts that were far removed from the ‘whole’ of the teaching context—cognitive, social, and emotional—I was nevertheless compelled to continue looking at the brain’s systems. At the very least, I was beginning to gain a view of the brain that was having an impact on how I envisioned instructing music pedagogy to generalist elementary pre-service teachers in a context where fewer than 5% possessed any formal music training. With the knowledge I was slowly gaining of anatomical brain structures and neural functions, I became keenly interested in better understanding the impact that brain ‘deficits’ have on learning—not the least of which are sensory. Brain deficits, as they are studied in the neurosciences, are largely due to neural neglect (i.e., atrophy of neurons) or, conversely, overgrowth (i.e., excessive neurons), as well as lesions or calcifications caused by drug use, birth defects, illnesses, or accidents (e.g., trauma to the head or asphyxiation). Aside from the educational reasons already stated, I also had very personal reasons to seek knowledge beyond the theories proposed by Piaget and Vygotsky, which were expounded upon in educational psychology and the social sciences. One of my children, whose ‘learning deficits’ had never been identified throughout her formal years of schooling, was then facing the sudden onset of a debilitating ‘mental illness’ (i.e., brain disorder) that affected her in emotional, social, and cognitive ways. As the Haiti earthquake has shown the world, we are never more alerted to a gap in understanding or to the effects of earth-shattering tremors than when they strike home and cause much confusion and untold collateral damage. On the home front, given that our political and social policies have entirely excluded family members from participating, without the adult child’s consent (which in most cases cannot be obtained because of the interference of the illness itself), in the care and recovery of adult children who possess ‘mental illnesses,’ I was not able to fully grasp how I could help my daughter for many years to come.
But in the educational setting, with the unique opportunity for developing a curriculum to meet the needs of a Deaf student in a predominantly auditory arts context, I was naturally drawn to developing pedagogy through the study of neuroscience and a telling perspective from the history of Deaf education. Upon first encountering it, I was dumbfounded by the literature surrounding the socialization and education of the Deaf. The bias and prejudice aimed toward the Deaf (e.g., those deaf at birth or prior to learning to speak) due to the cognitive differences that naturally occur in their early developmental years appeared to me as both a deeply ethical and a pedagogical concern (Padden & Humphries, 1988; Sacks, 1989). The emotional impact that this bias has had on Deaf children has been and continues to be grounds for invoking human rights advocacy. According to cognitive testing, the differences are notably significant in the Deaf child, and for some may last well beyond formal stages of learning (Marschark et al., 1997). Those differences begin with the kinds of causal connections we make on a day-to-day basis when sound is present. For instance, a child born without the ability to hear sounds will be perplexed by the comings and goings of individuals responding to the ringing of a telephone or doorbell. Without knowing that it was the sound of a doorbell or a telephone that motivated individuals to disappear from a room and reappear with a guest, or to suddenly pick up the telephone and speak without dialing, young Deaf children are apt to take longer to make the simple sorts of connections hearing children make in the first few years of development (Padden & Humphries, 1988). While those differences have been carefully noted in research, the problem that has beset fostering concept formation among the Deaf is the belief that cognitive differences beginning early in the child’s life are life-long ‘deficits’ that impact the social and cultural integration of a Deaf child, not to mention the ability to become a ‘productive’ citizen (Marschark et al., 1997). Notwithstanding attitudes toward the Deaf, anyone paying attention to educational pedagogies aimed toward cognitive deficits in general may note that there is a strong bias toward viewing the brain as ‘fixed’ in early childhood (by nature or nurture), largely due to theories of cognitive stage development. At best, what one hopes to do is to ‘accommodate’ cognitive deficits by some other strategy or technique to ‘work around’ the problem areas (Eaton, 2011). From an educational perspective, at stake for the Deaf has been their capacity to develop language—both written and oral—so as to achieve a level of literacy that is ‘close to’ or ‘on par with’ that of the hearing population. What fascinated me about this objective was the range of linguistic theories on language acquisition and cognition among the Deaf, which focused principally on ‘speech’ and written communication. It was not until William Stokoe (1965) published his Dictionary of American Sign Language on Linguistic Principles that a modicum of consideration was given to signing in the development of cognitive skills, such as abstract reasoning and symbolic thinking. Although there have been many more studies published in recent years that specifically address signing, many ‘hearing’ language theories persist as the foundation for understanding language development among the Deaf.
This is due to the manner in which speech and written language are thought to be the foundation for ‘formal’ and higher-order ways of thinking. Ethical concerns arise in education, however, when ‘slotting’ individuals into social and cognitive demographics leads them to be thought less able to adapt to a changing environment when needs or interests arise. The biases and prejudices arising from gaps in knowledge, which consequently produce an emotional resonance not in keeping with the ‘intelligence’ of individuals, impact our ethical approaches in education. The ramifications of such attitudes affect every population that sits outside of social norms, in all nation states. Witnessing these attitudes in an educational setting while working among native populations of Costa Rica made me ever more aware of global problems whose solutions must begin at home if we are to actualize the ethical change activists seek around the world. Not surprisingly, the study of language development among the Deaf mirrors the study of language among the hearing, as if the two could be considered entirely alike (Marschark et al., 1997). Deleuze (2004) could not have been more perspicacious in his view of the problems that beset us and the particular biases that necessarily impact educational research methodologies and pedagogy. The education of hearing and Deaf people has been rife with epistemological and ontological conflicts that have plagued the understanding of language acquisition and the measures taken for improving teaching processes. It is clear that ‘tremors’ of all such contrasting language and cognition theories continue to “occur on totally different terrains” (Deleuze, 2004). Inevitably, tied into all such concerns has been the enduring controversy over consciousness qua ‘higher order’ knowing and reasoning. Upon examining the various positions on the relationship between language and cognitive development articulated by Deaf researchers (Marschark et al., 1997), one is obliged to recognize the scope of influence that theoretical frameworks have on language experts of both hearing and Deaf populations. The bias toward the auditory system as the primary and fundamental component of language and thought is indicative of beliefs that surprisingly remain entrenched in the epistemology and ontology of Western views despite advances in brain research. Understanding those beliefs is essential to bringing about change in research and pedagogy. Language and thought as equal, independent, or deterministic The first influential language position, which held sway for a time but is seldom articulated in today’s research context, is the view that language and thought are the same. Most often attributed to the Behaviorist school, which was championed by John Watson, this view was largely rejected following “the observation that an individual without language is nonetheless capable of thought” (Marschark et al., 1997, p. 6). Although Chomsky played an essential role in overturning Skinnerian behaviorism by introducing the theory of universal grammar, Vygotsky (1962) had first criticized the theories of behaviorists who viewed thought and speech as “one and the same thing” (p. 2). His contention lay in the fact that this position prevents seeing any relationship, for “those who identify thought with speech simply close the door on the problem” (p. 2).
By the same token, Vygotsky also maintained that to study thought and speech separately forced researchers to “see the relationship between them merely as a mechanical, external connection between two distinct processes” (p. 3, italics mine). The extreme position held by ‘nativist’ theorists, such as Noam Chomsky and Jerry Fodor, is that language and thought are so independent that “the development of language and the development of cognition are distinct” (Marschark & Everhart, 1997, p. 7). Fodor has maintained the view that word meanings are innate substrates that respond to the environment under specific conditions, whereas Chomsky based his views on the theory of a universal grammar that stipulates language acquisition is derived solely by ‘filtering’ out “irrelevant and erroneous language examples from the larger corpus of correct language and determining which rules were the right ones for a particular language environment” (p. 7). In both instances, cognition (i.e., abstract reasoning) is viewed as having little to do with linguistic development, since language exists as an evolutionary entity that, through selective processes and because of its innate design, arises by adapting to the environment. Cognition is one thing, language is another, and neither theorist proposes a satisfactory explanation of how the two interact. Critiquing the two ‘nativist’ perspectives, Marschark and Everhart (1997) wondered whether “that filtering depends on an innate or user-independent set of linguistic rules” or whether language “depends on experience and related cognitive function” (p. 7). Inquiry of that type eventually led to the interactionist view, whereby cognition and language were viewed as interacting with one another in some fashion. Not surprisingly, however, since that view grew out of the theories of Chomsky and Fodor, “the interactionist perspective is usually discussed in terms of the way language acquisition is shaped by cognitive development, not the other way around” (p. 7). Another perspective holds that language determines thought, and according to Pat Siple (1997), “this position has had perhaps the most profound influence on educational and social decisions concerning deaf individuals” (p. 7). Usually aligned with the theories of Sapir (1949) and Whorf (1943), linguistic determinism and social relativity imply that “individuals who have ‘inferior’ (or superlative) language are expected to have ‘inferior’ (or superlative) thought” (p. 8). Once more, as with Behaviorist viewpoints, Diane Lillo-Martin (1997) contends that “despite their popular appeal, the two hypotheses…have been largely discredited through cross-linguistic study” (p. 8). Lillo-Martin notes that most theorists would accept that “differences in culture may be entirely accidental” or “reflect—but not determine—speakers’ worldviews” (p. 8). It is interesting to note the residual ‘popularity’ of the behaviorist viewpoint that continues to play itself out in society. Despite the spread of empirical and scientific advances that show unequivocally that behaviorism is limited, could this residual popularity be sustained by perceptive and emotional convergences whereby what is perceived as ‘inferior’ or ‘intelligent’ is graded on a ‘first impression’ (e.g., a foreign accent, a dialect, limited vocabulary, etc.)?
Even the most ‘educated’ and rational individual will nonetheless possess the feeling that a Harvard professor simply ‘sounds’ more intelligent than a Texan rancher or that a US President is intellectually inferior due to malapropisms in English. From yet another inverse position, generally viewed as stemming from Piaget, it has been proposed that thought determines language. “From this view, shared by Pat Siple, the child brings to development a set of cognitive, not language universals, and so it is cognition that drives or structures language” (Marschark & Everhart, 1997, p. 9). It is fascinating to note that emotion is rarely viewed as ‘structuring’ language or, at the very least, as having an impact, and theories on language are almost never framed with the notion that the ‘child brings to development a set of emotional universals.’ Yet spoken and written language, Steven Pinker (2007) points out, is universally strewn with emotion. Synonyms, for instance, go from attractive (e.g., curvy) to neutral (e.g., large) to offensive (e.g., fat); epithets and imprecations are “not just unpleasant but taboo” (p. 19) and ‘the seven words you can’t say on television’ (according to George Carlin in 1973) are all related in every language to sexuality, excretion, and religion. From the point of view of the development of social and emotional reasoning—cognitive functions that are vital in negotiating self within a world—it is rather odd that emotions are never considered as one of the universal forces that drive language development. According to Marschark and Everhart (1997), one may “hypothesize cognitive prerequisites to language development without assuming a direct causal role. Thus, one does not need to find a strong correlation between language and sensorimotor development to conclude that cognitive development drives language development in some ways” (p. 10, italics mine). In this case, one may say that cognition is a necessary but not sufficient cause for language to develop and that the two may emerge, more or less, independently. Language, however, by employing a ‘stealthy’ imperative, ‘saving face,’ or ‘offering a way out,’ is understood on “multiple levels” according to Pinker (2007). For instance, as tacit propositions, “we anticipate our interlocutor’s ability to listen between the lines and slip in requests and offers that we feel we can’t blurt out directly” (p. 22). Given that language is bursting with emotional, social, and abstract reasoning, owing to its semantic ‘coloring’ and its grammatical modes, aspects, and tenses, and that it is employed and understood on multiple levels, the most logical explanation for the ontogenetic development of language and thought lies in locating and understanding the convergences of the imaging brain with the universal ‘objects’ that all living beings encounter. Implicitly understanding that there is a universal emotional and social register in language, some language experts are on the lookout for alternatives to linguistic determinism. By rejecting determinism, we are compelled to consider “another possible alternative relation of language and thought” (Marschark & Everhart, 1997, p. 10). In other words, as Bates et al. (quoted in Marschark & Everhart, 1997) point out, any correlation between “cognitive and language development need not imply that one or the other was in the driver’s seat” (p. 10).
But simply rejecting deterministic properties, as ‘postmodern’ theorists have done, leaves a gaping theoretical hole that has yet to be filled with a logical explanation not packed with emotional descriptors. 211 Beyond postmodern views, many seek to find the “other factor that influences or promotes development in both and thus may result in the observed relations between two domains that may be largely independent” (p. 10). Put succinctly, Marschark and Everhart (1997) explain what this ‘other factor’ could be in the following manner. Early in development, according to Piaget, both cognition and language are promoted by a child’s sensorimotor experiences. At later points in development, however, both cognition and language growth could be driven by qualitatively different kinds of experience. During the period of concrete operations and then the period of formal operations, there would be an epigenetic interaction between the child and the environment, such that their development both requires and reflects experience of differing levels of complexity. It is not essential that there be any simple one-to-one development between cognitive and linguistic domains in this view” (p. 10-11) Thus according to Piaget, the child adapts to the world first through a sensorimotor system—that children bring into the world (i.e., innate abilities)—that allows them to attend to experiences as the child encounters these and, by consequence of both language and thought evolving over a period of maturation, formalizes both in developmentally reciprocal ways. Finally, what we may consider as universally occurring through the stages of development, both linguistically and cognitively, are the experiences and observations common to others of our kind, awash in universal internal and external structures. All things considered, differences ranging across a general bell curve are mostly superficial in the scope of human consciousness. Moreover, “the fact that human language and human cognition have developed in tandem” leads credibly to the idea that they “have natural interconnections and interactions that are distinct from either and are perhaps synergistic in their effects” (p. 12). More salient is that the deep similarities that all individuals share in language, cognition, perception, and emotion, most certainly expresses a reality that is “woven into the causal fabric of the world itself” (Pinker, 2007, p. 9). What Luria (1972) called ‘kinetic melodies’ is the bounded and unbounded flow of change perceived, processed, and outwardly expressed; then 212 reflected back. In other words, through our integrated sensory, somatic, and motor systems, which receptivity to ‘objects’ pays special attention to motion in time and space, our brain interprets causality. Interpreting forces of change is the brain inferring, predicting, and planning for its ultimate survival through its sensitivity to movement. Just as surely as our sensorimotor systems dynamically ‘transform’ information received by the viscera, skin, hands, noses, mouths, eyes, and ears, so too does our mind, dynamically ‘construct’ reality by mirroring the deeply integrative relationships between us and a physical world in constant flux. Fortunately, we respond most readily to the flow of the real world nearest to our senses that warn us of imminent danger. We may take note, for instance, of the speed, intensity, and direction of a river flowing at our feet, but not the growth of the trees along its banks. 
We will attend to the incoming sounds of a buzzing insect but hardly notice the spider sitting next to its carefully spun web high above our head. We will register the fluctuations of the heart when our beloved walks away from our embrace, but hardly notice the cold nipping at our body. Moreover, the fact that the mind is able to abstract from the whole by singling out static frames of ‘movies-in-the-brain’ reflects the kind of consciousness that is able to ‘freeze’ on matters-of-fact while at the same time grasping the importance of its dynamic nature. Knowing that motion in film is detected at 24 frames per second, for instance, carries some significance with respect to the subtle representations that are vital to its reality. Yet, to develop and maintain a healthy perspective of what is essentially dynamic, we require a dynamic view able to change as the world around us changes. This reality, shared universally with others of our kind, as Pinker (2007) details with adroitness, is inherent within and expressed through all of our symbol systems. In addition, while reflecting on this dynamic reality, I recall that by three years of age, I was fortunate to be immersed in dance and song, followed by acting, speech arts, and piano. Hence, my years were filled with the art of motion. Though I was not fully conscious of it then, I now fully appreciate my eyes, which were fortunate to possess the normal saccadic rhythm and acuity needed to locate objects, follow their motion, discern direction, and track symbols across a screen or a page; my ears, which were able to sense the temporal and spatial flow of sounds; my fingers, which possessed the fine motor control needed to coordinate my touch on the keys of a piano with appropriate intensity. I can also appreciate that all those objects and interactions, which must have delighted my cortical systems, somehow motivated me toward more artistic studies. My experiences, of course, did not allow me to comprehend what my world would have been like if those innate abilities had been absent. And though I was exposed to many examples and theories through school and in textbooks, it was in becoming a practitioner of life as both a parent and teacher that I strove to comprehend more. Perhaps this happens because we must ‘look’ so intently into the eyes of others as we guide their footsteps, and in so doing, as neuroscience may prove, our empathy grows. The eyes may be more than ‘windows to the soul’; they may be the mirrors to our reality. I thus began to question the theories posited on the relationship between language (i.e., speech and written) and cognition. I questioned the status of symbolic writing, since the fact that written language appeared much later in the history of human evolution indicated that it had an epigenetic relationship with speech and cognitive development. And upon studying the world of the Deaf (especially as it is recounted by those who are deaf), I questioned whether speech is what distinguishes human cognition after all. And I wondered what all this ‘questioning’ would bring to my understanding. Observations of a Deaf student negotiating music concepts The preceding experiences obliged me to reconsider artistic modalities that engage cognitive processes, but also the emotions that are necessarily integral to the arts.
It is clear that the importance that education places on spoken and written language as having cognitive primacy is made evident by the varying positions that have been taken in language and cognitive domains impacting arts education. The majority of language and cognitive researchers, in effect, have all but ignored contributions made by music and film researchers. Understandably, aesthetic and semiotic theories that have dominated music or film research do not lend themselves to locating relationships between language and thought. Moreover, music and film theories that have ventured beyond philosophy align with one or another language or cognitive theory rather than the other way around (e.g., see the work of Edwin Gordon). Perusing history, it is clear that for various reasons of religious, cultural, political, economic, and social bias, language has been thought of as the rubric when decorticating mind from either an empirical or a philosophic standpoint. Without doubt, it is language and human reason that ‘appear’ to distinguish us from non-human animals; to think otherwise is thought to be a form of anthropomorphizing. Whereas language, which for centuries has been tied to thought, is the summum, or highest form of human communication, the arts generally are viewed as an extension of thought, a form of communicative ‘accommodation.’ In other words, the arts (especially in education) appear as a by-product of several sensory modalities that may be harnessed when language or cognitive deficits are present. Hence, there is a parallel to be made between the arts and signing languages among the Deaf, whereby gesture, movement, and visual, vocal, and facial expressions, perceived as ‘accommodating’ speech (i.e., language), are also hallmarks of theatre, dance, film, music, and visual arts. At most, the arts ‘enhance’ what language fundamentally possesses—the ‘technology’ by which to think. In short, prevailing cultural and social attitudes toward the Deaf border on the perception that when lacking the primary sensory modality (i.e., auditory) for speech production, which ‘forces’ one to employ alternate modes of expression, individuals are ‘handicapped’ in their failure to ‘communicate’ clearly. Viewed as a poor cousin of spoken and written language, images, gestures, and other non-verbal means to get the point across do little more than ‘accommodate’ non-verbal children and the deaf. This view, in other words, considers everything that falls outside of spoken and written language as a ‘failure’ to communicate clearly and, by consequence, places individuals with ‘deficits’ at a considerable disadvantage in society. In light of this ‘linguistic’ disadvantage, one could venture that the grading of the importance of music, dance, and visual arts in the school curriculum rests on a rubric for communication, which is compared to language as possessing both clarity and poetry. For contrasting reasons, therefore, the language that is present in poetry, theatre, and cinema potentially positions those arts advantageously in the school curriculum. More often than not, however, from a social constructivist viewpoint, the arts are viewed as having the capacity to ‘empower’ young people to ‘find a voice’ (Rogers et al., 2010). Armed with artistic skills and knowledge, youth may confidently venture forward with critical views of the world.
Of course, it is interesting that theories of positioning or empowerment that hold so much clout in educational research do so despite that the study of emotions is never included. What are empowerment and positioning if not the ‘feelings’ one possesses when locating self in the world? In the end, the preceding may explain why the arts are prevalent in educational contexts filled with ‘disadvantaged’ learners that require a form of ‘accommodation’ for expression. When language skills are ‘bootstrapped’ to the arts, it is believed all the better for the learner. It is not necessarily a ‘failure to communicate,’ however, whereby non-verbal individuals are viewed as ‘handicapped.’ The attitude toward spoken and written output goes hand-in-hand 216 with the view that language determines or shapes thought and, thus, reinforces the belief that it is cognition itself that is seriously hampered if language skills are not intact. Awash in a sea of affective ‘ambiguity,’ something akin to the plight of Helen Keller prior to her first contact with words, non-human animals, preverbal children, the Deaf, and those who suffer from neural impairments are simply viewed as apprehending the world affectively but not fully comprehending it (i.e., logic and reason). As Vygotsky (1962) once claimed, animals do not communicate other than the “spread of affect” and, by contrast, “higher forms of human intercourse are possible only because man’s thought reflects conceptualized actuality” (pp. 6-7). Despite the extensive research conducted on signing languages (e.g., ASL), which shows unequivocally that signing is a complete language, the prejudice toward non-verbal languages, has negatively impacted the Deaf and produced ineffective, at times harmful, language acquisition programs (Marschark & Everhart, 1997). For those who insist on the primacy of verbal or written language in determining or shaping human thought, any proposition that argues for alternate modes (e.g., signing or gesturing) is seen as suspect. For instance, the late and distinguished researcher, Dr. Rangaswamy Narasimhan (1997), stated that, “Cases involving deaf individuals (children, as well as adults) call for quite complex analyses and interpretation. Our understanding of the relationships between language underpinned by gestures and language underpinned by speech are by no means satisfactory. In these circumstances, it is difficult to affirm confidently that such individuals lack language or do not lack it” (p. 150). In my view, however, there is nothing wrong with language holding a kind of ‘technological’ primacy. On the contrary, there is every reason to believe that language holds primacy in communication by virtue of its form and function. Who can refute the clarity with which words are able to denote or the arousing manner by which words connote meaning and able to strike at the heart of truth especially when well spoken? Who will deny the ability for 217 poetry or prose to make the heart quiver and the stomach tighten? Who can overlook that linguistic imagery has the ability to defy reality and send us into fantastical realms? On the contrary, there is reason to believe that spoken and written language is the quintessential means for producing and maintaining a rich cognitive and emotional balance of brain-body-mind. Extended consciousness, such that distinguishes us from animals, therefore, may be indebted to language without being necessarily ruled by it. 
But the facts seem clear: spoken language could not have emerged evolutionarily or ontogenetically without an intimate corollary of the visual, auditory, and motor systems of the brain, as is evident through brain imaging (Milner & Goodale, 2006; Rizzolatti & Sinigaglia, 2008). What this suggests is that the educational focus on language as holding primacy in cognitive development obscures an important stage of development that may be gained only through and with the aid of the arts. The ability to code and interpret nonverbal information, which then allows one to plan, problem solve, and create nonverbally, surely has a significant role to play in social and emotional reasoning. In the context of teaching music pedagogy to a Deaf learner, I did not set out to determine which, if any, of the positions taken regarding the relationship between language and cognition hold true. But during the course of teaching music in the special circumstance I found myself in, and despite the fact that I did not formally test a hypothesis, I began to lean more toward the idea that an innate repertoire of capacities, in corollary with the ‘deficit,’ had a definitive impact on cognitive outcomes. What became apparent to me, therefore, was the importance of the visual and motor modalities, which include the sense of touch (e.g., vibrations), as my Deaf student sought to make meaningful connections within the realm of a music-making context. Extraordinarily, on the strength of her sense of rhythm and visual sensibility, she was able to participate in all the activities, with the exception of vocal singing, which she accomplished through signing. Despite her inability to ‘hear’ sound, she was able to accurately play along using tonal instruments, such as the guitar and xylophones, by strictly attending to location, direction, and time to accompany song and speech. I wondered whether ‘melody,’ which is characterized by shape, contour, direction, and levels, could be sensed through synesthesia;7 that is to say, whether melody could be understood through other senses, beyond hearing. Outside of the known special cases of synesthesia, whether this crossing of the senses holds true universally would require following up this experience with other Deaf students, since it could very well be that this learner was exceptional in every way. As expected, it was in drumming and dance that my student excelled ‘musically,’ a factor that also captivated and motivated her participation in what she claimed was her very first music learning experience. Since she had attended schools for the deaf, I can only surmise that music is not generally viewed as a plausible part of the Deaf curriculum. What I could judge of her capacity to ‘think musically’ in performance, I assessed in the context of space, time, movement and emotions—the forces behind music expression. She readily made inferences and predictions of a rhythmic and syntactical nature during improvised and compositional activities common to cognitive approaches (i.e., audiational) in music education (e.g., Gordon, 1990).
And her capacity to infuse her instrumental work with various qualities of timbre (i.e., instrumental voicing), articulation (i.e., the attack and decay of sustained and sudden movement), tempo, and dynamics, which I can only surmise was enabled by sight and touch alone, made it especially moving to listen to her play as she wove syntactic structures in soundscape compositions (i.e., music that underscores images in film, illustrations, and dramatizations; see Schafer, 1986).

7 Synesthesia, investigated by Ramachandran and Hubbard (2003), is the unusual capacity to experience one sensory mode through another, e.g., hearing color or tasting shapes. It was extensively described by the synesthete Daniel Tammet (2006) in his memoir, Born on a Blue Day.

Clearly her capacities to perform in a musical context depended on other sensorimotor modalities and their association and translation into higher-order thinking. How she accomplished this, however, is difficult to explain, though it appeared obvious that the visual and kinesthetic played a big part, including the vibrations that she could feel. Many would contend that she was using compensatory means that would never fully achieve thinking musically, but this argument simply recapitulates the very biases Deaf communities have fought against. Moreover, it was evident from her written work that she possessed the ability to think critically with respect to music concepts as far as her experiences allowed and to employ descriptions that captured music’s essence, which in a purely philosophical sense could only ‘approximate’ or ‘translate’ tonal music. Her responses to the stimuli I offered led me to invent new ways of approaching music pedagogy. Inevitably, that led the entire class into experiences very different from those I was accustomed to constructing, which heretofore had not solicited the visual mode, beyond ‘notation,’ to such a degree. In particular, I began to focus on thinking through exercises intended to elicit inferences and predictions of a musical type beyond tonal discrimination. Thinking in this manner, however, was not especially new to me. In fact, it was precisely because of my background as a dancer that I was able to intuit the nuances between movement and music. There is little doubt that I have grasped music, since my earliest recollection, as a dancer. As a child I could not listen to music without dancing to it, and usually the improvisations I would perform were based on a repertoire of movement vocabulary I had developed in the study of dance. In other words, I was conscious that movement had formally defined lines, shapes, contours, and patterns that transitioned through time and space. It was precisely this movement vocabulary that I exercised through dance that provided the ground for my thinking musically. Long before I began my formal education in music, I had performed millions of pliés (a bending of the knees) in triple meter and tendus (repeated extensions of the legs and feet) in duple meter. Clearly this division of time through movement, which is necessarily expressed in metered music, had imprinted physically on my biology. Throughout my years of training as a dancer, I learned to create patterns of movement by varying the duration of sound and silence within metered phrases or patterns of movement that intentionally crossed metered phrases, along with countering the meter that is felt in music by performing steps that were rhythmically textured.
For instance, I might perform a movement phrase in duple meter that counters triple-meter music. The movement vocabulary, therefore, was a means to develop kinesthetic order, which is felt musically as steady beat, metered time, duration, and patterns. Although dance is necessarily rhythmical, the gestures that are performed in the upper body (i.e., trunk, arms, hands, and head) provide a type of lyricism of which melody is a part (i.e., melodic rhythm), which includes ‘imitating’ the direction and contour of pitch. For the most part, the melodic direction and contour are an index of the gestures to be performed. Thus, it would hardly be appropriate to perform an upward sweep of the arms to a melody that descends in pitch. Of course, there are many choreographic exceptions, which can usually feel comical or paradoxical to perform and to observe. At any rate, my formal dance study enabled me to move both to a strictly percussive accompaniment (e.g., the repertoires of Merce Cunningham and Martha Graham) and to an accompaniment that was both rhythmic and melodic (e.g., the repertoires of Jose Limon or Luigi-style Jazz). In either case, gesture and the movement of the upper body were part of a kinesthetic texturing that is the ‘melodic’ layering over the rhythmic lower body (i.e., legs and feet), which also marks temporal change as the dancer travels through space. In Luigi-style Jazz Dance, the movement repertoire that I developed under the training of Vicki Adams Willis (artistic director of Decidedly Jazz Danceworks),8 which is performed strictly to jazz music rather than popular music, renders both melodic and rhythmic nuances. Willis carefully choreographs dancers as metaphoric of the nuances—rhythmic, melodic, and the textures of both—that make up part of jazz music. As a dancer, therefore, the relationship I had with music was through a kinesthetic response to the elements of music, namely, rhythm, melody, texture, timbre, dynamics, tempo, and articulation. Through dance, I was able to feel music in my entrails and muscles as deeply as I could feel movement throughout my body. The two modalities, music and dance, were inevitably blurred within my biology and, today, I can no more separate these than if I were to try to separate spoken word from the movement of my lips, tongue, and vocal cords. Certainly this experience is my music reality, not necessarily another’s formation in music. Dance forced me to attend to the musculature that was necessary to maintain flow—my center of balance, posture, and body in space. Dance required that I understand the precision of movements being performed for me to imitate or recreate them precisely and in synchrony. Dance forced me to attend to ‘invisible’ details (e.g., the shape of my fingers or feet) or the contraction of abdominal and gluteal muscles in order to maintain my posture and balance while performing a pirouette or leaping through the air and landing. At the same time, dance also forced me to attend to the ‘visible’ details, such as the steps and gestures that could be viewed by an audience, which demanded memory and conscious focus on movement in space and time. It was thus through dance that I learned the control of my breath when singing or reciting poetry—a conscious effort that enables the physical control necessary to produce sounds and nuances of sound.

8 A performance that illustrates this style of Jazz Dance by Decidedly Jazz Danceworks may be viewed on TED talks: http://www.youtube.com/watch?v=HBjgyJN2Vc0
Through dance I also learned the muscular control of my fingers when touching an instrument, which includes the release of tension that can build in the arms or neck when these are not necessarily used in the action. In other words, as a dancer, I developed a hyper-awareness of my body and its parts in intimate detail, much of which became automatic through years of practice. All of this knowledge was something that I possessed personally and, in the context of teaching music to a Deaf individual, I was no stranger to the corollary between movement and music. Thus, I contemplated an approach that would capitalize on my student’s acquired knowledge and skills, e.g., signing, and in some manner imitate the kind of dance-music training I had experienced, without focusing the entire class merely on dance. One activity, outside of the usual movement repertoire to which I normally introduce students, stands out in particular. I chose a form of ‘music’ performance called spoken word, which sometimes involves a continuous soundtrack underscoring rhymes and free verse and, at other times, a type of instrumental ‘call and response’ to the words being spoken (e.g., piano, drum or saxophone). To comprehend the overall effect and intended ‘meanings’ in spoken word (of which the popular form ‘rap’ is a derivative), one cannot merely read or listen to the words in and of themselves; rather, the visual, kinesthetic, and ‘intonational’ qualities of the words as performed are essential. In light of the knowledge I had gained on signing, I was counting on the similarity between sign language (i.e., ASL) and the musical form of spoken word to make this activity a very successful one for a signing Deaf student. Playing only a sound recording, rather than a video, I had taught myself to sign the words of a spoken word piece performed by Peter Sellers. I was fully aware of my lack of fluency in signing but proceeded, nevertheless, with the full force of expressions that drew on the comedic style of Sellers. What was important to the interpretation of this particular piece was the manner whereby Sellers had created a parody of a 1964 song by the Beatles, A Hard Day’s Night (the title soundtrack of the ‘mockumentary’ of the same name). The parody could only find ‘meaning’ in the context of the spacing, timing, movement, and inflection of the words in ‘voice’ (i.e., sign) and facial expression. Judging by the class response, my performance of the spoken word in this manner allowed me to ‘translate’ the parody and comedic values originally intended by Sellers through a visuomotor modality. This was borne out when my Deaf student brilliantly performed her own spoken word in a small group. Her contributions to the overall composition included drumming to underscore the words spoken and signed by her group, which were perfectly timed with the intended inflections, visual and kinesthetic, that allowed meanings to take hold as these were adjoined by sounds performed by the group members. Had the group of beginner ‘musicians’ stripped from the music performance either the words, gestures, or facial expressions (the latter of which arguably produced a visuomotor language through the use of signs both deictic, i.e., in context, and referential, i.e., indicating independent of context), one would be obliged to accept that music is empty of semantics (i.e., meaning) and grammar (Gordon, 1990).
With some exceptions, for instance a transitive gesture (such as picking up a cup of coffee), an intransitive or non-deictic gesture or sound (such as waving the hand or a crash) and a non-referential sound or gesture can be shared affectively; without a contextual or temporal anchor (i.e., past, present, and future), however, ‘meaning’ would have as many interpretations as there are listeners and viewers. Even though music has a temporal order, it could be argued, as Gordon (1990) did, that without word meanings (i.e., when and who did what to whom), listeners only interpret ‘qualities’ of music as ‘perceived’ feelings and moods. Most often, such ‘meanings’ are attached to a context (e.g., an artifact, culture, image, word or gesture). One interesting point to recall is that the word modality, which stems from mood, may be thought of in terms of pure sensory modes. Why would this matter? Sensory systems code for four aspects of a stimulus: type (i.e., modality), intensity, location, and duration. In combination with other sensory modalities, therefore, meaning arises from shared ‘objective’ premises. For instance, to feel ‘blue’ (color coded in the visual) is distinct from the feelings of ‘depression’ (intensity and duration coded in the somatosensory) or ‘nostalgia’ (location coded in the visual and somatosensory), which are all part of ‘sensing’ temporally and spatially the appropriate mood—all of which can be identified musically. Thus, when sound is contextualized via other modalities, a common interpretation is likely to be described by several listeners.

Is there such a thing as semantics in music?

The question as to whether music possesses semantics beyond contextual anchors or references, such as language, gestures, and images, raises the issue of whether it is possible to compare music with language. At a particular level, no one can be sure that a melodic phrase, such as the familiar opening of Beethoven’s Fifth Symphony, renders any clear meaning to all listeners. Semantically, there is nothing in Beethoven’s familiar motif that allows one to comprehend the particular meaning of the music he composed. On a general plane, however, one could comprehend Beethoven’s music within the context of the sweeping changes in European music, which differ significantly from the development of Eastern music. Tagore, for instance, in a conversation with Einstein, described these differences (Matai, 2007).

TAGORE: It is difficult to analyze the effect of eastern and western music on our minds. I am deeply moved by the western music; I feel that it is great, that it is vast in its structure and grand in its composition. Our own music touches me more deeply by its fundamental lyrical appeal. European music is epic in character; it has a broad background and is Gothic in its structure.

In response, Einstein pointed out that familiarity with one’s own music makes it difficult to move beyond cultural nuances.

EINSTEIN: This is a question we Europeans cannot properly answer, we are so used to our own music. We want to know whether our own music is a conventional or a fundamental human feeling, whether to feel consonance and dissonance is natural, or a convention which we accept.

Exploring the contextual elements that can render music meaningful on a general plane, several of my students ingeniously produced videos to test some of our collective responses.
One video, produced in ABA form, began with a montage of facial expressions projected without sound, which then changed into a montage of landscape images with an instrumental soundtrack. It ended by returning to the same montage of facial expressions while the instrumental track continued playing. Independently, the images of faces without sound and the music set to scenery were, more or less, attended to; the ‘audience settling’ sounds and rustling that could be heard in the background, however, cued me to the fact that there was some distraction. But when the faces reappeared in the third section, underscored by the music that had been paired with scenery, there was no sound from the students to be heard. Moreover, the students later remarked that the music had changed (though it had not). Furthermore, having turned toward the students to scan for reactions in the third section of the film montage, I noticed near-mirrored expressions form on the students’ faces, which I had not seen in the first section (i.e., when the faces appeared without a soundtrack). The music that was present in the third section, which we discussed later, seemed to accentuate the images with greater intensity (ecstatic versus happy), duration (a smile appeared to last longer when followed by a frown) and location (the look of pain could be distinguished as arising from an internal or external stimulus). Nothing in those responses can assist us to determine a particular meaning that is present within the music itself. All that can be surmised through this brief experiment is the manner by which music underscores or heightens the visual images, which causes a refocusing of one’s attention. Music, in this instance, affects positively or negatively the impact of visual images because of the manner by which music is felt, e.g., the mood it creates. In another instance, a pair of students created a short video that juxtaposed two different soundtracks with the ‘wrong’ images. In the first half of the video, a sequence of people smiling, laughing, hugging, and clapping in joyful moments during celebrations, festivities, and events was underscored with the music of Saint-Saëns’ Dying Swan (originally used by Fokine to choreograph Anna Pavlova). The music is a lament on death and by no means can suggest joy. By contrast, the next sequence was of people crying, fearful, angry, and upset in terrifying moments of war, famine, and catastrophes, which was underscored by James Brown’s I Feel Good. While the first sequence drew quizzical looks on student faces (a sense that something was wrong, though they were not quite certain what it was), the second sequence most definitely drew squeamish looks and ‘throat clearing’ sounds around the room. Almost all the students in the class echoed my own feeling of deep discomfort as I listened to Brown’s music while gazing at gut-wrenching images of people in deep sorrow. The outcome of the two student video experiments hardly qualifies one to assert much more than the elaboration of thoughts and emotions that were provoked by pairing sound with images. Yet it does force one to pause on the notion of ‘meaning’ in both a music and film context, and on how perception initiates many more responses that converge with cognitive and emotional processes. A visual image arguably does not possess lexical meaning (i.e., precision of meaning) that may be shared between individuals, any more than a passage of music does.
Without references (i.e., lyrics) or context, there is no particular meaning that is elicited by either listening independently to music or viewing visual images beyond the matter-of-fact. Although it is possible to identify a style of music (e.g., blues) or to state what the visual image is about (e.g., a landscape), independently, all images produced in the brain, i.e., sound, visual, olfactory, gustatory and haptic, merely express the matter-of-fact. Moreover, in some instances, it is only by recalling to mind what something is like, by comparison, that statements of fact can be made at all. For instance, when wine is said to taste like chocolate or cherries, although nothing of the sort is actually present in the wine, the brain searches for familiar ‘objects’ to which it can attach qualities of taste or texture. For this reason, it is possible to call to mind what music is like, even if the composer held no such idea. Clearly, wine tasting and listening to music are similar in that they can both be a purely subjective experience that does not share precision in meaning. The familiar dictum, “a picture is worth a thousand words,” falls short of explaining the past or future events of an image captured in time. Thus, unless the context is also familiar to the viewer, images lose their referential and contextual meaning in much the same manner as may words or music. Nevertheless, it is without question that words and visual images out of context may be semantically intelligible to the reader and viewer insofar as expressing the ‘matter-of-fact.’ This may also be true, to some degree, of music without lyrics. In other words, the matter-of-fact is present in a picture of a landscape or in the statement ‘here is a landscape.’ In the case of music, such as a soundscape (Schafer, 1986), one may also conclude the matter-of-fact in the way in which music is able to convey temporal qualities (i.e., story). Unlike words and visual images, however, music must necessarily refer to movement present in a landscape, e.g., the rush of water over rocks as opposed to the rocks themselves. Hence, it is precisely because music is temporal and necessarily embodied through movement (i.e., for sound to be present) that music cannot represent with any precision what is perceived symbolically through language or pictorially through images. Moving beyond the matter-of-fact, meanings of words and images arise largely due to our capacity to ‘grasp’ intent according to ‘movement’ that may be implicit in either (e.g., verbs or action that is caught in mid-stream). Arguably, still photography (e.g., landscape, flowers, or portrait) has just as much difficulty in offering anything beyond what is descriptive. Any meaning inferred beyond the matter-of-fact would necessarily require movement and context. It is possibly for this reason that music, which offers a temporal order, frequently ‘contextualizes’ images and words. In other words, when music is present and one feels a change in tempo, a rising in pitch, or an increase in volume, etc., the brain is able to pick up the change and interpret a cause for movement. It is inherently in statements such as “it sounds like ocean waves” that music offers some kind of meaning, which may or may not be a shared interpretation. There are, of course, many images that have mobilized society due to a ‘movement’ force in photography, which visual artists express as ‘motion’ depicted in stillness, such as a dancer leaping.
This ‘movement’ force in still photography is what most likely prompted Eddie Adams, who won a Pulitzer Prize in 1969 for his famous photograph of General Nguyen Ngoc Loan shooting a Vietcong in the head at point-blank range, to express the idea that “still photographs are the most powerful weapon in the world” (as cited in Time Magazine, 1999). Importantly, although one could imagine the moments before and after a visual image is captured, the precise semantics of a visual image would depend on referential and contextual elements, i.e., cultural, social, and emotional. This referential and contextual aspect holds true while listening to a passage of music. Any meaning one may imagine would depend on one’s subjective experience. When visual images and music are paired together, as is frequently done in a film context, meaning is bound to occur that is more likely to be shared between viewers, and more precisely so when language is present. Finally, one may note that when two individuals are able to share meanings inherent to music structure, the two share a semantic understanding that is particular to music. For instance, one may teach a student to hear and audiate “do” as the resting tone in major tonality by first singing “so-la-so-fa-mi-re-ti-do.” After successfully imitating the “do,” over a period of time, a student can learn to sing a variety of tonic and dominant patterns with syllables in major tonality. After further practice and experience with tonic and dominant major patterns, when the student can recognize the pitches “so-la-so-fa-mi-re-ti-do” without the syllables, and identify a variety of tonic and dominant patterns without syllables, the student may be said to demonstrate a fundamental understanding of major tonality as it is expressed through pitches, which are heard and audiated. Hence, without reference to the terms major tonic and dominant, the instant one is able to sing a pattern that is a variant of the one preceding, the two singers share the meaning of major tonality. This example in music, therefore, demonstrates that music semantics are particular to the structure of music itself but cannot be said to exist outside of the sphere of music (Gouzouasis, 1992).

The impact of language and cognition theories on pedagogy

Up to this point, I suspect that there is little in the preceding account that would prompt educators or policy makers to alter their course of action with respect to language or arts education. For one, artistic processes and products are commonly embraced on the basis of introducing multi-modalities (often synonymous with literacy) for at least three commonly held rationales: to accommodate learning, to draw out and emphasize the importance of language skills in multiple domains, or to bring about a social context that is viewed as an important communicative function for the development of critical and transformative thinking in interpersonal and intrapersonal interactions (The New London Group, 1995). By the same token, arts educators will continue to employ language insofar as it facilitates and accommodates the learning of visual, motor or auditory modalities as those are developed in practice by naming and classifying ‘objects’ and ‘actions.’ For instance, naming the action of ‘steady beat’ while listening to, audiating, and feeling a steady beat as it moves forward in time and space is done to target the convergence of the cognitive, kinesthetic, and affective.
Words may become important in expressing what a person can recognize (e.g., steady beat) but may not be able to perform. Generally speaking, therefore, arts educators (particularly in the early stages of development) concentrate on modeling and assessing the nonverbal modalities most suited to developing specific skills and knowledge in an area of artistic study. In other words, most educators would agree that the goal of arts pedagogy is to foster the ability to ‘think’ in an artistic modality (e.g., sound, movement, visual images, etc.), to achieve fluency in output and, at the same time, to acknowledge that language ‘facilitates’ reasoning when the modal output fails the learner (e.g., an intention to demonstrate a steady beat). As Leon Botstein expressed, “The point is not to make better painters, better poets, and better musicians, but to integrate what we identify to be unique in the artistic experience with the experiences of young people in schools. Art creates a language, but it is not that the arts are another kind of language...language [simply] ends up being at the center of artistic activity” (in Ayers & Miller, ed., 1997). Hence, arts educators draw upon words to name, classify, and indicate an intent, which sits primarily outside of the artistic act itself. Essentially, words afford one the ability to know what something is supposed to sound or look like, even if the output does not match. As such, many arts and language educators accept that different modalities foster some kind of thinking, which serves to ‘accommodate’ learning when necessary. They may also see multi-modalities as enhancing creative processes and outcomes by drawing from a larger pool of artistic works, which may be ‘translated’ into another context—not unlike knowing several languages. Generally speaking, a bias toward one’s disciplinary modality, i.e., spoken and written language, dance, drama, music, or visual arts, is prevalent within each area. That bias is often the cause of the generalist-specialist debates, which continue to plague school policies. As a multi-disciplinary artist, I sense a congruency in alternating between general and specialized approaches—the former broadening neural networks as the latter deepens and thickens neuronal projections. Much to my dismay, however, I have found that the bias shown toward the primacy of language with respect to ‘thinking’ is just as prevalent in the arts. In other words, sound and visual images hold primacy for ‘audiating’ or ‘visualizing’ but not for thinking. For instance, though there is a focus on dance in an elementary music education context, which has a long pedagogical history that acknowledges movement as ‘enhancing’ specific areas of music learning, dance is not seen as directly responsible for ‘thinking’ musically, any more than ‘thinking musically’ is held responsible for ‘thinking kinetically.’ Yet dance and music, as motor activities, may have more to do with artistic development in all of the arts combined—language included—than one may readily acknowledge. As an amateur painter, I am fascinated by the motion my hands and arms make with the extension of the paintbrush and the strokes I place on the canvas. By all counts, I feel as though I am dancing, and the strokes possess a rhythmic quality not unlike those I perform on a xylophone or guitar.

A cognitive music theory: neither language nor transcendent

According to Gordon (1990), “music is not a language, because it does not have a grammar or parts of speech.
Music does have syntax, however, because there is logical order in its sounds. Meaning is given to music through syntax. To comprehend syntax in a piece of music, one must audiate its organization and structure” (p. 22). The following discussion helps to further clarify what Gordon intended when he coined the term audiation.

Though music and language are different, the process of audiating while listening to music is like the process of thinking while listening to language. There is language and there is thought. There is music and there is audiation. Just as language and thought have different roots and develop differently, so music and audiation have different roots and develop differently. Language and music have a biological basis, whereas thought and audiation have a psychological basis (p. 19).

Gordon’s view of music carries more than a hint of looking through the same lens as we have done toward language, a view that has followed us through the ages and was described as ‘immanent’ by Umberto Eco (1998). Thus to Gordon, music, like language, is quite possibly immanent and may only be ‘grasped’ by the human mind. In other words, to perceive music is to ‘hear’ patterns of sound as they exist immanently, which cannot transcend but must be ‘translated’ into meaningful terms through the process of thinking musically. But what is thinking? Is it only accountable through ‘sound images,’ i.e., audiation? With the imbrication of dualisms such as transcendent/immanent, perception/cognition, and emotion/cognition, one begins to see just how deeply and extensively the philosophical and psychological study of language, perception, and cognition has impacted theories of music cognition and learning. Of course, one cannot ignore that the brain’s auditory, visual, and motor systems have an extensive research history in perception studies both within psychology and neurology, but the two very distinct disciplines use different methods of investigation, which have been undertaken for very different reasons. Perception studies, under the behavioral sciences, record reflexes in stimulus-response experiments by way of studying ‘learned behavior.’ Neuroscience, by contrast, distinguishes its work from studies of perception in behavioral science by seeking principles that attempt to explain the complex brain-body-mind relationship. Perception studies in neurology, in one instance, tell us a great deal about how the changing patterns of sound striking the tympanic membrane or the changing patterns of light striking the retina “are transformed into neural impulses” (Milner & Goodale, 2006, p. 1). With this type of perception study, “modern neuroscience is helping us to understand the operational characteristics and interconnectivity of the various components,” despite the fact that “the organizing principles…as a whole remain open to debate” (p. 1). Thus, before moving on to discuss in more detail Gordon’s cognitive theory of music (i.e., audiation), several more insights from the neurovisual researchers Milner & Goodale (2006) help to situate the debate. Although neuroimaging has revealed that vision is interdependent with the somatosensory, auditory and motor systems, its distinction is that it “provides us with detailed information about the world beyond our body surface” (Milner & Goodale, 2006, p. 1). Arguably, but with fewer points of reference, the same may be said of the auditory system.
“While much is understood” of those signals and how these “are transformed into neural impulses, far less is known about how the complex machinery of the brain interprets these signals” (p. 1). It is not for a lack of attention to and experimentation with cortical systems, however complex those systems may be. Rather, part of the problem rests in our conceptual understanding of ‘cognition’ and ‘perception’—some of which is a remnant of the historical debate between concept and percept—which shapes the kinds of questions that would lead to ‘testing’ new hypotheses. And part of the problem rests in understanding systemic functions from an evolutionary standpoint. Though there exist differing interpretations of ‘cognition’ and ‘perception,’ the following provides us with one from an evolutionary perspective (Milner & Goodale, 2006).

A useful first step in trying to establish these principles is to take an evolutionary perspective and to ask what the system is designed to do. In short, what is the function of vision? One obvious answer to this question is that vision allows us to perform skilled actions, such as moving through a crowded room or catching a ball. Such actions would be quite impossible without visual guidance. But, of course, vision allows us to do many other things that are equally important in our lives. It is through vision that we learn about the structure of our environment and it is mainly through vision that we recognize individuals, objects, and events within that environment. Indeed, this ‘perceptual’ function is the one most commonly associated with vision in people’s minds (p. 1).

Thus, following the preceding thoughts, one could venture that sound’s purpose was to provide an ‘advance’ warning system that would locate a moving object heard before it is seen and, sound being also a vocal quality among a host of animal species, to communicate near and far. From this, one would surmise that all of our senses assist us to “recognize individuals, objects, and events within that environment,” so that perception would then “refer generally to any processing of sensory input” (p. 2). Yet what is unique about the visual system is that it must accommodate “two somewhat distinct functions—one concerned with acting on the world and the other with representing it” (p. 1). Those two distinct functions prompt what is mostly a pragmatic question: What would be the purpose of perception and representation if they did not lead to action? From an evolutionary view, the interconnectedness of acting and representing was so organized for the purpose of moving away from or toward objects to reach, grasp, parry, eat or mate for the species’ survival. As Milner and Goodale further stipulate, “unless percepts can somehow be translated into action, they will have no consequences for the individual possessing them. Nor indeed would the brain systems that generate them ever have evolved” (p. 221). It is understandable, however, that to make an argument for ‘instinct’ as driving both the senses and thought may appear highly reductive and somewhat distasteful to those with a view of consciousness that embodies notions of a ‘higher purpose,’ but certainly no less distasteful than reducing human beings to a set of ‘learned behaviors.’ In truth, the brain need not exclude a higher purpose, all the while busying itself with the business at hand. And we cannot ignore that behavior is, at times, a ‘learned’ reflex.
Between the two extremes—instinct and learned behavior—the elemental conditions and subsequent neuronal processes that have evolved efficiently to ensure the survival of all species have simply become a more refined and interconnected web of organization in humans, allowing maximal capacity for higher order cortices to think and feel on such matters. According to Milner & Goodale (2006), a “single multipurpose” system could theoretically “serve both the guidance of actions and the perceptual representation of the world” (p. 1). But as they explain further, a system of this type would necessitate a single mechanism that would handle both autonomic responses such as the “control of pupillary diameter” and more “complex processing such as that required for the moment to moment control of many skilled actions such as walking or grasping” (p. 2). Instead, it makes more sense (and is anatomically more correct) that the visual system is comprised of “two separate and quasi-independent visual brains” (p. 1): one that serves to process sensory input, i.e., perceiving an object, and another that serves the visual control of action.

In other words, perception per se, is not what the ventral stream is there for: perception is not an end in itself, in biological terms, but rather a means to an end. Ultimately both visual streams exist to serve action—the difference between them is that the dorsal stream provides direct, moment-to-moment, control of our movements, whereas the ventral stream exerts its control in a much more indirect fashion (p. 221, italics mine).

Generally speaking, ‘perception’ is used to refer to “any processing of sensory input,” notwithstanding “another more restricted sense…allows one to assign meaning and significance to external objects and events” (p. 2). According to Milner & Goodale, therefore, perception may be viewed as “subserving the recognition and identification of objects and events and their spatial and temporal relations, [which] provides the foundation for the cognitive life of the organism, allowing it to construct long-term memories and models of the environment” (p. 2). It would make sense, physiologically, therefore, that “by the time the sensory signals leave the eye on their way to the brain, a good deal of processing has already occurred” (p. 3). As one reflects on the sensory, motor, and somatic systems, the foundation from which cognition and emotion arise, along with the development of symbol systems for expression, knowledge gained from the neurosciences obliges us to revisit the sensorimotor-perceptive processes from the input stage onward. The view that keeps those processes separate and distinct from cognition simply cannot hold its ground against today’s scientific discoveries.

Rethinking imitation, meaning and understanding

At first glance the preceding discussion on the visual system does not necessarily describe the systems (i.e., auditory and motor) most needed to develop a full range of music skills (i.e., tonal and rhythmic). Nonetheless, understanding the visual system helps one gain a better understanding of the cortical activity of all sensory systems as they function perceptively and cognitively.
Moreover, taken from the view of neuroscience, we are invited to reconsider the concept of ‘meaning.’ Gordon (1990) was clear in stating that perception is a necessary antecedent to cognition in a music context, but he also argued that ‘meaning’ in music can only be achieved when sound is cognitively processed into syntactical forms (i.e., a logic of sound sequencing). He stipulated that its cognitive correlate, audiation, “is confused with inner hearing, imitation, recognition, and memorization” (p. 20). The problem with those kinds of distinctions is that they very nearly describe the early auditory system as if processes of this type were somehow divorced from higher order thinking or the emotional register that is needed to interpret sounds. In other words, although Gordon concedes that “a child can imitate or ‘inner hear’ without audiating” (p. 20), he imagined that hearing and imitating are not processes that allow one to ‘think musically’ (i.e., audiate). But conceiving matters in this way leaves one to wonder how imitation is possible from merely ‘hearing’ sounds—which, in my experience of learning second languages and music, is not as simple as one would think. If one does not think in sound ‘images,’ which are necessarily temporal and spatial, it is rather unlikely that hearing sounds will prompt imitation—not even when those sounds are ‘broken down’ to their smallest units of sound (as the character Eliza Doolittle epitomized in the 1964 film production of My Fair Lady, and more recently, Steve Martin in the role of Inspector Clouseau in the 2006 remake of The Pink Panther). Hearing and imitating music without audiating (i.e., thinking in sound, aurally conceptualizing music; see Gouzouasis 1992, 1993), as Gordon (1990) theorized, finds little support in neuroscience. Some ‘action understanding’ must precede imitation. As Gouzouasis would express, a pre-verbal child is likely to observe without imitating until such time as the child is able to understand what is intended by the actions, i.e., movement or singing. To this extent, thinking musically must be the innate ability upon which imitation will then follow. Thus one notes that the imitation of sound, whether linguistic or musical, shifts from an innate ‘babbling’ stage, whereby an infant is able to produce sounds, to sounds that are reproduced with intention and purpose, even if this is merely to imitate. When sounds have intent and purpose, one may assume that the individual is thinking linguistically or musically. Ultimately, what we learn from neuroscience is that if the sum of the auditory-motor areas were not able to perform convergent and essential processing, for instance, sound recognition or discrimination, sound signals would not be processed in the auditory cortex in any meaningful way. Perceptive functions, which relay essential information to the sensorimotor system, constitute a life-saving process in evolutionary terms, one that allows for higher levels of thought. Essentially, one cannot separate perceptive and cognitive functions when we listen to, perform, compose, improvise, and move to music. Any disruption in perceptive or cognitive functions, therefore, would significantly alter incoming information or the processing required for meaning to occur.
When essential and necessary sound discrimination is made in conjunction with the auditory cortex, the cerebral cortices are able to do what they do best in humans, which is to think in complex patterns and multi-layered, multi-dimensional connections (a veritable network of interconnected and interdependent neurons). To Gordon’s credit, he was correct in noting that a deeper and richer level of comprehension is achieved through complex patterns of thought in music. And comprehension may be measured by degrees of ability—from core to extended consciousness. For instance, individuals who possess a low IQ or other neurological disorders yet are, nonetheless, capable of extraordinary music skills in performance and composition demonstrate a degree of understanding in music, even if their comprehension is not fully present in all cognitive functions (Sacks, 2008). I have intentionally employed the two words understanding and comprehension in the preceding sentence, terms that have been commonly used to distinguish between perceptive and cognitive functions. Of some significance is the fact that the Oxford English Dictionary makes no distinction between the two words, as both mean “to apprehend the meaning.” On the other hand, one would note that in French the verb comprendre, whose lexical meaning is directly translatable to the English verb to comprehend, is also synonymous with the action of grasping, as in the synonym saisir. The French verb entendre, by contrast, translates to the English verb to hear or, colloquially, to understand, and the two verbs are used logically in the sentence “je t’entends, mais je ne comprends pas.” In English, this sentence is most commonly translated as “I hear you but I don’t understand.” Rendered literally back into French, however, the English version appears to be a tautology, i.e., I hear you but I don’t hear you. A more accurate translation of the French would be “I hear you, but I do not grasp what you mean” or “I hear you, but I do not comprehend your meaning.” In English, however, since there is flexibility between the two verbs, the hear/understand pairing is not redundant and is commonly understood as ‘being able to hear sounds but not process them in any meaningful way.’ Accordingly, the difficulty one encounters when studying savants is trying to assess the depth of their music understanding qua comprehension. While some savants are only able to ‘reproduce’ (i.e., imitate) music they have heard, there are others who are able to improvise and to create new inventions in improvisation and composition. In either case, given the interdependence of the perceptive and cognitive systems (i.e., primary sensorimotor systems and cerebral cortices), savants are necessarily thinking musically, whether imitating or composing, across a spectrum of understanding and despite an inability to ‘translate’ this knowledge or these skills into linguistic explanations. There is an argument, however, which holds that if all an individual can do is imitate or, in educational terms, if learning has been merely ‘by rote’ and shows no independent thought, there is no ‘actual’ comprehension. Whether this argument is made in language, mathematics or music, the idea is that a learner who is capable of imitation but unable to infer, predict, solve a problem or invent any new patterns is merely perceiving but not thinking. In truth, the problem rests partly in memory and partly in the complexity of neural connections.
Although it is true that modalities may not be compared one-to-one at a particular level, any more than one language is precisely translatable to another, on a general plane what one encounters is a metaphorical capacity that opens one to multiple perspectives. Such general concepts as time and space or weight and force are not limited to any specific modality and may be used to mean the same thing generally though not specifically. Hence, reading an analogue clock, reading a sentence, or listening to a passage of music requires temporal-spatial understanding, though each handles time and space in a very specific manner. What comprehension beyond imitation implies is the capacity to store in memory a collection of patterns that may be split apart, inverted, flipped, rotated or otherwise perceived from multiple angles and ‘moved’ from one context to another. Any weakness in attention or memory, along with any barrier in a specific cognitive area, could leave a person capable of imitation (particularly in the short term) without full comprehension. Thus neuroscience is helping us to gain new perspectives on the interdependence of the perceptive-cognitive functions that operate in complex ways. The brain’s plasticity, as many neuroscientists are beginning to articulate, is such that when areas of the brain dedicated to perceptive or cognitive processes malfunction, the brain’s parsimonious manner of processing information is able to convert incoming information via another route, sometimes with neurons actually invading areas they would not under normal conditions (Ramachandran, 2010). Generally speaking, therefore, the capacity to discern and then ‘grasp’ the temporal-spatial aspects of sound is a necessary first step in the development of speech or music, one that leads to motor processing. Without the temporal-spatial discernment of the start and finish of a single unit of sound, or of the difference between phonemic and morphologic units of sound, speech, like music, would be a melodious blur—difficult to follow or to imitate, not unlike the common complaint made by adults struggling to learn a foreign language. The most difficult aspect of learning a new spoken language, in fact, is the spatial and temporal aural discernment of phonemic and morphologic units. Likewise, the most difficult aspect of learning music is the spatial and temporal aural discernment of sound values (tonal and rhythmical), which include pitch and duration. As such, Rizzolatti and Sinigaglia (2008) offer us a “second look at many key aspects of the traditional view of how the brain works, particularly with regard to the organization of the motor system and its functional relations with other systems” (p. 3). To begin, a traditional view held that “sensory, perceptive, and motor mechanisms were situated in clearly distinct cortical areas” (p. 3).

According to this view, vast cortical regions—often defined as the associative areas—were located between the sensory and motor areas; it was thought that in those associative areas, particularly in the temporo-parietal regions, information from the various sensory areas was assembled and objectual and spatial percepts formed for dispatch to the motor areas for organization into movement (p. 3).
Thus, this model would suppose that when we grasp an object, “the brain implements a number of serially organized processes,” which take information arriving from the sensory areas to the associative areas “for integration,” then relay “the resulting data to the motor cortex to activate the appropriate movements” (p. 3). What this describes is a system that segregates sensory, perceptive, and motor mechanisms, and a system of this kind would exist only to “translate thought and sensation into movement” (p. 8). In essence, the role of the motor system thus viewed as “peripheral and almost exclusively executive” (p. 3) is applicable only in a clinical sense to ascertain movement localization. It serves to reinforce the theory that the motor cortex, as the arrival point for “sensorial information processed by associative areas,” is fundamentally “devoid of any perceptive or cognitive role” (p. 8). The following, however, invites us to reconsider what was once traditionally thought to occur in the sensorimotor-perceptive areas (Rizzolatti & Sinigaglia, 2008).

Significant changes to the traditional view were brought about by the discovery that the areas of the posterior parietal cortex (traditionally labeled as ‘associative’) not only receive strong afferents from the sensory areas but also possess motor properties that are analogous to those of the agranular frontal cortex, to the extent that together they actually form highly specialized intracortical circuits (p. 20).

This important discovery pertains to the systems of vision and sound, which depend highly on motor function to process vital temporal, spatial, and kinetic information.

Far from being peripheral to and isolated from the rest of the cerebral activity, the motor system is made up of a complex web of cortical areas that are anatomically and functionally different and contribute to produce those sensorimotor translations (or, more precisely, transformations) required to individuate and locate objects and implement the movements required to execute the acts that compose our daily lives (p. 20).

What this suggests is that processes “normally considered as being higher order and therefore attributed to cognitive systems” may turn out to reveal that the “primary neural substrate of these processes lies in the motor system” (p. 20). Notably, as far as cognitive systems are concerned, Rizzolatti and Sinigaglia (2008) include the perception and recognition of actions carried out by others, imitation, and gestural and vocal communication. Although this view has been met with opposition, the implications of such discoveries in neuroscience resonate deeply with my experiences as both creative artist and pedagogue in music, dance, and film.

Creative reasoning: a cognitive-perceptive act of logic

Reflecting upon creative reasoning in music and dance, which normally leads the individual, aurally and kinesthetically, to produce an original composition, one notes that it necessitates the ability to discern by choosing or eliminating sounds or gestures within a possible range of ‘vocabulary’ to achieve an intended outcome. All music and dance compositions, whether or not they are well ‘constructed,’ are granted syntactical logic (a sequencing of sounds or gestures according to compositional rules in particular genres and styles of music and dance).
If dance and music possessed grammar, which by definition could be thought of as linking the sounds and gestures in a music or movement phrase to meanings, is it reasonable to suggest that music and dance do not also possess semantics? In a language context, it is difficult to attribute ‘meaning’ solely to one or another aspect of linguistic structure by virtue of the fact that semantics, syntax, and grammar play an integral role in conveying meaning. Would that not also apply to other temporal-spatial arts? Returning to the case of my Deaf student, therefore, what I observed was a capacity to think ‘musically’ through perceptive-cognitive processes that coded temporal and spatial elements inherent in music, e.g., duration, tempo, and dynamics, along with vibratory nuances of timbre or pitch, e.g., sound envelope. All such understandings were principally derived from her capacity to apprehend motion and stillness (i.e., sound and silence) through her visual, kinesthetic and haptic (i.e., touch) senses. In addition, her working memory was evidently able to remember sequences of rhythmic patterns that she could perform when responding (i.e., imitating), improvising or composing. Although her auditory deficit limited her music comprehension to visual, haptic and kinesthetic modalities, the degree to which she was able to imitate, improvise and compose musically reflected the simple manner by which the brain is able to translate incoming signals, through areas that are not impaired, into higher order processes. Upon reflection, there is no question in my mind that she found meaning in our activities both cognitively and affectively, precisely due to aspects of movement that provide spatial-temporal order. The ability to fully comprehend music in a conventional sense was deterred only by her lack of an auditory sense. What is common to all senses, therefore, is the remarkable general capacity to detect movement, i.e., change. Importantly, several senses can aid one another in the task of detecting movement. For instance, it is well known that ‘noise’ assists patients with an acquired neurological disorder who fail to visually detect motion (Rizzo, Nawrot & Zihl, 1995). It is fortunate that the brain operates in this manner, in particular for those who are completely blind, since their comprehension of space, time, direction and speed of motion can only come through haptic or auditory means. Our senses’ ability to detect motion, in effect, is crucial in informing the motor system when or how to act. For individuals with the acquired disorders of akinetic mutism—which manifests itself as an absolute absence of the will to reach, catch or grasp objects—and motion blindness, or akinetopsia, the experience is nothing short of disorienting and disruptive to consciousness; both are frequently aspects of Alzheimer’s (Damasio, 1999; Damasio, Tranel & Rizzo, 1999). In the well-documented case of the patient LM, simply pouring a cup of tea became an arduous task since the liquid appeared in her visual cortex as a frozen mass, leaving her incapable of determining when to stop pouring (Rizzo, Nawrot & Zihl, 1995). On the one hand, it is remarkable to note that the brain is particularly adept at attending to changes within and without the body for the specific purpose of achieving homeostasis.
But beyond this capacity to sustain life here on earth, our innate ability to accurately apprehend the temporal and spatial properties of our existence, which are detected through motion—that of the organism or surrounding objects—is the foundation that enables ‘order out of chaos’ (Prigogine, 1984). In pondering the importance that movement plays in our cognitive development, therefore, one begins to appreciate the sensorimotor system, which serves the ability not merely to grasp time and space but to respond to them appropriately. It is within a perceived temporal-spatial framework that all organisms are furnished with a means to ‘comprehend’ causal events. The most remarkable discovery is that ‘comprehension’ may be found in the most primitive of organisms, for instance, in the case of a slime-mold experiment where slime was enticed through a maze to find a food source (Nakagaki et al., 2000). Thus it may be noted that without the capacity to perceive causal order (e.g., if, then), it is unclear how living organisms would be able to learn and adapt to new environments. The fact that the body-brain processes temporal-spatial events as more than random or unrelated synchronous perceptions places movement as an essential element of learning. Thus, without the essential element of motion, humans would have no means by which to develop core consciousness, which Damasio (1999) explained is essential for the development of extended consciousness. We may have instead evolved as a beautiful and complex tree, which could be detected by other living organisms swinging through the branches or observing the wind rustling its leaves. There would be nothing wrong with being a tree, but most humans would agree that it is better to be a human being, even in the midst of our varied understandings rendered messy because of our ability to move within and between our worlds, e.g., zoom in and out, change focus, walk, gesture, etc. Having so far ‘zoomed in’ to a brief neurological view of the relationship between cognition and perception, it is necessary to ‘zoom out’ for another view on the relationships believed to exist between language and cognition. With a slightly different focus on language studies than expressed heretofore, the following discussion underscores why language has held primacy as the rubric for understanding symbol systems. Later, when I return to the discovery of mirror neurons, I discuss why mirror neurons theoretically increase the possibility that the arts play an essential role in higher order thinking.

Universals as determining language and cognition

Language universals underpin the theory of generative grammar and, as Chomsky himself conceded, do not extend to semantics. This view holds that “the child comes with an innate language module that governs the child’s acquisition of grammar” (Siple, 1997, p. 48). Hence, “this module consists of general universal principles of language structure, including abstract principles that produce morphological analysis during language acquisition” (p. 48). In other words, humans begin language acquisition according to “a default parameter value” but ultimately develop language skills and abilities “through interaction with specific language input” (p. 49). What Chomsky theorized, therefore, is that “independent of cognitive ability” a child begins to acquire grammar due to ‘triggers’ and ‘filters’ with innate structural principles “that generalize to language type” (p. 49).
This independence from cognition, as an instinct, places the early stages of language development in a position not unlike that of the motor system, namely, as executing an action without cognition. In the case of language, since verbs are the “chassis of the sentence” (Pinker, 2007, p. 31), movement and change are what drive grammatical constructions, yet therein we note a basic conundrum. Pinker (2007, p. 37) articulates the problem by posing the following questions:

How do children succeed in acquiring an infinite language when the rules they are tempted to postulate just get them into trouble by generating constructions that other speakers choke on? How do they figure out that certain stubborn verbs can’t appear in perfectly good constructions?

One may presume that there are brain structures dedicated to responding accurately to linguistic ‘triggers,’ matching sounds to ‘rules’ of universal language structures, generalizing to language type, and performing the necessary ‘filtering’ to allow for correct verb usage, e.g., tense (past, present, future), aspect (perfect and imperfect) or mode (indicative, subjunctive, and imperative). If the same question were asked of music, dance, or film—all of which are ‘verb-like’ (spatial, temporal and kinetic)—one could posit the same response with respect to composition, choreography, and film montage, whereby time, space, and movement are intuited as ‘grammatically’ correct or not. One point to be noted before moving on to Pinker’s (2007) discussion of how to escape the problem he identifies with verbs is that in my final discussion, an analysis of the short film Ethical Chasm (by Tierney, 2001-2002), I return to the underlying temporal-spatial and movement qualities that mark action and their ‘mirror’ role in a film context. That role is one that has driven my conceptual view of film ‘grammar’ and ‘semantics,’ independent of film theory’s focus on linguistic or syntagmatic constructions. But the following helps to frame my conceptual analysis in greater detail and clarity. Thus, Pinker (2007) suggests that there are three ways out of the problem. The first is to assume that a child, when applying verbs ungrammatically, frames a rule too broadly and initially lumps together verbs that both fit and resist the rule, but in time is able to “somehow figure out the restriction and append” the rule (p. 37). The only problem is that all too often, verbs such as pour, fill and load—“all ways of moving something somewhere” (p. 37)—are too close in meaning to make such a fine distinction without invoking some other type of logic. The second possibility, which holds that rules of structure do not in fact exist, is that verb combinations and usages heard from elders are stored away in a child’s memory, and children “conservatively stick to just those combinations” (p. 37). While some have taken this hypothesis seriously, it is clear that children construct some of the most peculiar sentences, which they could never have heard before, and it is certain that an ungrammatical use of a verb will occur. The sentence “pour the cup with juice” (p. 37), for instance, occurs when the locative rule is broken. That is to say, types of objects that are either ‘content’ (i.e., juice) or ‘container’ (i.e., cup) necessitate the use of the correct verb to convey meaning in the direction of movement with respect to the object of focus, namely, that the cup’s empty state will be changed. Thus, ‘fill the cup with juice’ would be the correct construction.
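To make the content/container distinction just described concrete, the following minimal sketch is my own illustration in Python; the toy verb lexicon and its class labels are assumptions for demonstration only and are not drawn from Pinker or from this thesis.

# A minimal, illustrative sketch of the locative distinction discussed above.
# The verb classes below are toy assumptions, not a model from Pinker.
#   "content"   -> the verb describes the manner of motion of the moved stuff (pour, drip)
#   "container" -> the verb describes the change of state of the container (fill, cover)
#   "both"      -> alternating verbs that tolerate either construction (load, spray)
VERB_CLASS = {
    "pour": "content",
    "drip": "content",
    "fill": "container",
    "cover": "container",
    "load": "both",
    "spray": "both",
}

def locative_ok(verb, construction):
    """Return True if the verb is expected to sound grammatical in the given
    locative construction: "content" for 'V stuff into/onto container',
    "container" for 'V container with stuff'."""
    cls = VERB_CLASS.get(verb)
    return cls == "both" or cls == construction

if __name__ == "__main__":
    print(locative_ok("pour", "content"))    # True : "pour juice into the cup"
    print(locative_ok("pour", "container"))  # False: *"pour the cup with juice"
    print(locative_ok("fill", "container"))  # True : "fill the cup with juice"
    print(locative_ok("fill", "content"))    # False: *"fill juice into the cup"
    print(locative_ok("load", "content"))    # True : "load hay into the wagon"

The point of the toy rule is only that nothing on the surface of ‘pour’ or ‘fill’ announces its class; that hidden restriction is precisely what a child must somehow induce.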
The ‘ungrammatical’ locative construction (i.e., container or content locative) not only occurs among children, but also among adults when ‘new verbs’ are introduced into the language—making the argument for using verbs ‘conservatively’ (until hearing the correct usage) unlikely according to Pinker (2007). However, when new verbs are introduced into English, the ‘correct’ content-container construction will show up. Pinker draws on the example of ‘burn or rip’ a CD, as the correct ‘container locative’ verb, while ‘burn songs onto a CD or rip songs from a CD’ as the correct ‘content locative’ verb. Pondering the comical use of verbs (i.e., actions) used incorrectly in a sentence, I was surprised to discover that a filmic scene may produce a comical reaction due to its ‘ungrammatical’ construction not unlike what occurs upon hearing a child’s error—an example of which I give later. Finally, a third possible way Pinker (2007) suggests to resolve the problem as posed earlier, is to allow that children make errors, “but are corrected by their parents” and thereby 248 avoid incorrect verb use in the future. Unfortunately, as most parents will attest, no amount of grammatical intervention actually proves effective in correcting “deviant sentences.” The following exchange from an example offered by Pinker (p. 39) illustrates this point. CHILD: I turned the raining off. FATHER: You mean you turned the sprinkler off? CHILD: I turned the raining off of the sprinkler. Word meanings believed to exist independent of cognition, therefore, are founded upon innate structures that begin with listening to and producing language appropriate sounds, and arise because of a genetically human linguistic factor (i.e., language instinct) that is triggered through social contexts. This view is supported by the study of neurological disorders, such as schizophrenia, Alzheimer’s syndrome, autism, and certain aphasics, which has shown that “fluent grammatical language” and numerical skills may be highly developed “in many kinds of people with severe intellectual impairments” (Pinker, 1994, p. 41). Unfortunately, what those mental “impairments” point toward is that language and cognition “have different genetic roots [whereby] the two functions develop along different lines and independently of each other” (Vygotsky, 1962, p. 41). But under normal circumstances, those examples do not offer insight as to how perception, cognition, and language converge. A rare neurological disorder caused by a “defective gene on chromosome 11 involved in the regulation of calcium” (Pinker, 1994, p. 41), called Williams-Beuren syndrome (or simply Williams syndrome), provides a good example. The syndrome is characterized by low IQ alongside distinctive features, such as, an elfin face, a low nasal bridge, and the possession of a cheerful and engaging demeanor. Sometimes referred to as the ‘chatter-box’ syndrome, individuals bearing Williams syndrome also have an unusual language ability, which renders them loquacious, disarming, and excessively social (Pinker: 1994; Ratey, 2001). 249 While working on a case of Williams syndrome, the psycholinguist Ursula Bellugi was able to show through laboratory experiments that despite their low reasoning ability, children could nevertheless “understand complex sentences, and fix up ungrammatical sentences, at normal levels” along with possessing an “especially charming quirk [insofar as] they are fond of unusual words” (as cited in Pinker, 1994, p. 43). 
Reflecting on Williams Syndrome, therefore, one may begin to see that language and cognition are indeed two functions that present distinctly from one another. At the same time, it is too simple to place cognition or language into separate monolithic wholes. Clearly there are cognitive functions that allow sophisticated language use even while an individual presents with a low general IQ. Thus, the search for cognitive universals underpins the view that language is “couched in some silent medium of the brain—a language of thought, or mentalese” (Pinker, 1994, p. 45). And “compared with any given language, mentalese must be richer in some ways and simpler in others” (p. 72). The notion of the existence of a ‘language of thought’ (i.e., mentalese), which was first posited by Fodor (1975), elaborated upon by Jackendoff (1987), and then made popular by Pinker (1989, 1994, 2007), has been met with strong resistance among language and literacy experts. For instance, Narasimhan (1997) expressed the following in opposition to the view that cognitive universals are a necessary component of thought itself.

On the other hand, if an essential component of ‘thinking’ is ‘reasoning’ or ‘ratiocination,’ it is difficult to see how such thinking can be carried out without the use of language (p. 150).

In Narasimhan’s (1997) view, the very notion of cognitive universals as underpinning language acquisition is met with suspicion on the basis that reasoning (i.e., a conscious deliberate ability to make inferences) is believed to occur solely through the use of language. Giving away his position by utilizing the term ‘ratiocination’—a term rooted in the Latin for ‘reason’ and pointing to syllogistic thinking—Narasimhan does not wish to appear too narrow in his definition of language.

It is important to note that ‘language’ does not mean only ‘words.’ A whole lot of our intellectualizing is based on our ability to exteriorize the results of our mental activities and innovate ‘hardcopy’ representations of them. These exteriorized representations then act as new inputs for mental operations. Texts, charts, diagrams, pictures, maps, tables, etc., are some of the exteriorized representations that literacy makes it possible for us to use (p. 150).

Perhaps Narasimhan was thinking along lines similar to Sapir’s, suggesting that it is at the instant when we are able to exteriorize mental activities, when the ‘word’ (or other such ‘language’ form) is at hand, that we are finally able to reason. Continuing his critique of Pinker’s (1994) view that a ‘language of thought’ (i.e., mentalese) exists, which drives language competencies, Narasimhan invokes the ‘mystery’ that surrounds the creative thinker who attests to thinking in images.

Pinker offers the usual examples of statements by creative artists and scientists claiming that their creative thinking is non-verbal and chiefly image-based. Part of the problem involved here arises through our tendency to equate ‘language’ with verbal-language exclusively. As discussed in the last paragraph, one could still underpin ‘thinking’ with ‘language’ which involves little or no verbal expressions. This, of course, is not a complete explication of ‘creativity.’ There is much about creativity that we just do not understand. Shrouding it in myths and mysticisms, certainly is no help (pp. 150-151).

One can only assume that creative processes and products continue to be mythic and mystic to those whose personal creativity remains unexamined.
It hardly seems credible that after 60 years of intensive research carried out on creative processes, products, and environments, so little is understood of creativity or that the field of creativity research, which has been carried out principally within the cognitive sciences, is awash in myths and mysticisms (e.g., Amabile, 1996; Bohm, 1998; Csikszentmihalyi, 1990, 1996; Gardner, 1993). On the contrary, many researchers have successfully ‘debunked’ a great deal surrounding creative thinking and have included the study of creative thinkers (e.g., artists and scientists) precisely because creative individuals are fully aware that their thoughts are, as Damasio (2003) has described them, mental ‘images,’ which include ‘symbols’ and other objects (i.e., sounds, colors, tastes, etc.). Nonetheless, the argument made to oppose a ‘language of thought’ does not rest merely on trusting that language is both necessary and sufficient for reasoning (i.e., interpretation) to occur; it also rests on the idea that all reasoning is contextualized and that context is the factor whereby one is able to process ‘natural language’ (i.e., the spoken word), making cognitive universals unnecessary. Thus, Narasimhan (1997) continues with the following point.

One of the major open problems in natural language processing (NLP) is precisely this: formulating (in computationally meaningful terms) the internal representation of contexts; and specifying methods of identifying (in real-time), given an input sentence, the context relevant to its interpretation. Given the open nature of this fundamental issue, it is unconvincing to argue that ‘Mentalese’ must necessarily be made up of a symbolism that is quite different from normal natural language expressions ‘networked’ in appropriate ways (p. 151, italics mine).

Despite the overwhelming scientific evidence that supports the view that there is a rich cognitive life present among nonverbal children and individuals with auditory or neural impairments, a theory of cognitive universals is still found to be lacking construct validity. Those who cling to the primacy of language as shaping thought readily admit that there is ‘something extra’ operating for language to be effectively communal. From a socio-linguistic perspective, science appears incapable of pinpointing that ‘something extra,’ which does not point precisely to a social context (Bakhtin, 1981). In other words, social context becomes the extra-linguistic factor that any ‘isolated’ study of language would fail to evidence, yet ‘context’ is presumed sufficient to provide it. This argument reminds me of the poor patient seeking a physician’s help to remedy a pain in the neck: “Doctor, my neck hurts when I move it ‘like this.’ What should I do?” And the doctor responds with, “Well, stop moving it like that.” Apparently, one need not find the cause of the pain, as long as identifying the action (i.e., context) that causes it to occur is sufficient to settle the problem.

It is clear that language expressions cannot be interpreted in terms, exclusively, of other language expressions all the way ‘down.’ At the primitive (i.e., basic or fundamental) level, language expressions must necessarily be interpreted in terms of non-linguistic ‘expressions’ and ‘schemata’ (of perceptual, motor and other modalities). We have already emphasized that ‘language’ expressions must be understood in a larger sense to include the use of other notational and representational conventions (such as diagrams, tables, lists, etc.) (p. 151).
Despite an aversion to any atomistic explanation (i.e., expressed in the phrase, all the way ‘down’), Narasimhan’s use of “perceptual, motor, and other modalities” in the above quote is rather tenuous, and indicates that he concedes ‘something’ is going on inside the brain, which he cannot fully understand but suggests can be exteriorized in a ‘non-linguistic’ modality (e.g., a diagram). Viewing the ‘language of thought’ as “merely shift[ing] these problems one more step inside our heads,” Narasimhan asks, “If this language of thought is like any other ‘language,’ as this term is normally understood, what is its vocabulary? What is its grammar? Without clarifying these technical issues, talking about ‘translation’ of ‘Mentalese’ into English (or any other natural language) and vice-versa, amounts to mere handwaving” (p. 152). Then by citing Wittgenstein, Narasimhan concludes his arguments in the following manner.

“When I think in a language, there aren’t meanings going through my mind in addition to the verbal expressions: the language is itself the vehicle of thought” (Wittgenstein, 1968: 329). If ‘language’ and ‘verbal expressions’ are interpreted in the larger sense I have been advocating throughout, it is unclear whether ‘Mentalese’ buys us anything extra (p. 152).

Narasimhan seems not to consider what Rizzolatti and Sinigaglia (2008) call “the vocabulary of motor acts” (p. 46).

A cognitive and emotional register that creates a movement vocabulary

After many years of studying, applying, and teaching Laban Movement vocabulary (e.g., Preston, 1963), there is little doubt in my mind that a general vocabulary of motor acts exists and, furthermore, is translated or transformed into infinite choreographies. Whether or not ‘motor acts’ are viewed as that which produces cognitive and emotional universals is, of course, part of the controversy that surrounds the visuomotor system in general, but not this system alone. The visual system comprises distinct and separate ventral and dorsal streams, and a sufficient number of experiments in recent years have shown that the two streams most likely converge, with one accessible to the conscious mind while the other is available through motor acts (Milner & Goodale, 2006; Rizzolatti & Sinigaglia, 2008). Insofar as language is concerned, therefore, the search for cognitive universals required “a large number of languages that represent different language types,” which are necessary “to avoid the problem of overgeneralization from too few language contrasts” (Siple, 1997, p. 31). By studying the abundant crosslinguistic data, therefore, “three types of potential cognitive universal prerequisites have been described” (p. 31). One universal “fully available before language emerges” (p. 31) is the motor ability to produce sounds (phonological sounds in particular) “long before their use in language is acquired” (p. 32). In assessing the pre-language signing stages of hearing and non-hearing alike, it has been proposed that the use of gesture (common to all infants) is another universal motor ability that parallels the ‘babbling’ stage (Siple, 1997). A second universal is exemplified in the work of Piaget, commonly referred to as the stage theory, whereby “language emergence is dependent on completion of the sensorimotor period of development” (p. 32). A third prerequisite is perhaps viewed best by its limitation, which is memory and attention.
Impacting on the development of language, therefore, would be the transformation from a ‘babbling’ stage to one of combining sounds or visuomotor signs to produce meaning as memory and attention allow (Gouzouasis, 1992). As to emotional universals, it is clear that ‘motor acts’ necessarily code sensory information as emotional ‘representations’ that operate at the level of core consciousness to be stored in long term memory—emotions that are dynamically represented in our face and throughout our body. Without this capacity to code sensory information into emotional representations, individuals are all too likely left somewhere on the autism spectrum.

The semantic brain: in search of the language of thought

The problem with semantics qua meaning, however, poses the greatest difficulty to language experts. Most agree that Saussurean semiotics is problematic in light of social relativity (e.g., Sapir, 1949; Whorf, 1943). On the other hand, a nativist position, such as that taken by Fodor, also proves problematic. Innate language structures that conceivably precede any conceptual capacity humans possess suggest that word meanings are ‘hard-wired’ in the brain. In the extreme nativist framework, word meanings would be derived from a linguistic repertoire lying in wait to be triggered by specific language contexts. Thus, as Sapir (1949) remarked, “As soon as the word is at hand…the concept is ours for the handling” (p. 17). Recently observing my two pre-verbal grandchildren, I wonder how children under the age of 28 months, possessing fewer than 40 words, adroitly think and act in ways I wish them not to, whether or not words have yet been ‘handed’ to them. Almost in answer to my query, Vygotsky (1962) expressed the following idea: words are in themselves generalizations that are processed cognitively, with word and thought operating in tandem—not merely within a contextual parameter but as the brain seeks meaningfulness according to the direct interaction with objects or events at hand.

A word does not refer to a single object but to a group or to a class of objects. Each word is therefore already a generalization. There is every reason to suppose that the qualitative distinction between sensation and thought is the presence in the latter of a generalized reflection of reality, which is also the essence of word meaning; and consequently that meaning is an act of thought in the full sense of the term (p. 5).

Nonetheless, below is a perspective from Steven Pinker (2007), which offers a much more detailed view of the complex role of ‘semantics’ in thought and emotion as they are expressed linguistically. Semantics is about the relation of words to thoughts, but it is also about the relations of words to other human concerns. Semantics is about the relation of words to reality—the way the speakers commit themselves to a shared understanding of truth, and the way their thoughts are anchored to things and situations in the world. It is about the relation of words to a community—how a new word, which arises in an act of creation by a single speaker, comes to evoke the same idea in the rest of a population, so people can understand one another when they use it. It is about the relation of words to emotions: the way in which words don’t just point to things but are saturated with feelings, which can endow the words with a sense of magic, taboo, and sin.
And it is about words and social relations—how people use language not just to transfer ideas from head to head but to negotiate the kind of relationship they wish to have with their conversational partner (p. 3, italics mine). To gain understanding with respect to conceptual development, a review of Vygotsky (1962) helps to situate cognitive processes made in Pinker’s assertion above. When tackling the issue of concept formation (i.e., word meaning), Vygotsky noted several flaws found in the design of most word-object comprehension experiments. Up to the time Vygotsky tested his hypothesis on generalized thinking, concept formation had been tested in one of two ways. The first method depended upon word definitions, while the second depended on abstracting a common trait from a group of items. He felt that both of those methods were flawed because word definitions only tap into a child’s acquired verbal knowledge and the role that the symbol plays contextually was disregarded. Nonsense words and a colorful variety of geometric objects, which were then introduced by Ach and Rimat (Vygotsky, 1962), provided a more accurate basis from which to observe conceptual thought. Vygotsky and his associates (1962) conducted an experiment with over three hundred children, adolescent, and adult participants. The experiment consisted of sorting 22 wooden blocks of varying color, shape, height and size, under which were written four nonsense words: lag, bik, mur, cev. The blocks were well mixed at the start of the experiment with nonsense words hidden. The examiner would pick out one of the blocks, reveal the nonsense word, and then ask the participant to pick out all the blocks belonging to it. Each time the examiner would 256 turn up a ‘wrong’ block (i.e., did not fit in the group) based on the participant’s prediction, the task would have to be repeated. Whenever an error occurred, participants re-evaluated their predictions in order to derive meaning from the nonsense word and place the blocks in appropriate groupings. Eventually, participants achieved the goal of grouping the blocks into their shared characteristics. One indicator that cognition was being enacted before word comprehension was through participant utterances. For instance, if the examiner turned up a block that differed from the others which had been predicted by color, the subject would exclaim, “Oh, then it is not color...” Despite similarities between the ages, insofar as demonstrating cognitive activity, Vygotsky observed characteristics in thought which were unique to the child participants. The experiment suggested that complex thinking moves through three distinct phases, the first of which had three stages: (1) trial and error, (2) spatial positioning (i.e., objects were grouped as they entered a child’s visual field) and (3) heaping (a reshuffling of the first two stages). The second phase consisted of placing blocks into family groupings, indicating that the objects were beginning to form bonds. Finally, in the third phase, considered to show complex thought, objects had to be determined as maximally similar, i.e., “small and round, or red and flat” (p. 76). From the latter, more advanced phase, Vygotsky concluded the following. The advanced concept presupposes more than unification. To form such a concept it is also necessary to abstract, to single out elements, and to view the abstracted elements apart from the totality of the concrete experience in which they are embedded. 
In genuine concept formation, it is equally important to unite and to separate: Synthesis must be combined with analysis. Complex thinking cannot do both. Its very essence is overabundance, overproduction of connections, and weakness in abstraction. To fulfill the second requirement is the function of the processes that ripen only during the third phase in the development of concept formation, though their beginnings reach back into much earlier periods (p. 76).

Two key observations were made from those initial experiments, which differed from Piaget’s findings: (1) association and object permanence were insufficient causes to show concept formation, and (2) a child’s ability to form concepts sharply increases upon reaching the age of twelve. In Vygotsky’s (1962) view, “Ach’s experiments showed that concept formation is a creative, not a mechanical passive process; that a concept emerges and takes shape in the course of a complex operation aimed at the solution of some problem; and that the mere presence of external conditions favoring a mechanical linking of word and object does not suffice to produce a concept” (p. 54). I have italicized a portion of the phrase for important reasons. In my initial study of Vygotsky’s work, with little corroboration from neuroscience, I had overlooked a key aspect of cognitive functions, namely, the drive toward a goal and the motor system required to carry out that drive. The brain’s goal-directed disposition and subsequent motor activity in relation to higher order thinking has found new importance in all aspects of the development of mind. Later, in my analysis of Ethical Chasm, I give an illustration of the goal-seeking brain in a filmic context. At this juncture, however, what is important to note is that conceptual thinking cannot arise merely from the memorization of a word, which is then applied to an object or vice versa (a common means of concept attainment in educational settings). Rather, words are in themselves a creative solution to a problem. Although Ach posited a “determining tendency,” which results from the aims of a goal, Vygotsky (1962) noted that infants and children before the age of twelve are able to undertake experimental tasks demonstrating cognitive flexibility. Moreover, after the age of twelve and beyond, it is noted that mature minds with developed vocabularies approach problems in similar fashion to those less mature, although they differ in the procedural means for arriving at the solution. In Vygotsky’s view, it appeared that there were other factors, not yet explored, that were responsible for the basic differences between adult and child conceptual thinking. With a similar concern, Pinker (2007) argued that the bias that casts hunter-gatherers as possessing ‘primitive’ cognitive abilities caused by the ‘absence’ of words originates from a Sapir-Whorf framework. Many hunter-gatherer peoples, as Pinker (2007) explains, such as those living in the Brazilian Amazon, “count with only three number words, meaning ‘one,’ ‘two,’ and ‘many’ ” (p. 138). From a “strong Whorfian theory,” according to Pinker (p. 138), one would assume that it is the word that enables our ‘advanced’ society to distinguish objects and to count using an elaborate numbering system.
In studies of infants under twelve months of age, for instance, who appeared not to be able to “distinguish toys from each other” or to keep “track of how many there were,” these abilities were believed to be enabled only after the acquisition of words entered their universe (p. 136). But as Pinker points out, “Deaf people who grow up without a signed or spoken language certainly don’t act as if they fail to distinguish bicycles, bananas, and beer cans when keeping track of the things around them” (p. 137). Being “baffled by the prevalence of the ‘one, two, many’ counting systems among illiterate peoples” led Pinker to explore this issue with the anthropologist Napoleon Chagnon (p. 138). What Chagnon explained is that Amazonian tribes “don’t need exact numbers because they keep track of things as individuals, one by one. A hunter, for example, recognizes each of his arrows, and thereby knows whether one is missing without having to count them” (p. 138). The difference, therefore, is between the precise counting of ‘parts’ that are missing versus the general sense of the whole missing a part. Given the vast distinction between hunter-gatherer societies and modern ones, whereby keeping track of arrows differs significantly from keeping track of “exact magnitudes, particularly when they are traded or taxed,” one may easily conclude that “more sophisticated systems capable of tallying exact large numbers emerge later, both in history and in child development” (pp. 138-139). But herein one notes that, rather than the word preceding the concept, it is precisely the distinction between being a hunter-gatherer and being a modern human that accounts for any differences at all. It is the “lifestyle, history, and culture of a technologically undeveloped people, [which] will cause it to lack both number words and numerical reasoning” (p. 138). As I explained throughout Chapter 3, however, concept formation only partly addresses the development of thought, which allows one to find meaning in the physical world. Meaning, as it arises, must be a convergence between conceptual and emotional understandings that includes intention.

The flip side: mirror neurons and the study of the visuomotor system

In an interview some time ago, the great theatrical director Peter Brook commented that with the discovery of mirror neurons, neuroscience had finally started to understand what has long been common knowledge in the theatre: the actor’s efforts would be in vain if he were not able to surmount all cultural and linguistic barriers and share his bodily sounds and movements with the spectators, who thus actively contribute to the event and become one with the players on the stage (p. ix).

Among the most exciting discoveries in neuroscience is the potential for understanding more completely the role of mirror neurons, the full comprehension of which may be the foundation for new directions in language and cognitive research that could significantly alter current approaches in education. One only needs to read the above start of the preface to Mirrors in the brain—how our minds share actions and emotions, written by Rizzolatti & Sinigaglia (2008), to feel the magnitude of this discovery. The initial discovery of mirror neurons, by di Pellegrino and associates (1992), came about serendipitously in the course of studying brain activity in the visuo-motor cortices of monkeys, “in which the monkey had not been trained to perform specific tasks but was able to act freely” (Rizzolatti & Sinigaglia, 2008, p. 79).
The element of surprise came when it was discovered that “neurons were found which became active both when the animal itself executed a motor act (for example, when it grasped food) and when it observed the experimenter doing it” (p. 79). In other words, motor neurons specific to certain acts fired not only when monkeys performed those acts but also when the monkeys merely observed the experimenters performing them. Prior to this discovery, it had been observed that canonical neurons were known to discharge at the “sight of food or other three-dimensional objects” (p. 80). Unlike canonical neurons, however, mirror neurons neither discharged at the mere sight of objects nor were influenced by the size of the stimuli. Additionally, it was observed that mirror neurons in non-human primates did not discharge when movement was that of a “mimed motor act or intransitive action (i.e., without a correlative object) such as raising the arms or waving the hands” (p. 80). By contrast, mirror neurons were later observed to fire in humans during mimed motor and intransitive actions. In this instance alone, one may begin to deduce that the differences between the discharging of mirror neurons in non-human primates and humans point to differences in language development that begin in the visuo-motor regions. That is to say, since the visuo-motor system is known to play a significant role in the development of speech (among other symbolic activities), the existence of mirror neurons and the differences found between non-human animals and humans carry implications with respect to the ‘language of thought’ as it has been posited by Pinker (2008)—not only impacting language but other symbol systems. Although inferences between the discovery of mirror neurons and a ‘language of thought’ have only been hinted at, the implication of mirror neurons at the level of motor activity as part of cognitive function, such as the view of ‘action understanding,’ has led to renewing the heated debate regarding perception and cognition (Milner & Goodale, 2006; Rizzolatti & Sinigaglia, 2008). For many philosophical reasons previously stated, differing stances toward perception and cognition have made for contentious arguments with respect to interpreting the role of mirror neurons (Hickok, 2008). Yet as one begins to decorticate the highly technical literature, it becomes more and more clear that this new evidence surrounding brain activity has the potential to overturn the radical separation of cognition and perception, which could lead to untold paradigm shifts in learning contexts if interpretations begin to align with practice.

From elementary acts such as grasping to the more sophisticated that require particular skills such as playing a sonata on a pianoforte or executing complicated dance steps, the mirror neurons allow our brain to match the movements we observe to the movement we ourselves perform, and so to appreciate their meaning. Without a mirror mechanism we would still have our sensory representation, a ‘pictorial’ depiction of the behavior of others, but we would not know what they are really doing. Certainly, we could use our higher cognitive faculties to reflect on what we have perceived and infer the intentions, expectations, or motivations of others that would provide us with a reason for their acts, but our brain is able to understand these latter immediately on the basis of our motor competencies alone (Rizzolatti & Sinigaglia, 2008, p. xii).
Thus, despite explanatory errors that may arise when certain interpretations of recently discovered data are applied to events in practice, such as I have explored, the development of new research questions and experiments outweighs the risks of being entirely wrong. However, before addressing implications of the role that mirror neurons may play as foundational to the ‘language of thought,’ the primal substrate that enables our human capacity for reason and expression in language, film, music, dance and the like, a brief overview with respect to this discovery is helpful to situate the reader.

The discovery of mirror neurons shifts perception studies

Much of what we began to understand about the human brain came as a result of studying perplexing states of agnosia caused by neural injuries, birth defects, and illnesses, which produce a loss of knowing (i.e., gnosis), that is, of the ability to recognize the physical properties of objects, persons, sounds, shapes, and smells normally gained through the primary sensory systems but damaged by lesions in the visual, auditory, somatosensory (touch and proprioception), gustatory and olfactory cerebral cortices. Another means of studying brain functions was derived through the study of animals, in particular non-human primates, whose results neurologists today have been able to generalize to human anatomy and function through extensive examination of the results of brain imaging between species. Prior to the study of agnosia, of course, was the discovery of language deficits caused by injuries to specific brain areas first identified by Broca and Wernicke. But whether brain lesions cause the conditions of aphasia or agnosia9 (see footnote for examples of categories), both point to a sensory, perceptive, or motor deficit, which affects the processing of information either at the point of intake (i.e., the primary sensory systems processing signals that later cannot be ‘unscrambled’ by the sensorimotor cortices) or during processing within the sensorimotor cortices themselves. The relationship between mirror neurons and brain lesions that interrupt either the ‘knowing’ of visual, auditory, and motor information or the ‘interpreting’ of verbal or written output is by no means insignificant. For one thing, mirror neurons explain a good deal with respect to ‘blind-sight,’ which allows individuals with cortical blindness to smoothly navigate around obstacles (Ramachandran, 2008). Their discovery has also been applied in designing new therapies for reducing or eliminating phantom limb pain, or for cuing social information in the case of autistic children with the use of video images produced specifically for this purpose (Ramachandran, 2008). Of course, in both instances, mirror neurons may appear to some as the mysterious ‘unconscious,’ whose activity does not rise to extended consciousness.
But the discovery of mirror neurons, in tandem with some very important discoveries of the visuo-motor system, cannot be so easily dismissed as mere unconscious activities that ‘mysteriously’ give rise to states of cognition or emotion.

Footnote 9: alexia: inability to recognize text; akinetopsia: loss of motion perception; amusia: inability to recognize music; anosognosia: inability to be aware of physical states, especially paralysis; apperceptive agnosia: inability to copy images; apraxia: inability to produce motor output in speech, movement, visual tracking; associative agnosia: inability to recognize scenes and objects; auditory agnosia: inability to distinguish sounds (e.g., speech and environment); autotopagnosia: inability to orient parts of the body; mirror agnosia: inability to distinguish a real object in hemispatial neglect (i.e., when a mirror is placed in the area of visual neglect, the patient grasps for the ‘mirror object’ as though it were real); prosopagnosia: inability to recognize faces (i.e., face blindness).

Rather, the discovery and interpretation of mirror neurons are fundamental to understanding how we process and translate signals perceptively, cognitively, and emotionally—a discovery that is clearly important in understanding how we learn. With experimental data gained from the study of non-human primates, neurologists began to explore new experimental means to understand patients suffering the consequences of brain lesions. One patient, known only as DF and studied extensively by Milner and Goodale (2006), had suffered “vast lesions to the occipito-temporal lobe” due to asphyxiation (Rizzolatti & Sinigaglia, 2008, p. 39). “Through a series of ingenious experiments, they showed that DF had relatively normal sight with regard to the elementary visual properties (e.g., visual acuity), but was totally unable to distinguish between even the most basic geometric forms.” Additionally, although “DF’s capacity to discriminate forms was badly impaired, she was still able to interact with objects,” for instance, when catching a ball (p. 39). In general, patients suffering from agnosia “have a problem in recognizing objects which they have no difficulty in consciously detecting” (Milner & Goodale, 2006, p. 121). DF could just as easily catch a ball without the slightest knowledge that it was a ball she was catching. But what accounts for the division of labor between that which is perceived or consciously detected and that which is ‘recognizable’ or understood to mean something? For that matter, how does an individual with such an odd visual impairment, not the least bit affected in acuity, learn to interact with objects in a meaningful manner? According to Milner and Goodale (2006), the discovery in 1982 of two anatomical visual cortical streams, as these were first identified by Ungerleider and Mishkin (Milner & Goodale, 2006), led most to believe that the division of labor between the two was a “simple partitioning of the analysis performed on the visual array” (p. 40). Hence any distinction was based on input that coded for either object or spatial vision (i.e., visual attributes versus location) and the “product of this processing” would achieve a “single combined representation of our visual world that provides the foundation for thought and action” (p. 40).
Though the evidence for the ‘division of labor’ between the two visual systems has not been refuted, the view that the outcome was of a single representation did not fit with the evidence observed in real life encounters or experimental designs recreated in ‘natural’ settings. In other words, the problem, which was encountered in cases of agnosia, is that thought and action are not so cleanly expressed from a monolithic view. In fact, much thought and action originate from disparate and distinct visual processing streams that produce unique mental models. When one or the other stream is impaired, i.e., ventral or dorsal, the result will sometimes be erroneous and at other times quite accurate representations, which naturally leads researchers to take more account of output processing. An example of the importance of observing the output of visual processing can be taken from the study of color vision. The study of color vision, which depends on the quality and number of cones in the retina that code for the primary colors of red, green and blue, was mostly focused on input mechanics. Nonetheless, in the case of total color vision impairment caused by the absence of cones in the retina, one is offered a deeper understanding of the output processes. Oliver Sacks (1996), for one, found that individuals lacking cones in the retina, but possessing the necessary rods that translate light into monochromatic values, had a rich and unusually deep relationship with objects and events that differed significantly from the way in which he understood reality. In filmic terms, a neurological basis, not merely a social, psychological or cultural one, for the differences found between viewing black-and-white motion photography and color is likely to have shaped the evolution of how we interpret classic film works. Insofar as the visual system is concerned, what became an important shift in neurological thinking, which impacted on both the design of experiments and the interpretation of data, was the manner by which the two visual streams eventually had to be viewed as not solely coding input information (i.e., the dorsal stream for localization and the ventral stream for attributes of size, shape, orientation and color). Rather, it is in examining the “output characteristics of the two cortical systems” that we are presented with a more complete view of the visual-motor system at large (Milner & Goodale, 2006, p. 40). Out of the input functions found to be coordinated by the dorsal and ventral visual streams, “each component of the action” must ensure that the “action can be correctly executed with respect to the goal object” (p. 41). In this sense, it is likely that the two streams converge to maximally allow appropriate actions.

Reaching out and grasping an object, for example, is a complex act requiring coordination between movements of the fingers, hands, upper limbs, torso, head and eyes. The visual inputs and transformations required for the orchestration of these movements will be equally complex and differ in important respects from those leading to the identification and recognition of objects and events in the world. Thus, to fixate and then reach towards a goal object, it is necessary that the location and motion of that object be specified in egocentric coordinates (that is, coded with respect to the observer). But the particular coordinate system used (centered with respect to the retina, head, or body) will depend on the particular effector system to be employed (that is eye, hand or both) (p. 41).
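To make the quoted idea of egocentric coding concrete, the following minimal sketch is my own illustration in Python; the two-dimensional geometry, the numbers, and the ‘cup’ example are illustrative assumptions, not anything drawn from Milner and Goodale. It contrasts a viewer-centered description of an object, which must be recomputed every time the observer moves, with a viewpoint-independent description of the object’s identity and size.

# A toy 2-D illustration of viewer-centered (egocentric) versus
# viewpoint-independent coding. All values and names are hypothetical.
import math

def egocentric(obj_xy, observer_xy, observer_heading_rad):
    """Re-express an object's world position relative to the observer's
    position and facing direction; this action-oriented description must be
    recomputed whenever the observer moves."""
    dx, dy = obj_xy[0] - observer_xy[0], obj_xy[1] - observer_xy[1]
    cos_h, sin_h = math.cos(observer_heading_rad), math.sin(observer_heading_rad)
    # Rotate the displacement into the observer's frame of reference.
    return (cos_h * dx + sin_h * dy, -sin_h * dx + cos_h * dy)

if __name__ == "__main__":
    cup_world = (2.0, 3.0)                                # the cup's fixed place in the room
    cup_identity = {"kind": "cup", "diameter_m": 0.08}    # viewpoint-independent description

    for observer, heading in [((0.0, 0.0), 0.0), ((1.0, 2.5), math.pi / 2)]:
        print(egocentric(cup_world, observer, heading), cup_identity)
    # The egocentric coordinates change with every step or turn, one way to
    # picture the visuomotor system's very short 'memory,' while the stored
    # identity and size of the cup stay the same across viewpoints.

However crude, the sketch shows why an action-oriented description has only momentary usefulness, whereas the stored identity of the object remains serviceable across viewing conditions.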
Thus far, this draws upon the fact that perception and cognition are distinct but interact one with the other to achieve both universal and particular interactions with objects and events in the world. Moreover, as far as gaining perspective is concerned, if actions are to be fluent at the individual level, “visual coding for the purposes of perception must deliver the identity of the object independent of any particular viewpoint” (p. 41). This complex ‘grasping’ of an object requires a good deal more than simple perception. Similarly, to form the hand and fingers appropriately for the grasp, the coding of the goal objects’ shape and size would also need to be largely viewer based, that is, the object’s structure must be coded in terms of its particular disposition with respect to the observer. In addition, since the relative positions of the observer and the goal object will change from moment to moment, it is obvious that the egocentric coordinates of the object’s location and its surface and/or contours must be computed on each occasion that the action occurs. A consequence of this last requirement will be that the visuomotor system is likely to have a very short ‘memory’ (p. 41). 266 The preceding describes the importance of coding in the visual system, while also allowing for the retrieval of object identity. Ultimately, “it is objects, not object views, that the perceptual system is ultimately designed to deliver” and by consequence the “characteristics of objects can be maintained across different viewing conditions” (p. 42). What this points to is that outputs are especially suited for “the long-term storage of the identities of objects and their spatial arrangements” (p. 42). In short, all of the above leads to the conclusion that two streams necessarily evolved distinctly though interdependently in order to code and transform distinct input information for appropriately harmonized action outputs to be performed. Importantly, given the two distinct divisions of labor between the ventral and dorsal streams that make up the visual cortex, “lesions of the ventral stream can also result in disorders in the recognition of spatial relations between objects in the world” (p. 121). As it turns out, spatial recognition disorders impact dramatically on social, linguistic, and numerical comprehension, which in turn prove to impact on the emotional resonance one develops with the world. In fact, the salience of spatial recognition is not merely a perceptive ability to be added to a string of abilities that lead to cognitive functions, it is fundamental to cognitive development. Hypothesizing the function of mirror neurons At any rate, what was discovered, as early as 1890, was that patients with cortical lesions were described as suffering from two kinds of visual agnosia. The first, which was termed, apperceptive, was an inability to perceive the coherence of object structure, and the second, termed, associative, retained the ability to detect structure but had no way of recognizing the object. Whereas the apperceptive agnosic had some form of disruption in the “early state of perceptual processing” (p. 122), the associative agnosic appeared to be at a “higher cognitive level of processing where percepts would normally be associated with stored semantic information” (p. 122). 267 Taking a closer look at mirror neurons, this discovery in particular led to making further distinctions in the functions of the visuo-motor system and, by consequence, radically transformed perception studies. 
To begin, it was once assumed that the act of reaching toward an object and grasping it produced interdependent and inter-coordinated actions, with the latter causing the shaping of the hand to occur by consequence of the former. But it was found that “the arm moves towards a cup and contemporaneously the hand assumes the shape necessary to grasp it” (Rizzolatti & Sinigaglia, 2008, p. 21). In order for the hand to actually grasp an object, the brain has to:

(1) possess a mechanism which transforms the sensory information relative to the geometrical properties (the ‘intrinsic properties’) of the object to be grasped into an appropriate shaping of the fingers; (2) be able to control the movements of the hand, particularly of the fingers, to execute the actual grasping (p. 21).

Notably, although initial studies conducted on non-human primates were done by training monkeys to perform certain movements, it was when studying monkeys performing a “wide range of spontaneous movements in as natural a context as possible” that new discoveries were made (p. 22). Although such allowances may be viewed as introducing a ‘subjective’ approach, this kind of study, according to Rizzolatti and Sinigaglia (2008), “is less susceptible to preconceived notions which often run the risk of degenerating into pure prejudice” (p. 23). Beginning with a detailed overview of the organization of cortical areas and their connections in the brain of monkeys alongside homologous regions in humans, Rizzolatti and Sinigaglia (2008) state that “experimental data significantly changed the view of the motor system which dominated the scene in physiology and the neurosciences for many years” (p. 19). By detailing the organization and connections of neurons in this manner, they were able to show that assumptions once held according to topographical maps drawn out by Woolsey and Penfield were no longer “adequate” (p. 19). For instance, studies showed that it was an “oversimplification” that the “sensory, perceptive, and motor functions are housed in distinct and separate regions” and, moreover, given the “vast number of structures and functions found to belong to the motor system,” it became “increasingly evident that its role cannot be that of a passive executor of commands originating elsewhere” (p. 19). The following further elaborates the inadequacy of earlier views.

The agranular frontal cortex and the posterior parietal cortex are composed of a mosaic of regions, which are strongly interconnected but are anatomically and functionally distinct, forming circuits, which work in parallel and integrate the sensory and motor information relative to specific effectors. The same holds true for the circuits [that] involve the prefrontal and cingulated cortices, which are responsible for forming intentions, long-term planning, and deciding the appropriate time to act (p. 19).

In monkeys, the motor regions identified from F1 through to F7 are anatomically analogous to the structures mapped in humans by Brodmann (Teasdale et al., 1999). The discovery that was made with monkeys is that the majority of F5 neurons (Brodmann’s area 44, i.e., the posterior part of Broca’s area), which contain motor representations of the mouth and hand that partially overlap, seemingly have direct access to visual stimuli, and thus code motor acts that are “goal directed” but not “individual movements” as other more specific neurons (e.g., F1 motor neurons) are designed to do (p. 23).
For instance, F5 neurons would discharge whether grasping with the right or left hand or with the mouth, yet not during other “seemingly related acts” (e.g., bending the finger for grasping but not when scratching). Similar data have also been reported in humans: fMRI studies have shown that in normal subjects the sight of graspable instruments or objects activates the area of the premotor cortex which is considered to be the human homologue of F5 both when prehension is required and when it is not (p. 28). The perplexing part of this discovery was in the interpretation of the data. First, classifying F5 neurons as solely motor did not reconcile the fact that these neurons showed visual responses to objects. Thus one wondered whether the neurons were discharged as expressions of intention or merely attention. Yet, neither hypothesis seemed to satisfy the fact that the neurons selectively responded to objects (e.g., certain geometrical solids but not others) that required varying 269 responses (e.g., whether the thumb was required or not), and strongly discharged independently of an actual motor response. In other words, one could assume that responses were either of a motor or visual nature, but if solely the former, it would be difficult to explain the fact that motor neurons discharge in the absence of effective movement (p. 30). The question, therefore, was to determine the role of neurons that enable “the transformation of visual information into the motor format required for the execution of an act” (p. 34). One foundational view provided a framework for designing experiments based upon our interaction with objects. The notion of affordance (as first introduced by James J. Gibson) gave a global sense of what may be at stake when an object is attended to for it “implies the immediate and automatic selection of those of its intrinsic properties that facilitate our interaction with it” (p. 34). In other words, the object, which affords an organism with an immediate set of objectives and calculated parameters for interacting with it, “incarnates practical opportunities that the object offers to the organism which perceives it” (p. 34). Such a viewpoint naturally resonates with a functionalist or pragmatic sensibility. Thus, affordance allows one to admit that an object does not merely trigger potential motor acts based on its intrinsic properties (e.g., shape, size and orientation), rather intention and attention must inevitably trigger neurons that are meant to “code a specific affordance” (p. 35). So that, for instance, through specific feedback circuits, “only those which permit us to form adequate motor behavior will remain” and “once we have discovered how to conjugate the different kinds of motor acts with specific visual aspects relative to objects, which therefore become object affordances, our motor system will be able to perform all the transformations necessary to carry out any act” (p. 35, italics mine). Taking a cup of coffee as an example, Rizzolatti and Sinigaglia (2008) maintain that any grasping to be done will depend on the intentional context. For instance, whether the cup is 270 meant to be poured, filled, moved, washed, hung on a hook, or whether the circumstances depends on an emotional register such as fear or duty (e.g., taking note of whether the liquid inside is hot or requiring some specific ceremonial or cultural act). 
Thus, in the simple act of grasping, beyond the visual properties the object affords, the cortical activities that must align themselves with the act appear to be “motivational or decisional in nature” and, hence, “involve other areas in the prefrontal cortex” (p. 36).

To summarize the complex actions of mirror neurons, several discoveries were made that were homologous with humans but differed in some important areas. First, the motor properties of mirror neurons were identical to those of other F5 neurons, “in that they discharge selectively during specific motor acts,” but they differ in their visual properties (p. 80). As mentioned, responses did not change with respect to the size of the visual stimuli and, in the case of monkeys, were absent during intransitive (e.g., waving the hand) or mimed actions (p. 80). Nonetheless, communicative neurons (e.g., those for lip protrusions and lip smacking), coding acts that have been shown through studies to have “evolved from a repertoire of movements…associated with ingestion and linked to grooming,” do respond to the sight of intransitive acts (p. 89). Lip protrusions and lip smacking, which are part of the act of grooming in non-human primates, are considered communicative due to the manner by which grooming “is one of the principal ways of affiliation and social cohesion” (p. 90). Analogous mirror neurons in humans do discharge during intransitive and mimed acts, whether the action is communicative or otherwise (p. 123). Certainly this has many implications for the arts, as much as it does in the case of signed languages.

Second, mirror neurons “trigger when a specific type of act is observed” (p. 81). Specific neurons discharged, for instance, when grasping, holding, manipulating, and placing. Moreover, there were congruencies with visual responses, that is to say, “the congruence between the coded motor act and the observed motor act which triggered it” (p. 82). Interesting properties included the fact that discharging neurons were not influenced by distance and spatial location, though they were in fact influenced by direction. For instance, when experimenters turned knobs clockwise or counterclockwise, the mirror movement of the hand could be seen to follow likewise (p. 80). Moreover, in monkeys, neurons in the anterior part of the superior temporal sulcus, which are purely visual, “respond selectively to the sight of a wide range of body movements performed by another individual: some become active when the monkey observes head or eye movements, others respond to the observation of movements of the trunk and legs (walking), and others again code specific hand-object interactions” (pp. 91-92).

The specificity of the discharge of mirror neurons is what makes their study so intriguing. Mirror neurons, it turns out, are not situated solely in the visual system but are distributed throughout the parietal and temporal lobes (i.e., auditory, somatosensory); they are highly specialized, targeting some but not all sensory and motor signals, and they also communicate across the brain to areas of significance, especially in higher order cortices. In hypothesizing the function of mirror neurons, therefore, Rizzolatti and Sinigaglia suggest that “the activity of mirror neurons cannot be satisfactorily explained as a form of preparation to act” (p. 95). This was determined by the fact that when a monkey observes another picking up food which is outside its reach, the monkey “has no reason to prepare to act” (p. 96).
Moreover, mirror neurons do not become active at the “sight of a motor act followed by its execution,” for instance, when the monkey sees the experimenter handing it food, which then leads to grasping or grasping and holding (p. 96). Thus, “had their response been linked to the preparation of an action, [mirror neurons] should have been active during the phase prior to the monkey’s execution of movement” (p. 96). It does not follow, therefore, that there is an automatic ‘triggered’ response function for the existence of mirror neurons.

Marc Jeannerod (as cited in Rizzolatti & Sinigaglia, 2008), on the other hand, gave a more “sophisticated” interpretation that would hardly surprise a music or dance educator (p. 96). Utilizing the music classroom as an example, he describes a pupil “attentively watching his maestro playing a complicated passage on the violin that he will be required to replicate when the maestro has finished playing” (p. 96). At this juncture, it is enough to recognize that Jeannerod was pointing toward the act of imitating, whereby mirror neurons function to code specific acts (e.g., finger placement, shape and direction of movement), which are then performed by memory. Hypothesizing that mirror neurons functionally exist to perform imitative acts would make sense in light of the fact that imitation is of primary importance in the development of human cognition and emotion, and its absence is seen to be dramatically disruptive (e.g., in cases of autism). But this hypothesis does not describe the scope of functions for mirror neurons.

Mirror neurons beyond imitation: learning new action patterns

If the function of mirror neurons were solely imitative, there would be no reason to expect anything more at a cognitive level than an ‘automated’ response to internal motor representations, which are then translated into a concrete sequence of actions. What Rizzolatti and Sinigaglia suggest, in addition, is that imitation afforded by mirror neurons must involve the “understanding of the meaning of motor events, i.e., of the actions performed by others” (p. 97). Their use of the term understanding “does not necessarily mean that the observer…has explicit or even reflexive knowledge that the action seen and the action executed are identical or similar” (p. 97), but rather as elaborated in the following.

What we are saying is much simpler: we are referring to the ability to immediately recognize a specific type of action in the observed ‘motor-events,’ a specific type of action that is characterized by a particular modality of interacting with objects; to differentiate that type of action from another, and finally, to use this information to respond in the most appropriate manner (pp. 97-98).

The ability to respond ‘in the most appropriate manner,’ which Rizzolatti and Sinigaglia (2008) have termed ‘action understanding,’ is thought to correspond to the developmental linking of the visuo-motor system that begins in infancy. For instance, the fact that we become aware that to grasp an object requires fixing our gaze upon it ‘teaches’ that the direction of the gaze is not accidental; rather, it is part of “our vocabulary of acts” that, when replicated in others, resonates with our motor system. In short, “we recognize the intentional aspect of movements and understand the type of action” (p. 100).
The function of mirror neurons, therefore, is a “form of implicit understanding of pragmatic, and not reflexive, origin; it is not constrained by a specific sensory modality, but is bound to the vocabulary of acts which regulates and controls motor execution” (Rizzolatti & Sinigaglia, 2008, p. 106). This would mean that knowledge of our own acts “is a necessary and sufficient condition for an immediate understanding of the acts of others,” the evolution of which is of “fundamental importance for building a basic intentional cognition” (p. 106). Thus grasping or lip smacking are no longer just motor acts being performed but ones that lead to practical ends (i.e., eating and communicating), and the intention not only “exceeds the single act” but “modifies its meaning in the one sense or the other” (p. 114). If this were not so, the “fluidity of movement” (p. 114), from sensory to motor act, which Luria (1972) called ‘kinetic melodies,’ would simply not be possible.

Hypothesizing the relationship between mirror neurons and conceptual semantics

Viewing the brain as a fluidity of movement, whereby intent plays a significant role, one can better focus on Pinker’s (2007) observation that “the mind has the power to frame a single situation in different ways” (p. 45). This linguistic ‘flip’ is similar to the face-vase, Necker cube or spinning dancer illusions, which offer two views according to differences in visual ‘framing.’ Exploited by artists and neurologists interested in assessing where the brain is making a decision, visual illusions have long intrigued psychologists. However, despite all sorts of psychological claims (of the kind once made of Rorschach inkblot tests), the brain has evolved with what is referred to as bistable perception, namely, an ambiguous view that allows one to ‘flip one’s mind’ dependent on context (Parker & Krug, 2003).

Investigators are now beginning to think about the relationships among the particular signals that correlate with perceptual decisions when subjects view ambiguous figures and the more general neuronal signals that are involved in perceptual and cognitive decisions. For example, neuronal mechanisms that underlie attention have been studied in depth in the past few years, and an important future activity is to determine how the signals identified in the attention paradigm relate to those identified in perceptual decision-making (p. 434).

This capacity for visual ambiguity, which is not restricted to the visual system but is also found to occur in the auditory system, as in the ‘tritone paradox’ (Deutsch, 1991, 1992), is thought to have arisen evolutionarily because the brain must take in a very restricted amount of information and make dizzying calculations that, in nature, had to be quite accurate to allow maximal survival. Linguistically, it was Chomsky who first showed that syntactical inversions were grammatically possible, a possibility that allows an individual to ‘flip the frame’ of any given event. As Pinker (2007) demonstrated in his examples of the financial ‘framing’ of the Twin Towers attack as ‘one’ or ‘two’ terrorist events (with the latter doubling the cost of insurance), this could be viewed as economic ‘survival,’ with the outcome decided in a court of law.
The ability to linguistically frame an event in two different ways due to ambiguities has been the hallmark of comedians and politicians, but it was also skillfully rendered in the out-of-sequence shots of the French New Wave films, by François Truffaut (1960) in Shoot the piano player and Jean-Luc Godard (1965) in Pierrot le fou (a style that was ‘retrieved’ a generation later in Dogme 95 and Quentin Tarantino films).

What neurologists have attempted to show is that the cognitive flexibility we possess to ‘flip’ objects in visual and auditory illusions, and in linguistic or filmic events, operates with less whimsy in ‘reality,’ a fact that is borne out in countless experiments. To be able to cognitively ‘grasp’ (i.e., comprehend) an object perceived concretely depends on our ability to code its affordance, which ultimately inhibits inverting certain kinds of visual-auditory-motor strings or sequences. In other words, ‘flipping the frame’ must still maintain the logic of affordances, or else the possible sequence of causally connected events in the world would render our view highly unstable.

Through extensive analysis, Pinker (2007) demonstrated that certain linguistic ‘inversions’ are simply not possible (if language is to make sense to a community of speakers). This is true of predicates, for instance in the sentence ‘John is blue,’ where ‘is blue’ (i.e., the predicate) is the verb plus complement that modifies the subject ‘John.’ Pinker pointed out that ‘object affordances’ do not allow one to simply ‘cut and paste’ verbs (i.e., action) in a sentence willy-nilly. First, in the case of linguistic semantics, the predicate is indicative that something is true, e.g., ‘the wagon is loaded,’ and, hence, must have a basis in ‘reality’ if it is to be trusted. Second, in the case of transitive verbs (one that needs a direct object) and intransitive verbs (one that does not), a transitive verb such as ‘sprain’ cannot be used intransitively without sounding odd. The verb ‘sprain,’ in other words, requires a direct object, as in the sentence, “Shirley sprained her ankle,” because to say “Shirley sprained” would sound peculiar and incomplete (Pinker, 2007, p. 31). By the same token, intransitive verbs such as ‘snore’ may be used without a direct object, as in the sentence, “Max snored,” and, conversely, would sound peculiar if one said, “Max snored a racket” (p. 31).

The way we utilize transitive and intransitive verbs, of course, has everything to do with the way the brain operates. It is one thing to note that ‘Sally waved’ – a fact that may or may not be coded for future action to be taken. But it is quite another thing to note that ‘Sally waved the police car down.’ Clearly the verb wave may be used either transitively or intransitively in a sentence, but it is only in the second sentence that something of importance would require one’s attention—an action to be remembered in the future or an action that required another action to be taken, all the while operating with an emotional register relaying important information. On the other hand, the verb ‘to sprain’ is always transitive precisely because to sprain something is an egocentric action – always in reference to one’s body parts – and is noted by the brain as rather important. Whereas the verb ‘to snore’ is intransitive because ‘snoring,’ despite its egocentric view, does not usually require urgent attention. What is important to understand about language is that verbs anchor the meaning around which all sentences arise.
Anyone who has struggled to learn a new language beyond childhood will attest to the fact that memorizing nouns, pronouns, adjectives, adverbs, and, to some degree, prepositions offers very little by way of communicating much of anything. While arguably those words are descriptively useful for the purposes of identifying, locating, or indicating the ‘addressor or addressee,’ namely, who is doing the talking or pointing, and to whom (also known as ‘enunciation’ in literary and film circles), the information provided remains, more or less, at a ‘nothing happening’ state and, thus, is not given much ‘meaning’ in the general scope of causal reality. This is because the purpose of human communication is to exchange thoughts on the states in which objects or events have been changed, are presently occurring, or will occur in a world always in flux—sometimes advantageous and other times not. Presumably, the human brain focuses on past states, which are necessary for inferring causal relations, predicting, and planning future outcomes.

Hence, a beginner’s foreign language course proves to be very ineffective in real settings if the focus remains fixated on memorizing words other than verbs, and simply frustrating if the course is designed to introduce only present tense verbs (often with the view that conjugations in most languages are difficult beyond the present tense). Courses of this kind demonstrate learning language by ‘rote’ and show just how impossible it is to learn a language simply by parroting phrases or parts of phrases one stores in memory like a telephone number. This fact is borne out by the number of ‘repeat’ beginners who remain at a beginner level notwithstanding, and retain not much more than the first few words acquired that make ‘sense.’ I suspect that learners who insist they cannot master a second language are bereft of knowledge of their own. The fact that the brain is able to store visual or auditory strings to be ‘parroted’ is most likely what Gordon referred to with respect to ‘memorization,’ but that concept is too simple. Whether it is learning language or music, memorization of longer strings of data (i.e., syntax) requires specific ‘coding’ often aided by mnemonic devices, devices that require ‘meaningful’ connections through the senses as these are interpreted in temporal and spatial frameworks.

Thus, while living in Costa Rica, my experiences as a beginner ‘second’ language learner demonstrated how futile it was in common day-to-day exchanges to (1) memorize a few short sentences, (2) be able to name a list of objects, or (3) only know how to conjugate in the present tense. Memorized sentences did not allow me to create or elaborate a single thought, and knowing a list of nouns made no headway in ‘locating’ or naming an object. Rather, nouns simply state the obvious (unless preceded with the interrogatives how, where or when, or spoken with an inflected ascending tone or a quizzical look on one’s face while miming actions—the movement needed to connect the nouns to meaningful ideas). In effect, when it came to temporal concerns, listeners had to ‘divine’ from my hand actions (alluding to past or future) that I was not referring to an object or event (especially one that required urgent attention) which was in a present state of change, but rather that I was recounting an event or object referred to in the past or explaining why an event or object necessitated attention in the future.
To make matters more complex, the mere choice of verbs in a sentence alters meaning dramatically. For instance, in the sentences “Barbara caused an injury” and “Barbara sustained an injury,” the verbs play a pivotal role in meaning (Pinker, 2007, p. 31). As much as Spanish may offer up many French or English verb equivalents, one soon discovers in the course of conversation that a chosen verb, while perfectly legitimate according to its lexical meaning (i.e., dictionary definition), may connote the wrong ‘movement’ entirely. Thus, “the information packed into a verb not only organizes the nucleus of the sentence but goes a long way toward determining its meaning” (Pinker, 2007, p. 31).

Fussy verbs prove to be connected to reality

Of the several linguistic cases, locative verbs indicate that an object’s state has changed by something having moved (changed from this state or ‘location’ to another). If the object were a container, such as a wagon, changing its state of ‘load’ would require that an object be moved to ‘fill’ the empty space that is its receptacle. If the object were the content, such as hay, changing its state as ‘loaded’ content would require that it be moved into a container. The object, therefore, is modified either by a ‘content-locative’ or a ‘container-locative.’ For instance, the use of the locative verb ‘loaded’ in the sentence ‘the wagon is loaded’ (i.e., container-locative) is grammatically correct, but ‘loaded’ cannot be used in the same way in the sentence ‘the hay is loaded’ (i.e., content-locative). Rather, one would use ‘loaded’ to complete the sentence as ‘the hay is loaded onto the wagon.’

Thus, since locative verbs also indicate causality, they accord themselves with object affordances, namely whether by nature the object is a ‘container’ or ‘content.’ In the first instance, the hay causes the wagon to be changed (‘the wagon is loaded’), but in the second, “loading hay onto the wagon is something you do to the hay” (Pinker, 2007, p. 43), namely, causing it to go to the wagon.

Object affordances, of course, are not the only instance where visuo-motor processing restricts linguistic expressions. In addition to the locative case rule, which demonstrates causality, Pinker (2007) elaborates further on our cognitive semantic processing that allows or limits the flexibility of ‘flipping’ grammatical constructions. In the case of an indirect or direct object complement, for instance, cognitive semantics apply when the change that is observed is whole or part. Pinker called this the ‘holism’ or gestalt effect that impacts on our understanding of the action (i.e., how verbs ‘construct’ meaning). For instance, ‘Peter painted on the door’ merely indicates that he put some paint on the door, but ‘Peter painted the door’ indicates that the whole door was painted. Other locative verbs, such as spray and stuff (as in ‘stuff the turkey’), can also be used to distinguish between actions that are part or whole. Clearly, understanding the intentional distinction is one that requires a visual or auditory motor ‘vocabulary,’ which must operate at a cognitive level in memory and attention. Verbs, according to Pinker, are rather fussy in their application since “the holism effect turns out not to be restricted to the locative construction; it applies to direct objects in general” (p. 45). For instance, ‘to drink from the glass of beer’ (part) is entirely different from ‘to drink the glass of beer,’ where the glass is the direct object and implies that the beer is all gone.
To consider why ‘content’ may be interpreted as a whole in direct object constructions, Pinker points out that “the English language treats a changing entity (a loaded wagon, sprayed roses, a painted door) in the same way that it treats a moving object (pitched hay, sprayed water, slopped paint)” (p. 47). So an object in space that changes is in ‘motion’ in the same manner as an object that changes from one location to another. This is certainly congruent with the motor vocabulary of dancers, which audiences experience visually and kinesthetically (and proprioceptively, if one were a dancer and could think muscularly), whereby the dancer’s body parts may be in motion (i.e., changing shape) while remaining in one location or traveling from one location to another through space. Language has a wonderful metaphoric capacity that may be exploited in humorous sketches and cartoons based on everyday physics.

In exploring physics further, however, Ray Jackendoff was able to determine that words express “motion, location, or obstruction of motion in physical space…or obstruction of motion in state space” (p. 47). Changes of spatial state are reflected in everyday sentences such as ‘Petra went from first to second base,’ or as a kind of metaphorical change of state in ‘Petra went from sick to well.’ Additionally, events may also be constructed around temporal changes, for instance, ‘Sue moved the meeting forward.’ All such sentence constructions are part and parcel of ‘translating’ our physical world into symbols of expression. Thus, one may infer or decide on what action to interpret or take, namely, which verb is the most appropriate, since it involves the ‘physics’ of objects perceived as real (whether or not one can physically or simply mentally grasp an object).

With respect to locative alternations, for instance whether there is a holism effect, the physics (internal geometry) of an object means that “we’re really talking about a state-change effect, and ordinarily the most natural way that an object changes state when something is added to it is when the stuff fills the entire cavity or surface designated to receive it” (Pinker, 2007, p. 49). Thus, to speak of a room, one can express the thought, “throw a cat into the room, but cannot throw the room with a cat, because merely throwing something into a room can’t ordinarily be construed as a way of changing the room’s state” (p. 49).

In the case of forces that cause events or objects to change, verbs that “differ in their syntactic fussiness, like pour, fill, and load” are such because of the chemistry of an object, which “turns out to have a distinct kind of semantic fussiness—they differ in which aspect of the motion event they care about” (p. 49). The two verbs pour and fill syntactically mirror each other, and in fact refer to similar causal forces. To pour or to fill implies a ‘letting’ rather than a ‘causing’ of a change of state, whereas to load implies that the object has to have the right geometry (e.g., size or shape), must fit as content (into a container), and is applied directly (a causal force has effected the change). According to Pinker, verbs that take the locative alternation because of causal forces, as in the sentences “smear grease on the axle or smear the axle with grease, include: brush, dab, daub, plaster, rub, slather, smear, smudge, spread, streak, and swab” (p. 53).
Verbs that do not allow the locative alternation, because the agent of change “allows gravity to do the work,” such as “pour water into the glass but not pour the glass with water, include: dribble, drip, drop, dump, funnel, ladle, pour, shake, siphon, slop, slosh, spill, and spoon” (p. 53). The difficulty one encounters when attempting to employ verbs correctly while learning a second language is simply that physical states, which always have more than one property or affordance, may be focused on differing ‘concepts’ from one language to the other. Or as Pinker (2007) expresses it, “When it comes to basic concepts, the world’s languages are like a game of Whack-a-Mole: if a language whacks a concept out of one of its grammatical devices, the concept tends to pop up in another” (p. 80).

This may also be said of differing ‘concepts’ in modalities other than language. The concepts are present, but how they are expressed may be unique to the medium itself. For instance, I could not help but take note of the parallel between the movement and music ‘vocabulary’ with which I grew up and the impact that this vocabulary had on my development of ‘grammar’ in both of those arts. When Laban (Preston, 1963) identified dance ‘grammar’ as movement qualities that include ‘aspects’ of space (direct-indirect), time (sudden-sustained), and flow (bound-free) along with ‘modes’ of weight (strong-light), he did so by noting both the dancer’s physics (change-state) as well as the causal forces that are implied when such movements are enacted. With movement vocabulary (i.e., verbs) such as float, flitter, and sink, the dancer moves as if the forces of gravity allow the movement to occur. With the verbs dab, push, wind, coil, and brush, the dancer moves as if applying forces to imaginary objects (dab the paint, push the door, wind the clock, coil the rope, or brush the snow). In the first instance, the dancer’s body may only be viewed as being the object of change, but in the second instance the dancer’s body and an imaginary object are perceived—an alternation in which the dancer may be acting on the imaginary object or the imaginary object may be acting on the dancer. The combination of those verbs as a ‘grammar’ depends on the quality of those change-states and how the intended movement is to be portrayed.

When watching a dancer, whether or not one is ‘conscious’ of such meanings, one may ‘read’ the physics of movement in much the same way as one would in real life. Thus, a floating motion would elicit a much different reaction than a dabbing action, the former implying the force of nature, the latter implying a human force. The first appears ‘non-threatening,’ and most likely we respond to it with resignation, whereas the second appears menacing, which raises some concern with respect to the intent behind the force. Of course, dancers’ movements are not able to denote any precise meaning without the assistance of context, e.g., set décor, costumes, liner notes, and character portrayals. But this may be true of language also, since a verb, to make any sense, must be in reference to subjects, objects, events, time, and space. The movements, moreover, adhere to laws of physics (whether as dance or language) and necessarily connote meanings (which may be interpreted contextually in a narrative) as those perceived actions ‘discharge’ appropriate visuomotor and emotional responses—particularly when movement is executed with skill and precision.
Hence, a ‘sloppy’ movement (in dance as in music) is no more readable than a poorly chosen verb.

Returning to linguistic verb analysis, therefore, a few more interesting oddities elaborated by Pinker (2007) are worth mentioning in light of the discovery of mirror neurons, studies that include the actions of ‘having’ and ‘giving.’ The dative case rule, with respect to children learning which verb to apply correctly in what context, “has all the ingredients of the learnability paradox encountered with the locative” (p. 57). The dative, which derives from the Latin for ‘to give,’ may be constructed twofold. The first, “Give a muffin to a moose,” is “called the prepositional dative (because it contains a preposition, namely to)” (p. 57). The second construction, “Give a moose a muffin,” has what is called a “ditransitive or double-object dative (because the verb is followed by two objects, not just one)” (p. 57). As with locatives, verbs either accord with both, or with prepositions but not double-objects, or vice versa.

Pinker (2007, p. 58) offers several examples of verbs that work well with prepositions but not the double-object, for instance, Goldie drove her minibus to the lake but not Goldie drove the lake her minibus. As it turns out, “the two construals are cognitively different, because some kinds of causing-to-go (cause a muffin to go to a moose) do not result in causing-to-have (cause the lake to have a minibus)” (p. 59). By way of elaborating, Pinker uses two examples using homonyms in the following sentences: Annette sent a package to the boarder. Annette sent a package to the border. Whereas the first sentence may be constructed with a double-object, as in the sentence Annette sent the boarder a package, the second may not, as in Annette sent the border a package. That is due to the fact that a ‘boarder’ can ‘have’ a package but a ‘border’ (which is an inanimate object) cannot ‘possess’ anything at all.

By the same token, he offers examples of verbs that operate with the double-object dative but not prepositions, for instance, The IRS fined me a thousand dollars but not the IRS fined a thousand dollars to me. The problem here indicates that “some kinds of causing-to-go are incompatible with causing-to-have,” for instance, “Cherie gave Jim a headache but not Cherie gave a headache to Jim” (p. 59). While something she did ‘caused’ Jim to have a headache, the headache was caused by an ‘unintentional’ action, not because Cherie ‘physically’ handed a headache over to Jim. In this sense, therefore, there is a cognitive understanding that causation makes a distinction between intention and association.

Returning to an arts context, what is most fascinating to me as a dancer and a musician is that sentences with datives that go both ways, prepositional and double-object, are paired with verbs that are ‘sudden’ or instantaneous (e.g., bat, bounce, bunt, flick, flip, etc.). Thus the sentence Susie batted the ball to me may also be said Susie batted me the ball. By the same token, datives that do not work as a double-object occur when verbs indicate the continuous application of force (e.g., carry, drag, hoist, lift, pull, etc.). For instance, Jim carried the box to him but not Jim carried him the box. I was struck once more by what Laban (Preston, 1963) identified as sustained movement, which is musically analogous to legato (i.e., continuous sound), and sudden movement (i.e., staccato or detached sound).
Cognitively, one is readily able to discern the two ‘qualities,’ which inevitably cause one to ‘have’ or be ‘given’ an emotional response. Sustained music and movement often causes one to ‘have’ a spatial state of being, a lingering sensation that may be thought of as ‘reflective.’ Sudden music or movement, by contrast, ‘gives’ one a temporal state of being, which is ‘iterative’ and ‘impulsive.’ The former continuous movement may be expressed as ‘the music brought distant places to my mind,’ not ‘the music brought my mind distant places.’ And the latter ‘staccato’ music ‘tossed me the beat’ or ‘tossed the beat to me.’

Linguistically, if a sentence is to be comprehended, object affordance, substances, physics (i.e., forces) and flow are just some of the factors that impact on grammatical constructions. This points to the possibility that the brain’s visuo-motor system, which accords itself similarly, simply reiterates reality within multiple modalities, whether linguistic or artistic. Flipping the frame is not merely a haphazard jumbling of images or words. Rather, it is dependent first on the ‘action understanding’ processed by the visuo-motor system itself, which then may be ‘translated’ into expressive forms. Clearly, the role of mirror neurons demands some serious attention if one wishes to understand the relationships between language and cognition or between the arts and cognition.

Mirror neurons in action

As mentioned, there are some important mirror neuron differences between non-human primates and humans. The most significant is that mirror neurons in humans code both transitive and intransitive motor acts—something that is of primary importance in terms of syntax—and are also able to code for both the goal of the motor act and the movements of which the act is composed. Moreover, transitive ‘mimed’ actions discharge mirror neurons—a point that draws us back to audiences responding to dance, theatre, and film. This fact alone may explain why we are so captivated by mimes in the first place; it may also explain why the ‘air guitar’ phenomenon (i.e., contests of lip-syncing and imaginary playing of the guitar to sound recordings) is taken seriously by spectators as an ‘art’ form.

The striking aspect of understanding the acts of others is the fact that those acts are bound to a motor vocabulary that the observer has already acquired. This is a fact I recall from an early experiment during one of my undergraduate courses in biomechanics. At that time, we set about to test the theory of video instruction, namely, whether watching a baseball swing on video offered the same instructional value as being coached through a swing. It was a simple research project that had us separated into trial groups (the standard pre- and post-test groups and a control group). The idea was to log the number of successful hits in a batting cage prior to receiving any instruction, which would rate our personal performance. Then we were separated into those who received no instruction, those who received video instruction, and those who were coached. The number of successful baseball hits was once more recorded, and then scored on an improvement rating scale. While I do not recall the precise outcome, my rudimentary baseball swing, which I had practiced in the batting cage, improved only moderately with video instruction.
In contrast to the baseball experiment, wherein my swing was definitely at a ‘beginner’ level, my responses to watching ballet on television or on stage are quite different, given my many years of performance. I physically feel my muscles and respond with my body alongside the dancers. I am able to infer the quality, duration, and intensity of movement, and to predict spatially, temporally and kinesthetically the next sequence of movements. In addition, since I also trained musically for many years, I find that I am highly attuned to the correspondence between the dance and the music, which heightens my appreciation of choreographic intent when it ‘emulates’ varying aspects of the musical structures that underscore the movements (e.g., rhythm or melodic contour).

The capacity to respond weakly or strongly to movement has been similarly described through an experiment designed to ascertain the level of mirror neuron response of participants viewing a video of Capoeira and another of classical dance (Rizzolatti & Sinigaglia, 2008). The videos were shown to participants who included Capoeira teachers, classical dancers, and individuals who had never taken a dance lesson in their life. As predicted, while both groups of dancers responded strongly to the videos, a greater number of mirror neurons fired when Capoeira teachers viewed Capoeira steps and when ballet dancers viewed ballet. By contrast, the participants with little to no movement vocabulary proved to have the weakest amount of discharge. The results of the combined experiments “confirm the decisive role played by motor knowledge in understanding the meaning of the action of others” (p. 137).10

10 Due to the delimitations of research, it was not possible to include all of the investigators in dance education who have touched on perceptive, cognitive and emotional aspects of dance, nor the connections that are currently being sought through neurosciences. For a more comprehensive understanding through dance educational studies of movement, see the article by Hanna, J. (2008). A nonverbal language for imagining and learning: Dance education in K-12 curriculum. Educational Researcher, 37(491). Dance education researchers of notable mention also include Blumenfeld-Jones, D.S.; Cancienne, M.B.; and Snowber, C.

While this shows that we respond to sensory input, it does not explain how we are able to either acquire new actions via imitation or “translate the sight of a sequence of movements, which taken individually could be devoid of meaning, into a potential action that has sense for us” (p. 140). There is at least one hypothesis that has played itself out in many of my movement classes, which is drawn from the difference between the activation of anatomical and spatial congruency during observed and imitated movements respectively. In a study to test congruencies, it was shown that the mere observation of hand movements activates the anatomically congruent hand of experimenter and participant (i.e., right hand to right hand). But during ‘imitation,’ there occurred a spatial congruency (i.e., the experimenter’s right hand to the participant’s left). Accordingly, “it is probable that this inversion…during imitation is to be ascribed to the influence of the fronto-parietal mirror neurons, which would promote the selection of motor prototypes that are spatially congruent with those that are being observed” (Rizzolatti & Sinigaglia, p. 145). To this extent, therefore, the “common representational domain,” which was part of the “ideomotor compatibility” as formulated by American psychologist Anthony Greenwald (as cited in Rizzolatti & Sinigaglia, p. 142), suggests that the more the action is like those in the repertoire of actions of an observer, the more likely it will be executed intentionally.
By contrast, the common representational domain “would not be considered as an abstract, amodal domain”; rather, it would be a means by which to “transform visual information directly into potential motor acts” (p. 142).

In a ‘mirroring’ dance that I call Palindrome, a name typically connected to a word or image that reads the same forward and backward, I invite my students to perform even though the majority are not dancers. The Palindrome is an improvised dance that requires 10 or 12 couples. Each pair of ‘dancers’ faces one another across a dividing line that runs down the middle of all the couples, who stand side by side in two rows. The following diagram (figure 1), which explains the set-up, shows the pair of dancers facing one another opposite a dividing centerline.

Figure 1. Palindrome set-up

The idea for the dance is to maintain a palindrome between pairs as well as across the entire ensemble of pairs. Thus, a movement palindrome would disallow crossing the centerline, and each individual would have to maintain a spatial balance, as well as lines of movement and shape on either side of the dividing line, with their partner and with the ensemble. In other words, although the dance is performed by couples responding to each other while moving in two rows, with their entire focus on each other’s movements, they must also maintain their peripheral awareness of other couples. The overall effect (especially if one had a bird’s eye view) is a perfectly balanced movement piece involving an ensemble of ‘dancers.’ The movements, initiated by one and mirrored by their partner, can travel in any direction (e.g., front, side, back) and freely use arms, hands, legs and torso to create levels and shapes, but must always be positioned in relation to the spaces created by all the couples (moving at once) without destroying the structure of the palindrome.

When I begin to introduce the dance piece, I always use several volunteers to model the experience. If I ask a volunteer to demonstrate the ‘dance’ with me, and begin to move slowly, the participant will imitate my actions with spatially congruent body parts (e.g., right leg to left leg). But if I tell the participant to merely observe my movement, and then ask him or her to repeat it, he or she is likely to use the anatomically congruent hand or leg (except in the odd cases when the participant is a dancer and anticipates moving), thus ‘breaking’ the palindrome effect. Immediately, those latter participants feel they have erred. Clearly, the difference between the two instructional approaches is that in the first instance the participant anticipated mirroring my actions, whereas in the second he or she did not. Merely observing, therefore, is not sufficient to initiate imitation; instead, there must be an intent to act, which is found to be a frontal cortex activity (i.e., cognitive). Beyond the intent to imitate, one wonders how new patterns of action are then initiated through mirror neurons.
Rizzolatti and Sinigaglia (2008) describe an experiment designed in an effort to understand the shift from imitation to freely performing acts, which involved participants in a guitar exercise whereby they were asked to observe, observe then imitate, and then freely execute chords. The recorded responses “during the intervals before imitating and before free execution of chords” (p. 149) indicated that mirror neurons came under the control of specific areas of the frontal cortex, namely, Brodmann’s area 46. Aside from area 46’s involvement with working memory, it is also “responsible for the re-combination of individual motor acts and the formation of a new pattern of action” (p. 150). The activation delay between imitation and execution suggests that the latter required the “general decision to act,” a cognitive function that is also part of planning (p. 151). There is of course an explanatory neurological gap between imitating and expressing new patterns, though Rizzolatti and Sinigaglia discuss what is important to consider in hypothesizing how we learn.

In practical terms and independently of the neural circuits in which it is embedded, the mirror neuron system is at the root of a common space of action: if we see someone grasping food or a coffee cup in his/her hand we know immediately what they are doing. Whether that someone likes it or not, the very first signs of movement in the hand ‘communicate’ something to us and that something is the meaning of the act: this is what ‘counts,’ what we share with the person who is executing the act, thanks to the activation of our motor areas (p. 154).

CHAPTER FIVE

Every film, in addition to representing either an interior or an exterior world, gives us information, impressions, ideas. It offers us meaning and it makes that meaning’s importance clear. This much is obvious, but it is on these premises that the theoretical orientation we are reviewing considers cinema to be essentially a language (Casetti, 1995, p. 54).

Ethical chasm: an experiment in film pedagogy and research methodology

In thinking back on the ‘script’ for the short film An Ethical Chasm (2008), I recognize that the entire project, from start to finish, was an experiment in research and pedagogy. In writing for a scholarly journal, Tierney (2001-2002) was the first to experiment with an artistic genre (i.e., playwriting) in an attempt to ‘translate’ complex language and literacy theories with legal and political ramifications into a palpable piece of reading. In other words, to grasp the ideas that Tierney tried to flesh out, he had sensed that a narrative framed in a dramatic dialogue would provide the best ‘pedagogical’ vehicle for his readers. Thus the dialogue, taken directly from the article, was transformed into a performance piece (i.e., Reader’s Theatre) and subsequently turned into the movie. Beyond the text, however, Tierney also initiated a performance, which ultimately transformed the words on the page into an auditory modality. Since the courtroom scene provided little ‘action,’ he rightfully concluded that the dramatic genre of Reader’s Theatre would be the most appropriate means to bring to life the emotional register of the written word. At that time, it was not possible to definitively ascertain whether or not the text was appropriate as a film. It seemed that it would be possible to transform the narrative into yet one more modality, but the temporal and spatial aspects of the story cast some doubt from the start.
One source of concern in film studies has hinged on the manner by which an audience perceives temporal and spatial aspects: for instance, how the audience perceives time past, present, or future in a filmic context, and whether scenes or actions that are shot ‘out of sequence’ allow viewers to comprehend the whole.

As far as ‘time’ is concerned, some film theorists have argued that a film ‘appears’ as if it were in the present ‘tense’ by virtue of the cinematic event happening now and the linguistic framework in which the dialogue unfolds. Logically, however, with the exception of capturing a live event on television, most everyone living in today’s media-rich world understands that once the film has ‘gone to print,’ the ensuing narrative can only be the recounting of a story in the past, irrespective of the dialogue or temporal markers in the film (i.e., the historical time in which the film is set). And, for the most part, it is audience knowledge—the complicity of filmgoers with filmmakers—that ought to be taken into consideration when trying to ascertain how a film is ‘read’ and understood. It goes without saying that pre-promotional tours and film trailers, the latter of which I also produced for conference purposes, also impact on a film’s reception.

To this end, the ‘background’ information, which was principally ‘told’ by the narrator, had been written into the original script and was originally part of the Reader’s Theatre, which ran 45 minutes in performance length. To shorten the length of the film version, however, much of the ‘script’ was altered. For instance, I chose to remove the narrator’s voice, even though I did not supply additional background knowledge by adding extra scenes that could have been shot outside of the courtroom. Notwithstanding, while audiences generally understand that films live in the past, they are also able to understand the ‘progressive present’ (in English) through the actions on screen as they unfold, or the ‘future’ that is located in the past (e.g., pluperfect) through flashbacks and flashforwards. In dialogue, while recounting a story, the insertion of the temporal markers now or then brings the present into focus without too much confusion, e.g., ‘So now she is saying to me, I’m not listening.’ Additionally, film audiences readily play along when a character ‘breaks’ the rule of gazing directly into the camera, as if the story is now in the present, not unlike breaking the ‘fourth wall’ in a theatre context. That was a device I kept in the final closing statements of the defense attorney.

Live performances do differ temporally and spatially from filmed performances for obvious reasons. First, a live performance occurs in a bounded space that can neither ‘pull focus’ through close-ups or long shots (e.g., near and far), nor control the visual angles that one is able to with a camera. Perspective in theatre is entirely dependent on the line of sight of each audience member. Theatre audiences, therefore, do not experience the kind of visuomotor dynamics that are created by the camera, which are able to provoke subtle feelings of danger (i.e., discomfort) without ‘grandiose gestures’ and other theatre devices. To create those feelings in a theatre, something like a metaphoric tightrope walk would have to be staged.
By contrast, Dziga Vertov (1929), in his silent film Man with a movie camera, was among the first filmmakers to play with the metaphor of the ‘eye of the camera’ as he subtly demonstrated the ‘physical’ similarities between the camera and the visuomotor system—which he naturally assumed was the province of the ‘eyes’ alone and not the brain’s systems. Second, in theatrical time, there is a dual sense of a temporal ‘present’ as the ‘actors’ are performing live on the stage, such that there is an underlying sense that ‘anything’ could happen, including the actor forgetting their lines or blocking. This temporal ‘present’ of a live performance is also mixed with the temporal ‘past’ or ‘conditional’ future as recounted in the dialogue through the story’s narrative or imagined projections. Theatre audiences are just as able to grasp temporality and ‘imagined’ spatial boundaries as filmgoers are, with a conscious understanding that what they are viewing is art, namely, a piece of theatre or film unfolding before them.

Importantly, utilizing research on mirror neurons to guide one’s thinking, there can be very little difference between film and theatre on a temporal-spatial plane, since mirror neurons do not distinguish between the ‘virtual’ and the ‘real.’ The filmgoer and theatre patron will be surprised, frightened, angry, happy, or sad, developing empathy according to the ‘kinetic melodies’ that play out in the brain’s neuronal circuitry, which recalls one’s own experiences while observing the actions of others. In horror film genres, which capitalize on a type of visuomotor ‘blocking’ of the senses that normally are attuned to exterior stimuli as we experience them in reality, the rational mind is the only thing that keeps you from running from the darkened movie theatre at the sight of violence or danger. Although the darkened movie theatre is the ideal space for experiencing the virtual, it is not solely a darkened theatre space that ‘suppresses’ the sensory systems beyond the visuomotor. Rather, it is believed to be the mirror neurons in the visuomotor areas themselves that act on the whole of the brain’s sensory systems, neurons that make no distinction between the size and dimension of objects in the visual field. Hence, contrary to David Lynch’s assertion, one can become transfixed by a miniaturized film on an iPod, though I would have to agree that the handheld experience must naturally differ from watching it on the big screen in the dark.

Between the ‘real’ and the ‘virtual’ of the bounded spatial and dual temporal qualities of live performances, playwrights must consider clever set, linguistic, and performance devices to convey space and time. They must also use those devices, such as blocking (movement and gestures on stage), to reveal the motives and intentions of the principal characters. Motives and intentions, as revealed through language and the performance of stage actors, are often not as subtle as they can be portrayed in film. Many devices may be used in theatre, including entrances and exits, blocking, set décor and props to point or indicate meaning (i.e., lexical properties). This also includes ‘backstage’ action, diegetic and non-diegetic sounds (i.e., either part of the story or extra to the story, such as a soundtrack played on loudspeakers), correspondence read aloud, monologues (i.e., internal voice), or conversations between characters plotting or recounting an event.
Those simple theatrical devices may be employed in the cinema and indeed were used in making classic films, but as filmmaking has evolved, many more technical aids have been innovated to create temporal and spatial meaning and to convey character motives and intent. Today, those theatrical devices would only be utilized stylistically in a film. In any case, directorial and scripting devices assist in ‘narrating’ the story without the aid of a narrator, who in classic Greek theatre provided, then as now, essential and sometimes necessary information to an audience. The narrator is the single character in theatre and film who acts as if he or she possesses an omniscient view, which reason tells us reveals the author’s intent in a more ‘direct’ manner than, for example, the characters’ voices. Of course, one cannot escape the author’s intent in any artistic work, an author who in film is arguably the ‘director-editor’ rather than the ‘writer.’ At this juncture, my intention is not to discuss either auteur theory or how filmic ‘voices’ are comprehended, namely, ‘who enunciates what to whom’ in enunciation theory (e.g., Casetti, 1999). I simply want to point out that the narrator sits interior and exterior to a story in both live and filmed performances and may also sit more prominently in an audience’s mind as being the voice of the author.

A narrator is often essential to ‘historical’ stories or scenes that show little action, as documentary films, courtroom dramas, detective stories and journalistic pieces demonstrate extensively. Films that involve telling past ‘historical’ states or events through ‘witnesses’ (e.g., An Ethical Chasm, 2008) and ‘interviews’ (e.g., The Laramie Project, 2002; see footnote 11) generally use narrators to move the story forward. The narrator is also an essential device for establishing context by recounting past events still acting on the present. In a grammatical sense, one may say that the narrator carries the role of the present perfect tense, as in the sentence, ‘I have eaten,’ whereby the past is ‘understood’ to still be acting on the present. The narrator is also useful in changing the tempo and length of the story, either by speeding up the temporal or shortening the spatial content. In other words, what cannot be ‘performed’ can be told through language. Hence, playing the role of the ‘omniscient voice,’ the narrator is able to communicate to a theatre or film audience the temporal and spatial ground of intentions, interests, motives, and goals of the key cast of characters and, ultimately, the author’s.

11 The Laramie Project (2002), originally a staged performance turned into a film, was a project performed by Faculty of Education professors and students just prior to Tierney (2001-2002) turning his article into a Reader’s Theatre. It was the impetus for creating a reader’s theatre in the first instance.

True to form, in Tierney’s case, he included the narrative voice in the original ‘script’ and we cast this character for the Reader’s Theatre. But I chose not to include the narrator in the film context because I believed the story could be ‘shown’ without the presence of the omniscient voice. Furthermore, time did not permit me to fill in the kinds of details the narrator had provided in the theatrical performance. Unfortunately, this background information is vital to the life of the characters, providing the audience an empathic view of each personage’s complex nature.
Without empathy for the cast of characters, the story may be doomed to fail—a situation not unlike the 2004 German film of Hitler’s last days in his bunker, Downfall, whose cast of characters gave us no reason to empathize with any of them. In story terms, we need to feel an attachment to both the hero and the villain, or to the good and bad inherent in all personages, if the story is to strike us as portraying ‘reality.’ Fundamentally, whether we wish to accept it or not, in life as in art, every person is the hero of their own story in whichever ‘role’ they are ‘playing,’ and thus carries their own set of intentions, motives, drives and goals that become part of the story’s causal fabric itself. When those internal ‘movements’ are made available to audiences, empathy arises—though it arises by degree, always in accordance with a viewer’s experiences.

Additionally, I did not include background as context to the story itself, namely, the zeitgeist of the times, which often explains the driving forces behind the causal events. The story, in fact, had no precise temporal ground (it was unclear whether it happened yesterday or a decade ago). Except for the ‘clothes’ (as costumes) that were worn, which had a contemporary feel that gave some clue of its temporal state, there was no sense of how close to the present day the recounted events were. The importance of ‘timing,’ in a ‘research’ context, has everything to do with whether or not the ground has been shaken or is being shaken with new knowledge and ethics.

In the end, I felt that a narrator’s voice would not add more ‘empathy’ to the characters than what was present in the performance and casting, the latter of which had purposefully gone against type to make the characters appear more ‘believable.’ But much to the surprise of the writer, whose intent had been to set the story in the United States, based on actual personages from his experiences and particularly influenced by the politics of the Clinton and Bush administrations, the casting against type changed the story. By Tierney’s account, the original story, which now appeared to be set in Canada through casting against type, created a different optic. The reason is that ‘stereotypes’ are frequently exploited in visual storytelling, according to graphic artist Will Eisner (1996), as a ‘shortcut’ to identifying the status and positions of the personages (e.g., strong/weak; good/evil). In the case of the judge, for instance, an unlikely casting choice was made insofar as the actor, being an East Indian who was educated in England (and had a strong ‘British’ accent), could not conceivably play an authoritative role in a legal context in the USA. For the film viewer, there was no ‘tension’ created from watching a potentially biased judge.

In short, the defendant, the witnesses, and the attorneys had all been cast against either gender or ethnic stereotypes, which contributed to the portrayal of a more ‘egalitarian’ society, such as Canada is frequently portrayed to be in the eyes of the world. This Canadian ‘fair play’ made it that much more difficult for the actors’ performances to convey motive and intent. At any rate, I was counting on the casting and performances, as ‘first impressions,’ to create audience empathy, alongside thinking that the dialogue itself, as it unfolded over time, was sufficient to ‘explain’ the circumstances.
I was hoping, more or less, that the 'facts' would speak for themselves without much more consideration for building the characters' backgrounds or placing the story into a 'believable' context. With those aspects 'taken care of,' I would be left with more 'creative' input to build the film's dynamics through the camera and editing.

The grammar of film: a temporal-spatial logic

Logically speaking, theatre and film are performed under an overarching temporal state and spatial viewpoint that reflect grammatical tense (past, present, and future) and aspect (imperfective and perfective). Clearly, tense and aspect inherently vary within the language expressed by the actors—as they do in real life. Thus, depending on whether an actor in 'character' is recounting a past event, discussing a present one, or planning a future one, the language will grammatically inflect and conjugate what is intended. By an overarching tense and aspect, therefore, I am referring to the tense and aspect of the entire event (i.e., the narrative or story), which may be broken into sub-units (i.e., sub-events, or scenes and shots). The imperfect tense and the imperfective aspect, which in grammatical terms are bounded by time and space respectively, mark the 'time' of the action (i.e., its location in past, present or future) or show the 'shape' of the event (i.e., its viewpoint). The imperfect and imperfective do not allow the listener to 'know' or 'see' either the start or endpoint of an action or event in the past. The spatial aspect of language structure (i.e., viewpoint) is akin to 'seeing' a close-up of a 'scene' by drawing the focus of the spectator—the Latin root of the word 'aspect'—into an 'imperfect' view spatially. The imperfect tense, not to be confused with aspect, is a temporal 'nothing is happening' state, since no sequence of causal actions is being depicted. The aspect that depicts the start and endpoint of an event unfolds in the perfective viewpoint, which is akin to a long shot that allows sufficient distance to view the whole of an action. For obvious reasons, the perfect tense (not to be confused with the perfective aspect) locates events in time and, hence, possesses a temporal order with a definite start and completion of an act (as the grammatical term 'perfect' implies). Whereas tense is a temporal state, the perfective viewpoint, by contrast, is based on the contour or 'shape' of the event in time, which in linguistics is called Aktionsart (from the German for 'action class'). As far as action classes are concerned, "the deepest divide is between 'states,' in which nothing changes, like knowing the answer or being in Michigan, and 'events,' in which something happens. Events, therefore, divide into those that can go on indefinitely, like running around or brushing your hair, and those that culminate in an endpoint, like winning a race or drawing a circle" (Pinker, 2007, p. 197). Events that have an endpoint are called telic—a word related to teleology, which stems from the Greek telos for 'end'—and the "endpoint is usually a change of state in the direct object that was caused by the agent" (pp. 197-198). By contrast, atelic events are those that have no endpoint. The following description from my faithful companion for learning Spanish conjugations, Spanish past-tense verbs: up close (Vogt, 2009), offers a superb metaphor taken directly from a movie context.
Another way of looking at the difference between the preterite [perfect past] and the imperfect [tenses] is cinematographically: the imperfect is often compared to a camera as it pans a scene, acting as someone's memory—but a scene in which nothing is happening, yet. The description of such a scene in the past requires the use of the imperfect. Any action or actions that happen in this flashback would be, if put into words, in the preterite [perfect past]. Another analogy, also from the movies, is that if it is dealing with an action and not just a static scene, the imperfect [viewpoint] is like a slow-motion camera that catches an action in the middle of things, since it focuses on the process [contour], not the beginning or end of an action (p. 27).

In a movie context, therefore, the camera view that pans across or zooms in to a landscape, an object or a person establishes a scene that is merely descriptive and sits at the 'nothing happening' state. It reveals no causal actions or events. In film terms, this imperfective view is called the 'establishing shot,' common to the openings of films or scenes. The imperfect tense also reveals continuous, bounded actions located in time—e.g., day or night, the seasons and, often, the general 'time period' (based on costumes and contemporary objects or 'props,' such as vehicles, briefcases, computers, telephones, etc.). The perfect tense, in contrast to the imperfect, necessarily depicts and explains a sequence of causal actions in the past from start to finish. Once more using Vogt's (2009) description, one can sense the difference between 'narrating' in the perfect tense and describing the past in the imperfect using a movie analogy—and the problems that are encountered by foreign language students in a real conversational context.

By the time the forms of these two tenses are mastered, they also should understand and appreciate them as two 'aspects' of the past that are not interchangeable. Yet, as all learners and their teachers know from experience, it is one thing to understand the concept and another to remember the details in the nick of time, especially when speaking. The preterite [perfect past] views an action in the past as completed, or focuses on its beginning or end [aspect]. By contrast, the imperfect views actions in the past as process whose beginning or end is not of interest. This makes their combined use so expressively rich that the best word to describe their reciprocal effect is cinematographic—they create vivid moving images in the minds of listeners and speakers. The imperfect, with its focus on past actions as in progress, expresses most clearly and unmistakably what the background or circumstance is in which other actions occurred. The function of the preterite [perfect past] is to relate the actions that occurred in that circumstance (p. 22).

In other words, the perfect past (both aspectual and temporal) is the grammatical means to recount a story in the past. As the term 'perfect' in grammar refers to a completed action or event, it is clear that the perfective aspect provides sufficient 'spatial' distance (unbounded) to allow one to 'see' the whole of an event, whereas the 'imperfect' view, with its visual boundary, does not allow one to see the start and finish.
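To gather the distinctions above in one place, the following is a minimal sketch (in Python, purely illustrative; the event descriptions and the shot analogies are my own examples rather than Vogt's or Pinker's) of how an event's Aktionsart and the viewpoint under which it is shown can be kept apart:

    from dataclasses import dataclass

    @dataclass
    class Event:
        """A narrative event, following the Aktionsart distinctions above."""
        description: str
        telic: bool            # True if the event culminates in an endpoint
        endpoint_shown: bool   # True if the telling reveals start and finish

    def viewpoint(event: Event) -> str:
        """Classify the viewpoint (aspect) under which the event is presented."""
        if event.endpoint_shown:
            return "perfective (long shot: the whole action, start to finish)"
        return "imperfective (close-up: caught in the middle of things)"

    # Hypothetical examples echoing the atelic/telic contrast cited from Pinker.
    events = [
        Event("running around", telic=False, endpoint_shown=False),
        Event("winning a race", telic=True, endpoint_shown=True),
        Event("opening the door slowly", telic=True, endpoint_shown=False),
    ]
    for e in events:
        kind = "telic" if e.telic else "atelic"
        print(f"{e.description}: {kind}, {viewpoint(e)}")

Run as written, the sketch simply prints each event with its classification; the point is only that telicity is a property of the event itself, while the perfective or imperfective 'camera distance' is a property of how the telling frames it.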
After years of watching French films and listening to the complaint that, by simply showing a 'slice of life,' they possess neither a satisfactory beginning nor ending, it occurred to me that French filmmakers are adept at exploiting the imperfective aspect as a story device for the purpose of showing 'nothing happening' states of life—like moving tableaus. In truth, few films (or plays or novels) are able to construct an entire work solely with 'establishing' shots or descriptive scenes, because inevitably some kind of narrative unfolding in time, which connects meaningful parts into a coherent whole through causal events, is bound to be expressed in the course of the telling. One exception may be the 1992 film Baraka, a wordless film that depicts world 'cultures,' 'objects,' and 'events' through powerful and majestic images and music. In this case, the entire film's 'grammatical' construction may be said to be in an imperfective view, i.e., nothing-happening states and events, although the montage of scenes and music arguably leaves very strong visual, auditory, and visceral 'beginnings and endpoints,' lasting impressions of people and objects with which to 'construct' one's own narrative in the mind. To suggest that films or plays possess an overarching imperfective aspect is merely to point out that there is not sufficient time or space in either artistic mode to recount, and thereby explicate, all the causes underlying a story. The temporal-spatial quality of film or theatre is perhaps more akin to short stories and novellas, which differ from novels in length. Of course, not all films stay within the comfortable 'sitting' range of 90 to 120 minutes. Epic films that exceed this time provide spectators with much more detailed information than most films allow (though the detail may not be entirely appreciated in light of the length of time one is forced to sit through the movie). Conversely, not all novels exploit their length to provide the reader with all the causal events that led up to the overarching event, or to complete the narrative action from front cover to back—some writers prefer to leave information out to allow readers to engage their own imagination. Perhaps this is what the French are apt to exploit in film arts rather than the 'moral' or 'fairy-tale' stories so often produced in Hollywood, e.g., rags-to-riches, happy endings, or warnings of the perils of bad behavior. Importantly, one notes that in terms of film structure (which is helpful to understand in the writing, shooting, and editing), a movie is a single overarching event (i.e., a story) made up of many sub-events (i.e., scenes), which in turn are made up of sub-actions and objects (i.e., shots). The sub-events and sub-actions (i.e., scenes and shots) may be edited out of sequence, provided they are 'sandwiched' between the opening and ending of the overarching event (i.e., the story). This phenomenon may also be found linguistically in poorly written sentences or 'misspelled' words, whereby a 'clue' is offered by the beginning and endpoints to render the whole. One popular example, with no basis in any research whatsoever, has been 'floating' around on the Internet for some time and provides an interesting sample that shows the flexibility of the brain to code and comprehend unconventional phonemic order.
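Such text is easy to produce mechanically. The following is a minimal sketch (in Python, purely illustrative and no more scientific than the circulated example itself) of the scrambling rule the sample relies on: keep each word's first and last letters in place and shuffle the interior.

    import random

    def scramble_word(word: str) -> str:
        """Shuffle a word's interior letters, keeping first and last in place."""
        if len(word) <= 3:
            return word
        interior = list(word[1:-1])
        random.shuffle(interior)
        return word[0] + "".join(interior) + word[-1]

    def scramble_text(text: str) -> str:
        """Apply the scramble word by word (punctuation handling omitted)."""
        return " ".join(scramble_word(w) for w in text.split())

    print(scramble_text("According to a researcher at Cambridge University"))

The sample itself, as it circulates on the Internet, reads as follows.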
Being Able To Read MISPELLED Words Not Really True

Aoccdrnig to a rscheearch at Cmabrigde Uinervtisy, it deosn't mttaer in waht oredr the ltteers in a wrod are, the olny iprmoetnt tihng is taht the frist and lsat ltteer be at the rghit pclae. The rset can be a total mses and you can sitll raed it wouthit a porbelm. Tihs is bcuseae the huamn mnid deos not raed ervey lteter by istlef, but the wrod as a wlohe. Amzanig huh?

Moreover, just as novels may recount three events in one piece of writing (i.e., storylines), so too can film. Many films have exploited this multi-story complex in different ways, including multi-framing (i.e., film frames of simultaneous action). Whether a film is a single event, portrays several different events simultaneously, or 'flips' to different stories with a series of transitions, films generally employ the overarching grammatical structures of the imperfective aspect or imperfect tense (i.e., the establishing scene or shot) along with the perfect tense (i.e., a sequence of causal actions) in scenes, just as we do in language. Importantly, film structure calls for 'completed' actions in scenes that are like 'sentences' (i.e., syntagmas) sequenced to recount an entire event. As can be noted, the perfect and imperfect tenses are combined to create what Vogt (2009) described as rich imagery with narrative meaning, for instance in the sentences, "It was a dark night and it was raining when the murderer arrived at the office" and "The murderer was opening the door slowly so as to make no noise, but the dog smelled him" (p. 27).

Indeed, the famously clichéd horror-story beginning It was a dark and stormy night places the reader or viewer in the midst of a situation. Likewise, in the second example above, the slow opening of the door is in progress when the moviegoer's eye or the reader's mind's eye is treated to the opening scene. Despite being actions, they are presented as in progress. They are descriptive. They do not advance the plot. Note how the other actions in the examples above advance the story. They move it forward [in time] and are said to be narrative. The murderer arrived. The dog smelled the murderer. Once done, these actions cannot be undone. The expression of actions in this manner is the function of the preterite [perfect past].

My intent in describing tense and aspect in a film context is simply to demonstrate that universal principles govern film in much the same manner as they do language, but that the two will employ these universals in unique ways according to the demands of the medium. Those universal principles are what Pinker (2007) asserted as the 'language of thought,' which is ultimately tied to the visuomotor system as actions and objects are perceived and cognitively processed. Inevitably, each of those states produces an emotional resonance that, in sum, renders meaningful the intentions and motives for the acts that we observe.

Test audiences: film reception and interpretation

The audience reception was generally positive in the two instances we performed An Ethical Chasm, one that included teachers and administrators, and another that included university professors. In both cases, Tierney provided a narrative presentation on the backdrop for the piece, its significance in schools and research, and the purpose for bringing the piece into a performance modality.
The 'framing' of context with before-and-after discussion provided for the cognitive complicity of the audience members, writer, and actors in much the same manner as would occur in a film or stage context. Ideally, however, Tierney hoped to exploit the controversies present in his journal article by allowing the performers to express, through vocal inflections and facial expressions, the 'emotionally' charged biases that were part and parcel of all the characters he had cast as stakeholders and decision makers involved in language and literacy education. And with the additional information provided by the narrator, the central themes that filled in the narrative gaps could be conveyed. When it was decided that the piece would be turned into a movie, I made a conscious choice to use the camera as a dynamic device to create temporal and spatial qualities, which could not be exploited in a live Reader's Theatre performance beyond those naturally occurring in dialogue. While I did 'block' the actors, which allowed some movement on film to occur as deictic and lexical markers, a courtroom story, being an event that occurs principally in static states (e.g., sitting in the witness chair), required some device to exploit 'action,' principally to prevent the 'talking heads' phenomenon. Therefore, as with the dance film, I once more utilized the steady-cam by directing the camera operator to film the entire piece with varying camera motion, which resulted in multi-angle perspectives, extreme close-ups of faces or hands, and long shots that alternated between near and far perspectives. The idea was to involve the viewer in intimate or distant spaces, which revealed nervous twitches of the hands, body parts or faces—all of which could be sensed as 'descriptive,' with only a few actions (e.g., handing documents to the judge or swearing in) 'moving the story forward in time.' To move the 'story' forward with a dynamic view of all the characters, I exploited multiple camera angles (i.e., shots), which I cut in quick sequence (i.e., edited), to depict positions of power or to create a 'dizzying' sense of repartee between the prosecutor and defense attorney when examining the witnesses. In some instances, I chose to pan the camera across two actors exchanging dialogue, as though one were watching a tennis match, rather than cutting between the two. I also included a 360-degree pan that swept from the prosecutor to witness to judge at a tempo that created a crescendo of intensity. In addition to the steady-cam shots, I also edited the film with 'fast' cuts, leaving little 'space' between the actors' exchanges, which inevitably quickened the tempo of the scenes—which were none other than the 'testimonials' of the characters. And to establish that a trail of witnesses would eventually narrate the circumstances, I created a montage and sequence of four smaller film frames that appear, remain present for a time, and disappear from the screen, with the last frame expanding on the witness being examined by an attorney. This eliminated the repetitive description of each of the 'witnesses' at the start of every scene and allowed the audience to anticipate the range of personages playing the role of a witness for either the prosecution or the defense.
The movie's opening scene was a careful composition of image and sound, which began with the film's thematic music I had composed to render a majestic overture—a theme I had envisioned would 'portray' or underscore the importance of the courtroom proceedings. The opening of the film, therefore, is a 'grammatical' shot that exploits the imperfective aspect, namely, an establishing shot depicting a grey, dreary, and rainy day with an imposing grey granite exterior of the movie 'set' in the background (i.e., a gothic building on campus that added to the gloomy setting). Continuing with a dynamic 'slow motion' shot of the prosecutor walking into the 'courthouse' using multiple angles, the scene cuts to the interior of the courtroom (i.e., the Moot Courthouse in the law building on campus). After a quick cut to 'black,' produced by filming the back of one of the courtroom 'audience chairs,' the camera pans up to the judge entering, the prosecutor, defense attorney, and defendant rising, and the case being 'announced' by the court clerk (all of which is muted, allowing the music to underscore the opening credits). Continuing forward with a 'scene' shot from the perfective viewpoint, the first witness is seen entering the courtroom in slow motion, walking between the aisles where an audience would normally sit, and into the witness box, whereupon he is sworn in. Partway through his entrance, the prosecutor and defense attorneys begin their opening statements as 'voice-overs' until the first witness sits down, at which time the camera pans to the prosecutor. This choice of editing was made to 'speed up' the film's establishing shot, which in the script and live performance was recounted by the narrator. Since I deemed that the opening statements carried enough information about the case against the defendant, I chose to shoot the entire opening as a mainly descriptive beginning—setting the tone—with 'endpoints' in each of the actions (e.g., entrances, courtroom protocols, etc.), in contrast to the 'events' narrated by the characters that would unfold over the proceedings of the court case.

The experiment proves to be both a success and a failure

At this juncture, what must be concluded is that the film both succeeds and fails as an experiment. Having become so intimately familiar with the text and the entire process from start to finish (i.e., from live performance to the film's final cut), I had decided to set the film aside in order to organize and write the background and theories that would lead to the film's analysis. Upon reviewing the film just prior to writing the present analysis and conclusions, after not having seen it for a lengthy period of time, I was both pleased and disappointed by what I saw and heard. On the one hand, the film successfully portrays the universal concepts that are expressed as readily through language as they are in multiple modalities, including film. Additionally, judging from responses to viewing the film, as I explain further below, it was readily apparent that from a neuroscience perspective, the sensory systems (i.e., visuomotor and auditory), the mirror neurons, and the 'dispositional spaces' to which Damasio (1999) referred demonstrated 'action understanding' as a convergence between perception, cognition, and emotion.
Since I had jotted down in my journal a few of the reactions of my language and literacy students when I screened the film in several course sections, I was able to revisit some of the conversations that ensued at the end of the film. The comments were, for the most part, conservative and understandably 'polite.' The film did not appear to engage the students in a lively discussion on the 'politics' or 'legalities' of language and literacy policies (i.e., the content)—nor did they seem to react very much, as teacher candidates, to the defendant who was on trial for 'failing to meet university standards' while teaching theoretical, pedagogical, and curriculum principles to her 'teacher candidates.' They did comment on the 'professional' quality of the film, however, and expressed genuine appreciation for the montage and music; but in general, with the exception of a few students, there was not enough response to the content of the story to call the film a successful 'translation' of a complex topic. It was not that the students did not understand the content of the film—though I suspect there were a great many 'facts' couched in educational or legal jargon that would have needed a great deal of unpacking. In fact, their comments immediately after the screening showed that they were able to grasp most of the intended meanings from the dialogue, despite its highly technical language. They also reacted with visible shaking of the head, exhalations, or even sarcasm in several places where the witnesses were portrayed with a certain 'villainous' attitude. Between my first showing and my last, in fact, I was able to re-cut the ending, which had elicited inappropriate laughter from the students. Originally, I had chosen to insert a voice-over of the 'narrator' asking the audience to 'decide on the outcome' and to end on a penetrating look from the prosecutor into the camera—an ending not unlike that of the live performance. But the laughter indicated something I had sensed intuitively, which was that the ending was too melodramatic and 'over the top.' I was thus able to cut the ending, remove the voice-over, and fade to black immediately following the last statement, which was emphatically rendered by the defense attorney. This new edit resulted in a subdued reaction—a lengthy pause before applause—with the next student groups, which was the effect I had desired. I was never entirely certain whether the film 'worked' or not as a 'pedagogical' text and, certainly, had I given out a survey or questionnaire, I would undoubtedly have received better feedback and stronger data. But the purpose of showing the film in the classes was not to obtain precise data; it was to stimulate further discussion in much the same manner as the experimental dance film I had produced for the conference (Gouzouasis, LaMonde, Ricketts, Ramsey, & Mackie, 2007) to explore the feeling of near and far. In some respects, however, this less than lively outcome could have been due to the fact that I had no specific focus with which to address a poignant concept. Simply asking the students 'what they thought of the film' was not sufficiently framed to foster deepened thought on the particulars of the film. As the particulars were too 'technical,' which rendered the numerous issues that had been touched upon somewhat distant, the principal meaning that appeared to be shared was the overall emotional impression left by personages involved in educational matters.
As a way of bringing the whole to a more 'personal' level, the discussion successfully set off a chain of emotionally laden responses to academia and schools in general, with modest attention to the topic of language and literacy (which was to be expected in a course designed to 'think critically' on matters of language and literacy education). Most of the conversations, however, digressed to the students' practicum experiences, for instance, the 'ethics' of educational standards, the usefulness of high school curriculum, or the manner whereby some teachers 'ought' to be put on trial because of their 'old school' teaching styles and approaches. Without the detailed research background, which I was able to assemble over time and some of which I have written on in this thesis, students had no anchor by which to make profound statements on what they had seen and heard in the film. Unfortunately, I did not screen the film with academics, arts or language educators, which may have rendered a completely different set of responses based on their background knowledge and skills. Irrespective of the 'limited' audience feedback I received, the real test, in my opinion, came when I asked my eldest daughter to watch the film. This was an important 'audience test' because she represented a 'neutral' viewer well outside of the educational context in which the film was based. As she was unfamiliar with the 'language' and 'culture' in which it was written, she could only respond to whatever impressions the film would offer. This 'neutrality' was not unlike my own status while watching English films in my early years upon arriving in Canada, experiences that were brought sharply to the fore while watching Spanish films in Costa Rica without subtitles. The films that I remember seeing as a child most definitely left indelible impressions in my mind, and my overall comprehension did not shift significantly when I saw some of those films again in later years. By contrast, as an adult with higher expectations for 'comprehension,' the films I watched in Spanish were frustrating to me. This was especially so because I had progressed sufficiently to comprehend what was being said in face-to-face conversations, over the telephone, and in television contexts (e.g., newsreels, soap operas, and talk shows). In a film context, while I could generally grasp the action and events as they unfolded in each scene, I was not privy to their subtle meanings (e.g., inferring relationships). From this experience, I surmised that movies are sufficiently different from other modalities and contexts that one could say their 'form' may be as distinct as speech is from written language. Yet, as was the case when I was young, I could 'divine' what was happening in a scene dynamically (e.g., actions and intentions). It was only in assembling the parts by the film's end that I could make any 'sense' of it as a whole, though I was left with the feeling that my comprehension was superficial at best. Despite my frustration, given my intense desire to learn Spanish, I was able to sit through an entire film, pleased when I could decode the odd phrase. This I did a sufficient number of times, gleaning so much from observing the expressions, gestures, and actions, along with listening to the vocal intonations and qualities, that it felt as though I understood the language. As my spoken Spanish improved, a subtle shift in comprehension took place without my conscious awareness.
In any case, my daughter was the perfect 'test audience' for many more reasons than the ones I have described. As a young adult about to return to school to finish her education, she had very strong emotional responses to her schooling, which, in her opinion, had 'taught her nothing.' Largely owing to some unidentified learning difficulties, she often struggled with reading comprehension, yet excelled in certain linguistic areas (e.g., possessing a rich vocabulary). Never short of being able to interpret social situations with great perspicacity, she would, I guessed, at the very least give me a straightforward response, given her 'shoot-from-the-hip' style of communicating. At the conclusion of the film, she paused at length before asking me a rather pointed question, "Is education a noun or a verb?" The surprising question froze my thoughts in midstream. Just prior to responding, I wondered whether or not she was joking. But as she was looking at me rather seriously, I replied that it was a noun. To which she responded, "Well then, the movie simply had too many nouns and not enough action." I was left dumbfounded as she articulated precisely the gut feeling I had had in watching the movie again after such a long absence from it. I proceeded then to 'listen' to the entire film without looking at the images and, conversely, to watch the entire film without sound. Two things jumped out at me. The first was the story of the aphasics who had watched and listened to the President's speech on television—those with tonal agnosia sensing insincerity in his words and those with global aphasia sensing insincerity in his gestures. The 'spoken' text, in fact, contained a dizzying set of facts (or nouns), just as my daughter had intimated, without the slightest anchor to 'reality' (or at the very least, to her experiences with life). Perhaps this is not surprising given the fact that the dialogue almost paralleled the academic content of the journal article. The facts were filled with educational, linguistic, or legal jargon mixed with long theoretical terms, couched between a very modest number of 'interesting' social comments—such as 'students' having been angry with their professor, or one witness suggesting his homogenous steering committee was intended to produce a 'group that would agree.' The most powerful words spoken were, in fact, in the defense attorney's closing statement, which mostly summarized the facts and spoke more directly to the 'ethical' concerns (as the title so described) of putting a diligent, thoughtful professor, with an integrity comparable to a medical doctor's, on trial for judging (i.e., criticizing) language education and literacy research and sharing her judgments with her students. Ultimately, I realized that the rest of the dialogue had, from the start, perplexed me as much as it had my daughter. Lacking sufficient knowledge in the field of language education and literacy research, it was precisely the language Tierney had used in his article that had set me on a course to try to understand the issues at hand. From the perspective of an experiment in research methodology, therefore, I can attest that the processes of participating in a performance and then making a film were eye-opening. It was truly a dynamic laboratory that pushed me toward investigating the issues on my own. But my motives for investigation were not solely to bring language and literacy concerns to the fore, despite instructing on the topic.
The truth was that, as an arts educator and arts-based researcher, I wished to demonstrate that film had a legitimate place in the school curriculum and was a powerful form of pedagogy, to be critiqued and constructed no less than language. Thus, my motive drove me to further understand the scope of implications that arise from language education and literacy research, but also the implications arising from the research disciplines, e.g., cognitive, linguistic, social, psychological, and semiotic, whose impact on film studies I saw as a roadblock to 'unpacking' the essential nature of film. Though each discipline had something to offer in terms of 'zooming in' on particulars, none of the literature fully resonated with me integrally and holistically as a dancer, musician, and filmmaker. It was not until I began to decorticate emotion studies in the neurosciences that I began to make connections between my dance and music knowledge and skills and my language and film knowledge and skills. The incessant focus on 'cognitive processes,' such as I had practiced as a teacher, marred the potential for understanding the brain-body-mind complex. By the same token, I was strongly interested in learning how films were perceived and why it was possible to conceptualize so many film theories and philosophical viewpoints. Clearly, film as a phenomenon was too powerful to ignore. And as time went by, I began to see some astounding connections between all of the fields of study that simply could not refute the integral manner by which the brain-body interacts to create mind—and how the mind is able to be so cognitively flexible. Listening to the film, in fact, reminded me that one of the actors had suggested prior to filming that we turn the script into a radio play rather than a movie. She was probably correct in making that suggestion—but I was determined to carry out the experiment to the end. On the other hand, the experiment was instructive in terms of the cinematography and montage (i.e., editing)—not because the images were powerful in rendering meaning, but because the film's action, for lack of 'anything happening,' amounts to a lot of hand waving. From the perspective of neuroscience, there is good cause to find the images 'difficult' to follow, except for the obvious lexicon of 'hand waving.' With few causal connections being shown, without transitive or intransitive acts, without 'material objects' upon which to anchor the intent behind the actions, and with few of the 'courtroom' actions, in their repetition, showing distinctions, it is difficult to imagine that much was being registered at a 'cognitive' level that had to do with the film's linguistic meanings. In other words, even the extreme close-ups, long shots, or handheld motion used to create a 'mood' or 'impression' did little to add to the film's subtle meanings. The camera work did, however, create three modes of address, evident in the relational exchanges between the characters, namely, the imperative, subjunctive, and conditional. It also created feelings of discomfort due to the 'intimacy' or 'dizzying' views, which arguably are also part of the 'semantics' of the film. The only problem is that these camera angles, which denote intimacy or a dizzying sensation, were employed without cause. Why would an audience member be thrown suddenly into an intimate space with one of the actors? Why would one need to feel a dizzying sense of disequilibrium?
Those questions struck at the heart of what my daughter felt was 'insincere,' namely, that the 'camera' as the ultimate 'narrator' did not match the gestures, tone or words of the characters on screen. Wholeheartedly, I had to agree with her. I also wondered whether or not I had portrayed Tierney's intent. Despite the content being almost word for word what was written originally, how it would be received based on my visual portrayal was of some concern. Although Tierney appeared generally pleased with the overall effect, to my knowledge he only screened the trailer or several key scenes to support some of his ideas during other conference presentations. It would be an interesting way to end the experiment to watch the film together once more and ask him some pointed questions. Thus, with the complex language of the film concretized with visual images that did not 'match' the words, one could sense, at a level of core consciousness, that something was not quite right or that the film was 'insincere.' The words, in short, may have been better performed as a radio play, for the visual dynamics created by the camera failed to offer the visuomotor meanings I had hoped would emerge. The film, when watched without sound, is in fact grammatically like one long statement constructed in the imperfect tense (i.e., past) and imperfective aspect (i.e., descriptive). Activity that lacks a fixed boundary, being atelic (without end) and durative, is like continuously 'running around.' With the occasional fixed boundary at the end of each 'scene,' marked by a fade-out and fade-in on a new witness that signaled a telic and durative action, each scene amounted to 'drawing a circle' without end. The visual dynamics, which described the comportment and expressions (e.g., facial and hands) of people in a courtroom proceeding, did not allow one to infer or predict the outcome. There was no way of knowing if all those actions of 'drawing a circle' were actually completed. While watching the film with sound, 'meanings' could be gleaned from the dialogue, but only, as my students and daughter proved, if one was familiar with the technical language. Moreover, the technical language, which my one-time actor daughter said would make "any actor proud to speak the dialogue because it makes them look smart," was never expressed in a way that a viewer without such knowledge could understand. As Star Trek or The West Wing once showed, technical or political jargon may be used effectively if somewhere in the dialogue the conversations 'lighten up' on the 'facts,' and verbs are thrown in to implicate, relate, point, or otherwise indicate where the problem will lead the personage. As one of my acting teachers once told us in scene study, an actor needs to learn how to 'verb' their lines to render them meaningful. This idea was a way of reminding us to turn 'technical language' into physical actions that would help 'translate' the meaning for an audience without prior knowledge or experience. The implications in a teaching context are astounding. What can teachers do to 'translate' complex ideas into comprehensible meanings?
There were small crescendos and decrescendos of affective qualities (a climax 'raising' the sentiments, akin to anticipating that 'something is going to happen'), which rose and fell in each 'scene' thanks to the music soundtrack (a device that I could not ignore) or to some comic relief through the use of light staccato sounds in contrast to the strong legato that underscored 'intense' moods. Although each scene was divided with fades to black, I also composed the music to indicate the start or endpoint of an action within a scene, relying on the fact that almost anyone (as my generalist teachers so proved in a music context) with a modicum of music-listening experience can sense the start and stop of a melodic contour and would subconsciously register this as several narrative actions over time. Unfortunately, for the most part, the actions within the scene never moved the story forward because they were much like 'hand waving.' That is to say, the movement was not anchored to transitive or intransitive actions that related to the content of the words (i.e., objects), with the exception of a very few places: for instance, when the defense attorney asks one of the witnesses whether he can 'explain' academic freedom, having badgered him into admitting that 'no test' can conclusively judge the competence of a student or teacher. The defense attorney then 'retrieves' the university document to read aloud the points on academic freedom. But as my daughter said, "Big deal, that just showed he had to cite facts to prove his point." When asked whether he had 'proved his point,' she simply rejoined that the defendant on trial, by the film's end, appeared to be unfairly judged, and that the court case was in violation of her 'academic freedom.' But once again, her candor took me by surprise with her final take on the matter. "So what?" she asked.

Experiences not abstracted from our sensory experiences

Indeed, the 'so what' that Miles Davis had so cleverly composed continued to follow me throughout my research process, right to the end of my experiment in filmmaking as research and pedagogy. As Pinker (2007) noted, "Our experiences unfold in a medium of space and time, which isn't abstracted from our sensory experiences but rather organizes our sensory experiences in the first place" (p. 157). Pinker invites us further to take note of the following.

We are not just a passive audience to these experiences but interpret them as instances of general laws couched in logic and scientific concepts like 'and,' 'or,' 'not,' 'all,' 'some,' 'necessary,' 'possible,' 'cause,' 'effect,' 'substance,' and 'attribute' (the last two pertaining to our concept of matter, such as the ability to conceive of a melting ice cube and the puddle it turns into as the same stuff). These concepts must arise from our innate constitution, because nothing in our sensory experience compels us to think them (pp. 157-158).

As much as I enjoyed reading and unpacking all of the literature that went into my thoughts on the topics that converge on language, literacy, and the arts, in particular Pinker's writings, it is in that last sentence that he gives himself away as a 'nativist.' It is precisely through our sensory experiences, coupled with innate systems of perception and cognition, that we are able to move from core to extended consciousness.
And it is precisely because the brain is parsimonious in its extraordinary design and can use multiple systems to infer, predict, plan, and take action whenever we interact with the world, that we are necessarily embodied brains swimming in an ocean of objects ready for our comprehending minds to construct meanings.

Building capacity: the brain that changes through pedagogy

The question that sits foremost in my mind, in the end, and one that I pose at this juncture, is whether doing pedagogy in the same way for the same reasons has not produced perceptive and cognitive limits to learning that could otherwise be removed. And if perceptive and cognitive limits were removed, what effect would this have on learning? From a pedagogical perspective, one may wonder whether we have an adequate understanding of the brain that would allow educators to change what and how we teach. As I recall the goal to 'build capacity,' which was one of four driving mission statements at University School where I first implemented a lengthy film unit of study (along with building relationships, community, and knowledge), today I am aware that this capacity was not specifically directed toward changing perceptive (i.e., attention and discernment) or cognitive functions (e.g., the capacity to reason). Despite the fact that our school principal was strongly influenced by brain research, addressing specific perceptive and cognitive functions to maximize meaning across learning was never part of our many staff discussions. And attending to emotional resonances was even further from our objectives. Rather, capacity was expressed in terms of self-awareness or social intelligence aimed toward empowerment, good learning habits, and self-esteem—all things teachers believed could be changed with enough effort paid to those areas. Except for those teachers whose training was in 'special education,' which offered them more techniques and strategies for addressing some learner weaknesses, most teachers were disadvantaged with respect to addressing 'academic' concerns. Understandably, when a learning weakness was identified, such as memory, attention, symbolic or oral processing, or written or motor output, we accommodated the learning through an adjusted curriculum (e.g., allowing more time to write in cases of output problems, chunking learning in cases of difficulties with memory, etc.). With a common view of the brain as 'fixed' or 'hard wired' in childhood, despite long-standing evidence that the brain is plastic and able to change dramatically, the best a teacher has at their disposal is the hope of accommodating learning weaknesses through 'compensation' (Doidge, 2007; Eaton, 2011). Therefore, what we implemented at University School were, first, individual educational plans (IEP) or individual personal plans (IPP) that adjusted the curriculum to fit the 'cognitive dysfunction,' as opposed to searching for means that would bring about changes to the neuronal web of brain activity. And second, though we implemented what we termed Creative Applications (i.e., music, drama, movement and technology) in both specialized and integrated learning contexts, we had no means by which to judge the efficacy of this approach. In both cases, therefore, it was our understanding that learning was being accommodated and enhanced through compensatory techniques and 'multimodal' learning strategies, i.e., the arts and technology.
While the intent fit our mission statement, there was no procedure set in place by which to know whether strategic plans or creative applications achieved anything more than accommodating learners less suited to the pace and demands of an academic setting. At best, we were able to account for a more positive attitude toward learning through feedback from students and parents, which was no small accomplishment. We intuited that it was better to work in a happy community than one that feels threatened! In the end, there was never any intent to test activities in terms of changing the brain's capacity. Though, in all fairness, as we were in an experimental phase, we did not systematically implement strategic plans and the arts, or test the results as one would if precise findings were being sought. All things considered, comparing the arts (e.g., film, music, dance, etc.) with language to accomplish the ultimate goal of literacy is perhaps too great a task if we do not search for the fundamental forces that produce so many extensions of the mind, which are the symbol systems, namely, expressions of reality that we utilize to share our thoughts with one another. The bridge between the two (i.e., language and the arts) will forever be stymied if particular comparisons to linguistic syntax, grammar and semantics continue to be made, since these operate quite differently in varied modalities even though their general forms will nonetheless appear. As Pinker (2007) expressed it, concepts are "like a game of Whack-a-Mole," disappearing in one context to reappear in another (p. 80). Language, as an abstraction, seems to be awash in figures of speech (e.g., metaphor), while the arts appear to be the concrete act of motor, temporal, and spatial qualities. But this proves not to be entirely the case, for if it were, film arts and the arts generally would be 'grasped' with deeper comprehension and insight by greater numbers of people—which is precisely why these symbolic systems are so intriguing in the first place. If one were to study the relationship of all symbol systems (i.e., language) and cognition by examining findings in neuroscience, the possibility exists for understanding the brain's capacity to transform neural activity into rich and complex ways of thinking and expressing using multimodalities. The mutual development of language and the arts, in relation to the brain-mind-body complex, could be ascertained as highly advantageous to the human species for reasons beyond social empowerment. Having since learned of educational programs that target changes to cognitive capacities in learners identified with learning dysfunctions, which simultaneously affect social and emotional learning, one asks, "If changing brain capacity were indeed possible in educational settings, would this not be the greatest pedagogical goal?" The question is one worth pondering in light of the few schools whose pedagogy has been designed primarily to change cognitive functions, i.e., the Arrowsmith School in Toronto and the Eaton Arrowsmith School in Vancouver, established by Barbara Arrowsmith-Young and Howard Eaton, respectively (Doidge, 2007; Eaton, 2011). With the investigation of the results of 35 years of practice in those schools now being undertaken by experts in the fields of psychometric testing and cognitive science, there is ample evidence that a pedagogy aimed at changing brain capacity has dramatic results (Doidge, 2007; Eaton, 2011).
The existence of schools of this kind demonstrates that a change in attitude toward pedagogy as a way to change the brain's capacity is under way, albeit slowly. It is a debt owed to findings in neuroscience. If I had an opportunity, the one thing I would ask Barbara Arrowsmith-Young, whose remarkable accomplishments were featured in a CBC special, The Brain That Changes Itself, based on Doidge's (2007) book, is whether or not there is a place for the arts to be fully implemented in a program aimed toward changing perceptive and cognitive functions. But not the arts taught 'willy-nilly'; rather, the arts as carefully conceived approaches to addressing cognition, in a similar manner as Arrowsmith-Young designed over her thirty-five years of studying brain functions. In addition to the short perceptive and cognitive activities, many of which have been developed as computer programs designed to improve a range of neurological dysfunctions (e.g., autism, aphasia, attention deficits) through drill and repetition, the Arrowsmith program also includes a task called artifactual reasoning. That task requires a learner to describe, infer, and predict the visual 'meanings' of Norman Rockwell paintings. The paintings are utilized because of their clarity in visually expressing 'social contexts'; the social reasoning impairment that is aided by this activity, as Eaton (2011) describes, sometimes requires that a learner 'zoom in' on just a small frame of the painting (e.g., the eyes) and slowly 'zoom out' as whole meanings unfold in the mind of the learner. One of the most popular cognitive tasks I witnessed the students engaged in at the school was reading a multi-handed analogue clock, which required making precise calculations from each of the positions of the clock's hands. This first cognitive activity, which Arrowsmith-Young designed to assist her own cognitive weakness, based on Luria's work, "helped her develop the capacity to grasp logic, see cause and effect, and understand mathematical concepts" (p. 44). Reasoning, which depends entirely on one's ability to make causal connections, is enabled through the clock activity precisely because of its inherent mechanical movement and its temporal and spatial qualities. Surely music pedagogy has something to offer that can be as rewarding and perhaps more stimulating than reading the hands of a clock. Though I have no intention of disparaging this activity, which I find to be a remarkable cognitive feat, I simply want to point out that music, with its innate temporal, spatial and movement qualities, may be the right extension for developing new reasoning circuits that unlock the causal relationships in the world that surrounds us. And if reading visual images and clocks were not also akin to reading movies, I would be very surprised. In the end, as far as the design of the Arrowsmith schools is concerned, I would not suggest changing any of the cognitive exercises that have been shown to change the results of psychometric testing (Eaton, 2011). Nor would I suggest that the school reinstate the curriculum that is set aside in favor of the cognitive tasks, since that setting aside proves not to hold the students back once they are reintegrated into a regular classroom.
In fact, despite having been away from their regular classroom for three years while studying exclusively at the school, Arrowsmith students have been shown to master curriculum content upon their return (in some cases proving more advanced), and the results call into question our continued focus on building scholastic learning around curriculum content. Setting those results aside, however, I wonder whether a carefully designed arts program, partnered with the knowledge of cognitive science and neuroscience, could be added to address the kinds of learning deficits that impede social, cognitive, and emotional learning. As far as the film experiment is concerned, as I have explained it through this writing, I will continue to probe deeper into the manner by which films may become part of a program of studies that targets the scope of literacy we desire to foster in schools, but also the manner by which filmmaking, as a research methodology, may teach us more than merely the study of language with respect to the relationships that exist between perception, cognition, and emotion. As a result of the findings drawn from neuroscience, some of the most exciting approaches implemented today to solve vexing problems of mind, such as Ramachandran's (2008) mirrors used by phantom limb patients, have been designed by creative 'practitioners' in search of possibilities in the 'everyday' objects and actions that surround them. From the simplest of exercises and tools, such as reading clocks or visual artifacts, literacy may yet shift to a plane of understanding that reflects both core and extended consciousness. Reflecting on a lifetime of dance and the exploration of many more movement arts, such as speech arts, drama, music, and film, along with all that I have written in these pages, there is little doubt in my mind that movement is the cornerstone of our thoughts (i.e., images) and expressions. This appears natural, since we are inevitably bathed in a dynamic universe from conception to death. Aside from its essential, direct relationship to existence itself, one may question why movement is so important. The brain is constantly and without rest mapping a dynamic universe. Never fixed, its neural makeup, from the simplest to the complex, is topographical in nature. That means the brain literally mirrors the world we live in. For instance, the auditory neural network in the cochlea is anatomically arranged much as the pitches are arranged on a piano. Neurons are positioned topographically in relation to our body to respond temporally and spatially in the same sequence and order. Thus, neurons are arranged and fire according to the physical properties our bodies are designed to possess as they move through time and space. Every action—for instance every bend of the finger, leg, and arm—is recorded exactly as we observe and perform that action (Doidge, 2007). With a brain so designed to learn and master its environment, clearly any change we desire to foster in our neural maps, whether for optimal development or to 'fix' what has gone awry, implies that movement must be initiated for change to occur at all. Thus, by enabling, focusing, halting, strengthening, refining, and redirecting movement, within and without the body, the brain is able to form important topographical neural maps, and those maps result in the formation of the body-mind we evolve into.
In other words, the brain veritably mirrors all to which it relates, and the mind emerges as a result of such mapping (Damasio, 1994; Doidge, 2007). How is it possible to think of the mind evolving otherwise? It is only through movement that we are mindfully (i.e., consciously) aware of anything that surrounds us, and it is through movement that we become aware of causality, which allows us to infer, hypothesize, predict (i.e., the if-then clause) and plan our actions. It is movement that leads us to discern and judge the complex patterns as they arise in an interdependent and interconnected dynamic universe, and movement that is the sum of our thoughts and expression. Our appetites and interests are founded upon our ability to discern and filter the ocean of information that forms within and surrounds us. Those appetites and interests direct our ability to navigate our surroundings and negotiate relations with objects that appear temporally and spatially within a sensorisomatic framework. Perceptively and cognitively, over time and space, we develop the means to apprehend complex patterns, which forms our ability to learn. Yet a full comprehension of complex patterns, which arise in a complex dynamic environment (i.e., multiple stimuli in constant flux), is highly dependent on the speed of thought and the size of memory, which are also plastic and ever changing. Thus, the medium of the brain-body-mind is in constant communication with the medium of its existence. And it is precisely because we exist in an ever changing, dynamic environment that movement is essential to the development of complex, thinking organisms. Movement is at the heart of the biological, physical, and neurological programs now being designed to change the brain's neural functions (Doidge, 2007). As Doidge (2007) eloquently describes in his book, we are at the frontier of brain science. The key is in understanding and applying to our pedagogy three neurological maxims: (1) use it or lose it; (2) neurons that fire together wire together; and (3) neurons that fire apart wire apart. These three maxims are essential to understanding both the early neural plasticity of childhood, which differentiates or dedifferentiates neurons (e.g., in the case of autism), and the plasticity of the brain as it matures into adulthood. Furthermore, Doidge (2007) explains thoroughly that neural development requires appropriate enablers to allow maximal differentiation of neurons (i.e., the rise of a panoply of neural branching that strengthens perceptive, emotional, physical, and cognitive functions), but also constraints on incessant stimuli that threaten to dedifferentiate neurons (i.e., cause neurons to weaken, dissociate or atrophy). From relieving those with obsessive-compulsive thoughts to reaching those who withdraw from society, as displayed by autistic children; from exciting the muscles enervated by cerebral traumas to exciting the temporal, parietal, and occipital areas of the brain implicated in difficulties comprehending various visual, written, oral, and kinesthetic modes of expression, it is clear that brain science holds much promise for changing our agency. "How can we know," as William Butler Yeats queried, "the dancer from the dance?" Perhaps by observing carefully the relationship between the two, then testing and designing ways to bring about the fluid changes we seek. As artists and arts educators, we are intimately familiar with movement and the resultant changes in perception and cognition.
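The second of Doidge's maxims, 'neurons that fire together wire together,' corresponds to the classical Hebbian learning rule. The following is a minimal sketch (in Python; the learning and decay rates are arbitrary illustrative values of my own, not drawn from Doidge) of how co-activation strengthens a connection while disuse lets it fade, the 'use it or lose it' side of the same rule:

    def hebbian_update(w, pre, post, rate=0.1, decay=0.01):
        """One step of a simple Hebbian rule with passive decay.

        The weight w grows when pre- and post-synaptic activity coincide
        ('fire together, wire together'); otherwise it slowly decays
        ('use it or lose it').
        """
        return w + rate * pre * post - decay * w

    w = 0.5
    for step in range(5):
        w = hebbian_update(w, pre=1.0, post=1.0)   # co-activation strengthens
    print(round(w, 3))
    for step in range(5):
        w = hebbian_update(w, pre=0.0, post=0.0)   # disuse weakens
    print(round(w, 3))

In such a toy model, repeated and focused practice is precisely what keeps the relevant connections strong, which is where pedagogy comes in.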
When focus is fostered and practice is encouraged in the arts, we tacitly understand the gains that learners make across multiple areas of learning. What remains for arts-based researchers and educators is to tie the knowledge gained from neuroscience to arts pedagogy, and to develop the knowledge and skills needed to create thoughtful learning environments and approaches, as a way of strengthening the place of the arts within the scope of human development and learning. And while it may seem that all the words preceding this point are motion forward in time, they are really nothing more than a reflection of action already complete; a reflection that, if expressed and interpreted well, and researched in a variety of contexts, could lead to exciting new actions in education.

REFERENCES

Allen, R. (1940, June). Our new illiterate class. The Journal of Higher Education, 11 (6), 322-327.
Allen, R. & Smith, M. (Eds.). (1997). Film theory and philosophy. Oxford: Clarendon Press.
Amabile, T. (1996). Creativity in context. Boulder, Colorado: Westview Press.
Anderson, M. (2008). Taking liberties: The Payne Fund Studies and the creation of the media expert. In Grieveson, L. & Wasson, H. (Eds.), Inventing Film Studies (pp. 38-65). Durham, NC: Duke University Press.
Arcand, D. (Director/Producer). (1986). Le declin de l'empire américain [DVD].
Arcand, D. (Director/Producer). (1989). Jesus de Montréal [DVD].
Arendt, H. (2003). Responsibility and judgment. New York: Schocken Books.
Aristotle (1941). Nicomachean Ethics. In McKeon, R. (Ed.), The basic works of Aristotle (pp. 935-1112). New York, NY: The Modern Library.
Asher, N. (2007, August). A web of words: Lexical meaning in context. Retrieved November, 2010 from http://mutis.upf.es/~mcnally/ESSLLI/asher_webofwords.pdf
Aumont, J. (1996). A quoi pensent les films. Paris: Nouvelles Editions Séguier.
Ayers, W. & Miller, J. (Eds.). (1998). A Light in Dark Times: Maxine Greene and the Unfinished Conversation. New York: Teachers College Press.
Bakhtin, M.M. (1981). The dialogic imagination. (C. Emerson & M. Holquist, Trans.). Texas: University of Texas Press.
Barnouw, E. (1993). Documentary: A history of the non-fiction film. New York, NY: Oxford University Press.
Barone, T., & Eisner, E. (1997). Arts based educational research. In M. Jaeger (Ed.), Complementary methods for research in education, 2nd ed. (pp. 36-116). Washington, DC: American Educational Research Association.
Barone, T. (2003). Challenging the educational imaginary: Issues of form, substance, and quality in film-based research. Qualitative Inquiry, 9 (2), 202-217.
Barone, T. (2006). Arts-based educational research then, now and later. Studies in Art Education, 48 (1), 4-8.
Baudrillard, J. (1981). For a critique of the political economy of the sign. St. Louis: Telos Press.
Baudrillard, J. (1998). The consumer society. Paris: Gallimard.
Bench, J., & Parker, A. (1971). Hyper-responsivity to sounds in the short-gestation baby. Developmental Medicine and Child Neurology, 13, 15-19.
Bernard, L.L. & Bernard, J.S. (1930, October). Behavior, individual and social. Social Forces, 9 (1), 125-131.
Bohm, D. (1998). On creativity. New York, NY: Routledge.
Bordwell, D. (1980). French impressionist cinema: Film culture, film theory, and film style. New York: Arno Press.
Bordwell, D., & Carroll, N. (Eds.). (1996). Post-theory: Reconstructing film studies. Madison: University of Wisconsin Press.
Branigan, E. (1989, Autumn). Sound and epistemology in film. The Journal of Aesthetics and Art Criticism, 47 (4), 311-324.
Branigan, E. (1997). Sound, epistemology, film. In Allen, R. & Smith, M. (Eds.), Film theory and philosophy (pp. 95-125). Oxford: Clarendon Press.
Braudy, L., & Cohen, M. (Eds.). (1999). Film theory and criticism: Introductory readings. New York: Oxford University Press.
Broca, P. (1865). Sur le siege de la faculte du langage articule. Bulletin de la Societe d'anthropologie, 6, 337-93.
Brooks, J. & Brooks, M. (1993). In search of understanding: The case for constructivist classrooms. Alexandria, VA: Association for Supervision and Curriculum Development.
Buckingham, D. (1990). Watching media learning: Making sense of media education. Bristol, PA: Falmer Press.
Buckingham, D. (2000). The making of citizens: Young people, news, and politics. London: Routledge.
Buckingham, D., Pini, M. & Willett, R. (2007). 'Take back the tube!': The discursive construction of amateur film and video making. Journal of Media Practice, 8 (2), 183-201.
Buckland, W. (Ed.). (1995). The film spectator: From sign to mind. Amsterdam: Amsterdam University Press.
Buckland, W. (2000). The cognitive semiotics of film. Cambridge: Cambridge University Press.
Burgess, N., Jeffery, K.J. & O'Keefe, J. (Eds.). (1999). The hippocampal and parietal foundations of spatial cognition. Oxford, UK: Oxford University Press.
Casetti, F. (1995). Face to face. In Buckland, W. (Ed.), The film spectator: From sign to mind (pp. 118-139). Amsterdam: Amsterdam University Press.
Casetti, F. (1999). Theories of cinema 1945-1995. (F. Chiostri & E. Bartolini-Salmebeni, Trans.). Texas: University of Texas Press.
Child, E. (1939, November). Making motion pictures in the school. The English Journal, 26 (9), 706-712.
Chateau, D. (1995). Towards a generative model of filmic discourse. In Buckland, W. (Ed.), The film spectator: From sign to mind (pp. 35-44). Amsterdam: Amsterdam University Press.
Chen, J. & Nedivi, E. (2010). Neuronal structural remodeling: Is it all about access? Current Opinion in Neurobiology, 20 (5), 557-62.
Chomsky, N. (1957). Syntactic structures. The Hague: Mouton.
Chomsky, N. (1972). Language and mind, 2nd ed. New York: Harcourt Brace Jovanovich.
Chomsky, N. (2002). On nature and language. Cambridge: Cambridge University Press.
Colapinto, J. (2009, May 11). Brain games: The Marco Polo of neuroscience. The New Yorker, 76-87.
Colin, M. (1995). The grande syntagmatique revisited. In Buckland, W. (Ed.), The film spectator: From sign to mind (pp. 45-110). Amsterdam: Amsterdam University Press.
Considine, D. & Haley, G. (1992). Visual messages: Integrating imagery into instruction. Colorado: Teacher Ideas Press.
Considine, D. (2009, March/April). From Gutenberg to Gates: Media matters. The Social Studies, 100 (2), 63-73.
Cox, C. (1984, January). Shooting for a Judy award: A documentary on beginning filmmakers. The English Journal, 73 (1), 46-50.
Crick, F. (1994). The astonishing hypothesis: The scientific search for the soul. London: Simon and Schuster.
Csikszentmihalyi, M. (1990). Flow: The psychology of optimal experience. New York, NY: Harper & Row.
Csikszentmihalyi, M. (1996). Creativity: Flow and the psychology of discovery and invention. New York: Harper & Row.
Currie, G. (1999). Cognitivism. In Miller, T. & Stam, R. (Eds.), A companion to film theory (pp. 105-122). Malden, MA: Blackwell Publishers.
Damasio, A.R. & Damasio, H. (1988). Advances in the neuroanatomical correlates of aphasia and the understanding of the neural substrates of language. In Hyman, L.M. & Li, C.N. (Eds.), Language, speech and mind. New York: Routledge.
Damasio, A. (1994). Descartes' Error: Emotion, reason, and the human brain. New York, NY: G.P. Putnam.
Damasio, A. (1999). The feeling of what happens: Body and emotion in the making of consciousness. Orlando, FL: Harcourt Publishing Company.
Damasio, A. (2003). Looking for Spinoza: Joy, sorrow, and the feeling brain. Orlando, FL: Harcourt Publishing Company.
Damasio, A., Tranel, D. & Rizzo, M. (1999). Disorders of complex visual processing. In Mesulam, M. (Ed.), Principles of Behavioral Neurology. Contemporary Neurology Series (pp. 332-372). Philadelphia: F.A. Davis.
Darling-Hammond, L. (1997, September). Quality teaching: The critical key to learning. Principal, 5-11.
Darwin, C. (1890). The expression of the emotions in man and animals, 2nd ed. F. Darwin (Ed.). London: John Murray. Retrieved October, 2010 from http://darwin-online.org.uk/contents.html
Darwin, C. (1974). In de Beer, G. (Ed.), Autobiographies: Charles Darwin, Thomas Henry Huxley. New York: Oxford University Press.
Dennett, D.C. (1991). Consciousness Explained. Boston: Little, Brown and Company.
Deleuze, G. (1986). Cinema 1: The movement-image. (H. Tomlinson & B. Habberjam, Trans.). Minneapolis, MN: University of Minnesota Press.
Deleuze, G. (1989). Cinema 2: The time-image. (H. Tomlinson & R. Galeta, Trans.). Minneapolis, MN: University of Minnesota Press.
Deleuze, G. (2004). Difference and Repetition. (P. Patton, Trans.). New York: Continuum.
de Saussure, F. (2002). Écrits de linguistique générale. Paris: Gallimard.
Deutsch, D. (1991). The tritone paradox: An influence of language on music perception. Music Perception, 8, 335-347.
Deutsch, D. (1992). Paradoxes of musical pitch. Scientific American, 267, 88-95.
Dewey, J. (1958). Experience and nature. New York, NY: Dover Publications.
Doll, W. (1993, Summer). Curriculum possibilities in a "post"-future. Journal of Curriculum and Supervision, 8 (4), 277-292.
Doll, W. (1989). Foundations for a post-modern curriculum. Journal of Curriculum Studies, 21 (3), 243-253.
Doidge, N. (2007). The brain that changes itself: Stories of personal triumph from the frontiers of brain science. New York, NY: Penguin Group Inc.
Eaton, H. (2011). Brain school: Stories of children with learning disabilities and attention disorders who changed their lives by improving their cognitive functions. Vancouver, BC: GLIA Publishing.
Eco, U. (1998). Serendipities: Language and lunacy. New York: Columbia University Press.
Edelman, G.M. & Tononi, G. (2000). A universe of consciousness: How matter becomes imagination. New York: Basic Books.
Edwards, C., Gandini, L., & Forman, G. (1993). The hundred languages of children: The Reggio Emilia approach to early childhood education. New Jersey: Ablex Publishing Corporation.
Eisner, E. (1981, March). Mind as cultural achievement. Educational Leadership, 38 (6), 466-471.
Eisner, E. (1997, January). Cognition and representation: A way to pursue the American dream? Phi Delta Kappan, 349-353.
Eisner, E. (2001). Should we create new aims for art education? Art Education, 54 (5), 6-10.
Eisner, E. (2006). Does arts-based research have a future? Studies in Art Education, 48 (1), 9-18.
Eisner, W. (1996). Graphic storytelling & visual narrative. Florida: Poorhouse Press.
Ekman, P. (2003). Emotions revealed: Recognizing faces and feelings to improve communication and emotional life. New York: Henry Holt.
Evans, G. (2005). John Grierson: Trailblazer of documentary film. Montreal, Quebec: XYZ Publishing.
Ely, M. (1992). Software for classroom music making. Music Educators Journal, 78 (8), 41-43.
Evans, J. & Hall, S. (Eds.). (1999). Visual culture: The reader. London: Sage Publications.
Feldman, S. (2011). Film education. Retrieved Sept, 2010 from http://www.thecanadianencyclopedia.com/index.cfm?PgNm=TCE&Params=A1ARTA0011487
Fels, L. (2004). Complexity, teacher education and the restless jury: Pedagogical moments of performance. Complicity: An International Journal of Complexity and Education, 1 (1), 73-98.
Fels, L. (2002). Spinning straw into gold: Curriculum, performative literacy and student empowerment. English Quarterly, 34 (1, 2), 3-9.
Fodor, J.A. (1975). The language of thought. Cambridge, MA: Harvard University Press.
Freire, P. (1970). Pedagogy of the oppressed. New York: Continuum International Publishing Group.
Freire, P. (2005). Education for critical consciousness. New York: Continuum International Publishing Group.
Freeman, F.N. (1930, April). Review: Children and Movies by Alice Miller Mitchell. The Elementary School Journal, 30 (8), 636-637.
Gagne, R. (1985). The conditions of learning (4th ed.). New York: Holt, Rinehart & Winston.
Gardner, H. (1983). Frames of mind: The theory of multiple intelligences. New York, NY: Basic Books.
Gardner, H. (1993). Creating minds: An anatomy of creativity seen through the lives of Freud, Einstein, Picasso, Stravinsky, Eliot, Graham, and Gandhi. New York, NY: Basic Books.
Garoian, C. & Gaudelius, Y. (2001, Summer). Cyborg pedagogy: Performing resistance in the digital age. Studies in Art Education, 42 (4), 333-347.
Geller, R. & Kula, S. (1969). Toward filmic literacy: The role of the American Film Institute. Journal of Aesthetic Education, 3 (3), 97-111.
Geschwind, N. (1974). The anatomical basis of hemispheric differentiation. In Dimond, S. & Beaumont, J.G. (Eds.), Hemisphere function in the human brain (pp. 7-24). London: Paul Elek.
Ginsberg, W. (1940, January). Films for high-school English. The English Journal, 29 (1), 44-49.
Gladwell, M. (2005). Blink: The power of thinking without thinking. New York: Little, Brown and Company.
Goleman, D. (1995). Emotional intelligence. New York: Bantam.
Gordon, E. (1990). A music learning theory for newborn and young children. Chicago, IL: G.I.A. Publications, Inc.
Gouzouasis, P. (1991). A progressive developmental approach to the music education of preschool children. Canadian Music Educator, 32 (4), 45-52.
Gouzouasis, P. (1992, Fall/Winter). An organismic model of music learning for young children. Update, 13-18.
Gouzouasis, P. (1994). A developmental model of music literacy. Research Forum, 12, 21-24.
Gouzouasis, P. (1998). Thoughts on thoughts: Are you thinking musically or just thinking about it? The BC Music Educator, 41 (2), 7-13.
Gouzouasis, P. (2001). The role of the arts in new media and Canadian education for the 21st century. Education Canada, 41 (2), 20-23.
Gouzouasis, P. & LaMonde, A. (2004). Classroom uses of wireless and portable technologies in an arts-based teaching and learning model. International Journal of Learning, 11.
Gouzouasis, P. & LaMonde, A. (2005, July). The use of tetrads in the analysis of arts-based media. International Journal of Education & the Arts, 6 (4). Retrieved July 4, 2005 from http://ijea.asu.edu/v6n4/.
Gouzouasis, P. (2006). Technology as arts-based education: Does the desktop reflect the arts? Arts Education Policy Review, 107 (5), 3-9.
Gouzouasis, P., LaMonde, A.M., Ricketts, K., Ramsey, L. & Mackie, A. (2007). New forms of narrative in arts-based research: Explorations in music, dance, dialogue, and play. American Educational Research Association Conference, Chicago, April 8-13.
Grandin, T. (1995). Thinking in pictures: And other reports from my life with autism. New York: Doubleday.
Gray, W. (1940, April). The language arts. Review of Educational Research, 10 (2), 79-106.
Greene, M. (1995). Releasing the imagination: Essays on education, the arts, and social change. California: Jossey-Bass.
Grice, P. (1975). Logic and conversation. In Davidson, D. & Harman, G. (Eds.), The logic of grammar (pp. 64-75). Encino, CA: Dickenson.
Grieveson, L. (2008). Cinema studies and the conduct of conduct. In Grieveson, L. & Wasson, H. (Eds.), Inventing Film Studies (pp. 3-37). Durham, NC: Duke University Press.
Grieveson, L. & Wasson, H. (Eds.). (2008). Inventing Film Studies. Durham, NC: Duke University Press.
Grobel, L. (2000). Above the line: Conversations about the movies. New York: Da Capo Press.
Grodal, T.K. (1994). Cognition, emotion, and visual fiction: Theory and typology of affective patterns and genres in film and television. Copenhagen: University of Copenhagen, Dept. of Film and Media Studies.
Grodal, T.K. (1997). Moving pictures: A new theory of film genres, feelings, and cognition. Oxford: Clarendon Press; New York: Oxford University Press.
Grodal, T.K. (2009). Embodied visions: Evolution, emotion, culture and film. New York: Oxford University Press.
Guilford, J.P. (1987). Creativity research: Past, present and future. In Isaksen, S.G. (Ed.), Frontiers of creativity research: Beyond the basics (pp. 33-65). Buffalo, NY: Bearly Limited.
Guterson, D. (1995). Snow falling on cedars. New York: Vintage Books.
Halliday, M. (1973). Explorations in the functions of language. London: Edward Arnold.
Hamlin, J., Wynn, K., & Bloom, P. (2007). Social evaluation by preverbal infants. Nature, 450, 557-559.
Haraway, D. (1991). A cyborg manifesto: Science, technology and socialist feminism in the late twentieth century. In Simians, cyborgs, and women: The reinvention of nature (pp. 149-181). London: Free Association Books.
Hayes, B. (1999, May-June). Seeing between the pixels. American Scientist, 87 (3), 202.
Hickok, G. (2008). Eight problems for the mirror neuron theory of action understanding in monkeys and humans. Journal of Cognitive Neuroscience, 21 (7), 1229-1243.
H'Doubler, M. (1940). Dance: A creative art experience. Madison, WI: University of Wisconsin Press.
Heidegger, M. (1968). What is called thinking? (J. Glenn Gray, Trans.). New York, NY: Harper & Row Publishers.
Heidegger, M. (1971). Poetry, language, thought. (A. Hofstadter, Trans.). New York, NY: Harper & Row Publishers.
Heisenberg, W. (1958). Physics and philosophy: The revolution in modern science. New York, NY: Harper & Row.
Hodge, R., & Kress, G. (1988). Social semiotics. Cambridge: Polity.
Hubel, D.H. & Wiesel, T.N. (1962). Receptive fields, binocular interaction and functional architecture in the cat's visual cortex. Journal of Physiology, 160, 106-154.
Hubel, D.H. & Wiesel, T.N. (1970). The period of susceptibility to the physiological effects of unilateral eye closure in kittens. Journal of Physiology, 206, 419-436.
Hubel, D.H. & Wiesel, T.N. (1974). Sequence regularity and geometry of orientation columns in the monkey striate cortex. Journal of Comparative Neurology, 158, 267-294.
Hymel, S., Schonert-Reichl, K.A., Bonanno, R.A., Vaillancourt, T., & Rocke Henderson, N. (2010). Bullying and morality: Understanding how good kids can behave badly. In S. Jimerson, S.M. Swearer & D.L. Espelage (Eds.), The handbook of bullying in schools: An international perspective (pp. 101-118). New York: Routledge.
Irwin, R.L. & de Cosson, A. (Eds.). (2004). A/r/tography: Rendering self through arts-based living inquiry. Vancouver, BC: Pacific Educational Press.
Irwin, R.L. (2003). Towards an aesthetic of unfolding in/sights through curriculum. Journal of the Canadian Association for Curriculum Studies, 1 (2), 63-78.
Jackendoff, R. (2002). Foundations of language: Brain, meaning, grammar, evolution. New York: Oxford University Press.
Jenkins, H. (1999). The work of theory in the age of digital transformation. In Miller, T. & Stam, R. (Eds.), A companion to film theory (pp. 234-261). Malden, MA: Blackwell Publishers.
Jenkins, H. (2006). Confronting the challenges of participatory culture: Media education for the 21st century. Chicago, IL: MacArthur Foundation.
Jowett, G., Jarvie, I. & Fuller, K. (1996). Children and the movies: Media influence and the Payne Fund Foundation controversy. Cambridge: Cambridge University Press.
Kaiser Family Foundation Study. (2010). Generation M2: Media in the lives of 8- to 18-year-olds. Menlo Park, CA: Henry J. Kaiser Family Foundation.
Kant, I. (1998). Prolegomena to any future metaphysics. In Pojman, J. (Ed.), Classics of philosophy (pp. 774-856). New York: Oxford University Press.
Kivy, P. (1997). Music in the movies: A philosophical inquiry. In Allen, R. & Smith, M. (Eds.), Film theory and philosophy (pp. 308-328). Oxford: Clarendon Press.
Kracauer, S. (1960). Theory of film: The redemption of physical reality. New York: Oxford University Press.
Kress, G., & van Leeuwen, T. (2006). Reading images: The grammar of visual design. New York: Taylor & Francis.
Lakoff, G. & Johnson, M. (1980). Metaphors we live by. Chicago: University of Chicago Press.
Langer, S. (1942). Philosophy in a new key: A study in the symbolism of reason, rite and art. Cambridge, MA: Harvard University Press.
Langer, S. (1953). Feeling and form. New York: Charles Scribner's Sons.
Langley Schools Music Project. (1976-77). Innocence and despair. Retrieved October, 2010 from http://www.keyofz.com/langley/
Latour, B. (1993). We have never been modern. Cambridge, MA: Harvard University Press.
Latour, B. (2004, Winter). Why has critique run out of steam? From matters of fact to matters of concern. Critical Inquiry, 30 (2), 225-248.
Lecanuet, J. (1996). Prenatal auditory experience. In I. Deliege & J. Sloboda (Eds.), Musical beginnings: Origins and development of musical competence (pp. 3-34). New York: Oxford.
Lecanuet, J., & Schaal, B. (1996). Fetal sensory competencies. European Journal of Obstetrics, Gynecology and Reproductive Biology, 68, 1-23.
Leggo, C. (2001). Living in words, living in the world: Literature and identity. English Quarterly, 33 (1/2), 13-28.
Leggo, C. (2003). Calling the muses: A poet's ruminations on creativity in the classroom. Education Canada, 43 (4), 1-7.
Leggo, C. (2008). Autobiography: Researching our lives and living our research. In S. Springgay, R. Irwin, C. Leggo, & P. Gouzouasis (Eds.), Being with a/r/tography (pp. 3-23). Rotterdam: Sense Publishers.
Lillo-Martin, D. (1997). The modular effects of sign language acquisition: In support of the Language Acquisition Device. In Marschark, M., Siple, P., Lillo-Martin, D., Campbell, R., & Everhart, V., Relations of language and thought: The view from sign language and deaf children (pp. 153-162). New York: Oxford University Press.
Listone, J. & McIntosh, D. (1970). Children as filmmakers. New York, NY: Van Nostrand Reinhold Company.
Long, G. (1997, Winter). Computer education in today's technology. Family and Education, 16-19.
Lowndes, D. (1968). Filmmaking in schools. London, UK: Watson-Guptill Publications.
Luria, A.R. (1972). The man with a shattered world. New York: Basic Books Inc.
Luria, A.R. (1976). Cognitive development: Its cultural and social foundations (M. Lopez-Morillas & L. Solotaroff, Trans.). Cambridge, MA: Harvard University Press.
Luria, A.R. (1982). Language and cognition. New York: John Wiley & Sons.
Madeja, S. (1993). The age of the electronic image: The effect on art education. Art Education, 46 (6), 8-14.
Malaguzzi, L. (1993a). History, ideas, and basic philosophy. In Edwards, C., Gandini, L., & Forman, G., The hundred languages of children: The Reggio Emilia approach to early childhood education (pp. 41-89). New Jersey: Ablex Publishing Corporation.
Malaguzzi, L. (1993b, November). For an education based on relationships. Young Children, 49 (1), 9-12.
Magnotta, V., Adix, M., Caprahan, A., Lim, K., Gollub, R. & Andreasen, N. (2008). Investigating connectivity between the cerebellum and thalamus in schizophrenia using diffusion tensor tractography: A pilot study. Psychiatry Research: Neuroimaging, 163, 193-200.
Marcus, G. (2008). Kluge: The haphazard construction of the human mind. New York: Houghton Mifflin Company.
Marschark, M. & Everhart, V. (1997). Relations of language and cognition: What do deaf children tell us? In Marschark, M., Siple, P., Lillo-Martin, D., Campbell, R., & Everhart, V., Relations of language and thought: The view from sign language and deaf children (pp. 3-23). New York: Oxford University Press.
McLuhan, M. (1963). Understanding media: The extensions of man. New York: The New American Library.
McLuhan, M. (1967). The medium is the massage: An inventory of effects. New York: Bantam Books.
McLuhan, M. & McLuhan, E. (1988). Laws of media: The new science. Toronto: University of Toronto Press.
Metz, C. (1974). Film language: A semiotics of the cinema. (M. Taylor, Trans.). New York: Oxford University Press.
Metz, C. (1995). The impersonal enunciation, or the site of the film. In Buckland, W. (Ed.), The film spectator: From sign to mind (pp. 140-163). Amsterdam: Amsterdam University Press.
Miller, T. & Stam, R. (Eds.). (1999). A companion to film theory. Malden, MA: Blackwell Publishers.
Miller, T. & Stam, R. (Eds.). (2000). Film and theory: An anthology. Malden, MA: Blackwell Publishers.
Milner, D.A. & Goodale, M.A. (2006). The visual brain in action, 2nd ed. New York: Oxford University Press.
Mitchell, A. (1929). Children and movies. Chicago, IL: University of Chicago Press.
Moore, R. (1991). Technology for teaching: MIDI in the band room. Music Educators Journal, 77 (9), 65-66.
Nakagaki, T., Yamada, H. & Tóth, Á. (2000). Maze-solving by an amoeboid organism. Nature, 407, 470.
Narasimhan, R. (1997). Steven Pinker on 'mentalese.' World Englishes, 16 (1), 147-152.
Narby, J. (2005). Intelligence in nature: An inquiry into knowledge. New York: Penguin Inc.
Nathani, S., Ertmer, D.J., & Stark, R.E. (2006, July). Assessing vocal development in infants and toddlers. Clinical Linguistics & Phonetics, 20 (5), 351-369.
National Education Association. (1886). Science, 8 (182), 91-92.
Nedivi, E. (1999). Molecular analysis of developmental plasticity in neocortex. Journal of Neurobiology, 41 (1), 135-147.
Nedivi, E. (2003, June). Architecture of the brain. MIT lecture series. Retrieved October, 2010 from http://mitworld.mit.edu/video/150
Newman, G., Choi, H., Wynn, K. & Scholl, B. (2008). The origins of causal perception: Evidence from postdictive processing in infancy. Cognitive Psychology, 57, 262-291.
Odin, R. (1995). For a semio-pragmatics of film. In Buckland, W. (Ed.), The film spectator: From sign to mind (pp. 213-226). Amsterdam: Amsterdam University Press.
O'Donoghue, D. (2009). Are we asking the wrong questions in arts-based research? Studies in Art Education, 50 (4), 352-368.
Orff, C. & Walter, A. (1963, Apr-May). The Schulwerk: Its origin and aims. Music Educators Journal, 49 (5), 69-70, 72, 74.
Overton, W. (1984). World views and their influence on psychological theory and research: Kuhn-Lakatos-Laudan. Advances in Child Development and Behavior, 18, 191-226.
Padden, C. & Humphries, T. (1988). Deaf in America: Voices from a culture. Cambridge, MA: Harvard University Press.
Parker, A.J., & Krug, K. (2003). Neuronal mechanisms for the perception of ambiguous stimuli. Current Opinion in Neurobiology, 13, 433-39.
Papert, S. (1980). Mindstorms: Children, computers, and powerful ideas. New York: Basic Books.
Pepper, S.C. (1942). World hypotheses: A study in evidence. Berkeley, CA: University of California Press.
Petric, M. (2001, April). Both semiotics and cognitivism? Journal of Film-Philosophy, 5 (11), 1-14.
Peeters, H. (1996). Psychology: The historical dimension. Netherlands: Syntax Publishers.
Peters, I.L. (1930, May). Review: Children and Movies by Alice Miller Mitchell. Annals of the American Academy of Political and Social Science, 149 (3), 207.
Piaget, J. (1970). The child's conception of movement and speed. (Holloway, G., & Mackenzie, M.J., Trans.). London: Routledge.
Pinar, W.F. (1995). Understanding curriculum: An introduction to the study of historical and contemporary curriculum discourses. New York: Peter Lang Publishing.
Pinker, S. (1994). The language instinct: How the mind creates language. New York: William Morrow and Company.
Pinker, S. (2002). The blank slate: The modern denial of human nature. New York: Penguin Group (USA) Inc.
Pinker, S. (2007). The stuff of thought: Language as a window into human nature. New York: Penguin Group (USA) Inc.
Plantinga, C. & Smith, G. (Eds.). (1999). Passionate views: Film, cognition and emotion. Baltimore: The Johns Hopkins University Press.
Plantinga, C. & Tan, E. (2007). Interest and unity in the emotional response to film. Journal of Moving Image Studies, 4 (1), 2-47. Retrieved November, 2010 from http://www.avila.edu/journal/vol4/Plantinga_Tan_JMIS_def.pdf
Preston, V. (1963). A handbook for modern educational dance. London: Macdonald & Evans.
Prigogine, I. & Stengers, I. (1984). Order out of chaos. In Pickering, J. & Skinner, M. (Eds.). (1990). From sentience to symbols: Readings on consciousness (pp. 59-65). Toronto: University of Toronto Press.
Ramachandran, V.S. & Blakeslee, S. (1998). Phantoms in the brain. New York: William Morrow.
Ramachandran, V.S. (2004). A brief tour of human consciousness. New York: Pi Press.
Ramachandran, V.S. (2006, June). Take the neuron express for a brief tour of consciousness with neuroscientist V.S. Ramachandran. The Science Studio with Roger Bingham on The Science Network. Retrieved October, 2010 from http://thesciencenetwork.org/programs/the-science-studio/take-the-neuron-express-for-a-brief-tour-of-consciousness
Ramachandran, V.S. (2008). The Man with the Phantom Twin: Adventures in the neuroscience of the human brain. New York: Dutton Adult.
Ratey, J. (2001). A user's guide to the brain: Perception, attention and the four theaters of the brain. New York: Pantheon Books.
Rheingold, H. (1985). Tools for thought: The history and future of mind-expanding technology. Cambridge, MA: MIT Press.
Rizzo, M., Nawrot, M. & Zihl, J. (1995, October). Motion and shape perception in cerebral akinetopsia. Brain, 118 (5), 1105-1127.
Rizzolatti, G. & Sinigaglia, C. (2008). Mirrors in the brain: How our minds share actions and emotions. (F. Anderson, Trans.). New York: Oxford University Press.
Rodowick, D.N. (2008). Dr. Strange Media, or how I learned to stop worrying and love film theory. In Grieveson, L. & Wasson, H. (Eds.), Inventing Film Studies (pp. 374-397). Durham, NC: Duke University Press.
Rogers, T. & Schofield, A. (2005). Things thicker than words: Portraits of multiple literacies in an alternative secondary program. In Anderson, J., Kendrick, M., Rogers, T. & Smythe, S. (Eds.), Portraits of literacy across families, communities and schools: Tensions and intersections (pp. 205-220). New York: Lawrence Erlbaum Publishers/Routledge.
Rogers, T., Winters, K., LaMonde, A. & Perry, M. (2010). From image to ideology: Analyzing shifting identity positions of marginalized youth across the cultural sites of video production. Pedagogies: An International Journal, 5 (4), 298-312.
Roland, C. (1990). Our love affair with new technology: Is the honeymoon over? Art Education, 43 (3), 54-60.
Ross, G. (Director/Producer). (1998). Pleasantville [Film/DVD]. (Available from New Line Cinema, USA release).
Rouch, J. (2003). Ciné-ethnography. (S. Feld, Trans.). Minneapolis, MN: University of Minnesota Press.
Sacks, O. (1971). The man who mistook his wife for a hat. New York: Touchstone.
Sacks, O. (1973). Awakenings. New York: Harper Collins.
Sacks, O. (1989). Seeing voices. Los Angeles: University of California Press.
Sacks, O. (1995). An anthropologist on Mars. Toronto: Vintage Canada.
Sacks, O. (1996). Island of the color blind. New York: Vintage.
Sacks, O. (2008). Musicophilia: Tales of music and the brain. New York: Vintage Books.
Sacks, O. (2010). The mind's eye. New York: Alfred A. Knopf.
Sapir, E. (1949). Language: An introduction to the study of speech. Ontario: Harcourt Brace Jovanovich, Inc.
Schafer, R.M. (1986). The thinking ear. Toronto: Arcana Editions.
Schonert-Reichl, K.A., & Hymel, S. (2007). Educating the heart as well as the mind: Why social and emotional learning is critical for students' school and life success. Education Canada, 47, 20-25.
Sefton-Greene, J. (2006). Youth, technology, and media cultures. Review of Research in Education, 30, 279-306.
Sinner, A., Leggo, C., Irwin, R., Gouzouasis, P., & Grauer, K. (2007). Arts-based educational research dissertations: Reviewing the practices of new scholars. Canadian Journal of Education, 29 (4), 1223-1270.
Siple, P. (1997). Universals, generalizability, and the acquisition of signed language. In Marschark, M., Siple, P., Lillo-Martin, D., Campbell, R., & Everhart, V., Relations of language and thought: The view from sign language and deaf children (pp. 24-61). New York: Oxford University Press.
Slawson, B. (1993). Interactive multimedia: The gestalt of a gigabyte. Art Education, 46 (6), 15-22.
Smith, D. (1942). The present status of reading in secondary schools. The English Journal, 31 (4), 274-283.
Smith, G. (2003). Film structure and the emotion system. Cambridge: Cambridge University Press.
Springgay, S., Irwin, R., Leggo, C., & Gouzouasis, P. (Eds.). (2008). Being with a/r/tography. Rotterdam: Sense Publishers.
Stokoe, W., Casterline, D. & Croneberg, C. (1965). A dictionary of American sign language on linguistic principles. Washington, D.C.: Gallaudet University Press.
Stokoe, W., Armstrong, D. & Wilcox, S. (1995). Gesture and the nature of language. Cambridge, NY: Cambridge University Press.
Tammet, D. (2007). Born on a blue day: Inside the extraordinary mind of an autistic savant. New York: Free Press.
Teasdale, J.D., Howard, R., Cox, S., Ha, Y., Brammer, M., Williams, S., & Checkley, S. (1999). Functional MRI study of the cognitive generation of affect. American Journal of Psychiatry, 156, 209-215.
The New London Group. (1996, Spring). A pedagogy of multiliteracies: Designing social futures. Harvard Educational Review, 66 (1), 60-92.
Tierney, R.J. (2001-2002). An ethical chasm: Jurisprudence, jurisdiction and the literacy profession. Journal of Adolescent Literacy, 46 (4), 260-277.
Tierney, R.J. (2009). The agency and artistry of meaning makers within and across digital spaces. In S. Israel & G. Duffy (Eds.), Handbook of research on reading comprehension (pp. 261-288). NY: Lawrence Erlbaum.
Trach, J., Hymel, S., Waterhouse, T., & Neale, K. (2010). Age differences in bystander responses to school bullying: A cross-sectional investigation [Special issue]. Canadian Journal of School Psychology, 25 (1), 114-130.
Tulvist, P. (1991). The cultural historical development of verbal thinking. (M. Jaroszweska Hall, Trans.). Commack, NY: Nova Science Publishers.
University of Washington Education. (2010). Milestones in neuroscience research. Retrieved Nov, 2010 from http://faculty.washington.edu/chudler/hist.html
vanMarle, K. & Wynn, K. (2006). Six-month-old infants use analog magnitudes to represent duration. Developmental Science, 9, 41-49.
Varma, S., McCandliss, B. & Schwartz, D. (2008, April). Scientific and pragmatic challenges for bridging education and neuroscience. Educational Researcher, 37 (3), 140-152.
Vogt, E. (2009). Spanish past-tense verbs up close. New York: McGraw Hill.
Vygotsky, L.S. (1962). Thought and language. (E. Hanfmann & G. Vakar, Eds. & Trans.). Cambridge, MA: MIT Press.
Wald, G. (1984). Life and mind in the universe. International Journal of Quantum Chemistry, 11, 1-15. In Pickering, J. & Skinner, M. (Eds.). (1990). From sentience to symbols: Readings on consciousness (pp. 67-77). Toronto: University of Toronto Press.
Wernicke, C. (1874). Der Aphasische Symptomencomplex. Breslau: Cohn and Weigert.
Wesch, M. (2002, 2007). The machine is us/ing us: Final version. Retrieved Sept, 2010 from http://mediatedcultures.net/ksudigg/?p=84
Wesch, M. (2008, March). YouTube statistics. Digital ethnography. Retrieved Sept, 2010 from http://mediatedcultures.net/ksudigg/?p=163 and http://ksudigg.wetpaint.com/page/YouTube+Statistics
Wesch, M. (2009, January). From knowledgeable to knowledge-able: Learning in new media environments. Retrieved Sept, 2010 from the Academic Commons website: http://www.academiccommons.org/commons/essay/knowledgable-knowledge-able
Wilson, E.O. (1998). Consilience: The unity of knowledge. New York: Knopf/Random House.
Whitehead, A.N. (1938). Modes of thought. New York: Macmillan Company.
Whitman, W. (2006). Leaves of Grass: The original 1855 edition. New York, NY: Simon & Schuster.
Whorf, B.L. (1943). Loan-words in Ancient Mexico. New Orleans: Tulane University of Louisiana.
Wynn, K. (2008). Some innate foundations of social and moral cognition. In P. Carruthers, S. Laurence & S. Stich (Eds.), The innate mind: Foundations and the future. Oxford: Oxford University Press.
Young, K. (1930, September). Review: Children and Movies by Alice Miller Mitchell. The American Journal of Sociology, 36 (2), 306-308.