UBC Theses and Dissertations
Tracing the Dynabook : a study of technocultural transformations Maxwell, John W. 2006

Full Text

TRACING THE DYNABOOK: A STUDY OF TECHNOCULTURAL TRANSFORMATIONS

by John W. Maxwell

MPub, Simon Fraser University, 1997
B.A. (Honours), University of British Columbia, 1988

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY in The Faculty of Graduate Studies (Curriculum and Instruction)

UNIVERSITY OF BRITISH COLUMBIA
November 2006
© John W. Maxwell, 2006

Abstract

The origins of the personal computer are found in an educational vision. Desktop computing and multimedia were not first conceived as tools for office workers or media professionals—they were prototyped as "personal dynamic media" for children. Alan Kay, then at Xerox' Palo Alto Research Center, saw in the emerging digital world the possibility of a communications revolution and argued that this revolution should be in the hands of children. Focusing on the development of the "Dynabook," Kay's research group established a wide-ranging conception of personal and educational computing, based on the ideal of a new systems literacy, of which computing is an integral part.

Kay's research led to two dominant computing paradigms: the graphical user interface for personal computers, and object-oriented programming. By contrast, Kay's educational vision has been largely forgotten, overwhelmed by the sheer volume of discourse on e-learning and the Web. However, an historical analysis of Kay's educational project and its many contributions reveals a conception of educational computing that is in many ways more compelling than anything we have today, as it is based on a solid foundation of educational theory, one that substantially anticipates and addresses some of the biggest civil/political issues of our time: those of the openness and ownership of cultural expression.
The Dynabook is a candidate for what 21st-century literacy might look like in a liberal, individualist, decentralized, and democratic key. This dissertation is a historical treatment of the Dynabook vision and its implementations in changing contexts over 35 years. It is an attempt to trace the development of a technocultural artifact: the Dynabook, itself partly an idealized vision and partly a series of actual technologies. It is thus a work of cultural history. But it is more than simply a looking back; the effective-history of the Dynabook, its various incarnations, and its continuing re-emergence and re-articulation mean that the relevance of this story is an ongoing question which needs to be recognized and addressed by educators, technologists, and learners today. This dissertation represents an introduction to this case.

Table of Contents

Abstract
Table of Contents
List of Figures
Acknowledgements
Dedication
Chapter 1: Introduction
  The Story So Far—A Conventional Mythology
  A Critique of the Popular Mythology
    Impoverished perspectives
  The Division of Labour in Modern Technology
  How Education is Complicit
  Alan Kay and the Dynabook Vision
    From ARPA to Xerox PARC
    Toward the Dynabook
    Elements of the Dynabook Vision
    The fate of the Dynabook
  What Follows
Chapter 2: Positions and Approaches
  Introducing Myselves
    Roots
    My Encounter(s) with Objects
  Why This Study? Why This Approach?
    Reflecting on my history
    "Computer criticism"
    Multiple perspectives, blurred genres
  Methodology and the Problematic of Distance
    Introducing genre theory
    History as politics
    The Dynabook as/in history
  How Do We Know a Good Idea When We See One?
    Personal computing/educational technology as a site of struggle
Chapter 3: Framing Technology
  Technology as Media
    McCullough's Framing of Media
    Latour's Mediation: Articulations and Translations
  Technology as Translation
    Standardization and the Tower of Babel
    The Mechanics of Text
    On 'Abstraction'
    Digital translations
    Software
  The Semiotics of Standardization
    Simulation as Interpretation
  The Ethics of Translation
    Back to the Tower of Babel
    Our responsibility to technology
Chapter 4: Alan Kay's Educational Vision
  Computers, Children, and Powerful Ideas
  "Late Binding" and Systems Design
  Smalltalk—"A New Medium for Communications"
    Objects and messages
    The Design of Smalltalk
    Late binding in Smalltalk
    The Smalltalk environment
  "Doing With Images Makes Symbols"
  Ways of Knowing: Narrative, Argumentation, Systems Thinking
  What is Literacy?
  Vision: Necessary but not Sufficient
Chapter 5: Translating Smalltalk
  Origins: Smalltalk at PARC in the Early Years
    Educational limitations
    Technological limitations
  Smalltalk's Initial Transformation at Xerox PARC
    A Personal Computer for Children of All Ages becomes Smalltalk-80
    From educational research platform to software development tool
    From "designers" to "end-users"
  The Microcomputer Revolution of the Late 1970s
    From a software research tradition to a "gadget" focus
    From a research focus to a market focus
  The Dynabook after Xerox PARC
    The Vivarium Project
    Vivarium research
  HyperCard and the Fate of End-User Programming
    From media environment to "Multimedia Applications"
    From epistemological tools to "Logo-as-Latin"
Chapter 6: Personal Computing in the Age of the Web
  What is a "Powerful Idea," Anyway?
  The 1990s: The Arrival of the Web
    From stand-alone PCs to information appliances
    From Closed to Open Systems
  The Web as an Educational Medium
    From learning experiences to an economy of learning objects
  The Dynabook Today: How Far Have We Come?
    Vendorcentrism
    "New Media" vs. "Cyberculture" in the 21st century
  Lessons from the Open-Source Movement
    Of Unix and other IT cultures
  Authoring in the Age of the Web
    Web authoring and computer literacy
  Why Not Smalltalk? Why Not the Dynabook?
    The Dynabook: existence and essence
Chapter 7: Squeak's Small but Mighty Roar
  Squeak: A Renaissance Smalltalk
    "Back to the Future"
  Squeak as an Educational Platform
    Etoys: Doing with Images makes Symbols
    Squeak vs. Squeakland
    Squeak in School
  The Squeak Community and its Trajectories
    The Blue Plane and the Pink Plane
    Squeak Communities Today
    Squeak in print?
  Where is that Dynabook, Anyway?
    Squeak and Croquet at OOPSLA'04
    Squeak: Mouse that Roared?
Chapter 8: Drawing Things Together
  Where we've Been
  Dynabook: Artifact or Idea?
  Cultural history is not Biography
  Back from the Future
  Who Cares About the Dynabook?
    Kay's relevance to education
    Education and Powerful Ideas
    The Politics of Software Revisited
Bibliography
Appendix A: UBC Research Ethics Board Certificate of Approval

List of Figures

Figure 4.1: Jimmy and Beth with their Dynabooks
Figure 4.2: Cardboard mockup circa 1971-1972
Figure 4.3: Cartoon by Ted Kaehler
Figure 5.1: Kids in front of Alto computer
Figure 5.2: Original overlapping-window interfaces
Figure 5.3: Adele Goldberg's Joe Box in action
Figure 5.4: Marion's painting system
Figure 5.5: A Smalltalk "browser"
Figure 5.6: Playground environment, circa 1990
Figure 7.1: From "Programming Your Own Computer"
Figure 7.2: Playground II "Scriptor"
Figure 7.3: Etoys "Viewer" in Squeak 3.8
Figure 7.4: Etoys tile representation and equivalent Smalltalk code
Figure 7.5: The hallmark "drive a car" Etoy
Figure 7.6: Kay's schema from the 2004 Turing Lecture

Acknowledgements

A good many people contributed in a variety of ways to my being able to take this project on and to complete it. My sincere and lasting thanks go to them.

To Ricki Goldman for making my path clear; Mary Bryson for her insistence on critical vision, and for her unshakable support; Gaalen Erickson for kindness, wisdom, and unflagging interest. To Prescott Klassen for the sense of history and the encouragement to think big; David Porter for giving me creative opportunity and encouragement for my explorations; Rowly Lorimer for the space to think, to work, and to engage; Martin L'Heureux for reliable intellectual foiling, and for sharing much of the path; Avi Bryant for being an exemplar. To Ward Cunningham for Wikis, especially c2, and for the incredible repository of software wisdom collected there; Simon Michael for ZWiki, which served as my writing environment; Wikipedia for more breadth than I have any right to.
To Pavel Curtis and Amy Bruckman for objects; Matthias Müller-Prove for blazing a path through the literature. To Kim Rose for the direct line to the sources and for unfailing patience; Ted Kaehler and Ann Marion for generosity of time and help far beyond what I asked for; BJ Allen-Conn, Bobby Blatt, and Dan Ingalls for their help in putting the pieces together; John Dougan for his help with the images.

To Alan Kay for a world to grow up in, and my Dad for seeing it.

To my wife, Kelly, for constant intellectual stimulation, for historiography, for patience, for superb copyediting, and for a home for me and my ideas; Norm for seeing what I was on about before I even did. To my Mom for the solidest of foundations. To James and Ayla for the future.

Dedication

For James, Ayla, Anja, Cara, Maia, Adrienne, Julien, Ben, Joelle, Bram, and Natalia Joy.

Chapter 1: Introduction

This is a story about educational computing—that is, computers in education. What does that mean, exactly? How we come to an answer to that question is a good deal of what the next 200 pages are about. That is to say, the question of what "computers in education" means isn't a simple one. It is not the kind of question we can answer in a sentence or two and then get on to the business of plugging in cords and training users. Rather, the meaning of computers in education is something that is contested, and has been contested for about forty years already. Over that time, the answer to the question has not become particularly clearer; on the contrary, I will take pains to argue here that it has looked substantially clearer at points in the relatively distant past than it does now.

The story I am about to tell is that of Alan C. Kay and his work on a possible vision of personal and educational computing; a research program which began in the early 1970s at Xerox' research labs in Palo Alto, California and which has gone through a variety of institutional contexts since then, even continuing today. Alan Kay's work is romantically described in a vision he articulated some thirty-five years ago under the rubric of the Dynabook, which continues today to act as a sort of touchstone and reference point for the ongoing development and evolution of a particular rendering of what personal and educational computing might mean. Kay's story isn't well known, compared, for instance, with the story of Steve Jobs and Steve Wozniak inventing the Apple computer in their garage in the late 1970s, or of Bill Gates' founding of Microsoft Corporation in that same decade. But despite its relative obscurity, I will argue that Alan Kay's story is one of the root texts in the construction of personal and educational computing. In delving into this history, and in evaluating our contemporary aporias in the light of it, I will argue that the cultural trajectory of personal and educational computing can be made better sense of—and that opportunities for personal agency, critical understanding, and political action appear—in the light of such a historical study.

A starting point for this research is the constructedness of personal and educational computing. Now, the constructedness of the world is a popular topic in recent social and cultural theory, but what is often missed is the element of ongoing political situatedness and the active and generative space opened up by critical engagement with these constructions. The world is not given, but is in large part created by participants in particular social, cultural, historic, and practical contexts. Moreover, the constructedness of the world does not have to be something over/against regular folks.
A good part of my agenda here is to show that the construction of personal and educational computing is not something done to users, learners, teachers, or even critics. Personal computing is not given; it has been constructed through particular historical contingencies, and, more important, it is continually and ongoingly constructed. What is a computer? What good is it? What is it for? How does it fit into our lives? It is important to remember that these questions remain open. The work of defining this ground is still going on; the game is still up for grabs. I hope to show how Alan Kay's work—beginning in the 1970s—was the first major, sustained entrée into this realm: that is, the work of opening up the possibility of personal agency in the struggle for the construction of meaning and effective action in personal and educational computing. The considerable influence in this area wielded by large market-driven corporations like Apple Computer and Microsoft Corporation is altogether more recent. Furthermore, despite the apparent dominance of corporate market logic in defining the meaning and significance of personal and educational computing, I intend to show how attention to the history of this field can reveal opportunity for individual 'users'—however circumscribed their agency may appear in the face of corporate domination or the threatening chaos of the Internet.

The apparent disempowerment of individual learners, teachers, and other front-line 'users' in the face of a rapidly growing and complexifying world of computing and digital media is the target of this work. In my studies of educational computing, I am repeatedly faced with the challenge of making sense of a field which basically does not make sense—that is, it is without a guiding rationale or set of common principles which might guide action or even critique.
Educational computing seems a multi-headed and often self-contradictory beast, almost wilfully ignorant of its own history, and as a result often at the mercy of whatever fashions or—in the post-9/11 world—terrors may carry the day. The result is that whether the current obsession is to be upgrading our software, updating our blogs, or fending off network-borne viruses, the extent of most users' understanding and feelings of control over what they are doing is, to say the least, compromised.

THE STORY SO FAR—A CONVENTIONAL MYTHOLOGY

You may ask yourself, how did we get here? How do we find ourselves in a world dominated by an often overwhelming technological infrastructure, in which fear and insecurity have become such driving forces? In order to answer this question, we can begin by examining the conventional history of personal computing, that which serves as the origin myth and working model of this world.

In the beginning—so the popular story goes—there were mainframes; computers were enormous, air-conditioned beasts tended by teams of white-coated priests (or, alternatively, by teams of post-WAC gals carrying reels of tape and patch cables). In these early days, the story goes, computers were put to large, institutionalized purposes: taxes, billing, artificial intelligence, and world domination: somehow these increasingly large and powerful machines would surely break their chains and devour humanity, or at least enslave us. But the promethean leap apparently came in the late 1970s, when a ragtag army of hobbyists in southern California—working in pairs in garages—invented the personal computer out of spare parts and baling wire. These early computers were tiny and inexpensive, tended by greasy adolescents in dirty t-shirts. It wasn't long, however, before the smell of money mingled with the odor of solder and the whiff of burning components.
A cadre of early computer entrepreneurs—Steve Jobs, Bill Gates, et al.—set up shop to battle the established computer industry (IBM), which peered down from its mainframes and wondered what to do now that the secrets were out. This 'new world' of computers is romantically captured in a new genre of computer magazines—like BYTE—that appeared in the late '70s and early '80s: half-inch-thick glossy publications stuffed full of opinionated editorials, recipes for homebrew computer projects, and full-page colour advertisements for the new software "titles" which were appearing to tap this new market. Software appeared in three major categories: office productivity software like spreadsheets and word processors—the personal computer had terrible, unresolved issues with legitimacy and desperately desired to be accepted by real businesspeople; computer games—probably the most lucrative market of the three; and educational software—which often looked quite a bit like the games, for marketing reasons at least. Educational software tended to be drill-and-practice exercises, tarted up with as much colour and sound as the makers (and the existing hardware) would allow. And there was also Logo, an educational programming language developed for children and undoubtedly good for developing young minds, though it seemed no one was quite sure how. In any case, regardless of the intellectual depth (or lack thereof) of these works, it was enough to establish educational software as a persistent genre in the minds of the computer-buying public: one of the things that these machines were "good for," a reason for buying—or at least for justifying the purchase.

According to the popular story, three key events in the early 1980s rescued personal computing from its greasy hobbyist image (with a vaguely countercultural air about it) and made it into an economic powerhouse.
The first was IBM's hugely successful introduction of the "IBM PC," which set the paradigm for what a personal computer was, almost completely eliminating all others (other players in the PC market made "PC clones" after this). IBM brought the respectability of established business, and poured marketing money into making the "PC" an indispensable part of small business operations. The second—much more in keeping with the promethean mythology of personal computing—was Apple Computer's 1984 introduction of the Macintosh, branded "the computer for the rest of us." Apple's early marketing of the Mac lives on in popular history. With it, they simultaneously defined a legitimate alternative market for personal computers—easily reduced to "creative" types—and cemented IBM's mainstream business market. In caricaturing IBM's "big brother" image, Apple undoubtedly helped reinforce IBM as the market leader for serious computing. The paradigm—or genre—that IBM established was shored up by Apple's circumscription of the margins. This division lives on in the popular imagination to this day. The third event was not so much a single event as a clear trend: video games' growth into an enormous, lucrative market. Atari was the market leader here in the early 1980s, dabbling in sales of both personal computers and dedicated video-game machines, but more importantly with the design and distribution of the games themselves, to whatever platform. If there was a question—what are they for?—surrounding personal computers, there was no such worry about video games.

Regardless of the ambiguity of what personal computers might be good for, the market and industry surrounding it grew with phenomenal energy through the 1980s and into the 1990s; a new breed of computer millionaire emerged in Silicon Valley, around Boston's "Route 128," and in several other centers in North America (notably Seattle, in Microsoft's case).
There was money to be made, and between the innovating potential of digital technology and the gradually growing demand for it in the marketplace, personal computing flourished. More and more businesses, schools, and individuals bought personal computers; the industry steamed ahead with new and innovative uses for them: productivity software, educational software, games, and now interactive multimedia, graphics, audio and video tools. The "Multimedia PC" of the early 1990s, centered around its CD-ROM drive, pushed the market ahead again, and the growth of a content-based CD publishing industry seemed certain.

The key innovation to emerge in the 1990s, of course, was the World-Wide Web, which first reached public consciousness in 1994 and 1995. Almost overnight, the personal computer's identity shifted from that of productivity tool to information appliance, tapping a world-wide ocean of information; pundits waxed rhapsodic. For educational and personal users (that is, apart from the established office productivity market), the "Web" became the single most important reason to own a computer, and the Web browsing software Netscape Navigator was proclaimed the new "killer app." Netscape Communications Co. raised 1.2 billion dollars in its now-famous Initial Public Offering (IPO) in 1995, sparking a flood of investment reminiscent of the California gold rush of 1849, or at least the Dutch tulip market of the 1630s. Through the late 1990s this new gold rush ran wild, with billions of dollars invested in driving innovation online. When the "tech bubble" finally began to subside (it would be an overstatement to say it burst) in 1999 it left in its wake a landscape littered with new technologies; some useful, many not, some significant, many soon forgotten.
What had become clear was that the paradigm of personal computing had been firmly established throughout western society: a 2005 report, for instance, states that 75% of Canadians have a computer at home; 72% are Internet users. More than 30% get their daily news online (Canadian Internet Project 2005). A 2004 Statistics Canada report states that 97% of Canadian schools were Internet connected, with an average of 5.5 students per connected computer (Statistics Canada 2004). One more important statistic is this one: the number of people online—that is, capable of communicating on the Internet—is one billion, as of late 2005, according to Mary Meeker of Morgan Stanley Research.[1]

A billion people makes for a large-scale, complex society by any measure. And yet, our primary means for interacting with this complex environment is the personal computer, a bastard, haywired-together technology born a scant two-and-a-half decades ago by greasy youths in garages in California, sold mostly by consumer-electronics hucksters in the intervening years, and developed largely via gold-rush hysteria. What we've inherited is the PC as generalized interface to a big, scary world out there. But it is significantly underpowered in comparison to the task; I do not mean here that the processing power, the MHz, or the RAM is insufficient—what I mean is that what has become a significant communications medium—a major established genre or paradigm of human expression, communication, and commerce—is built on extremely shaky foundations, and patched up and reinforced over the years with little more than glossy magazine advertisements. A hundred years ago, the exigencies of the book publishing world led printers increasingly to use cheap pulp paper, despite the fact that pulp paper disintegrates into dust within about a century under most conditions. But this is vastly more robust than the state of the personal computer, which threatens to burst asunder for many "users" on almost a daily basis, in the face of quotidian bugs, virulent viruses, overwhelming spam, software piracy, invasion of privacy, pop-up pornography, chat-room pedophilia, and general information overload.

Now, fear and loathing have never been serious impediments to commerce or progress; indeed, they are often powerful drivers. The personal computing market is certainly driven by such forces, and educational computing is no different. Far from "personal" computer users—a collective which, at numbers like those quoted above, is roughly equivalent to "citizens"—being in any kind of control of the digital world, the real battle to control the discourse is fought by large and mighty corporations. Microsoft, for one (and they are certainly not alone in this), has established itself as an immense, indispensable part of the environment by offering to manage the interface between 'users' and the vast, ambiguous, frightening, and complex world of technology and the Internet. That they have been accused on many occasions of being more part of the problem than the solution matters little; Microsoft's marketing genius—a paradigm-defining one—is in understanding and managing just how much or how little consumers want to know, or understand, about what goes on beyond their monitor screens. It is not a stretch to say that all successful technology companies today succeed because they play this particular game well; consider Google's enormously successful management of online content (and the dearth of attendant critique).

1. Interestingly, the study reports that 36% of those users are located in the Asia-Pacific region, while only 23% are in North America. See Meeker (2005).
In education, WebCT, one of the most influential companies in educational technology today, succeeds precisely because of their successful control of the ambiguities and complexities of the environment in which their customers need to work. This is the dominant dynamic of the first decade of the 21st century.

A CRITIQUE OF THE POPULAR MYTHOLOGY

Such is the conventional story of personal computing. This is the mythology of this moment in time, the history which makes sense of the world we live in. It is, of course, only one story, and it is inadequate and indeed obfuscating on several levels.

It is helpful to look at the story of personal computing as one emergent within a context of contemporary journalism, advertising, and marketing, for these are the main arenas in which the conventional story has played itself out so far. To the extent that popular journalism and advertising constitute public discourse, this is in fact and practice our story. But it is not difficult to problematize this. A simple tactic is to simply look for what is absent. In the first place, there is practically nothing about "computer science" in the story; it plays out as though the formal, academic study of computing (half a century old) did not exist, or perhaps as if this realm were some dusty, antiquated pursuit that we were better to have left behind in the promethean moment of the late 1970s. The second major absence is that of software. The conventional story, as reported and advertised in newspapers and magazines, and played out in catalogues and showrooms, is overwhelmingly concerned with computer hardware. Software, when it is considered at all, remains in its standard-sized, shrinkwrapped boxes. Personal computing has largely been about personal computers, as artifacts, commodities, toys, gadgets.
There is very little about what actually goes on inside these computers, even in the face of the obvious and oft-repeated fact that the wealthiest man in the world, Bill Gates, headed a company that doesn't deal in hardware at all. Somehow, the fetish is entirely physical, and we have come to accept that software is a necessary evil that allows the hardware to work, and which somehow slouches toward its slow improvement. Presumably, it is easier to talk and write about hardware than software. The finer points of chip design are buried deep within the black box—or rather, the shiny exterior (or at least the beige plastic cases) of the machine; the details of software are actually in our faces more than we like to admit, but besides a few trite discourses (GUIs vs. command line; Mac OS vs. Windows), this fails to get the attention that hardware does. When CD-ROMs appeared in the early 1990s, and afterward the Internet, we began to talk about "content" with respect to computers, despite the fact that we rarely speak of digital content in ways that are any different from the content that appears in books or on television. But our conception of "content" is nowhere near sufficient to grasp the significance of software today.

The third conspicuous absence in the conventional story is history itself. The sheer volume of discarded computer hardware suggests an alarming tale which appears now and then amid reports of sending old PCs to Africa, like eyeglasses in the Second Sight project. But nothing is ever said of the volume of discarded effort spent designing, developing, learning, and using the software of years past. With the exception of a persistent genre of old-timers reminiscing about their beloved version of Word (or WordPerfect, or StarWriter, or whatever—always writers talking about word processors) long past, we give close to zero thought to the decades of evolution of software.
The mythology seems to prescribe that the newer is always a straightforward improvement on the older (usually along the lines of more better faster cheaper), and wholesale innovations (the web browser, for instance) are accepted as being born fully formed from the foreheads of their developers. This obsession with the march of the new masks not only the person-years of toil and thought, but also the myriad missed steps and missteps along the way. It masks, fundamentally, the constructivist's cry: "It could have been otherwise."

Impoverished perspectives

The conventional story of personal computing is caught between the twin horns of two popular caricatures of technology: instrumentalism and determinism.

Instrumentalism is the simple and common belief that we create technologies to achieve particular ends, to solve particular problems. The assumption in instrumentalism is that these ends or problems are clearly defined in advance, such that technological solutions can straightforwardly be specified and developed. Instrumentalism further carries with it the assumption that technology is value-neutral, a mere tool in the hands of a purposeful designer or user.

Technological determinism is in some ways the mirror-image of instrumentalism; the determinist perspective holds that technology has a logic of its own: most fundamentally, that progress is inevitable, towards better and better ends (this is the Enlightenment's position) or toward more sinister and oppressive ends (the position of much critical theory and a good deal of latter-day science fiction). It is easy to pose these two stances against one another, and view the world of technology as a struggle between the two or as a playing-out of a middle ground or compromise.
I think it better to see instrumentalism and determinism as commonplace perceptual facets of technological systems, which appear 'naturally' to us in differing circumstances, but which fail in most cases to really focus our attention or provide a useful analytical framework: we look at advertisements for new cell phones that can record movies and download coupons and we muse, "what next?" in a happily determinist frame of mind. We purchase the next iteration of the cheap disposable inkjet printer in a spendthrift instrumentalist mode. And then we wade through mountains of spam in our e-mail in-boxes and curse that the Internet is out of control. What to do? And how could we know anyway, given that our thinking about technology is so circumscribed?

We need to remember—despite the constant temptation not to—that how we confront problems and issues today is historically conditioned; we got to this point by way of a specific unfolding of circumstance. But historical awareness is limited; things haven't always been as they are, and they might have been otherwise, but it certainly does not follow that we can simply choose otherwise: to consciously adopt a different position.

Technology is political. It is not a neutral, external realm of human activity separate from political and ethical concerns. Neither is it an 'influence' on the ethical and political, nor are these facets of our lives mere 'influences' on technology. Rather, technology is politics and ethics—beginning right with our difficulty in remembering so. This is a stance which I will elaborate in some detail in the pages that follow. In particular, I want to spotlight this notion with particular attention to computer software, a subset of technology which is more and more shot through our private and public lives. Software has always been political, but today, in the early 21st century, the politics of software have become acute.
And while there is an emerging discourse and literature addressing this (e.g., see Lessig 1999; 2002b; Moglen 2000; 2003; Stallman 2001; 2003), it has not reached widespread public attention. I see this as a crisis facing Western societies (and by extension, everybody else, given the agendas of globalization). The reason for the lack of focus on the politics of software, despite the technological messes that accumulate around us, has to do with the basic ahistoricity in our thinking about technology. My method here is to lead with historicity, so that this moment in time can be framed, and so that the idea of software as politics has some concrete meaning.

THE DIVISION OF LABOUR IN MODERN TECHNOLOGY

Let us begin with a particular question about technology, computers, and software: whose problem is this, anyway? Alternatively, we can ask: who's responsible for this mess?

The common and superficial response, which often bills itself as the humanist perspective, is that the designers and marketers of computing technology are responsible for the technological systems surrounding us. This argument casts our technological dysfunction in either a technological determinist light (Menzies 1989; Bowers 2000) or an instrumentalist one with a determined overclass: the military-industrial complex (Edwards 1996). While these treatments both correctly identify a nastily asymmetrical power dynamic surrounding technology, they run into trouble when they attempt to isolate the problem as external to the lifeworld of ordinary people—that technology is a system put over against 'us.' The characterization of computer technology as having been imposed upon society by an engineer/capitalist elite neatly divides up the responsibility for our ills: someone (industry, salesmen, zealous technologists, etc.) is to blame, and the analysis ends there.
The resulting responses tend to impotence: whether we should enact laws (limiting corporate power; protecting individual privacy; protecting consumers' rights; regulating the Internet; etc.), or 'resist' technology (don't carry a cellphone; chop up your credit card; refuse to upgrade your word processor; computers out of the classroom), or write critiques and stern warnings about the fate of the world. These are all commonplace ideas; we all engage in many of these tactics—I certainly do.

There is an underlying and foundational trope lurking herein, though, and it hamstrings everything we might like to do about our technological predicament. The assumption is, broadly framed, that technology is an external force on our lives, driven by someone else's agenda. More specifically put, the assumption is of a division in society: a division of labour between experts and end-users (or producers and consumers). We willingly and unproblematically learn this division, choose it, take it on, and reproduce it. We reify it in our buying habits, in our curriculum plans, in our legislation, in our discourses. I would not claim that these power imbalances aren't very real, but we are doomed to live by their terms when we take on the roles assigned to us. But, of course, we're also stuck with them, and changing the world is not just a matter of changing one's shirt.

Now, it is not my intent to go into a lengthy discussion of hegemony or domination here. My purpose is rather to do the history of how we got to this particular place. In the hermeneutics of the historical process are—I optimistically believe—the generative possibilities. What can we know about the division of labour in information technology, between experts and end-users? C.P.
Snow's famous "two cultures" of the sciences and the humanities only begins to frame the division as it presents itself here; the computer age brings with it an economic and political apparatus that institutionalizes the producer/consumer divide on top of the expert/end-user division.

The tension between expert knowledge and public dialogue is age-old. Latour identifies its origins with Socrates in Plato's Gorgias, in which politics is (mis)represented as a contest of right vs. might (Latour 1999, p. 219ff). Latour uses this as an analogy for our popular conception of the relationship of science to politics. Instead of calling for a science free of political influences, Latour wants a "politics freed from science"—that is, freed from the kind of political shortcutting science is often called upon to do: "a substitute for public discussion" (p. 258). Long has "Science" (Latour uses the capital "S" in this rhetorical characterization) been called upon to end the messiness of actual political discussion: the introduction of the "impersonal laws" of nature as an antidote to the irrationalism and ambiguity of human judgement, and thus opposed to Politics as such. Latour presents an alternative "science" (without the capital) which involves the proliferation and extension of complex collectives of reason, argumentation, and agency which are political discourse. Latour's capital-S Science (or Reason) is thus a conventional tool for silencing one's opponents, but he reminds us that this version of science is not the whole story, and that there is no analytically convenient "inside" and "outside" of science (1987, p. 145ff). Latour is concerned too with the division of labour.
Complicating this account, however, is the work of technology theorist Arnold Pacey, who wrote on the "culture of expertise." Pacey offers an argument for a specially situated kind of technological determinism or "technological imperative" at work within groups of engineers and technologists: an airplane such as the French/British Concorde would never have emerged apart from a drive for engineering excellence in itself. Pacey cites Freeman Dyson on nuclear weapons: their existence is in part due to the telos of the culture of expertise; they are "technically sweet" projects that appeal to physicists, as opposed to the hard, hacked-out engineering of conventional weapons (Pacey 1983, p. 43). What is amiss in this kind of a world, Pacey suggests, is the compartmentalization of our values within various spheres of activity (public, private, men's, women's, educational, professional, etc.), and a solution might be a broad-based effort to break down these compartmentalized traditions and virtues.

What this means to me is that the divide between inside and outside, or between expert and everyman, is not one that can merely be undone or disbelieved; rather, it is a cultural phenomenon that we are dealt. Latour's two characterizations of S/science are, historically speaking, both actual, and actively in tension. Pacey's observations point to the fact that we continue to reify the poles of the division. The more we believe in them, the more real they become. The longer we believe in end-users, the more distant we become from the expert pole. The result is a disastrous commonplace sensibility about technology's place in society.
Within education, computational-literacy advocate Andrea diSessa described what he calls the "culture gap," characterized by an "anti-learning bias" on the part of technologists, an insistence on superficial transparency of computing artifacts, and a deep-seated expectation that only some individuals can assume (professional) positions of knowledge and authority—a notion which brings with it distrust of broad-based competence (diSessa 2000, p. 225ff, 237). This is but one particular articulation. Similarly, much of the literature on computing that falls roughly within the Science, Technology, and Society (STS) rubric (e.g., Turkle 1995 and others in Shields' 1995 volume) is unfortunately inscribed in the stereotypical humanist vs engineer division. The result is analysis that says 'the engineers only thought of things from the engineering perspective, and have imposed solutions on us that fail to take into consideration what we humanists need.' While there is undoubtedly a grain of truth expressed in this, it is but vanity to construct this as a story of oppression from above. To make it into such a moral position does considerable violence to the discourses contextualizing the engineers' work—as if they were working in isolation.

Turkle's 1995 analysis is a case in point: she reports on MIT's Project Athena, an enormous effort to computerize an entire campus in the 1980s. Turkle cites Project Athena's ban on the programming language BASIC, followed by a reversal under considerable pressure, but with the condition that BASIC would remain officially unsupported. Her account points this out as an example of the arrogance of the systems and administrative people in the face of the 'real world' needs of users. The critique, however, is predicated on the assumption of an insider/outsider split: engineers vs. humanists; developers vs. end-users; experts vs. regular folks.
But such divisions, no matter how commonplace or self-evident they may appear (reification works thusly), are caricatures; they fold into non-existence the untold hours of labour that go into the design and maintenance of systems, the extensive and complex networks of discourse and practice that must be created and sustained in order for such systems to ever exist, and the deep points of connection that actually bind together the people and machines and systems on both sides of the apparent divide.

There are, luckily, alternative conceptions. Two of the strongest, from the science studies literature, and which serve as touchstones for me, are the writings of Bruno Latour and Donna Haraway. Both thinkers are bloodhounds on the trail of taken-for-granted boundaries. Latour's boundaries are those that separate science from politics, society from nature, human from nonhuman; these are mythological artifacts of the 'modern' age, Latour argues (1987; 1993). Latour counters that there is no inside or outside of science, only a proliferation of hybrids. Donna Haraway's target boundaries similarly are those which purportedly guarantee the purity of a privileged conception of humanity. Her powerful contention (1991) is that the cyborg is us: we are always already compromised and impure, hybrids political and natural, material and semiotic, technical and moral.

The situated stance taken by Haraway is significant: we are not in a position to re-invent the world wholesale, but rather to fight for a fairer distribution of power, one not so overwhelmingly dominated by entrenched institutional power bases. This is not an all-or-nothing struggle, but rather a tactical strategy to spread the fruits of technoculture around more evenly. Technology, especially in its digital form, need not only be the instrument of established power to maintain and extend itself.
That's what I mean by technology—and especially software—being political: it is an active politics that works bidirectionally; it is generative, as the Foucauldians have pointed out. There is actually a strong tradition of this sort of work and thinking in computing, in the academy, in education. And indeed, it is my contention that Alan Kay's body of work speaks very clearly to this issue.

HOW EDUCATION IS COMPLICIT

The conceptual rift between 'experts' and 'end-users' is thriving in our educational institutions. The whole field of educational technology is based on a confused discourse about ends and means; it reifies experts and end-users, technological means and pedagogical ends, as if these were pre-existing categories. And in a sense they are, as the academic world is similarly predicated on this division of labour: researcher vs. researched, subject vs. object. The technological aspect then is symptomatic of a larger unquestioned division between experts and non-experts, making it a structural or systemic issue. The sheer volume of history—of tradition and culture—underlying this division of labour is immense: it goes right to the core of modernism and capitalism and science and our very way of being in the world. It has everything to do with how we inscribe the boundaries of technoscience—the structure of the economy, our construction of gender and class, our expectations about freedom and choice, our acquiescence and resistance to globalization and corporatization, our expectations about public vs. private vs. common.

Within educational technology, the division of labour manifests itself along a number of different axes.
In the first and most obvious case, educational institutions' uncritical acceptance of industry-originated 'solutions' and large-scale buy-in to marketing campaigns contribute substantially to the establishment of subject positions which disempower pretty much everybody involved: students, teachers, and the schools themselves. I will not go into this at length here, as the general phenomenon of the corporatization of schools has been dealt with elsewhere (e.g., Bromley & Apple's 1998 volume, Education/Technology/Power). The superficial appeal of industry-based solutions is easy enough to see: the difficult initial design and implementation work is taken on by an industry 'partner,' thereby freeing the school or college to concentrate on its core business: education. Of course, what's missing from this particular division of labour is any developed sense that the one may have an impact on the other: the 'problem' to which the 'solution' is an answer is one pre-defined by the vendor. A recent example is Apple Computer's offering sets of wireless laptops to educational institutions; it is not at all clear what problem this solution actually addresses. The superficial answer was that learners would be freed from computer labs, but Apple's wireless-laptop scheme looked remarkably like computer labs on wheels: access to machines still had to be booked, hardware locked down to prevent theft, and, most importantly, the machines were still (ironically) 'time-shared,' as computer labs have been for thirty or forty years.
A second manifestation of the expert/end-user divide is perhaps best articulated with reference to the "miracle-worker" discourse:

This apparent and longstanding lack of success in reaching implementation goals with respect to uses of digital tools in schools has created a specific niche for the working of miracles—the provision of digitally mediated environments within which to re-mediate the production of knowledge in educational contexts... Within such a context, the miracle worker's effectiveness is measured by their capacity to spin narratives of success against all odds by providing tools, but more often discourses, that appear to transform students' engagements with information. (de Castell, Bryson, & Jenson 2002)

The "miracle worker" discourse reinforces the machinery of desire that is central to the marketing efforts of high-tech vendors. Seen on this level, the fact that the individual 'miracle worker' is predictably non-duplicatable—or at least 'unscalable'—is unfortunately almost the point. While we love to love those who take the initiative to make a real difference in their schools and who personally drive innovation, the too-common reality is that when these few individuals burn out, retire, or take advantage of their technical expertise and get a higher-paying job, what is left is a reminder of how wide the gap really is, setting the stage for the next round of marketing campaigns.

In a third manifestation, the trend toward online distance education, "distributed learning," "learning objects," and so forth establishes an even more cynical (or at least 'closed') position, quite comparable to that of the textbook publisher, in which all knowledge and authority is vested with the publisher/information source and the model is a simple instructionist one of transferring this information to the user.
As with the solution-provider discourses, the information-provider discourse makes plenty of sense in terms of business models, but not so much for learning. The "distance-ed" variety of this discourse is the centralized version, while the "learning objects" version is a distributed market economy; either way, the educational process is one-way, and reliant on an 'impoverished' recipient.

A fourth manifestation of the expert/end-user divide within the educational environment may be more damaging than any of the above: in this case, the critical faculties of the educational establishment, which we might at least hope to have some agency in the face of large-scale corporate movement, tend to actually disengage with the critical questions (e.g., what are we trying to do here?) and retreat to a reactionary 'humanist' stance in which a shallow Luddism becomes a point of pride. Enter the twin bogeymen of instrumentalism and technological determinism: the instrumentalist critique runs along the lines of "the technology must be in the service of the educational objectives and not the other way around." The determinist critique, in turn, says, 'the use of computers encourages a mechanistic way of thinking that is a danger to natural/human/traditional ways of life' (for variations, see Davy 1985; Sloan 1985; Oppenheimer 1997; Bowers 2000).

Missing from either version of this critique is any idea that digital information technology might present something worth actually engaging with. De Castell, Bryson & Jenson write:

Like an endlessly rehearsed mantra, we hear that what is essential for the implementation and integration of technology in the classroom is that teachers should become "comfortable" using it. [...]
We have a master code capable of utilizing in one platform what have for the entire history of our species thus far been irreducibly different kinds of things—writing and speech, images and sound—every conceivable form of information can now be combined with every other kind to create a different form of communication, and what we seek is comfort and familiarity? (2002)

Surely the power of education is transformation. And yet, given a potentially transformative situation, we seek to constrain the process, managerially, structurally, pedagogically, and philosophically, so that no transformation is possible. To be sure, this makes marketing so much easier. And so we preserve the divide between 'expert' and 'end-user;' for the 'end-user' is profoundly she who is unchanged, uninitiated, unempowered.

The result is well documented: scores of studies show how educational technology has no measurable effect on student performance. The best articulation of this is surely Larry Cuban's (1986) narration of the repeated flirtation with educational technology; the best one-liner, the title of his article, "Computers Meet Classroom: Classroom Wins" (1993). What is left is best described as aporia. Our efforts to describe an instrumental approach to educational technology leave us with nothing of substance. A seemingly endless literature describes study after study, project after project, trying to identify what really 'works' or what the critical intercepts are or what the necessary combination of ingredients might be (support, training, mentoring, instructional design, and so on); what remains is at least as strong a body of literature which suggests that this is all a waste of time. But what is really at issue is not implementation or training or support or any of the myriad factors arising in discussions of why computers in schools don't amount to much.
What is really wrong with computers in education is that, for the most part, we lack any clear sense of what to do with them, or what they might be good for. This may seem like an extreme claim, given the amount of energy and time expended, but the record to date seems to support it. If all we had were empirical studies that report on success rates and student performance, we would all be compelled to throw the computers out the window and get on with other things. But clearly, it would be inane to try to claim that computing technology—one of the most influential defining forces in Western culture of our day, and one which shows no signs of slowing down—has no place in education. We are left with a dilemma that I am sure every intellectually honest researcher in the field has had to consider: we know this stuff is important, but we don't really understand how. And so what shall we do, right now?

It is not that there haven't been (numerous) answers to this question. But we have tended to leave them behind with each surge of forward momentum, each innovative push, each new educational technology "paradigm," as Timothy Koschmann put it.[2]

I hereby suggest that the solution—not to the larger question of what should we do, right now, but at least to the narrower issue of how we can stop being so blinded by the shiny exterior of educational technology that we lose all critical sensibilities—is to address the questions of history and historicism. Information technology, in education as elsewhere, has a 'problematic' relationship with its own history; in short, we actively seek to deny its past, putting the emphasis always on the now and the new and the future.

2. Koschmann's (1996) article, "Paradigm Shifts and Instructional Technology," suggested that there had in fact been a series of incommensurable paradigms (in Kuhn's sense) governing the field; Koschmann was setting up "computer-supported collaborative learning" as the new paradigm.
The new is what is important; what happened yesterday is to be forgotten, downplayed, ignored. This active destruction of history and tradition—a symptom of the "culture of no culture" (Traweek 1988, p. 162) that pervades much of technoscience—makes it difficult, if not impossible, to make sense of the role of technology in education, in society, and in politics.[3] We are faced with a tangle of hobbles—instrumentalism, ahistoricism, fear of transformation, Snow's "two cultures," and a consumerist subjectivity.

Seymour Papert, in the midst of the backlash against Logo in schools in the mid 1980s, wrote an impassioned essay that called for a "computer criticism," in the same sense and spirit as "literary criticism." In that article, Papert wrote of

...a tendency to think of "computers" and "Logo" as agents that act directly on thinking and learning; they betray a tendency to reduce what are really the most important components of educational situations—people and cultures—to a secondary, facilitating role. The context for human development is always a culture, never an isolated technology. (Papert 1987, p. 23)

An examination of the history of educational technology—and educational computing in particular—reveals riches that have been quite forgotten. There is, for instance, far more richness and depth in Papert's philosophy and his more than two decades of practical work on Logo than is commonly remembered. And Papert is not the only one. Alan Kay's story, roughly contemporaneous and in many respects paralleling Papert's, is what follows. Since this story is not widely known, let me begin with a brief and admittedly rough sketch of the origins, general direction, and some of the outcomes of Kay's work.

3. We as a society are ignorant of these issues because, in a sense, they cannot be made sense of.
MacIntyre (1984) makes the much larger case that morality and ethics cannot be made sense of in the modern world, because our post-Enlightenment inheritance is but the fragments of a tradition within which these could be rationalized.

ALAN KAY AND THE DYNABOOK VISION

Alan Curtis Kay is a man whose story is almost entirely dominated by a single vision. The vision is that of personal computing, a concept Kay began to devise in the late 1960s while a graduate student at the University of Utah. It is not an overstatement to say that Kay's vision has almost single-handedly defined personal computing as we know it today. Neither is it an overstatement to say that what he had in mind and what we've ended up with are very different. The story of that vision—how it has managed to manifest itself on all our desks (and laps) and also how far this manifestation remains from its original power and scope—is the story I mean to tell here. It is a story that deviates from the popular or conventional story of computing in a number of interesting ways. And, while this story is well known, it is rarely told outside of the computer science community, where Kay's contributions are foundational. What is less remembered is that Kay's contributions to computer science were driven largely by an educational vision for young children.

Alan Kay was born in the early 1940s in New England, and grew up as something of a child prodigy; he proudly reports being a precocious—difficult, even—child in school, arguing with his elementary school teachers. He studied biology and mathematics in university, but dropped out and played jazz guitar in Colorado for a few years in the early 1960s; then, on the strength of an aptitude test, joined the US Air Force and became a junior programmer. Having then discovered computers, he decided to finish his undergraduate degree and go to grad school in 1966.
He chose the University of Utah, where computer graphics pioneer Dave C. Evans had set up one of America's first computer science programs. At Utah, Kay's career took off like a rocket: the timely meeting of a wildly creative mind with the fledgling American computing research program—Kay was only the seventh graduate student in computing at Utah (Hiltzik 1999, p. 86ff).

To appreciate the difference between computing as most people encounter it today—personal laptop computers with graphical user interfaces, connected wirelessly to a global Internet, using the computer as an access and production environment for media—and what computing was in the mid 1960s—expensive and delicate mainframe computers staffed by scientists, with little that we would recognize as a "user interface" (even time-sharing systems were a radical innovation at that time)—is to roughly frame Kay's contribution to the field. Of course, he did not accomplish this alone, but his vision—dating back to his MSc and PhD theses at Utah (see Kay 1968) and strongly driving the research of the 1970s—is so central, and so consistent, that it is arguable that without Kay, the face of our everyday involvement with digital technology would be immeasurably different today.

Kay is in one sense an easy study, in that he has remained consistently on point for thirty-five years, over which time he has contributed a large collection of reports, articles, chapters, and postings to online fora, as well as a large number of lectures and presentations, many of which have been recorded and made widely available. In particular, Kay's writings and talks in recent years provide valuable reflection on his work and writings from the 1960s and 1970s; in all, a rich archive for the historian. What I find most important about Kay's oeuvre is, I believe, summarizable in a few brief (though rather expansive) points.
These set the stage for the story I will attempt to tell here:

• Kay's vision (circa 1968) that in the near future, computers would be the commonplace devices of millions of non-professional users;
• Kay's realization that this kind of mass technological/cultural shift would require a new literacy, on the scale of the print revolution of the 16th and 17th centuries;
• his belief that children would be the key actors in this cultural revolution;
• his fundamental approach to the design challenge presented by this shift being one of humility, and thus that the cardinal virtues would be simplicity and malleability, such that these "millions of users" could be empowered to shape their own technological tools in accordance with the needs that they encountered;
• Kay's insistence on a set of architectural principles inspired by the cell microbiology and complex systems theory of the post-war period: how the complexity of life arises from the relatively simple and common physics of the cell.

There are many ways in which Alan Kay's vision of personal computing has indeed come to pass. In reading his manifesto from 1972 ("A Personal Computer for Children of All Ages"), there is little that sounds either dated or far-fetched. Most of the implementation details alluded to in his writings have in fact become commonplace—Kay was unable to predict the dynamics of the marketplace on personal computing, and so his timelines and price points are both underestimated. It is indeed clear that his vision of a new "literacy" far exceeds the reality on the ground today. My contention is that this is the piece of his vision which is the most critical; the need for a digitally mediated literacy is greater now than ever, and for reasons which Kay could hardly have foreseen in the early 1970s.
From ARPA to Xerox PARC

Alan Kay's story begins with the ARPA project—the US Department of Defense's Advanced Research Projects Agency, a Pentagon funding programme in part inspired by the Cold War and the perceived threat to American technological superiority raised by the launch of the Soviet satellite Sputnik in 1957. In ARPA is the root of the popular conception that computers have sprung from the military; the vast majority of computing research in the formative decade of the 1960s was funded by ARPA's Information Processing Techniques Office (IPTO). It is easy to take the significance of this funding formula too far, however, and conclude that computers were devised as weapons and digital technology is born of institutionalized violence and domination. The story is quite a bit more subtle than that: the administrators of ARPA-IPTO research funds were not military men, but civilians; not generals but professors (NRC 1999; Waldrop 2001). It is perhaps better to think of ARPA as a Cold War instrument of American techno-cultural superiority, rather than a military programme. The funds flowed through the Pentagon, but the research was astonishingly open-ended, with the majority of the funding flowing to universities rather than defense contractors, often in the absence of formal peer-review processes (NRC 1999, pp. 101-102). In fact, to look deeply at ARPA and its projects is to see an ironic case—rare, but certainly not unique (AT&T and, as we shall see, Xerox Corporation, played host to similar development communities)—of large-scale public works being committed in the name of capitalist, individualistic, American ideology. The public funding that went into ARPA projects in the 1960s no doubt vastly outstripped that of their Soviet counterparts; who, then, had the greater public infrastructure?
The men who directed the ARPA-IPTO have come to be known by their reputation as great thinkers with expansive ideals for the common good, and their open-ended funding policies that focused on people rather than specific goals. The first, and most celebrated, director, JCR Licklider, oriented the IPTO to the pursuit of interactive computing and inter-networking, concepts which were nearly science fiction in 1960, but which today are foundational to our dealings with digital media.4 After Licklider came Ivan Sutherland, known as the "father of computer graphics;" Robert Taylor, who would go on to help run Xerox' research lab in 1970; and Lawrence Roberts, who in the late 1960s oversaw the implementation of the ARPAnet, the prototype and direct ancestor of today's Internet. In 1970, the advent of the Mansfield Amendment, which required Pentagon-funded research to be more responsive to military ends, is seen by many (Kay 1996a, p. 525; Waldrop 2001, p. 325) as the end of an era—an era in which the basic shape of today's digital technological landscape was being laid out. Of the spirit of the ARPA project in the 1960s, Kay reflected:

It is no exaggeration to say that [ARPA] had "visions rather than goals" and "funded people, not projects." The vision was "interactive computing as a complementary intellectual partner for people pervasively networked world-wide." By not trying to derive specific goals from this at the funding side, [ARPA] was able to fund rather different and sometimes opposing points of view. (Kay 2004a)

The legacy left by the 1960s' ARPA project is rich, and includes the Internet, time-sharing systems, computer graphics (both 2D and 3D), hypertext and hypermedia, and networked

4. Licklider wrote an early research manifesto called "Man-Computer Symbiosis" which laid out a blue-sky vision of what computing could become, one in marked contrast to the then-dominant trend to artificial intelligence research.
See Licklider 1960; Wardrip-Fruin & Montfort 2004.

collaboration. More important to the story at hand is the establishment of a community of computing researchers in the United States, from universities like Utah, UCLA, Stanford, MIT, and Carnegie-Mellon. At these universities, fledgling computing departments and programs had received early and substantial research funding, and the ARPA-IPTO directors made substantial efforts to bring these researchers together at conferences and retreats. The result was, by the late 1960s, a tightly knit community of American computer science research. Alan Kay, who had his first encounter with computer programming while on a stint in the US Air Force's Air Training Command in 1961, went to the University of Utah to pursue a Masters degree. There he met and studied with Dave Evans and Ivan Sutherland, who were pioneering research in computer graphics. Kay spent the years 1966-1969 at Utah, working on and around ARPA-funded projects. It was here, in his MSc and PhD work, that he began to formulate a vision for a personal computer. Kay has referred to Sutherland's work on computer graphics as "the first personal computer" because Sutherland's project—Sketchpad—was the first interactive graphics program as we would recognize it today; a user sat in front of a display and manipulated the images on a screen by means of a pointing device (in this instance, a light pen) and keystrokes (Sutherland 1963). This required that a single user monopolize the entire computer—in the 1960s an enormously extravagant thing to do. The inspirational impact of work like this should not be understated, especially where Alan Kay is concerned.
Kay's account of the ARPA years is of one mind-blowing innovation after another—from Sutherland's elegant drafting program to Doug Engelbart's famous 1968 demo to the Fall Joint Computer Conference in San Francisco, which showed the world a working model of hypertext, video conferencing, workgroup collaboration, and graphical user interfaces, literally decades before these concepts became embedded in the public imagination (Engelbart & English 1968/2004; Waldrop 2001, pp. 297-294). Kay's research at Utah focused on the design of a computing system called the FLEX Machine, which combined the interactive graphics ideas of Sutherland's Sketchpad with leading programming language concepts of the day and put them in a package that could sit on a desk. But Kay's work at Utah was very much coloured by interaction and collaboration with the community of ARPA researchers.

One of the greatest works of art from that fruitful period of ARPA/PARC research in the '60s and '70s was the almost invisible context and community that catalysed so many researchers to be incredibly better dreamers and thinkers. That it was a great work of art is confirmed by the world-changing results that appeared so swiftly, and almost easily. That it was almost invisible, in spite of its tremendous success, is revealed by the disheartening fact today that, as far as I'm aware, no governments and no companies do edge-of-the-art research using these principles. (Kay 2004a)

When ARPA's funding priorities shifted to military applications in 1970, this community saw a crisis of sorts; where could they continue their work in the manner to which they had become accustomed?5 As the story goes, by historical accident, Xerox Corporation, as part of a shift in upper management, wanted to establish a research lab to ensure their continued domination (Hiltzik 1999; Waldrop 2001, p. 333ff).
As it turned out, former ARPA-IPTO director Robert Taylor was hired on at Xerox to establish the lab and hire its researchers. Taylor knew who he wanted, by virtue of the community of researchers he had known from his ARPA work. And, due to the circumstances of the funding landscape of the day, he had his pick of the leading researchers of the 1960s. The result, Xerox' Palo Alto Research Center (PARC), was staffed by a "dream team" of talent. Former PARC researcher Bruce Horn reflected, "PARC was the Mecca of computer science; we often said (only half-jokingly) that 80 of the 100 best computer scientists in the world were in residence at PARC" (Horn, n.d.). Alan Kay was one of the researchers that Taylor courted to be part of the Xerox PARC team, and in line with the open-ended ARPA policy of the 1960s, Kay's agenda at Xerox was also open-ended. He took the opportunity to use his new position to advance the work he had begun at Utah on the development of a personal computer.

5. Young computer scientists in the 1960s were as attentive as any to the cultural movements of the day; see John Markoff's (2005) What the Dormouse Said: How the 60s Counterculture Shaped the Personal Computer Industry, for this treatment.

It was in fact impossible to produce something like Kay's desktop-oriented FLEX Machine given the hardware technology of the late 1960s, and as such Kay's early work was realized in various forms on the rather larger computers of the day. But to reduce the FLEX Machine concept to simply that of a graphics-capable system that could sit on a desk (or even, ultimately, a lap) is to miss much of Kay's point. More fundamental to Kay's vision was a novel and far-reaching conception of computing architecture, and the FLEX Machine research is better positioned as an early attempt to articulate this.
To explain this, let me delve into Sutherland's Sketchpad, a system which in Kay's view has not been equalled in the nearly four decades since. The overt concept—an interactive computing system for drawing and manipulating images—of course has been built upon, and today designers, illustrators, draftspeople, and indeed anyone who creates images with a computer uses a system which borrows from the general tradition established by Sketchpad. But integral to Sutherland's original system was an architecture in which "master" drawings could be used to create "instance" drawings, such that the parent-child relationship between these entities is preserved: changes made to the master (or prototype) would be reflected in any instances made from it. It is difficult to express the importance of this in so many words,6 but this concept is representative of a way of thinking about the relationship between the part and the whole which underlies all of Kay's work and contributions. At the same time that Kay was introduced to Sutherland's work, he was also introduced to a programming language called Simula, the work of a pair of Norwegian researchers. Kay recognized that the "master" and "instance" relationship in Sketchpad was very similar to the way the Simula language was arranged.

This was the big hit, and I have not been the same since. I think the reason the hit had such impact was that I had seen the idea enough times in enough different forms that the final recognition was in such general terms to have the quality of an epiphany. My math major had centered on abstract algebras with their few operations applying to many structures. My biology major had

6. There is, luckily, video available of Sutherland using the Sketchpad system. See Wardrip-Fruin & Montfort (2004).
focused on both cell metabolism and larger scale morphogenesis with its notions of simple mechanisms controlling complex processes and one kind of building block being able to differentiate into all needed building blocks. The 220 file system, the B5000,7 Sketchpad, and finally Simula, all used the same idea for different purposes. Bob Barton, the main designer of the B5000 and a professor at Utah, had said in one of his talks a few days earlier, "The basic principle of recursive design is to make the parts have the same power as the whole." (Kay 1996a, p. 516)

This is the first of the "big ideas" that comprise Alan Kay's work; we shall encounter several more.

Toward the Dynabook

Kay reports that his personal trajectory was significantly altered by a visit to see Seymour Papert's research group at MIT in 1968. At that time, Papert, Wally Feurzeig, and Cynthia Solomon were conducting the initial research on exposing schoolchildren to computers and programming with the Logo language, which Feurzeig had designed. Papert's research involved the now-famous "turtle geometry" approach, which suggested that children could more effectively bridge the divide between concrete and formal cognitive stages (from Jean Piaget's developmental schema) via a computational medium (Logo) which allowed them to manipulate mathematical and geometric constructs concretely (Papert 1980a; 1980b). What impressed Kay was not so much this insight about cognitive styles, but that children using Logo could reach farther with mathematics than they could otherwise. Kay wrote:

One of the ways Papert used Piaget's ideas was to realize that young children are not well equipped to do "standard" symbolic mathematics until the age of 11 or 12, but that even very young children can do other kinds of math, even advanced math such as topology and differential geometry, when it is presented in a form that is well matched to their current thinking processes.
The Logo turtle with its local coordinate system (like the child, it is always at the center of its universe) became a highly successful "microworld" for exploring ideas in differential geometry. (Kay 1990, p. 194)

7. The Burroughs B220 and B5000 were early computers Kay had encountered while working as a programmer in the US Air Force in the early 1960s.

In what would be the beginning of a collegial relationship with Papert which is still ongoing, Papert's insights about children and computers, in combination with Kay's insight that computers would likely be much more numerous and commonplace by the 1980s, led to the crystallization of his thinking:

This encounter finally hit me with what the destiny of personal computing really was going to be. Not a personal dynamic vehicle, as in Engelbart's metaphor opposed to the IBM "railroads," but something much more profound: a personal dynamic medium. With a vehicle one could wait until high school and give "drivers ed," but if it was a medium, it had to extend to the world of childhood. (1996a, p. 523)

Kay was immediately seized by this idea, and on the plane back from Boston he drew up the basis for the vision of personal computing he would pursue thereafter. Kay called it the Dynabook, and the name suggests what it would be: a dynamic book. That is, a medium like a book, but one which was interactive and controlled by the reader. It would provide cognitive scaffolding in the same way books and print media have done in recent centuries, but, as Papert's work with children and Logo had begun to show, it would take advantage of the new medium of computation and provide the means for new kinds of exploration and expression. Kay, now at Xerox PARC, began to sketch out what the Dynabook would look and act like. Early models (in cardboard) suggest devices not unlike the desktop and laptop computers we know today.
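The turtle's "local coordinate system" described above is easy to make concrete. The sketch below is purely illustrative: it is written in Python rather than Logo, the `Turtle` class is hypothetical (no graphics, just position and heading), and it is not drawn from Kay's or Papert's code. The point it demonstrates is that the child reasons locally ("go forward, turn right, repeat") rather than in global coordinates, yet global structure (a closed square) emerges.

```python
import math

class Turtle:
    """A minimal Logo-style turtle: a position and a heading, no graphics."""
    def __init__(self):
        self.x, self.y = 0.0, 0.0   # position in the global plane
        self.heading = 0.0          # degrees; 0 = east, measured counter-clockwise

    def forward(self, distance):
        # Move along the turtle's own heading: everything is relative to
        # where the turtle is and which way it faces.
        rad = math.radians(self.heading)
        self.x += distance * math.cos(rad)
        self.y += distance * math.sin(rad)

    def right(self, degrees):
        # Turn clockwise relative to the current heading.
        self.heading = (self.heading - degrees) % 360

# Drawing a square with four identical local steps:
t = Turtle()
for _ in range(4):
    t.forward(100)
    t.right(90)

# After four sides and four right turns, the turtle is back at the origin,
# facing the way it started (up to floating-point noise).
print(abs(round(t.x, 6)), abs(round(t.y, 6)), t.heading)
```

The same local-stepping idea, with smaller steps and turns, is what lets the turtle trace circles and arcs, which is why Kay (following Papert) describes it as a microworld for differential geometry.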
Kay noted that he was directly inspired by Moore's Law (named for Gordon Moore, co-founder of microchip manufacturer Intel), which states that, due to predictable advances in the miniaturization of chip manufacturing, available computing power doubles roughly every 18 months. Given this kind of development timeframe, Kay foresaw that by the 1980s, the sorts of things he was able to accomplish in his work on the FLEX Machine would indeed be possible on small, even portable devices.

Again, however, it is important to sidestep the temptation to reduce Kay's vision to a particular conception of a hardware device or set of features. The deep levels of his research were aimed at coming up with ways in which people—not computer scientists, but schoolchildren, after Papert's examples—could interact meaningfully with digital technology. In the 1960s, computers were still monolithic, vastly expensive machines; leading research of the day was aimed at the development of "time-sharing" systems which would allow multiple users to simultaneously use a large computer by connecting via a terminal—this was profoundly not "personal" computing. Despite the economic and logistical obstacles, Kay and his newly established Learning Research Group at Xerox PARC wrestled to come up with a new model of how people could interact with computing technology.
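The arithmetic behind Kay's extrapolation from Moore's Law is simple to work through. The sketch below is illustrative only: the function name and the twelve-year horizon are my own choices for the example, not figures Kay published.

```python
def moore_factor(years, doubling_period=1.5):
    """Growth factor in available computing power after `years`,
    assuming one doubling every `doubling_period` years (18 months)."""
    return 2 ** (years / doubling_period)

# From Kay's 1972 manifesto to the mid-1980s is roughly twelve years,
# i.e. eight doublings:
print(moore_factor(12))  # -> 256.0
```

A 256-fold increase over a decade and change is what made a cardboard mock-up in 1972 a plausible engineering target for the 1980s.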
Kay's reflections on the challenge give some sense of the scope of the task they set themselves:

For example, one would compute with a handheld "Dynabook" in a way that would not be possible on a shared main-frame; millions of potential users meant that the user interface would have to become a learning environment along the lines of Montessori and Bruner; and needs for large scope, reduction in complexity, and end-user literacy would require that data and control structures be done away with in favor of a more biological scheme of protected universal cells interacting only through messages that could mimic any desired behaviour. (Kay 1996a, p. 511)

This was research without precedent in the early 1970s—no existing models of computing or user interface or media were extant that Kay and his group could follow. In a sense, it is interesting to think of the work done in the 1960s and 1970s as being groundbreaking precisely because of the lack of existing models. Kay recalls the number of innovative conceptual leaps that Ivan Sutherland's Sketchpad project made; when asked how this was possible in 1963, Sutherland later reflected that he "didn't know it was hard." What Kay and his colleagues seem to have been aware of is the sense in which they were in fact constructing whole new ways of working:

...we were actually trying for a qualitative shift in belief structures—a new Kuhnian paradigm in the same spirit as the invention of the printing press—and thus took highly extreme positions that almost forced these new styles to be invented. (1996a, p. 511)

The analogy of the printing press is one that bears more examination, if for no other reason than that Kay himself has extended the analogy. If the invention of the digital computer can be compared with the invention of the printing press, then it follows that there is an analogous period following its initial invention in which its role, function, and nature have not yet been worked out.
In the history of printing, this period was the late 15th century, commonly called the incunabula, when early printers experimented with ways of conducting their craft and their trade. The earliest printers, like Johannes Gutenberg, created books that closely mimicked the work of medieval scribes in design, content, and audience. As Marshall McLuhan (1965) noted, the content of the new media is the old media. But in the 1490s, the Venetian printer Aldus Manutius set up a printing business which, in its exploration of the possibilities of finding a sustainable business model, pioneered and established much of the form of the book as we know it today, in terms of layout, typography, size, and form (Aldus is generally credited with the popularization of the octavo format, which would fit conveniently in a pocket or saddlebag), and, in doing so, defined a new audience and market for printed books (Lowry 1979). Aldus' innovations established the nature of the printed book as we know it today, and innovations in book printing since then have been refinements of Aldus' model, rather than deviations from it. Alan Kay alludes to the example of Aldus in several places in his writings, and it seems clear that even if Kay doesn't necessarily consider his work to be parallel, then at least this is its goal. That we are in an incunabula period in the early evolution of digital computing—as evidenced by the general confusion the topic evinces—is an idea I am completely comfortable with; that Alan Kay's vision of personal computing is analogous to Aldus' pocket-sized books is at least worth consideration. Whether we can say one way or another, at this moment in time, is in part the subject of this dissertation.

Elements of the Dynabook Vision

In a paper presented in 1972, "A Personal Computer for Children of All Ages," Alan Kay spoke of the general qualities of a personal computer:

What then is a personal computer?
One would hope that it would be both a medium for containing and expressing arbitrary symbolic notations, and also a collection of useful tools for manipulating these structures, with ways to add new tools to the repertoire. (Kay 1972, p. 3)

Papert's influence is very clear here, especially his famous admonition that children should program the computer rather than the computer programming the children. But we should also pay attention here to Kay's emphasis on the multiple levels of media: that they should represent not just the content, but the tools to act upon the content, and even the means for creating new tools. This sheds some light on the Dynabook metaphor, for books represent not only content which can be extracted (as a shallow definition of literacy might suggest), but are also the means to participating richly in a literate culture. "One of the primary effects of learning to read is enabling students to read to learn" (Miller 2004, p. 32). Literacy is indeed what Kay and his team were after. "I felt that because the content of personal computing was interactive tools, the content of this new authoring literacy should be the creation of interactive tools by the children" (Kay 1996a, p. 544, italics mine). Kay's 1972 paper included a scenario in which two nine-year-olds, Jimmy and Beth, are playing a video game,8 "lying on the grass of a park near their home." Young Beth, bored of repeatedly trouncing her classmate, muses about adding gravitational forces to the game in order to make it more challenging. The rest of the story has the two children seeking out their teacher to help them develop their model of how the gravitational pull of the sun should be integrated with the spaceship controls in the game. Together, "earnestly trying to discover the notion of a coordinate system," they use something much like the Internet to look up some specifics, and then Beth makes the changes to the physical model coded in the game.
Beth later uses her Dynabook to work on a poem she is composing, and her father, on an airplane on a business trip, uses his own Dynabook to make voice annotations to a file, and even to download (and neglect to pay for) an e-book he sees advertised in the airport. Kay was writing science fiction; it was 1972. But the vision is clear enough that we can easily recognize almost all of these elements in our quotidian computing environment.

It is now within the reach of current technology to give all the Beths and their dads a "Dynabook" to use anytime, anywhere as they may wish. Although it can be used to communicate with others through the "knowledge utilities" of the future such as a school "library" (or business information system), we think that a large fraction of its use will involve reflexive communication of the owner with himself through this personal medium, much as paper and notebooks are currently used. (1972, p. 3)

Most importantly, Kay was close enough to the cutting edge of computer research to be able to judge just how "within reach" this vision really was. Kay's oft-quoted catchphrase, "the best way to predict the future is to invent it," meant that his science-fiction writing was nigh on to a plan for implementation. His work at Xerox through the 1970s was nothing short of the realization of as much of the Dynabook plan as possible at the time. Kay foresaw, given Moore's Law, that the hardware part of his vision should be feasible within a decade or so: the 1980s. The unknown part was the software. So, while much of the famous research at Xerox PARC was in producing the first "personal computers" (these were hardly laptops; they were, however, small enough to squeeze under a desk), Kay's core focus was on the software vision: how would Beth and Jimmy actually interact with their Dynabooks?

8. The game is the prototypical Spacewar!, which has a special place in the history of computing.
How would millions of users make effective use of digital information technology? What emerged after a few design iterations in 1971 and 1972 was a programming language called Smalltalk, as in "programming should be a matter of..." and "children should program in..."

The name was also a reaction against the "IndoEuropean god theory" where systems were named Zeus, Odin, and Thor, and hardly did anything. I figured that "Smalltalk" was so innocuous a label that if it ever did anything nice people would be pleasantly surprised. (1996a, p. 528)

Smalltalk was (and is) a programming language; the original version—implemented the following year and therefore designated Smalltalk-72—owed much to Papert's Logo in terms of syntax and aesthetics. But its aspirations were considerably greater—in many ways, it was a generalization of the sort of thing Papert was after. Kay went so far as to eschew the "programming language" description, instead calling Smalltalk "a new medium for communication" (Kay & Goldberg 1976). Kay's research—and Smalltalk itself—got a boost in 1973 when researchers in PARC's Computer Science Lab developed the first iterations of the Alto workstation, which is commonly hailed as the first personal computer.9 Kay and his team called the Alto an "interim dynabook"—not much like Kay's Dynabook vision at all, really; these were about the size of a bar-fridge—but the Alto is the direct precursor of the kinds of personal computers we have today (as opposed, that is, to the personal computers of the late 1970s and early 1980s): it had a bitmapped, graphical display, a pointing device, and, with Smalltalk running on it, the kind of "desktop" environment we now take for granted. In 1973-74, these "interim dynabooks" were capable of Logo-like turtle graphics, but also featured a mouse-and-overlapping-windows interface, animated graphics, and music—in short, "multimedia."
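Smalltalk's organizing idea, the "biological scheme of protected universal cells interacting only through messages" that Kay described, can be gestured at in code. The sketch below is a rough approximation in Python, not Smalltalk, and flattens much of the idea; the `Account` class and `send` function are hypothetical illustrations, not anything from Kay's systems. What it shows is the essential discipline: an object's state is reached only by sending the object a message, and the receiver alone decides how to respond.

```python
class Account:
    """A 'cell': its internal state is touched only via the messages it answers."""
    def __init__(self, balance):
        self._balance = balance   # protected state, never manipulated from outside

    # The messages this object understands:
    def deposit(self, amount):
        self._balance += amount
        return self

    def balance(self):
        return self._balance

def send(receiver, message, *args):
    """A message send: look up the receiver's own response to the message.
    The sender knows nothing about how the receiver is built."""
    return getattr(receiver, message)(*args)

acct = Account(100)
send(acct, "deposit", 50)
print(send(acct, "balance"))  # -> 150
```

In Smalltalk proper this discipline is total: there are no "data structures" outside objects at all, which is what Kay means by the parts having the same power as the whole.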
At the same time that the Alto was being created, and that Kay's team was working on Smalltalk, other researchers at Xerox PARC were developing ethernet local-area networking, colour graphics software, word processing and desktop publishing, and the laser printer—all commonplace components of "personal computing" today, and all fitting neatly into Kay's Dynabook vision—what Kay would much later call the "PARC genre" of computing (2004a). The researchers at Xerox PARC created hundreds of Altos and wired them all up with ethernet, installed precursors to office software, and had, by the mid 1970s, an internal prototype of the kind of computer-based office environment that is so commonplace today. None of this would be commercialized or marketed until the following decade, but it was all running at PARC, and there it was allowed to mature into an established pattern of information technology. And, of course, many of the researchers at PARC in the 1970s subsequently left to start or join the companies that now dominate the computing world (3Com, Adobe, Microsoft, Apple, etc.).

To end the story here is to uncritically accept Alan Kay's popular designation as "father of the personal computer." But what is missing from contemporary tellings of this heroic tale is the seed that started it: the vision of children and a new literacy. Notably, the three best available histories of this period (Smith & Alexander 1988; Hiltzik 1999; Waldrop 2001) significantly downplay or pass over the educational vision which provided the focus for Kay's work. It is easy enough to see ourselves as adult, even professional users of desktop computing systems like those pioneered at Xerox PARC, but where are Beth and Jimmy today, "earnestly trying to discover the concept of a coordinate system?"

9. Reportedly, over 1500 of these machines were constructed and used by individuals at Xerox in the 1970s (Hiltzik 1999).
The fate of the Dynabook

The personal computer did indeed come to be, very much as Kay anticipated. Indeed, I have written the present work on a notebook computer strikingly similar to the one Kay described in 1972. His vision of millions of users of computers is very much the reality today. But, I argue, the Dynabook vision has not been realized; the distinction I am making here is between the idea of portable, networked personal computing devices on the one hand, and the vision of a new literacy and attendant educational imperative on the other. Between the surface features of a personal computer and Kay's deeper insights about what that personal computing should entail is a vast gulf. The difference is significantly not one of technological innovation; all the individual components of Kay's vision are extant, even mature, technologies today, from the lightweight, wirelessly connected notebook computers equipped with multimedia authoring tools to the kind of simple, modular software models he pioneered (indeed, this part of the vision has been picked up by computer programmers and turned into a whole paradigm of software development). Rather, the difference is a cultural one, wherein what personal and educational computing means to us is vastly different from the vision Kay and his colleagues began to elaborate in the early 1970s. We have inherited all the components, but the cultural framework which ties them together relies on older ideas, and the new computer-mediated literacy that Kay articulated continues to elude us. The ramifications of this cultural difference are, I argue, vast, and they specifically underlie the problematic relation we have with digital technology I outlined in the early pages of this chapter.
The case I will make in the pages which follow is that our contemporary condition of fear and loathing of digital technology, our narrow and ahistorical perspective on computing, our unquestioned acceptance and reification of the roles of 'expert' and 'end-user', and, most importantly, the compounded manifestation of all of these features in the confused world of educational technology can all be critically addressed—and in some cases remedied—by attention to this particular yet foundational thread in the history of computing. Most of us are, unfortunately, starting from a place of unfamiliarity with this tradition (or, for that matter, any such substantial historical perspective); it is my intent with the present study to at least shed some light on a specific cultural tradition which, I believe, has much to say to our current predicaments.

WHAT FOLLOWS...

The chapters and pages to follow comprise a historical treatment of Alan Kay's vision and research, of the Dynabook vision and its various (and partial) implementations in changing contexts over three or more decades. I intend here to draw attention to the features of the original vision which have changed through diverse times and contexts, those features which have remained constant, and those which have grown even more critical. Despite Alan Kay's centrality to this story, and his predominant role in its articulation, this is not a biographical treatment; rather, this is an attempt to trace the threads of a technocultural system over time and place. It is thus a work of cultural history.

In the chapter that follows this one, I frame my positionality and approach to the interpretation and historiography of this subject.
I begin with my own story, and how by degrees I have come to appreciate the importance and centrality of Alan Kay's contributions, and I elaborate the many forces which have led to my adoption of a particular attitude to and stance toward digital technology and its place in education. This treatment of my 'methodology' is more a disclosure of my personal biases and theoretical inclinations than an analysis of social scientific method, for reasons I will elaborate in due course.

In the third chapter, I provide a broad-strokes theoretical approach to the study of technology which serves to ground the kind of analytical moves which come later in this account. Here, I address the main motifs of technological mediation, the sociology of translation (after Callon and Latour), a semiotics of digital 'machines,' and finally an introduction to the notion of simulation, which I take as the paradigmatic modality of digital media. These treatments set the stage for the history that follows.

My account of the history of Kay's project and the Dynabook vision begins in Chapter 4, in which I conduct a high-level review of the conceptual content of Alan Kay's numerous writings and talks. This review breaks down into six primary themes which I believe reasonably represent the core of Kay's vision of personal and educational computing.

In Chapter 5, I cover in narrative form the key moments of the Dynabook's development through the 1970s and 1980s. This begins with an exploration of the ways in which the Smalltalk language and environment was translated from an educational platform to a profound and influential innovation in professional computer programming and software engineering. Second, I discuss the emergence of a personal computer industry and market in the late 1970s and 1980s and the intersection of this trend with the foundational research work done at Xerox PARC.
Third, I trace Alan Kay's research beyond Xerox PARC to its home at Apple Computer in the 1980s, where very different economic, technical, and cultural forces were at play.

Chapter 6 considers what personal computing—and, by extension, educational computing—came to mean in the 1990s, with the popular advent of the Internet and World-Wide Web. In many ways, this period represents the mainstreaming of a particular version of personal computing and a much more substantial cultural tradition against which the Dynabook vision must now be considered. The history of computing in the 1990s is one of global-scale market developments (i.e., the computing industry as a multi-billion dollar phenomenon), as well as the emergence of unprecedented forms of cultural expression and digitally mediated social organization (and especially, the rise of the Free and Open Source Software movement, arising out of marginal computing cultures from the 1970s); these two large-scale trends are in considerable tension with one another.

In Chapter 7, I trace the actual re-emergence of a substantial chunk of Alan Kay's work and indeed the Dynabook vision against the backdrop of late 1990s computing culture and the trends introduced in Chapter 6. The Squeak environment, emerging from Kay's team at Apple Computer in 1996, lays technical and mythological claim to the Dynabook tradition of the 1970s. Here, I examine the development of Squeak and its developer and user communities, and I attempt to evaluate its contemporary trajectories. Because Squeak provides an artifact so unambiguously connected to the idealistic work emerging from Xerox PARC in the 1970s, it is possible to interrogate the relative health and coherence of the cultural traditions which it simultaneously draws upon and, arguably, creates anew.
In the final chapter, I attempt to draw things together, bringing the focus back to the macro level and examining the Dynabook ideal in the large: is it better considered as a technocultural artifact or as a touchstone idea connecting otherwise disparate embodiments? This question leads to a broader methodological question regarding how 'ideas' are to be treated alongside more concrete material objects like artifacts and texts. Finally, I return to the higher-level political and social questions of the ultimate relevance of this story to education and to the popular culture of technology.

Chapter 2: Positions and Approaches

INTRODUCING MY SELVES

In a number of respects I am a child of Alan Kay's vision. I've grown up in a world in which Kay's vision has always to some extent existed, though it was not until recently that I had any sense of this. My own lifetime is almost synchronous with Kay's project; I was born at just about the time that Kay was working on his initial design for a "personal" computer, the FLEX machine, while at the University of Utah. When Kay went to Xerox in 1970 and began to work on a model of personal computing for children, I was just coming to the age at which my education, and my relationship to media, was beginning. Speaking strictly temporally, my generation was the one that Kay was looking at as the target for his notion of personal computing, though it has taken me thirty years to recognize it. That I have grown up in a world in part defined by Kay's work, and that I have been at least somewhat aware of this fact, is key to the present study. The significance of Kay's project, in its educational and political aspects, is something apparent to me perhaps because of the particularities of my own history.
In attempting to present my treatment and interpretation of this project and its importance to the world, I am in a position of needing to examine and establish just what it is about my own perspective that makes these issues meaningful for me. The perspective(s) I present here is not that of a schooled computer scientist or technologist; neither is it that of a 'humanist' approaching the history of computing from without. Rather, I claim partial roots on both sides of this seeming divide, and as a result the story I will tell is not likely to be generic in either mode.1

1. The evocation of distinct genres of technology historiography and critique is deliberate. Among writers who have attempted to interpret digital technologies to a wide audience, the 'distanced humanist' stance is well exemplified in Edwards (1996); Cuban (2001); Bowers (2000); Menzies (1996); Rose (2003), while the schooled technologist perspective is found in Papert (1980a); Winograd & Flores (1986); Harvey (1991); Stallman (1998); diSessa (2000).

Roots

As a child, I had some early exposure to computers and computing. My father had long been an electronics hobbyist, one of a generation of early radar technicians in the Second World War2 and a "ham" radio operator since the late '40s or early '50s. I grew up watching him wield a soldering iron, delighting in home-brewing his radio gear. In the 1970s, when microprocessors and integrated circuits ("chips") became widely available, Dad began experimenting with building computers. I remember him working for years on a teletype project—it must have gone through several versions—to be able to type text on a little TV screen; certainly not impressive to our 21st-century eyes, but he was building it almost from scratch, out of parts ordered from the back pages of hobbyist magazines, and he reveled in the pure challenge of figuring out how to make the thing work.
Had he really wanted a teletype for his radio gear, he could have bought or wired up a kit in short order, but instead he worked for some number of years on designing and creating his own. I remember him bringing assembly code written out in pencil on envelopes and notepads to the table with his morning coffee, and also his experimentation with several different modes of creating circuitry: wires on pegboards, and later "etching" his own printed circuit boards.

Despite any or all of this, I was not an electronics "whiz" as a child; I was not at all interested in radios or computers, and while I casually shared in my Dad's intellectual journeys on occasion, I myself was barely competent with a soldering iron, and never learned the workings of the computers he created or how to program them. When I was 12, Dad encouraged me to study up on basic electronics and the Morse code and to take the tests to get my own ham radio license. I accepted the challenge in good spirit, but when the time came I failed the tests—it simply was not compelling to me. I wasn't particularly disappointed, and I remember Dad praising me for being a good sport about it; he never pushed me in that direction again. Comfortable with this arrangement, I continued to watch over his shoulder and be his sounding board as he talked out his design ideas and railed against the numerous conceptual obstacles he encountered. I recall him explaining the basics of assembly code programming—learning to work with hexadecimal numbers, add this byte to the register, jump to this location, and so on—but it retained for me the character of an introductory lecture: heard and even appreciated at the time, but quickly filed away and largely forgotten.

2. Dad joined up with the Canadian services in 1941 and was shipped to the UK, where he worked with the British RAF radar corps until the end of the war.
Still, the "spirit of the quest" made an impression on me, and the exposure to the conceptual underpinnings—if not the details—of computing has surely stayed with me. In junior high school in the early 1980s, when the new Apple II computers came out, my friends and I played at programming simple things in BASIC, and I talked Dad into getting a "real" computer at home (a tiny Sinclair ZX81). I got reasonably good at programming in BASIC, and wrote endless variations on an obstacle course game, littering my programs with clever details, elaborate introductory sequences, and the like—this kind of adornment was what was missing for me from Dad's earlier projects, I suppose. I even became something of a "whiz" among my friends at school, owing to the extra time I had to explore at home. When, in grade 11, I was finally able to take a "computer science" course, I was well ahead of my classmates. Or at least, most of my classmates, for there were a couple of boys, whom I didn't know (I honestly wondered where they'd come from), who were clearly years ahead of me in understanding and proficiency. While I could write programs in BASIC, these kids were far beyond that, programming the assembly code my Dad had worked in, devouring acres of information from books and magazines. And I was intrigued, and got to know these kids a little, and learned from them. But shortly something interesting happened: whether it had to do with high school and my nascent awareness of social status, or whether it was a subconscious reaction to not being the head of the class anymore, I decided quite clearly and purposefully that I didn't want to be part of that group; the obsession that these couple of boys displayed did not strike me as healthy in some sense. Around the end of my Grade 11 year, I quite sharply turned away from computers and everything to do with them.
The next winter I started playing the bass guitar and I effectively forgot that computers existed. Dad continued with his projects, and my relationship with him had more to do with fishing trips and playing music.

I had nothing to do with computers until my 4th year at university, studying cultural anthropology. I was writing an undergraduate thesis, and I had by now seen enough of my friends writing with a word processor to know that this was clearly the way to go about it. I talked my parents into subsidizing the purchase of an Atari ST machine—billed as a poor man's Macintosh, as it had a mouse and a graphical interface, and cost about half what a Mac did. The Atari did its job with my undergrad thesis (I am still loath to throw out the floppy disks containing it), but it did other things as well: the one that made the biggest impression on me was the inclusion—in the bundle of software and toys that came with the Atari—of a version of Logo, the language that Seymour Papert had designed for kids in the late 1960s. I didn't know what Logo was, particularly, but there was a one-page printed reference guide to the language primitives, so I was able to poke away at it and draw pictures on the screen with 'turtle' commands. I remember very clearly my series of discoveries with Logo: I quickly learned that you could draw stars with it, if you had the turtle move forward a standard distance, then turn some divisor of 720, and repeat this the right number of times. I soon saw that you could generalize it: draw a line, then turn 720/n degrees, and do it n times. Gee, I discovered, 360/5 would draw a pentagon, while 720/5 would draw a star—how about that! I have to admit I was amazed (and I still am); here I had learned something 'real' about geometry that 12 years of school hadn't really made me understand.
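The turtle-geometry discovery described above can be sketched without Logo itself. The following is my own minimal illustration in Python (the function name and step size are mine, not Papert's), showing why turning 360/n at each step closes a polygon while turning 720/n closes a star:

```python
import math

def turtle_path(n, turn_degrees, step=100.0):
    """Vertices visited by a turtle that repeats 'forward step;
    turn left turn_degrees' n times, starting at the origin heading east."""
    x, y, heading = 0.0, 0.0, 0.0
    points = [(x, y)]
    for _ in range(n):
        x += step * math.cos(math.radians(heading))
        y += step * math.sin(math.radians(heading))
        heading += turn_degrees
        points.append((x, y))
    return points

# Turning 360/5 = 72 degrees each time traces a pentagon;
# turning 720/5 = 144 degrees traces a five-pointed star.
pentagon = turtle_path(5, 360 / 5)
star = turtle_path(5, 720 / 5)

# In both cases the path closes: the total turning is a whole number of
# full revolutions (360 or 720 degrees), so the turtle ends where it began.
```

The five segments close after one full revolution of turning (the pentagon) or two (the star), which is exactly the relationship between turning angle and 360 that the Logo session made vivid.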
Obviously, after 12 years of math class I could have told you that the angles of a polygon have some relationship to 360, but the fact didn't really mean anything to me until I started playing with Logo. Much, much later I discovered that this was exactly the sort of experience that Papert was shooting for—only he had hoped that elementary-school kids would be making this discovery, not undergraduates in their final year. Well, better late than never. I have to give credit to Papert and his team—and this foreshadows an important theme of Alan Kay's which I will return to at length—for they had managed to embed the potential for a very particular kind of experience in their software (twenty-five years earlier, for heaven's sake) so that I could pick it up, sight unseen, almost completely by accident, and immediately have that very experience.

I finished the thesis, graduated, and went off to be a young pseudo-intellectual, spending a lot of idle time talking with my friends about ideas. One of the topics we talked about at length was computers and the idea that people would soon—perhaps already were beginning to—inhabit a computer-mediated environment. We all read William Gibson's prophetic novels about cyberspace and were enthralled by this notion, almost entirely fictional at the time. In 1992, after a few years of not particularly making a living as a musician, I started looking around for a graduate program that would allow me to explore some of the ideas we had been talking about: cyberspace, hypertext, and so on. I didn't find a graduate program per se, but I stumbled upon a one-year diploma program in "Applied Information Technology" at a local community college. I quit my band, bought a Macintosh, and enrolled. What I had little sense of just then was how many of my peers were doing the same thing—quitting their indie rock bands and getting into new media.
A few years later I recognized this as a major shift for my generation.

My Encounter(s) with Objects

My return to school in 1992 was the beginning of a series of encounters with Alan Kay's intellectual legacy, encounters which have fundamentally shaped my way of looking at technology and the world I live in. My decision to enroll in the program at Capilano College was mostly serendipitous; I had no defined goals, but I saw a direction to go in. My classmates and I learned about—and produced—educational multimedia games/environments. The dominant software paradigm for us was Apple Computer's HyperCard, which, despite its superficial limitations (unmodified, it was only capable of black-and-white presentation), seemed almost infinitely flexible, at least to my novice understanding. HyperCard was indeed an extraordinarily powerful media production tool, far more so than the bulk of what we have seen on the Web in the past decade. While building and programming multimedia productions in HyperCard, my friends and I enjoyed 'discovering' an aesthetic of modularity in design—or, perhaps more accurately, HyperCard 'taught' this to us, by way of its elegant conceptual model.

At Capilano College I also first encountered the Internet and the then-tiny World-Wide Web. The Capilano College program required all students to participate in an early computer conferencing system, and this had a minimal Internet connection.
We were positioned perfectly to be able to watch the Internet's transformation from something 'behind the curtain' to a massive social force over the space of a few years.

In 1993, when I graduated from the Infotech program, I printed up a set of business cards that said, "John Maxwell - Hypermedia Architect." Paying work didn't really arrive for some time, and I spent a very lean few years in the mid-1990s, buoyed up by my raw enthusiasm, with lots of time on my hands to immerse myself in the developing Internet culture, learn new things, and talk at length with my small circle of friends. We were hopelessly idealistic, and formed a co-operative to share work and forward the ideals of a non-commercial, arts-oriented digital world, rejoicing in an early version of online culture in which business motives were scorned in favour of a kind of techno-romanticism. In those years, while the Internet and the World-Wide Web were still largely fluid, undefined spaces (rather like my career), I spent a good deal of time mucking about with somewhat marginal technocultural oddities called MUDs—multi-user dungeons.
A MUD is a real-time, networked, multiple-participant, text-based virtual reality (Curtis 1992). The beginnings of the MUD phenomenon were in the early Dungeons and Dragons-inspired "adventure" computer games. A MUD is, at its simplest, a text-adventure game which can accommodate more than one player; two or more people can thus gang up on the dragon, or what have you. My interest in MUDs wasn't the game aspect—I have never been a computer-game player—rather, I had been introduced to a particular Internet MUD called LambdaMOO by a friend who was very immersed in Internet culture. LambdaMOO was an immense, free-flowing social environment, like a sprawling text-based house party.3 The critical innovation of LambdaMOO was that the virtual environment was entirely constructed from within, by its players, rather than by a specially empowered designer/programmer. LambdaMOO when I first encountered it was only a year or two old, but its 'topography' was already immense and complex, simply because some thousands of users from all over the Internet had been constructing and programming it. LambdaMOO was, I learned, the pet project of a computer scientist at Xerox PARC (which I had never heard of); its language and internal architecture were "object-oriented"—a term I had heard but which had little meaning for me—hence "MOO," for MUD, Object-Oriented. In practical terms, what this meant was that you could create new things and define their behaviour in the virtual world by basing them on already existing virtual objects and then specializing them by writing simple scripts. This meant that individual players could very easily create complex, interactive objects within this virtual world.

One could easily recognize a kind of aesthetic of creation in MOO worlds. The artfulness of it was a particular kind of illusionism: for instance, considering the best way to create a green grassy space in the virtual environment brought into sharp relief heady issues of Platonism, simulacra, and phenomenology. Though I have never done any theatre, its connection and relevance to MOOing was obvious.4 I thought MOO was fabulous! At the time, I honestly felt that this was—in spite of its completely text-based interface—a vastly more important and promising technology than the World-Wide Web: here were people, presenting themselves and their environment virtually, in time. How much more interesting than static web pages!

3. LambdaMOO is still running, more than fifteen years after it opened, which must make it one of the longest-running persistent object stores in existence. The original virtual space has been added on to tens of thousands of times by tens of thousands of 'players,' but the living room into which you emerge from the darkened 'coat closet' remains the same, as does the inanity of the conversation one can find there, any time of the day or night, any time in the last decade and a half.

4. I later learned that Juli Burk (1998) explored this theme in some detail.

Fed up with my impoverished experience as a reluctant Internet entrepreneur, I went back to school again to take a new graduate program in publishing. Part of the program was a 4-month applied internship, and I set up a project working with a distance education program at BC's Open Learning Agency (OLA). The project was to create a MOO-based
environment for high school distance learners. Here was an opportunity to do what I really wanted: to immerse myself in a project, technically and culturally, and to do some high-level reflection. The result was my Masters project (Maxwell 1996), one of the most rewarding things I have ever done.

The Open Learning Agency was a good fit for me, and I stayed on after my internship. Ironically, I was soon involved in a project which would have been a much better fit for a publishing internship than my MOO project had been: the OLA's schools program was interested in offering their high-school distance education courses online as well as in print. The challenge of how to do both without having to do it twice was paramount, but a friend of mine there, Prescott Klassen, had an answer that set the direction for the next three years of my life. The answer to the problem of publishing in two formats from a single editorial process was a document management technology dating from the 1970s and 1980s called Standard Generalized Markup Language (SGML).5 Klassen and I embarked on an ambitious project to design and implement an SGML system and workflow to completely overhaul the OLA's courseware production.

The SGML project was, in retrospect, a descent into the abyss, but many good things came out of it. The project was technically a success, but organizationally doomed, and I gained a wealth of insight into the cultural dynamics of technology integration. I also learned a lot in those three years—easily more than in any other period of my life—about computing, document management, and publishing technology. Not surprisingly, Klassen and I, having been given more or less free rein to do what we wanted, were able to move much faster than the organization (a working group of 25 or so) we were attempting to change (Klassen, Maxwell, & Norman 1999).
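The core idea of that project, one structured source document mechanically rendered into both print and online formats, can be sketched in miniature. The following is my own illustration, not the OLA system; the tag names and rendering rules are invented for the example:

```python
# A 'document' as a structure, not a format: a tree of (tag, children)
# nodes, where leaves are plain strings. SGML/XML markup encodes exactly
# this kind of tree.
doc = ("lesson", [
    ("title", ["Introduction to Tides"]),
    ("para", ["Tides are caused chiefly by the Moon."]),
    ("para", ["Spring tides occur at new and full moon."]),
])

def to_html(node):
    """One rendering of the tree: simple HTML for online delivery."""
    if isinstance(node, str):
        return node
    tag, children = node
    inner = "".join(to_html(c) for c in children)
    html_tag = {"lesson": "article", "title": "h1", "para": "p"}[tag]
    return "<{0}>{1}</{0}>".format(html_tag, inner)

def to_text(node):
    """A second rendering of the same tree: plain text for print."""
    if isinstance(node, str):
        return node
    tag, children = node
    inner = "".join(to_text(c) for c in children)
    if tag == "title":
        return inner.upper() + "\n\n"
    if tag == "para":
        return inner + "\n"
    return inner

print(to_html(doc))
print(to_text(doc))
```

Because the editorial process maintains only the structured source, adding a third output format means adding a renderer rather than re-editing the content: the problem of doing both without doing it twice, in miniature.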
At the end of three years, our proof-of-concept complete, we both left the OLA burnt out (along with a handful of other people close to the project, including our director). Prescott went to work on similar publishing projects at Microsoft Press, and I went back to school.

5. SGML is an ISO standard for document management (see Goldfarb & Rubinsky 1991). A somewhat bastardized application of SGML is found in the Web's HTML technology.

The lasting pieces of the SGML project at OLA that I took with me are these: my gradual appreciation of the history of computing; the realization that in many, many cases the solutions to today's challenges are to be found in technology developed in the 1960s or 1970s, and that these solutions are very often the work of individuals or small teams of thinkers—and that, as a result, they are wholly graspable by an individual, given a commitment to explore not just the details, but the historical contexts of their development. The larger theme lurking in this issue is that there exists a tradition of "powerful ideas" in computing, ideas which are unfortunately often ignored or bypassed in favour of superficial, partially understood implementations, which constitute another, larger tradition of discourse and practice. To see through the partial, ahistorical implementations to the clearer thinking and powerful ideas lurking behind them gives one, first, a more solid place from which to withstand the constant churn and instability of the market-driven world of IT, and second, an appreciation that many of the big ideas in the history of computing are about people and cultures first, and the details of technical application second. I began to believe that most of this "isn't rocket science" after all. The possibility of demystification was enormously empowering.
A particular case of this comes to mind: at a document-technologies conference in 1998, I witnessed two of the leading minds in the field of SGML and XML—Tim Bray and Eliot Kimber—engage in a debate about the role of abstraction in information representation. Kimber, a proponent of an abstract tree-based information architecture called "groves," argued for independence from, and priority over, particular representational strategies (like SGML or XML—see Kimber 1998). His admonition: "transcend syntax." Bray, on the other hand, countered by appealing to the Unix tradition and way of doing things, claiming that by agreement on a simple representational format (e.g., simple structured text files), a great arsenal of software tools could be combined in ways not foreseeable by the original architect. This debate, I realized, was one of cultural difference rather than of technical merit. That realization led me to understand that at the core of technological systems lay people and practices, and that it was an understanding of these that was important. This does not—emphatically does not—mean that the technical components of a system are irrelevant, or interchangeable, or governed by social factors; following Latour, it means that the technical facets can only be understood well by seeing their embeddedness and participation in historical/cultural traditions of thought and practice. I was, however, sufficiently well steeped to appreciate both Kimber's and Bray's arguments in this light, rather than getting lost in the 'technical' details.

The aforementioned "groves" concept on its own made a huge impression on me as well, though I am convinced now that I only scratched the surface of it. In simple practical terms, it means the representation of a document—or any document-like information—as a tree structure, which can then be treated topologically in software.
That such a structure can be simultaneously and reciprocally abstracted and concretized was another encounter with object orientation: that by abstraction, we gain an alternate realization. This is difficult to describe in so many words, but my seeing this concept6 meant that I would never see a 'document' the same way again. For instance, it immediately led to a further realization that my object-oriented grassy field in LambdaMOO and the semantic structure of a term paper were made of the same kinds of things, and were, in some fascinating ways, interoperable. Note that I am not talking here about the level of bits—of the ones and zeros that make up computation at the lowest level—rather, I mean this in terms of high-level structure, at the highest semantic levels rather than the lowest.

This was my intellectual journey, at least. At the same time I was discouraged and fed up with 'distance learning' and its institutional evolution7—which was core to our work at OLA and which was just beginning to become a major area of interest in the late 1990s. In 1999 I went back to school (for good this time) and began a PhD program in education. But I wanted to avoid the "educational technology" I had been working on at OLA. Instead, I preferred to spend time reading and thinking about curriculum theory, continental philosophy, and decentered ethnography (while studying with Ricki Goldman).

6. One must bear in mind while reading my pained attempts to render my own experience of this into words that I am not a mathematician, and that I have always had a very strained and compromised relationship with mathematics. I dearly wish that my mathematical education had been better, for I have travelled this far in my life almost despite it.

7. David Noble's 1999 "Digital Diploma Mills" is singularly instructive on this point.
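The intuition gestured at above (that a MOO's grassy field and a term paper's semantic structure become 'the same kinds of things' once both are trees of objects) can be made concrete with a small sketch. This is my own illustration of the general idea, not Kimber's grove model or MOO's actual object system:

```python
class Node:
    """A generic tree node; what it 'is' depends only on its kind label."""
    def __init__(self, kind, children=(), text=""):
        self.kind = kind
        self.children = list(children)
        self.text = text

    def walk(self):
        """Depth-first traversal: the same operation serves any tree,
        whatever domain it happens to represent."""
        yield self
        for child in self.children:
            yield from child.walk()

# A term paper and a MOO-like room, built from the same kind of node.
paper = Node("paper", [
    Node("section", [Node("para", text="Some prose.")]),
])
room = Node("room", [
    Node("grassy-field", [Node("flower", text="a daisy")]),
])

# One generic query works on both structures.
for tree in (paper, room):
    print([n.kind for n in tree.walk()])
```

Everything that operates on the abstract tree (traversal, search, transformation) is automatically available to both the document and the virtual world, which is one way of seeing what "interoperable" means here.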
My techie side, all the while, was paying the bills by means of a few sizable technical contracts for the OLA, creating a couple of different iterations of a learning environment which would provide personalized access to "learning objects" (since our SGML project had left the OLA with a large store of curriculum content easily broken up into such modular components). I had, at a deeper level, ceased caring seriously about "learning objects,"8 but since OLA (and every other educational institution in the networked world) was interested, these were the raw material for our efforts. Of more practical and intellectual importance to me was adopting an object-oriented platform for our development work.9 More importantly, however, this work brought me incrementally closer to figuring out what object-oriented programming and systems were all about, and where this tradition had come from. By degrees, I began to realize that there was a common intellectual thread underlying almost every significant idea I had become acquainted with in the past decade: from HyperCard's elegant authoring environment to MOO's build-from-within model, document "groves," the contextualization and re-contextualization of learning objects, and the abstractions we were able to make designing online information spaces.

The catalyst to my conceptualizing all of this as a common thread was my discovery of a paper written by Alan Kay for the History of Programming Languages II conference in 1993 (Kay 1996a) on "The Early History of Smalltalk." This paper, positioned amid a collection of highly technical reports on computer science history, stood out at once for me. In the first place, Kay was talking about an educational project for children, rather than the design of systems for professional programmers.
Second, the sheer conceptual scope of the article—ranging from historical treatments of printing and print literacy to the evolution of computer graphics and interface design; from the ARPA community in the 1960s to the design of modern notebook computers—impressed me immensely. Kay's 1996 history outlines the practical emergence of the "object paradigm" amid his research project to develop personal computing for generations of children to come. It is, in a sense, a post-hoc manifesto for an educational and technological vision. I was coming, by degrees, to appreciate this vision and its many connecting points to pieces of my own history. On the strength of this encounter, I then began to recognize the various themes that had made an impression on me—HyperCard, MOO, groves, learning objects, Zope—as variations on and deviations from this core tradition; here was the original source of this tradition of thinking about computing, media, and information design.

8. For the most part, "learning objects" are related to object-oriented design in name only. The very idea of "learning objects" as portable, recombinable curriculum components is, I think, highly suspect. Friesen (2001) is as good a 'last word' on this topic as I have seen.

9. That platform was Zope, an open-source object-oriented web publishing application, and I was much pleased by how easily it allowed us to think abstractly about information relationships. Zope provides a simple object model for web publishing; in effect, it transforms web publishing from a process of retrieving files or producing database queries to one of publishing objects—a shift to a considerably more sophisticated level.
What impressed me most was that this tradition began with an educational vision: the original beneficiaries of this thinking were children, and it was only considerably later that personal computing and object-oriented programming became wrapped up with business needs and office productivity. The more I began to look at the history of computing over the past three decades, the more I realized that the tradition represented here—centering on Alan Kay's multi-decade project—had enormous importance to the study of computing in education, far more importance than is commonly appreciated. I had found my thesis topic.

WHY THIS STUDY? WHY THIS APPROACH?

Reflecting on my history

When I returned to school in 1999 with an agenda to study technology from an educational perspective, it was with a particular set of constraints that I began to set the scope of my work.

• I saw a real need for any work I was to undertake—be it development or analysis—to have some sense of historical embeddedness. I had learned this during my work on the SGML project at the Open Learning Agency, and I felt that for my work at the doctoral level it was essential that awareness of, if not respect for, what had gone before had to be a foundational piece.

• I had developed a jaded attitude toward ambitious projects and bold, 'new' ideas; having personally been through the belly of a few of these, I had the sense that a good proportion of the energy that drives innovative projects comes from the promise of proving something (or oneself) to naysayers. Every successful endeavour is a rally against cynicism, but where this itself becomes the driver, healthy enthusiasm gives way to hubris.

• I had come to recognize the "microclimate" surrounding most educational technology projects, within which all things are possible and kids do "wonderful" things.
Beyond the fragile boundaries of the sphere of energy provided by a few inspired individuals (often teachers), duplication or scaling-up of such projects is impossible. I later ran into this concept better articulated as the "miracle-worker discourse" (de Castell, Bryson, & Jenson 2002).

• I had the sense that the division of labour between experts and end-users was pathologically implicated in most educational technology projects, and that somehow this reification needed to be resisted.

• I was wary of—if not positively hostile to—baldly technocentric thinking; there was no way I was about to uncritically engage in any actual development projects. I returned to school determined not to be anyone's "webmaster," nor to work on any "e-learning" projects, despite numerous opportunities to do so.

Part of my intent in returning to school was to claw my way back from my position on the 'techie' side of the division of labour. For, lest we ascribe too much power to the (vaguely defined) 'technocrat,' we should remember that technical work is still work—that it is labour, carried out according to the same kinds of logic that govern labour in more traditional contexts. Technical work carried out in a naive technocentric mode is in the worst cases straightforwardly exploitative, in the best cases still narcissistic.

"Computer criticism"

In a 1987 article in Educational Researcher, Seymour Papert proposed a genre of writing he called "computer criticism." I felt that my work should take this to heart. But what would "criticism" mean, exactly, in the context of educational technology? Contrary to the wealth of disengaged condemnation that bills itself as critique, criticism in the larger sense properly demands a certain kind of engagement to be meaningful. As Papert pointed out:

The name does not imply that such writing would condemn computers any more than literary criticism condemns literature or social criticism condemns society.
The purpose of computer criticism is not to condemn but to understand, to explicate, to place in perspective. Of course, understanding does not exclude harsh (perhaps even captious) judgement. The result of understanding may well be to debunk. But critical judgment may also open our eyes to previously unnoticed virtue. (Papert 1987, p. 22)

I have seen very little writing on educational technology that lives up to what Papert proposed here.10 Criticism in the sense described here requires not just a familiarity but a fluidity and fluency with the issues, the discourses, and the practices of computing. It requires a sense of and appreciation of where these discourses and practices come from, historically and topologically—that is, beyond the disciplinary boundaries in which they may immediately be seen. In a very important sense, Papert's computer criticism requires the breaking down of or at least resistance to the division of labour which sets technology and human aims apart. Rather, in order to make sense, we have to be able to see how these positions relate to and give rise to one another.

Multiple perspectives, blurred genres

This, then, has been my starting place: my highest-order goal in this process is to attack or subvert the taken-for-granted division of labour that inscribes the boundaries of technology and which makes Papert's "computer criticism" next to impossible. Whatever potential for empowerment and democracy may come with information technology is repeatedly undermined by the reification of these positions: experts and engineers vs. end-users and consumers. That alternatives are possible—and indeed extant—is the case which I seek to make here.

10. Papert's own students are, perhaps predictably, exemplary: e.g., Bruckman 1997; Goldman-Segall 1998; diSessa 2000.
To this task I bring my own background: as a student of education and culture, and also as a designer, creator, and teacher on the technical side. In what follows, I am trying to deliberately blur the lines between computing, social science, cultural criticism, and education. I am, to use Latour's (1993) vocabulary, interested in actively proliferating the hybrids. Of course, this isn't a matter of so much conscious choice on my part. My personal exploration and growth in understanding of all sides of these issues is itself varied and begins from multiple perspectives. Because of this I cannot choose sides or loyalties in the division of labour; I stand with feet on both sides. And because of this, because of my positionalities, I am faced with real methodological constraints: I cannot honestly bracket out the issue of expertise, hide in the inscription of an "outsider" analyst, nor pretend to naivete, as some have attempted in the sociology of science. I have come to a place where I can inhabit neither the "miracle worker" role nor that of the "humanist" critic—for these are caricatures which are products of the very division I seek to break down. And so I have undertaken to do this research, and to write this work, from multiple active perspectives: from the standpoint of educational research and cultural criticism and also, simultaneously, from a standpoint of some technical sophistication, to walk the in-between space between the romantic and the ironic: to be more, and to write more, than caricatures. My challenge, then, is twofold: in the first place, my task here is not one of building—nor singing the praises of (that is, instructing)—another tool or toolset. I must aspire to a 'higher' analytic frame, as the literary critic does compared with that of the author.
In the second place, however, intellectual honesty demands that I not restrict my stance to that of observer; in order to meaningfully reflect on what I am studying, I must bring to bear my full faculties, including my experience and positionality—my own history—as a designer and builder of systems. At a theoretical level, this double role is of course the case in literary criticism; a critic is of course also an author—of criticism if not of 'literature' per se; how could it be otherwise? In a practical sense, though, this doubling is problematic in the case of technology—of "computer criticism"—because most of us do find ourselves on one side of the divide or other: designer or user? Do these two exclusives represent our full realm of possibility? In order to do effective criticism, in Papert's sense, I have to transcend that divide—to strike a balance between criticism and advocacy. I have to find a line (or invent one, or at least spend some time mapping one out) between philosophizing about educational IT "from 30,000 ft"—that is, from a disengaged, 'outsider' perspective—and uncritically evangelizing a particular technology or technological vision from the inside.

METHODOLOGY AND THE PROBLEM OF DISTANCE

The study presented here is a historical one. My intent is to present the story of the Dynabook project and to trace its historical trajectories. My challenge, thus, is to write history effectively, and to write effective history; as this is a cultural historical account, my methodological concerns must also bear upon traditions of making sense of culture. As a former student of anthropology, my unsurprising point of departure for the "interpretation of cultures" is the work of Clifford Geertz, who had ethnography in mind when he wrote his groundbreaking essays on the turn to an interpretive—hermeneutic—approach to the study of culture.
Geertz' admonition that the analysis of culture is "not an experimental science in search of law but an interpretive one in search of meaning" (1973, p. 5) is as applicable to the business of historiography as it is to ethnography, as a body of work emphasizing cultural history shows. It is worthwhile tracing the "interpretive turn" backwards from Geertz to French philosopher Paul Ricoeur, whose article "The Model of the Text" (1971/1991a) set the stage for the adoption of interpretive 'method' in the social sciences—particularly in the English-speaking world—in the 1970s. Tracing back further from Ricoeur takes us to the major source of modern philosophical hermeneutics in the work of German philosopher Hans-Georg Gadamer.

Gadamer's argument in his magnum opus, Truth and Method (1975/1999), is that it is only through participatory knowledge and interpretation that we are able to come to any kind of an appreciation or critique of what we study. This is significantly at odds with the more traditional methodological quest for objectivity, and the tension here has been a central issue in social sciences for the past half-century; it puts in question the very idea of social "science" and is the crux of Gadamer's questioning of "method." The task of researching human life and meaning cannot possibly proceed according to the objectivist ideals of natural science, since the question of meaning can only be approached by engaging with the subject at hand—intersubjectively, participatively, performatively. Gadamer wrote, "The concept of the life-world is the antithesis of all objectivism. It is an essentially historical concept, which does not refer to a universe of being, to an 'existent world'" (1975/1999, p. 247).
Jürgen Habermas is perhaps most succinct in his explication of this concept:

Meanings—whether embodied in actions, institutions, products of labor, words, networks of cooperation, or documents—can be made accessible only from the inside. Symbolically prestructured reality forms a universe that is hermetically sealed to the view of observers incapable of communicating; that is, it would have to remain incomprehensible to them. The lifeworld is open only to subjects who make use of their competence to speak and act. (Habermas 1984, p. 112)

That the lifeworld is essentially historical makes a particular demand on the inquirer. The challenge is not to make sense of social or cultural phenomena qua phenomena; that is, awash in a vast and complex present tense, in some vain effort to discern the structures there. Rather, it is to make sense—make meaning—by engaging with the historicity of the things and ideas and patterns we seek to understand. It makes the writing of history—no less than the writing of ethnography—necessarily and inescapably reflexive. Gadamer's powerful vision of this process is centered around the concept of tradition (1975/1999, p. 284ff) which, following Heidegger's philosophy, makes the temporal unfolding of experience paramount. Richard Bernstein summarizes:

As Gadamer sees it, we belong to a tradition before it belongs to us: tradition, through its sedimentations, has a power which is constantly determining what we are in the process of becoming. We are always already "thrown" into a tradition. We can see how far Gadamer is from any naive form of relativism that fails to appreciate how we are shaped by effective history (Wirkungsgeschichte). It is not just that works of art, text, and traditions have effects and leave traces. Rather, what we are, whether we are explicitly aware of it or not, is always being influenced by tradition, even when we think we are most free of it.
Again, it is important to reiterate that a tradition is not something "naturelike," something "given" that stands over against us. It is always "part of us" and works through its effective-history. (Bernstein 1983, p. 142)

The notion of effective history—and, as Gadamer extends it in Truth and Method, "historically effected consciousness"—implies the kind of engagedness that binds me as a researcher, and in which I eschew methodological distancing. In fact, the opposite seems to be the rule: "Understanding is to be thought of less as a subjective act than as participating in an event of tradition, a process of transmission in which past and present are constantly mediated" (Gadamer 1975/1999, p. 290). Engagement with and participation in historically embedded situations—that is, within traditions—implies making judgments about what is good, what is valid, what is just. This constructive engagement is used in Goldman-Segall (1998) as a methodological tenet in her strategically decentered ethnography. It is the basis of Rudolf Makkreel's (2004) notion of "ethically responsive" history. Philosopher Alasdair MacIntyre (1984) takes this thread from a history of philosophy into a theory of ethics. Jürgen Habermas (1984) takes it to a theory of communication. In all of these variations, there is no appeal to external authority or source of rationality but the tradition itself, in which the researcher is necessarily—partially at least—embedded. Habermas writes:

The interpreter would not have understood what a "reason" is if he did not reconstruct it with its claim to provide grounds; that is, if he did not give it a rational interpretation in Max Weber's sense. The description of reasons demands eo ipso an evaluation, even when the one providing the description feels that he is not at the moment in a position to judge their soundness.
One can understand reasons only to the extent that one understands why they are or are not sound, or why in a given case a decision as to whether reasons are good or bad is not (yet) possible. An interpreter cannot, therefore, interpret expressions connected through criticizable validity claims with a potential of reasons (and thus represent knowledge) without taking a position on them. And he cannot take a position without applying his own standards of judgment, at any rate standards that he has made his own. (Habermas 1984, p. 116)

I see the basic question, or problematic, in historiography to be the distance between the storying of the historical subject and the storying of the historian; the distinction between primary and secondary sources is necessarily blurred. I do not take this to be a problem of validity—nor of endless layers of relativism—rather, in line with the hermeneutical approach, I see this straightforwardly as the space of interpretation. On the question of the relationship between lived experience and narrated story, I do not see a "discontinuity" between these, as Hayden White (1973) has famously claimed; rather, I believe, with Gadamer and his followers—notably Alasdair MacIntyre (1984), David Carr (1998), and Andrew Norman (1998)—that lives are lived and made sense of, in the first person, narratively. The conception of the unfolding of experience, and the hermeneutic circle of its ongoing interpretation in time, is grounded in the Heideggerian phenomenological tradition, and I take as a basic methodological principle the treatment of experience, and action, as text (Ricoeur 1971/1991a). The historiographical challenge, then, is not one of gaining privileged access to or an unadulterated representation of what has gone before, but rather one of entering into a meaningful engagement with the myriad layers of effective history which themselves produce the possibility of such an engagement.
This is why the details of my personal history with respect to computing and computing cultures are important to this account, as is my process of assembly and immersion in the sources I bring to bear here. It is worth noting in passing that nothing in this framing would be out of place in a contemporary exposition on ethnographic method, but the present study is not ethnography. Rather, I present here an analysis of a body of historical documents—to discern and document the traditions, genres, and apparent structures of discourse and practice. I will attempt to outline the details of this approach presently.

Introducing genre theory

Drawing on the historiography of Mark Salber Phillips, and contrary to Hayden White's somewhat monolithic and ahistorical notion of emplotment, it makes sense to speak of historically contingent genres of lived narrative. Phillips argues that history is best seen as a family of historically contingent and often overlapping genres. This approach—building on White's opening of historiography to literary theory and concepts—points to a rich interdependence of text, context, readers, and writers (Phillips 2000, p. 10). "Genre, of course, is not a self-contained system. It is a way of ordering and mediating experience, literary and extraliterary" (p. 11). Genre theory—as it appears in historiography (Phillips 2000), in literary theory (Jauss 1982; Cohen 1986; Miller 1994a; 1994b), in linguistics and cognitive science (Swales 1990; Scollon 1992; Bazerman 1998; Lemke 2001)—opens up the possibility of considering the material in question directly in the context of the communities (scholars, audiences, markets) of people for whom the material is relevant. Swales is perhaps the most direct: "genres are the properties of discourse communities" (Swales 1990, p. 9).
This works bi-directionally in a sense; genres are defined by the communicative acts of discourse communities, who are, in turn, constituted by communicative acts understandable within certain genres. For this to work, we must ensure that we do not reify genres as sets of analytic characteristics or properties (Cohen 1986, p. 210). Bazerman puts it most eloquently:

By genre I do not just mean the formal characteristics that one must observe to be formally recognized as correctly following the visible rules and expectations. Genre more fundamentally is a kind of activity to be carried out in a recognizable textual space. [...] Thus genre presents an opportunity space for realising certain kinds of activities, meanings, and relations. Genre exists only in the recognition and deployment of typicality by readers and writers—it is the recognizable shape by which participation is enacted and understood. (Bazerman 1998, p. 24)

In putting the emphasis on genres as structures of activity and participation—rather than just of literary forms or bloodless "discourse communities"—Bazerman returns us to the continuity of narrative as lived and narrative as written: that the modes of interpretation of one's life and experience "typically" fall into various historically and culturally contingent patterns or categories (cf. a very similar framing in MacIntyre 1984, p. 212). Thus there is no essential difference between the treatment and interpretation of one's lived narrative and that of the subsequent "historical" narrative; both are literary/interpretive acts, both operate according to conventional, historically conditioned genres and modes. Genres, then, are "perspectivity technologies" (Goldman & Maxwell 2003) that mediate between individual meaning and social practice.
What this means for the present study is that genres are among the fundamental units of analysis, and, following Phillips (2000), my intent is to focus on the rise and fall of particular genres in historical context. In this sense, the subject of this study is the emergence, translation, and relative survival of a set of genres of computing and technocultural discourse.

History as politics

The historian's task is "not merely a reproductive but always a productive activity as well" (Gadamer 1975/1999, p. 296). What makes the writing (and reading) of history interesting and useful is the dynamic between these interpretive locations. The writing of history is thus not an attempt to nail down what happened in the past, nor to explain the present by the past, but rather an attempt to "shape and re-shape our collective understanding" (Norman 1998, p. 162) of the past, the present, and, more importantly, the future. The writing of history is therefore a generative, engaged, political process that is itself historically embedded. This very embeddedness, and not any methodological measure, is what keeps us from hopeless relativism. I am not—nor is any writer—in the business of telling a story in the sense of one story among many; I can only write the story according to me, from my particular perspective. What I must strive to do—and here is the role of methodology, discipline, and rigour—is increase where possible the number and variety of perspectives considered and repeatedly test the story against existing frameworks; that is to say, to maximize the connectedness of my account. Lather's (1991) discussion of triangulation, construct-, face-, and catalytic-validity guidelines are directly instructive here, and serve the same overall end of connectedness.
It is only possible for me, as an embedded subject, to tell one story—the story I am given, shaped by my historical and hermeneutical horizons—the story which emerges for me in the process of my investigation. I can speak of "a story" among many possible stories, but I can only honestly mean this plurality as a reference to the possibility of or invitation to other stories, which can only be evoked or juxtaposed, and never directly represented—to do so would require that I actually have some means of getting outside the stories, to live outside the lifeworld, outside of narrative. The story, then, is necessarily incomplete; its only hope for greater completeness is to be further embedded.

The Dynabook as/in history

Studying a technological project isn't any harder than doing literary criticism.
- Bruno Latour, Aramis

To frame my process and thus the scope of the story I will tell, a brief overview of sources and my engagement with them is in order. My starting point for this research is a rich reflective document by Alan Kay dating from the early 1990s, "The Early History of Smalltalk," prepared as a conference presentation in 1993 for the ACM's History of Programming Languages II and subsequently published (88 pages worth) by the ACM11 in a large volume compiling formal articles on the sessions with appendices, presentation transcripts, and discussants' remarks (Kay 1996a). Kay's piece serves as a bird's-eye view of the first decade or so of his work, as well as a hub document for subsequent research. "The Early History of Smalltalk" is still referenced and recommended within the development community as probably the single best articulation of Kay's vision and work, especially in the 1970s.

Moving out from this point, I located two main bodies of literature which, though grouped chronologically, I distinguish contextually rather than as primary vs. secondary sources. The first, dating mostly from the 1970s, are conference papers and research reports from Xerox PARC, outlining the Dynabook vision, the details of the early Smalltalk implementations, and a smaller amount of material on educational research. In this literature, there are a small number of documents from the early 1970s, and then nothing until 1976, apparently due to a Xerox crackdown on publicity in the wake of a revealing article by Stewart Brand in Rolling Stone magazine in 1972—an article which portrayed the PARC researchers as long-haired "hot-rodders" (Brand 1972) and about which Xerox executives were somewhat embarrassed (Kay 1996a, p. 536). After 1976 there is a wealth of material documenting research and development; this literature culminates in the early 1980s with the end of the original Xerox PARC teams and the release of the Smalltalk technology to the world beyond Xerox. A second body of literature I have identified follows on Kay's 1993/1996 publication, and comprises a large number of journalistic articles and interviews with Kay as well as a substantial number of lectures and presentations by Kay (many of which, thanks to the Internet, are available online as digital video, audio, or text transcripts). This second body of literature roughly coincides with the public release of the Squeak technology by Kay's team at Apple Computer and, later, Disney Corporation in the 1990s.

11. The ACM—Association for Computing Machinery—is computing's pre-eminent professional association. It operates a press and an enormous library (digital and otherwise) of published material, proceedings, and literally dozens of periodical publications running the gamut from newsletters to scholarly journals. In the world of scholarly communication, the ACM is an exemplar, and is certainly a terrific resource to the historian.
The Squeak technology was positioned as the re-realization of portions of the original Dynabook vision, and not surprisingly, the literature appearing in the late 1990s and early 2000s is concerned largely with the re-articulation of that vision and its key principles, often in much more detail than the works from the 1970s, but certainly in line with them conceptually. Given these two groupings—material from the Xerox PARC period (1971-1983) and then from the mid 1990s onward—there is clearly about a decade in which very little published material appeared. During most of this time, Alan Kay was at Apple Computer and working on educational research with a very low public profile; I have had to do some digging to turn up a number of unpublished reports from this period12—a phase I think is critical to an appreciation of the project's re-emergence (with Squeak) in the late 1990s, ostensibly a continuation of the original 1970s vision, but now appearing in a very different world. A third "body of literature" I encountered is the archived Internet communications of the project teams and worldwide community surrounding the Squeak project since the 1990s; this comprises several mailing lists, a dozen or so websites, and the archive of the various versions of the software released on the Internet since then. A number of instructional and/or reference books on Squeak have been published since the late 1990s as well, and I include these in this latter category.
Having oriented myself to these three major document groups, it then became possible to interpret a large body of surrounding literature through the lenses of the Dynabook vision and its various research projects; this surrounding literature includes the documentation for the Smalltalk language (aimed at software developers rather than children or educators) from the 1980s; a wide variety of educational research and development work spanning three decades or more from MIT's Media Lab, with which Kay has been loosely associated over the years and which offers something of a parallel research agenda, particularly in the work of Seymour Papert and his successors; and, more widely, a large body of literature on educational computing, computer science, and Internet culture over the past two or three decades. My trajectory took me through much of this documentary material, and having absorbed and made sense of a good deal of it, at least on a superficial level, I made personal contact with several of the key individuals in this story. In the spring of 2004 I travelled to Glendale, California, to spend two days at the Viewpoints Research Institute with Alan Kay and Kim Rose, during which time serendipity allowed me to meet and talk to both Seymour Papert and Smalltalk developer Dan Ingalls. In the fall of that year, the OOPSLA '04 conference happened to be in Vancouver, which brought Kay and a good number of the Squeak community to my own hometown, and which provided an excellent opportunity to talk with many of these people—in particular, I had opportunity to interview Ted Kaehler, who had been part of Kay's team since the early 1970s, and whose exceedingly detailed and organized memory of the past thirty-five years proved invaluable in answering myriad detail questions in my construction of this narrative. In a similar vein, conversations with Kim Rose and Ann Marion—both of whose relationships with the project date to the mid 1980s—filled in many gaps in my understanding. I want to point out, though, that I do not consider this to have been an interview-driven research project; my primary thread has been the consideration and interpretation of written texts, and I see these many valuable conversations as supporting that primary documentary work. I spent the better part of two days talking with Alan Kay while I was in Glendale, but there was very little of that wide-ranging and sublimely tangential conversation that would be recognizable as 'interview'—a judgment Kay notably approved of. In retrospect at least, I feel there is something of the spirit of Jerome Bruner's "spiral curriculum" in my research method—a 'developmental' re-framing of the hermeneutic circle. I have gone around and around the literature, reading, re-reading, and adding to the collection as I have gone along; at each re-consideration and re-interpretation of my sources, and certainly through the process of writing and re-writing this account, I have achieved what I hope are both deeper and more broadly connected interpretations and framings of the various facets of the story.

12. Alan Kay and Kim Rose at the Viewpoints Research Institute are to thank here for opening their substantial archives to me; Ann Marion, project manager with Kay while at Apple Computer, similarly deserves thanks for sharing her own archives. But while my research of this period serves to capture the main currents of research and thinking, I make no claim to have exhaustively covered this period; there remain mountains of primary documents from the 1980s that I did not cover, including hundreds or thousands of hours of video.
I consider this to be an example of how hermeneutically informed historical inquiry ought to move: an iterative process of the "fusing of horizons," of achieving a particular instantiation of the "unity" (as Gadamer would have it) of self and other. In the depth of time and interest I have engaged with it, it is hopefully rich and at least defensible in terms of its "validity." It is also entirely incomplete, inexhaustive, and open to debate. Whether this study succeeds I believe should be judged in terms of the value and appeal of such debates.

HOW DO WE KNOW A GOOD IDEA WHEN WE SEE ONE?

One of Donna Haraway's great themes is that none of us is innocent in the realm of technoscience. It is only by virtue of our involvement and investment with these issues that we are able to make any sense, or make any difference. And so we can only attempt to own and own up to our position(s). Haraway writes:

The point is to make a difference in the world, to cast our lot for some ways of life and not others. To do that, one must be in the action, be finite and dirty, not transcendent and clean. Knowledge-making technologies, including crafting subject positions and ways of inhabiting such positions, must be made relentlessly visible and open to critical intervention. (1997, p. 36)

I bring this point to the foreground in order to add one more layer of context to the present study: to forswear the objective of critical distance. To write an "agnostic" study of a case like the one I am writing here—the historical trajectory of Alan Kay's Dynabook vision—would be to limit the significance of the study to the 'conclusion' that a particular technological project failed, or succeeded, or enjoyed market success, or withered away. It would not, on the contrary, be able to speak to the ethical dimension of such a trajectory and to suggest why we might care.
This is to say, given the ambitions of this study as I have framed it for myself, it would not be possible to do the present inquiry while holding to an ideal of objectivity or agnosticism. For the overarching aim of this study—and I take "aims" to be foundational in deciding methodological issues—is to break down the division of labour that we take for granted, to subvert the reified discourses of 'experts,' 'engineers,' 'designers,' 'end-users,' 'miracle workers,' 'plain folks' and so on, an aim I think is possible at least by making the different threads and voices herein aware of one another: by building bridges, or at least by drawing attention to the always-already constructed boundaries inscribing our various relations to technoculture. I want the readers of my work to be able to read these discourses, evaluate these practices, with a richer, more critical eye than the usual rhetoric surrounding technology (especially in education) affords. This is precisely the goal of criticism in the large; it provides us with improved means for making practical judgements, for identifying the right thing, or the good thing—or at least the "better" thing. In order to answer the question, "how do we recognize a good idea when we see one?" we have to first recognize a good idea. This cannot be done while maintaining a stance of agnosticism or methodological distance. It can only be accomplished by engagement and then elaboration of the layers and spheres of meaning which can be generated within that space of engagement—it demands, as Haraway says, being "finite and dirty."

Personal computing/educational technology as a site of struggle

Thus what essentialism conceives as an ontological split between technology and meaning, I conceive as a terrain of struggle between different actors differently engaged with technology and meaning.
- Andrew Feenberg, Questioning Technology

Philosopher of technology Andrew Feenberg's conception is a response to the burden of essentialism, inherited from centuries of European philosophy. Feenberg's stance—technology as site of struggle—is a starting point for me. I conceive of technology, information technology, educational technology, not as a thing or a discourse or a set of practices to be analyzed or judged, but as a contested ground, over which will be fought the battles concerning democracy, education, and capitalism in this century. Personal computing is "socially constructed"—it is facile to point this out. In its construction is found the site of a struggle for meaning and significance. The substructure of this struggle is in the competing genres of understanding and articulation of what personal computing—and educational computing—are about. My starting assertion is then that the Dynabook is a volley thrown into the midst of that struggle. It is not purely an 'historical' artifact whose day was in the 1970s; rather, the thrust of the Dynabook project is at least as relevant today in the 21st century as it was three decades ago. This is a battle for meaning, a battle to define what computing is and shall be, and it is far from decided. The stakes of this battle are, I believe, higher than we tend to admit, as our default instrumentalist stance leads us to downplay the significance of this struggle. How computing is defined in these early years (in its incunabula, Kay would tell us) will have enormous consequence for how we live and work and learn and conduct our communities in the future. I mean this in the most concrete sense, but I am not interested here in evaluating any particular educational technology in terms of instructional efficacy.
Rather, my interest is at a broader level of citizenship and democracy, in the sense that John Dewey established:

A democracy is more than a form of government; it is primarily a mode of associated living, of conjoint communicated experience, the extension in space of the number of individuals who participate in an interest so that each has to refer his own action to that of others, and to consider the action of others to give point and direction to his own... (Dewey 1916, p. 86)

This may sound like an unabashedly Romantic framing of the issue—Kay has acknowledged as much on numerous occasions, and so should I. I am not neutral; I am not here to play the disengaged observer. I am interested in casting my lot for some genres and not others: genres which define and structure practice, interpretation, understanding as political and ethical patterns. Historian Mark Salber Phillips studied the rise and fall of genres of historical writing/understanding in 18th-century England. I see the present project as doing something related for the technocultural genres of the past three decades. My task, then, is to demonstrate how and why technology is political—more specifically, software as politics by other means. There is a superficial interpretation of this identification which concerns the way in which software is designed, released, and licensed in ways which to a greater or lesser extent constrain and direct a "user's" actions and potential (Lessig 1999; Rose 2003); this is an arena of ongoing battles in copyright law, emerging trends like free and open-source software, the reach of open standards, and the market dynamics of dominant corporations such as Microsoft, Adobe, Apple, and others. But a second and somewhat deeper interpretation of technology and software as politics has to do with a gradual (but by no means consistent nor even particularly widespread) democratization of computing over the past four decades.
In this interpretation, it becomes possible to identify and map out the interplay of power and resistance within a technical sphere like the Internet or even your desktop. This is the arena in which much of Alan Kay's project can be considered. This is the level at which the educational implications of computing are most acute, and, indeed, most accessible, if we take the time to look. A third, more systemic interpretation requires a foray into philosophical theories of technology, locating the political aspects of technology at their lowest and most general level. It is to this topic that I turn next.

Chapter 3: Framing Technology

In order to do justice to a treatment of technological development, I want to first establish a philosophical position with respect to technology. In doing so, my aim is to set up a theoretical scaffolding upon which I can hang particular reflections in my examination of Alan Kay's vision of personal computing. My intent here is not to provide a broad-strokes "theory of technology"—neither by means of reviewing a hundred years' worth of philosophizing on the topic nor by attempting to construct a water-tight model. What I will try to present here is a provisional framing that draws attention to a number of particularly interesting characteristics of technology. The framing I have in mind rests on a foundational concept of technology as media and as mediation. Building on this, I will introduce the "sociology of translation" as advanced in the work of Bruno Latour and Michel Callon, and I will elaborate the implications of translation as a principal metaphor for the dynamics of technocultural systems.
This semiotic aspect of technology leads to a discussion of what I have called the "mechanics of text," and here I wish to present a latter-day framing of the kind of media ecology Marshall McLuhan outlined in The Gutenberg Galaxy, but with digital software given the central role, rather than print. The emphasis on software itself leads to a consideration of simulation as the paradigmatic practice of dynamic digital media, and I will argue that this is an essentially hermeneutic process, something which should focus our critical attention on its situatedness. Finally, I will discuss some methodological implications raised by this framing of technology, and put the spotlight on the ethical and political considerations of a cultural treatment of technology.

T E C H N O L O G Y A S M E D I A

My starting point is to treat technology as mediation—or as media, this word bringing with it a particular set of connotations and references (largely set in their current form by McLuhan in the 1960s). To treat technology as media is to establish a perspective probably distinct to the "information age." When the artifacts provoking discussion are the Internet and computing devices, a decidedly different tone is set than with industrial technology such as steel mills and power stations. This distinction immediately brings forth the hermeneutic aspect of technology (Feenberg 1999; 2004), by which I mean the complex of forces and processes which govern the significance of particular technologies to particular people in particular times and places. And it is the hermeneutic question that I mean to foreground here, rather than any suggestion of an essence of technology.

In sum, differences in the way social groups interpret and use technical objects are not merely extrinsic but also make a difference in the nature of the objects themselves.
What the object is for the groups that ultimately decide its fate determines what it becomes as it is redesigned and improved over time. If this is true, then we can only understand technological development by studying the sociopolitical situation of the various groups involved in it. (Feenberg 2004, p. 216)

A treatment of technology as media or mediation lends itself also to the exploration of a media ecology. Now, by ecology I do not mean anything green (at least not directly). Rather, what I mean by ecology is, after Postman (1970), a dynamic, evolving system in which actor and environment are inseparable and mutually constitutive, in which both people and cultural artifacts are considered, and in which responsibilities and ethics are emergent and situated, in which the content-context distinction itself is problematic and should probably be avoided, for everything is the context for everything else. But, lest I risk characterising the whole issue as a sea of aporia, let me appeal back to mediation and hermeneutics—which provides a vocabulary for dealing precisely with this sort of contextualism. From a hermeneutic perspective, we are our mediations; mediation is primary, and not something that happens to already-existing entities. What this implies is that, as in McLuhan's famous aphorism, the medium is the message (and that practically speaking, it is an error to attempt to distill one from the other), we human beings are effectively inseparable from our technology/material culture. Is this so radical a stance? It would not be terribly controversial to claim that human beings are inseparable from our language(s). I mean to claim for technological mediation this same primacy, to position technology as language, or at least to suggest that it be treated similarly. I make this claim not as part of a characterization of modernity, but as a fundamental part of what being human is about.
Putting mediation first dissolves the debate between the idea that language expresses pre-existing thoughts and the notion that we are trapped within the limits of our language; or, similarly, whether culture is something internal or external to individuals. Michael Cole's influential book, Cultural Psychology: A Once and Future Discipline (1996) draws upon the Russian cultural-historical school (after Vygotsky) to elaborate a theory of mediated, contextualized action which "asserts the primal unity of the material and the symbolic in human cognition" (p. 118). In Cole's version of mediated action, artifacts are simultaneously ideal (conceptual) and material:

They are ideal in that their material form has been shaped by their participation in the interactions of which they were previously a part and which they mediate in the present.... Defined in this manner, the properties of artifacts apply with equal force whether one is considering language or the more usually noted forms of artifacts such as tables and knives which constitute material culture. (p. 117)

If we are our mediations, then certainly we cannot posit that ideas precede expression or mediation; nor can we accept that we can only think what our language of expression makes possible, for we can invent languages (and indeed do). So with our technologies: we do not exist in some essential way prior to technological mediation, nor are we subsumed within a technological trajectory. We invent tools, as we do languages, and subsequently our experience and agency is shaped by them. But to say this is merely to ape McLuhan; it is the historicity of these 'inventions' that is the particularly interesting story.
Precisely in the dynamics of this relationship and how 'we' (I will put that in qualifying quotes this time) see it—in terms of historicity, class, gender, politics, and so on—are found the interesting and important stories; that which is most worthy of study and the investment that leads to deeper understanding. Far from seeing technology as something that augments or diminishes—or indeed qualifies—the human, my starting place is that humanity is itself technologically defined, and in myriad ways.1 Donna Haraway's cyborg trope is a particularly eloquent and evocative address to this notion:

There are several consequences to taking the imagery of cyborgs as other than our enemies. [...] The machine is not an it to be animated, worshipped, and dominated. The machine is us, our processes, an aspect of our embodiment. We can be responsible for machines; they do not dominate or threaten us. (Haraway 1991, p. 179)

There is a decidedly historical character to such framings and perspectives. How this relation presents itself to us today in the age of the "cyborg" is not what it would have been in the "machine age" of steam and steel; nor would it have the same character in the 13th century, with its 'machinery' of iconography, horsecraft, and emerging bureaucracy. But what is the same in all these cases is the central role of technological mediation. What is mediation, then, or media? I want to spend some time teasing out the implications of these concepts by going down two different—but complementary—routes. The first route is the work of architect Malcolm McCullough; the second is by way of the sociology of technoscience of Bruno Latour and Michel Callon.

McCullough's Framing of Media

To turn to the more specific formulation, then, what is a medium?
To my mind, nobody answers this better—and in a definitively active, constructive, and contextualist mode—than Harvard architecture professor Malcolm McCullough, in his 1998 book Abstracting Craft. McCullough writes:

Tools are means for working a medium. A particular tool may indeed be the only way to work a particular medium, and it may only be for working that medium. Thus a medium is likely to distinguish a particular class of tools. [...] Sometimes a medium implies such a unique set of tools that the whole is referred to without differentiation. Painting is a medium, but it is also the use of specific tools and the resulting artifact: a painting. The artifact, more than the medium in which or tools by which it is produced, becomes the object of our work. [...] Artifact, tool, and medium are just different ways of focusing our attention on the process of giving form... In many refined practices, the perception of a medium surpasses any perception of tools. If a medium is a realm of possibilities for a set of tools, then any immediate awareness of the tools may become subsidiary to a more abstract awareness of the medium. (McCullough 1998, pp. 62-63)

McCullough positions media in the midst of the "process of giving form"—that is, practice. His framing integrates material and conceptual resources equally, suggesting that these become distinct as we focus our attention differently. McCullough clearly echoes Heidegger's famous ontology of the hammer as ready-to-hand. He also, as we will see, parallels Latour's vocabulary of articulation and network; the hand and tool and medium become a network, aligned in the articulation of the work.

1. Bruno Latour's Pandora's Hope (1999, pp. 202-213) traces a possible history of technocultural mediation which begins even before the primordial "tool kit" of stones and sticks: with the very idea of social organization—as technology.
McCullough's exploration of the subtleties of craft—and particularly handcraft—moves from tool to medium to artifact seamlessly, drawing his attention to the spaces of tension and grain within each, letting his attention fall away where it is not needed, according to the dynamics of actual, situated practice. Note the emphasis in McCullough's account on work, and in particular form-giving work. This is a participatory stance, not a spectatorial one; it bases the ontology of media in the actor, and not in the spectator or consumer. Compare McCullough's framing with that of Peter Lyman, writing on the computerization of academia:

[M]ost fundamentally, most people only want to 'use' tools and not to think about them; to nonexperts, thinking about tools is a distraction from the problem presented by the content of the work [...] Whereas a machine has a purpose built into its mechanism, and a tool requires the novice to acquire skill to realize this purpose, a computer is a field of play only to an expert. (Lyman 1995, p. 27)

Lyman's analysis makes 'tools' into rather ahistorical black boxes while appealing to higher-order goals—which may prove to be something of a false economy. How a particular tool comes to be neatly integrated in a particular practice—to the point where it becomes subsumed into the practice—is a profoundly historical and political process (Franklin 1999). This notion is often troublesome, but there exists a case in which nearly everyone recognizes the historicity of 'practice': the relationship between a musician and her instrument. Both McCullough and Lyman touch on musicianship, but treat it rather differently. Lyman goes so far as to give special status to this case, claiming that the musician/instrument connection transcends tool use: "In performance, the musical instrument and player interact in a manner that cannot accurately be described as a human-tool relation" (Lyman 1995, p. 29).
McCullough's treatment is much more involved: "acute knowledge of a medium's structure comes not by theory but through involvement" (McCullough 1998, p. 196). This awareness or knowledge has two faces: one is the familiar falling away of intermediaries to allow consciousness of the medium or practice itself. In the other are the traces of culture, knowledge, and history that become wrapped up in our tools, media, and artifacts. The guitar may disappear from the consciousness of the musician, such that she becomes aware only of the music, but over time, the instrument will bear the marks of her playing, will show wear from her hands, be stained by the moisture from her skin; conversely, her hands and her musicianship in general will bear the complementary wear patterns. Our tools, media, and artifacts are no less situated than we are, and they share the temporality of our existence. They have history, or historicity, or horizons that we merge with our own. The social structure of this shared historicity is something akin to literacy, a topic I will return to at length.

Latour's Mediation: Articulations and Translations

Bruno Latour entered the consciousness of English-language social science with his 1979 book with Steve Woolgar, Laboratory Life, which has some claim to being the first real ethnography of a scientific laboratory. Latour and Woolgar were interested in explicating the process of scientific investigation, and the tack they took was one which would largely define Latour's career: science as inscription—that is, the turning of things into signs, and, therein, the application of semiotics (in the mode of A. J. Greimas) to science and technology theory. Latour is now more famous for his association with colleague Michel Callon and the so-called "Paris school" of science and technology studies, and with a school of thought about science and technology called "actor-network theory" (later reified simply as ANT).
Actor-network theory has been notoriously controversial (see, for example, Collins & Yearley 1992; Bloor 1999) and, I think, broadly misunderstood—in part simply because the wrong elements are emphasized in the moniker "actor-network theory." Latour's repeated call has been for symmetry in how we treat human and nonhuman agency, the social and technical, and his attempts at making this clear have required constant re-iteration and clarification over the past two decades. His first theoretical book in English, Science in Action (1987), made the first broad strokes of a general picture of his ideas. The book was influential but its bold constructivist claims were seen by many as dangerously out on a limb (e.g., Bricmont & Sokal 2001), such that Latour has published substantial reworkings and reframings, notably the excellent essay "Where are the Missing Masses? The Sociology of a Few Mundane Artifacts" (1992), We Have Never Been Modern (1993), and Pandora's Hope: Essays on the Reality of Science Studies (1999). There is a general trend in these works from the initial semiotic approach Latour and Woolgar took towards a much more all-encompassing ontological stance, one that bears some resemblance to phenomenological theory. Since the early 1990s, Latour's works have been characterized by his use of a special vocabulary aimed at getting around semantic ambiguities raised by his theorizing, and influenced by A. N. Whitehead's event-oriented process philosophy.2

James' Choo-choos (December, 2003)

At 19 months, my son James is enthralled with trains. He has a small wooden train set, and he is fascinated by it: by the way the cars go together, the way they go around the track, the way the track goes together. He looks for trains out in the world and in books. He sees things, like fences, and says, "choo choo," presumably seeing them as if they're tracks. Trains are educational mediators for James. Here's how: we cannot assume that a train or trainset is for him what it is for us. A train cannot be said to simply "be"; a train is the product of heavily layered interpretation, tradition, enculturation. But for James, encountering the world for the first time, a train is something new; he has no idea what a 'real' train is or what it is about—how could he? What a train is, is so under-determined in his case that his understanding of the significance of what a train is must be completely different from, say, mine. We can talk to him about it, because there is a common referent, but its significance in his world is—must be—so very different from mine. James at 19 months is just beginning to develop an overall sense of the world, as opposed to knowing fragmentary things here and there, the episodes of immediate experience. Now he is beginning to systematize; the trainset—and his railroad trope more generally—is one of the first systems of things that he's really engaged with. The train and its microworld is a whole system: it has a grammar, a set of rules, constraints, and possibilities. James uses the train set as a model, or a frame, to look at the rest of the world. The trainset is a language, a symbolic system, and in the way he uses it as lenses with which to see the rest of the world, it is almost entirely metaphoric. Of course, trains are metaphors for adults too, but in a much different, and perhaps less dynamic way. As he grows up, other things will come to take the place of the trainset as his lens. He learns systems of classifications (animals, colours, weather, etc.) and media like pictures and text. Books and print are already in line to become powerful symbolic tools for his understanding the world, but not yet; a book is still merely a container of stories for him, rather than a "personal dynamic medium" like the choo-choo train.
2. A. N. Whitehead's 1929 book, Process and Reality, is the touchstone here; one seemingly picked up by Alan Kay in his early writings as well (see Kay 1972).

In Pandora's Hope (1999) Latour provides a comprehensive treatment of technical mediation, presenting the vocabulary of Paris-school science studies—associations, delegation, detours, goal translation, interference, intermediaries, programs of action, shifting in and shifting out—all of which elaborate Latour's theme of the alliance and alignment of resources, both human and nonhuman, and the ongoing construction of technosocial systems—or networks—thereby.

Technical artifacts are as far from the status of efficiency as scientific facts are from the noble pedestal of objectivity. Real artifacts are always parts of institutions, trembling in their mixed status as mediators, mobilizing faraway lands and people, ready to become people or things, not knowing if they are composed of one or of many, of a black box counting for one or of a labyrinth concealing multitudes. Boeing 747s do not fly, airlines fly. (Latour 1999, p. 193)

A key motif in Latour's recent writing is that of crossing the boundary between signs and things. Two entire chapters of Pandora's Hope are devoted to working through this process in the natural sciences—how, by degrees, raw soil samples become quantified, comparable, publishable inscriptions. And conversely, Latour also attends to the ways in which signs are articulated as things: he draws attention to the lowly speed bump, which the French call a "sleeping policeman." The speed bump's original admonition, "slow down so as not to endanger pedestrians," becomes by stages translated into, "slow down so as not to damage your car's suspension," and beyond, the message being articulated not in words but in asphalt topography:

The translation from reckless to disciplined drivers has been effected through yet another detour.
Instead of signs and warnings, the campus engineers have used concrete and pavement. In this context the notion of detour, of translation, should be modified to absorb, not only... a shift in the definition of goals and functions, but also a change in the very matter of expression. (p. 186)

But, Latour notes, in anticipation of the 'humanist' critique,

We have not abandoned meaningful human relations and abruptly entered a world of brute material relations—although this might be the impression of drivers, used to dealing with negotiable signs but now confronted by nonnegotiable speed bumps. The shift is not from discourse to matter because, for the engineers, the speed bump is one meaningful articulation within a gamut of propositions [which have unique historicity]. Thus we remain in meaning but no longer in discourse; yet we do not reside among mere objects. Where are we? (p. 187)

Latour's foregrounding of articulation and translation as key movements in the relationships between us and our material realities makes mediation foundational, and, as with McCullough's practice-oriented framing, it reminds us that mediation is something ongoing, rather than a discrete step or a qualifier of otherwise stable entities. Stability in Latour's writings is an effect, not a starting point; in Pandora's Hope he is careful to distinguish between the idea of "intermediaries," which look like objects, and "mediations," which produce them. By working out a vocabulary capable of making such distinctions, Latour goes much farther than most in giving us a comprehensive philosophical framework for understanding mediation.

T E C H N O L O G Y A S T R A N S L A T I O N

It is in the detours that we recognize a technological act; this has been true since the dawn of time.... And it is in the number of detours that we recognize a project's complexity.
- Bruno Latour, Aramis

Michel Callon's article "Techno-economic Networks and Irreversibility" (1991) is probably the most lucid single articulation of the position that has come to be called "actor-network theory," which problematizes the individual actor by embedding it in a network and a temporal flow. The network comes to be articulated by means of what Callon calls "displacements"—that is, the re-arrangement and re-definition of various actors' goals, plans, and sub-plans such that these various elements come to be "aligned," forming a network or chain of articulations, and thereby allowing interactions of greater extent, power, and durability. The dynamics of this process are what Callon and Latour spend most of their time explicating; it is a time-consuming business, because each such displacement and alignment depends upon a packaging of complexity into an apparent "black box" of relative stability and dependability. The Paris school locates the methodology of science studies in the unpacking of these boxes, and hence, in the analysis of these networks. And yet, to focus solely on the nouns here—actors, resources, intermediaries, networks—is to miss much of the point. Callon's writings also provide a better slogan, one which better focuses on the process of articulation and alignment: the sociology of translation (Callon 1981; 1986; Callon & Latour 1981). In taking a complex articulation (it could be the reading of data, the negotiation of funding, the assembly of particular material apparatus or group of people) and rendering it as a resource to be used in a larger assembly (with larger/different goals), that particular articulation is translated into something more or less different. It has been re-framed and thus re-contextualized, embedded in a different practical or discursive context; in doing so, its meaning or significance and the work it does changes.
This is what is meant by translation.3 Latour explains:

In addition to its linguistic meaning (relating versions in one language to versions in another one) [translation] has also a geometric meaning (moving from one place to another). Translating interests means at once offering new interpretations of these interests and channelling people in different directions. 'Take your revenge' is made to mean 'write a letter'; 'build a new car' is made to really mean 'study one pore of an electrode'. The results of such rendering are a slow movement from one place to another. The main advantage of such a slow mobilization is that particular issues (like that of the science budget or of the one-pore model) are now solidly tied to much larger ones (the survival of the country, the future of cars), so well tied indeed that threatening the former is tantamount to threatening the latter. (Latour 1987, p. 117)

The extent to which these ties are "solid," or, for that matter, "irreversible" is the matter of some discussion within the literature. Callon's 1991 article suggests that successful network alignments (and thus translations) are indeed irreversible, and some of Latour's early writings seem to support this (his use of the term chreod—Greek for 'necessary path'—taken from biologist Waddington, confirms this reading. See Latour 1992, p. 240). Later, however, Latour seems to turn from this view, arguing that irreversibility is only the product of continual energy and organization (see Latour 1996 for a full treatment of this theme), that systems or networks may undergo crises at any point which tend to misalign these same resources, turning tidy black boxes (the building blocks of more complex assemblies) back into complexities themselves. Translation is thus more a process than a product; it is "the mechanism by which the social and natural worlds progressively take form" (Callon 1986, p. 224).
The framing of technology as mediation I offered earlier means, to use Latour and Callon's rich keyword, technology as translation: the (re-)articulation of the world in new forms and contexts, thereby effecting transformations of its 'Being'. Now, if technology is translation, isn't something always "lost in translation"? Of course it is. My appropriation of the term in the service of this exposition is intended to do a particular kind of work; it foregrounds some aspects of a theory of technology and deprecates others. A quote from Callon and Latour shows this in all its political and metaphorical richness:

By translation we understand all the negotiations, intrigues, calculations, acts of persuasion and violence, thanks to which an actor or force takes, or causes to be conferred on itself, authority to speak or act on behalf of another actor or force. (Callon & Latour 1981, p. 279)

Let me leave this particular line suspended for a moment, however, while we return to a mundane example in order to work through some of the ways in which technology as translation can be articulated. Let us take the old shopworn example of the hammer.4 A translation-oriented way of looking at the hammer is that it is the technology that translates a nail into a fastener. This lands us mise-en-scène in a network of other articulations: a pointy stick of steel is translated into a nail; a nail becomes a fastener of wood; it translates two pieces of wood into a construction. Now that we have those pieces, the hammer can translate the arm into a nail-driver; the hammer-wielder is translated into a carpenter; a stack of lumber is translated into a house-frame; and so on. A whole concert of temporal displacements and ontological shifts occurs in the rendering of a hammer and a nail into 'functional' pieces.

3. Latour and Callon credit French philosopher Michel Serres with this usage of "translation."
Some of these translations are more permanent than others: the house frame hopefully has some stability; the carpenter's professionalism and income are likely dependent on that translation having some extent in time. The hammer variously translates us into builders, the things it hits into fasteners, and the things we hit nails into into artifacts. Heidegger's point about a different ontology being revealed when we hit our thumbs is true, but what the hammer does when we hit nails is much more important. As Latour takes pains to point out, what is at stake are longer and more durable chains of associations. Now, as we have all heard, when all one has is a hammer, everything looks like a nail, and this is nowhere so true as when musing about technologies. Lest we treat all technology as a hammer, remember that the hammer and nail example is but one set of articulations; long pants and thatched roofs and texts and numbers are all technologies, too, as are personal computers and nuclear reactors. Each is a particular articulation of networks of varying degree. In each case, different translations are effected. The common thing among these is not the kind or scope of changes which are brought about, but that the fundamental dynamic is one of translations, and chains of translations. Latour goes so far as to deconstruct the divide between so-called 'modern' and 'pre-modern' societies along these lines; what we see, rather than some kind of quantum difference, is a difference in degree, largely expressed in the scope and scale of translations which can be effected, and thus the relative length of the networks that can be sustained (Latour 1987, 1993).

4. Heidegger's famous rendering makes the hammer "ready-to-hand" in the practice of hammering. McLuhan's framing makes a similar move, making the hammer an extension of the arm (or fist, depending which political nail one is trying to strike).
Latour offers the example of the French explorer Lapérouse, who visited the east Asian island of Sakhalin briefly in 1787, ascertained from the people living there that it was in fact an island (and not a peninsula extending from Siberia, as the French suspected), and was able to record this knowledge in descriptions sent back to France (Latour 1987, pp. 215-218). There is no qualitative divide between the thinking of Lapérouse and the people of Sakhalin (who, Latour notes, drew maps in the sand for the French), but there is a considerable difference in their relative ability to translate knowledge into "immutable, combinable, mobile" forms (p. 227) which in turn facilitate longer and longer-lasting chains of resources. Writing, cartography, and celestial navigation made up part of the technology of translation for 18th-century French explorers; so too did muskets, cannons, and square-rigged ships. Latour's account of 'modernity'—though he explicitly forswears the very notion—is one of proliferations of translating technologies, and the longer and longer networks that result. If there is an essence of 'modern' technology, it surely has this characteristic: that it is more deeply intertangled and has more components than anything seen before. But, Latour insists, this is a difference of degree, not of kind. Translations have been the business of human culture from the Tower of Babel on forward, and the proliferation of them in the modern world is nothing new, in essence. This is a point upon which Latour departs from many technology theorists who hold that modern technology is oppressive, totalizing, or one-dimensional.
The intersection of these ideas yields interesting questions; in particular, Heidegger's classic examination of the essence of technology, "The Question Concerning Technology" (1953/1993), could be seen as a translation-oriented approach, with Heidegger's concept of "Enframing" as the master translation, in which human agency is itself translated into a means-and-ends rationality. But Latour rejects Heidegger's conclusions on anti-essentialist grounds (Latour 1993, p. 66), claiming that Heidegger grants far too much power to "pure" instrumental rationality (which Latour characterizes as a myth of modernity), and that actual practice is far more complex than essentialist philosophy admits. Far from an essence of technology to which we have succumbed, he writes,

The depth of our ignorance about techniques is unfathomable. We are not even able to count their number, nor can we tell whether they exist as objects or assemblies or as so many sequences of skilled actions. (1999, p. 185)

The alignment of networks and the maintenance of translations is difficult work, Latour argues, but it applies across all facets of culture and society: "[i]t is no more and no less difficult to interest a group in the fabrication of a vaccine than to interest the wind in the fabrication of bread" (1987, p. 129). In his extended narrative on the aborted development of Aramis, an ambitious new rapid-transit system in Paris, Latour waxes eloquent about the difference between the actual and the potential:

The enormous hundred-year-old technological monsters [of the Paris metro] are not more real than the four-year-old Aramis is unreal: they all need allies, friends, long chains of translators. There's no inertia, no irreversibility; there's no autonomy to keep them alive.
Behind these three words from the philosophy of technologies, words inspired by sheer cowardice, there is the ongoing work of coupling and uncoupling engines and cars, the work of local officials and engineers, strikes and customers. (1996, p. 86)

Ongoing work is what sustains technosocial systems over time, what makes them necessarily collectives of both human and non-human actors, and what makes them complex (if not genuinely chaotic), thereby eluding essentialist reduction. This is what makes "translation" a better watchword than "actor-network." But remember that translation is fraught with power dynamics; the business of arranging the world into longer chains of mobilized (and therefore transformed) actors exacts a price.

Standardization and the Tower of Babel

In the sociology of translation, the key dynamic in the extension and sustenance of technosocial networks/systems is the translation of heterogeneous and complex processes and articulations into seemingly simple "black boxes" which can be effectively treated as single, stable components. Callon wrote that "the process of punctualisation thus converts an entire network into a single point or node in another network" (Callon 1991, p. 153). It remains possible to open these black boxes and reveal the complex details within, but what makes for durable (or, by extension, "irreversible") associations is the extent to which we gloss over their internals and treat a whole sub-process or sub-assembly as a single object. Latour expresses it by making a distinction in his vocabulary between "intermediaries" and "mediations" (Latour 1996, p. 219; 1999, p. 307). Intermediaries, which appear as actors, are neat black boxes; mediations, which appear as processes, are open, complex, and irreducible to a particular role or function. The two terms are different aspects of the overall process of articulation, viewable as if from above and from below.
The process of making complex details conform to stable black boxes which can then be combined and exchanged is precisely that of standardization—a Janus-faced concept that has been the travelling companion of translation since the prototypical technological project, the Tower of Babel.5 The double loop of translation into black boxes, standardization, and subsequent translations has a particularly interesting implication for technology: it introduces (or reveals) a semiotic character to the technical. All technology—on this view—is information technology, or, to put it in the converse, information technology is the paradigm for all consideration of technology.

5. See Bowker & Star's 1999 Sorting Things Out: Classification and Its Consequences for a treatment of this dynamic.

This view of technology puts it firmly in a linguistic frame, rather than, say, an 'economic' one concerned with commodification, or a 'political' one concerned with domination. It is a view very much influenced by Latour's explication of the "circulating reference" of signs and things. It is (or should be) recognizably McLuhanesque, insofar as it once again puts mediation first. It is a view which puts the history of technology in a particular light: the material rendering and transformation of any artifact is inseparable from its symbolic renderings and transformations. Technology is, in this light, a language of things. This is but one way to look at technology, but it is one which I think has much to offer to the present study. Technology is fundamentally and essentially about translation: the symbolic and material rendering of something into something else. Note that the symbolic is the primary term here, the material is secondary; this makes me an idealist and not a materialist, I suppose, but I mean it this way: technologies are symbolic means of re-ordering the world; in this sense they are just like language.
THE MECHANICS OF TEXT

Lewis Mumford has suggested that the clock preceded the printing press in order of influence on the mechanization of society. But Mumford takes no account of the phonetic alphabet as the technology that had made possible the visual and uniform fragmentation of time.
- Marshall McLuhan, Understanding Media

The view of technology I am elaborating here puts the development of the phonetic alphabet as the defining technology—at least the defining technology of the Western cultural tradition I can claim to inherit. Language, writing, and standardization of print are all advances in terms of greater and greater translatability; the phonetic alphabet is itself the longest-lived and farthest-reaching of all such technologies. Since its original development by the Phoenicians in the second millennium BC, it predates most living languages, has spanned the lifetimes of any number of particular writing systems, and is by far the most influential standardizing principle in Western history. The alphabet underlies all our most important machines: from books and clocks to computers and networks (McLuhan 1962; 1964). A theory of technology that locates its essence in translation—that translatability is the telos of all technology—is in principle alphabetic, in the various senses that McLuhan outlined: the alphabetic paradigm leads to systematization, organization, ordering schemes. The alphabet is the prototype for standardization: for interchangeable parts and mass production. The alphabet anticipates not just literacy and printing and empire, but mathematics, algebra, mechanization, the industrial revolution, the scientific revolution, and, par excellence, the information revolution.

Only alphabetic cultures have ever mastered lineal sequences as pervasive forms of psychic and social organization.
The breaking up of every kind of experience into uniform units in order to produce faster action and change of form (applied knowledge) has been the secret of Western power over man and nature alike.... Civilization is built on literacy because literacy is a uniform processing of a culture by a visual sense extended in space and time by the alphabet. (McLuhan 1964, pp. 85-86)

What McLuhan realized so very early on was that all technology is information technology; by extension, digital technology is the epitome of technology, because digital technology makes the relationship between texts and machines real in real time. I would like to take a moment to explain what that means. The equation of texts and machines is more than just a convenient metaphor. I hope to show that it is (has always been) 'literally' true, and that this becomes clearer and clearer with the development of digital technology. I have argued that technologies are essentially about translation—about the symbolic and material rendering of something into something else. This is true of hammers and pillowcases and tea cozies; but it is especially true (or, more accurately, it is especially apparent) of technologies of representation. In late alphabetic culture, we have developed an enormously powerful toolkit of representational technologies: from narrative, we have expanded the repertoire to include accounts, algorithms, arguments, articles, equations, mappings, proofs, tables, theorems, theories, transactions, and so forth. All of these technologies are representational, obviously, but to put it more forcefully, they are all technologies of translation; they are all machines for capturing and changing the rendition of the world in some measure.
We don't think of them as machines because they operate relatively 'quietly'; in our post-industrial imagination, steam power is still our paradigmatic case of mechanization, despite this particular technology's relatively brief lifespan! But perhaps the old trope of the "mechanistic universe" is giving way to one of a textual universe. In the latter part of the 20th century, biologists began to recognize technologies for translation in the mechanisms of cells, and the field of bioinformatics has sprung up around this. Here is another order of information-rendering machines, built of proteins instead of ink or silicon, but recognizable if not yet/quite interpretable.6

Language, writing, and standardization (and therefore mechanization) of print are all advances in terms of greater and greater translatability. Numbers, mathematics, and especially algebra are enormously powerful technologies of translation. For example, trigonometry is the translation of the idea of a circle into a number of relationships between parts of triangles such that both the circle and the triangle can be seen and related in new ways—facilitating the extension of technosocial networks of greater extent (quite literally to the moon and back).

On 'Abstraction'

I want to take a moment here to address the popular notion that information technologies lead to (or are achieved by) greater and greater abstraction. Jean Lave (1988, p. 40ff; Lave & Wenger 1991, p. 104) and Bruno Latour have gone to lengths to directly warn against this simplistic concept. Rather, the concept of translation renders abstract and concrete as different languages; it does not set these up in an absolute hierarchy; to do so is to revert to the old vertical structuralist logic of signifier/signified that I want specifically to avoid. Abstraction is a dangerous metaphor, as Latour notes:

The concrete work of making abstractions is fully studiable; however, if it becomes some mysterious feature going on in the mind then forget it, no one will ever have access to it. This confusion between the refined product and the concrete refining work is easy to clarify by using the substantive "abstraction" and never the adjective or the adverb. (Latour 1987, p. 241)

Technologies like the alphabet and the computer don't work because of abstraction; if anything, they are effective because of their greater concreteness. It can be argued that the turning point in the development of the digital computer, bringing together the thinking of Alan Turing and George Boole, was when Claude Shannon finally achieved a sufficiently concrete means of representing logical relationships. That there exists a pervasive fetishism of abstraction, especially in technoscience, is not to be forgotten, however—to the point where Turkle & Papert (1991) were led to argue for a "revaluation of the concrete." The concrete was of course there all along, despite the mythology of 'abstraction.' The point I want to underscore here is that the dynamics of translation in alphabetic culture are not between brute concreteness and fluid abstractions, but rather between different forms of concreteness, lending themselves to different kinds of practices.

6. The qualifier here is not intended to soft-pedal a technological determinism, nor to problematize our relative proximity to instrumental power over the genome. Interpretation is always not yet/quite possible. Genomics and bioinformatics have shifted this dynamic to new terrain, but I am not at all sure that the core challenge is so different from interpreting written literature. Haraway's extensive (1997) critique of "gene fetishism" makes the same kind of contextualist argument against literalism that literary critics have mounted against determinate formalism. See e.g. Fish 1980.
Digital translations

Different media are variously translatable; different genres are variable in their affordances too. Written text has been the very fount of translatability (in all senses of the word); image less so, and performance less still (hence the oft-quoted—and as oft-misattributed—"talking about art is like dancing about architecture"). The alphabet lends itself to translation by reducing the material of representation to a couple of dozen glyphs that can be assembled and reassembled in a famously infinite number of ways. Digital computing extends this reduction by limiting the material of representation to just two states. The result is that 'everything' is renderable digitally (hence, in a crude sense, "multimedia"): a step beyond what could be accomplished with writing, which can only practically render what could be spoken or counted. But the extent to which text, image, and performance can be "digitized" is dependent upon the facility with which we can interpret (that is, translate) these as digital patterns: arithmetic systems were the first to be digitized, back in the 1940s; text-based media came next, and have been so successful as to suggest a revolution in reading, writing, and publishing. Still images are now in widespread digital form; music (both as digital recording and as digital notations like MIDI) has presented few challenges—the digital representation of music is by now the default format. Moving image (video) has been technically difficult (owing largely to the sheer volume of bits that it requires), and the digitization of performative genres like dance has been the least widespread digital form, though certainly not untried.

What is Digital?

"Digital" simply refers to digits, and what are digits but fingers? Digital literally means counting on your fingers, assigning a finger to each thing counted. In the sense of a computer, this is exactly true, except the computer has only one finger, so it counts in 1s and 0s. A commonly encountered criticism of computers and digital technology is that everything is reduced to an either-or distinction. This is as vacuous as saying that everything in English literature is reduced to 26 letters. But this simplistic critique is based on a misplaced correspondence theory of meaning. If we remember that meaning is in the interpretation, rather than the representation, we quickly get beyond this into more interesting terrain. But even mechanically, we make up more complex representations than one/zero or yes/no by collecting bits into larger patterns, or in establishing prior context—exactly as we do with words and sentences. As the layerings proliferate, we gain more and more expressive power, and more demands are made on the interpretive system—just like with written literature. And, as we will see in the concept of "late binding," interpretation can be postponed almost indefinitely—just like in post-structuralism!

What is not so obvious here is whether a particular medium lends itself to digital representation as a result of the material affordances of that medium versus our ways of thinking about particular genres. For instance, in the past few years, academic journal articles have been largely re-realized in digital rendition and distributed via digital networks, to the point of near ubiquity today. But the relative acceptance of such a shift in this particular genre is in marked contrast to the well-hyped 'e-book' idea, which has promised to move novels and other popular literature into digital formats. That the latter has failed to appear on any significant scale has nothing to do with the material aspects of the medium—novels are composed of the same stuff as journal articles—the difference is cultural: of genres and practices.
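The point made in the "What is Digital?" sidebar—that meaning lies in the interpretation rather than the representation—can be sketched in a few lines of Python. This is an illustrative aside of my own, not anything drawn from the systems discussed in this chapter: the same sixteen-bit pattern is read first as a number, then as two alphabetic characters.

```python
# One pattern of 1s and 0s; two interpretations. The variable names are
# illustrative only.
bits = "01000001" * 2  # sixteen binary digits: 0100000101000001

as_integer = int(bits, 2)  # interpreted as a single binary number
as_text = "".join(
    chr(int(bits[i:i + 8], 2))  # interpreted eight bits at a time, as ASCII
    for i in range(0, len(bits), 8)
)

print(as_integer)  # 16705
print(as_text)     # AA
```

Nothing in the bit pattern itself decides between 16705 and "AA"; the layering of bits into bytes, and bytes into characters, is supplied entirely by interpretive convention—exactly the sidebar's point about collecting bits into larger patterns.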
Genres, of course, do not operate only in the space of literature; they also form the substructure of technological practice. What is interesting is the examination of how both literary genres and technical ones wax, wane, transform, and persist variously in response to the dynamics of media ecology. But despite the endless (and endlessly interesting) vicissitudes of genre, digital representation implies that everything is translatable into everything else, since the underlying representational mechanisms of all kinds of digital media are common and as simple as possible. The poststructuralist credo that there is nothing outside the text becomes true in a very literal sense: everything becomes a 'text' because everything is, in a literal sense, a text; the genome is but one famous example. The translation of phenomena to textual renderings (which can then be further translated) is the core of what we call science, says Latour; his detailed description of the life sciences shows a complex but systematic process of movement along the continuum "between signs and things" (Latour 1999). By this reading, science is a particular form of reading and writing. Indeed, to anyone's eye, science has certainly spawned particular forms of reading and writing; most 'modern' document genres, whether they pertain to matters of biochemistry or to journalism, owe their existence to the kinds of translation processes that have been developed in the sciences. Conversely, documents themselves are a very interesting kind of technology, and they do a very particular kind of work (see John Seely Brown & Paul Duguid's 1996 "The Social Life of Documents"). Documents operate at a different level than texts per se, in that they are technologies that operate on audiences, rather than on individual readers and writers. And yet, in every document is a text doing its own work.
The very idea that there can be numerous layerings of text and document (a complex which we commonly call "discourse") underscores the notion that there are large-scale networks of meaning-making at work. The semiotic analogy is doubled and returned: not only do we have tools operating as signs, we have signs that act like tools as the machinery of text meets the semiotics of publication.

Software

Thanks to computers we now know that there are only differences of degree between matter and texts.
- Bruno Latour, Aramis

In their book The Machine at Work: Technology, Work, and Organization, Keith Grint and Steve Woolgar make the first explicit equation of texts and machines that I have been able to find in the sociological literature (though the idea has older roots in literary criticism; see Landow; Aarseth; etc.). What falls out of this equation is that the analogy of reading and writing becomes possible with reference to machines, rendering machines "hermeneutically indeterminate" (Grint & Woolgar 1997, p. 70). This is all very well, and no doubt lends some valuable light to the study of various technologies. But Grint and Woolgar decline to take the next step: they persist in talking (as do most technology theorists) of machines in the "steam-engine" sense: as physical mechanisms, made out of hard stuff and powered by shovelling coal or at least plugging in the power cord. Although computing technology figures in Grint and Woolgar's analysis, their attention sticks with the plastic-and-metal object on the desk. It does not venture inside, to software. Even while making a case for the absolute blurriness of the line between texts and actions, Latour too holds tight to the conventional divide between steel and words:

We knew perfectly well that a black box is never really obscure but that it is always covered over with signs.
We knew that the engineers had to organize their tasks and learn to manage the division of their labour by means of millions of dossiers, contracts, and plans, so that things wouldn't all be done in a slap-dash manner. Nothing has a bigger appetite for paper than a technology of steel and motor oil.... Every machine is scarified, as it were, by a library of traces and schemas. (Latour 1996, p. 222)

If there's anything that's been shown in a half century of computing, it is that machines are not reliant on steam and steel. Machines are pattern-processors. That one particular pattern is in steel and another in fabric and another in bits is inessential. Where Latour and Grint & Woolgar neglect to go is precisely where I do want to go: the machine is text—and this is not just an analogy that makes literary techniques applicable. Machines are literature, and software makes this clear. This may not be yet/quite apparent in the public imagination, owing largely to our collective hardware, or gadget, fetishism. But the argument for placing the focus of our technological inquiries at the software level rather than at the hardware level is very strong. Pioneering computer scientist Edsger Dijkstra wrote, in 1989:

What is a program? Several answers are possible. We can view the program as what turns the general-purpose computer into a special-purpose symbol manipulator, and it does so without the need to change a single wire... I prefer to describe it the other way round. The program is an abstract symbol manipulator which can be turned into a concrete one by supplying a computer to it. (Dijkstra 1989, p.
1401 [italics added])

The efficacy of this perspective has been apparent within computer science since the late 1950s, and in particular, since the advent of John McCarthy's computer language Lisp (McCarthy 1960), hailed as doing "for programming something like what Euclid did for geometry" (Graham 2001).7 The significance of Lisp and the ways of thinking Lisp ushered in have been obscured by the popular rendering of Lisp as "an AI language" and therefore subsumed within the quest for artificial intelligence. But Lisp's connection with AI is an "accident of history" (Graham 1993), one which I will not dwell on here. What I do want to foreground here is the idea of textual constructions—programs and programming languages—acting as machines in their own right. Of course it is possible to quibble with Dijkstra's formulation and strike a hard materialist stance,8 insisting that the electronic circuitry is the machine and that programs are "superstructure." What McCarthy's Lisp provides is a solid example in which it clearly makes more sense to view it the other way around: that the real machine is the symbolic machinery of the language and the texts composed in that language, and the details of the underlying hardware are just that: details. A half-century of Lisp9 provides considerable evidence that the details steadily decrease in importance. In 1958, when the first implementation was made, it was of course a matter of enormous resourcefulness and innovation on the part of the MIT Artificial Intelligence lab, and so it remained, in rarified academic circles well into the 1970s, when implementations began to proliferate on various hardware and as Lisp became core computer science curriculum at MIT and other institutions (see Steele & Gabriel 1993; Abelson & Sussman 1996).

7. Alan Kay called McCarthy's contribution the "Maxwell's equations of software" (Kay & Feldman 2004).
Today, downloading and installing a Lisp implementation for a personal computer is a matter of a few moments' work. But more importantly, as Paul Graham's (2001) paper "The Roots of Lisp" demonstrates for modern readers (as it is a reworking of McCarthy's original exegesis), Lisp's core simplicity means it doesn't require a 'machine' at all. Graham's paper (it is important to dwell for a moment on the word "paper" here) explains how, in the definition of a dozen or so simple functions—about a page of code—it is possible to create a system which is a formal, functional implementation of itself. The machinery of Lisp works as well in the act of reading as it does in digital circuitry.

8. Friedrich Kittler famously made this move in his (1995) essay "There is No Software"—ignorant of both Dijkstra and McCarthy, as far as I can tell—in which he argues that everything is indeed reducible to voltage changes in the circuitry. Kittler's argument is clever, but I don't find that it actually sheds any light on anything. It is rather reductio ad absurdum, leaving us with no better grasp of the significance of software—or any means of critically engaging with it—than a study of letterforms offers to the study of English literature.

9. That Lisp has been around for half a century provides us with a near-unique perspective on the evolution of computing cultures; hence my argument for a cultural history of computing.
The simplicity of Lisp has to do with its straightforwardly alphabetic nature. Lisp's creators were mathematicians, but Lisp is about symbol manipulation rather than arithmetic calculation; McCarthy used mathematical formalisms as languages.

The significance of digital computing—that which systems such as McCarthy's Lisp make so clear—is not how computers have been brought to bear on various complex information processing applications (calculating taxes, computing ballistics trajectories, etc.). Nor am I about to claim that the digital revolution brought forth AI and electronic autopoiesis, nor will I allude to any other such Frankensteinian/Promethean narrative. The far greater significance of digital computing is in the use of alphabetic language as a kind of "bifurcation point" (to borrow the language of systems theory), at which a different level of order emerges from the existing substrate. The advent of digital computation marks the point at which the text/machine equation becomes literally and manifestly real. Invoking systems theory in this way implies that the elements of this shift are largely internal to the system; it was not the advent of the semiconductor, or the pocket protector, or any such isolable, external factor that led to this shift. Rather it has more to do with the ongoing configuration, reconfiguration, and 'refactoring' of alphabetic language. McCarthy's original innovation is described in a conference paper (1960) of thirty-odd pages. In it, there are no circuit diagrams or instructions for constructing electronic devices. Rather, it is a concise work of symbolic logic, describing the elements of a formal notation for describing recursive systems which are, interestingly, capable of describing themselves.
Lisp is among the oldest programming languages still in use today; only FORTRAN, IBM's language for numerical calculation (still used in scientific computing applications), is older. The structure and syntax of Lisp are deceptively simple: everything is lists, written with parentheses. A function called car provides a mechanism for extracting the first item from a list; recursively applying such a function allows one to traverse lists (and lists of lists, to indefinite depths of nesting). Other core functions provide means of combining the evaluations of terms and for making conditional statements. And then, almost literally, everything else is created from these primitive building blocks, by layering and layering larger and larger evaluation structures.

McCarthy's work on the language in the late 1950s largely preceded any working implementation: "I decided to write a paper describing Lisp both as a programming language and as a formalism for doing recursive function theory" (1981). McCarthy was after a realization of Alan Turing's formal theory of computability, and Lisp was a success in this regard; it was a practical implementation of the use of recursive functions as an equivalent to the Turing machine (which, although logically complete, is not a practical system—see Graham 2001, n8). Lisp's legacy is thus not in electronics so much as in logic and formal systems—in language.

The Semiotics of Standardization

It would be trite to say that this innovation had been in the works for some time.
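The kind of layering described above can be suggested with a sketch. The following Python fragment is a loose, illustrative rendering of my own—not McCarthy's notation, and simplified well past his paper—but it shows how a handful of primitive forms (quote, atom, eq, car, cdr, cons, cond, the operators of McCarthy's 1960 paper) suffice to evaluate Lisp-like expressions written as nested lists:

```python
def eval_lisp(expr, env=None):
    """Evaluate a tiny Lisp-like expression: atoms are strings, forms are
    Python lists. An illustrative sketch, not a faithful implementation."""
    env = env or {}
    if isinstance(expr, str):                # an atom: look it up
        return env[expr]
    op = expr[0]
    if op == "quote":                        # (quote x) -> x, unevaluated
        return expr[1]
    if op == "atom":                         # (atom x) -> t if x is atomic
        return "t" if isinstance(eval_lisp(expr[1], env), str) else []
    if op == "eq":                           # (eq x y) -> t if same atom
        a, b = eval_lisp(expr[1], env), eval_lisp(expr[2], env)
        return "t" if a == b and isinstance(a, str) else []
    if op == "car":                          # first item of a list
        return eval_lisp(expr[1], env)[0]
    if op == "cdr":                          # the rest of the list
        return eval_lisp(expr[1], env)[1:]
    if op == "cons":                         # prepend an item to a list
        return [eval_lisp(expr[1], env)] + eval_lisp(expr[2], env)
    if op == "cond":                         # clauses: ((test value) ...)
        for test, value in expr[1:]:
            if eval_lisp(test, env) == "t":
                return eval_lisp(value, env)
    raise ValueError("unknown form: %r" % expr)

# (car (cdr (quote (a b c)))) -> b : recursive traversal from two primitives
print(eval_lisp(["car", ["cdr", ["quote", ["a", "b", "c"]]]]))
```

The point of the sketch is the one made in the text: the definitions fit on a page and work as well in the act of reading as in circuitry; everything larger is built by layering these evaluation structures.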
I have mentioned Alan Turing, but other significant contributions—such as those of logician George Boole and the famous team of Charles Babbage and Augusta Ada—were required in order for the story to unfold. These three are key to this telling of the story precisely because their contributions predate any workable physical instantiation of the machine/text that their names would come to symbolize. Babbage is particularly interesting in this respect, precisely because his works were not practically realized despite his best efforts. One of the most intriguing subplots in the history of computing is that it was indeed practically impossible, in Babbage's day, to create a calculating machine of the complexity he required, owing to the relative lack of manufacturing sophistication. Specifically, it was not yet possible, in the mid-19th century, to manufacture the thousands of gears required for Babbage's "Difference" and "Analytical" engines with sufficient precision—that is, to sufficiently close tolerances—that such a machine could actually run. What was required was the advent of standardization of gear cutting and manufacturing apparatus, a fact not lost on Babbage, who published significant works in this area. Interestingly, the standardization of gear cutting (and also screw threads) was largely pioneered by Joseph Whitworth, an engineer who had worked with Babbage, but the required precision was ultimately achieved in the 20th century, not the 19th. It has thus become possible—even relatively economical—to create working instances of Babbage's machines (indeed the Science Museum in London, England has done so), given modern-day manufacturing (Doyle 1995). In considering this side-story, I want to draw the focus again to the symbolic rather than the material.
The difference between gear cutting in the 19th century and in the 20th isn't merely that the tolerances are finer; rather, the key is standardization, mass production, and the translation of the artifact from individual incarnation to its status as a commodity. What happens when you standardize production is that you shift the artifact semiotically. It stops being an isolated signifier of its own, and starts being a neutral component—that is, a "black box"—that can be assembled into larger systems, just like the letters of the alphabet. The glyph itself, like the gear, stops being a thing-in-itself and starts being part of a larger semiotic apparatus. This, according to Havelock (1980), is precisely what happened when the Greeks began to distinguish between consonants and vowels in written language, thereby making a more analytically complete mapping of morphemes to glyphs. The result was that the individual letters became unimportant in comparison to the words, sentences, and paragraphs that were built out of them—this is evident in contrast with, for instance, the letter-oriented focus of the Kabbalistic tradition, in which significant meaning is vested in the individual letters themselves (Drucker 1995, p. 129). This is also, arguably, paralleled in the evolutionary shift from single-celled to multi-celled organisms, in which the activities and identity of individual cells become subsumed in the structure of the larger organism. Standardization of alphabets, of currencies, of machinery, even of living structures, all effect this kind of shift towards (analytically) higher-level assemblages of order and agency. There is clearly a moral question which emerges here, since we are not merely speaking of dumb objects, but of conscious subjects too.
Bowker & Star problematize standardization thusly:

We know from a long and gory history of attempts to standardize information systems that standards do not remain standard for very long, and that one person's standards is another's confusion and mess [...] We need a richer vocabulary than that of standardization or formalization with which to characterize the heterogeneity and the procedural nature of information ecologies. (Bowker & Star 1999, p. 293)

Bowker & Star's plea, however, comes at the end of their excellent book on "categorization and its consequences" and not at the beginning, and so we are left with no more than an opening made toward the study of process, articulation, negotiation, and translation. These dynamics are invoked in a clearly political mode, for these are the dynamics of standardization and classification themselves. Compare Donna Haraway's description of the digitization of the genome and the subsequent emergence of the field of bioinformatics:

Yet, something peculiar happened to the stable, family-loving, Mendelian gene when it passed into a database, where it has more in common with LANDSAT photographs, Geographical Information Systems, international seed banks, and the World Bank than with T.H. Morgan's fruitflies at Columbia University in the 1910s or UNESCO's populations of the 1950s. Banking and mapping seems to be the name of the genetic game at an accelerating pace since the 1970s, in the corporatization of biology to make it fit for the New World Order, Inc. (Haraway 1997, p. 244)

What is at issue, it seems to me, is not whether standardization is good or bad (or any similarly framed substantivist argument), but, as Haraway points out: What counts? For whom? At what cost?

SIMULATION AS INTERPRETATION

"To know the world, one must construct it."
- Cesare Pavese, quoted by Alan Kay

The preceding discussion of technology and translation, machine and text, is intended to create a frame for a topic which I will introduce here, but which I want to revisit a number of times in the pages that follow: simulation.

Simulation is a paradigmatic application of information technology, something often hidden by our tendency to instrumental reason, but which the sociology of translation helps to illuminate. I mean this in the sense that simulation can be used as a general motif for viewing and understanding a wide variety of computing applications, from mundane 'productivity applications' to the more stereotypical systems simulations (weather, fluid dynamics, etc.). Alan Kay and Adele Goldberg put it generally and eloquently:

Every message is, in one sense or another, a simulation of some idea. It may be representational or abstract, isolated or in context, static or dynamic. The essence of a medium is very much dependent on the way messages are embedded, changed, and viewed. Although digital computers were originally designed to do arithmetic computation, the ability to simulate the details of any descriptive model means that the computer, viewed as a medium itself, can be all other media if the embedding and viewing methods are sufficiently well provided. (Kay & Goldberg 1976)

What's interesting and compelling about computing is not the extent to which models, simulations, representations are true, real, accurate, etc., but the extent to which they fit—this is the lesson of Weizenbaum's infamous ELIZA, the uncomfortable thesis of Baudrillard's Precession of Simulacra, and the conclusion of a growing body of literature on virtual reality. It is also, to step back a bit, one of the central dynamics of narrative, especially in the novel and myriad other literary forms. "Simulation is the hermeneutic Other of narratives; the alternative mode of discourse," writes Espen Aarseth (2004).
If effect is the important point, then this is by definition an anti-formalist argument. But simulation is not merely reducible to surface and appearances at the expense of the deeper 'reality'—it reflects rather the deeper aspect of Baudrillard's "simulacra"—precisely where we live in today's world. But where Baudrillard's take is bitter and ironic, I have always been fond of Paul Ricoeur's hermeneutic version:

Ultimately, what I appropriate is a proposed world. The latter is not behind the text, as a hidden intention would be, but in front of it, as that which the work unfolds, discovers, reveals. Henceforth, to understand is to understand oneself in front of the text. (Ricoeur 1991b, p. 88)

Madeleine Grumet similarly works with Ricoeur's framing in her discussion of theatre, the "enactment of possible worlds;"

...performed in a middle space that is owned by neither author nor reader. Constructed from their experience and dreams, this liminal space cannot be reduced to the specifications of either the author's or the reader's world. [...] Performance simultaneously confirms and undermines the text. [...] Mimesis tumbles into transformation, and meaning, taken from the text, rescued from the underworld of negotiation, becomes the very ground of action. (Grumet 1988, p. 149)

Rather than some sinister command-and-control reductivism, this is the space of simulation. If we go further, and look at simulation—model building—as a hallmark of science and the modern world, Ricoeur's stance presents itself as a refreshing alternative to the two horns of objectivism and relativism (Bernstein 1983). We needn't get ourselves tied in knots about our access to 'reality,' since our business—in science, technology, literature, politics—is fundamentally about invention and not discovery, and we can avoid the spectre of relativism, because it is the world which we are building, not just arbitrary constructions.
"The point is to cast our lot for some ways of life and not others," Haraway admonishes. Thus, it matters greatly which constructions we choose; the process of creating them and deciding upon whether or not to embrace them is fundamentally political (Latour 1999). It is not so much dominated by issues of power as it is the very crucible wherein power is exercised and contested.

Simulation—that is, model building—is essentially hermeneutic. Its process is that of the hermeneutic circle, the merging of the horizons of modeller and world, the part and the whole, and is determined by R.G. Collingwood's "logic of question and answer" (Gadamer 1975/1999, p. 362ff). The model—which is constructed, and therefore concrete—poses the questions to which the 'world'—ultimately inaccessible and therefore uncomfortably 'abstract'—is the answer. It is not, thus, analytic or reductive, nor is it definitive; rather it is, ideally at least, dialogic.

Writing of the dynamic between simulation and narrative in games, Espen Aarseth says,

If you want to understand a phenomenon, it is not enough to be a good storyteller, you need to understand how the parts work together, and the best way to do that is to build a simulation. Through the hermeneutic circle of simulation/construction, testing, modification, more testing, and so forth, the model is moved closer to the simulated phenomenon. (Aarseth 2004)

This is decidedly not to say that the model is ever actually complete, but that our "fore-conception of completeness" (Gadamer 1975/1999, p. 370) is a necessary precondition of participation and engagement.

10. Latour's "The Promises of Constructivism" (2003) makes a similarly positive argument.
Once again, I want to avoid the vertical correspondence logic of classical structuralism (signifier/signified; model/reality) and instead pursue a vision wherein the elaboration of the model is its own end; it succeeds or fails not by being more or less faithful to 'reality', but by being better connected (to invoke Latour and Callon's network model once again). There is undoubtedly enormous danger lurking in the seduction of the model, and we undoubtedly forget again and again that the map is not the terrain, becoming literalists once again. That this is a danger does not imply that we should avoid making models, though, just that we must strive to avoid taking our fetishes and "factishes" (Latour 1999) too literally. Writing on the mapping of the genome, Haraway notes:

Geographical maps can, but need not, be fetishes in the sense of appearing to be nontropic, metaphor-free representations, more or less accurate, of previously existing, "real" properties of a world that are waiting patiently to be plotted. Instead, maps are models of worlds crafted through and for specific practices of intervening and particular ways of life.... Fetishized maps appear to be about things-in-themselves; nonfetishized maps index cartographies of struggle or, more broadly, cartographies of noninnocent practice, where everything does not have to be a struggle. (Haraway 1997, pp. 136-137)

It is important to remember that Haraway's argument is not against cartography—it is against the temptation to think that we and our constructions are somehow pure or innocent. There is no shortage of literature warning of the dangers of simulation; Kay himself wrote that "as with language, the computer user has a strong motivation to emphasize the similarity between simulation and experience and to ignore the great distances that symbols impose between models and the real world" (Kay 1977, p. 135).
But to contrast simulation with something like "local knowledge," as Bowers (2000) does, is to badly miss the point and to essentialize (and caricature) both simulation and local knowledge. Local knowledge is mediated knowledge too. "Situated" knowledge is nothing if not mediated. We will return to simulation, at length, later.

THE ETHICS OF TRANSLATION

We are responsible for boundaries; we are they.
- Haraway, Simians, Cyborgs, and Women

The new media and technologies by which we amplify and extend ourselves constitute huge collective surgery carried out on the social body with complete disregard for antiseptics.
- McLuhan, Understanding Media

The foundation of my argument is that human culture is fundamentally and essentially technological—that is, technologically mediated. It makes no sense to attempt to isolate what is 'human' from what is 'technical'. As Latour has taken great pains to point out (esp. 1993), the attempt to isolate and purify these—that is, to come up with a society sans technology or a technology sans society—has been at best fruitless. And yet it is a powerful temptation, as we have inherited a weighty intellectual tradition devoted to just such a process of purification. The words we use readily betray it: culture, society, technique. I begin to think that it rarely makes sense to talk of culture—or society—at all; Latour's refocus on the "collective" of humans and nonhumans is really the only sane construction.

Back to the Tower of Babel

Given the core notion of technology as translation—as delegation, as in one of Latour's rich metaphorical turns—the means of translating or transforming the world is, in a trivial sense, about power. But if the fundamental transformation is on the symbolic level rather than the physical, then this is even more important, for we are speaking of the power to shape people's reality, and not just their physical landscape.
Technology is thus the medium of power/knowledge par excellence, for it simultaneously establishes and enforces power/knowledge structures (by very definition) and also provides the means for their subversion. This is politics. Haraway, again, has said it most eloquently:

In short, technoscience is about worldly, materialized, signifying, and significant power. That power is more, less, and other than reduction, commodification, resourcing, determinism, or any other of the scolding words that much critical theory would force on the practitioners of science studies, including cyborg anthropologists. (Haraway 1997, p. 51)

Again, the admonition is to stop worrying about purification of essences, about what the world would be like without the polluting effects of technology, pining for an unmediated reality in some nostalgic reminiscence of a simpler age. Asymmetry in theory and practice is the order of the day—asymmetry of ways of seeing and drawing the world, asymmetry of access to ways and means, asymmetry of expression. But this isn't to be avoided or solved or redeemed in Gordian-knot fashion. Rather, it is to be confronted. Something is "lost in translation" because translation is always interpretation. The only remedy is further interpretation, the ongoingness of the dialogue. So it is with all things political.

Our responsibility to technology

Located in the belly of the monster, I find the discourses of natural harmony, the nonalien, and purity unsalvageable for understanding our genealogy in the New World Order, Inc. Like it or not, I was born kin to Pu[239] and to transgenic, transspecific, and transported creatures of all kinds; that is the family for which and to whom my people are accountable.
- Donna Haraway, Modest_Witness

When Everything Looks Like a Nail

The danger of translations can be simply recognized in the old chestnut: when all you have is a hammer, everything looks like a nail.
McLuhan's saying, "we shape our tools and thereafter our tools shape us," is a more formal articulation of this basic point, which is interestingly absent from Heidegger, even though it is a commonplace for us. Heidegger seems genuinely worried that everything in the world (including human beings) has begun to look like nails, but he downplays—to the peril of his argument—the hammer's generative role in this. Larry Wall, developer of the immensely popular open-source programming language Perl (called the "duct tape of the Internet"), made the following insightful comment:

You've all heard the saying: if all you have is a hammer, everything starts to look like a nail. That's actually a Modernistic saying. The postmodern version is: If all you have is duct tape, everything starts to look like a duct. Right. When's the last time you used duct tape on a duct? (Wall 1999)

The challenge, with respect to our fundamental relationship to technology and technological change, and the asymmetries which result, is one of literacies. By "literacy" I do not trivially mean the ability to read and write, but rather the ability to enter into and participate actively in ongoing discourse. If, as I have argued, technology is about the symbolic realm at least as much as the physical, then the importance of literacy is not confined to the written word. It is the matter of participating in the discourses which surround us; discourses of power. Technologies—things—are as much a part of discourse as are words. The task before us, in the early decades of a digitally mediated society, is to sort out the significance of a digitally mediated discourse, and what the implications are for actual practice.

Consider the stakes. Without print literacy—in the sense in which we readily acknowledge it today as being fundamental to democracy and empowerment and social change—print technology would be nothing but an instrument of oppression.
The written word itself, without the concept of a widespread (if not mass) print literacy, represents a terribly asymmetrical technology of domination over the world. But it is this literacy, in the broad, distributed, bottom-up sense of the word, that saves the written word from being a truly oppressive development for humanity. Further, that print literacy is instead taken as the instrument—and the symbol—of liberation speaks to a dynamic not accessible from an examination of the material or cognitive features of reading; rather, literacy as an agent of emancipation—which then translates the written word into an agent of emancipation—operates on the larger socio-cultural level. But at this point in the Western world, "technological literacies" are nowhere near as widely spread in practice as their print-oriented counterparts, nor are they held up symbolically as agents of emancipation and democratic participation.

Fullan and Hargreaves (1996), writing of school reform, say, "Teaching is not just a technical business. It is a moral one too" (p. 18). Technology is not just a technical business either, and the longer we collectively pretend that it is, as we are wont to do as long as we remain inside our instrumentalist frame, the less we are able to grapple with it, politically and morally.

But as instrumental logic blinds us to the political and moral implications of our mediated practices, so too the rhetoric of technological determinism provides a crippling apparatus with which to reach a critical awareness of technology. In the shadow of our yearning for a pure, unmediated past is the parallel idea that technological mediation is leading us down a tragic path 'no longer' of our own choosing. Of late, the tendency is to see digital media as the handmaiden of globalization (e.g., Menzies 1999).
Here the argument is that the rendering (commodification) of all human activity into an easily translatable digital currency is at the expense of "local" culture, of the less privileged, and in the interests only of the corporate sector. It is an easy argument to make, and it borrows much from the connection between literacy and colonialism (e.g., Willinsky 1998). However, where the latter argument succeeds is in a far more sophisticated appreciation of the detail: literacy is certainly an agent of homogenization and indeed domination, but it is also, in countless cases, an agent of differentiation and emancipation (e.g., New London Group 1996).

Note that this is not the same thing as resistance in the sense of resisting the onslaught of print media or computerization. "Resistance" is a troublesome term, because it suggests an "either-or," a one-dimensional problem (as in G.W. Bush's "You are either with us, or you are against us"). Re-shaping, re-direction, and re-figuration, as in Haraway's rich repertoire, are more apt.

I suggest that print or alphabetic literacy is not inherently an instrument of domination precisely because the alphabet is open; in contrast, where state power has controlled who has access to reading and writing, it is not open, and for this very reason, modern democracies enshrine the institutions of mass literacy: public education, freedom of the press, and so on—lest you think my argument is for the perfection of these institutions, I mean here to point to these as shared ideals. But even the curious logic of liberal capitalism seems to realize (on odd days, at least) that in order to be effective, languages and communications must be open—private languages cannot thrive. And in openness is the possibility of refiguration.
A student of mine, Bob Mercer, made this point about the unreflexive conceit of the "end of history":

If consumption in an industrial society is of industrial goods—cars, refrigerators, televisions, computers—what then is consumed in an information society? Information, surely, and some of that information takes the form of ideas. And some of those ideas in turn challenge the consumer society. (Bob Mercer, 2003, "Blogging at the End of History")

Technology, like language, is not the instrument of power; it is the crucible of power, the very setting where the contest for one way of life or another takes place. Feenberg's framing of technology as a site of struggle means that the debate is ongoing; it is not an argument to be won or lost, but one to be continually engaged. In putting the emphasis on technology as translation—of representations, of agency, of apparent worlds—I hope to open up this theoretical arena to what follows, which is an examination of a very particular and very ambitious project to develop a technological infrastructure capable of engagement with high-level philosophical, educational, and political themes.

Chapter 4: Alan Kay's Educational Vision

Alan Kay's project rests upon a number of substantial philosophical and theoretical foundations. An examination of these will be helpful in the analysis of the trajectory of the Dynabook project over the past three decades. The following treatment draws less from the 'primary' documents of the early 1970s than from Kay's own reflection and exegesis—especially that of recent years, which is substantial and which reveals a definite historical self-awareness in Kay's work.

Kay's own sense of his place in history is a theme which emerges repeatedly in his writings, from early grand ambitions of "paradigm shifts" to his more studied reflections on a fledgling digital age in comparison with the advent of print in Europe four or five centuries before.
Throughout, Kay establishes his subject position in a decidedly Romantic mode—to briefly invoke Hayden White's schema (1973) of historical emplotment—but in the first person and of the first order.

I would like here to present an overview of each of the major themes elaborated in Kay's writings and talks. These are, in brief:

1. The vision of computers for children, and the early and foundational influence of Seymour Papert's innovative research with the Logo programming language;
2. Systems design philosophy, drawing on insights borrowed from cell biology and American political history;
3. The Smalltalk language and the object-oriented paradigm in computer science, Kay's most important and lasting technical contribution;
4. The notion that Doing with Images makes Symbols, a phrase which embodies an application of the developmental psychology of Jerome Bruner;
5. Narrative, argumentation, and systems thinking: different modalities for expressing truths about the world;
6. A particular conception of literacy that broadly includes technological mediation as a cultural and historical force.

COMPUTERS, CHILDREN, AND POWERFUL IDEAS: THE FOUNDATIONAL INFLUENCE OF PAPERT

The image of children's meaningful interaction with computers evokes, in the first place, MIT mathematician and computer scientist Seymour Papert, his work with the Logo programming language for children, and his influential writings on the role of computing in education. Papert's research began in the late 1960s with Wally Feurzeig, Danny Bobrow, and Cynthia Solomon at