Estranging Information: Media Art’s Pedagogical Potential in the Age of Information Capitalism

by

Kevin Day

B.F.A., University of British Columbia, 2009
M.F.A., University of British Columbia, 2012

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY in The Faculty of Graduate and Postdoctoral Studies (Curriculum Studies)

THE UNIVERSITY OF BRITISH COLUMBIA (Vancouver)

August 2020

© Kevin Day, 2020

The following individuals certify that they have read, and recommend to the Faculty of Graduate and Postdoctoral Studies for acceptance, the dissertation entitled: Estranging Information: Media Art’s Pedagogical Potential in the Age of Information Capitalism, submitted by Kevin Day in partial fulfillment of the requirements for the degree of PhD in Curriculum Studies.

Examining Committee:

Dr. Anthony Clarke, Professor, Department of Curriculum and Pedagogy, UBC (Co-supervisor)
Dr. Ian Hill, Associate Professor, Department of English Language and Literatures, UBC (Supervisory Committee Member)
Dr. Kedrick James, Associate Professor of Teaching, Department of Language and Literacy, UBC (Supervisory Committee Member)
Dr. William Pinar, Professor, Department of Curriculum and Pedagogy, UBC (University Examiner)
Marina Roy, Associate Professor, Department of Art History, Visual Art and Theory, UBC (University Examiner)

Additional Supervisory Committee Members:

Dr. Sandrine Han, Associate Professor, Department of Curriculum and Pedagogy, UBC (Co-supervisor)

Abstract

This research is a concept-based cultural analysis of media art that engages with the socio-political issues of information and communication technology (ICT) assemblages in today’s information society through media studies, philosophy of technology, and art theory, theorized in conjunction with the pedagogical and critical capacity of visual art, and explored through a close reading of eight case studies of media artworks.
The research begins by articulating the socio-political landscape within which it is situated, one that recognizes the problem of algorithm-facilitated data-mining practices that encode the everyday and exploit users in the big data economy for the further entrenchment of decentralized control. It asserts that media art needs to address digital media by examining the underpinning logic of information within the wider landscape of information capitalism. Guided by a framework that pulls together theories of media and technology and theories of art pedagogy, my research argues that visual art engaged with digital media polemics has the capacity to subvert the normalized and entrenched information-based way of knowing through the tactic of estrangement and its potential to foster ways of knowing otherwise in relation to ICT. To substantiate the argument, my research interrogates the concept of information and positions it as an epistemic model through which one comes to make sense of the world, one that sustains the operation of information capitalism, and precisely that which visual/media art should tackle and question. The research argues against the binary of the knowing subject dominating and abstracting knowable objects inherent in informatics, against the claim that information is capable of adequately and neutrally representing the phenomena of the material world. Instead, it insists on the ‘other’ of information, the embodied contexts and performative materials of noise. Entwined with the theoretical analyses is the examination of several artworks from a diverse group of artists. These are used to explore how the selected artworks – such as hacked search engines, data-blocking devices, performances based on algorithmically-derived user profiles, hand-crafted infographics, 3D models of hidden data farms, and peer-supported exclusive networks – cultivate ways of knowing differently in relation to ICT operations and the epistemic model of information.
Lay Summary

This research examines contemporary artworks that engage with the socio-political issues of information and communication technology (ICT) within today’s information society through media studies, philosophy of technology, and art theory. It is situated within the backdrop of information capitalism that, through the big data economy, encodes the everyday and exploits the average user’s data for the further entrenchment of power. In this socio-political context, the research asserts that media artworks need to tackle information as a worldview through which one comes to make sense of the world in order to question, subvert, and rupture the dominant configurations of ICT. Drawing from media theory and art pedagogy, it argues that contemporary artworks engaged in digital media polemics have the critical and pedagogical capacity to subvert precisely this worldview fostered by information, specifically through the artistic tactic of ‘estrangement.’ This is explored and substantiated through the close reading of eight case studies of artworks.

Preface

This dissertation is an original, unpublished, and independent work by the author, Kevin Tsuan-Hsiang Day. All images included are reproduced with the permission of the artists, photographers, or their representatives.

Portions of Chapter 3 have been published previously in: Day, K. (2020). Estranging the device: Media art pedagogy via Brecht, Heidegger, and McLuhan. The International Journal of Arts Education, 15(1), 11-18. doi:10.18848/2326-9944/CGP/v15i01/11-18

Portions of Chapter 4 have been presented at the Universities Art Association of Canada (UAAC) Conference (November 2016), UQAM, Montreal. Portions of Chapter 6 have been presented at the International Society for Education through Art (InSEA) Conference (July 2019), UBC, Vancouver, and at the International Symposium on Electronic Art (ISEA) Conference (May 2020), Printemps Numérique, Montreal.
Table of Contents

Abstract
Preface
Table of Contents
List of Tables
List of Figures
Acknowledgements
Dedication
Chapter 1: Introduction
  1. Context within Relevant Scholarly Literature
  2. Literature Review and Gap
  3. Program of Research and Chapter Breakdown
    3.1 Selection of Artworks
    3.2 A Note on ‘Pedagogical’
Chapter 2: Cultural Analysis
  1. Concept-Based Cultural Analysis
  2. Encounter Between Concepts and Objects
  3. Why Cultural Analysis?
  4. Introduction to Theoretical Framework
Chapter 3: Estrangement as Artistic Potential
  1. The New/Different vs. the Critical in Visual Art Theory
  2. Foucault and Critique
  3. The New/Different vs. the Critical in Art Education
  4. Brecht and Estrangement
  5. Estrangement in the Context of Media and Technology
  6. Estrangement as Artistic Potential
Chapter 4: The Logic of Commensurability
  1. The Task of Revealing
  2. Black Boxes
  3. Information Capitalism and the Logic of Commensurability
  4. Universal Language and the Pertinent Unit
  5. Digitality and Knowing
  6. The Digital as a Way of Knowing
  7. Estranging ICT
  8. The Algorithmic Other (Erica Scourti)
  9. (Non)communicative Dissent (Julian Oliver)
Chapter 5: Knowing through Information
  1. Determinism vs. Technical Regimes
  2. Knowing (from AI to Materials)
  3. Knowing through Information: Two Case Studies
  4. What Does the Search Engine Know? (Mongrel)
  5. The Cartographers and Counter-Graphs (Richard Ibghy and Marilou Lemmens)
  6. Conclusion
Chapter 6: The Materiality of Mediation
  1. Medium and Re/mediation
  2. Originary Technicity
  3. The Always Already Mediated Linguistic Subject (Erica Scourti)
  4. Digital Materialism and the Cloud (John Gerrard)
  5. Conclusion
Chapter 7: Estrangement through Noise
  1. Immersion in Noise / Medium
  2. Noise as the ‘Other’ of Information
  3. Noise of the Internet (Eva and Franco Mattes)
  4. Estrangement Revisited (Trevor Paglen)
  5. Conclusion
Chapter 8: Conclusion
  1. Summary
  2. Conclusion and Key Points
  3. Moving Forward
  4. Afterword: The Order of Things
References

List of Tables

Table 8.1 Summary of Case Studies

List of Figures

Figure 4.1 Erica Scourti, Life in AdWords
Figure 4.2 Erica Scourti, Life in AdWords
Figure 4.3 Julian Oliver, Transparency Grenade
Figure 4.4 Julian Oliver, No Network
Figure 5.1 Richard Ibghy & Marilou Lemmens, The Prophets
Figure 5.2 Richard Ibghy & Marilou Lemmens, The Prophets
Figure 6.1 Erica Scourti, Think You Know Me
Figure 6.2 Erica Scourti, Think You Know Me
Figure 6.3 John Gerrard, Farm (Pryor Creek, Oklahoma)
Figure 6.4 John Gerrard, Farm (Pryor Creek, Oklahoma)
Figure 7.1 Eva and Franco Mattes, Dark Content
Figure 7.2 Eva and Franco Mattes, Dark Content
Figure 7.3 Trevor Paglen, Autonomy Cube
Figure 7.4 Trevor Paglen, Autonomy Cube
Figure 8.1 Kevin Day, The Order of Things
Figure 8.2 Kevin Day, The Order of Things

Acknowledgements

My heartfelt thanks go to my co-supervisors Dr. Sandrine Han and Dr. Anthony Clarke, whose generous encouragement, caring guidance, and scholarly insights have made it possible for me to pursue the research I am passionate about. I am greatly indebted to the patience with which they have engaged with my work, their generosity in sharing their wealth of knowledge, and their invaluable feedback throughout various stages of this research. I am also extremely grateful to my committee members Dr. Kedrick James, whose enthusiasm and expertise have been a constant source of guidance and inspiration, and Dr. Ian Hill, whose keen intellect and rigor have contributed enormously to the clarity of the research. Lastly, I would also like to extend special thanks to Dr. Donal O’Donoghue, who encouraged me to pursue the PhD and provided much direction and grounding to my doctoral pursuit. It has been a privilege to learn from them all.
I am grateful for all my peers, with whom I had the pleasure of studying, exchanging thoughts, and working through concepts. I would also like to thank all the faculty members with whom I studied during my doctoral pursuit, for their intellectual rigor and tutelage, which significantly contributed to orienting me to a very expansive and multi-faceted discipline. My sincere thanks also to the Department of Curriculum and Pedagogy at the University of British Columbia, for creating a space that allowed me to build and develop my research, and for generously providing me with a Four-Year Fellowship. In addition, I am grateful to the Department of Art History, Visual Art and Theory at the University of British Columbia, for instilling in me such an enduring passion and commitment to visual art. Finally, I would like to thank the Canada Council for the Arts for their continued support of my artistic practice.

Dedication

To Ksenia, my loving information professional

Chapter 1: Introduction

“For me, big data means one thing and one thing only: the exploitation of labour,” writes media theorist, programmer, and philosopher Alexander Galloway (2013, p. 86). Both the psychologist Robert Epstein (2016) and philosopher Arthur Bradley (2011) have pointed out the curious phenomenon whereby our view of the world and our understanding of how it functions are greatly influenced by the dominant technology of a given era. For example, hydraulic systems led to the conception of internal humours that constitute the human subject, industrialization led to the conception of the human as an automaton functioning like clockwork, and so forth, all the way to more current understandings such as the human as a computer – an input-output device storing, retrieving, and processing information/knowledge – a legacy of “the informational turn” after the Macy Conferences on Cybernetics in the 1950s (Gleick, 2011, p. 262).
The conceptualization that the human (and all worldly phenomena) is constituted by data leads philosopher Colin Koopman (2019) to coin the term ‘informational person.’ What are the socio-political implications of an epistemological model in which the world and its phenomena consist of information? For Galloway, the answer appears to be that it leads to the extraction of value on an unprecedented scale – the commodification of the everyday at all levels, accomplished through the big data economy. In an age where the legacy of cybernetics and information theory is the dominant logic, realized as the capitalist enterprise of data-mining, digital labour, and algorithm-facilitated management of the everyday, such a worldview warrants a deeper analysis as part of an examination of the digital medium. How might contemporary visual art engaged in digital media concerns foster different ways of knowing, ones that have the potential to destabilize and subvert the ubiquitous, data-driven, and algorithmically-governed ICT assemblages? While the numerous ways in which digital media permeate contemporary society, and their socio-economic and cultural-political implications, are richly explored in scholarship from a variety of disciplines, I address information and communication technology (ICT) assemblages from the particular disciplines of visual/media art, art education, and critical theory and philosophy of technology and media.
Specifically, I examine (a) the current ubiquity, claims, and limits of an information-based way of knowing as the epistemic model that undergirds and sustains ICT operations, and (b) how contemporary art engaged in the polemics of algorithm-facilitated data-mining practices (that encode the everyday and exploit users in the big data economy) has the capacity to problematize this model, through its potential to disrupt entrenched norms of information systems and protocols. I do so by analyzing eight case studies of artworks that interrogate and subvert various contemporary ICT configurations.

1. Context within Relevant Scholarly Literature

Before delving into the literature review, it is necessary to clarify some terms and how they will be used throughout the dissertation. This research recognizes terminologies such as digital art, media art, net art, computer art, information art, database aesthetics, and other terms that derive their name from the medium that artists utilize as their tool, rather than from their conceptual focus. This way of categorization is the most common, such as media art historian Oliver Grau’s (2007) usage of ‘media art’ and philosopher Dominic McIver Lopes’ (2009) usage of ‘computer art.’ Some of the above terms encompass others. For example, Grau utilizes the term media art, which for him is a broad category that includes more specific versions of technology-heavy practices such as digital art, telematic art, interactive art, genetic art, etc. Artist and theorist Stephen Wilson (2002) prefers the term ‘information arts,’ which he uses to designate art practices that draw primarily from technological and scientific advances in society. Like Grau, he also includes under this umbrella term everything spanning from telematic art and genetic art to net art and algorithmic art.
The study also acknowledges seminal exhibitions such as Information (1970) at MoMA and Software (1970) at the Jewish Museum, which included numerous well-known Conceptual artists who were not necessarily working with digital media, but sought to acknowledge the increasing presence of information and systems/cybernetic thinking, and even considered art practices as the creation of information. This research does not focus on Conceptual artists like Lawrence Weiner or Hans Haacke, but acknowledges this crucial period in the 1970s when the dematerialization of certain art practices coincided with the turn of numerous artists towards the presence of information systems, such as Iain Baxter’s perspective that all art is information and artists are information workers (Cook, 2016), and goes beyond highlighting the presence and ubiquity of information. It moves forward with the position that the dominance of information society is a given, and considers media art not through a definition based simply on medium/tool, but rather on the conceptual focus and intention to address the medium (in this case ICT) and its socio-political implications. Therefore, while the shorthand of ‘media art’ is utilized in this program of research, it designates any contemporary visual artworks and practices that engage with the socio-political and onto-epistemological configurations of digital media. Such a shift of definition to a concept-focused approach makes space for artworks that may not utilize digital media but instead take digital media as the object of investigation.
This move is deliberate: it broadens and challenges the neutral, instrumental, and essentialist perspectives of media art discourse, with its modernist tendency to locate inherent properties of the media, a discourse that ranges from a strict focus on film and moving images, to video games, to practices using only the most innovative technologies, to interactive audio-visual design utilized for concerts and other entertainment. And while concepts such as technology and the digital are investigated in depth throughout the research, it should be stated here that ‘digital media’ refers both to the entity that is encoded digitally (therefore can be computed) and the protocols that support, use, and consist of such entities (i.e. both the image files and Photoshop, both the content on a website and the Internet protocols that govern the site, both programmatic functions and the programming logic that structures them, etc.). In particular, the study focuses most specifically on information and communication technologies (ICT) and their diverse manifestations (the assemblage of systems, protocols, software, and data that allow users to store, access, transmit, receive, and manipulate information in a digital form). As this section and the next demonstrate, there is a lack of scholarship on media art that engages directly with the socio-political dimensions of the digital medium, scholarship that does not position technology as instrumental and neutral. Since the study intends to address this gap within the discourse of media art, it begins with the umbrella term of digital media. However, as the chapter breakdown below shows, diving into the digital requires one to address ‘information,’ so this research spends the majority of its time investigating ICT assemblages in the information society, most specifically the operation of data-mining within the big data economy.
I argue that, in order for media art to address digital media and its socio-political and material conditions, it needs to examine the underpinning logic of information and its operation within the wider landscape of information capitalism. Philosopher Colin Koopman (2019) notes that while there is a rich supply of research on digital media, it tends to focus on communication to the detriment of an analysis of information itself. A focus only on the communicability of information presumes that information is neutral. Instead, he argues for the need to “fully interrogate the social significance of information, or data, beyond its communicative functions” (p. 1327). The need for such interrogation lies in Koopman’s assertion that information is the problem of today that media discourse overlooks, for “we are constituted by our data” (p. 1335) within the operations of what he terms ‘infopower.’ The claim of information to represent individuals and worldly phenomena is the main object of scrutiny in this research. As media art curator Sarah Cook’s (2016) book Information demonstrates, several artists and artist groups operating in the 1960s and 1970s were interested in the rapidly-growing telecommunication industry and sought to explore this through the form of networks and taxonomies (i.e. lists, records, definitions, documents, etc., and the formation of special libraries and archives), and also through presenting such forms and materials as art (Cook lists Hanne Darboven and General Idea as examples). In other words, art as information and information as art. However, the art practices of merely emphasizing the existence of telecom networks and exploring information as a process (i.e.
arranging real-time collaborations over telephone or sending mail and fax to galleries or supporting a ‘network’ of art publications) were soon no longer enough, and other artists began to interrogate the socio-political implications of information systems, asking how these systems control, who controls the systems, who is given access to information, and what the claims of these systems and databases are. The artists and artworks I wish to examine may be considered part of this second trajectory’s legacy.

2. Literature Review and Gap

As notable theorists have pointed out, a celebratory perspective can often (while certainly not always) be identified within the scholarship dedicated to various iterations of what one might term media art, as well as particular directions within art education and media studies (Barney et al., 2016; Galloway, 2004; Galloway, 2013; Huhtamo, 1993; jagodzinski, 2010; Simanowski, 2011; Stallabrass, 2003; Taylor, 2014; Weibel, 1996; Wilson, 2002; Winner, 1986). These perspectives hinge on the assumption that digital media can naturally foster a more participatory and egalitarian society, one that provides free and equal information access, distributes power to users, and levels class distinctions. Margot Lovejoy’s (2004) ambitious primer on the history and key concepts of media art, with its focus on a postmodern exaltation of the author-less and participatory as inherent and given attributes of digital media, is a notable example.
Former Ars Electronica artistic director Peter Weibel (1996), media archaeologist Erkki Huhtamo (1993), and media artist/theorist Stephen Wilson (2002) have all remarked that all too often, ‘media artists’ uncritically accept the terms and conditions of the medium they utilize without attempting to examine its socio-political arrangements and implications, and end up celebrating and supporting the entertainment, communication, government, and military industries within which ICT is embedded. One of the ways this becomes evident is through the artists’ focus on ‘interactivity,’ a celebrated characteristic that is assumed to be a unique feature of digital media, which overlooks its imbrication with the commercial and state usage of ICT administration. There is a tendency for existing scholarship on digital media and visual art to proceed from a medium-specific and instrumental approach, meaning the literature often attempts to talk about all artworks that utilize digital media as a self-evident and isolated category based on form or medium, with essentialist assumptions that all such works naturally exhibit certain properties (Grau, 2007; Lovejoy, 2004; Manovich, 2003; Paul, 2007; Shanken, 2009). Such a survey approach often attempts to identify ‘themes’ that recur in these artworks (such as distributed authorship, open-endedness, de-hierarchization, interactivity, immateriality and therefore non-commerciality, etc.), defining and limiting what digital/media art could be. Often, the artworks are also theorized in conjunction with or against other historical mediums such as painting and photography in an attempt to establish a logical teleology that legitimizes media art within the art historical canon (i.e. video works that ‘update’ the still-life or interactive works that ‘continue’ the chance play of Duchamp).
Wilson’s (2002) comprehensive tome on media art, which covers a wide range of practices such as ones utilizing the Internet, biotech, artificial life algorithms, datasets, robotics, facial-recognition software, and telepresence, demonstrates this tendency. Wilson’s survey is organized according to the technology/medium being used by the artists, who explore the function and promises of these technological achievements (such as accurate quantification, egalitarian communication, interactivity, increased creativity, more elaborate sensorial stimuli, more ‘intuitive’ AI, etc.). While the examination contains a few artworks that seek to destabilize the usage and effects of these media, the majority of artists and scholars in the collection seem to proceed from the perspective that technology is neutral and instrumental/functional (and can and should be leveraged for certain ends and further innovation), two commonly-held assumptions that philosopher of technology Langdon Winner (1986) has identified. In fact, a significant number of the artists noted in Wilson’s massive collection appear to be developers and engineers experimenting with new technology. The focus is on coming up with forms and applications that are basically visual effects, simulators, and entertainment, and on exploring ways of turning one input into an output of a different modality simply for the sake of interactivity (for example, gesture into sound), rather than on critical artistic inquiry.
As an example of this technologist approach, the artists working with the technology of artificial life – programs with conditions and parameters that allow digital entities to interact, learn, and evolve in generative ways – listed in Wilson’s collection appear to be more interested in supporting and exploring advancements of the technology (more accurate or more complex behaviors of AI agents in an artificial environment) or in seeing the aesthetic and formal qualities they can create (the ‘pretty’ images procedurally generated by artificial life algorithms), rather than in examining the socio-political implications or onto-epistemological claims (questions such as ‘how is this technology utilized on the general population’ or ‘can organic behaviors actually be programmed by the designer’). Media art curator Christiane Paul (2015) has a more nuanced approach, including both practices that simply utilize digital media for commercial and technology development and practices that tackle relevant socio-technological themes. But while she concedes that “classification based on form are not necessarily helpful in consistently outlining the themes developed in a given art” (p. 71), she nevertheless identifies ontologically given features of digital art, such as the interactive, participatory, dynamic, and customizable. Rather than taking such an approach, this dissertation does not claim to examine media art in general, but specifically art practices and projects that take on digital media configurations as a central concern to critically address (and not just view them as neutral tools). In the realm of visual art and art history, Claire Bishop suggested in 2012 that contemporary art discourse has not rigorously examined (more accurately, is unwilling to rigorously examine) the pervasive digitization of our contemporary society and the everyday, due in part to the mainstream art industry clinging to the marketability of the physical art object.
Bishop characterizes contemporary art’s relationship to the digital as one of ‘disavowal,’ which also paradoxically shapes what she sees as a persistence of the uniqueness, aura, and authorship of the art object. As she writes, artists today inevitably utilize digital media in various stages of the production, dissemination, and consumption of their work, using image-manipulation and video-editing software, social media for publicity, search engines and other informatics protocols for research, etc. (or simply, using smartphones and computers in general), but otherwise overlook digital media as the content of their practice. Here, the medium is simply considered a tool, rather than the focus of artistic and theoretical inquiry – an oversight that is one of the main foci and drivers of this research. This particular essay was met with controversy, partly because Bishop (2012) seemed to cite only renowned and already canonical artists and their lack of engagement with digital media, neglecting lesser-known artists who have been dedicated to the debate. Nevertheless, her essay still demonstrates a gap in the scholarship of mainstream visual art and art history dedicated to the critical examination of contemporary visual art exploring the politics of the digital medium.
There is a strong presence of scholarship, from disciplines such as media studies, philosophy of technology, science and technology studies, sociology, cultural studies, information studies, and critical theory, that is grappling with the various manifestations of digital media, specifically the sovereignty of algorithms and the tyranny of a distributed network of power working insidiously through communication, trading information as capital (Alsina & Galloway, 2007; Assange, 2011; Carr, 2012; Dean, 2005; Deleuze, 1992; Galloway, 2004; Hu, 2015; Koopman, 2019; Lash, 2002; Lash, 2007; Lazzarato, 1996; Mejias, 2013; Pasquinelli, 2009; Poster, 1990; Stallabrass, 2003; Striphas, 2015; Taylor, 2014; Wark, 2004; Winner, 1986). However, as media and communications scholar Robin Mansell (2009) notes, these concerns are often overlooked and “research in the more critical tradition has had relatively little influence on the priorities of those … in a position to make design and other choices regarding the nature and use of technology … in search for profit and according to the values of global capitalism” (p. 12). Such concerns (i.e. decentralized control, data-mining, surveillance, information capital, immaterial labour, among others), and the gap noted above for a critical approach within visual art discourse, indicate that there is relevance and urgency for interdisciplinary research that looks at the potential of contemporary visual art to unsettle and upend the norms, assumptions, and understandings afforded and prescribed by ICT assemblages. Sociologist Scott Lash (2002) articulates the concern clearly and urgently in his book Critique of Information. Lash elaborates on the state of information society and power, emphasizing that a critique of information is not only imperative, but also must come from within society, and especially within information itself. Fittingly, he closes his argument by locating one such possibility precisely within contemporary art.
Arguing against practices that focus on aesthetics and self-referential forms, he advocates instead for the critical reflexivity of post-Conceptual practices as projects that could function as critique of information society. This research acknowledges Lash’s championing and takes up his call by elaborating and substantiating this possibility. In short, while numerous artists are interested in bringing technology to art, in seeing how tech can transform and innovate the traditional forms (for example, machines/programs that draw or create ‘music’), this research situates itself in the opposite direction, where it aims to bring the critical potential of contemporary art to bear on digital media technologies.

3. Program of Research and Chapter Breakdown

To approach the question of “How might contemporary visual art engaged in digital media concerns foster different ways of knowing, ones that have the potential to destabilize and subvert the ubiquitous, data-driven, and algorithmically-governed ICT assemblages,” the research adopts Mieke Bal’s (2002) concept-based cultural analysis. Through this interdisciplinary and boundary-crossing method, it conducts close readings of artworks in conjunction with the examination of concepts (Chapter 2). The framework of Bal’s cultural analysis encourages one to examine their objects of study and concepts simultaneously, allowing them to encounter and interact with one another to produce new insight. For Bal, cultural analysis offers an interdisciplinary approach not bound by canon or convention, surpassing cultural studies in its mobility and fluidity, allowing the analyst to follow various concepts in their travels through different disciplines in order to read research objects in innovative ways.

3.1 Selection of Artworks

Through consulting existing media art literature and the exhibition archives of selected media and visual art institutions from around the world, eight projects were selected for analysis.
The artworks were chosen primarily for the efficacy with which they have tackled the socio-political issues of ICT assemblages outlined in this dissertation, in particular key areas such as search engines, data visualization, digital labour, data farms, and data-mining in general. Together, they create a representative but certainly not exhaustive sampling of artworks that aim to subvert the dominant mechanisms of ICT assemblages, insisting on the need for, and offering ways of, engaging such mechanisms differently, instead of holding a more neutral or instrumental position. In short, the artworks were selected to function as case studies that can generate productive insight for the research question. Guided by scholars of art education (Atkinson, 2008; Ellsworth, 2005; Garoian, 2008; jagodzinski, 2010), as well as media and technology (Feenberg, 1999; Heidegger, 1962; McLuhan, 1964), Chapter 3 conceptualizes a methodological framework through which to analyze the potency of the artworks engaged with the socio-political perspectives of ICT configurations within information capitalism. In particular, it puts forward the Brechtian theorization of estrangement as the adversarial tactic that encapsulates the pedagogical and critical potential of visual art. Informed by art education scholarship, this research follows the perspective that artworks are learning spaces, with the potential to foster pedagogical encounters and to function “as the vehicles through which we come to know differently” (Ellsworth, 2005, p. 37). The theorization of the pedagogical potential for artworks to cultivate ways of knowing otherwise, to come to make sense of the world differently, is a crucial aspect of this dissertation.
3.2 A Note on ‘Pedagogical’

It is important to clarify that the word ‘pedagogical’ as it is used in this dissertation does not refer to what topics and theories should be brought into the classroom by art educators and how they should be taught, nor does it operationalize the arguments into concrete classroom practices; rather, it refers to how the artworks themselves can foster ways of knowing differently. The pedagogical potential is therefore located between the artworks and the viewers, an encounter that can occur in the classroom but is not restricted to that setting. As such, while this dissertation can contribute to the art education classroom, its main ‘call to action’ is intended for practicing media artists and media art theorists. Relatedly, the art education scholars consulted in the dissertation were selected for the breadth of their research, which speaks to the potency of artworks and what they can do, taking the discussion of art’s potential into society at large, rather than focusing on the theories, artists, and tools that should be brought into the classroom. As such, they offer appropriate ways of bringing art educational theories into the discourse of contemporary studio/media art and theories of media and technology, bridging the disciplines more effectively. The dissertation also recognizes other conceptualizations of the term ‘ways of knowing,’ such as its usage with direct reference to Jürgen Habermas’ three-part theory as advanced in Knowledge and Human Interests (1968), its designation as beliefs about knowledge and learning within some of the scholarship of psychology, or its specific place within the highly mathematical tradition of analytical philosophy. However, it departs from these fields and utilizes the term more broadly. In the present context, ways of knowing refers to the “different understandings of the social world … and different ways of coming to understand that world” (Moses & Knutsen, 2019, p. 2).
The term refers to a kind of orientation to the world, a worldview, a lens through which one comes to make sense of the world and the phenomena within it. The concept as it is used here draws partly from Michel Foucault’s (1966/2005) theorization of discursive formations, the systems of knowledge, or the epistemological fields that create the conditions of possibility for certain assumptions, norms, truths, and thoughts to exist within a certain epoch or regime, while excluding others. It is also crucial to note that the research aligns itself with the argument that epistemological concerns cannot be separated from ontological ones, whereby knowing is not considered in isolation from the world, in order to avoid placing primacy on the mind and discourse. This line of thinking is supported by the scholarship of Karen Barad (on the entanglement of discourse and materials), Arthur Bradley (against the distinction between episteme and techné), Hubert Dreyfus (a Heideggerian theorization of know-how versus knowing-that), and Tim Ingold (on the inextricability of knowing and doing), among others. Chapter 4 begins analyzing the concept of the digital in conjunction with two artworks by performance artist Erica Scourti and critical engineer Julian Oliver. It establishes the socio-political landscape within which the research is situated, one that recognizes the pervasive utopic myth and exploitative algorithmic activities of informatics in a networked society. By digging deep into the constituent unit of digital media – information – the chapter explores and asserts the perspective that the binary digit is closely linked to a traditional epistemology of the knowing subject discovering a world of static and knowable objects (Dreyfus, 1991; Galloway, 2014; Gottlieb, 2018; Gunkel, 2007; Wilson, 2002), and that the assumption that worldly phenomena can be represented as information should be positioned as an information-based way of knowing – a logic of commensurability.
The chapter, and the three subsequent ones, all end with close readings of artworks in order to analyze and speculate about their pedagogical potential for cultivating different ways of knowing in relation to the ICT operations in question. Chapter 5 continues the argument proposed in the previous chapter and substantiates a critique of ‘knowing through information’ by drawing from the scholarship of Hubert Dreyfus (1992), Karen Barad (2007), Tim Ingold (2013), and Baruch Gottlieb (2018). In particular, the chapter elaborates on the Cartesian assumption that one can come to know the world from a decontextualized and disembodied position, consuming the world through the anthropocentric and rational seat of a floating mind/eye, at the expense of the material substrate. Against such a representational model of knowing, these theorists propose alternatives that emphasize contingency, embodiment, embeddedness, and materiality. This counter-argument to knowing through information is further explored through the artworks of artist group Mongrel and collaborators Richard Ibghy and Marilou Lemmens, investigated in conjunction with theories on search engines and algorithms. Chapter 6 leans heavily into the scholarship of philosopher Bernard Stiegler (1998), a student of Jacques Derrida, and his concept of ‘originary technicity,’ which insists that the technological is not an external addition (contra McLuhan), but rather always already entangled with the human. It complicates the distinction between the human and the non-human, such that the metaphysical superiority of mind over body, subject over object, and human over nature cannot be sustained. This chapter advances the argument against informatics and its elevation of abstraction over embeddedness. If one is always already mediated, then such mediation is also material.
The chapter emphasizes that there is no ‘outside’ to the sovereignty of data, and that critique must come from within, tackling information directly. The chapter examines another piece by Erica Scourti as well as a work by digital artist John Gerrard, read through Gottlieb’s (2018) theorization of ‘digital materialism.’ Chapter 7 wraps up the cultural analysis by proposing noise as an emblematic iteration of estrangement in the context of information capitalism and positions noise as the ‘other’ of information. It offers noise as another concept through which to think about this dissertation’s focus on embodied contexts and performative materials, against the abstracting act of information and its ‘reduction-to-form.’ The chapter examines noise through media and sound theories, touching upon its penchant for interfering with the dominant regime through an amplification of that which has been suppressed and marginalized. Tying the concept back to the previous chapter, the argument insists on the persistence of noise, the irreducible presence of the material world, as a counter to information. The chapter examines the concept of noise in relation to two artworks by the artists/hacktivists 0100101110101101.org (Eva and Franco Mattes) and the artist/geographer Trevor Paglen as invitations to think differently about ICT assemblages. Through the case studies, I argue that visual art engaged with digital media politics has the capacity to destabilize, disrupt, and subvert the normalized and entrenched information-based way of knowing through its potential for fostering ways of knowing otherwise – specifically, in the present context of ICT, through the tactic of estrangement. The main argument questions such a way of knowing, which presumes information is capable of adequately and neutrally representing the phenomena of the material world, with all of its particularity, contingency, and excess.
Noise, as an instantiation of estrangement, shifts the focus to the material, embodied, embedded, and lived contexts, and insists on that which cannot be captured and predicted by information’s abstracting act. To amplify such noise is the pedagogical potential of media artworks that confront information systems today. In various forms, all eight artworks examined in this research foster radically different ways of knowing the world by intervening in the systems’ informational order and protocols. They invite the audience/users to consider, relate to, and engage with ICT assemblages in drastically different ways, beyond the logic of commensurability – the assumption that worldly phenomena can be represented/encoded as information. As a practicing media artist and writer engaged in the theorization of digital media politics in information capitalism, I conducted this research to investigate this political landscape and potential countermeasures, to explore the artworks of other artists working with similar concerns, and to inform my own practice. My intention is for this dissertation to offer other media artists and theorists another way to conceptualize digital media configurations and what our tactics might be in the face of their ubiquity, and to contribute to art education scholarship’s engagement with contemporary media art. The study concludes with the examination of one of my recent art projects, as the final case study in the dissertation.

Chapter 2: Cultural Analysis

This chapter begins the discussion of methodology by outlining the theoretical background for the methodological structure of the interdisciplinary approach utilized by the dissertation, and articulating the reasons why it is suitable for this program of research.
The chapter ends by introducing the theoretical lens assembled to guide the reciprocal analysis between the artworks and concepts of the present study – a Brechtian methodology informed by media theory – which is elaborated in more depth in the next chapter.

1. Concept-Based Cultural Analysis

Following the proposal that interdisciplinary scholar Mieke Bal (2002) outlined in Travelling Concepts in the Humanities, the present study utilizes what she terms a ‘concept-based methodology of cultural analysis’ as its guiding principle to analyze case studies of media artworks. The method entails affordances of disciplinary fluidity and boundary-crossing, following certain concepts of interest and analyzing what their cultural relevance may be in relation to the research objects, rather than being bound by the canons, conventions, and methods of particular disciplines. In short, cultural analysis is “a critical and theoretical activity of the humanities in an interdisciplinary perspective” (Bal, 2007, p. 1). According to her, often several methods are applied. “You do not apply one method,” she writes, “you conduct a meeting between several … so that together, object and methods can become a new, not firmly delineated, field” (p. 1). In the research of scholars who proclaim to be using cultural analysis, several methods are often evident: feminist and archival research (Frojmovic, 2007), autoethnography and critical theory (Wilks, 2007), sociology and art history (Bal, 1999), Benjaminian thought and visual art (Pollock, 2007), global politics, memory studies, and literary analysis (Saloul, 2007), Frankfurt School, postcolonialism, and cultural theory (Grabner, 2007), among others. As such, a ‘new’ method is often derived – a method-assemblage. By her own account, Bal (2002) has a difficult time positioning her work within a singular discipline.
As a result, she coined the term ‘cultural analysis,’ advocating that “the field of cultural analysis is not delimited, because the traditional delimitations must be suspended” (p. 4). In this instance, ‘analysis’ refers to the “sustained attention to the object” (p. 9), whereas ‘culture’ is deliberately left undefined, as Bal (2003) understands it to be constantly mobile and shifting. Finding Raymond Williams’ four-part definition dissatisfying, she chose instead to rely on the qualifier ‘cultural’ to broadly refer to the act of negotiating when people of various practices engage one another. Taken together, the primary objects of cultural analysis are “the master narratives that are presented as natural, universal, true and inevitable” (p. 22), and its mission is to “dislodge them so that alternative narratives can become visible”1 (p. 22), examining “how power is inscribed differently in and between zones of culture” (p. 18). As such, Bal’s theorization of this interdisciplinary method, which analyzes concepts in relation to research objects with a critical lens, provides helpful guidance for my own work.

1 It should be noted that while I support Bal’s championing of the subversion of master narratives for the potential of alternative ones, I am cautious about the concept of making something ‘visible.’ In the context of Bal’s usage, interrogating visual culture studies, the term seems appropriate, but for my study, I do not wish to place emphasis on the act of making visible, which is fraught with its own contentious discourse around power and scopic regimes.

It should also be noted that while Bal is supposedly theorizing and proposing her own methodology, I recognize that a significant number of scholars in the arts and humanities are doing, or have done, work that may be similar to cultural analysis but simply do not refer to it as such, under Bal’s scholarly branding.
While any list would be incomplete, one can perhaps consider, for example, the work of seminal scholars such as Roland Barthes (semiotics, autoethnography, visual art, cultural theory), Julia Kristeva (semiotics, psychoanalysis, feminism), Fredric Jameson (political theory, literary theory, cultural theory, visual art), or Chantal Mouffe (political theory, post-structuralism, visual art) to be interdisciplinary, or even more recent scholars such as Karen Barad (quantum physics, feminism, continental philosophy), Elizabeth Ellsworth (film, architecture, affect, pedagogy), and Alex Galloway (media theory, continental philosophy, art history, computer science), to name a few that this research consults. Therefore, it is important to acknowledge that plenty of interdisciplinary scholarship in the arts and humanities is being and has been conducted without relying on Bal’s theory, and much of it predates her theorization by decades. What cultural analysis does do, though, is provide an explanation of why interdisciplinarity can be beneficial for certain studies. This crucial point is rarely touched upon in the scholarship noted above, where it is assumed to be self-evident in the theorization that unfolds. Bal’s conceptualization of the methodology precisely addresses this omission by articulating and explaining the significance of such practice and giving it a name – cultural analysis. For the present study, this method accomplishes work that cannot be done within the confines of stand-alone disciplines, creating an intersection and positioning itself within it. In a way, this chapter serves as an introduction and justification to the substance/content of the actual methodology of the study, which is presented in the next chapter.

2. Encounter between Concepts and Objects

The present study positions itself within the scholarship of visual/media art. However, that in itself is already a nebulous, amorphous, and interdisciplinary field.
Without such traveling and borrowing of concepts (especially from literary theory, women’s studies, post-colonialism, critical theory, cultural studies, anthropology, and philosophy, but also from fields as distant as physics, biology, and economics), the scholarship of visual art today would be extremely different, and arguably quite impoverished. Positioned somewhere in between visual/media art, art history, art education, continental philosophy, media theory, cultural studies, and critical theory, among others, my project lands in a similarly interdisciplinary terrain as the one described by Bal. As she writes, “interdisciplinarity in the humanities … must seek its heuristic and methodological basis in concepts rather than methods” (2002, p. 5). To that end, Bal suggests revisiting the literary method of close reading (but without its formalist and asocial baggage from New Criticism), in order to analyze things/objects (media artworks in this case) in conjunction with particular concepts, in a way that acknowledges the objects as always already embedded within culture, positioned in a scholarly conceit which recognizes that concepts travel and operate through various disciplines. Bal deliberately departs from the convention of close reading as used by New Criticism, which focuses on textual analysis and ignores the socio-political that occurs outside of the text: “Here lies the major difference between the old close reading, where the text is alleged to speak for itself; cultural studies, where … critique is more important than the object; and the newer close reading, which is informed by both, and which I advocate” (p. 18). Recognizing and foregrounding intersubjectivity, in which absolute consensus on a concept is impossible due to the social aspect of knowledge, Bal (2002) nevertheless sees the productivity and value in exploring a concept in various fields and within itself, groping for its dynamic meanings and operations.
Within this interdisciplinary landscape, she advances the possibility of “an analytical practice that is both open and rigorous” (p. 13), carefully oscillating between the openness of branching out and groping in various disciplines for a specific concept, and the incisive and deliberate ways in which this practice then zeroes in on how this concept, in its multiple iterations, has been and can be framed, articulated, thought, and brought to bear on research objects (such as artworks). This concept-based methodology and the elasticity/fluidity of concepts, for Bal, allow the analysis to sidestep the danger of being caught in an unhelpful discussion of the rigid versus the arbitrary/sloppy, “neither dogmatic nor free-floating” (p. 44). Rather than adhering to a particular discipline and its in-house methodology and canon, such an interdisciplinary approach grants one the license to create a constellation that would not have been possible otherwise, driven by the intention of analyzing together concepts and phenomena such as pedagogy, information capital, digitality, and mediation. By doing so, a methodology is constructed specifically for the purpose of the study, one that is theoretically pertinent and compelling for the research objects (which the next chapter elaborates on). As she argues for the value of such engagement, the process by which a concept is “brought to bear on [an] artefact, can be innovative as well as conducive to insights relevant beyond the artefact itself” (2002, p. 32). Despite this methodology’s focus on concepts, Bal emphasizes, through the method of close reading, that it is ultimately driven and anchored by the research objects. During her explication of cultural analysis, she adds that “theory can be meaningful only when it is deployed in close interaction with the objects” (p. 44), allowing “the objects to speak back” (p. 45).
This is a process where both the objects and concepts influence one another reciprocally during the program of research, ensuring that the concepts and the framework they contribute to are not simply ‘applied’ to the objects, and that the objects are not merely there to ‘illustrate’ the concepts/theories. To use Bal’s terms, the concepts and objects ‘encounter’ one another, in “a performative interaction between object, theory, and analyst” (2003, p. 24).

3. Why Cultural Analysis?

The following lists a few examples of research that explicitly utilized cultural analysis, conducted at or in association with the Amsterdam School for Cultural Analysis (ASCA) at the University of Amsterdam and the Centre for Cultural Analysis, Theory, and History (CATH) at the University of Leeds. Both are prominent schools of cultural analysis: the former is the department founded by Bal in the 1990s, while the latter was founded in 2001 to also take on cultural analysis as its main method of investigation into the nexus of cultural theory, art history, and visual art. The examples are intended to provide a sense of the variety of interdisciplinary inquiries possible under such a method.

• Using the concept of ‘framing’ to look at biblical manuscripts and illustrations, across the fields of art history, medieval studies, feminism, and Jewish studies, to investigate and argue that a particular illustration creates space for the possibility of female subjectivity and resistant viewing (Frojmovic, 2007).

• Using Walter Benjamin’s concept of the life-map to look at concepts such as childhood, place, and memory in the work of artist Charlotte Salomon, which focused on the autobiographical and trauma, to propose the reading that Salomon’s work is an assertion of the agency of life during the Holocaust, while travelling through disciplines such as history, feminism, literary theory, visual art, and more (Pollock, 2007).
• Using Bal’s concept of ‘critical intimacy’ (extended from Gayatri Spivak, a position that is against any pedagogical ‘distance’) as a specific theoretical stance to look at the concept of pedagogy, taking into account concepts such as risk, trust, and the immeasurable, to interrogate the managerialism championed by the British education system as it was implemented in art education, and to resist learning outcomes and measurable objectives (Wilks, 2007).

• Using the concept of ‘visuality,’ specifically the biological gaze, to analyze Rosalind Franklin’s photograph of DNA and other scientific representations, tackling concepts such as nature, essence, and materials, to insist that vision is not innocent, citing literature from fields such as visual studies, intellectual history, and science studies (Keller, 1999).

In various ways, all these studies employ one or more main concepts as the lens through which to analyze the research objects, working with and through several other related concepts that have both arisen out of and guided the analysis, often traversing several disciplines, to advance an argument about the objects based on the author’s analysis. For these studies, cultural analysis enabled them to be interdisciplinary and to leverage theoretical perspectives from various traditions, strategically and intentionally, to produce findings and arguments that would not have been possible had they been confined to their respective scholarly traditions. As artist and writer Allan deSouza (2018) writes in his book How Art Can Be Thought, analyses of art require mobility between different fields of knowledge to create a network, and such a method-assemblage is precisely the strength of cultural analysis.
“Rather than favor one method exclusively over another, each might allow particular insights, and their cross-referencing and testing in relation to the artwork being critiqued allows us to develop expanded models of interpretation that might otherwise not be available” (p. 71). In her edited book Conceptual Odysseys: Passages to Cultural Analysis, Griselda Pollock (2007) offers the methodology as a way to continue the intellectual work that was produced in the last quarter of the twentieth century in the arts and humanities. For her, this last quarter was undoubtedly dominated by theories of various kinds, including but not limited to the post-structuralist, post-colonial, feminist, Freudian, and Marxist. Acknowledging the massive impact this theoretical turn has had on the arts and humanities, shaking and reshaping the disciplines, she also notes the pundits who articulate a perspective of theory-fatigue, where the once-radical teachings seem to be reducible to “Theory 101 slogans (the author is dead, the gaze is male, the subject is split, there is nothing but text, etc.)” (p. xiv). Pollock, however, warns against any reactive moves away from theoretical engagement, and asserts that cultural analysis, of “transdisciplinary encounters with and through concepts” (p. xv), is a way with which one can continue the urgent work of theoretical and critical analysis that had been conducted by the theoretical turn, while also allowing one to move “beyond the initial engagement determined by specific theoretical paradigms” (p. xv). As she writes, following Bal, “concepts themselves arose inside specific theoretical projects. They now move out of – travel from – their own originating site to become tools for thinking within the larger domain of cultural analysis, a domain that seeks to create a space of encounter between the many practices that constitute the arts and humanities” (p. xv).
Situated between several fields, cultural analysis equips this research with the necessary methodological framework and rationale to conduct its work. In addition, as Pollock advocates above, cultural analysis continues the significant intellectual work theories have done for the arts and humanities by moving not away from theories but delving into concepts to investigate broadly across disciplines and to produce new knowledge that is only possible through such interdisciplinary encounters. For her, by “putting concepts to work through new encounters between arts, cultures, and concepts” (p. xvii), artists, art historians, and cultural theorists are then able to produce radically new knowledge about visual art today.

4. Introduction to Theoretical Framework

This section and the next chapter look at the specific theories/method-assemblage to be formulated and consulted, in order to conduct the reciprocal analyses with the artworks. The current chapter opens the discussion on methodology by outlining Bal’s theoretical explication of cultural analysis and its suitability for the present study – the how and the why of the methodology. The actual theoretical framework that is utilized to analyze the artworks and concepts – the what of the methodology – draws from critical theories of art and technology. It predominantly takes from the ideas of avant-garde playwright Bertolt Brecht (1936), while being informed by the media/tech scholarship of Andrew Feenberg (1999), Martin Heidegger (1977), and Marshall McLuhan (1964).
Specifically, it leans on Brecht’s writings (specifically his theorization of the estrangement effect in theatre, which has been extended to the critical task of revealing the device) and extensions of his thought by historian John Willett (Brecht on Theatre, 1964), German studies scholar Roswitha Mueller (Bertolt Brecht and the Theory of Media, 1989), and Marxist political theorist Fredric Jameson (Brecht and Method, 1998, and Aesthetics and Politics, 1980), while being informed by Feenberg’s theorization of technology, such as the technical code (Questioning Technology, 1999, which plays a more prominent role in Chapter 5), Heidegger’s theorizations of technology and tools (specifically the essay “The Question Concerning Technology,” 1954, and sections of Being and Time, 1962), and McLuhan’s theorization of the medium taken from several sources (“Notes on the Media as Art Forms,” 1954, “Myth and Mass Media,” 1959, Understanding Media, 1964, among others). In short, the study assembles and utilizes a Brechtian methodology informed by media theory (or perhaps, a Brechtian theory of media and technology), the pertinence, fit, and significance of which is elaborated in the next chapter. This theoretical framework combines the lenses offered by these scholars not just because such a method-assemblage has not been attempted before, but more importantly because of their theoretical complementarity as a framework for media/techno analysis and the artistic tactic of estrangement it offers. Together, their work suggests the concept of estrangement as a method to examine how visual art might destabilize the logic of informatics, towards the potential of a way of knowing beyond the one reified by ICT. To summarize, this research utilizes cultural analysis, guided by the Brechtian methodology and the concept of estrangement, to examine the eight artworks and how they engage with, confront, and interrogate ICT assemblages and the socio-political issues within.
Examined in relation to the concepts of information, mediation, and noise, my method-assemblage demonstrates that the potency of these artworks lies in their capacity to estrange ICT’s protocols by deviating from their normative functions. In doing so, the artworks destabilize the information-based way of knowing sustained by ICT assemblages and insist on embodied contexts and performative materials – the noise that persists beyond information’s encoding act.

Chapter 3: Estrangement as Artistic Potential

Following the previous chapter’s introduction to cultural analysis as a way to do research, this chapter works through several theoretical perspectives to construct and put forward the methodological lens for the study: a Brechtian methodology informed by theories of media and technology. This theoretical framework asserts that visual art has the capacity to cultivate different ways of knowing, a potential that is at once critical and pedagogical. Brecht’s tactic of estrangement aptly encapsulates such potential: by deviating from and thus rupturing the norm, it takes up the call to examine the technological apparatus. To achieve this, the chapter undertakes three tasks. It begins by investigating what different scholars of art education and art history have articulated to be visual art’s capacity, what it can do (and for some, what it should do). The answer draws an affinity between visual art and a propensity to create anew, to generate the unknown, to allow the different and the other to emerge. Secondly, through a detour to Foucault’s (1981, 1997) explication of ‘critique,’ it draws a parallel between the critical, the new/different, and the pedagogical. I argue that the artworks discussed in this research have the potential to foster pedagogical encounters, functioning “as the vehicles through which we come to know differently” (Ellsworth, 2005, p. 37).
The ‘pedagogical’ as it is utilized in this research refers to a quality that creates the conditions of coming to know differently, something that, as the art education scholars cited below point out, is precisely what visual art can do. Used in this way, the term goes beyond the transmission of specific skills and knowledge to a broader socio-political sense, referring to the acculturation of worldviews. A look at the etymology of the term pedagogy provides support for this perspective. The term cannot be reduced to methods of instruction or knowledge transmission in an institutional setting (didactics), but rather refers to the socio-political process whereby human beings “are socialized, acculturated, formed, led out or self-directed into new realms of knowledge and new ways of knowing” (Hamilton, 2009, p. 14), a process laden with the social values of a particular regime. For Hamilton, the question is: to whom can one attribute the operation of these regimes? In addition to Ellsworth and Hamilton, other scholars, while discussing pedagogy primarily as an act occurring between student and teacher, have expanded the term beyond the planned transmission of knowledge to an embodied, lived pedagogy that makes space for new possibilities within the students’ own becoming (Aoki, 2005) and one that emphasizes lived experience and practice over a strict focus on theory (van Manen, 1990). This perspective of the pedagogical in an expanded social, cultural, and political sense provides an orientation for the arguments of the art education scholars below, who in various ways articulate art’s pedagogical potential to unsettle, undercut, and disrupt such regimes. As scholar of media and pedagogy Elizabeth Ellsworth (2005) argues, pedagogy is experiential, lived, and embodied, and therefore it necessarily negates the instrumentality of objectified knowledge.
Contemporary visual art, one of the learning spaces she has identified, can function as a ‘pedagogical pivot point’ that engages viewers “in a continuous passage toward knowings that are forever incomplete” (p. 57), not necessarily providing answers but creating conditions for viewers to come to know differently. If pedagogy etymologically means ‘to lead,’ without necessarily prescribing where one is leading another to (Aoki, 1991/2005), then it stands to reason that a pedagogical experience involves travelling from one point to other points, resulting in one or multiple shifts in one’s knowing. The first two parts, which establish that artworks have the propensity to foster ways of knowing otherwise and that this potential is both critical and pedagogical, are crucial, for they pave the ground for the third part, which then delves into an elaboration of Brecht’s concept of estrangement and the cogent insight it yields when read together with theories of media and technology. The third part establishes that the new/different, through the act of making something strange, is exactly what is needed to reveal the device for technological examination and critique. Taken together, the chapter advances the argument that visual art engaged with digital media polemics has the capacity to subvert normalized and entrenched ways of knowing through its potential for fostering ways of knowing otherwise, specifically in the present context of ICT and information society, through the tactic of estrangement. The chapter ends by taking this discussion into the specific realm of media and technology, leading into the next four chapters, where it expands to the context of digital media polemics and is used to read through the case studies of artworks.
While some studies focus on the pedagogical and generative capacities of art practices for the students or artists themselves, for the purpose of this study, the potential of visual art to subvert, create anew, and generate the unknown is located between the artwork and the public, as one encounters the work in some fashion.2 Through cultural analysis, I focus on the pedagogical potential of these contemporary artworks for the viewers, rather than the specific strategies art educators might employ to bring these artworks into classrooms and create such pedagogical encounters. This research also holds the position that the distinction between the ‘process’ and the ‘artwork’ is porous, as often the process is a major component of the work, and the work is not necessarily ever complete but progresses and changes as it engages with the public and unfolds in time. Therefore, the theorization of pedagogical potential is restricted neither to finished artworks nor to the art-making process.

2 Of course, these encounters also include the viewers’ own dispositions and prior knowledge, the artists’ intentions and artist statements, the educational programming of the exhibition space, and other art educational initiatives, all of which contribute in vital ways to one’s experience with the work but would be impossible to enumerate and account for.

1. The New/Different vs. the Critical in Visual Art Theory

This section begins the argument for artistic potential by consulting a few scholars of art theory and art history. It identifies a distinction between two perspectives on artistic potential, one that is termed ‘critical’ and one that relies on terminologies such as ‘new’ and ‘different,’ the latter purporting to move beyond the former. The section offers a different perspective, one that identifies the need for the critical always to entail the new, thereby questioning the distinction between the two.
In later sections, Foucault’s notion of critique reinforces this argument, which is further supported through the perspectives of several art education scholars, whose work provides a space to consider the connections between the different, the critical, and the pedagogical. In other words, the dichotomy introduced in this section is complicated and reconsidered through the subsequent sections. Albeit in broad strokes, it is arguable that for a major part of the twentieth century, through the dominance of the linguistic turn, critical theory, and cultural theory, and spurred on by the global political landscape, artists were engaged heavily in theoretical discourse, intent on producing ‘subjects’ emancipated and liberated from the subjugating bourgeois machinery and culture industry. In recent years, however, one can see a surge, or perhaps resurgence, of something else: objects, practice, materials, process. In the post-structural climate where many scholars and artists are frustrated with the linguistic turn and its legacy of everything being ‘textual’ (Derrida, 1978), there is a trend and desire to focus on object/practice/embodiment as opposed to subject/theory/detached knowing. In recent years, speculative realism (Mackay, 2007), new materialism (Barad, 2007), Deleuzian ontology (Deleuze, 1994), Badiou’s preoccupation with ‘truth procedures’ (Badiou, 2013), actor-network theory (Latour, 2005), and object-oriented ontology (Harman, 2002) are several of the schools of thought that have gained much momentum in the humanities, and certainly in visual art. Despite their significant differences,3 perhaps they all share a rhetoric that speculative realism has termed anti-correlationism (Mackay, 2007), a supposedly anti-Kantian stance that refuses to settle with Kant’s distinction between thinking and being, transcendence and immanence, subject and object.
As such, they have been beneficial tools for many scholars and artists to draw from in the pursuit of moving beyond post-structuralism and realigning the focus on materials and processes, the artistic practices through which most artists work/think as their research and theorization. The move away from the distinctions noted above, such as subject and object, is further elaborated in the rest of the dissertation. For now, the focus is placed on the trend noted and the dichotomy it implies: is this new direction a move away from critique, and what does this mean for the potential of artworks? Artist and theorist Simon O’Sullivan (2010) comments on this particular trend he has observed in contemporary art and puts forward an argument that such practices can be distinguished by their newness, strangeness, and otherness (often with a focus on objects, materials, assemblages) from the tiresome focus contemporary art has had on institutional and ideological critique throughout the postmodern era. Drawing on Deleuzian terminologies (Deleuze, 1994), O’Sullivan elaborates on the various ways contemporary art practices of this nature can be likened to aesthetic actualizations of Deleuze’s theory, with the potential to create the unprecedented, that which is yet to come. As he asserts, “art ruptures dominant regimes and habitual formations and in so doing actualizes other durations, other possibilities for life” (p. 205). In doing so, he creates a distinction between the artistic method of critique and the artistic potential for the new. Instead, I would suggest O’Sullivan’s argument allows one to consider the possibility that the latter is an expansion capacious enough to include the former. As he himself points out, art’s potential for the new, ‘the logic of difference,’ is nothing new.

3 For example, ANT focuses on framing everything as actants in relational networks, while OOO rejects relations/networks and considers only elusive objects (Cole, 2015).
To consider this, let us examine ‘critique’ more closely. O’Sullivan (2010) references art historian Craig Owens’ (1980) seminal two-part essay on postmodern art practices published in October, “The Allegorical Impulse.” In this essay, Owens distinguishes between the self-assured autonomy of modernist practices, where the unity between form and content is assumed (what he termed ‘symbolic’), and the self-reflective critical doubling of postmodern practices (what he termed ‘allegorical’). Here, the symbolic connotes some form of whole, self-contained essence, a self-sustained stable presence, whereas the allegorical refers to a kind of doubling, multiplicity, gap, or distance, which makes possible acts of interpretation, reflection, and translation. In semiotic terms, the symbolic, as Owens uses the term, has an unquestioned one-to-one relationship where meaning is guaranteed, whereas the allegorical recognizes the slippage and contingency of meaning. Owens suggests that an “allegory occurs whenever one text is doubled by another” (p. 68) and “becomes the model of all commentary, all critique” (p. 69), akin to the intertextual palimpsest (necessarily reading through and rewriting other texts), since “the allegorist does not invent images but confiscates them … and in his hands the image becomes something other” (p. 69). Framed as such, the allegorical potential of art practices seems to have an affinity with both critique and the transformative/other. He substantiates his argument with the following characteristic methods of postmodern practices: appropriation, site specificity, impermanence, accumulation, and hybridization.
Although it would be unrealistic to wish to encompass the multiplicity of postmodern art practices in two essays, all these examples point to a sense of contingency, destabilizing taken-for-granted categories and boundaries as one appropriates, fragments, serializes, and remixes, echoing O’Sullivan’s focus on new assemblages/combinations that differ from the habitual and normal. The above perspective is captured in art historian Hal Foster’s seminal essay collection The Anti-Aesthetic (1998), where he characterizes postmodern art practices as the critique of representation, bent on exceeding and deconstructing the master narratives and Enlightenment logic of modernity – a postmodernism of resistance. That being said, it is important to recognize the heterogeneity of the various practices of criticality noted in Foster’s collection of essays and the breadth of art practices in general. There is also a distinction between the criticality of the historical avant-garde (Dada, for example) and the criticality of practices that could be categorized as postmodern, a distinction rooted mainly in their different historical periods, specific tactics, aspirations, and limits. What might be gained from a more macro perspective – one of the valuable lessons of art education – is the ability to see the connection and to put forward the common theme of criticality as one of visual art’s potentials: a mode of questioning, resisting, and envisioning otherwise. For O’Sullivan, Owens’ essay outlined and championed the trend of ‘critical’ practices that he himself sees as being superseded by more current practices, ones that seem to have moved, or at least are trying to move, beyond the trend of critique. But what if the critical and the new, as O’Sullivan has distinguished them, are not mutually exclusive? This is precisely what the chapter contends, revisiting and unpacking the idea of criticality and repositioning it as something more capacious.
Utilizing Bal’s (2002) method, the chapter travels through disciplines to explore the relationship between the critical and art practices, acknowledging but not being tied down to any one discipline’s canon. The art critic David Joselit (2003) responds to Owens’ essay and also identifies such critical tendencies in postmodern practices, noting the widely-accepted need to ‘subvert,’ in order “to unveil the mechanisms of commercial culture” and “to deliver a fatal blow to the society of the spectacle” (p. 3). Criticality and subversion are pervasive to such an extent that the line between the artist and the critic becomes increasingly dubious, leading Joselit to ponder what their respective roles now are and what exactly constitutes the efficacy of political and critical practices. He navigates through a case study of former alternative media producer Michael Shamberg and concludes by emphasizing, for artists and critics alike, the imperative of being agile and mobile in order always to be able to create something new. For him, criticism needs to have the function of “inventing new categories of analysis that allow[s] for the invention of new kinds of objects” (p. 11), to which one might also add new subjectivities, new ways of knowing, and so on. The ‘mobility’ of criticism is imperative, as criticism inherently needs to be about the ‘new’ in order to innovate, since “efficacious criticism is short-lived, always vulnerable to the twin dangers of incorporation or irrelevance” (p. 11). The critical, in other words, needs the qualities of the new, the different, the other. It needs to be acknowledged that the ‘different’ is a well-worn strategy of neoliberalism, which readily commodifies the new and packages conformity as uniqueness (Baudrillard, 1998; Klein, 2000), and that innovation is the dangerous flipside of so-called creativity (jagodzinski, 2001).
That being said, insofar as one can identify urgent issues that need to be tackled within the dominant regimes, it stands to reason that artistic practices should investigate how such regimes and their configurations can be made different. Within the literature of media art specifically, artist and theorist Stephen Wilson (2002) makes a similar point when discussing the role of art within digital culture. He outlines what several artists and theorists, such as Peter Weibel and Erkki Huhtamo, offer as the potential of artistic interventions within digital culture, ranging from utopic actualizations of a cybernetic collective consciousness (also known as the noosphere), to benign explorations of the new configurations of identity, communication, relations, and reality introduced by digital media (often with a postmodern emphasis on fluidity), to more critical interventions that confront the media ecology of socio-political and cultural-economic forces and develop countermeasures. However, he summarizes by reminding readers that “the cultural forces driving digital development can easily assimilate many artistic gestures [and] artists must constantly re-evaluate the effectiveness of their media intervention” (p. 653). The point about the mobility of critique may seem trivial, but it is crucial to the current argument, for it allows one to think of the new and the critical not as distinct categories, as in O’Sullivan’s case, but rather as one necessitating or implying the other.

2. Foucault and Critique

This perspective on what ‘criticism’ is also echoes Foucault’s (1981, 1997) theorization of the concept. While the previous section opened with O’Sullivan’s dichotomy, the history of which can be seen in Owens’ and Foster’s canonical theorizations, this section utilizes Foucault’s theorization to question that dichotomy, with the next section providing further support for this argument.
As Foucault elaborates below, the mobility of critique is crucial, for critique is not necessarily tied to a historical period or strategy, but rather characterized by transgressing the limits of certain regimes and transforming the present into something otherwise. In the lecture What is Critique?, Foucault (1997) identifies “the critical attitude … as an act of defiance, as a challenge, as a way of limiting … governing” (p. 46), something that emerged in the sixteenth century in response to the proliferation of governing assemblages in pedagogy, politics, and economics, among others. Here, Foucault gives a definition of criticism that most would be more accustomed to: “critique is the movement by which the subject gives himself the right to question truth on its effects of power and question power on its discourses of truth … critique would essentially insure the de-subjugation of the subject” (p. 47). The questioning of truth and power, the inextricable link between knowledge and power, the mutual necessity of knowledge-power for their production, legitimacy, and deployment all seem familiar as Foucauldian thoughts. However, as Foucault elaborates on criticism, its propensity for the ‘new’ becomes more evident. In the lecture Practicing Criticism, Foucault (1981) puts forward that “critique … is a matter of pointing out on what kinds of assumptions, what kind of familiar, unchallenged, unconsidered modes of thought the practices we accept rest” (p. 154) and that “criticism … is absolutely indispensable for any transformation” (p. 155). This focus on transformation and transgression is, perhaps surprisingly, the Kantian connection of Foucault, a Kantian tradition that Foucault (1997) refers to as the ‘ontology of the present,’ a practice not concerned with the analysis of truths and universal epistemic structures, but a critique that “issues a call for change by revealing some limits as contingent and in need of transformation” (Hendricks, 2008, p.
363), taking “the form of a possible transgression” (Foucault, 1997, p. 113). Such Kantian critique moves one away from the usual lexicon associated with Foucault, such as power, discipline, and apparatus, and instead emphasizes the urgency and the potential of that which may yet be, that which is other than the present.4 In relation to the status quo, the critical attitude possesses the capacity “to imagine it otherwise than it is, and to transform it not by destroying it but by grasping it in what it is” (Foucault, 1997, p. 108). Seen in this light, Foucault’s conception of criticism would also seem to entail a necessity of the new, something that differs from and transgresses the present in order to transform it. As he suggests, “critique … separate[s] out, from the contingency that has made us what we are, the possibility of no longer being, doing, or thinking what we are, do, or think” (p. 114). The critical is the potential for fostering that which is beyond the present norm. The point that the critical and a propensity for the new/different/other are not mutually exclusive, or that the critical inherently entails and necessitates the new, may seem obvious, but it bears making. The point about ‘mobility’ that Joselit (2003) emphasized above is crucial, and it echoes Claire Bishop’s (2012) similar point in Artificial Hells: the radical, transformative, and innovative characteristics that some critique or art practices may exhibit or claim to possess at a particular juncture in history are not a given and certainly not immune to the dangers of incorporation and irrelevance.

4 In a way, Foucault continues this Kantian tradition, picking up where Kant left off, by doing a Kantian critique of Kantian critique, questioning Kant’s faith in progress, freedom, reason, and rationality (Foucault, 1997, p. 51), as he asks, “how is it that rationalization leads to the furor of power” (p. 54)?
This seeks to nullify the pervasive arguments that certain artistic practices or strategies, such as participatory art or media art, are naturally (and will always be) critical, democratic, and utopian. The articulation of this argument is therefore imperative for any study addressing the gap in the scholarship of media art. The form of criticism needs to be agile, constantly moving, and remain in flux in order for the different and the otherwise to be possible.

3. The New/Different vs. the Critical in Art Education

Conceptualizing art practices as research, art education scholar Graeme Sullivan (2006) argues that the act of artmaking is not only an often-overlooked means of human understanding but also involves a significant amount of cognitive and intellectual processing. For him, criticality is a prominent feature of the research acts of art practices, one that not only transforms the artist/researcher but also the audience: “similarly, a viewer or reader is changed by an encounter with an art object or a research text as prior knowledge is troubled by new possibilities” (p. 28). This framing of the critical as transformative, associating it with change and new possibilities, not only echoes Foucault’s theorization above but also situates this pertinent framing within art practices/research and creates a segue to the arguments of the art education scholars below. This section continues to build the argument for artistic potential by consulting a few scholars of art education, Charles Garoian (2008, 2015), jan jagodzinski (2010), and Dennis Atkinson (2008), whose work identifies the binary articulated by O’Sullivan above but also blurs the lines and highlights the overlap between its two sides. O’Sullivan’s distinction between the critical and the new/different attenuates, and the two sides come together in these scholars’ conceptualizations of pedagogy.
Such perspectives strengthen the argument this section has been building so far: artworks have a pedagogical and critical potential for cultivating the different and the other. This consideration allows for an understanding of artistic potential that is more capacious and that recognizes the need for critique to remain fluid and mobile in order to be potent – always needing to imagine otherwise. In his scholarship, Garoian (2008) proposes a model of ‘prosthetic pedagogy’ that characterizes art and art education. One of the ways he explains the concept is through the device of a pun, in that ‘prosthetic’ is used to denote the continuous coupling of unlikely elements that in turn could have generative and productive effects. These qualities of in-betweenness and openness, for him, create the possibilities for accidental encounters, unforeseen alliances, and unending performances, possibilities that are arguably pedagogical. “The performance of a pun … renders linear understandings and language out-of-joint to expose, examine, and transform … cultural assumptions and representations into ways of seeing, thinking, and doing differently” (2015, p. 487). Such engagements, between elements that may or may not be compatible, constitute learning events that can generate the unknown ‘yet-to-come.’ Following Deleuze’s terminologies, Garoian argues that the performances of such entangled elements are precisely the potential of art research and practices, through their experimental and exploratory processes. For him, such processes question and deterritorialize presumed boundaries to allow the unforeseen to emerge, always ready to connect and combine to create new assemblages. This line of thought bears resemblance to O’Sullivan’s (2010) perspective outlined earlier about the ‘new,’ especially in Garoian’s account of his students, where qualities such as unpredictability, strangeness, disequilibrium, and difference were emphasized.
However, Garoian’s (2008) conceptualization of the ‘prosthetic’ simultaneously advances a project that appears very political in nature. The concept of prosthesis is utilized here “to challenge the utopian myth of wholeness and normality” and to foster “subjectivity that intersect, critique, and extend beyond academic, institutional, and corporate assumptions to enable the creation of new and diverse understandings through art practice” (p. 218). Socially and historically constructed norms, values, and representations, disseminated and perpetuated through mass media and discursive formations, are the targets of critical examination. The call to question wholeness, normality, and stable boundaries bears resemblance to the scholarship of Donna Haraway (1991) in her resistance to stable and totalizing categories such as ‘the human’ and her insistence on hybridity, the ‘cyborg’ model, and situated/partial knowledge. As such, Garoian’s (2015) articulation echoes those of scholars working in identity politics, cultural studies, media studies, feminist studies, and related fields, in that he is advocating for art’s capacity to create “ways of learning that unsettle and render socially and historically constructed assumptions and representations out-of-joint” (p. 488). In the context of contemporary art specifically, he has championed artists such as Judy Chicago and Cindy Sherman and their work on gender and biopolitics, noting the ‘prosthetic criticality’ their work generates. Seen in this light, his prosthetic model allows one to consider art’s propensity for the new, the critical, and the pedagogical simultaneously. In Visual Art and Education in an Era of Designer Capitalism, jagodzinski (2010) launches a sustained argument for visual art practices’ capacity and necessity to take on a combative stance towards the control, manipulation, and subjugation of neoliberalism in the form of an advanced consumer culture facilitated by ICT.
Drawing from Jean Baudrillard’s (1998) work on consumerist society and Guy Debord’s (1994) work on the society of the spectacle, jagodzinski reiterates their point about the post-war rise of hyperreal consumerism facilitated by the image (static, moving, and beyond), and continues the argument with his term designer capitalism. For him, the danger of life becoming art is the aestheticization and subsequent commodification of the everyday, executed through the ‘design’ that captures affect and extracts attention for surplus value. This, reasonably, extends a similar obligation to the education of visual art. “Every artist and teacher of art must face this fundamental antagonism” (p. 53) between art and design, where the former is in the process of being subsumed by the latter. Such a tendency is prevalent in art education, jagodzinski observes, where visual art is instrumentalized for other academic subjects or the focus is shifted to design, fashion, architecture, animation, and the like. By conferring use value upon art, such an act, for him, compromises art’s criticality (and the autonomy that this potential requires). “In the designer capitalism of digitalized information society, it has become imperative to put art and aesthetics to use” (p. 53). Jagodzinski is not shy about attacking certain iterations of contemporary art education and the buzzword of ‘creativity,’ which, for him, not only services designer capitalism as ‘innovation’ but also ignores how affect is captured and harnessed within the spectacle and its marketing, leaving social inequalities untouched. Instead, jagodzinski advocates for a model of proactive social critique, one that resists the crisis of art becoming subsumed under designer capitalism and also challenges the dominance of the latter.
For him, this fraught relationship between art and design is indicative of a ‘trauma’ of art and its education in contemporary capitalism, as art “struggles to remain a separate sphere to keep a social critique alive” (p. 58) and to avoid the danger of being co-opted by state or commercial interests. Confronted with capitalism in the information age, he therefore tasks art (and its education) with the obligation to ‘ruin’ representation and go “beyond designer capitalism’s aestheticization of the world picture … to form a new sociopolitical potentiality” (p. 139). Jagodzinski is not oblivious to the reality that visual art is inevitably embedded in society and that its autonomy is necessarily imbricated with heteronomy – that it cannot avoid being a part of the social. One cannot know and act from the external vantage point of an ‘outside,’ for there is no outside. Nevertheless, the social critique he advocates is not a scenario where art ‘services’ the socio-political. He maintains that “art is irreducible to any sort of critical, political, or cultural explanation or assimilation” (p. 59). In fact, his language shifts and comes close to O’Sullivan’s and Foucault’s when he identifies art’s “potential as a critique as to how things may be otherwise” (p. 59), which can create “new ways of seeing, feeling, knowing … new ways of experiencing the world” (p. 99). Through these statements, he echoes the argument of this section that the critical sits in close proximity to the new, the different, and the otherwise. For him, the focus is not on a hermeneutic analysis of the meaning of art, but rather on attending to what art can do – its potential for transformation and change. Similar to Foucault’s point earlier, such potential (‘force,’ in jagodzinski’s conceptualization) has an affinity with critique, as it could operate “in the transformative sense of undoing and reworking relations of power” (jagodzinski, 2010, p.
110).5 The possibility of change (from the current norm) is always political.

5 It should be briefly noted that in the more recent What is art education? After Deleuze and Guattari (2017), jagodzinski examines the question with a focus on D+G’s proposal that art’s role, as articulated in their work What is Philosophy (1991), is to create new worlds through undoing and messing with the dominant configurations of affect, percept, and sensations. On the one hand, jagodzinski once again claims that art education overemphasizes self-expression and hermeneutics while ignoring the question of affect (or that it does affect ‘wrong’ by focusing only on the body). On the other, he also concedes that affect has been completely co-opted by the marketing industry, ready to be manipulated and harnessed through big data and neuroscience, constituting affective capitalism. Ultimately, he writes, “the virtual realm of affects and percepts … provides a resistance to the market of exchange so that the affected viewer … generates thought that breaks with action or habit” (2017, p. 20). In other words, while emphasizing affect, his conceptualization still revolves around the subversion of capital through the possibility of transformation, change, and the new – in this case specifically a commitment to creating ‘possible worlds’ (New Earth, people-yet-to-come, A Life, etc.) in response to the Anthropocene. The present study wishes to sidestep the dichotomy maintained by some artists and scholars between affect/sensations and theory/concepts, and instead move forward with broader terms such as the new, different, other, and strange, as I believe these generic terms can point towards something that encompasses both affect and concepts and can effectively avoid the danger of getting caught in this unhelpful dichotomy.

The proposition of education scholar Dennis Atkinson (2008) may further elucidate a middle ground between the new and the critical, one that Garoian and jagodzinski have already alluded to above through their scholarship. Before getting into his theorization, a brief detour into the philosophy of Badiou (2013) may help, as Atkinson draws the momentum of his argument from Badiou’s philosophy of the ‘event.’ For Badiou (2013), in order for something to constitute an artistic event, form must innovate; something new and unprecedented must be present, for which he cites the emergence of abstract art and atonal music in their respective historical contexts as examples. In his framework, artworks need to be constituted by forms that were not considered ‘forms’ before, that were not already created or accepted within the artistic fields, in order to configure new subjectivities. An event is something radically alien, other, new. “An event is something that brings to light a possibility that was invisible or even unthinkable … it opens up a possibility … an unexpected and unforeseeable possibility” (Badiou, 2013, pp. 9–10), one that emerges from “the rupture that the event is” (p. 10). For him, a constellation of such ‘events’ occurred from the twentieth century onwards in numerous fields, including visual art, where new forms emerged in rapid succession. Bearing many names (postmodernism, deconstruction, the avant-garde, etc.), he characterizes this period as “fueled by a radical critical methodology” where one might say “art was criticism [emphasis added]” (p. 77). Much like O’Sullivan (2010) did above, Badiou asserts that such a strategy has exhausted itself and that perhaps it is criticism itself that needs to be critiqued and re-examined so that visual art can be reconstituted in an ‘affirmative’ modality (an ‘affirmative split’ is another term Badiou uses to refer to the event).
The affirmative function, for Badiou, seems to be less focused on the dismantling of tradition that preoccupied the post-modern art practices and movements. Perhaps Badiou echoes O’Sullivan’s (2010) distinction noted earlier, dividing artistic tendencies into “one of dissent (a turn from, or refusal of, the typical) and one of affirmation (or something different) … one of criticism, one of creativity” (p. 197). O’Sullivan likewise utilizes the event as a concept (although allegedly taken more from Deleuze than Badiou) for thinking about the contemporary art practices he has observed, characterized by the move away from critique. Here, the event is a point of indeterminacy, of potentiality that resides between the latency of the virtual and the action of the actual. Linking the concept to certain practices (O’Sullivan favors, in this instance, performance art), he champions “the transformative power of the event – the way in which it holds the potential to open up new pathways, new possibilities of being for all participants (artists and spectators as it were)” (p. 202). However, as Foucault’s (1981, 1997) elaboration on critique above indicates, the critical entails and necessitates some form of the new, the unprecedented, the transformative, allowing one to reconsider such a distinction. Dennis Atkinson (2008) takes Badiou’s (2013) philosophy into the realm of art and education, outlining the necessity of a pedagogy, an event, that can counter the regulatory effects of the dominant regime (be it governmental, educational, or other such systems), in order to expand learning. Here, his usage of the concept of the event in his framework of ‘pedagogy against the state’ has a more ostensibly critical tone. Against the control of specified learning outcomes, Atkinson champions the capacity of art practices to foster a ‘learning encounter,’ which “involves a disruption of established states of pedagogical knowledge and practice” (p.
235), practices that construct and legitimize one as a subject of a particular socio-educational regime. In his elaboration, examples are drawn from the arts and crafts practices of children and students, but this framework could also encompass other art practices. Pitting the potential of the unknown against the established forms, Atkinson follows Badiou (2013) in articulating a need for the ‘that-which-is-not-yet,’ the unprecedented forms, systems, paradigms, states, etc., that are distinct from the legitimized practices, knowledge, and values that constitute the current dominant regimes. The not-yet can be that which does not yet exist, or is marginalized, existing on the fringe and unsanctioned. It is the ‘other,’ the noise of the current educational, social, economic, and political configurations. An event, specifically in terms of the pedagogical potential of art, “opens up new possibilities, new ways of seeing things, new ways of making sense of what is presented to us” (p. 238), puncturing the unquestioned and normalized knowledges and practices. Likewise, as jagodzinski (2010) elaborates on his conceptualization of art as a field of becoming, of transformative forces that can enable the ‘new’ to emerge, he announces that “such art is profoundly pedagogical [emphasis in original]” (p. 114). To recap, the chapter used O’Sullivan’s observation of contemporary art practices and the dichotomy he identified (critical vs. new) as a catalyst for an argument about the potential of artworks that encompasses the critical, the new/different, and the pedagogical. Art historical perspectives that support O’Sullivan’s identification of art practices with the critical, especially postmodern practices, were touched upon. Joselit’s argument on critique’s mobility opened up this perspective and led to a closer examination of the concept through Foucault’s theorization, which allows for an expanded perspective on the critical.
Coupled with the theorizations above from selected art education scholars, the chapter now contends that artistic encounters have the potential to foster the critical, the new/different, and the pedagogical – a potential that is transformative, transgressive, and holds possibilities for the not-yet and unknown. The conceptualization of the artist-theorist by visual culture scholar Irit Rogoff (2003) summarizes the above aptly. Echoing Foucault’s conceptualization, Rogoff asserts that “criticality … is precisely the operations of recognizing the limitations of one’s thought” (p. 99). Here, responding to the invitation of writing on the question ‘what is an artist,’ she notes the blurring boundary between the artist and the critic, the historian, the theorist – as Joselit (2003) did above – and advocates for further porosity and entanglement. For her, the question of what an artist is also necessitates consideration of what a theorist is. Theory, by her definition, is the constant undoing of its own assured grounding, the practice of which is ultimately critical. In her formulation, artistic practices are considered as knowledge-production and have a propensity and capacity for criticality, which necessarily entails a rupture of the established ways of knowing, and an undoing of certain allegiances and models. If one follows her argument, that learning involves the criticality (which artist-theorists can foster) of transgressing the contingent limits and forms of established knowledge and practices, then it would seem to indicate that visual art has a potential that is at once critical, new/different, and pedagogical.

4. Brecht and Estrangement

Thus far, the chapter has drawn from a variety of scholars situated in fields related to visual art and attended to their conceptualizations of what one might refer to as artistic potential.
Moving forward, this study refers to the pedagogical and critical potential when it speaks to visual art’s capacity for fostering ways of knowing otherwise. What follows is a closer examination of such potential, more specifically in the context of ICT. The exploration begins with Bertolt Brecht (1936), whose strategy is aptly emblematic of the coexistence and co-dependence of the critical attitude and a tactic for inducing the new, alien, and strange. It is the argument of this section that his concept and method of ‘estrangement’ encapsulates the above scholars’ insistence on the new/different – and, by implication, creates the possibility for the pedagogical – and the criticality emphasized by Owens (1980), Joselit (2003), Garoian (2015), Atkinson (2008), and jagodzinski (2010). In other words, estrangement is a method for actualizing the pedagogical potential of visual/media art, which, as the next section demonstrates, takes up the call for an analysis of media/technology. As early as 1927, there is documentation of Brecht feeling his way towards a concept of strangeness, of dislocating the habitual. In 1935, after viewing a performance by Mei Lan-Fang’s theatre company in Moscow, Brecht started using the term ‘Verfremdung’ (estrangement) and coined the term ‘Verfremdungseffekt’ (translated as alienation, distancing, estrangement, A-effect, V-effect, etc.) in an essay called “Alienation Effect in Chinese Acting.” In the present study, the term ‘estrangement’ is utilized, to follow the canon established by Foster, Krauss, Bois, and Buchloh (2004). In this seminal essay Brecht outlines the preliminary pillars of his art form: to make the incidents in the play appear strange so as to avoid audience identification, where “everyday things are thereby raised above the level of the obvious and automatic,” such that “the audience can no longer have the illusion of being the unseen spectator” (1936/1964, p. 92).
Often, this is achieved by exacerbating certain norms and tropes, by emphasizing and foregrounding the medium of theatre. In this essay as well as subsequent ones, Brecht puts forward proposals and manifestos on how the estrangement effect could be utilized in theatre, in concise detail with respect to acting and stagecraft, such as particular gestures, costume designs and fabric use, facial expressions during specific dialogues in certain plays, etc. The present study expands the scope beyond theatre into visual art, on which the concept has had a profound impact: it has been argued to be present in the work of contemporary artists such as Allan Sekula and Jeff Wall (Edwards, 2004) and to occupy a significant position within postmodern practices and strategies, with its “self-reflexive, anti-illusionistic montage-like devices that interrupted the flow of his plays” (Foster, Krauss, Bois, Buchloh, 2004, p. 33). As art historian Steve Edwards (2004) articulates, 1970s avant-garde practices may very well be referred to as neo-Brechtian, where the representation’s constructed nature is made salient (in the case of the work of Sekula, the medium of the camera is scrutinized). “Brecht’s ideas had a significant impact on artists, photographers, and filmmakers who consciously produced work that combined an attention to political subjects, particularly gender, with formal devices intended to interrogate and disrupt the dominant conventions of the mass media” (Edwards, 2004, p. 154). These artists utilized Brecht’s method of ‘foregrounding the device’ to scrutinize the medium of visual art (photography and video, mostly, in this instance), and by extension, the hidden ideological codes within society that appear natural and whole.
In this context, the Brechtian effect is manifested in several ways, including but not limited to “laying bare the device; breaking up the diegesis; pictorial fragmentation; direct address to the beholder; displaying techniques by which the spectator was solicited by the image” (Edwards, 2004, p. 166). Of course, while there remains an emphasis on interrogating the medium and elevating critical awareness to scrutinize unquestioned norms through a penchant for strangeness and alienation, the precise way such a strategy is taken up and manifested in various artworks differs widely. It should also be noted that, as is evident from the above, Brecht’s method has a relationship with the analysis and deconstruction of media, where the alienation effect arguably has the potential to reveal the conventions of media, to reveal the inner workings of the spectacle machine. This point on media and revealing is taken up further in the next chapter. Foster, Krauss, Bois, and Buchloh (2004) define estrangement as an aesthetic operation designed to “alert the spectator/reader to a different perception of the world, to rupture the rote repetitions of everyday speech, … the formal devices and material tools of language as integral elements in the processes of meaning production” (p. 684). According to these art historians’ analysis, Brecht’s notion of interrogating the dominant ways of knowing and pointing out their contingency upon socio-political situations significantly influenced Roland Barthes’ (1957) theorization of the sign and the necessity to investigate its naturalized and invisible operation. His work was “designed to free socially-conditioned phenomena from that stamp of familiarity which protects them against our grasp” (Brecht, 1948/1964, p. 192), which also sounds very similar to O’Sullivan’s Deleuzian argument that artworks “involve working against the habitual and the normative, working at the very edge of our subjectivities as they are” (2010, p. 206).
Here, the tactic for strangeness, the new, the otherwise, dovetails with the critical attitude that seeks to unravel the dominant and legitimized regimes. Like Brecht historian John Willett (1964), who points out that it was no coincidence that Brecht wrote his essay after his visit to Moscow, Foster et al. (2004) also maintain that Brecht drew from an earlier iteration of strangeness: the theorization of Russian Formalist Viktor Shklovsky and his 1917 essay “Art as Device” or “Art as Technique,” written in the era of the October Revolution, which outlined the artistic imperative (and implicitly a political one) of defamiliarizing the rote in order to critically examine and transform the formulaic and conventional into something new – a revolution, no less. “The A-effect consists in turning the object of which one is to be made aware … from something ordinary, familiar, immediately accessible, to something peculiar, striking, and unexpected. What is obvious is in a certain sense made incomprehensible … stripped of its inconspicuousness” (Brecht, 1940/1964, pp. 143–144). The act of making one ‘aware’ of something normally unnoticed is political in its intent. Brecht’s practice and theorization are staunchly political, advocating for the need for a ‘critical attitude’ in the artistic field. According to him, “criticism of society is ultimately revolution; there you have criticism taken to its logical conclusion and playing an active part” (1940/1964, p. 146). Similar to Foucault’s theorization of criticism noted above, Brecht’s estrangement method is designed, through making something strange, to reveal the historically-contingent and socially-constructed status of various entrenched truths, assumptions, practices, and modes of thought.
As some have argued, emphasizing the constructed nature of such truths and practices “encourages the spectators of Brecht’s plays to take the matter of political change directly into their own hands” (Foster, Krauss, Bois, Buchloh, 2004, p. 684). In addition, echoing the point Atkinson (2008) made earlier regarding learning entailing a disruption of the established forms of knowledge and practices, Brecht (1923) also articulated, as part of his critical art theory and method, the presence of a ‘pedagogical attitude’ in his work, especially in the ‘Lehrstücke’ (learning plays), which sought to cultivate new ways of communicating and the acquisition of new ‘attitudes’ (or perhaps new ways of engaging the world); “as such it has current political importance as well as utopian qualities” (Mueller, 1989, p. 33). Such a perspective reinforces the notion that the pedagogical, the critical, and the new may indeed be located in the same space (in this case, in Brecht’s work).

5. Estrangement in the Context of Media and Technology

Equipped with this tentative proposal for artistic potential, the study now ventures into the field of media and leverages the argument constructed thus far to explore how contemporary visual art might approach the polemics of digital media and what its potentials are in this specific context. German studies scholar Roswitha Mueller (1989) keenly observes that Brecht’s critical art theory and method, conceived to combat bourgeois ideology and the spectacle apparatus, is, simultaneously, a theory of media. As she illustrates, Brecht’s attack on the bourgeois ideology of the twenties is a call for subjects to wrest control away from the elite and “to achieve control over their means of production” (p. 25) to prevent a future where media access is even more restricted and control even denser, “in order to bring about a more democratic structure of communication” (p. 21).
In various instances, both filmic and dramatic, Brecht ‘estranges’ conventional narratives and representations via techniques that interrupt and fragment, such as the montage. Brecht, though at times prone to technological utopianism like his colleague Walter Benjamin,6 was wary of the illusionism and immersion that could be induced by the new media of the time, which was antithetical to the critical awareness he was trying to cultivate. Seen in this light, his theory and practice are in many ways engaged with media politics, preoccupied as they were with the critique of the spectacle, the revealing of its conventions and constructed nature (against fascism as well as, later, Hollywood), the mass-accessible and therefore emancipatory and democratic potential of film and radio (a belief I will probe in later chapters), and the possibility of cultivating critical audiences with agency instead of passive consumers. Scholars of digital humanities Burdick et al. (2012) have also commented that, as technologies can become so naturalized that their effects appear self-evident, the task is “to denaturalize these technologies and create fissures for new, imaginative possibilities to come about” (p. 135). While noted in passing and without reference to Brecht, they suggest that ‘epistemological defamiliarization’ can potentially “wake us from our passive consumption” (p. 135). The need to critically examine the technological device that ordinarily operates outside of conscious awareness, legitimized and concealed by the dominant regimes and habitual formations, is echoed by the following two scholars, who also point towards a form of estranging as a potential strategy.
While philosopher Martin Heidegger’s (1962) project of ‘fundamental ontology’ in Being and Time, the existential phenomenology of ‘Dasein’ (being-there), is not elaborated here, I wish to turn for the moment to his concept of tools, which can be helpful in the discussion on ‘estrangement.’

6 As Mueller (1989) reminds us, at one point both Brecht and Benjamin shared “an initial euphoria about the emancipatory power of technology” (p. 13), believing that video and radio were naturally more democratic.

It should be noted that although Heidegger does not use the term ‘technology’ explicitly in Being and Time, speaking rather of ‘tools,’ the present study follows philosophers of technology such as Yoni van den Eede (2010) and Peter-Paul Verbeek (2005) in incorporating Heideggerian tool-analysis within the examination of technology – where machines, devices, tools, things, and technology can be conceptualized together. In advocating for an ‘understanding-of-being,’ a background knowing that eschews the Cartesian distinction between subject and object and instead foregrounds dwelling in the world, Heidegger distinguishes between readiness-to-hand and presence-at-hand in relation to tools. In his example, the hammer used by a carpenter recedes into its usage within the world (readiness-to-hand), “it must, as it were, withdraw” (p. 70), and only becomes (temporarily) apparent when it is broken (unreadiness-to-hand), where it enters a mode of obtrusiveness, thereby taking on the quality of the present-at-hand. The latter are different types of encounters with the things of the world, a different mode of concern, where “the entities which are most closely ready-to-hand may be met as something unusable, not properly adapted for the use we have decided upon,” and “when its unusability is thus discovered, equipment becomes conspicuous” (p. 73). When this occurs, “the presence-at-hand of the ready-to-hand makes itself known in a new way” (p.
104), and “the environment announces itself afresh” (p. 105). While Heidegger’s point was to distance himself from the dominant metaphysical model through an elaboration of Dasein in place of subject/object, what is significant for the present discussion is his concept of this shift of state. The example suggests that, in order for such a shift to take place and for a different knowing to be possible, some sort of deviation from the normative state and function needs to occur, such that the tool/media/mediation/technology becomes evident, ‘revealed’ from its hidden operations. While holding a more nuanced approach that integrates the Frankfurt School and Latour into philosophy of technology, philosopher of technology Andrew Feenberg (1999) is aligned with Heidegger in that he also suggests the dominant ‘technical codes’ should be unveiled, and that “a critical theory of technology can uncover …, demystify the illusion …, and expose the relativity of prevailing technical choices” (p. 87). To add a third scholar to the mix, Brecht’s (1967/2007) insistence on the device/form has parallels with McLuhan’s (1964) focus on the medium. Against the criticism of his contemporary, Marxist realist György Lukács (1938/2007), who insists on realist ‘content’ for the proletariat, Brecht counters with the imperative of innovating, experimenting with, and defamiliarizing ‘form.’ One of these forms was that of language, and Brecht was “eager to demonstrate that language was not a neutral vehicle made to transparently convey concepts” (Foster et al., 2004, p. 33). This understanding of the non-neutrality of the device/form, and the need to highlight it, can be likened to McLuhan’s claim that “the effect of media, like their message, is really in their form and not in their content” (1959, p. 342), and that “the ‘content’ of any medium blinds us to the character of the medium” (1964/1994, p. 9).
Such blindness, numbing, or ‘narcosis’ allows the medium to operate in the periphery, where we as users “remain completely unaware of the physical, psychological and social effects media have on us” (van den Eede, 2010, p. 145). As early as 1954, in an essay titled “Notes on the Media as Art Forms,” before the explicit coinage of his well-known aphorism, McLuhan was already insisting on an examination of the communication apparatus itself, arguing that there is a widespread tendency towards “ignoring the form of communication … which is more significant than the information or idea transmitted” (as cited in Gordon, 2010, p. 86). If the media is an environment/ground that, through its “invisibility and unawareness … saturates the whole of attention” (McLuhan, 1964/1995, p. 275), then the imperative would be to create situations conducive for the environment/ground to emerge from the imperceptible everyday. Interestingly, McLuhan (1954) cites one of the writers that Lukács explicitly denigrated (he himself preferred the realism of Balzac), James Joyce, as a prime example of the prescient artist who articulated the inadequacy of printed language in an increasingly electric and sonic world. According to Terrence Gordon (2010), the unsettling effect of Joyce’s Finnegans Wake, which in Brechtian terms would constitute an estrangement of form, “creates a visible environment of media effects intended to jar us awake” (p. 86), contrasting with the comfort of everyday language, creating an anti-environment. Insofar as the Brechtian estrangement effect is an “instant of intrusion into the everyday … which always takes off from the numbness and familiarity of everyday life” (Jameson, 1998, p. 84), one can pinpoint a similarity between Brecht and McLuhan in their call for revealing and examining ‘form’ through its disruption.
Taken together, I argue that these specific approaches and concepts of Brecht (1936), Heidegger (1962), and McLuhan (1964) illustrate a productive tension between the clandestine operations of technological/media apparatuses and the artistic method of estrangement – where the latter offers a fitting answer to the issue of the former. The combined reading indicates a generative interaction between the conceit that technology can only be examined through deviation from its normalized function and the artistic potential of estrangement, whereby the habitual, the norm, and the dominant are made anew and different to create the possibility of alternative ways of knowing, seeing, and doing. In other words, if bringing the technological medium to the foreground is imperative for the analysis of technology, and deviation from its normal everyday function is necessary for this purpose, then estrangement appears to be very well suited for the task of such analysis and critique.

6. Estrangement as Artistic Potential

The present chapter has analyzed what various scholars have advanced regarding what artists and artworks can do or ought to do. Through this analysis, initiated by O’Sullivan’s (2010) proposition and supplemented by Foucault’s (1981, 1997) theorization of critique, I have argued for a different way of approaching the dichotomy. Instead, I wish to propose a lens that views the critical not as separate from the nebulous otherness and newness O’Sullivan is groping for, but rather as one of the latter’s possibilities. Shifting the focus this way, I believe, allows one to be more generous with the analysis of art, and creates room for a more capacious theorization that is not simply, for example, attempting to over-correct the dominance and legacy of conceptual art and the linguistic turn of the postmodern era.
In the context of the present study, I have suggested that the concept of estrangement provides a space to think about the critical, the pedagogical, and the new/different simultaneously. Moreover, I argue that estrangement can offer helpful guidance for the present study, in its unique relation to theories of technology and digital media, as the above demonstrates. Specifically, if one follows Heidegger’s (1962) and McLuhan’s (1964) suggestion that the study of media/technology, which normally functions outside of awareness due to its immersiveness, cannot occur without some form of deviation from the normal configuration, then the act of estranging holds promise for such an endeavor. To go back to Wilson’s (2002) list of potentials noted earlier, estrangement is perhaps a tactic that both encompasses and goes beyond the often-listed potentials of media art, such as its presumed affinity with the democratic, the participatory, the interactive, the open, the communicative, etc. The conceptualization of estrangement holds the possibility of these capacities, but is not satisfied with them in perpetuity, nor does it settle into them; instead it is always in flux, agitation, and play. Only by doing so can the possibilities of fostering rupture, critique, and ways of knowing otherwise be substantiated. Moving forward, I utilize the argument articulated in this chapter and the concept of estrangement, in conjunction with the ensuing exploration of concepts, as a lens with which to analyze the case studies of specific artworks. Visual art engaged with digital media has the capacity to destabilize, disrupt, and subvert the normalized and entrenched information-based way of knowing through its pedagogical and critical potential for fostering ways of knowing otherwise, specifically in the present context of ICT, through the tactic of estrangement.
Theorist of critical pedagogy Henry Giroux (2018) tasks educators/artists with the urgent responsibility of cultivating agency, change, public good, and democracy. For him, it is the social responsibility of the educator/artist to enable practices and possibilities wherein one can “think otherwise in order to act otherwise, hold power accountable, and imagine the unimaginable” (para. 1). Similar calls to action can also be read in some of the articulations of other scholars noted above, such as jagodzinski (2010). While I am supportive of such a militant stance, it should be emphasized that, of course, the following arguments are simply ways of exploring, analyzing, and thinking about the artworks and their potential. I propose that, in various ways, the artworks in this research estrange phenomena and concepts related to digital media and its attendant polemics, but I make no conclusive claims regarding this act of estrangement, nor its effects. In other words, I am not arguing that these artworks do induce epistemic shifts or are artistic ‘events,’ nor that they offer a recipe or constitute a toolkit of sorts for creating such desired end results; rather, I am arguing, given the examination of the concepts and the artworks themselves, that they have such potential. As Ellsworth (2005) insists, the pedagogical potential of these sites of learning, such as visual art, cannot be prescribed and pre-determined like learning outcomes; it is “impossible for an artist, designer, architect, or teacher to anticipate what form a learning will take or how it will be used” (p. 54). “The instability and fluidity of pedagogy holds the potential for an unknowable and unforeseeable” (p. 55) encounter, which, like critique, necessitates a transformation without dictating and settling on a particular form, but rather remains mobile.
The caveat may seem like an easy way out, but it bears emphasizing, for as jagodzinski (2017) writes, “what art can ‘do’ refers to the effects it produces, and these are, for the most part, unpredictable” (p. 23). While the ‘effects’ certainly cannot be engineered and planned with precision, and the claim that certain artworks have the potential to estrange and cultivate different ways of knowing cannot in any way be guaranteed, such an argument and theorization can still be made. For even if it is simply a potential, it is still one worth exploring, speculating on, and articulating. As jagodzinski (2010) reminds us, “there are no guarantees that one will be transformed by such an art and its education; however, I believe that we as artists and educators … should orient ourselves towards just such a potentiality” (p. 139).

Chapter 4: The Logic of Commensurability

This chapter begins the reciprocal analyses between concepts and research objects. In particular, under the premise of investigating the concept of the digital, it establishes the socio-political landscape and context within which the research is being conducted – one that recognizes the pervasive algorithmic activities of informatics within the big data economy of information society. As stated previously, in order for media art to address digital media and its socio-political and material conditions, it needs to examine the underpinning logic of information within the wider landscape of information capitalism. Stemming from this landscape, the chapter considers the suggestion that the imperative is to scrutinize the technological configurations (ICT) through which information capitalism operates – to reveal the device, so to speak, albeit in an expanded sense that goes beyond simply opening the black box.
To substantiate this expansion, it draws a link between knowing and the digital through the pertinent unit of information, and positions information as a way of knowing, containing its own logic based on the assumption that information is commensurable with all phenomena. Establishing this argument is vital for the assertion, drawing from the previous chapter, that visual art engaged in digital media polemics has the potential to destabilize this way of knowing fostered by ICT. The latter half of the chapter re-emphasizes estrangement’s complementarity with the analysis of digital media and media art. The last part of the chapter consists of the close reading of two case studies, where the works of performance artist Erica Scourti and critical engineer Julian Oliver are read in conjunction with the argument that the chapter has advanced and through the Brechtian theoretical framework articulated in the previous chapter.

1. The Task of Revealing

In her extensive account of studying the hacktivist group Anonymous, anthropologist Gabriella Coleman (2014) recounts a conversation with Mercedes Haefer, a young member of the AnonOps network, a more militant faction within the loose, unpredictable, and non-cohesive Anonymous. Speaking about Anon’s support of Wikileaks (in the form of a DDoS attack on PayPal) during their own effort to support the publishing of materials from whistleblower Chelsea Manning, Haefer explains that “it wasn’t about supporting Assange … it was about supporting freedom of speech and government transparency” (p. 199). Julian Assange, Anonymous, Chelsea Manning, Pirate Bay, and later, Edward Snowden, broadly categorized under the ‘fifth estate,’ “the hackers, leakers, independent journalists, and bloggers who serve the critical role that once fell to … the mainstream media” (Coleman, 2014, p. 84), all exhibit a strong adherence to the value of “freedom of information, expression, and sharing” (p. 112).
Assange (2011) has, in conversation with several artists and the curator Hans Ulrich Obrist, stressed Wikileaks’ dedication to advocating for access to information and the right to expression without institutional silencing and repercussions. He equates the potential for a just society with complete transparency and the existence of an open and comprehensive information repository, leading him to urge that “we need to see power from every angle if we are to understand and shape it” (para. 20). The falsity of the network’s utopian potential has been discussed by numerous scholars (Assange, 2011; Carr, 2012; Galloway, 2004; Galloway, 2007; Hu, 2015; Koopman, 2019; Lash, 2007; Mejias, 2013; Stallabrass, 2003; Taylor, 2014; Winner, 1986). Speaking to the loose usage and frequent association of ‘decentralization’ with technological advances, philosopher of technology Langdon Winner (1986) reminds us that “dreams of instant liberation from centralized social control have accompanied virtually every important new technological system introduced during the past century and a half” (pp. 95-96), and that decentralization was once even touted as a quality that electricity would bring to society, a perspective that seems short-sighted by today’s standards. Regarding the persistence of control in decentralization, media theorist, philosopher, and programmer Alexander Galloway (2007) has said that “there is a certain naïve rhetoric around networks being liberating, being anti-hierarchy, ‘information wants to be free,’ and so on.… We do not desire networks to be free, we expend any amount of energy to abolish them in a wash of retrograde, pyramidal reorganization” (para. 4).
In an interview with curator Mohamed Salemy, Galloway (2013) distinguishes between early net optimism (with its potential for liberating and democratizing information) and a much more cautious and vigilant approach to information society by comparing the earlier work of Gilles Deleuze (1980), here epitomized by the rhizome, to his later work as demonstrated by the ‘societies of control’ (1992). The earlier optimism has been replaced by a much more apocalyptic vision of networked control and stochastic management. Open access activist and author Cory Doctorow (2010) also reminds us that ‘information wants to be free’ is only half of Stewart Brand’s aphorism uttered in 1984, and that together with the other half, ‘information wants to be expensive because it’s so valuable,’ it aptly highlights the paradox of the information age and the dominance of capitalist interests, which sustain the model that “an information economy must be based on buying and selling information” (Doctorow, 2007, para. 2). Media theorist and author of A Hacker’s Manifesto (2004) McKenzie Wark succinctly updates the aphorism as follows: “information wants to be free but is everywhere in chains” (p. 55). There is ample literature, from law to media studies to information science to web design, on the various ways certain biases, values, and assumptions creep into technological designs, evident in recent publications such as Algorithms of Oppression: How Search Engines Reinforce Racism (2018), Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy (2017), and Technically Wrong: Sexist Apps, Biased Algorithms, and Other Threats of Toxic Tech (2017). Yet it is difficult to argue against the monumental ubiquity and dominance of the big data industry and the hold it has on the everyday. Despite the rich scholarship noted above, ‘mythinformation’ persists.
As part of his conceptualization of the neutral, functionalist, and downright utopic perspectives people hold for technology, Winner (1986) outlined, as early as 1986, several persistent myths surrounding information: that it will level class distinction and de-hierarchize society, that it is ‘free’ in both the financial and mobile sense, and that computation will bring about a truer form of participatory democracy through the abundance of information, the latter being a social good and key to democracy (the more information the better!). In this discussion, he puts it succinctly by taking the argument out of the exclusively technological realm and placing it within the larger political context. As he points out, what enthusiasts usually overlook in their utopian forecasts is that the distribution of power remains despite technological advancements. That is, while new devices grant particular affordances and capacities (for example, making things more convenient in general and increasing living standards), the changes these advancements bring remain proportional to the existing relationship between those in power and those who are not. Or, as scholar of communications and education Ulises Mejias (2013) writes of the network’s tendency to reinforce inequalities, “the rich nodes in those networks tend to get richer” (p. 4). This point, of the imbrication between the technological and socio-political, is overlooked if one subscribes to a perspective of technology as neutral, autonomous, and essentialist. The above perspectives, which set up the socio-political stakes of information society, ask the following question in relation to visual art: is the role of the artist to reveal information and to advocate for transparency of the device?
As art historian Jakub Zdebik (2011) shows, this had certainly been the case with the work of artists such as Hans Haacke, Mark Lombardi, and Josh On (with their diagrams and lists of power), whose legacy can be discerned in the contemporary work of artists dealing with surveillance and visibility such as Trevor Paglen and Taryn Simon. As the previous chapter shows, an argument of ‘revealing’ can be identified in the approach of Brecht, with an intention to scrutinize and upend conventions. However, the artists Morten Søndergaard and Jamie Allen (2016) argued against this approach, naming Brecht specifically and insisting that simply revealing and seeking transparency is no longer a viable strategy for artists to critique the ‘infrastructure,’ their term for a distributed institutional assemblage of control that may be similar to Galloway’s ‘protocol.’ Allen and Søndergaard’s contention was that revealing the inner workings of the machine (literal machines as well as institutional assemblages) is not enough; instead one should engage it directly, to couple, to modulate, and to generate new information. As their example, they described a project created for Eyebeam, New York, consisting of PSA-like sound installations deployed as public interventions. In addition, political philosopher Jodi Dean (2003) acutely points out how the logic of transparency has been ironically co-opted by the ICT industries to endorse and perpetuate the insatiable practice of data-mining, so much so that she characterizes ‘full publicity’ as “the ideology of technoculture” (p. 101), turning the aforementioned value espoused by Assange into fodder for capital. At first glance, it would appear that a dichotomy is forming between artists who seek to ‘reveal’ and artists who claim to be taking a further step to engage directly with the machine.
This chapter re-asserts the relevance of the Brechtian approach as both a way of analyzing media art and a tactic of media art practices, by expanding the loose notion of revealing beyond the distinction between transparency and active political intervention. Instead, it asserts the more capacious perspective that ‘revealing’ can be more productively thought of as the conditions for a different way of coming to know and engage with the phenomena of the world, emphasizing estrangement’s focus on unsettling the norm and bringing about the unprecedented, rather than transparency strictly. It suggests that the artistic approaches of investigating for transparency and of anti-technocratic political action are not completely dissimilar, if one frames them more broadly as attempts to foster ways of knowing beyond the digital and informatics as they currently are. The two case studies in this chapter by Erica Scourti and Julian Oliver will demonstrate this expanded reading. This is achieved, first and foremost, by establishing the perspective that ‘the digital’ is a way of knowing, which asks: what is the underlying epistemic assumption of ‘information,’ and how might one make sense of the world in a different way? This chapter takes cyberculture and ICT scholar David Gunkel’s (2007) critique of digital reasoning and his proposition to ‘think otherwise’ as a springboard. Gunkel begins his book Thinking Otherwise: Philosophy, Communication, Technology by pinpointing an affinity between information and knowing, referring to the prevalence and seeming inevitability of binary thinking. As he outlines, the structure of digital information, the binary (1 or 0, on or off, two discrete variables), mirrors a structuralist understanding and possesses similarities with other systems of difference, such as language (via Saussure), and arguably “characterizes fundamental structures of human cognition and communication” (p. 13).
Gunkel, however, views binaries as suspect in the first place (taking cues from Derridean deconstruction and Haraway’s cyborg theory). Binaries, such as subject/object, male/female, and mind/body, privilege one term over the other, containing a hierarchy that “installs, underwrites, and justifies systems of inequality, domination, and prejudice” (p. 43). For him, “what is needed is not more research data to prove one side or the other but a qualitatively different way of considering the philosophical dimensions of information and communication technology (ICT) … and a mode of critical thinking that is able to operate and proceed otherwise” (p. 4). Before venturing further, I need to note that while terms such as technology and media are used repeatedly, I follow the media theorists Jay Bolter and Richard Grusin (1999) in using these terms without assumptions of technological determinism or formalism. Instead, I proceed with the understanding that technology/media are never considered in isolation, but are rather “hybrids of technical, material, social, and economic factors” (p. 77), and that the terms are better considered as shorthand for denoting the plethora of social, discursive, and material forces that give rise to, sustain, and alter what one comes to understand as technology/media. Lastly, while the terms are obviously distinct in many ways, for the purpose of this research, ‘technology’ and ‘media’ are often used interchangeably, following McLuhan’s usage in which “any medium or technology” (1964/1994, p. 8) appears synonymous with “any extension of ourselves” (1964/1994, p. 7). Such extension might be “man’s effort to cope with his physical environment … and his attempt to subdue or control the environment by means of his imagination and ingenuity” (Wilson, 2002, p. 13), and “the means by which the affordances of Nature are convened for the purposes of civilization is called technology” (Gottlieb, 2018, p.
16), a definition that is complicated later in this chapter as well as in Chapter 6.

2. Black Boxes

The ‘opening’ of the black box (generally thought of as an instrumental and opaque entity consisting only of inputs and outputs, offering no access to its interior) has been utilized by sociologists of technology to advocate for studying the socially-contingent origins, functions, and structures of technologies (Winner, 1993). The rising ubiquity of algorithmic organization and decision-making, according to Microsoft Research principal researcher Kate Crawford and communication scholar Mike Ananny (2016), has led to a resurgence of transparency as a measure of accountability within algorithmic operations. However, they intriguingly suggest that such a pursuit may have its pitfalls, and that ‘looking inside the black box’ may not be as advisable as it seems. While historically the concept of transparency functioned as an ideal that “offered a way to see inside the truth of a system” and hinged on an assumed sequence between seeing, knowledge, and agency, for Ananny and Crawford, “transparency arguments come at the cost of a deeper engagement with the material and ideological realities” (p. 2). The writers outline several points on the inadequacy of the pursuit of transparency, including how it creates a false binary of total openness or secrecy, how it can inadvertently contribute to the neoliberal model of information consumption and production (as Dean noted above), and how it may even ironically assist in strategic opacity by contributing to information glut. Instead, they propose to look not at the inside of one system, but across several, as relational assemblages of the human and non-human.
The inadequacy of revealing and the need to go beyond it are perhaps most aptly summarized when they write that the imperative is to ensure a system “is not only visible but also debated and changeable by observers who are able to consider how they know what they know about it” (p. 9), emphasizing conditions of knowing, alternative engagements, and social interactions. “To ask to look inside the ‘black box’ is perhaps too limited a demand and ultimately an ill-fitted metaphor” (p. 10), they write. While Ananny and Crawford identified numerous origins for the concept of transparency, such as Enlightenment era empiricism and political accountability, Galloway (2011) traces the conceptual origins to Marx, where revealing the black box (a placeholder for the commodity, the sign, the spectacle, the Freudian ego) became a political imperative: to unveil the inner workings of the production circuit. Framed in this light, the black box and the arguments for revealing and transparency do not just concern digital media and technology, but have a significant place within the history of critique. For such critique to be relevant today, Galloway (2011) advocates that “it is no longer a question of illuminating the black box … but rather that of functionalizing the black box by programming it” (p. 244). In other words, the task is not to ignore the configuration of cybernetic management through ICT, but to abandon a revealing/transparency project for an alternative model of critique that engages more directly. As Ananny and Crawford (2016) also suggest, “in digital contexts, transparency is not simply about revealing information … but continually deploying, configuring, and resisting platforms, algorithms, and machine-learning protocols” (p. 11). Has the argument of transparency and revealing lost its potency and currency? Or does it simply need to be reconsidered, expanded, and reframed?
In order to set up such an expansion and to argue for the relationship between the digital and knowing, the next section first outlines the socio-political landscape of information capitalism, followed by a deeper look at the constituent unit of information – the bit – and its claims.

3. Information Capitalism and the Logic of Commensurability

Before listing a few theoretical approaches to information society, it is worth pausing briefly on just what information is and does. Historian James Gleick (2011) traces the word ‘information’ to the watershed moment of information theory’s birth in 1948, when mathematician Claude Shannon arrived at this seemingly impossible universal unit. Shannon’s binary digit (bit) achieved an unprecedented universality through de-contextualization and standardization, giving rise to the first wave of cybernetics. Information is “a probability function with no dimension, no materiality, and no necessary connection with meaning” (as cited in Hayles, 1999, p. 18), an entity defined as pattern, not presence, and unencumbered by context. What information purports to do, simultaneously, is represent anything and everything; it has quickly been accepted in a large variety of fields and disciplines as the most basic and fundamental constituent unit. It is both content and form. This, of course, needs to be operationalized through computation, and “the power and universal applicability of computation has made it look to us as if, quite literally, everything might be made out of computation [emphasis in original]” (Golumbia, 2009, p. 19). In the contemporary environment, information has become the pertinent unit of economic exchange, housed within an increasingly dominant data-reliant and algorithmically-governed socio-economic structure, armed with the logic of Shannon’s information theory.
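As an illustrative aside (a minimal sketch of Shannon’s measure, not part of the theoretical argument itself), the bit’s indifference to meaning can be made concrete: information is computed from probabilities alone, so any source whatsoever – text, images, behaviour – is measured on the same scale once rendered as a probability distribution.

```python
import math

def shannon_entropy(probabilities):
    """Average information of a source, in bits: H = -sum(p * log2 p)."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

# A fair coin yields exactly one bit per toss, the unit's defining case.
print(shannon_entropy([0.5, 0.5]))   # 1.0
# A biased coin yields less: greater predictability, less 'difference'.
print(shannon_entropy([0.9, 0.1]))   # ~0.47
```

Nothing in the calculation refers to what the outcomes mean – precisely the de-contextualization discussed above.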
As philosopher Baruch Gottlieb (2018) writes, “the computation of digital information, which is abstracted multiple times from the phenomena in the world … provide a promise of absolute human control” (p. 5). This aptly gets at the main polemic of the study: that within the deployment of ICT for control and exploitation within information capitalism lies the abstracting act of information and its claim to represent the phenomena of the world, which the study proceeds to unravel and question through the case studies of artworks. This research utilizes the concept of ‘information society’ as the socio-economic backdrop upon which to situate the discussion, which sociologist Scott Lash (2002) describes as characterized by “knowledge-intensive production and a post-industrial array of goods and service that are produced” (p. 2), while questioning its claims of rationality. Renowned sociologist of information Manuel Castells (1996) describes the informational society as a new mode of development where both the source of productivity and the product output are information technology, information processing, and symbolic communication. Stressing the plurality of ‘informational societies,’ he describes them as configurations where “the core processes of knowledge generation, economic productivity, political/military power, and media communication are … deeply transformed by the informational paradigm, and are connected to global networks of wealth, power, and symbols working under such a logic” (p. 20). He argues that this new mode of development is inextricably linked to the capitalist mode of production and the aggressive post-Keynesian expansion of the 1980s, such that it should be appropriately termed ‘informational capitalism.’ There is no shortage of scholarly perspectives on this information-capital configuration.
In The Mode of Information (1990), media theorist Mark Poster, through a Foucauldian reading of databases, counters the optimism of enthusiasts who foresee perfect communication via free information by reminding readers that market logic and the power of discourse (or rather, information) persist in the information age. Autonomist Marxist philosopher Maurizio Lazzarato (1996) conceptualizes the configuration through what he terms ‘immaterial labour,’ encompassing the ways in which socio-cultural, affective, and communicative elements have all been subsumed into the production circuit, such that one is constantly performing immaterial labour. McKenzie Wark (2004) points to the dominance of the ‘vectorialist class’ in information society, a term she uses to designate the elite proprietors of the media, platforms, and vehicles upon which informatic operations are enacted – the ICT corporations. As she writes, “the commodification of information means the enslavement of the world to the interest of those whose margins depend on information’s scarcity” (p. 56). Information has the ‘virtual’ potential to be free and to transform, but not under the rule of the vectorialist class. ‘Communicative capitalism’ is the term preferred by Dean (2005), emphasizing the commodification of communication, rather than its former, more utopian, role within the discourse of the Habermasian public sphere. As an acute counter to Michel de Certeau’s (1984) optimistic but dated idea that the citizen produces tactically and organically at the ground level, away from the vertical and oppressive ordering of the city planner’s gaze, Lash (2007) has put forward the argument of ‘post-hegemonic power,’ in which control has abandoned verticality to become distributed and to permeate the network of the everyday, operating insidiously and invisibly from within rather than from above, where “dominion is through communication” (p. 66).
Media philosopher Matteo Pasquinelli (2009) cautions that the Foucauldian panopticon has been distributed through Google’s PageRank algorithm, an unseen mechanism that leverages and abuses the public’s general intellect (the elusive collective potential, in Marxist terms) in order to commodify the attention and knowledge-labour of everyday users. The production and consumption of information as the site of political struggle is made even more evident in writer Nicholas Carr’s (2012) concept of ‘digital sharecropping,’ which strategically conflates work and play in a way that extracts value through the free labour voluntarily provided by the masses of users. Lastly, cultural theorist Ted Striphas (2015), using the term ‘algorithmic culture,’ describes the assumption that the multiplicity of being can all be encoded, which entails the conversion of all expressions, concepts, and relations into quantifiable bits. The socio-political and digital-economic configurations outlined by the scholars above led engineer and theorist Tung-Hui Hu (2015) to put forward, following Foucault’s disciplinary and decentralized society and Deleuze’s distributed control society, his model of the ‘cloud society,’ which “uses economic and affective incentives … to produce the subject position of the ‘user’” – users who involuntarily and naturally function as “active partners in the cloud’s mechanism of control” (p. 112), under “the sovereignty of data, a construction that joins war and security, users and use value, participation and opposition” (p. xxix). A common thread linking the arguments above is how the everyday has been capitalized via information, under the assumption that everything can be represented via information. This constitutes a way of knowing fostered by ICT, and this worldview is imperative for the operation and perpetuation of information capitalism.
The internalization of this worldview, what I call the ‘logic of commensurability,’ is precisely the mark of this user/subject, which compels them to ceaselessly produce and consume information, to participate in the encoding of the world. Therefore, cultivating ways of knowing otherwise, of coming to know the world differently than the one fostered by informatics, is vital. This can perhaps be most aptly summarized by the similar concept of ‘computationalism,’ which I will explain through Jaron Lanier’s (2010) and David Golumbia’s (2009) usage. A well-known figure in Silicon Valley circles and a contemporary of early cyber pioneers such as John Perry Barlow (Electronic Frontier Foundation) and Kevin Kelly (Wired), Lanier is known for his work in virtual reality (and for coining the term VR) and for companies like Microsoft. However, his perspective has since changed drastically, and he is now a very vocal critic of cyberculture and components within it, such as social media. In You Are Not a Gadget, he defines computationalism as “the underlying philosophy … that the world can be understood as a computational process, with people as sub-processes” (2010, p. 153). But he argues that such a claim cannot be substantiated. As he writes, “personal reductionism has always been present in information systems” (p. 68). “Information systems need to have information in order to run, but information underrepresents reality” (p. 69). Computationalism is a philosophical mistake, based on “the belief that computers can presently represent human thought or human relationships” (p. 69), with the assumption that information’s inadequacy can be resolved by magnitude (big data: the more the better). Lanier refers to adherents of such a perspective as ‘cybernetic totalists,’ who, according to him, place too much stock in the ability of bits to represent.
Against the ubiquity of cybernetic totalism that renders people “overly defined, and restricted in practice to what can be represented in a computer” (p. 10), he staunchly asserts a humanism that emphasizes the self and the individual. Lanier’s proposal perhaps veers dangerously back to anthropocentrism and attenuates the desire to de-centre the subject, as in the scholarship of speculative realism, OOO, originary technicity, or Deleuzian ontology. Nevertheless, coming from an expert in the field of computer science, the assertion that information cannot adequately represent the world and the phenomena within it is particularly resonant and persuasive. Another computer-engineer-turned-media-theorist like Lanier and Tung-Hui Hu, Golumbia (2009) locates computationalism within analytical philosophy and defines it as the perspective that the human mind is ultimately a computer, leading to the assumption that “perhaps all of human and social experience can be explained via computational processes” (p. 8). He goes beyond this definition in his theorization to include utopic assumptions about computation and its intrinsically democratic potential to level hierarchies and solve social problems. Against these assumptions, Golumbia asserts that computationalism complements instrumental rationality and often aids institutional power rather than liberating the commons. The finer distinction between computationalism as used by these scholars and the ‘logic of commensurability’ is teased out in a later section.

4. Universal Language and the Pertinent Unit

Having established that information capitalism operates by encoding the everyday as information, this section takes a closer look at the pertinent unit that supports the logic of commensurability: the binary digit.
Art theorist Marina Vishmidt (2006) categorizes the above techno-political configuration under the Marxist term ‘real subsumption,’ a process whereby “all social production has been brought into the fold of capitalist value extraction without any residue” (p. 48). One might say the process of real subsumption has been accelerated via the ICT industry. Just as architecture historian Robert Tavernor (2007) writes that any historical standard of measurement was only able to catch on once a sufficiently universal and general unit was created, the process whereby everyday attributes (such as personal preferences, communications, social interactions, affective responses, attention span and user engagement, etc.) are injected into the production circuit likewise required such a pertinent unit, a role that has been filled by the symbolic representational unit of the bit. As this section demonstrates, under the framework of informatics, the bit creates the conditions for meaning-making through its binary structure. Gleick (2011) traces the line of thinking that contributed to the growing momentum of the pursuit of a universal and decontextualized language through mathematicians and theorists such as Charles Babbage, George Boole, Shannon, Norbert Wiener, and Alan Turing. According to Gleick, Bertrand Russell (with whom Wiener studied) and Alfred North Whitehead envisioned mathematics to be one such language in their work Principia Mathematica (1910). Boolean algebra was underpinned by a desire to “convert language, or truth, into algebraic symbols” (p. 164), writes Gleick, continuing “the dreams of Charles Babbage and Ada Lovelace: … numbers standing for anything at all” (p. 182).
Demonstrating that such an emphasis on discrete units makes up the theoretical history of information, he summarizes that “this elusive quarry had been pursued by Boole, and before him, Babbage, and long before either of them, Leibniz, all believing that the perfection of reasoning could come with the perfect encoding of thought” (p. 178). Such a belief in the possibility that one thing can stand for another, that units, symbols, code, or numbers can represent something else sufficiently, neutrally, and adequately, is what I refer to as ‘commensurability.’ In order for a closed system of commensurability to operate, there need to be constituent units, general equivalents that can be abstracted and exchanged. Symbolic communication is, of course, one of the characteristic elements identified by Castells in his conception of the informational society. Psychoanalyst Jacques Lacan (1955/1988) suggested a link between restricted language (devoid of context and viewed through syntax) and cybernetics, arguing that both are governed by a reliance on rules and a belief in commensurability. He explicitly links language and binaries when he suggests that “the world of symbols … will tend towards the establishing of the binary order which leads to what we call cybernetics … that anything can be written in terms of 0 and 1” (p. 300). The media theorist Friedrich Kittler (1997) also points out that the symbolic has a strong affinity with the machinic, and that “numbers are symbolic as long as they are, like signs in general, substitutable, … representable through the two binary numbers” (p. 137). Foucault (1966/2005) terms “the Leibnizian project of establishing a mathematics of qualitative orders” (p. 63) the ‘Classical episteme,’ a model of knowing that sought a complete enumeration of the world.
Having risen concurrently with sixteenth-century Rationalism, the Classical episteme sought “an absolute certain knowledge … by means of measurement with a common unit … in an order” (p. 61). Specifically, one of the tools for this universal calculus was the system of signs – a language, where “the ordering of things by means of signs constitutes all empirical forms of knowledge” (p. 64). The link between information and language is not surprising, considering that Shannon’s seminal work on information theory was predicated on this link. This was the topic of Shannon’s presentation at the Macy Conference on Cybernetics on March 22, 1950, where he discussed the degree of redundancy in the English language and the most efficient means of calculating and transmitting intelligibility/thoughts (measured in bits, of course), which already presumes that thoughts can be encoded (Gleick, 2011). The mathematicians and progenitors of cybernetics noted above, such as Babbage, Boole, Turing, Wiener, and Shannon, all arrived at a similar conclusion: that at the very least, the systems require two variables, be it on or off, true or false, all or nothing, this or that, and the ubiquitous units today, one or zero. Gleick cogently summarizes that “any difference meant a binary choice,” which “began the expressing of cogitations” (p. 161). In other words, under this framework, it is through difference7 that knowing and sense-making is made possible, the most basic iteration of which is the binary. As media and literary theorist Katherine Hayles (1999) writes of one of the prominent figures in cybernetics, “Gregory Bateson defined information as a difference that makes a difference; if there is no difference, there is no information” (p. 103).
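Gleick’s observation that “any difference meant a binary choice” can be sketched concretely (again, an illustrative aside rather than part of the argument): once a symbol is assigned a number, it reduces without remainder to a string of yes/no distinctions, the general equivalent on which the claim of commensurability trades.

```python
def to_bits(text):
    """Render each character as the 8-bit binary form of its UTF-8 byte(s)."""
    return ' '.join(f'{byte:08b}' for byte in text.encode('utf-8'))

# Any text at all becomes nothing but ones and zeroes.
print(to_bits('no'))  # 01101110 01101111
```

The two characters survive only as patterns of difference; their context and sense play no part in the encoding.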
Comparing the perspective on information of mathematician Norbert Wiener (who coined the term cybernetics) to linguist Ferdinand de Saussure’s conception of language, Hayles (1999) highlights a similarity in that both would argue that meaning comes from difference. For Saussure, meaning is derived from a signifier’s differential relation to another, due to the arbitrariness and contingency of language. As Bishop (2012) reminds us, the digital is code, which is, “at base, a linguistic model.” For Wiener, the meaning of communication is likewise contingent upon its difference from other messages, the abstract patterns of which are the target of information theory’s probabilistic calculations. The model of commensurability, where patterns of data are unencumbered by context and move readily between various substrates, is “constituted as a universal exchange system that allows data to move across boundaries” (Hayles, 1999, p. 98) between the material and the informational. In other words, the digital could be framed as a system of difference based on binaries.

7 Difference here is simply used in a non-specialized way, characterized by non-resemblance. It does not refer to the concept in Deleuze’s Difference and Repetition, where repetition leads to difference, conceived of as a dynamic change agent necessary for the virtual to become actual in his ontology of immanence.

5. Digitality and Knowing

Up to this point, I have relied on the ideas of key figures in the history of cybernetics and information studies to explore how current ICT is underpinned by assumptions of commensurability, which have a close relationship to knowing and meaning-making. Galloway (2014) goes a step further and pulls philosophy, specifically metaphysics, into the discussion, asserting that “digital thinking – the binarisms of being and other or self and world – is often synonymous with what it means to think at all” (p.
xviii), referencing the metaphysical split between the knowing subject and the external object. He draws philosophy and the digital into a close comparison, showing that as “philosophy relies on opposition, reflection, or relation between two or more elements … digitality entails a basic distinction, whether zeroes and ones or some other set of discrete units” (p. xix). In an unconventional way, he defines the digital not through technical terms, as most do, but rather through the act of making-discrete, cleaving, dividing, and categorizing. In doing so, Galloway opens up the often quite literal perspectives on digital media to reframe the discussion around the ‘digital act’ of making discrete units, which broadens the discussion not only to the possibility of digitality and metaphysics (i.e. subject or object, transcendence or immanence), but also digitality and knowing. Galloway is not alone in framing the question of the digital in metaphysical terms. Media artist and theorist Stephen Wilson (2002) also points out that the Cartesian faith in reason and rationality, the privileging of the subject/mind over the senses/body, is an epistemic model that has continued all the way to the digital. “In the Enlightenment, the mind was seen as much more important than the body. It is easy to see the contemporary ascendancy of virtual bodies and places as a continuation of these themes” (p. 633). This model is one of duality that separates the subject and object, and in contemporary ICT manifests itself in the primacy of the immaterial and informational. In Hayles’ (1999) words, it is a privileging of pattern over presence. Like Galloway, Wilson (2002) notes the separating act of binaries: “by its very nature, digital representation requires the breaking apart of phenomena and their representation by symbolic bits” (p. 631).
Under the epistemic model of the digital/informational, such representations create the conditions for knowing, where a subject comes to know the world through data-based representations. Similarly, Golumbia (2009) links the promises of computation with the tradition of rationalism and the desire to map all phenomena of the world. He also locates this tradition in the height of the Enlightenment and the ‘patron saint of computationalism,’ Leibniz, linking computation with “views according to which cognition is a process of abstracting away from the embeddedness of material culture” (p. 8). In this sense, computation, or digitality, not only links ways of knowing to rationality and abstraction, it also participates in continuing the primacy of the mind and subject over body and object. Philosopher and AI scholar Hubert Dreyfus (1991), drawing on Heidegger, likewise calls into question the assumption held by philosophers from Plato to Descartes that knowledge is framed as “a relation of a subject to objects by way of mental meanings” (p. 3), such that the task is simply to discover context-free facts/objects and the rules and systems that govern them, leading to an ‘information-processing’ model of the mind. This, for him, is the legacy of Husserl’s Cartesian project. In light of these perspectives, the (digital) gap/split between the knowing subject and the objects to be known appears to be a prerequisite for the conceptualization of knowing (knowing-that, to be specific), one that has been dominant in the history of Western philosophy. Such knowing, the Cartesian epistemology, has been linked to technological assumptions of the explicit and intelligible (Guignon, 1983; Gottlieb, 2018). Philosopher of digital art Baruch Gottlieb (2018) articulates a similar argument through his perspective that technology is an abstraction, an act of domestication of the world.
And such abstraction, starting at least with language, created a condition of knowing where the subject is extricated from the material world, the mind divorced from the body, and the world encoded and converted “into linear laws and principles … a mechanical model divided up into recomposable, standardized parts” (Gottlieb, 2018, pp. 22–23). In short, encoded as objects of knowledge. In this epistemic model, the knowing subject comes to know through representations of the world, and such representations can only be constituted by a universal language of pertinent units, such as information. This epistemic model is perhaps also what Foucault (1966/2005) describes in The Order of Things, when he articulates that “to know is to discriminate” (p. 61). Against the Baroque episteme of similitude, Foucault advances the idea that the reigning model of knowing since the sixteenth century is the Classical episteme of difference, a closed system where total enumeration is thought to be possible through comparison and categorization. Under the broadened definition of Galloway, one could say that such an episteme has characteristics of the digital, or that such a model of knowing is digital, for it presumes binaries, be it subject/object or one/zero, are the necessary and sufficient conditions for knowing. The link between the digital and such a dominant mode of knowing is highly relevant today and evident in the close relationship between traditional classification systems of knowledge (i.e. the Library of Congress) and commercial search engines (Noble, 2018). Exploring how artists and their work tackle and propose alternatives against this way of knowing, one fostered by informatics – the logic of commensurability – is the main task of this research. The study utilizes the term ‘logic of commensurability’ to denote the worldview of information and its claim of being capable of representing all aspects of the world and its phenomena through bit-based data, as noted previously.
There are others who have coined terms that also describe epistemic models prescribed by ICT. Mejias’ (2013) concept of ‘network episteme’ argues that the network needs to be understood as “a system for organizing knowledge about the world … to see everything in terms of networks” (p. 9), enforcing a compulsory participation in the capitalist network that commodifies all aspects of the social. Lanier’s (2010) concept of computationalism is somewhat similar, as it points to an epistemic assumption that equates the materiality of the world with the digital (an assumption he is staunchly against). Golumbia’s (2009) usage of computationalism also points to a worldview that assumes worldly and mental phenomena could all be encoded. What I have termed ‘the logic of commensurability’ shares similarities with the three arguments above but is also distinct from them. While Mejias considers the digital network as a way of knowing and emphasizes its hegemony of control, he does not consider the underlying binary and its relationship to knowing. And while Lanier attacks information’s claim to knowing, his arguments often focus on issues such as human creativity and individualism, rather than ones within information capitalism. Golumbia, who is concerned with the ubiquitous beliefs of computationalism, links it with Enlightenment rationality and control, and also attacks its claim to knowing, but does not focus on the relationship between information and knowing. Where they align is in the fact that they all point to the need to unthink, unlearn, and rupture the regimes fostered by the networks of computation. The logic of commensurability considers informatics a builder of worldviews/episteme while emphasizing its place within the larger landscape of information capitalism and its operations of data-mining and algorithmic extraction.
The present section and the previous one have attempted to outline a link between the digital (specifically the binary digit) and knowing, by broadening the definition of ‘digital’ to refer to the logic of commensurability, of coming to know the world through encoding. This was intended to establish sufficient background for the argument that artistic practices that engage with ICT need not restrict themselves to either exposing the machine for transparency or confronting it directly, but rather can reframe the question through the ways of knowing fostered by the digital/technological. Through broadening the perspective on digital polemics, and leaning on what the previous chapter has articulated as visual art’s capacity to uncouple from the familiar, to reveal conventions, and to unsettle entrenched assumptions (Atkinson, 2008; Garoian, 2015; jagodzinski, 2010), one could instead reframe the question as follows: how can contemporary art engage productively with current digital media configurations (and their underlying logic) and cultivate ways of knowing otherwise, encompassing possibilities such as revealing and tactical engagement? All the case studies of artworks to follow demonstrate the pedagogical potential to foster different ways of knowing in relation to ICT, deploying a variety of interventions into information protocols by revealing clandestine information operations, co-opting the information mechanisms of the dominant regimes, creating alternative information, foregrounding marginalized information, or creating conditions for unforeseeable information to emerge.
6. The Digital as a Way of Knowing
“Technologies change what we mean by ‘knowing’ and ‘truth’; they alter those deeply embedded habits of thought which give to a culture its sense of what the world is like” (p. 12), says media ecologist Neil Postman (1992).
Emphasizing the significant effect technology can have on individuals and society, he adds, “embedded in every tool is an ideological bias” (p. 13). In other words, technologies, for Postman, are lenses through which one comes to make sense of the world and the phenomena within. It would follow that ICT, and the digital aspect that undergirds it (especially in the sense of dividing the world into binaries), would be no different. It contains a certain set of assumptions, norms, truths, and conventions that permeate and govern society, and that are the result of complex interactions between the affordances of technical materials themselves and the socio-political influences of various social forces. Such a way of knowing structures and underlies all facets of digital media one encounters on a daily basis – search engines, databases, websites, interfaces, tagging, profiles – to name only a few commonplace applications and protocols ubiquitous to contemporary information society. As noted above, digital information, as a way of knowing, is one of ‘commensurability,’ a framework through which one can consider information as discrete units that are exchangeable, and therefore capable of representing every aspect of the world. The description above would appear to bear similarities to a concept Heidegger (1977) outlined in his seminal work The Question Concerning Technology8. The essay is instructive for this chapter as it offers support for the argument that tech/media can prescribe particular worldviews. In the essay, he proposes that technology (techné), broadly defined, has the capacity 8 I am aware of the criticism towards Heidegger’s perspectives on technology held by some scholars of Science and Technology Studies (STS), such as Peter-Paul Verbeek (2005). The critique is aimed at what is argued to be Heidegger’s one-sided argument that modern technology is inclined to control and dominance.
Such a deterministic lens is something I disagree with as well; therefore I will rely on Andrew Feenberg’s (1999) elaboration in the next chapter to proceed with some Heideggerian thoughts minus the determinism. While I agree with the criticism of STS scholars to an extent, I also think Heidegger’s thoughts on technology, consulted with some caveats in place, provide a strong argument for the ways in which modern technology can cultivate worldviews, an argument that can benefit the scholarship of visual and media art, in which philosophy of technology has little presence.
to shape, influence, and construct worldviews through which one comes to know and act. Referring to techné and episteme, he writes that “both words are names for knowing … as an opening up it is a revealing” (p. 13). In particular, for Heidegger, modern technology brings forth and reveals the world in a particular manner through the process of gestell (‘enframing’), resulting in a challenging-forth of the world as ‘standing reserve.’ Enframing sets up a human-world relation “which puts to nature the unreasonable demand that it supply energy that can be extracted and stored as such” (p. 14). He offers examples such as stored energy and human resources, cycling through the circuit of production and capital for future expenditure. I would add to the list of examples a more contemporary one – information. Heidegger’s concept of enframing suggests a way of knowing and understanding the world through operative discrete units and calculable blocks/objects. In such a model, the subject comes to know through the acts of abstracting, ordering, and analyzing, from which one readily objectifies and commodifies. This is succinctly echoed by Gottlieb (2018) when he describes the technological act of “translation of the heterogenous and fluid world into discrete element of homogenous digital values” (p. 4). Moreover, such a dominant model tends to obscure and prevent other ways of knowing.
Heidegger (1977) adds that gestell “banishes man into that kind of revealing which is an ordering” and “where this ordering holds sway, it drives out every other possibility of revealing” (p. 27). Galloway (2012) makes this explicit when he asserts that “our intense investment in the worlds – our acute fact finding, our scanning and data mining, our spidering and extracting – is the precondition for how worlds are revealed” (p. 13). Despite its flaws of essentialism, the conceptualization that enframing “concentrates man upon ordering the real as standing reserve” (Heidegger, 1977, p. 19) is significant as it suggests that technology fosters a way of knowing through which one makes sense of and acts in the world, where the subject orders the world into substitutable bits. Given the above, enframing could be seen as the revealing of the world as information. As Postman (1992) noted above, such an epistemic model is deeply embedded. In his book Questioning Technology, philosopher Andrew Feenberg (1999) emphasizes the complex socio-economic and political factors that contribute to the emergence of, and that continue to influence, various technologies, questioning any functional teleology and essential determinism. The former point addresses some of the rigidity and shortcomings of Heidegger’s perspective in relation to the ‘essence’ of technology, but nevertheless Feenberg agrees that “technical devices embody norms” (p. 86). Following sociologist Bruno Latour, he remarks that with the emergence of technological innovations comes the process of black-boxing, or ‘closure’ in Latour’s terms, whereby the socio-economic and cognitive-affective impact of these technologies, the norms, values, and worldviews that they possess, become an “unquestioned background to every aspect of life” which “seem so natural and obvious they often lie below the threshold of conscious awareness” (pp. 86–87).
Making a case for the importance of philosophy of technology, Langdon Winner (1986) coins a similar concept that he terms ‘technological somnambulism.’ For him, even though technological advancements are “powerful forces acting to shape [human] activity and its meaning” (p. 6), conventional views rely on a neutral and instrumental/functional understanding of technology, paying little attention to, let alone examining, the implications and meaning of changes afforded by such advancements – effectively, sleepwalking through it all. Here the metaphor/model of the black box returns. It would appear logical to unravel the black box in order to ‘reveal’ and ‘awaken,’ following this argument advanced by Feenberg, Latour, and Winner, but as mentioned previously, this chapter reframes the argument beyond such a literal and restricted uptake of the metaphor. Lanier (2010) articulates a concept similar to ‘black box’ and ‘closure’ with his term ‘lock-in,’ which he describes as the process whereby certain models of computational design supersede their competition and solidify as the seemingly natural, logical, and mandatory model. “Lock-in removes ideas that do not fit into the winning digital representation scheme, but it also reduces or narrows ideas it immortalizes” (p. 10), writes Lanier. He pinpoints the phenomenon of lock-in, whereby a given model becomes the unquestioned reality, in order to emphasize that things could always have been and can still be otherwise, that there were and always will be alternative models. He gives examples such as Ted Nelson’s Xanadu (an alternative model to the Internet as it is today) and early computers that were designed without contemporary file management systems, and even proposes alternatives such as files that cannot be copied (thereby negating the assumption that the digital is inherently more egalitarian and designed for sharing), demonstrating that the hidden technological everyday can always function differently.
“Entrenched software philosophies become invisible through ubiquity” (p. 12), and in this case, the philosophical implication of the file is “the notion that human expression comes in severable chunks that can be organized as leaves on an abstract tree” (p. 13). While he does not go as far back as scholars such as McLuhan (1962/1995), Galloway (2007), and Gottlieb (2018) in linking computation to visuality and knowledge, he does assert that the logic underlying contemporary computation (evident in functions such as MIDI and UNIX) is one of faith in discrete abstract symbols (i.e. information), which reduces the multiplicity of the world into that which can be represented in a computer. Lash (2007) offers a similar argument through what he has termed ‘new new media ontology,’ describing the contemporary everyday where informatics influence and shape lives to such an extent that information conflates epistemology and ontology, where “modes of knowing are increasingly also modes of being” (2006, p. 581). Geographer Nigel Thrift (2004), while writing from the perspective of urban spaces and not technology in general, puts forward the term ‘technological unconscious,’ referring similarly to the ways in which complex assemblages of technological infrastructure function around and within humans but away from conscious awareness. Sociologist David Beer (2009) aptly summarizes this trend in media scholarship as a concern for “the ways in which software acts in often unseen and concealed ways to structure and sort people, places, and things” (p. 988), sinking into a taken-for-granted background. As the critical pedagogue Ivan Illich (1971) says, “surrounded by omnipotent tools, man is reduced to a tool of his tools” (p. 165), subjected to an unseen but ubiquitous assemblage that shapes seeing, knowing, and being.
In all the above scholarship, there is an emphasis on the ways in which technological assemblages foster certain truths, assumptions, and values, and on the fact that they operate in a hidden manner within the everyday, leading one to assume that the logical countermeasure, and the task of the artist, would be that of revealing and transparency. But as indicated by the introduction to this chapter, the matter is not that simple or reductive. Or put differently, perhaps ‘revealing’ and ‘transparency’ are the wrong words with which to think through the issue. Towards the end of his elaboration on technology, Heidegger, through an etymological approach, offers art as the flipside of techné, which he suggests has the capacity of revealing in a manner other than that of gestell. As he writes, “essential reflection upon technology and decisive confrontation with it must happen in a realm that is, on the one hand, akin to the essence of technology and, on the other, fundamentally different from it. Such a realm is art9” (1977, p. 35). In other words, Heidegger proposes that art too cultivates its own revealing, its own potential for ways of knowing the world, which has the capacity for subverting gestell. As the previous chapter proposed, the hidden operations of media/tech can only be examined when the dominant technological regimes and the norms they sustain are disrupted. And such rupture is precisely the capacity of the artistic tactic of estrangement, which has the pedagogical and critical potential to cultivate ways of knowing otherwise. Perhaps it is not so much an act of revealing, but rather the creating of a set of conditions, situations, and events for the audience to encounter, confront, and engage ICT in a way that would be conducive to the emergence of alternative ways of knowing.
7.
Estranging ICT
The previous sections established that the digital, that which underscores the field of informatics, fosters a particular way of knowing, one that I refer to as the logic of commensurability. As the scholars of media and technology above point out, such logic has a tendency to remain hidden, to operate in a mode that eludes awareness, in the normalized everyday. As the previous chapter has outlined and proposed, the concept of estrangement may be productive in analyzing media and media art. Through the tactic of estrangement, deviating from the normalized technical codes, media art has precisely the capacity to cultivate different ways of knowing in relation to ICT. Through a combined reading of Heidegger, McLuhan, and Brecht, 9 Heidegger here is referring to ‘poiésis,’ a presencing of the concealed, the “bringing-forth of the true into the beautiful” (p. 35). While the contemporary artworks analyzed in this research are a departure from such a conception of art/poetry, Heidegger’s offer of art’s bringing-forth as a counter to the challenging-forth of gestell is a productive provocation to consider both their capacities to shape the way one relates to the world.
estrangement has been identified as possessing compatibility with the task of questioning the clandestine operation of technological assemblages. Concepts noted previously such as black box, technological somnambulism (Winner), technological unconscious (Thrift), lock-in (Lanier), closure (Latour), sink-in (Beer), withdrawal (Heidegger), or even narcosis (McLuhan), all point to an immersive background operation that eludes awareness and the much-needed critical examination.
Feenberg’s (1999) proposal for technological intervention through unsanctioned uses, such as appropriation, breakdowns, and delays, brings home the arguments for technological analysis outlined in the previous chapter from McLuhan (anti-environments through foregrounding the medium) and Heidegger (breakdowns that lead to awareness, present-at-hand), which estrangement aptly encompasses. Simply stated, estranging the technical code of the dominant regime can be generative in an analysis of the media environment. Feenberg’s theorization is consulted further in the next chapter. Emphasis on the ideal of ‘freeing information’ (which is vital and important, of course) can be considered not as the logical and sole solution to the hidden operations of the technical regimes, but rather as part of a larger spectrum of strategies, encounters, and engagements with ICT that could potentially cultivate ways of knowing differently. Examining the concept of ‘revealing’ through Brecht, specifically in the realm of art and ideological critique, furthers this expanded view. To the extent that one can characterize the rise of the avant-garde and the politicization of aesthetics as ‘the critique of representation,’ some scholars have traced the roots of such a genealogy to locate the crucial influence of writings by Ferdinand de Saussure, Roland Barthes, Walter Benjamin, Jacques Derrida, Foucault, and scholars of the Frankfurt School (Jameson, 1998; Foster et al., 2004). Of particular interest for the moment is the influence of Brecht, which one can locate within Benjamin’s and Barthes’ writing, especially Mythologies (1957), a work that would pave the way for much examination and unravelling of the naturalized ‘sign’ within society.
If one follows Jameson’s (1998) assertion that it was Brecht who alerted Benjamin to the imperative of writing reflexively under capitalism, and that Mythologies was responsible for introducing the estrangement effect into French theory, then Brecht’s place within contemporary art, cultural studies, and post-structuralism cannot be overstated. “To reveal, underneath the pretended ‘naturalness’ of the petit-bourgeois ideology conveyed by the media” (Foster et al., 2004, p. 32), was the Brechtian project of Barthes’ Mythologies. The effect was utilized in Brecht’s plays “to make something look strange,” which “implies the antecedence of … a habit which prevents us from really looking at things, a kind of perceptual numbness … and its estrangement unveils that appearance” (Jameson, 1998, pp. 39–40). For Brecht, starting with the theatre apparatus itself and extending to socio-political levels and the symbolic order, the estrangement of the bourgeois ancien régime reveals its constructed and arbitrary underpinnings in order for the assemblages of institutional norms to be scrutinized. In other words, revealing isn’t restricted to an act of increasing transparency, of making previously hidden information available. Rather, it can also be conceptualized as a method of cultivating different ways of knowing and orienting towards the world and the phenomena within. In the instance of Brecht, transparency, criticality, and scrutiny of the current regime could all conceivably lead to a different worldview on the part of the audience. Its close historical ties with ideological critique, as demonstrated above, attest to this potential. Included in Brecht’s revealing of the politics of the everyday are also the assertion that the everyday needs to be different, and the proposal of how it can be different.
Estranging ICT involves a deviation from normative functions, which not only reveals the issues, mechanisms, claims, and limits of the assemblage in question, but also disrupts the current ICT configurations of the dominant regimes. The artworks to be examined below, by Erica Scourti and Julian Oliver, arguably cultivate different ways of knowing in relation to ICT assemblages by both revealing their ubiquitous but hidden mechanisms and subverting their normative functions, making space for noise in their signal-processing. Artistic approaches such as deviation, dysfunction, and displacement can be mobilized to estrange the technical everyday, to critically confront the numerous iterations of information capital and control listed in this chapter thus far, in order to cultivate ways of knowing other than the entrenched technological unconscious – a form of revealing beyond simply the revealing of information (i.e. Wikileaks) or revealing as information (i.e. enframing), but rather one characterized by the potential of something that exceeds the discrete unit, beyond the logic of commensurability, of understanding the world through informatics. As Mejias (2013) has also argued, the importance of actual political movements such as hacktivist operations and protests (for example, Occupy) cannot be overstated, but they are not enough and do not take the everyday and the common user into account. By expanding the discussion of revealing from modes of political action (or the lack thereof) to one of knowing otherwise, the act of resisting ICT assemblages becomes much more granular and everyday. The expanded argument recognizes that the logic of informatics has permeated all corners of social life and become a potent structuring force.
Expanding the discussion to one of knowing otherwise, beyond the logic of commensurability, creates room on its spectrum for the possibility of political action but also for cultivating different ways of engaging with ICT on an everyday level. The goal, then, isn’t necessarily to turn everyone into hackers, to enact political change, or to teach everyone how to code (Raley, 2009), but “to reorganize our intimate ways of thinking … to give the mind the tools to envision how the network has shaped and molded us … to raise the possibility of alternatives” (Mejias, 2013, p. 14). The purpose of this move beyond pitting artworks that aim for political action against ones that reveal the machinations of ICT assemblages is to arrive at a more capacious and productive framework for analyzing media art. This is accomplished precisely by shifting the focus to ways of knowing. To return to the beginning of this chapter: is the role of the artist within ICT-facilitated information capitalism to reveal this system? Brecht’s proposal of revealing the theatre apparatus may have, via Barthes’ semiotic critique, left its mark on various contemporary art practices, in the form of ideological critique of representation. Seen in this light, the concept of revealing should not be taken literally, as in the work of Hans Haacke or Mark Lombardi, or Wikileaks, where nefarious networks of power are made literally visible via reports, lists, diagrams, and graphs, but rather requires an expanded definition. Such a move draws on the definition of pedagogy offered in the previous chapter, of coming to know differently, without prescribing the exact outcome of such learning/unlearning. Therefore, the pedagogical potential of specific media artworks could conceivably encompass a variety of effects, including freeing information (Lombardi, Wikileaks) and political action (Occupy Wall Street, Anonymous).
In such a way, the discussion of media art moves to the more productive and agile terrain of asking: how does this artwork challenge the ways we relate to, think about, and come to know ICT today? In the case studies below, the first artwork approaches the issues from the user’s perspective (through Google products) while the second does so from a programmer’s perspective (through hacktivism), but both engage with contemporary digital media politics, specifically with data-mining and the logic of commensurability. Like all artworks chosen for this dissertation, the two projects here provide insight into the research question by tackling the socio-political issues of algorithmic operations. Moreover, they substantiate this chapter’s expansion of ‘revealing’ beyond the binary of transparency and political action to illustrate two different ways in which artworks estrange data operations and cultivate different ways of knowing in relation to ICT.
8. The Algorithmic Other
In Lacanian psychoanalysis, subjectivity is constituted through the ‘other’ during the mirror stage for infants, an illusory ‘whole’ that comes from identifying with their mirrored selves (Lacan, 1949/2006). This identification is based on an alienation derived from the conflict between the perceived self that is whole, and the actual self that lacks control and motor skills – a fragmented self. Due to this, the subject is permanently split, the ego is at once ‘other,’ and the self attempts to rectify this split throughout one’s lifetime. Extending from the other in the mirror is the Big Other, the social and trans-individual power structures such as language and law that govern how subjects behave – the apparatus that interpellates10 one into the subject position of a given social order with specific beliefs, assumptions, and truths.
In the age of ICT-facilitated information capitalism, I would argue that the ‘other’ has been updated to include the code and algorithmic operations underlying the interface through which one navigates on a daily basis to make sense of the world – an algorithmic other. 10 The use of Althusserian language is deliberate here; as Todd McGowan (2007) points out, combining Lacan with Louis Althusser’s concepts, such as interpellation, effectively politicized Lacan’s work and proved to be indispensable for bringing Lacan’s work on the mirror stage and gaze to film theory and cultural theory.
A prominent apparatus of the above mentioned is undoubtedly Google. For the purpose of this section, the focus is mainly on the subsidiary known as Google Ads (previously known as Google AdWords), which is an online advertising platform designed to deliver ads to web users of Google products and services. Marketing professionals design and determine ad campaigns utilizing certain keywords that might be relevant for the users. For such targeted and personalized ad campaigns to be effective, Google scours users’ browsing histories and patterns, correlated with other sources of data under its expanding purview, to generate a massive amount of information that informs the mechanisms by which one is presented with an ad. As scholar of management and information systems Bob Travica (2015) writes of Google’s organizational intelligence, part of what makes Google so innovative and successful is its business model (‘profit-and-freebies’) of offering a whole suite of ‘free’ services (such as its search engine, Gmail, Google Docs, Google Maps, YouTube, Picasa, etc.) to a mass audience while generating revenue off this user base. “The core production activity of Internet search with Google is free of charge, but it creates digital ‘real estate’ that is being sold to advertisers,” writes Travica (p. 11).
This growing estate includes everything that Google's services touch, such as websites and users that use Google Maps or YouTube, anyone with a Google account, and anything that runs the Android operating system (which accounted for 80% of the world's mobile users when Travica was writing in 2015). This penetrating footprint grants Google Ads an unprecedented degree of access to the thoughts and behavior of the population, from which it derives detailed consumer profiles to sell to advertisers and marketing professionals. As philosopher Colin Koopman (2014) writes, "information is not just about you … it also constitutes who you are [emphasis in original]" (para. 6). What does it mean when the self is constituted through the data resulting from all the spidering, extracting, and harvesting efforts of algorithmic data-mining operations? For starters, the self is a proletarian, a digital serf. This is what Nicholas Carr (2012) calls digital sharecropping: the landowners, or proprietors of digital platforms, grant users access to their services and tools for free, tacitly in exchange for whatever the users produce on the land/platform (in this case, copious amounts of data). This conflation of what appear to be social activities, or even play, with the exchange structure underlying work and labour – a conflation unique to the Internet – is discussed by scholar of media and culture Trebor Scholz (2013) and sociologist of media Christian Fuchs (2013), encapsulated in the terms playbor and digital labour. To echo Heidegger's point above, Scholz argues that "social life on the Internet has become the 'standing reserve,' the site for the creation of value through ever more inscrutable channels of commercial surveillance" (p. 8). Here, enframing is positioned as the revealing and understanding of the world through the pertinent unit of information, which equates to surplus value.
Second, in a strange update, the mirror stage is perhaps now a schism between the material self and one that has been mapped and calculated, the self as a consumer profile, read and quantified by the algorithm. Is this self more accurate? Does it adequately represent what one considers to be the self? Is it more 'whole'?

Figure 4.1, Erica Scourti, Life in AdWords, 2012. Video still, dimensions variable. © Erica Scourti. Courtesy of the artist.

These two points – the industry that surrounds the algorithmic operations capitalizing on online labour, and the encoding of the individual through such algorithmic operations of data-mining – are both hinged on the logic of commensurability, a way of knowing that asserts an equation between the multiplicity of the world and the binary digit. And both points are taken up in the work Life in AdWords (Figures 4.1 and 4.2) by London-based artist Erica Scourti (2012–2013), whose work often tackles social production on the Internet. From 2012 to 2013, Scourti wrote daily diary entries through the platform of Gmail, which Google would scan for relevant information, amassing a list of 'audience keywords' that would then inform a profile (known as an 'affinity audience') to be matched with potential advertisers' keywords. [11] Ads would appear in Scourti's Gmail account, presenting her with products and services that might interest her, based on the personal information she had shared about her daily routines, mental states, thoughts, and actions, in addition to her browsing history and online behavior. She would then perform for the webcam, reciting this list of advertising keywords on a daily basis, in a manner that is at once monotonous and deadpan, yet intimate and personal in the disclosure format of webcam videos.
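The mechanism Scourti leveraged – free-text writing reduced to a list of 'audience keywords,' which are then matched against advertisers' campaign keywords – can be caricatured in a few lines of code. This is a deliberately naive sketch, not a claim about Google's actual system; the vocabulary, campaign names, and matching rule are all hypothetical, chosen only to illustrate how much of a diary entry vanishes once it is filtered through an advertising vocabulary:

```python
# Toy caricature of keyword-based ad profiling (all names and data
# are hypothetical; Google's actual pipeline is far more complex).

# A hypothetical "ad vocabulary": terms the system knows how to sell.
AD_VOCABULARY = {
    "yoga": "yoga exercises",
    "stress": "how to relieve stress",
    "wine": "best wine glasses",
    "drinking": "ways to stop drinking",
}

def extract_audience_keywords(diary_text: str) -> list[str]:
    """Reduce a diary entry to whatever slice of it the ad vocabulary covers."""
    words = diary_text.lower().split()
    return [phrase for term, phrase in AD_VOCABULARY.items() if term in words]

def match_advertisers(audience_keywords: list[str], campaigns: dict) -> list[str]:
    """Return the campaigns whose keywords overlap the user's profile."""
    profile = set(audience_keywords)
    return [name for name, kws in campaigns.items() if profile & set(kws)]

entry = "felt stress all week started yoga again and cut back on wine"
keywords = extract_audience_keywords(entry)
# Everything in the entry outside the ad vocabulary simply disappears:
# the mood, the week, the 'cutting back' are not representable here.
campaigns = {
    "WellnessCo": ["yoga exercises", "how to relieve stress"],
    "GlasswareInc": ["best wine glasses"],
    "TravelCorp": ["tour athens"],
}
print(keywords)
print(match_advertisers(keywords, campaigns))
```

The point of the sketch is the loss built into the first function: only terms already in the selling vocabulary survive the translation, which is precisely the reductiveness Scourti's recitation makes audible.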
In these videos, shot in a variety of places, sometimes in what appears to be Scourti's bedroom and other times in discrete public spaces, she recites to the audience a list of things that would constitute her individuality as interpreted and understood by Google: game cheats, self-defense instructions, exercise your eyes, London art galleries, London Olympics, how to relieve stress, tour Athens, yoga exercises, cocktail recipes, mindfulness CDs, tingling numbness in foot, ways to stop drinking, healthy food to eat, anger management classes, best wine glasses, etc. While this particular piece does not touch upon social media, its focus on the way the big data industry encodes users and exploits their labour remains highly relevant today. Against the myth of information being free, scholar of digital labour Tiziana Terranova (2013) writes, "the Internet is animated by cultural and technical labour through and through, a continuous production of value that is completely immanent to the flows of the network society at large" (p. 47). A precedent for Scourti's work might be located in Taiwanese performance artist Teh-Ching Hsieh's (謝德慶) year-long performance piece in which he punched a punch clock at every hour, every day (One Year Performance, 1980–1981). [Footnote 11: It should be noted that this function was removed in late 2017, when Google announced that it would no longer data-mine personal emails. However, as Ginny Marvin (2017) writes for MarketingLand, "this is a big change in practice for Google, but the impact on advertisers is likely to be relatively minimal, in large part because Google's audience modeling has become much more powerful. It doesn't need to match a keyword to a potentially deleted email to get advertisers to buy Gmail ads anymore."]
More emblematic of the practice of Taylorism, Hsieh's work highlights the meticulous quantification and management of labour, subjecting the worker to the detailed mapping of the dominant regime of technocracy. Like Scourti, he emphasizes the violence of such quantification through a machinic performance, an absurd self-portrait that foregrounds the extraction of value through physical labour measured in time. Rule-bound (or rather, algorithmic), Hsieh subjected himself to rigorous parameters, confounding the distinction between his work (or all work) and life, while asserting the endurance of his body and the persistence of corporeal and lived experience. Life in AdWords updates Hsieh's project in a way, by shedding light on the condition articulated by Terranova, where the logic of capital spills out from the factory/office into the digital landscape. In this environment that promotes myths of openness and liberty, labour is rendered far less visible, both in the sense that digital labour online melds with play and leisure, and in the sense that these myths conceal the material labour that takes place in factories, click farms, and e-waste dumps as part of the industry that is the Internet. Scourti makes visible such value-production and labour processes, in which the user becomes the product, by estranging the act and result of using a whole suite of Google services. The tactic of estrangement foregrounds the medium of Google Ads' data-mining operations but also the whole enterprise of algorithmic data-mining in general, of encoding and quantifying the subject through the information they provide and create. The result is an estrangement that doubles. On the one hand, the intimate, human, and personal disclosure of the self is made strange and 'other' through a deliberately reductive portrait of the self, robotic and monotonous, distilled to a series of products, services, and other attention-exploiting consumables.
On the other hand, the process and service of Google Ads and its data-mining is also made strange, through the exaggeration of its profiling mechanism and e-marketing conventions, uttered through the human it was intended to represent. In doing so, the logic of commensurability is accelerated and brought to the foreground, emphasizing its mapping and quantification process that reduces the subject to a series of keywords for advertisers. Yet simultaneously, when this fragmented Google self, distilled to a set of marketing categories and descriptions, is pitted against the human subject in the video, palpably present and embodied with ever-changing subtleties in mood and appearance, there is something obviously inadequate about these categories and descriptions. The curator George Vasey (2014) writes of Scourti's work, "by appropriating and repeating information that is given to us through advertising, the social script starts to become apparent and absurd" (p. 20).

Figure 4.2, Erica Scourti, Life in AdWords, 2012. Video still, dimensions variable. © Erica Scourti. Courtesy of the artist.

I argue that the artwork's pedagogical potential can be located in Scourti's performance of her reclaimed keywords, creating alternative information that highlights both the act of encoding through data-mining and its limits in mapping and cataloguing the self. There are two layers of translation at work here. The self was translated into categories and keywords by algorithmic encoding. But Scourti has forced another translation, decoding the keywords in an attempt to recreate the self, knowing that what comes out on the other side will be absurd, as all acts of translation leave a trace. It is by rupturing the convention of data-mining, by deviating from its intended usage, that its operation is rendered visible for examination. And it is through such deviation that one is invited to think and know otherwise about the technology of data-mining.
What the artwork emphasizes is the data-mining act, in which one is interpellated by the 'algorithmic other' into an informatics social regime. Yet what the webcam-disclosing self attempts is perhaps a reassertion of its agency. The personal and intimate format, directly addressing the viewer, reverses traditional viewership and the subject-object relationship. It gives primacy to the human subject and its excess, the body and its particularity, the voice and its situatedness, a temporal contingency that defies the static and reductive nature of data representations and categories. As curator George Vasey (2013) has phrased it, confronting the viewer "is a body that is ventriloquized by capital yet asserts its agency through a type of sarcastic repetition" (p. 8), one that nonchalantly parrots back the ludicrously inadequate keywords that claim to represent the user. By leveraging the convention of Google Ads and pushing the limits of its logic, Scourti has reclaimed the keywords that were harvested from her digital labour, and created an alternative consumer profile, or perhaps a series of alternative information. As Scourti (2013) notes in an interview with Furtherfield, "the comically limited portrait the keywords paint maybe suggest that despite the best efforts of Web 2.0 companies, we still are not quite reducible to a list of consumer preferences and lifestyles" (para. 32). "The current mythology of big data is that with more data comes greater accuracy and truth," writes Kate Crawford (2014, para. 6). This epistemological position, itself building on the logic of commensurability, drives the big data economy and the design of its tools. What is hidden behind this ever-expanding drive, argues Crawford, is the unique anxiety of those doing the surveilling: "that no matter how much data they have, it is always incomplete, and the sheer volume can overwhelm the critical signals in a fog of possible correlations" (para. 5).
According to her, the larger the data sets grow, the more oversights occur, rendering this drive and its underlying assumption deeply flawed. Crawford also notes the flipside of this anxiety: that of those subjected to surveillance, and their desire to disappear. Both of these anxieties can be read in Life in AdWords. On the one hand, the data-mining operation is driven by the 'epistemic big-data ambition' of creating ever more accurate consumer profiles and getting at some sort of truth; on the other, Scourti's performance effaces the self through exaggerating the convention of the Google Ads keywords, and simultaneously asserts her agency precisely through this exaggeration. Rather than lamenting the loss of an 'original self,' the work brings to the foreground the operations of data-mining and their entwinement with the self and its formation. Or, to phrase it in Lacanian language, it examines the mutually constituting relationship between the self faced with its data representation (its mirrored self), which is always split and conflicted, and the algorithmic other (information capitalism and its tendrils), which is driven by faith in an elusive and impossible data totality.

9. (Non)communicative Dissent

While Scourti's project above arguably created the pedagogical conditions for knowing otherwise by exacerbating the logic and operations of Google Ads and data-mining, the next case study looks at two projects by Berlin-based New Zealand artist and 'critical engineer' Julian Oliver, one that co-opts and reverses the mechanisms of data-mining and surveillance, and one that refuses the mechanisms altogether.
As I argued above, the theorization of media art could be more productive if it were reframed as the creation of conditions that may be conducive to alternative ways of making sense of the world, distinct from the dominant socio-political configurations (alternative to the current ways of engaging with the world through ICT), rather than being caught in the discussion of whether art practices that foster political action are more effective than those displaying the nefarious machinations of institutional power. Oliver's two projects from his 'cyber weapons' series may be prime examples of pieces that fit the expanded framework, encompassing both approaches noted above: revealing information and activist operations. Or, phrased under the terms of the expanded framework, they foster alternative ways of making sense of the world beyond the logic of commensurability, which dictates that all phenomena (human and non-human alike) can be encoded as information and exchanged for value. By extension, they unsettle and destabilize the power configurations of ICT-facilitated capitalism, where surreptitious and coercive data-mining is the norm, where certain types of information are simply inaccessible, and where it is expected that one will conduct unpaid digital labour while surplus is extracted from one's production. The first piece, Transparency Grenade (Figure 4.3), was created in 2012 and consists of a translucent and metallic grenade-like sculptural object. The object houses a microphone, a computer, a powerful wireless antenna, and other custom electronics.
The device is programmed so that once the pin is pulled and the grenade released (presumably onto the premises of some high-security state or commercial institution), it records the audio of its surroundings, captures the data flowing through the office's internal network traffic (unencrypted emails, user names, web pages, IP addresses, etc.), and transmits this data securely back to the artist's dedicated server for analysis. The information can then be made available via a dedicated website. In various exhibitions, the grenade is often shown in a glass case (fully operational), while occasionally the data feed is live-streamed and projected on the wall, capturing the data of audience members who are using the gallery's wi-fi connection. The piece is essentially a prototype for a cyber weapon for guerrilla counter-surveillance (some would call it cyber terrorism). In this gesture, the conventional roles and distribution of power/agency between the user and the institutions are reversed: the artist/user now conducts the data-mining and surveillance. The significance of this gesture, of course, cannot be overstated, especially in the present context of information capitalism as discussed in this chapter. Like the work of Wikileaks and Anonymous, it constitutes a hacktivist operation that, albeit breaching legal parameters if executed in reality (outside of a gallery), subjects the sovereignty of the big data economy to its own methods, reaches for the utopic ideal of 'information wants to be free,' and grants the user some agency, however provisional.

Figure 4.3, Julian Oliver, Transparency Grenade, 2016. Edition 2, installation view. © Julian Oliver. By permission through GNU Free Documentation License.

Like Scourti's project Life in AdWords, Transparency Grenade ruptures the convention of data-mining by deviating from (to be more precise, reversing) its standard deployment, forcing one to examine its operations in the digital everyday.
In this case, the work asks one to reconsider the ubiquity of data-mining and possible counter-strategies for things to be otherwise, puncturing the dominant knowledge and practice (echoing jagodzinski and Atkinson). Through the potential act of capturing and leaking data, it gestures to the possibility of reversing the distribution of power, a speculative scenario in which the norms of information capitalism are upended. It emphasizes the violence of this ongoing operation while co-opting and turning the strategy onto itself, demonstrating through its aggressive form – that of a grenade – the absurd normality of a mechanism that renders being online a form of ongoing cyber warfare. It estranges the online experience, confronting one with a device that insists the network is a battlefield, with all of its implied violence. I argue that through such estrangement of the online experience and its aggressive reversal of corporate and state data-mining, the artwork activates the pedagogical potential for different ways of knowing, relating to, and engaging with ICT operations, inviting the audience to imagine how things could be otherwise. For Lazzarato (1996), the positioning of the human subject as an information-handler is precisely the violence of the communicative norm. But the piece also offers the possibility of agency and change, through the unlikely combination of weaponry and communication tools. Here, perhaps the double-edged nature of (communication) technology is mirrored by the grenade itself (in both its potential for violence and for revolution). In short, the piece estranges data-mining operations (and by extension, the very act of being online itself) through both its form (unequivocally a weapon) and its potential for action (reversing the data-mining and surveillance operation).
Oliver offers in a 2012 interview that the piece contains the possibility of 'exploding' the nefarious confines of institutions for greater transparency, but I would suggest the piece also cultivates a sense of 'productive paranoia' in the everyday user, prompting them to examine the device. As Mejias noted above, while the potential of leaks and other hacktivist strategies is significant, positioning the debate within the realm of different ways of thinking and knowing has greater impact for the average user.

Figure 4.4, Julian Oliver, No Network, 2013. © Julian Oliver. By permission through GNU Free Documentation License.

Unlike Life in AdWords, which took the logic of Google Ads to the extreme and presented an exaggeratedly mediated and quantified user, Transparency Grenade is combative, going in the exact opposite direction to confront institutional powers while remaining firmly rooted in this combative online terrain. The second piece, titled No Network (Figure 4.4), takes a different approach and refrains from participating in this terrain altogether. Created in 2013, No Network presents a supplementary tactic to the cyber warfare of information extraction and access. This time, instead of confronting institutional powers on the net in an attempt to level the playing field, it withdraws, constructing a device that blocks signals, creating a space where one is completely shielded from the network and information cannot be shared or captured. Taking the form of a toy tank, the device creates a shielded zone 15 meters in diameter, blocking texts, calls, and mobile data connectivity. At the time of writing, two more tanks are in development, one that blocks GPS capabilities and one that blocks access to wi-fi services. In essence, the tanks create mobile Faraday cages. Compared to the offensive nature of Transparency Grenade, No Network functions on the defensive, tackling the same issues from a different angle.
Emphasizing its militancy through its tank form, the piece forces one to be 'outside' of the information society, free of the demands of communicative capitalism. Earlier sections of the chapter have advanced the argument that terms such as revealing, black boxes, and transparency are perhaps better served if one shifts the focus from freeing information to the capacity for fostering ways of knowing beyond informatics. By doing so, one avoids the dichotomy that Søndergaard and Allen have identified, and the situation in which political change is utilized as a gauge for measuring artistic merit. Transparency Grenade sits broadly on this spectrum, eschewing such a dichotomy. On the one hand, it foregrounds the operations of data-mining and forces one to contemplate the ubiquity of its practice, through a process of excavation and information-leaking akin to Wikileaks. On the other, it is fiercely anti-technocratic and holds the potential for active intervention in the institutional apparatus itself, rather than a simple 'presentation' of such information. No Network similarly offers both a critique of state and commercial surveillance and the potential for active (or perhaps actively passive/negative) intervention in the ICT operation itself, creating a space for some temporary agency and autonomy through disengagement. While one seems to be explicitly combative and the other a firm refusal, I would argue that they both take aim at ICT assemblages and rupture the ways of knowing fostered by this regime, occupying a space that simultaneously reveals the machinations of the informatics social order and holds the potential for direct political engagement – making strange and creating anew one's relationship with ICT.
In a sense, both pieces are speculative and utopic, in that they point to the possibilities of other ways of relating to ICT, in a world where institutions would be accountable and information fully accessible, where the violence of surveillance can be turned on its head, where surreptitious data-mining need not be the norm, where one is able to disengage from the network completely, etc. The pedagogical potential of knowing otherwise is actualized through the tactic of estrangement. While not intentionally referencing Brecht, Oliver's methodology is very much aligned with the estrangement tactic. A self-declared 'critical engineer,' Oliver (2011) contributed to a manifesto co-written with Gordan Savičić and Danja Vasiliev that articulates the obligations of such a role, which include the need to raise users' awareness of technology's influence and effects by deconstructing and inciting suspicion of the user experience. In a phrasing similar to 'the medium is the message,' line 5 of the manifesto reads that "the Critical Engineer recognizes that each work of engineering engineers its users, proportional to that user's dependency upon it," emphasizing the need to examine the medium and its effects. This, of course, is not meant to endorse techno-determinism, as line 6 adds that machines "describe inter-relationships encompassing devices, bodies, agents, forces, and networks," similar to Bolter and Grusin's definition noted earlier in this chapter. In this case, when one encounters the two artworks, with their unsanctioned uses of communication protocols through appropriation, breakdowns, noise, and obfuscation, the design is certainly for the viewer/user to question the influence, effects, and user experience of everyday ICT assemblages. "From a 'use' (or designer's, or engineer's) viewpoint, awareness of the technological mediation must be as low as possible.
From a 'context' (or individual's, or reformer's, or victim's) viewpoint, consciousness of it should be as great as can possibly be accomplished" (van den Eede, 2010, pp. 156-157). While Oliver's two projects are arguably potent and urgently relevant, they present a zero-sum proposition: one aims for a head-on collision with institutional power through counter-surveillance, turning the operation against itself, while the other vows resistance through a complete refusal to participate in the network. Because both projects would be illegal if fully functional, they remain more as thought experiments. Mejias' (2013) perspective is fitting here, when he says that his work "will not be calling on anyone to stop using digital networks or providing step-by-step instructions for dismantling any kind of digital network" (p. 12), as the point is not to long for a pre-digital era, but to recognize the network's pervasiveness and to engage in an ongoing co-construction of it by unsettling and undermining its way of knowing. As he writes, such work isn't solely in the hands of a few hacktivists; rather, it should be the responsibility of every user, for we are all implicated in the perpetuation of such logic. And the upending of such logic begins with the questioning of user protocols and conventions, something that estrangement can potentially cultivate. Mejias' proposal appears aligned with mine, as he argues for ways of disrupting the network that can realize forms of difference distinct from the dominant, with examples such as obstruction, interference, sabotage, misinformation, intensification, and revealing, among others. Oliver's projects function as a foil to the last case study to be discussed in the dissertation (in Chapter 7), where the communicative is taken up again, but in a way that is neither a wholesale refusal nor an aggressive but ephemeral co-optation.
"To unthink the logic of the digital network is not to refuse to confront the network … but to reimagine one's relationship to it … disrupting the flow of information by adding noise (information outside the logic of the system)" (Mejias, 2013, p. 90). Such disruption, introducing something 'other,' something outside of the current system of knowledge, is precisely what Ellsworth (2005) argues to be the pedagogical potential of these spaces of learning. To inject or exacerbate the noise within the signal, as a specific instantiation of estrangement, is explored in Chapter 7. This chapter has looked at various elaborations around information and capital and the underpinning logic of commensurability, established a link between digital media technologies and a particular way of knowing underscored by the binary of discrete units, and focused the discussion on how such an epistemic model can be questioned and upended through artistic approaches that defamiliarize and create other ways of coming to know and engage with the phenomena of the world. For the group Critical Art Ensemble, the strategy of tactical media lies precisely in defamiliarization, disruptive encounters that "offer participants … a new way of seeing, understanding, and interacting with a given system" (p. 7), writes art and media theorist Rita Raley (2009). If the operative assumption of ICT is that all phenomena can be framed under the logic of information, then the distinction between signal and noise, between sanctioned and legitimate information on the one hand and the excess or the not-yet on the other, becomes a significant question within digital media polemics. Noise is the materiality/ground upon which signal-processing and meaning-making occur, the symptom of the medium's presence, the uncontainable excess, the unfamiliar and unprecedented event, the 'other' of information, the incommensurable episteme.
Chapter 5: Knowing through Information

In the essay "What is Critique?", Foucault (1997) elaborates on the inextricability of knowledge and power by stating that "nothing can exist as an element of knowledge if … it does not possess the effects of coercion … conversely, nothing can function as a mechanism of power if it is not deployed according to … systems of knowledge" (p. 61). For him, the operations of power and the truth claims they promote and legitimize cannot be sustained without the systematic production and circulation of discourse. Media theorist Mark Poster (1990) extends Foucault's work on discourse and the dominant practices it naturalizes into the mode of information, specifically databases. For him, Foucault's theorization of how discourse reifies certain practices into norms of domination (Foucault's example being surveillance and prisons) should be updated in the age of code to include information as well, the network of which constitutes a 'superpanopticon.' As Poster argues, "the discourse of databases, the Superpanopticon, is a means of controlling masses in the postmodern, post-industrial mode of information" (p. 97). Operating within this discourse, I would propose, is the logic of commensurability. In the information age, perhaps one might argue that the power-knowledge nexus should be updated as 'power-information,' "a regime of power-knowledge whose ethos is not discursive but informational" (Lash, 2002, p. xi). As media artist Natalie Jeremijenko (1997) writes, the apparent distinction between the objectivity of information and the influence of power needs to be questioned; the adage that information is power needs to include the point that it is also power that influences what gets constituted and becomes 'information' in the first place. Like knowledge, information and power are in a mutually constituting and legitimating relationship, naturalizing the norms of the dominant regime.
As she asserts, "information is not fully abstractable in representational theories … it is partial, and contingent on how it is framed, rather than neutral, empirical, and comprehensive." The task then, for her, is to question the epistemological claims of technologies such as ICT. Similarly, while artists like Iain Baxter were interested in and content with framing information as art through the borrowing of its formal structures (with the infinite combination of signs/algorithms), art historian Ursula Anna Frohne (2013) highlights the move "toward a new epistemological framework that situated process-oriented, performative, and ephemeral art forms in relation to the internal logic of information, namely programming and planning" (p. 40), which she identified in the work of renowned Conceptual artist Dan Graham and his writings from the 1960s. These points emphasize once more the task of examining, through visual/media art, the ways of knowing fostered by contemporary ICT assemblages, and of exploring the possibility of artworks challenging such epistemological frameworks through tactics such as estrangement. If the way of knowing fostered by the digital is the 'logic of commensurability,' then it is through the binary digit that the logic operates within information society. The output of these operations is what I call 'data-based representations' – the collected data that come to represent certain phenomena. In other words: information. Rephrasing information as 'data-based representations' is important, for as the previous chapter has insisted, what information claims to offer is a way to adequately and neutrally represent – essentially to create models of – the world and its phenomena through the discrete units of bits.
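The claim that discrete units can neutrally represent continuous phenomena can be illustrated with a minimal quantization sketch. This is an illustrative toy, not a description of any particular system: it encodes an arbitrary continuous value into a handful of bits and measures the remainder, the 'residue' that no finite data-based representation captures, which is one way to give the 'other' of information a concrete form:

```python
# Illustrative sketch: encoding a continuous phenomenon into discrete
# binary units always leaves a remainder. The numbers are arbitrary.
import math

def quantize(value: float, bits: int) -> int:
    """Map a value in [0, 1) onto one of 2**bits discrete levels."""
    levels = 2 ** bits
    return min(int(value * levels), levels - 1)

def reconstruct(code: int, bits: int) -> float:
    """Decode a discrete level back into its 'representative' value."""
    levels = 2 ** bits
    return (code + 0.5) / levels

phenomenon = 1 / math.pi                 # an arbitrary continuous 'phenomenon'
code = quantize(phenomenon, bits=3)      # its 3-bit data-based representation
model = reconstruct(code, bits=3)        # the model the code stands for
residue = phenomenon - model             # what the representation cannot hold

print(f"phenomenon={phenomenon:.6f}  code={code:03b}  "
      f"model={model:.6f}  residue={residue:+.6f}")
# Adding bits shrinks the residue but never eliminates it: no finite
# number of discrete units renders the continuous fully commensurable.
```

The design choice worth noting is that `reconstruct` must pick one representative per bin; whichever it picks, the difference between phenomenon and model persists, which is the formal analogue of the argument that information models, rather than neutrally mirrors, the world.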
While the previous chapter outlined the relationship between the digital and knowing, and introduced the information-based way of knowing, this chapter looks more closely at information as it creates models/representations through which one comes to know the world, and at the limits and critique of such a way of knowing. Where the previous chapter introduced the link between the digital and knowing by tracing the theoretical history of information followed by the philosophy of technology, this chapter concretizes this link and pinpoints its shortcomings by once again consulting the work of Hubert Dreyfus (1991, 1992, 1998, 2007) in relation to ways of knowing. This is carried out in section 3, but also through the case studies of artworks. It should also be noted that this research follows Galloway (2013) and Barrowman (2018) in recognizing that 'raw data' – in the sense that the term 'data' usually denotes – does not exist, and that the presumed distinction between data (raw, pre-human, pre-interpretation, a given) and information (cleaned-up data, which can now be used to make sense) is dubious. Rosenberg (2013) recounts the history of the term data and writes that its connotation of incontrovertible hard fact is an eighteenth-century invention. In this sense, 'data' does not really exist; there is only information, always already filtered through and constructed by the human. For the sake of this dissertation, the terms are used according to their vernacular and professional usage (i.e., information society, data-mining, etc.).
Before delving into the two case studies of artworks (one on search engines by Mongrel and the other on data representations such as charts and graphs by Richard Ibghy and Marilou Lemmens), this chapter extends the argument articulated previously regarding ICT assemblages and the logic of commensurability as a way of knowing they foster, in order to elaborate on two points that bear addressing: 1) whether the argument that ICT fosters a particular way of knowing implies a form of technological autonomy and determinism, and 2) how ‘ways of knowing’ is conceptualized in this research and what the limits of an information-based way of knowing are. The first case study will speak to the argument against determinism and neutrality, insisting that technical assemblages are imbued with socio-political forces. The second case study will go even further to interrogate the claims of information to represent the phenomena of the world.
1. Determinism vs. Technical Regimes
This section relies heavily on philosopher of technology Andrew Feenberg’s (1999) theorization against technological determinism, as well as the way normalized technological ‘hybrids’ function in surreptitious manners, and the imperative for critical examination and intervention. It returns briefly to the previous chapters, echoing the perspectives advanced in Chapter 3 regarding the imperative to destabilize the norms and conventions fostered by socio-technical assemblages in order to scrutinize their mechanisms and operations. The section serves to pre-emptively defend against accusations of determinism, as a way to flesh out the study’s position, despite advancing the argument that ICT assemblages can foster a particular way of knowing.
Feenberg characterizes the historical perspectives of Western civilization towards technology, whether supportive or critical, as sharing assumptions of neutrality and autonomy, where the technical sphere is separate from socio-political spheres, uninfluenced and untouched by human and other non-human forces. Up until constructivism questioned this autonomy by introducing the socio-political elements, both the reigning technocratic arguments and the opposing dystopic anti-technocratic viewpoints held essentialist tendencies, where the former assumed technology is naturally driven by progress and necessity (and therefore civilizational advancement and the good of humankind), and the latter assumed technology is naturally characterized by control, efficiency, and domination.12 For Feenberg, Heidegger’s theorization of technology is plagued by essentialism and determinism, leaving no room for any potential of agency and negotiation. But while he differs with Heidegger on the former points, Feenberg does agree that there is a need to examine an increasingly technocratic world, marked by a “link between technological thinking and modern administration” (p. 112). In Feenberg’s (1999) writing, technocracy is defined as “the use of technical delegations to conserve and legitimate an expanding system of hierarchical control” (p. 103), the successful clandestine embeddedness of which hinges on the illusion that rationality is a neutral and given attribute of technology. ‘Delegation’ as it is used here is a Latourian term, referring to the process by which “norms are ’delegated’ to devices” (p. 102), norms that in turn are prescribed tacitly onto the populace. “Technocracy thus succeeds in masking its valuative bias behind the façade of pure technical rationality” (p. 103), an attribute that is not a given but delegated.
Feenberg (1999) asserts that determinism is sustained via the assumption that technology advances itself based on necessity, a kind of technological teleology, resulting in the perspective that there is an inherent functionalist logic to technology, which operates independently from societal factors. Echoing Bolter and Grusin’s earlier point that “social forces and technical forms are two aspects of the same phenomenon … hybrids of technical, material, social, and economic factors” (p. 77), he offers a sustained argument against determinism and instead insists that technical devices are the results of complex negotiations between a myriad of actors with varying interests, such that the normalized ‘technical codes’ are always overdetermined and relative. He uses the term technical code to refer to the normative values, assumptions, and knowledge prescribed and sustained by particular technological regimes/paradigms, a specific technological way of knowing, which enables certain understandings while inhibiting others. Prior to concretizing as invisible and self-evident in the process of closure, the emergent technical device is the contested ground of various socio-political and cultural-economic interests, jostling over which configuration will settle into the new norm. “The outcome of these disputes, a hegemonic order of some sort, brings technology into conformity with the dominant social forces” (Feenberg, 1999, p. 89). As Feenberg asserts, among the possible configurations of various actors and their interests, “the choice between them is political and the political implications of that choice will be embodied … in the technology” (p. 80).
12 Typified by the scholarship of Heidegger (1977) and Jacques Ellul (1964), and certain works by the Frankfurt School such as Herbert Marcuse’s (1964) One-Dimensional Man.
Lanier (2010) likewise insists that “the design of software builds the ideology into those actions that are the easiest to perform on the software designs that are becoming ubiquitous” (p. 47). The design of software and interfaces is neither neutral nor innocent, driven solely by user experience and function. Instead, socio-political and cultural-economic factors, in addition to technical and material ones, influence the design that will ultimately become the norm, introducing and maintaining uses and actions that seem perfectly natural and whose absence seems unthinkable. The imperative, for Feenberg, is the possibility of ‘democratic rationalization,’ strategies to critically unpack, reflect on, scrutinize, and potentially counter the normalized codes of specific technical regimes. Only through this process might Gunkel’s proposition of thinking otherwise be possible. Against arguments of technological essence being a causal principle, Feenberg (1999) suggests viewing technology as ‘ambivalent,’ always with the potential for perpetuating but also undermining the status quo, and more. Much more hopeful than the offerings of Heidegger, he insists on the possibility for everyday users, lobbyists, politicians, activists, and even other engineers and scientists to interact with a dominant technical device in order to influence and alter its operative function and effects, especially the unquestioned epistemic and social horizon that the device contributes to and is embedded in. Believing that “a critical theory of technology can uncover that horizon, demystify the illusion of technical necessity, and expose the relativity of the prevailing technical choices” (p. 87), Feenberg puts forward the possibility of anti-programs and tactics (terms that he borrowed from Latour and de Certeau).
Countering the avant-garde and revolutionary insistence on a total overturn of capitalism, Feenberg offers instead the stance of ‘micropolitics’ (a point that is also echoed by Rita Raley (2009) and Ulises Mejias (2013) from the previous chapter). He proposes subversion of the technocratic way of knowing from within and below, through the potentialities of user interventions. In order to destabilize the dominant technical code, the ‘total administration,’ Feenberg touches on an imperative that will remain crucial throughout the present research: that of deviating from the technical norm. In the present study, the tactic of estrangement refers precisely to such deviations. Against the systems manager/administrator of technocracy, such deviation from the norm in the apparatus can reveal the arbitrariness and embedded politics of technical choices, and even offer alternative configurations. This deviation occurs through user intervention and counter-hegemonic code, such as “unforeseen breakdowns, partial disaggregation, and tactical re-appropriations” (p. 116), anti-programs and tactics that “subvert the dominant codes from within by introducing various delays, combinations, and ironies” (p. 113). While mainly referring to active agents interacting with technical environments in ways unsanctioned and unplanned by engineers, designers, and administrators, the description above also bears resemblance to artistic practices, all of which could be seen as deviations from the norm or intended (and therefore legitimized) function. To the extent that such gestures and occurrences are considered undesirable by the administrators of technocracy, and their presence prompts rectifying and suppression, one might also refer to deviations as ‘noise’ in a system that only allows for signals, a concept that I return to in Chapter 7.
Feenberg’s call for micropolitical tactics that examine the ubiquity of reigning technical regimes also builds a segue to the two case studies of artists, which are looked at in detail after the next section on ways of knowing.
2. Knowing (from AI to materials)
This section elaborates on the concept of ways of knowing as it is utilized in this dissertation, pinpoints one of the most dominant of such ways, and identifies alternatives. As noted in the introduction, ways of knowing here refers to the “different understandings of the social world… and different ways of coming to understand that world” (Moses & Knutsen, 2019, p. 2). It refers to the reigning epistemic models that shape how one comes to know, to see, to make sense of the world, and that in turn significantly shape the conditions of possibility for certain thoughts and actions. Feenberg’s (1999) technical code would constitute another form of such ways of knowing. It is important to note that this research does not presume there is a finite list of such ways of knowing. As art education scholar Elliot Eisner (1985) writes in the compilation of essays Learning and Teaching the Ways of Knowing, “the roads to knowing are many” (p. 24). In the collection, numerous modes are outlined, such as aesthetic, scientific, interpersonal, spiritual, formal, and practical. What this non-exhaustive list indicates, at the very least, is that ways of knowing can lead to the production and acquisition of what might traditionally be conceived as ‘knowledge,’ but also to that which is more social, embodied, and process-based, and which occurs outside formal classroom instruction, such as know-how. For Foucault, critique is “an investigation into the legitimacy of historical modes of knowing” (1997, p. 58). Therefore, critique (one of the aforementioned capacities of artworks) also implicitly entails a question of how one knows, or specifically, through what epistemic regime one comes to know.
As this section demonstrates, the dominant Cartesian way of knowing can be identified as a direct progenitor of the thinking behind AI and computation, and this link highlights not only the relationship between information and knowing (more specifically, knowing-that) but also information as a way of knowing. Philosopher Hubert Dreyfus (1991) makes the assertion that a specific epistemic model has been dominant since Plato, one that has contributed to the epistemological assumption that a rational subject could come to understand the objective universe from a detached and isolated position, which grants legitimacy to the subject’s ‘discoveries’ about the world. For Dreyfus, this assumption can be identified in the scholarship from Plato to Descartes to Kant to Husserl, among others, and is manifested in the attempt of researchers and philosophers to “find context-free elements, cues, attributes, features … and to relate them through covering laws … or through rules and programs” (p. 2). One could think of this tradition as one that separates theory and practice, mind and body, subject and object, words and things – the traditional epistemology. According to ANT (Actor-Network Theory) and STS (Science and Technology Studies) scholar John Law (2004), this way of knowing is also what drives knowledge-production in scientific research, which holds the assumption that there is a stable, rule-governed, and independent reality out there waiting to be discovered by our rational minds, since “knowledge of the world resides in the subject” (p. 26). One of the outcomes of such thinking, with the primacy given to theory and the mind, is what Dreyfus refers to as the ‘information-processing model of the mind,’ a concept that acutely ties together the current chapter’s focus on information and its relationship to ways of knowing, which I tease out in this section.
The main concern for epistemology, as illustrated by radical constructivist Ernst von Glasersfeld (1984), is the acquisition of knowledge from the existence of an objective and independent reality where truths may be obtained, a concern that has preoccupied philosophers since the pre-Socratic days.13 In this epistemological model, the necessary and sufficient conditions for knowing are the division between a knowing subject creating knowledge and knowable objects existing in the world, where the relation between the two is that of an adequate ‘match,’ as “the metaphysical realists look for knowledge that matches reality … some kind of ‘homomorphism,’ … an equivalence of relations, sequence, or characteristic structure” (p. 19) – a relationship of commensurability. Here, knowledge is defined as “a description or image of the world as such” (p. 22), and the human subject ‘discovers’ descriptions and truths of the world and expands human knowledge through such a process. Von Glasersfeld (1984), however, is not concerned about whether the objective world exists or not, hence his model is a ‘radical’ departure from traditional epistemology in that he champions only the intentional subject/mind and its ability to continuously evaluate and generate more and more ‘useful’ knowledge through interactions and experiences – a rational and behaviorist approach that influenced the development of cybernetics, and vice versa (Gleick, 2011). In doing so, however, his theory supports a division between ontology and epistemology, and perpetuates the primacy of the subject, as the focus on the truthful and real in ontology becomes superfluous once the world is construed as constructed through the interactive process of knowledge-acquisition.
13 Such as Xenophanes and the debate of episteme versus techné. The same ‘techné’ that Heidegger drew from, but here the connotation veers from ‘craft’ into the direction of ‘practice.’
Dreyfus’ (1991) Heideggerian critique of Husserl’s Cartesian project, transcendental phenomenology, is directed precisely against such a division that elevates the subject/mind over the object/world/body. In his account, Husserl’s project is a culmination of Cartesian impulses, as its ‘intentional objects,’ the phenomena to be excavated through the transcendental reduction process of ‘bracketing,’ presuppose that the world is a set of discoverable facts, objects, and bits, ready to be encoded, enumerated, and taxonomized. In other words, Husserl’s intentionality theory of consciousness (where consciousness is always ‘directed’ at an object) is predicated on the epistemic model that positions the self-sufficient and knowing subject as the rational mind that creates knowledge of the knowable world of objects. Drawing from Heidegger, however, Dreyfus lays out a different epistemic model, one that refutes the framework in which subjects create adequate representations of the world (a homomorphic match, in von Glasersfeld’s words) and instead holds that the subject is simply embedded in the world, being-in-the-world, complicating such a division. Dreyfus (1991) champions this overturning of the “traditional claim that the basic relation of the mind to the world is a relation of a subject to objects by way of mental meanings” by challenging “both the Platonic assumption that human activity can be explained in terms of theory and the central place the Cartesian tradition assigns to the conscious subject” (p. 3). Importantly, Peter-Paul Verbeek (2005) points out that through the non-transcendental focus on being-in-the-world, Heidegger paves the way for later materialist philosophies that emphasize ‘things’ (p. 95). Where this critique is most vocal is in Dreyfus’ (1992, 1998, 2007) long-standing work on the philosophy of artificial intelligence.
Seemingly tangential to the current discussion, his work on AI combines epistemology and an examination of AI approaches into a compelling critique of the assumptions underlying AI research that drew from the epistemic model described above (the information-processing model, which stemmed from traditional epistemology), thereby emphasizing a link between such a model and digitality/information. Relevant to the present study, the link extends the argument outlined in the previous chapter, that ICT is structured around and sustains a specific way of knowing – the logic of commensurability. Specifically, Dreyfus (1998) insists that the language of computation that underlies early AI research cannot account for the everyday phenomena of the world, with its ambiguity and context-specificity. Golumbia (2009) echoes this skepticism when he insists on the distinction between ‘natural language’ and precision/formula-based computing language, a distinction that seems to be lost on fields such as natural language processing, voice recognition, and data-mining. Contemporary AI in forms such as predictive analytics, machine learning, deep neural networks, natural language processing, and image recognition is evident in all forms of digital media that one might encounter on a daily basis (Collins, 2018; Koopman, 2019). The image of the sentient cyborg or singularity that AI might conjure up is not only inaccurate but also ‘disguises’ the pervasiveness of AI. The logic of AI is ingrained into every working algorithm: given a certain set of conditions, another set of functions may follow (while some are designed for less specified goals, they still operate within certain parameters, given the pattern or training data observed), such that the resulting data analysis will come to represent a model for some worldly phenomena (such as human preferences and desires).
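The pattern just described, that given a set of observed conditions an algorithm derives a data-based representation that comes to stand in for human preferences and desires, can be sketched in a deliberately trivial form. All names and data below are hypothetical, invented purely for illustration; no actual recommendation system is this simple:

```python
# A toy sketch of algorithmic profiling: a stream of behavior (clicks) is
# reduced to a frequency model, and that model, not the person, then answers
# the question of what the user 'wants.'

from collections import Counter

def build_profile(click_history):
    """Reduce observed behavior to a frequency distribution: the user as data."""
    counts = Counter(category for category, _ in click_history)
    total = sum(counts.values())
    return {category: n / total for category, n in counts.items()}

def predict_preference(profile):
    """Infer a 'desire' from the data-based representation alone."""
    return max(profile, key=profile.get)

# Hypothetical click history: (category, item) pairs.
clicks = [("sports", "article1"), ("politics", "article2"),
          ("sports", "article3"), ("sports", "article4")]
profile = build_profile(clicks)
predicted = predict_preference(profile)
```

The sketch makes the chapter’s point legible: whatever context, ambiguity, or embodiment surrounded those clicks is absent from the profile, yet the profile is what the system acts upon.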
Considering the lofty goal of AI in its nascent days, to represent human consciousness, it is a fitting terrain in which to explore the conjunction of the logic of commensurability and its claims with the question of knowing. If the world is understood to consist of information, then of course the mind has to be an information processor. The epistemic assumption here is that the mind comes to know by processing information (which, as the previous chapter outlined, operates by differences/binaries). The field of AI is thus the ideal arena to demonstrate and explore this assumption. Dreyfus’ (1991, 1992, 1998, 2007) work illustrates a crucial point that the present chapter belabors in making: that information (like all representation) is incapable of adequately representing the world and its phenomena, and that the claim that the digital binary (and other such systems of binaries) can represent everything needs to be questioned. An unwelcome figure in the early AI community, Dreyfus (1992) began his work by questioning the basic assumptions of the very first AI projects, what progenitors Newell and Simon proclaimed in the 1950s: “that a computer’s string of bits could be made to stand for anything … based on the Cartesian idea that all understanding consists in forming and using appropriate symbolic representations” (pp. x–xi). Dreyfus refers to the idea that “representing the fixed, context-free features of a domain and the principles governing their interaction explains the domain’s intelligibility” (p. xvii) as ‘representationalism,’ an epistemic model that he locates in certain dominant schools of philosophy as well as cybernetics, especially in the assumption that everyday understanding can be reduced to a database of propositional knowledge. Contrary to such claims, he questions “that the shared world … could be represented as an explicit and formalized set of facts” (1998, p. 283).
Drawing again from Heidegger’s concept of understanding-of-being, Dreyfus (1998) argues that “the everyday context which forms the background of communication is not a belief system or a set of rules and principles at all … but is rather a set of social skills, a kind of know-how, any aspect of which makes sense only on the rest of the shared social background” (p. 285), negating any conceptualization where the subject is a floating mind, detached from the world while keen on making sense of it. Golumbia (2009) makes a similar argument. Refuting the assumption that “everything … might be constructed from the algorithmic passing of information” (p. 19), he also argues against the assumed universal representational capabilities of computation “because we are material beings embedded in the physical and historical contexts of our experiences, and it turns out that what we refer to as ‘thought’ and ‘language’ and ‘self’ emerge from those physical materialities” (p. 20). In fact, despite attempts from the AI community to create a vast enough set of micro-worlds that would match real-life scenarios, Dreyfus (1998) insists that it is a futile task, as it is not simply an issue of collecting/creating enough data; rather, the centrality of the data-collecting subject is suspect in the first place. This brings to mind the theorization of deconstruction from Jacques Derrida (1978), who upends structuralism by declaring that totalization is impossible not because the field is infinite and one simply needs more time and better instruments, but because the field is characterized by infinite play, such that the primacy of the subject/centre needs to be questioned. This is echoed in the recent AI scholarship of sociologist Harry Collins (2018), who likewise insists that the current trajectory of AI research will never be able to achieve actual human-like intelligence due to the fact that it is not embedded in the social and contextual world.
One of the latest forms of AI, “deep learning is pattern recognition by machines based on enormous number of examples” (p. 117), where programmers ‘train’ the program in supervised and unsupervised ways. Against the perspective of Google’s Director of Engineering and famed post-human technologist Ray Kurzweil, who believes that the human brain is simply a vast set of precedent-based pattern recognizers no different than current models of deep learning, Collins insists that our sense-making is only possible because we are embedded in the world, rather than detached observers collecting data from the ‘outside.’ For him, knowledge comes from interacting with others in the world, in a multitude of implicit ways which cannot be encoded through pattern recognition and predicted by training data. In this framework, know-how precedes knowledge/knowing-that, or perhaps the former is the necessary condition for the latter. Or as philosopher Simon Critchley (2009) writes, “Heidegger’s basic claim is that practice precedes theory” (para. 7). Knowing, then, cannot be separated from being, and questions of epistemology cannot be separated from ontology, as being is a pre-requisite of knowing. As philosopher Bryan Magee (1987) says in conversation with Dreyfus on the matter, one might say we are coping beings, or ‘being beings,’ rather than primarily knowing beings. At a distance from the field of AI and digital media but treading very similar conceptual ground, anthropologist Tim Ingold’s (2013) book Making makes a strong case for the inextricable relationship between knowing and doing/being. Eschewing ethnography, Ingold outlines a kind of knowing from the inside, against data-collection from a detached outsider perspective, which, for him, unethically separates knowing from being (p. 5).
Ingold works his way through ideas of learning through doing, corresponding with materials hands-on, and the highly contingent and open-ended process of making (such as art), and arrives at a thesis that seeks to move beyond the stable and fixed containers of subjects and objects, persons and things, to a flux of material flow and energy correspondences. In conceiving of people and consciousness on the same plane as objects and materials (all as interacting and animating matter), Ingold attempts to do away completely with the binary distinctions between theory and practice, thinking and doing, subject and object, and instead insists on their porosity. Quantum physicist and feminist scholar Karen Barad (2007) advocates a similar perspective through her model of ‘agential realism,’ where she proposes “a new understanding of how discursive practices are related to the material world” (p. 34). Her model confounds the distinction between nature and culture, human and nonhuman, negating what she refers to as a Cartesian representational model where methods of representation are assumed to represent the objects of the world faithfully and neutrally. Ambitiously, her critique aims at everything from language’s claim to represent objects, political parties’ claims to represent people, to scholarly theories’ claims to represent the world, etc. I would make the argument that information’s claim of commensurability can be filed under this category as well. The term ‘data-based representation’ takes its usage of ‘representation’ precisely from Barad (2007) and Dreyfus (1992). Both scholars use the term to denote a post-Cartesian tendency in theories of knowledge to assume one creates self-sufficient models of the world and comes to understand the world through such models.
This is corroborated by psychologist Robert Epstein (2016) as he argues against the information-processing model of the brain, where it is assumed that information is stored in the brain, referring to this critical view as ‘anti-representational.’ The presumed totality and neutrality of these representations – a homomorphic relationship between the subject’s models/descriptions and the actual objects/phenomena of the world – is what is being undercut. Barad (2007) proposes instead a ‘performative’ model of looking at how fluid ‘agents’ are always entangled and interact (or intra-act, to use her preferred terminology) with one another, both human and nonhuman, similar to Ingold’s model of how matter interacts and animates one another. Like Ingold, she argues that “knowing does not come from standing at a distance and representing but rather from a direct material engagement with the world” (p. 49), where even “theorizing must be understood as an embodied practice” (p. 54), negating a strictly realist perspective held by scientists and the linguistic/discursive focus of social constructivists and post-structuralists. In doing so, Barad also eschews the distinction between theory and practice, words and things, discourse and materials, subject and object. Thoughts, concepts, and knowledge also consist of physical materials. In this model, “knowing is not a bounded or closed practice but an ongoing performance of the world” (p. 149). Gottlieb’s (2018) theorization of digital materialism is instructive here, for it aptly combines the polemic of AI and its decontextualization outlined by Dreyfus with the new materialist teachings of Barad: “all data is a kind of abstraction … in the Cartesian sense, in a perfect artificial consciousness divorced from context” (p. 14). 
But digital media, like all human operations, exists within the plurality and ongoing intra-activity of matter, within nature, and thus the imperative is to seek beyond the reduction-to-form of information and, by extension, the socio-political configurations of the information society. Curriculum theorist Ted Aoki (1981) offers a similar point when he laments the empirical and objective approach to educational thought, noting that “within this perspective, assumed is the split between … the knower and the objective reality out there” and “that the only acceptable form of understanding is knowledge of facts and theory” (p. 224). Education scholar David Lusted (1985) speaks to a similar gap between abstracted theory and actual pedagogy when he argues against such a model that assumes knowledge can be transmitted abstractly, without taking the embeddedness and contexts of learners into account. Like Dreyfus, Eisner (1985) attributes this hierarchy in knowledge to Plato, who privileges the rational over the material, the mind over the senses, in the quest for absolute and immutable knowledge. For Plato, “the more we advance towards the abstract the more we achieve episteme” (Eisner, p. 30). Such a way of knowing, Eisner argues, persists to this day and is evident in the privileging of theory over practice and mind over body. Importantly, as Asad Haider (2017) reminds us, there is precedent for such an argument, and the strongest historical case made against the Cartesian binary would be that of Baruch Spinoza (1677/2018), who, with his concept of ‘neutral monism,’ argued that “mind and body are the same substance … the intellect is inseparable from concrete practice, and therefore rationality is the effect of material relations” (Haider, para. 15).
This section has spent some time elaborating on the question of knowing, by reiterating that ‘ways of knowing’ refers to the different lenses through which one comes to make sense of the world, the epistemological fields/regimes that foster and legitimate certain truths, assumptions, and norms. Following the scholarship of Dreyfus (1991), Ingold (2013), Barad (2007), and Gottlieb (2018), it has been argued that one such dominant epistemic model has been the Cartesian model, one that highlights the close relationship between information and traditional epistemology. Dreyfus in particular argues that such a model has its legacy in contemporary discourse on information and AI, and against such a decontextualized representational model, these scholars propose an alternative that is focused on context, contingency, embeddedness, and materiality. This conceptualization sets the stage for one to move away from the information-processing model, to explore alternative ways of knowing, and to investigate how certain artworks can engage with this nexus of knowing, art, and digital media.
3. Knowing through Information: Two Case Studies
Having elaborated on how knowing is conceptualized, the next sections examine two instances of how artworks interrogate the implications ICT has on ways of knowing. As Microsoft researcher Kate Crawford and geographer/artist Trevor Paglen (2016) have pointed out, the algorithm as a filter through which one comes to access and consume the world, despite its veil of neutrality, is very much enmeshed within the particular socio-political positions of those who are doing the coding, training, and implementing. Going back to Feenberg’s (1999) point above, in this case it is the algorithms that are promoting and sustaining the normalized and invisible technical code which evades scrutiny. “Neural networks cannot invent their own classes; they’re only able to relate images they ingest to images that they’ve been trained on.
And their training sets reveal the historical, geographical, racial, and socio-economic positions of their trainers” (2016, para. 19), Paglen writes regarding deep neural networks, a specific type of machine learning algorithm. In particular, he argues that such a veil of neutrality and the quest for more data create more openings for capitalist and state influences, perpetuating existing inequalities. This concern is taken up by the first case study. In addition, while there is no shortage of instances in the recent past of algorithms displaying troubling tendencies, such as image recognition software categorizing African-Americans as gorillas (Noble, 2018), Asians as having their eyes closed, and innocuous beige surfaces as flag-worthy human nudity on social media (Steyerl, 2014), Paglen (2016) proposes that the problem is not even that the algorithms are ‘racist’ but rather, as he puts it, “it’s that the idea of quantifying human activity in the first place, I find very violent” (para. 53). In the section above, Dreyfus articulates his rigorous argument against the claims and capacity of AI and its Cartesian legacy, stating that such an approach grossly decontextualizes the embodied being that is situated in the world and, as such, is inadequate at forming the models/representations of the world from which meaning-making supposedly happens. More recently, Crawford (2016) offers the same criticism of machine learning, in that these operations are designed only to recognize patterns and make inferences from them. Understanding these patterns, however, is not something they can ‘learn.’ Questioning the very claim of data-based representations is taken up by the second case study below. In light of what has been outlined above with regard to ways of knowing, I am making the argument that the artists and their projects below cultivate ways of knowing that seek to move beyond the ‘representational’ model condemned by Dreyfus, Ingold, and Barad.
Specifically, their projects challenge the logic of commensurability promoted by informatics, the informational capitalist regime within which it operates, and some of the values and truths they impart to the general population, such as the assumptions that information and informational systems are neutral, that like all technical devices they are simply means to an end (a functionalist/instrumental perspective), and that all of the world and its phenomena can be adequately captured by informatics.

5. What Does the Search Engine Know?

Mark Poster (1990) summarizes that enthusiasts of the information society often have some version of the following Enlightenment fantasy in mind when celebrating the utopic, educated, and egalitarian society where all information is available to all: that all phenomena can and will be encoded digitally, after which they will be open and free for all to access with no restrictions whatsoever. In this account there is no mention of who does the encoding, who will be responsible for categorizing and managing the body of information, or who designed the protocols by which to access this information. One such protocol that has become indispensable to the process of accessing information (and the absence of which is unthinkable to the average individual) would certainly be the search engine. It is an application that one utilizes within the browser to search for information one wishes to access. In many ways, it is synonymous with searching the Internet, or with using the Internet at all. To be able to yield search results, search engines have to ‘order the Internet,’ a process that is opaque in its precise details, varies depending on the individual proprietary search engine (Yahoo’s is different from Google’s, for example), and is susceptible to being gamed, manipulated, and purchased.
The last point, in fact, has been so consequential for the financial and social success of any corporation that it has spawned an entire industry, with its own set of knowledge and qualifications, known as Search Engine Optimization (SEO). It is an industry whose very existence announces the fallibility of a tool that at one point had, and perhaps still retains, an illusion of objectivity and neutrality – something that simply gives the user the ‘accurate’ information according to some universal taxonomy and assessment of relevance. Such a tool is, of course, baked in with the biases of its programmers, in addition to serving the commercial ends of the search engine companies. As information science and critical race theory scholar Safiya Umoja Noble (2018) writes, “Google is an advertising company, not a reliable information company” (p. 5), one that shockingly maintains the position that “it is not responsible for its algorithms” (p. 6). If the search engine is the main interface through which one accesses information and knowledge in our society, then it is no exaggeration to say that, as a piece of ICT, it exerts enormous influence on how and what one comes to know about the world and the phenomena within it. “Search does not merely present pages but structures knowledge, and the results retrieved in a commercial search engine create their own particular material reality” (p. 148), constituting a way of knowing. As data historian Daniel Rosenberg (2013) writes, “we need to pay better attention to the epistemological implications of search, an entirely new and already dominant form of inquiry … with its own notable blind spots both in design and use” (p. 35). Unlike search engines that crawl and categorize the Internet in an attempt to slowly create an accurate map, Google utilizes the PageRank algorithm, which seeks to ascertain the ‘quality’ of a page based on the frequency with which other pages have linked to the page in question (Pasquinelli, 2009).
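The link-based principle described above can be illustrated with a minimal sketch. This is only the iterative core of the published PageRank formula, not Google’s production system, which combines many additional, undisclosed signals; the toy ‘web’ below is entirely hypothetical. The sketch makes the self-reinforcing dynamic visible: a page that is already widely linked to accumulates still more ‘quality.’

```python
# Minimal sketch of the iterative principle behind PageRank: a page's score
# is fed by the scores of the pages linking to it, so already-popular pages
# accumulate still more 'quality'. (Illustrative only; the production
# algorithm combines many additional, undisclosed signals.)

def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new_rank = {}
        for p in pages:
            # Sum the rank flowing in from every page that links to p.
            incoming = sum(rank[q] / len(links[q])
                           for q in pages if p in links[q])
            new_rank[p] = (1 - damping) / n + damping * incoming
        rank = new_rank
    return rank

# A tiny hypothetical web: everyone links to 'hub', so 'hub' dominates.
web = {
    "hub": ["a"],
    "a": ["hub"],
    "b": ["hub"],
    "c": ["hub"],
}
ranks = pagerank(web)
assert ranks["hub"] == max(ranks.values())  # the already-linked-to page wins
```

The feedback loop is precisely the structural bias discussed in this section: the measure of ‘quality’ is recursive popularity, so the ranking rewards whoever is already dominant.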
Along with a number of other signals, the exact content and combination of which keep changing, PageRank’s formula both challenges and sustains the SEO industry. This aspect of the Google search engine’s function might be seen as user-driven and user-centric, where the tool might even represent and respect the dynamic knowledge fields that keep changing according to the activities of netizens. Scholars of information science and technology Lucas Introna and Helen Nissenbaum (2000) have argued otherwise, reporting that such a system of ranking can easily be manipulated and privileges those who have the resources to fund SEO initiatives or the technical knowledge to conduct these tasks themselves. In practice, then, “the diversity of viewpoints represented on the web is liable to be substantially curtailed rather than broadened” and “these technologies conduce to the reproduction of narrow frameworks for knowledge-gathering on the web” (McGahan, 2008, pp. 26–27). This is, in many ways, a structural problem, one that favors those who already hold a large audience and have the means to invest in getting more – in other words, a problem of centralization and consolidation of power. In 1999, in the context of the UK’s attempt to rebrand itself as a tech-savvy global competitor, the artist group Mongrel launched the project Natural Selection. The project looks and functions just like a regular search engine. It involved the hacking of a search engine, carried out by Graham Harwood and Matthew Fuller, with their program functioning as a sort of add-on to the engine. Unbeknownst to the user, using this search engine to access information may yield results that were strategically implanted by the artist group when certain key words or topic areas are touched upon. The artist group’s research and practice are heavily invested in hybridity and race politics, as its name implies, and part of this project continues this trajectory.
In this case, Mongrel designed the hack so that once certain key words are used in a search, the results will contain one or several websites specially created for the project by fellow artists, theorists, activists, musicians, and poets, such as Hakim Bey, Critical Art Ensemble, Mervin Jarman, Stewart Home, Daniel Waugh, Richard Pierre-Davis, and Dimela Yekwai. The websites themselves range from audio-visual artworks and experimental poetry emphasizing the persistence of racial dominance and using hybridity as a counter, to mock versions of official government websites providing alternative and previously-suppressed information. As communications and media scholar Graham Meikle (2003) describes:

The idea is that anyone searching for, say, neo-Nazi writings would find themselves instead at a site which ridiculed their views. But the search engine is not only activated by searches for what could be considered offensive key words. Let’s take ‘cats’ as a harmless example. When we run our search, the engine retrieves a list of matches to cat-related sites – all of which are aliases for links to Mongrel’s own content. Click on our cat choice and we might find ourselves reading a self-assessment questionnaire to gauge our chances of being accepted for immigration into the UK … Other destinations include an article about neo-Nazi rock bands by post-Situationist author Stewart Home and an essay about Islam and globalization by Hakim Bey. (p. 109)

While the project was conceived in 1999, its relevance and urgency are still very much current, evident in recent scholarship such as Noble’s (2018) work on the “power of algorithms in the age of neoliberalism and the ways those digital decisions reinforce oppressive social relationships and enact new modes of racial profiling” (p. 1). The project, especially in its host of alternative websites, is rich for the analysis of race, immigration, and identity politics.
This, however, is outside the scope of this study; considering the perspective that “the work should be seen as both antiracist intervention and Internet art devoted to search technology critique” (McGahan, 2008, p. 29), the present section focuses on the latter: how search engines index the Internet into the epistemological terrain as we come to encounter it. While part of the project intends to thwart the expectations of users who, for example, were looking for right-wing materials but found anti-racist literature instead – which gives presence to subjugated voices and places race politics at centre stage – the project is also very much about the very specific ways information is ranked (and manipulated), and how it may end up reinforcing the dominant regimes of power. While pundits might argue that these corrupted results are only rare hacktivist pranks, the point is of course that interventions into the standard operation of search engines are the norm, that engines are susceptible to manipulations that magnify existing distributions of power. As Harwood (n.d.) says in an interview, “the idea is to … start eroding the perceived neutrality of information science type systems. If people can start to imagine that a good proportion of the net is faked then we might start getting somewhere” (para. 2). It is, after all, a space of commerce, which Fuller (n.d.) describes as “proprietary culture masquerading as social resource” (para. 6). Even without the insidious assistance of commerce via SEO, the system still favors those who already have the attention of the majority – the system is designed to favor the dominant voice. Natural Selection tackles precisely the built-in mechanism of search engines that privileges and perpetuates the status quo, pointing out “that their methods of selecting and presenting results mask intrinsic biases towards the world views of the kinds of users that their programmers imagine” (Meikle, 2002, pp. 109–110).
This, of course, creates a cycle where sites with popular appeal receive more backlinks, which drive up their rank at the expense of other sites that may not have the same economic resources or popular appeal – the noise, the edge cases. “Classificatory structures are developed by the most powerful discourse in a society. The result is the marginalization of the concepts outside the mainstream” (as cited in Noble, 2018, p. 140). In the attention economy, those that possess a surplus of attention value generate more value, while those that do not command as much attention are deemed of less value, which perpetuates their status. Such a mechanism of sorting and defining relevance is pernicious. As Fuller (2003) writes in relation to the programming of information access, “when most of this work is done by a closed culture of proprietary specialists, finding ways in which this process can be opened up … becomes even more imperative” (pp. 86–87). In addition to the fact that the Google search engine has a design that favors the dominant social norms and can be gamed via SEO, there is another factor that predisposes it to shape and perpetuate ways of knowing – its tendency to create echo chambers/filter bubbles. Net activist Eli Pariser (2011) writes about the filter bubble in detail: a phenomenon that stems from personalization algorithms utilized by Facebook, Google, Netflix, and news sites to tailor what one sees on the Internet based on what the algorithm ‘thinks’ the user would like to see, which is itself based on a combination of demographic information, a history of demonstrated interests, and websites visited. These ‘prediction engines’ are “constantly creating and refining a theory of who you are and what you’ll want next” (p. 128). Here, the logic of commensurability is encapsulated in the assumption that algorithms can and do represent human desires, behaviors, and identities.
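The feedback loop behind the filter bubble can be sketched in a few lines. This is a hypothetical scoring scheme of my own construction, not the proprietary logic of any of the platforms Pariser names; it shows only the structural point: when items are ranked by similarity to past clicks, each click narrows what the user is shown next.

```python
# Toy sketch of a personalization filter of the kind Pariser describes.
# Items tagged like the user's past clicks rank higher, so every click
# narrows the feed -- the filter-bubble feedback loop. (Hypothetical
# scoring; real 'prediction engines' are far more elaborate.)

from collections import Counter

def personalize(items, click_history):
    """items: dict of item -> set of topic tags; click_history: list of items."""
    # Build a profile: how often each tag appears in the user's history.
    profile = Counter(tag for item in click_history for tag in items[item])
    # Score every item by its overlap with the profile; highest first.
    return sorted(items, key=lambda item: sum(profile[t] for t in items[item]),
                  reverse=True)

catalog = {
    "cat video": {"pets", "fun"},
    "kitten gifs": {"pets", "fun"},
    "climate report": {"news", "science"},
    "election analysis": {"news", "politics"},
}
feed = personalize(catalog, click_history=["cat video", "kitten gifs"])
assert feed[0] in ("cat video", "kitten gifs")  # more of the same rises to the top
```

Nothing in the scheme is malicious; the narrowing is a direct consequence of optimizing for predicted preference, which is precisely the structural critique at issue here.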
Media activist Astra Taylor (2014) calls these ‘algorithmic superegos,’ “systems that determine what we see and what we don’t see, channeling us toward certain choices while cutting others off … driven not by individual demand but by the pursuit of profit” (pp. 131–132). Not only are search engines structured so that those with the means to amplify their voices become more dominant, but they are also structured to cocoon users in their own perspectives, further entrenching the shallow diversity of the Internet. Noble (2018) points to the pernicious effects of the search engine’s non-neutrality and prescribed knowing in her example of the 2015 mass shooting in Charleston, South Carolina, that led to the murder of nine African-Americans. The shooter, a young white nationalist, revealed in his manifesto that his skewed perspective on American race politics was greatly influenced by Google search results, which often favored fake news sites that, while pretending to be a “viable news source or a legitimate social and cultural organization, operate as fronts for organizations such as the CCC, the Ku Klux Klan, and thousands of hate-based websites” (p. 116), due to the inherent mechanisms and commercial aspect of the search engine. If the search engine fosters a particular way of knowing, one that encodes, categorizes, and ranks worldly phenomena, and greatly restricts what one can know, Natural Selection interrogates the presumed neutrality of such ranking and ruptures the filter bubble by forcing the viewer to confront information they would normally never encounter digitally. To be sure, the returned search results are not random; they are carefully curated for the political charge of very specific agendas (e.g., returning suppressed reports of anti-immigration police brutality when one searches for information on the British Airway Authority), so one could argue the filter has simply been readjusted, not removed.
Nevertheless, Mongrel’s project points to the need for alternatives to the dominant index. I argue that its pedagogical potential can be located in Mongrel’s estrangement of search engines’ cataloguing practices, insisting on their imbrication with power structures and inserting counter-information into the platforms. Through estranging the device and highlighting its operations, Mongrel both ruptures the epistemological field of the standard search engine results by intervening with its own politically-charged content, and attacks the very operation of the device and its presumed neutrality as a knowledge-categorization mechanism. To use Feenberg’s terms, the project problematizes the underlying technical code of the search engine, the way of knowing it fosters. The project creates a pedagogical condition to think otherwise about search engines, where the user, confronted with search results that drastically deviate from the norm, is invited to consider the clandestine operations of knowledge-organization and information access. The medium is foregrounded through a procedure that turns the search engine (and subsequently, the digital landscape) into an anti-environment that probes the utopic promise of access to all information. ‘Information is free’ comes at a price. Mongrel’s project intervenes in the information economy by forcefully inserting its own information, content whose nature is usually overlooked or suppressed. Media philosopher Matteo Pasquinelli (2009) has argued for the need for “an alternative ranking system able to undermine the monopoly of attention economy and also the accumulation of value controlled by Google” (p. 159). Pasquinelli is mainly concerned with Google’s extraction of the general intellect rather than with a way of knowing that privileges the dominant, but the call for change remains equally relevant.
Although he is not entirely convinced of the success rate or ethics of a kind of ‘People’s Rank,’ there is definitely a need for alternative associations that would provide the conditions for more diversity in information retrieval and availability of marginalized information. As McGahan (2008) suggests, “the user of Natural Selection is thus invited to look more closely at both the … marginalization … of the critical knowledge that the project’s sites present, and the information technology supported protocols whereby this kind of marginalization is sustained” (p. 28). All the issues noted above with the search engine – the way ranking privileges those in dominant positions, the existence of an industry that allows ranking to be bought, the attempt to customize search results for users’ interests, behaviours, and preferences – have at their roots the logic of commensurability. The assumption here is not just that all the information on the web can and needs to be categorized and ranked, but also that relevance and quality can be adequately represented by algorithms, that a website’s pertinence to the user’s needs and interests can be represented and captured by Google’s ever-changing concoction of rules and tricks, one of which is the number of backlinks to the website in question. With the next artist group, the claim of information’s representational model itself is questioned.

6. The Cartographers and Counter-Graphs

In the book Weapons of Math Destruction, mathematician and data scientist Cathy O’Neil (2016) advances the argument that, despite the myth of impartiality and immunity to human judgement, the math-based algorithmic applications that drive the big data economy in many ways perpetuate discriminatory practices and inequality.
She gives several examples, such as crime prediction software that inadvertently targets neighborhoods of lower income, prison inmate surveys that disadvantage those with lower socio-economic backgrounds, and university ranking algorithms that privilege those who can afford to game the system. Echoing what Natural Selection invites the user to consider, O’Neil stresses that “the math-powered applications powering the data economy were based on choices made by fallible human beings … these models encode human prejudice, misunderstanding, and bias into the software systems” (p. 10). Aside from the fact that design is not innocent and very often ports over the perspectives of the dominant social order from real life to the algorithms, there is another point that O’Neil touches upon: the need to scrutinize these algorithms’ ability to represent what they claim to represent. They are, after all, only models, abstracted representations meant to capture, measure, and represent certain phenomena that are later mistaken for the phenomena themselves. As O’Neil reminds us, “to create a model, we make choices about what’s important enough to include, simplifying the world into a toy version that can be easily understood and from which we can infer important facts and actions” (p. 24). And of course, this stage of choice-making is where ideology seeps in, where politics are intentionally or unintentionally inscribed. To use Feenberg’s terminology noted earlier in this chapter, the choice-making that occurs at this stage of technological development is highly political, and will determine the dominant technical regime that prescribes and sustains specific norms, assumptions, and knowledge. Katherine Hayles (1999) warns of this process of ‘simplifying the world into a toy version,’ arguing that “when we make moves that erase the world’s multiplicity, we risk losing sight of the variegated leaves, fractal branchings, and partial bark textures that make up the forest” (p. 12).
She refers to this process as the ‘Platonic backhand,’ a reduction of the world’s noisy multiplicity into an abstract model. While this process has obvious benefits, and is also necessary from a logistical perspective, “the problem comes when the move circles around to constitute the abstract as the originary form from which the world’s multiplicity derives” (p. 12) – the Platonic forehand. Crucially, while the Platonic backhand is as ancient as any theory-making, the Platonic forehand can only be fully realized with the aid of advanced computation that allows algorithms to capture, predict, and generate worlds of human desires and behavior on an unprecedented level. The backhand is a process and discipline of creating the ideal Platonic forms – informatics – while the forehand is the logic that undergirds the big data economy – the assumption of commensurability and faith in the ability of such algorithmic models to represent what they claim to represent. In O’Neil’s examples, socio-economic background becomes a metric for determining whether an inmate is likely to commit a crime again, geographic (supposedly non-racial) information becomes a metric for predicting the likelihood of crime occurring in a particular neighborhood, browsing patterns and search engine queries become ways for machine learning to forecast user desire for goods and services, and information such as SAT scores, student-teacher ratios, acceptance and graduation rates, and alumni donation amounts become ways of measuring ‘educational excellence.’ In the last example in particular, O’Neil touches upon a crucial point that applies to all models and representations: they rely on ‘proxies.’ Certain phenomena are easier to quantify than others, such as the amount of money spent and the weight of goods transported, but phenomena such as happiness, friendship, and learning are much more challenging to quantify in a meaningful way.
Therefore, proxies are utilized – indicators that seem to be the next best thing, measurable values that can be operationalized and that are arguably indicative of the phenomena being measured. “However, when you create a model from proxies, it is far simpler for people to game it … because proxies are easier to manipulate than the complicated reality they represent” (p. 52), writes O’Neil (2016). Operating under the myth of informatics – opaque, inscrutable, and seemingly infallible – the results of these algorithmic models are taken as unshakable truths (regardless of how inadequate), which in turn have very tangible effects on certain populations. As her book’s title suggests, O’Neil wishes to emphasize the destructive powers of such irresponsibly-designed algorithms. The damage, in part, comes from the population’s faith in the two-part process that Hayles (1999) outlined earlier: the models’ operation of reducing noisy multiplicity into quantifiable bits, but also the supposed ability of these quantifiable bits to represent, predict, and generate a world as noisy and multiple as that which they reduced in the first place. A representational model of understanding the world, as Barad (2007) noted earlier in this chapter, is inadequate and does not take into account the ongoing and entangled performative intra-activity of materials. Under such an understanding, all models need to be recognized as reified and static slices of a constantly negotiating world; they have only temporarily cohered into a model in a particular space and time, under a specific discursive regime. All of this also echoes the scholarship of Dreyfus (1992), who has been asserting that AI cannot possibly map the entirety of worldly phenomena, as embodied instantiation in the world and all of its contextual messiness simply cannot be enumerated (something that Hayles would argue as well).
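O’Neil’s point about gaming proxies can be made concrete in a short sketch. The ‘excellence’ score, its weights, and its two proxies below are entirely hypothetical, not taken from her book; the sketch only demonstrates the mechanism: because the model sees nothing but the proxies, raising the proxies raises the score, whether or not the underlying quality changes.

```python
# Toy illustration of gaming a proxy-based model: a hypothetical
# 'excellence' score built from measurable stand-ins can be driven up
# by optimizing the proxies, without touching the underlying quality.

def excellence_score(school):
    # Hypothetical weights over two proxies (average SAT score out of 1600,
    # alumni donations in millions); illustrative only, not O'Neil's metrics.
    return 0.6 * school["avg_sat"] / 1600 + 0.4 * school["donations_m"] / 100

honest = {"avg_sat": 1200, "donations_m": 20}
# A 'gamed' institution recruits only high-SAT applicants and runs donation
# drives: the proxies rise while teaching quality is unchanged.
gamed = {"avg_sat": 1450, "donations_m": 60}

assert excellence_score(gamed) > excellence_score(honest)
```

The model cannot distinguish the two cases because, as O’Neil argues, the proxy has quietly replaced the phenomenon it was meant to stand in for.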
Both Lewis Carroll and Jorge Luis Borges have written stories on a similar theme, Sylvie and Bruno (1889) and On Exactitude in Science (1946), respectively. In these stories, overzealous cartographers create maps of territories in their kingdoms, making them ever bigger and more accurate, until eventually the map is as large as the kingdom itself. The stories are emblematic of the impossibility of perfect representation (or more specifically, perfect encoding into information) and the absurdity of the attempt. The world and its phenomena can only be represented by the world and its phenomena, with all of their noisy multiplicity and messy entanglement, continuously unfolding and performing. Maps, a way of visualizing data, are precursors of the infographics and data visualizations that permeate society today, which visualize algorithmically captured data and predict future data. Between 2013 and 2018, Richard Ibghy and Marilou Lemmens created, out of common craft materials such as bamboo sticks, string, and wire, a project called The Prophets (Figures 5.1 and 5.2), which consists of more than 500 small, extremely delicate, hand-crafted models of various forms of economic data visualization: bar graphs, pie charts, grids, line graphs, curves, etc. Utilizing a variety of media, the artists have a collaborative practice that often investigates the impossibility of translating lived experiences into symbolic systems of representation. The miniature models, looking very much like popsicle-stick art projects from an elementary school curriculum, are meticulously constructed and displayed on a long and minimal table, some standing and others lying flat. Situated beside each model are hand-written labels indicating the nature of the graphs’ content, such as the Kuznets curve, the optimal tariff, rays of hope and tangential interests, inflation predicted on the pure sticky information model, closed loop solution, etc.
As the title suggests, economic graphs and models, supported by the big data economy, capture, forecast, and generate information on the economic relations of all the various industries in the world, providing directions on efficiency, productivity, investments, and management. In claiming to represent market trends, the graphs also encapsulate all the myriad factors – material, behavioural, affective, political, social, economic, technical – that contribute to global trade, functioning as ‘proxies’ of these factors and all their complexities. In a framework of real subsumption, where all areas of the social, the everyday, are subsumed under capitalism, such graphs and models in effect attempt to map the world and all phenomena within it as factors of the global economy. As visual culture theorist Susan Buck-Morss (1995) reminds us, there is an inextricable link between the presumed rationality and objectivity of graphs and the mandate for surplus, efficiency, and productivity within capitalism, as historically it was data representations that, through visualizing the invisible, gave the nascent science of economics its legitimacy.

Figure 5.1, Richard Ibghy & Marilou Lemmens, The Prophets, 2013-2018, approximately 500 mixed-media models, produced in collaboration with Henie Onstad Kunstsenter, Høvikodden. Courtesy of the artists. Installation view: To refuse/To wait/To sleep (January 13 – April 9, 2017), at the Morris and Helen Belkin Art Gallery, University of British Columbia. Photo: Rachel Topham Photography.

But of course, Ibghy and Lemmens’ graphs serve no function for investors, managers, and economists. They impart no useful information, for there are no numeric values attached to these graphs, no means of deciphering them, no indexes to indicate what is represented by the x axis or the sections of the pie chart. Their delicacy is equaled by a sense of insignificance, their data forms stripped of any real data (and therefore, value).
The materiality and craft-ness highlight the forms: the chunks of glue joining the x and y axes, the imperfectly-cut bars of the graphs, the inexactly-cut sections making up the whole pie, the meticulous yet obviously hand-made nature of the construction. By estranging the data visualizations through the absurd act of making them by hand – stripping them down to their minimal forms and rendering them useless while emphasizing the materiality of what is by nature an immaterial process that abstracts and reduces – the project invites one to consider the impossibility for data representations to adequately represent what they have abstracted. If the economic graphs, plotting various algorithmically captured and generated data, are a reduction of the world (including the human and non-human) into an abstract model, perhaps one might say The Prophets reverses the process by insisting on the lived experiences and materials of being-in-the-world, contextual and situated. In other words, the former goes from the particular to the abstract, while the latter brings the abstract back to the particular. The project, then, is a collection of counter-graphs, whose idiosyncratic and dysfunctional nature exceeds the parameters of the summative and predictive algorithms, negating their function within the big data economy and neoliberal circuit. Considered as a way of knowing, the graphs and the algorithmic operations they visualize sustain the logic of commensurability, as well as the inscrutability and the myth of data’s infallibility that O’Neil noted above. This way of coming to know the world, through graphs, is negated by the handmade models, which exemplify Ingold’s argument noted earlier in this chapter: that one comes to know not from a (data) scientific ‘outside,’ but from within, from being situated within the complexities of a material world, from doing – from actually making the models by hand, in this case.
Perhaps ultimately the handmade versions point to the inadequacy of the economic graphs by highlighting their overlooked materiality: the fact that these data are abstracted from noisy environments, that they are decontextualized from bodily and material instantiations, that they inevitably omit the material entanglement of all the agents and networks on which such models are built. The argument that the work nevertheless functions within another economic system – the art market – despite the graphs now being useless, is not lost on the artists. In many ways, the work is a futile gesture: divorcing the graphs from their original value only to accumulate value again through artistic labour, trading in one set of information for another. Ibghy and Lemmens’ 2014 work Is there anything left to be done at all? addresses precisely this complicity of artists within the neoliberal demand for surplus through ever-increasing productivity (which, in information capitalism, leads to the production of informational goods and services). If one follows the perspective of artists such as Iain Baxter and Jack Burnham, who view artworks as information and artists as information producers (Cook, 2016), artists are certainly complicit with the information economy. Working within an understanding of this complicity, Ibghy and Lemmens nevertheless wish to both explore and resist the demand for productivity, investigating the possibility of being ‘unproductive’ by inviting several fellow artists to collaborate in ‘not making work.’ The project ultimately resulted in several video documentations of various mundane non-activities such as waiting, gesturing, making ad-hoc contraptions, interrupting themselves, and moving things around, while recognizing the paradox that they are producing artworks and therefore information and value.
If the artists are indeed doing what Veronique Leblanc (the curator of their 2016 exhibition Putting Life to Work) says – “to unlearn what work is” (as cited in McLaughlin, 2016, p. 106) – then in the age of information capitalism and the knowledge economy, this unlearning begins by examining how information operates. The notion of unlearning sets up the pedagogical potential of The Prophets. In order for labour/work to be productive, it has to be subjected to the logic of commensurability, whereby it gets translated (via information-processing) into information so that it can be manipulated and deployed algorithmically. I argue that the pedagogical potential of The Prophets lies in its questioning of precisely this assumption of encoding (that labour translates into information and therefore value), and the need to resist such neoliberal ends. While Is there anything left to be done at all? depicts various aimless ‘bodily’ labour acts, The Prophets takes aim at the operation of immaterial labour and how it is captured through data. O’Neil (2016) touches upon this when she says that “it may have seemed that industrial workers and service workers were the only ones who could be modeled and optimized,” but under information capitalism, where the immaterial can also be quantified and extracted with precision, “many companies are busy trying to optimize their white-collar workers by looking at patterns of their communications” (p. 115). She notes a case study in which the tech company Cataphora measured the value of workers by several metrics, one of which was their capacity to generate good ideas. For this, it used the proxy of how ‘viral’ a string of words from various sources of communication became, the idea being that if something was shared repeatedly through emails and other documents, it must be a good idea. In information capitalism, both the material and the immaterial are subjected to value-extraction through the common denominator of the binary digit.
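The ‘viral phrase’ proxy described above reduces to a simple count, which a short sketch makes plain. This is a hypothetical implementation of the general idea, not Cataphora’s proprietary system, and the sample emails are invented for illustration.

```python
# Sketch of the 'viral phrase' proxy: the quality of an idea is
# approximated by counting how many messages repeat a phrase.
# (Hypothetical implementation; the actual system is proprietary.)

def idea_virality(messages, phrase):
    """Count the messages containing the phrase -- the proxy for 'a good idea'."""
    return sum(phrase in m.lower() for m in messages)

emails = [
    "Let's pivot to a subscription model.",
    "RE: pivot to a subscription model -- agreed.",
    "FWD: pivot to a subscription model",
    "Lunch on Friday?",
]
assert idea_virality(emails, "subscription model") == 3
# The proxy's blind spot: a phrase can circulate because it is
# controversial, mocked, or merely forwarded, not because it is good.
```

The sketch also exposes the gap O’Neil identifies between proxy and phenomenon: repetition measures circulation, and only by assumption does circulation measure quality.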
Despite Ibghy and Lemmens’ efforts, expending labour to no end and stripping their graphs of useful data, our relationship to productivity is a fraught one, as whatever is produced in the age of digital sharecropping and data-mining inevitably accrues value and gets captured. The point that the artists might be making is that one cannot help but be productive in neoliberalism, that any unproductive act ultimately ends up being productive, as anything that is produced, any information, somehow ends up being subsumed under information capital, injected into the market. If one has to invest labour and produce information, at the very least it could be counter-information. Ibghy and Lemmens’ work has been described by artist, theorist, and curator Lorna Brown (2017) as an “exploration of subjectivity in contemporary global capitalism … the ways in which experience cannot be captured by these [economic] systems however much they are internalized and acted out in everyday personal lives” (p. 33). Perhaps art practices inevitably contribute to the labour demands of neoliberalism, especially cognitive labour. Nevertheless, The Prophets highlights the inability to reduce the very act of making these small sculptures (and by extension, other embodied acts and phenomena within the social landscape) into the graphs of algorithmic operations on which these sculptures were based. Perhaps these handmade models are poor renditions of the actual graphs, but so too are the graphs inadequate representations of the world’s flux of phenomena. If, as Brown writes, data visualizations are the dominant means through which worlds are ‘known,’ the non-productive information of the counter-graphs ruptures the epistemological mode of the graphs and models. What does such rupture invite one to think about when one encounters the artists’ counter-graphs? 
As Ibghy says in a 2014 interview with Canadian Art, “the inherent failure of a model … to contain the complexity and richness of human experience is very important in our work. We try to materialize, through the body and through action, a lot of these abstract principles, and we tend to gravitate towards what falls through the cracks” (Cooley, 2014, para. 30). The seminal design and infographic theorist Edward Tufte, according to data historian Daniel Rosenberg (2015), claims that visual representations of data reveal the true relationships waiting to be deciphered (reminiscent of traditional epistemology). The visualizations of these models/proxies make it possible to think clearly about the phenomena being analyzed, or so the argument goes. Rosenberg proposes that “things become considerably more difficult, however … if your analytic goal is to complicate rather than simplify, to open multiple avenues of inquiry, and most important, to challenge the stability of underlying data” (p. 39). Tufte’s principles for ‘graphic elegance’ comprise nine points, arguing that good data representations must show the data, induce the viewer to think only about the substance rather than the form (technology, design, method, etc.), reveal the true data, and simplify complexity, among others. Rosenberg counters with the principles of a ‘graphic critique,’ many points of which are exemplified by The Prophets. In a way, Ibghy and Lemmens’ work is devoid of content, or at least positions the ‘form’ as the content, much like McLuhan’s (1964) idea that the medium is the message. In doing so, their work aligns with Rosenberg’s alternative principles: showing the graphic/form; inducing the viewer to think about the design, technology, and aspects of production and consumption instead of just the data; highlighting the manipulation and non-neutrality of data instead of treating it as truths to be revealed; and pointing to the implications of data representations. 
Figure 5.2, Richard Ibghy & Marilou Lemmens, The Prophets, 2013-2018, approximately 500 mixed-media models, produced in collaboration with Henie Onstad Kunstsenter, Høvikodden. Courtesy of the artists. Installation view: To refuse/To wait/To sleep (January 13 – April 9, 2017), at the Morris and Helen Belkin Art Gallery, University of British Columbia. Photo: Rachel Topham Photography.

Rosenberg’s critique of infographics succinctly echoes Hayles’ (1999) point about the reductive nature of informatics and O’Neil’s (2016) point about the inadequacy of algorithmic models, pitting the supposedly efficient and concise nature of the pertinent unit of information against the entangled phenomena in embodied contexts. Rosenberg (2015) argues that the supposed clarity can in fact obscure existing conflicts, contradictions, and differences, stripping multiplicity down to an illusory coherence. As he reminds us, “anyone who uses data knows that clarity comes with trade-offs in many dimensions, not least of all historical … when we transpose historical data from one graphic to another, we lose aspects of both context and content … we lose embeddedness” (p. 57). Perhaps this is precisely what ‘falls through the cracks,’ as Ibghy phrased it: the excess and noise that is absent from such models, due to omission and marginalization (the design biases) or plainly because such models cannot account for the slippery and contingent flux of phenomena that is beyond quantification (be it the imprecise and useless handmade graph-forms or the series of gestures and activities that led nowhere). 
Following Hayles (1999), there certainly are benefits to such models and the reduction they enact, but there is also the cost of understanding the world through (inevitably) biased and reduced models, a cost that is often overlooked in an epistemological and economic model rooted in the logic of commensurability, where such reduction is necessary for the big data economy to operate within information capitalism. As Dreyfus (1992) insists, the way of knowing through representations – the logic underlying AI and informatics – neglects the situatedness of being-in-the-world and is therefore always insufficient. Perhaps insufficient in the Derridean (1978) sense of ‘lack,’ where the supposedly self-sufficient and self-governing sign/structure (or to go further, data-representations and cybernetic systems) is actually marked by the lack of a centre and guarantor of meaning (the floating signifier), invoking the need for the ‘supplement,’ the unaccountable excess and nebulous surplus that cannot be captured, mapped, or quantified by such structures/models. This idea of the supplement returns in the next chapter in the discussion of ‘originary technicity.’ Tapping into this unaccountable excess as an artistic tactic is precisely what estrangement can accomplish (while, of course, recognizing the danger of submitting this surplus to the instrumentality of the productive market). By estranging one of the forms of data representation (be it graphs, databases, or search engines), the artists invite one to think otherwise by highlighting that which falls through the cracks (oppressed and marginalized information in the case of Mongrel and the possibility of uncaptured and non-productive information for Ibghy and Lemmens), the noise of embodied contexts and performative materials.

7. Conclusion

This chapter has continued the argument introduced in the previous chapter that positioned the digital, or information, as a way of knowing within contemporary information capitalism. 
The chapter expanded on the argument by elaborating its position on technology as hybrid assemblages, against technological determinism. This was followed by a deeper analysis of ways of knowing, specifically the dominant Cartesian model and its legacy in the information-processing model. Against such representationalism, alternative onto-epistemological perspectives were advanced, ones that highlight embodiment, context, contingency, materials, and the irreducibility of worldly phenomena to restricted and abstracted models/proxies/representations. This irreducibility of embodied contexts and the inadequacy of information to represent the world was explored through two artworks, both pointing at that which cannot be contained within the current configurations of information systems. Mongrel’s hijacking and rupture of search results with their various politically-charged alternative content resonates with section 2 and Feenberg’s (1999) articulation that technological assemblages are not neutral and that their technical code could always be otherwise, while Ibghy and Lemmens’ highly material and embodied counter-graphs of an abstract and decontextual process respond to section 3 and O’Neil’s (2016) and Hayles’ (1999) points about the inadequacy of representations. Both projects assert adversarial positions in relation to dominant information mechanisms to attenuate the claims that information is a neutral and adequate representation of the world. Moving forward, the next chapters continue the perspectives outlined thus far, which undercut the anthropocentrism of the rational subject within the binaries of subject/object, mind/body, and abstract/embodied, and emphasize the latter part of each of these binaries, which is often overlooked but inevitably persists and ruptures through the abstracted, closed, and supposedly stable models/representations. In particular, the next chapter focuses on the materiality of mediation. 
Chapter 6: The Materiality of Mediation

According to Peter-Paul Verbeek (2005), one of the main arguments of classical philosophy of technology is that it presumes technology is predisposed to control and domination, such that the goal should be an attempt to free oneself from the alienated state that technology induces and to reach an unmediated human state (here he is referring to the theorization of Karl Jaspers). If one follows the socio-political stakes outlined in Chapter 4, where the ICT assemblages of information society foster a way of knowing that presumes the world can be subsumed as information and exploit the users’ digital labour, then it seems reasonable that the goal would be to rid oneself of the debilitating mediation of the ICT operations of the big data economy. However, as Gottlieb (2018) writes from a materialist perspective, “technology is not separate from or invented by humanity but already emergent in the relation between ourselves and the world” (p. 146). This chapter rounds out the argument initiated in Chapter 4, anticipating the rebuttal that one should simply stop using or remove the digital medium altogether, by advancing the argument that one is always already mediated, and that, as such, the aim of media criticism and media art should be an examination of the often-neglected medium itself. In doing so, it also continues the argument against the binary of the rational subject of informatics abstracting from and dominating the embedded world. The notion that technology is independent of the human and other non-human elements of the world cannot be supported if one follows the theoretical perspectives of originary technicity and digital materialism, both of which negate the separation of the human subject and the technological object, mind and body, human and non-human, episteme and techné. 
By asserting that we have always been mediated, and that the digital must be embedded within the material, within nature, the chapter emphasizes that mediation is not a new phenomenon (section 1), nor are digital media technologies external to us (section 2). Rather, if a materialist perspective is followed, then the world and the phenomena within it (including thoughts, consciousness, language, pixels, bits) all consist of matter. The digital is material, and the digital medium is part of the materiality that is the world; as such, it is not external to the human but simply another form of matter that intra-acts with the matter that is the human. As Mejias (2013) noted, the goal should not be an attempt to locate an outside to the network, but “to unthink the logic of the digital network … to reimagine one’s relationship to it … disrupting the flow of information” (p. 90). The Vancouver-based artist and theorist Jamie Hilder (2017) writes humorously that “the feeling of being a set of numbers is difficult to quantify, because it is a feeling” (p. 65). The operation of information capitalism, with all the ways it enumerates the subject, is the target of Hilder’s protest in his writing. However, as he concludes, the logical alternative to the capitalist system facilitated by the ICT-assemblages is not necessarily a flight away from such a system to one unmediated by the digital medium. As he writes, “my error comes in believing that there is still a me outside of data” (p. 69). Instead, he urges one to work through the quantified subjectivity, for artworks to bring to awareness the effects of information organization on subjectivities. In other words, to focus precisely on the medium itself, as the state in which one is unmediated by the encoding of informatics is simply untenable at this point. 
As Chapter 1 has pointed out, a significant body of scholarship and practices in ‘media art’ treats the medium itself as neutral, relying on it to test and expand the technology’s capacity from an engineer’s or programmer’s perspective in order to explore the possibility of creating and experiencing aesthetically-pleasing forms, sounds, and lights through these technological innovations without interrogating the technology itself. As Scott Lash (2002) has established, the information-power nexus of information society needs to be subjected to critique, and the target of that critique resides in information itself. For Lash, practices that rely on aesthetics and formal concerns do not have the potency for critique: “formalist art works from a logic of interior reflection that has nothing to do with the critique of information” (p. 220). Instead, what is needed is a focus on information itself, “a reflexive critique of its own conditions of existence,” and to “extend this sort of reflexive critique more generally to the global information and communications order” (p. 220). Or perhaps, framed in Andrew Feenberg’s (1999) terms, it is the ‘technical code’ and operation of information that must be subverted. In other words, the media art practices that treat the digital medium itself as neutral, autonomous, and simply functional overlook the medium itself. And without examining the medium itself – the ICT assemblages – such practices end up uncritically embracing the technological innovations and tacitly supporting the state, military, commercial, and communication industries. Following Brecht, it is the form, the environment, the medium that must be brought to the foreground through estrangement, and only by doing so can a critique of information and communication technology become possible. 
As part of the cultural analysis of digital media, the previous two chapters sought to interrogate and expand the notion of the digital and offer a critique of an information-based way of knowing. The present chapter and the next continue such critique, illustrating the persistence of mediation and the imperative to consider the medium. If the analysis of digitality established the proposition that the digital fosters an epistemic model of commensurability, the logic of which can potentially be upended by certain artworks that invite different ways of knowing, a reasonable line of thinking might be: perhaps one would be better off not being subjected to such an epistemic model, to such mediation of ICT, or to mediation in general. This chapter counters this direction by insisting that one has always been mediated, through consulting the philosophy of originary technicity and digital materialism. Lash (2002) has already asserted his perspective that there is no ‘outside’ to ICT, that all phenomena have been subsumed under information society, including theory and critique, and that there is no privileged and safe position from which one can claim to be unmediated. Originary technicity allows us to go even deeper than ICT to examine mediation in general, while digital materialism insists that such mediation, like all other phenomena, is material. The chapter then moves into two case studies, one with the performance artist Erica Scourti again, and the other with the digital artist John Gerrard. Scourti’s work helps us to consider, amidst an examination of mediated selves and the limits of AI, the inevitability of mediation of some kind. Gerrard, in turn, reminds us that such mediation is nevertheless always material, despite claims of immateriality.

1. Medium and Re/mediation

I begin the discussion on the medium by returning to the theorization of McLuhan. 
As indicated in Chapter 3, McLuhan (1964) went to great lengths to emphasize the importance of examining the form of media/technology rather than the message, idea, or information being communicated. A focus only on the message or content would imply a neutral perspective of technology, that the medium itself is an empty tool simply designed to solve problems efficiently, to be populated by the ideologies of those who use or communicate through the tool. “The effect of media, like their message, is really in their form and not in their content” (McLuhan, 1959, p. 342). To investigate this further, he turns to language and asserts that, as the social effect and ‘change of scale’ afforded by printed language, it cultivates standardization, rationality, order, and linear causality, among others. For him, the medium is not just a tool or an in-between, between the human and the world, but rather an environment within which one is always immersed. As he writes in the essay The Relationship of Environment to Anti-environment (1966), which he regards as the major insight of Understanding Media, “any new technology, any extension or amplification of human faculties when given material embodiment, tends to create a new environment. This is true of clothing as of speech, or script, or wheel” (p. 1). Environments are pervasive, immersive, situated in the background, and defy being noticed. And in the context of the present study, in “the age of information, it is information itself that becomes environmental” (p. 9). McLuhan’s (1964) conceptualization of media expands its limited definition as the press, television, journalism, and so forth, and reframes media/technology as an extension of one’s body and mind. Extensions, however, are also amputations, in the sense that they numb certain facets of awareness through the hyper-focus on other facets – hence McLuhan’s usage of the Narcissus metaphor. 
Lack of conscious awareness, numbness, that which resides in the habitual background – all these descriptions of media and technology echo the clandestine tendency of the technological everyday as outlined in Chapters 4 and 5. Against this, McLuhan provocatively offers the potential of artists to create ‘anti-environments,’ conditions and encounters that would allow one to become aware of such environments of mediation, revealing entrenched ways of seeing, dislodging the habitual, and rupturing dominant yet hidden knowledge and practices – a capacity that brings to mind the theorizations of artistic potential advanced in Chapter 3, especially that of Brecht. “Art as anti-environment is an indispensable means of perception … for environments, as such, are imperceptible” (McLuhan, 1966/2005, pp. 3-4). One of my main arguments is that estrangement is precisely one of these tactics that can create anti-environments, a pedagogical potential that can cultivate ways of knowing otherwise. Of course, due to the focus on the medium/device/form, McLuhan has been charged with technological determinism, which, as Bolter and Grusin (1999) explain, is a perspective that must be avoided, for it presumes neutrality and autonomy on the part of technology – a perspective that justifies a capitalist progress narrative. Against these charges, they suggest that McLuhan’s theorization entwined the social and the technological more than his detractors allow. In his defense, they write that “we need not be afraid of McLuhan’s ‘formalism,’ as long as we remember that technical forms are only one aspect of technologies that are simultaneously social and economic” (p. 77). As they argue, it is imperative to keep in mind that the medium is necessarily an assemblage, a hybrid, of social, technical, economic, and material forces, and not simply neutral or instrumental/functional. 
Likewise, curriculum theorist Lance Mason (2016) writes about McLuhan’s textbook The City as Classroom and argues that McLuhan insists on the non-neutrality of media and the need to bring the environment – the media/tools which change and shape our understanding of the world – to the foreground for analysis. Such insistence on the non-neutrality of media, argues Mason, nullifies criticisms of technological determinism or formalism, for McLuhan is precisely asking the reader to unpack assumptions that render the medium as an innocent tool designed only for functional purposes. The invitation to focus on the medium isn’t to celebrate its autonomy or function, but rather to interrogate its message, its affordances, limits, and influences on society and individuals, which necessarily implicate the social, political, economic, and material. Conceiving of the medium as an environment provides another insight: if the condition of mediation is environmental, then one is always mediated, for one cannot be situated outside of an environment. As Mason (2016) writes, “what is considered media engagement can be understood as a form of experience … as all experiences are, in this sense, mediated ones. The most basic form of mediation is human language” (p. 91), and language as mediation is what the first case study explores below. The question isn’t whether one uses a tool/medium or not, for the medium is an environment within which an individual is situated and encounters the world. Bolter and Grusin’s (1999) concept of remediation can be instructive in this instance. As they write, “all mediation is remediation” (p. 55); each medium interacts and overlaps with another, such that “there is nothing prior to mediation” (p. 56), belying any distinctions between mediation and reality. 
They likewise consider language as mediation between the subject and the world, not as an isolated force but as a medium embedded in the world and enmeshed with other media and matter, as “combinations of subject, media, and objects, which do not exist in their segregated forms” (p. 58). The curious paradox, they write, is that while media technologies deny mediation in their desire to erase themselves for transparent immediacy, for the ‘real,’ mediation nevertheless always persists. This is evident in the engineers’, designers’, and technologists’ attempts to create media that look more ‘natural,’ where the interface disappears, that seamlessly connect with the senses. The telephone supposedly removed the mediation of writing and the telegraph with the immediacy of the voice; TV supposedly removed the mediation of radio with the immediacy of images; the internet supposedly removed all distance for the instantaneity of communication; and lastly, VR supposedly removed all mediation for the immediacy of an immersive experience. But in all these media, the medium itself denies such desire and claim by its very existence. The medium persists and is always “articulated through a network of formal, material, and social practices” (p. 67). Drawing from the arguments above, one is always already mediated.

2. Originary Technicity

Philosopher Arthur Bradley (2011) writes that Western metaphysics, as noted in Chapter 5, is marked by a binary of transcendence versus immanence, the soul/mind versus the body, the immaterial versus the material, and episteme versus techné. Here, episteme refers to theoretical knowledge while techné is the “technical craft knowledge or know-how” (p. 166). Phrased differently, some liken this to theory versus practice. Confusingly, techné here is less about revealing a technocratic worldview and abstraction via informatics, following the ideas introduced in Chapters 4 and 5, and more along the lines of craft and practice. 
Philosopher Bernard Stiegler (1998) characterizes the former perspective on techné (with its close affinity to reason and rationality, as technocracy or ‘gestell’) as something that intensified markedly after the Industrial Revolution. Instead, he wishes to examine and undo an older relationship between episteme and techné (or technics, in his writing), as described above. In this binary, techné is that which is suppressed and considered less important than the dominant mind and knowledge of episteme. Such a binary places emphasis on the immaterial, on thought, on theory, rather than the material, the practice, the non-human (technical) medium, which is positioned as a mere means, a mere supplement to the primacy of the mind. Bradley argues that Derrida’s and his student Stiegler’s concept of originary technicity is precisely a negation of the binary between episteme and techné, an insistence on the entanglement of the two, where the porous agent of the human bleeds into (and has always bled into) the exteriority that is the non-human. Seen in this light, it bears resemblance to the weight placed on matter, body, and situatedness in the world, framed as doing/making (Ingold, 2013), the performativity of matter (Barad, 2007), and know-how (Dreyfus, 1998) articulated in the previous chapter, against the abstracting and rational subject of anthropocentrism. The relegation of technics to the sideline has important historical ramifications in the present context of this research. Bradley (2011) identifies within classical philosophy the theoretical perspective of a neutral and functional view of technology, which distinguishes between that which has agency and intention (such as the human subject) and that which is inert and has no capacity to direct itself (such as an object), a neutral tool that follows the direction of whatever agent wields it. 
Without succumbing to determinism, there is a need to consider the socio-political landscape that contemporary digital technologies contribute to, shape, and operate within, while understanding that technology is constituted by a network of social, material, economic, political, and other factors. Stiegler’s work is important in the present study as it expands on Heidegger’s concept of gestell outlined in Chapter 4. Following Heidegger, Stiegler (1998) begins his argument with the notion that Western metaphysics “culminates in the projection of a mathesis universalis that encourages a subject to establish itself as the master and possessor of nature … the modern age is essentially that of modern technics” (p. 7). But he quickly follows up by saying that such a characterization of technics, as the apogee of the metaphysical subject/mind, is only one side of gestell. He agrees with Heidegger’s view that techné is a way of revealing and no mere means, but while Heidegger seems to characterize techné as an autonomous and determining force, dictating the human, Stiegler would argue that techné was never external to the human, but always already internal. The concept of originary technicity emphasizes a need to re-examine the place of the medium by undercutting the privileged position of the human subject, but more importantly, it asserts that the human is always already, and always has been, technologized. Stiegler (1998) achieves this by drawing partly from Derrida’s deconstruction and Heidegger’s refusal of any “residual Cartesian dualism between subject and object, consciousness and world” (Bradley, 2011, p. 76), but also from philosopher Gilbert Simondon’s theory of technogenesis, which reverses the idea of the subject asserting ‘form’ onto ‘matter’ through direct causality, positing instead a heterogeneous set of elements that interact together, a process that precedes the individual human agent. 
In addition, he draws from anthropologist André Leroi-Gourhan’s radical scholarship on human evolution, which reverses the traditional conceptualization of the human subject as tool-wielding due to their intelligence. In Leroi-Gourhan’s re-write, it was only when Australopithecus became upright and object-wielding that the mouth (which had been used to wield objects) was freed, leading to the development of language, communication, and intelligence. In other words, the human subject is not tool-wielding because they possess significant cranial capacity; rather, they have significantly-developed brains because they are tool-wielding. For Stiegler (1998), at the very least the two co-developed, with “the human inventing the technical, the technical inventing the human” (p. 137). As such, “defining human qualities such as consciousness, intelligence and the capacity for symbolic thought are not the cause of tool-use but an effect” (Bradley, 2011, p. 12), and any consideration of the ‘human’ is inextricable from such extension, from technology. Thought of in this way, technology cannot be conceived as a mere prosthesis that has been added to the human or extends from the human, but has always been part of the human. This concept goes beyond McLuhan’s definition above, where media/tech is defined as any extension of the human, and digs deep into the inextricability of the two. “The human is always bound up with its non-human supplements” (Bradley, 2011, p. 98). Or as Stiegler (1998) writes more forcefully, “the human is the technical” (p. 116). In his seminal text on originary technicity, Technics and Time, 1: The Fault of Epimetheus, Stiegler (1998) elaborates his thought in opposition to Rousseau’s desire to return to an unmediated Nature, a point at which humanity had no ‘prostheses’ yet. 
Instead, he proposes that the “human is immediately and irremediably linked to an absence … to a process of supplementation, of prosthetization … where everything is found mediated and … technicized” (p. 133). His reading of the Promethean myth is such that Prometheus’ gift must always be thought together with Epimetheus’ fault – which he describes as the human de-fault. To quickly recap Hesiod’s account, Epimetheus neglected humanity when he was charged with endowing earthly creatures with the means of self-sufficiency; as a result, his fault prompted Prometheus to steal the fire that would be given to humanity. As this “gift made to humanity is not positive: it is there to compensate” (p. 193), humanity is marked by a lack, which will always need the supplement of fire (a placeholder for technology, knowledge, science, civilization, among others). Thus, for Stiegler, we have always been mediated, due to this ‘originary prostheticity.’ In fact, as he writes, “all human action … is after a fashion techné” (p. 94). Everything we do, all extensions, are technological, such that to distinguish between the human and technics is futile. While McLuhan (1964) expands the notion of media and technology to include any extension of ourselves and reframes them as ‘environments,’ such as language, Stiegler goes even further and boldly asserts, “the prosthesis is not a mere extension of the human body; it is the constitution of this body qua human … it is not a means for the human but its end” (pp. 152-153). (In the book, Stiegler also devotes much energy to the theorization of time as a technology, considering temporalization – being able to experience time – as a uniquely human attribute, and argues against the opposition between technics and time. However, I will not touch upon the notion of time, but rather focus on Stiegler’s complication of the human and the non-human.) 
The language of the ‘lack’ draws from Derrida’s (1976) concept of supplementarity elaborated in Of Grammatology, which foregrounds the importance of the supplement in constituting the human subject, previously thought of as whole and self-sufficient. The fact that language and writing are needed to supplement human memory (and form an integral part of civilization) points to the indispensability and ‘originary’ nature of such supplements, which attenuates the boundaries of the human. In other words, the lack presented by the human demonstrates the indistinguishable nature of the human and the supplement/technics (in this case, language). Stiegler (1998) continues this project by complicating the exterior/interior binary of the human subject, destabilizing the place of the subject and its mind, which traditional metaphysics and philosophy of technology have considered the seat of control, free will, and intention, pre-existing and asserting agency over objects, such as technology. By doing so, however, we see that “we are putting the human cart before the technological horse: any interiority has actually been constituted retroactively by the process of technological exteriorisation” (Bradley, 2011, p. 123). The subject’s lack and its co-emergence with technics/material/objects is part of its constitution. Such complication of the interior versus the exterior is reminiscent of Ingold’s (2013) and Barad’s (2007) arguments about the porosity and contingency of boundaries. It is inaccurate to speak of stable and given boundaries if we consider the world through an ontological model of matter intra-acting with and animating one another, refuting the centrality of the anthropocentric subject and the idea that it is separate from, prior to, and superior to the objects and phenomena around it. 
Originary technicity is useful and relevant for the present study for the following reasons: 1) It undercuts the anthropocentric binaries of human versus world, words versus things, culture versus nature, emphasizing the material, embodied, embedded, lived contexts against the universal and abstracted models/representations – the apogee of the human mind – thereby adding to the argument against informatics. This questioning of binaries can be traced back to Heidegger’s concept of techné outlined in Chapter 4, with its emphasis on poiésis (bringing-forth, revealing), which “effectively seeks to level the opposition between the natural and the man-made” (Bradley, 2011, p. 71), and to Derrida’s (1978) deconstruction, which “is nothing other than the deconstruction of the historic opposition between thought and technics” (Bradley, 2011, p. 18), speech and writing, subject and object. Bradley considers both scholars’ concepts within the genealogy of originary technicity. 2) It emphasizes the non-neutrality of technology and argues against treating it as benign, instrumental, and inconsequential, which is one of the tendencies in media art discourse outlined in the introduction. 3) Most crucially, it insists that media/technology is not a mere extension or prosthesis but always already entangled with the human. Therefore, one cannot be unmediated, and the task of combating algorithmically-operated information capitalism cannot simply be a Rousseauian return to Nature, away from technology. The concept is significant and ties in with the main argument of this research: that the logic of commensurability needs to be questioned, especially in light of what Chapter 5 advanced regarding the inadequacy of informatics as a way of knowing and the inability of information to represent the world, given that it is abstracted, disembodied, and de-contextualized.
As the previous chapter argued, the process of computation and information is based on a logic that privileges the floating mind, divorced from the body that is embedded in the world, against which the perspectives of Dreyfus (1998), Ingold (2013), and Barad (2007) provided counter-arguments that emphasized an anti-binary stance and the place of materiality and contexts. In language reminiscent of Barad and Ingold, Stiegler (1998) writes that the technical object is in a sense ‘organized matter,’ the combination of a multitude of forces which forms various affordances and limits. And rather than asserting intention onto matter in a simple causal relationship, “the human has no longer the inventive role but that of an operator … listening to cues from the object itself, reading from the text of matter” (p. 75). From such a perspective, the human is embedded in and indistinguishable from its material surroundings, including media/technology. As McLuhan noted above, in our current age, information technology is the environment, which defies being noticed and remains hidden. Stiegler (1998) follows Heidegger’s account, introduced in Chapters 3 and 4, of the way the tool remains inconspicuous. In the Heideggerian example he cites, one’s glasses, the lenses through which one sees the world, are physically close yet cognitively so far away that one does not realize they are there. This is indicative of “the naturalized character of prostheses, through whose naturalization we see, feel, think” (Stiegler, 1998, p. 251). Like Epimetheus, the medium is forgotten. Read together with ‘originary technicity,’ one could say that we are always already immersed and mediated within an environment, within which one is situated and makes sense of the world. And this specific environment fosters a way of knowing informed by informatics.
Therefore, the study’s main proposal – that Brechtian tactics of estrangement have the pedagogical potential of creating anti-environments that could foster ways of knowing differently – proceeds with the acknowledgement that one is always mediated and that such a tactic begins from within the environment, while refuting the opposition between human and non-human/media, abstraction and embodiment, ideas and material. Crucially, Bradley (2011) points out the tendency for Derridean deconstruction and originary technicity to be used alongside cybernetics (and the ‘singularity’ brand of post-humanism, where the mind can simply be downloaded into a computer), to graft the deconstruction of mind and body, signified and signifier, subject and object, onto the deconstruction of machine and human. Eventually, this is used to justify a post-human machine/human hybrid where the machine subsumes the human and its body. I would argue that such usage of the concept ironically and unproductively creates another binary of dominance, where the immaterial/information subjugates the material/world in a presumed causal relationship, becoming the new transcendental signified. It contributes to what Hu (2015) calls the sovereignty of data, underscored by what I have been referring to as the logic of commensurability. If originary technicity focuses on the entanglement between human and media/technology, in a way that emphasizes the non-human and the know-how of material practices – things considered secondary to the subject – and undermines the transcendental floating human mind, then using it to champion the digital act of informatics, which abstracts data from lived and embodied experiences of being-in-the-world, would be counter to its lessons. The key is to insist on entanglement, recognizing that the human is always already mediated, not a wholesale takeover by digital media technologies disguised as a hybridity of human and machine.
If the dissolution of the binaries of human and non-human/machine, subject and object, mind and body, only ends up with the reinstatement of oppositions such as material and immaterial/informatics, embedded and abstracted, where the latter subjugates the former, then we have gone in a very unproductive circle. Bradley (2011) writes, “we might see the subsequent materialist turn in modern philosophy of technology … as a return of this repressed originary technicity” (p. 10), which can function as a segue to the next section and the concept of digital materialism. The concept of digital materialism is an appropriate counter-argument to the tendency described above. Instead of reinstating the binary of the subject that abstracts through informatics and the embedded object of the world, the concept destabilizes the binary by insisting that even information and the digital are material all the way through, embedded in the world.

3. The Always Already Mediated Linguistic Subject

If one is always already mediated and technologized, then perhaps language is precisely one such form of technology, an originary lens through which one comes to know the world. In What Is an Apparatus?, philosopher Giorgio Agamben (2009) defines the apparatus as anything that captures, determines, orients, and controls. In the long list of examples that follows, which includes computers and cellphones, Agamben ends with language, noting that it is perhaps the most ancient apparatus. As scholar of rhetoric Ian E. J. Hill (2018) reminds us, technology and language are linked by etymology. Techné, therefore, entails the persuasiveness of both words and machines. Neil Postman (1992), likewise, refers to language as our most fundamental technology. Hidden from view, language arguably is an instrument that consists of “a set of assumptions of which we are barely conscious but which nonetheless direct our efforts to give shape and coherence to the world” (p. 123), he writes.
“The world we live in today is a world in-formed by the abstraction technology of language” (p. 9), echoes Gottlieb (2018) – a technology that began the elevation of the rational mind over embedded materials. If language is one of the primary apparatuses/technologies par excellence (and therefore hidden), and one applies the Brechtian device of estrangement to it, the various post-structural/postmodern projects that have interrogated language by defamiliarizing it (such as the Dada word salads, the Russian Formalist zaum poetry, and the absurdist plays of Samuel Beckett and Eugene Ionesco) might be reframed as a series of technological critiques of the linguistic apparatus. Here, a parallel can be drawn between such practices and a critical theory of technology. Much like how postmodern literature focused on the breakdown of semiotic structures in order to show language’s constructed nature, the contingency of meaning, and its primacy in sense-making, it is likewise during its breakdown that technology becomes available for examination, and that its technical code, which shapes our norms, behaviors, and worldviews, can be revealed.

Figure 6.1, Erica Scourti, Think You Know Me, 2015. Performance at Transmediale. © Erica Scourti. Courtesy of the artist.

This section draws from the work of renowned post-war playwright Beckett to provide insight on another work by Erica Scourti, titled Think You Know Me (Figures 6.1 and 6.2). When language breaks down – manifested in ways such as Freudian slips, the inability to convey thoughts to one another, nonsensical sequences and content, and grammatical or technical issues – its otherwise peripheral and invisible function becomes apparent. Theatre theorist Martin Esslin (2001) frames Beckett’s work as a destabilization of the idea of a whole and stable self, the rational subject.
Beckett’s plays and novels are populated by non-characters who are often fragmented body parts, incomplete torsos, floating mouths, etc., arguably functioning more as particular components, facets, attitudes, or perspectives of humanity than as traditional ‘characters.’ The language used in his work is often designed to undo language, constituting dialogues or monologues that go nowhere. Indecipherable, circuitous, fragmented, nonsensical, and multiple, the language used by Beckett both rejects the primacy of language and highlights its inextricable relationship to the human, a way in which one attempts to make sense of the world. But of course, in his world, such pursuit of meaning and certainty is always futile. Questioning the centrality of language while simultaneously acknowledging the impossibility of doing away with it appears to be a main theme in Beckett’s work. If language is framed as a technology, this bears similarity to the concept of originary technicity and to the work of Scourti, who often highlights the mutual constitution of the human and non-human (i.e. algorithmic operations) from a critical perspective, while conceding the inextricable relationship between the two. Much of Beckett’s work depicts “the compulsion to talk … forever compelled to fill the void with words” (Esslin, 2001, p. 80). The desire to get away from language, from consciousness, from thought, is impossible, for one is always already mediated, immersed in the linguistic register. As Esslin (2001) writes, “if Beckett’s plays are concerned with expressing the difficulty of finding meaning in a world subject to incessant change, his use of language probes the limitation of language both as a means of communication and as vehicle for the expression of valid statements, an instrument of thought” (p. 85). Entangled with language, the characters in Beckett’s work are forever split, incessantly communicating and attempting to locate meaning, to no avail.
The paradox of wanting to cease communicating through language while being impossibly tied to it can be seen in Scourti’s work too, which in a way updates the idea for the information society. Performed at the Transmediale media art festival in 2015, Think You Know Me entailed Scourti linking her smartphone keyboard’s predictive auto-finish function to her various online accounts and footprints, such as Gmail, Facebook, Twitter, Evernote, and her own website. In doing so, she created a database for the predictive algorithm, supposedly giving it access to her personal information – what constitutes her as a data-based individual – and allowing it to learn, adapt, and better predict: literally equipping it with information that could supposedly allow the algorithm to finish her sentences. The performance itself consisted of Scourti typing into her smartphone and reading out a long monologue as suggested by her personalized predictive algorithm. The mechanism resulted in some of the following utterances: “Hello my name is live in the UK for a while to reply to your account after the war in the morning of my favorite colour is not the absence of fear in gone to the right to the right place at St. Andrews Street parking restrictions on my work and of the blue sky blue sky is the most of the day before the end of this and I am unable to find the right place for you can see the latest version and then we will try dm and then the Yeah I think the only way we do you want these days and will not …,” reminiscent of Lucky’s monologue from one of Beckett’s (1954) best-known plays, Waiting for Godot. Like Life in AdWords, Scourti explores the concept of the inevitably mediated self in contemporary information society, where the individual human subject is caught in the network. Here, the constitution of the self is entangled with the language of code, as manifested through interfaces, algorithms, and generally the logic of commensurability.
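How a personalized predictive keyboard might arrive at such run-on suggestions can be illustrated with a deliberately simple sketch. This is not the system Scourti actually used (commercial predictive keyboards rely on far more elaborate, often neural, language models); it is a minimal bigram frequency model, with hypothetical function names, that repeatedly accepts the single most frequent next word, enough to reproduce the locally plausible, globally incoherent drift of the monologue quoted above.

```python
from collections import Counter, defaultdict

def train_bigram_model(corpus_texts):
    """Count word-to-next-word transitions across a personal text archive."""
    model = defaultdict(Counter)
    for text in corpus_texts:
        words = text.lower().split()
        for prev, nxt in zip(words, words[1:]):
            model[prev][nxt] += 1
    return model

def predict_next(model, word, k=3):
    """Return the k most frequent continuations of `word`."""
    return [w for w, _ in model[word.lower()].most_common(k)]

def autocomplete(model, seed, length=12):
    """Greedily chain top predictions, mimicking repeated taps
    on a keyboard's first suggestion."""
    out = [seed]
    for _ in range(length):
        suggestions = predict_next(model, out[-1], k=1)
        if not suggestions:
            break
        out.append(suggestions[0])
    return " ".join(out)
```

Trained on even a small archive, chaining the top suggestion stitches grammatical fragments into nonsense: each transition is statistically sensible, yet no sentence-level meaning accumulates, which is precisely the kind of output the performance exacerbates.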
In equating the self with its digital output (distilled to keywords in various taxonomies), the predictive algorithm claims to be able to create accurate and personalized statements about the self – or even to know the self better than the self. While Life in AdWords saw Scourti decoding the encoded Gmail self by performing the translation-into-keywords and contrasting the intimate and temporal with the static and reductive ads, Think You Know Me performs the mediated self and unfolds in real time, as the utterances show the entanglement of Scourti and the algorithm in situ. The algorithm speaks through Scourti, and vice versa, entwined to such an extent that one cannot make clear distinctions, for the boundaries between the human and non-human are decidedly porous. As is the case with numerous Beckettian characters, and the contemporary user of ICT, Scourti cannot not communicate, even if the result is nonsensical and the meaning is always deferred. Scourti is likewise caught in the technological mediation of language, but in this case, the language is itself the output of other technologies: algorithmic operations informed by a whole slew of programs. I argue that by exacerbating machine learning operations and highlighting their limits, Scourti actualizes the pedagogical potential of the performance. On the one hand, the self is estranged, like Beckett’s characters, who again are not really characters but fragmented automatons who cannot really communicate with each other or the audience. On the other, the predictive algorithm is also estranged; pushed beyond its limits, the promises of machine learning through data-mining the personal archives fall short, revealing the inadequacy of the algorithm and the logic underpinning its claims. Both of these instances of estrangement are executed through a breakdown of the communicative promises of language and ICT. Neither is an adequate system of representation.
As noted above, Beckett often utilizes the tactic of disintegrating language, which, as Esslin (2001) theorizes, has the potential of heightening an awareness that is often deadened by habit. Referring to Waiting for Godot, Esslin writes that “the routine of waiting for Godot stands for habit, which prevents us from reaching the painful but fruitful awareness of the full reality of being” (p. 59), for that habit paralyzes our attention. Through rupturing the speech act, the paradox that often occurs in Beckett’s work resurfaces in Scourti’s performance: while the technology of language dominates the self, who wishes to cease its relationship with the technology, the self simply cannot end this relationship, for the self is split and porous, entwined with this medium. Perhaps one way of reading this work is through Donna Haraway’s (1991) theorization of the cyborg, her metaphor for a socialist, feminist, and materialist call for the dissolution of boundaries and binaries that have underscored many traditional hierarchies and their legitimacy, dualisms such as “self/other, mind/body, culture/nature, male/female, civilized/primitive, whole/part, agent/resource” (p. 177). Haraway proposes the cyborg as a metaphor through which one can conceptualize a move beyond such binaries. As the wholeness of the self is an illusion, she sees the cyborg as a way to insist on the partiality and perversity of inevitable couplings. However, her faith that “high tech culture challenges these dualisms in intriguing ways” (p. 177) is questioned here, as the prosthetic quality of the cyborg is often co-opted and celebrated by post-human technologists, in ways that reinforce the dualisms that Haraway decries (mind>body, pattern>presence, agent>resource, etc.). This is similar to the danger articulated earlier in this chapter regarding Stiegler’s (1998) work and cybernetics.
As media and literary theorist Katherine Hayles (1999) cogently points out, “although in many ways the post-human deconstructs the liberal humanist subject, it … shares with its predecessor an emphasis on cognition rather than embodiment” (p. 4). Such emphasis is rooted in the metaphysical promise of information, “that it can be free from the material constraints that govern the mortal world” (p. 13), and it ends up once again privileging the rational subject. While the cyborg reading of Scourti’s work may be an easy one, since it too seems to emphasize a prosthetic relation between self and machine, I would argue that her work is not simply a celebration of this entanglement, the dissolution of the boundary between human and non-human/language/machine. Her exploration of the mediated self seems to concede its inevitability – that one will always be subjected to the economy of big data and informatics – but it foregrounds the ubiquity of machines and ridicules some of the claims made on their behalf. Her work differs from the practices of artists who seem to celebrate a futurist perspective, like that of renowned performance artist Stelarc, who is well known for augmenting his body to interface with various machines. Similarly working with the concepts of porosity and couplings, the theorizations of Ingold (2013) and Barad (2007) offer imageries that do not inadvertently privilege the immaterial. Haraway’s (1991) theorization of the cyborg – as an attempt to dismantle the dualisms and to elevate the half of each binary conceived as the ‘other,’ the half often considered inferior or as having derived its place belatedly from the authority of a transcendental signified – is undoubtedly invaluable. However, in framing contemporary digital technologies as offering a way to think beyond these binaries, the argument creates an opening that normalizes informatics and the logic of commensurability.
In order for the cyborg to exist, there must be a way to translate analogue to digital, to encode the materiality of the world, to digitize all lived phenomena. This, ironically, perpetuates the binaries of mind and body, subject and object, theory and practice (or episteme and techné). The cyborg approach advocates for the place of techné and the non-human to challenge the dominance of the rational male human subject, but if this approach then gets appropriated by post-human and cybernetic proponents to advocate for the place of the informatic/immaterial/abstract over the embodied/material, then one has gone in an unproductive loop, similar to the danger noted above with originary technicity. Seen in this light, Barad’s agential realism and Ingold’s porous boundaries of animacy might be more productive lenses through which to think the mediated self that is always split and partial. Scourti’s work acknowledges the inevitability of originary technicity but neither celebrates such entanglement nor advocates for a wholesale takeover by techné; rather, it invites one to attend to the socio-political implications of such inevitable mediation, by estranging and exacerbating its operations. Jodi Dean’s (2005) theorization of what she terms ‘communicative capitalism,’ as introduced in Chapter 4, seems to be an appropriate concept to accompany both Beckett’s and Scourti’s work: a state in which one is compelled to communicate, and where such a communicative norm is the platform on which domination is exercised and surplus is extracted. The domination through language – specifically the communicative acts of texting, tweeting, blogging, reviewing, commenting, etc. – is in a sense a magnified Beckettian scenario: one is subjugated through the technology of language and the AI operations of natural-language processing, but one cannot cease to participate in its call.
In the case of Think You Know Me, the cyborg is already the norm, and the average user is entangled with the machine to such an extent that it is difficult to tell whether the user is using the device or the device is using the user. The work does not celebrate such ambiguity; rather, through estranging both the speaking self and the predictive algorithms, it unsettles this mundane operation, oscillating between meaning and incoherence, and questions this mediation by focusing on the medium itself (which is of course entangled with everything else). The question of the user’s way of knowing becomes even more pressing when predictive algorithms prescribe frameworks for knowledge-production, significantly influencing the communication of an individual, all of which hinges on the claim that machine learning algorithms are capable of capturing, representing, and forecasting the thoughts and behavior of individual users.

Figure 6.2, Erica Scourti, Think You Know Me, 2015. Performance at Transmediale. © Erica Scourti. Courtesy of the artist.

4. Digital Materialism and the Cloud

As an extension of the thought that “human contact with reality is always mediated” (Verbeek, 2005, p. 11), the materiality of the medium constitutes such mediation. If one is always already technologized, according to Stiegler, then the concept of digital materialism offers a crucial addition: that such mediation is also always material. As introduced in Chapter 4, the programmer and theorist Tung-Hui Hu (2015) characterizes the cloud society as that which produces the position of the ‘user’ through the sovereignty of data – a user who ends up voluntarily supporting its mechanisms of control. How do we think about media art through the concept of materialism, if the medium itself seems so immaterial and rooted in the myth of the cloud?
Ingold (2013), as noted previously, advances a materialist argument that refutes the distinction between theory and practice, knowing and doing, subject and object, human and nature, and instead insists that one comes to know through doing/being/making, and that there is no knowing from the outside. He builds this argument on a framework that negates fixed containers and categories of knowledge objects, and proposes an understanding of the world as an ongoing flux of materials corresponding with and animating one another, arriving at the argument that making is an act that engages with materials and is highly open, processual, and contingent. Barad’s (2007) materialist concept of agential realism also negates such boundaries, especially those between ontology and epistemology, things and words, nature and culture, to instead emphasize the intra-activity of performing matter, rendering all phenomena fluid and provisional. Radically undercutting the anthropocentrism within a representational model of epistemology, she insists instead on an onto-epistemology, where discourse should be properly thought of as “material-discursive practices … boundary-making practices that are formative of matter and meaning” (p. 146) with contingent boundaries. Such emphasis on matter can also be seen in Stiegler’s (1998) writing when he says that “the ideal requires the material in order to become itself in the first place” (p. 96). The discussion extends into the digital by leaning on philosopher Baruch Gottlieb (2018) and his theorization of digital materialism. For him, not only is it important to emphasize that digital media operations necessarily occur as part of nature, within the world’s flux of materials, but it is also crucial to recognize that any discussion of materialism is a political one as well.

Figure 6.3, John Gerrard, Farm (Pryor Creek, Oklahoma), 2015. Simulation still, dimensions variable. © John Gerrard. Courtesy of the artist, Thomas Dane Gallery and Pace Gallery.
As Gottlieb (2018) writes, “data is not immaterial … there is no cloud, only someone else’s computer” (p. 130). Against the myth of the cloud, which contributes to an emancipatory and utopian rhetoric of digital media, he insists that “digital media technologies require enormous material resources to function” (p. 128) – material in the sense of the literal matter needed for the processes of fabrication and operation, but also the economic infrastructure needed to sustain the industry. Digging deep into the technical processes of computation, he spends much time emphasizing that these processes are material all the way through, from the raw metal and the chemical processes of the chips to the polymer of the pixels and the electric processes that activate the various parts of the CPU. This material layer sits on top of another: the global infrastructure of electronics production and telecommunications. Both of these material layers stand in contrast to the myth that information is immaterial – that the cost of doing business is therefore next to none, which supposedly predisposes digital media to a more egalitarian and democratic state. However, as anarchist and theorist Hakim Bey (1993) wrote in The Information War, contemporary society is “susceptible to the rhetoric of a metaphysical economy … and yet, this ‘first world’ economy … depends for its position (top of the pyramid) on a vast substructure of old-fashioned material production” (para. 9). Similarly, Richard Barbrook and Andy Cameron (1995) write about the mission of Silicon Valley to push for unrestrained growth and development in digital media technologies (which they refer to as the Californian Ideology), driven by faith in tech’s emancipatory potential, and counter that this development not only ignores but also depends on an unacknowledged body of working-class labour.
What is overlooked is the fact that “humanity, always enmeshed in and dependent on the fluxes of universal materiality, has elaborated an enormous interface of industrial processes, which appropriate this materiality to human needs” (Gottlieb, 2018, p. 127). What is strategically neglected is the body of data. The concept of digital materialism emphasizes the political relationship between the sovereignty of informatics and the subjugated materiality in which the world is embedded. “Digital data is neither immaterial nor identically reproducible. Every instance of every bit of digital data is materially unique in time and space” (Gottlieb, 2018, p. 123). To bring to the foreground the neglected plane of materials on which all consciousness, words, discourse, and thoughts emerge is also to rupture the myth of digital media, which perpetuates the binary by its very existence: ones and zeroes. Here, we have several binaries: the knowing subject vs. the objects to be known; the transcendent, rational, and objective mind vs. the superfluous body; the pertinent units of ones and zeroes that constitute all information vs. the world’s excessive plurality from which information is abstracted; the dominant myth of informatics and the cloud vs. the actual environmental footprint and labour implications of the industry; etc. A discussion of digital materialism allows us to take all of these into account, beginning with the technological act of abstracting the world’s fluidity and heterogeneity into discrete units, rules, and categories, all the way to the technological assemblages of today’s information society. There is no shortage of scholarly perspectives on the network’s tyranny. Beyond digital media’s mythic capacity to emancipate, to bring about a more egalitarian society because information is immaterial (and therefore free), and to transcend social hierarchies and global boundaries, lies the material reality of commerce, exploitation, and subjugation.
To explore digital materialism in media art necessitates not just the acknowledgement that code and information are also materials embedded in the world, interacting with all other materials, but also an examination of the material conditions that give rise to and sustain digital media assemblages, and of the binary logic that undergirds all computation – which subordinates the material and elevates the abstract. In other words, it necessitates an examination of the machinations of information capitalism and of informatics as a worldview. On that note, this chapter ends with John Gerrard’s (2015) video of one of the numerous Google data centres, titled Farm (Figures 6.3 and 6.4). Gerrard’s practice often involves creating simulations of industrial and other architectural artifacts. In 2014, wishing to investigate what the Internet’s materiality looks like, he inquired with Google to see if he could photograph the exterior of one of their eight data farms in Oklahoma, to which they answered, “there is no way that is going to happen” (Jones, 2015, para. 4). After consulting with the local police and finding out that the airspace was not restricted, Gerrard hired a helicopter to fly him around the premises, allowing him to survey the site and take around 2,500 photographs, from which he created, using a game engine, a 3D simulation video that very slowly moves around the exterior of the farm. The result: a virtual representation of an actual telecom industrial complex that thrives on sustaining the myth of its own immateriality. What is highlighted, through the immaterial representation, is how desperately Google wishes to deny its materiality and to maintain the image of the cloud.
There are several points of estrangement here: certainly the making-visible of that which is usually invisible and elusive, refuting the usual messaging of the cloud, but also the twist whereby the presentation of this actual space is not a simple documentary-type revealing, but rather created using a tool designed to create illusory and immaterial spaces, thereby re-emphasizing and exacerbating the myth of the cloud. It took me and several colleagues quite a while to realize that the video is a simulation and not footage of the real facilities. Unexpectedly, the promise of the material slips away to reveal yet another digital artifact. I argue that the pedagogical potential emerges through such deviation from the conventional imaginary of the cloud, where one is invited to come to know differently in relation to ICT operations and their material presence.

Figure 6.4, John Gerrard, Farm (Pryor Creek, Oklahoma), 2015. Simulation still, dimensions variable. © John Gerrard. Courtesy of the artist, Thomas Dane Gallery and Pace Gallery.

Of course, the medium is not innocent, either. The game engine has a drone-like camera, scanning around the perimeter in a military fashion. The entanglement with the military is likely not lost on Gerrard, both in the fact that the engine he and his team used to develop the model was “first developed by the military to map out hostile environments but now used by game designers” (Compton, 2015, para. 4), and in the fact that the game industry has close ties with the military, with some first-person and military games designed with the intention of training a new generation of potential soldiers and ingraining them with pro-military perspectives (Dyer-Witheford & de Peuter, 2009). The floating eye of the game engine is emblematic of the disembodied gaze of information, transcendent and untethered to context and materiality.
Clinical and detached, the simulation’s point of view erases the camera apparatus, with no device nor crew in sight. The video highlights the erasure of three different media (which, as noted above, tend to be neglected and remain inconspicuous): the medium of creation (the game engine), the ‘body’ of the Internet (the data farm), and the material presence of the farm’s employees. Herein lies another point of estrangement, one crucial to this chapter: the materiality of the human labourer is absent, echoing the arguments of Gottlieb (1996), Bey (1993), and Barbrook and Cameron (1995) above. In an ironic and fitting way, the piece creates an immaterial version of a very material industrial complex that exerts huge efforts to appear immaterial and to perpetuate such a myth, elevating informatics in the same manner as discourse, subject, and mind are elevated at the expense of matter, object, and body. As Stephen Wilson (2002) writes, the unrestrained growth of the tech industry demonstrates an oxymoron in that “it is supported by a drive of capitalist expansion and attempts to free itself from obligations to laborers” (p. 644). Or as Gottlieb (2016) writes, “the last task of the labourer is to remove the traces of its labour” (p. 407). As Bolter and Grusin’s theorization reminds us, the paradox of the medium is that it will always persist. While information capitalism thrives on the myth of the cloud, its efforts to rid itself of the material are negated by the very fact that it requires materials – in forms such as labourers, copper, tin, plastic, transportation, oceanic cables, data farms, and power grids – to exist. In 2014, designer Timo Arnall, who has an interest in visualizing immaterial processes such as Wi-Fi signals, shot a multi-screen film project titled Internet Machine.
There are several similarities with Gerrard’s project: both take large data centres as their thematic focus, both utilize a slow-moving camera that moves through and around the physical location (although in Gerrard’s case it is a model), and both present the location without any of the employees and regular business operations. The main difference lies in the fact that Arnall shot his footage in person at the Telefonica data centre in Alcalá, Spain, with full access to the facilities, resulting in a corporate-approved documentation, or tour, of the data centre. While the piece functions as an engrossing and meditative study of the sound and interiors of a data centre, in addition to highlighting the presence of an overlooked space, it lacks the playful oscillation of the material/immaterial achieved by Gerrard’s 3D modelling of a site that is off-limits, a site that denies its existence and pretends to exist in the clouds. As Cathy O’Neil (2016) pointed out in the previous chapter, all models/representations are inadequate toy versions of some material reality, and the 3D model of the farm drives this point home. Phrased differently, Gerrard’s piece goes further in its estrangement than Arnall’s, through its usage of a form that emphasizes the digital and pushes the trope even further, creating an eerily realistic yet sanitized illusory space that has been abstracted and scrubbed clean like a data set. The cleaning of cyberspace at the expense of an embodied and noisy materiality is further explored in the next chapter’s first case study.

5.
Conclusion

To return to the points introduced by the concepts of mediation and originary technicity, both Scourti’s and Gerrard’s works noted in this chapter confound the conventional distinctions between mind and body, words and things, abstraction and embodiment, human and non-human/machine, by undercutting the anthropocentric position that the rational human mind traditionally holds, decontextualized and detached from the mess and contingency of the world. Specifically, both pieces refute the stance that one can distinguish between human and machine, immaterial and material, and privilege one over the other. In Scourti’s work, the emphasis is placed on the inextricability between the language/device and the user, while in Gerrard’s work, the materiality of a medium that has been advertised as immaterial, accessible, and utopic is revealed and highlighted, such that one cannot ignore the usually unseen substrate on which Internet operations are carried out. Both artworks can be read through Stiegler’s (1998) lens, wherein he argues that media/tech is not a simple prosthesis added onto the human, but rather that the human is always already mediated. As noted above, this perspective is not a benign and wholesale celebration of the cyborg or cybernetics, the melding of the human and the machine, but instead emerged from a place of criticality that recognizes the dominance of contemporary ICT assemblages as emblematic of the rational mind. Such a perspective emphasizes the persistence of the medium, the environment, the ‘gestell’ of informatics, and that critical examination is not only imperative, but must also come from within the medium.

Chapter 7: Estrangement through Noise

This chapter continues the exploration of the inevitable presence of the medium by focusing on one conceptualization of such presence, one that can simultaneously function as a specific iteration of estrangement: noise.
In doing so, it continues the discussion of noise as introduced at the end of Chapter 4, where noise was offered as an example of estrangement in the politically-fraught distinction between signal and noise, and again in Chapter 5, where noise was discussed in relation to Hayles’ (1999) noisy multiplicity of the embedded world that cannot be reduced to information. As Mejias (2013) writes, “to unthink the logic of the digital network is not to refuse to confront the network … but to reimagine one’s relationship to it … disrupting the flow of information by adding noise (information outside of the logic of the system)” (p. 90). While this chapter’s examination of the concept of noise is partly influenced by noise theories from the sound art tradition, it primarily refers to noise in an informational sense.

1. Immersion in Noise / Medium

As a specific iteration of estrangement, noise is especially appropriate for the present study, which focuses on media art’s position within information society. In Claude Shannon’s (1948) seminal essay A Mathematical Theory of Communication, one of the founding works of information theory, the mathematician elaborates in detail on the representation of signals, information, or meaning through mathematical symbols in the form of the binary digit. As he writes, “the fundamental problem of communication is that of reproducing at one point either exactly or approximately a message selected at another point” (p. 379). The essay elaborates in detail on various communication systems and on ways of reducing or ‘combating’ noise, which is linked with errors, distortions, uncertainty, chance, and entropy, among others. It is an attempt to reduce the presence and potential disturbances of the medium, such that the pure, sterilized, and reductive signal may be operational.
It is, in other words, an attempt to abstract and transmit an encoded piece of the world in a decontextualized format, without taking into account the inevitable mediation of the medium within which it emerged and ‘through’ which it must pass. In Shannon’s framework, this can be seen in the desire to ignore the material effects of the copper wires, the radio frequencies, and the material components of the television that affect the output of the message, among others. In the context of the present study, and especially the previous chapter, it can be seen in the way the digital medium is overlooked in favour of its content, the way the material and economic infrastructure of the Internet is overlooked in favour of its claims of untethered freedom and emancipation (as emphasized in the work of Gerrard), and the way the messy, contextual, and contingent embeddedness of being situated in the world is overlooked in favour of the representational claims of the rational mind and of informatics to encode the world into bits. Framed as such, noise is another concept through which one can think about the embodied contexts and performative materials – the suppressed medium of the world – that refuse to be encoded and that highlight the inadequacy of information. Information, as Gottlieb (2018) argues, is ‘reduction-to-form,’ and imposing form onto matter is a metaphysical act that has its legacy in modern technics (Stiegler, 1998). Noise, then, is the way in which the uncontainable excess, the entangled and performative intra-activity of worldly materials, ruptures through the imposed form/model/representation, be it information, databases, or signals. The presence of noise in signals is not only an estrangement of the reductive and abstracting act of data representation in the big data economy; it is also the inevitable presence of the medium and its materiality.
As cultural theorist Ted Striphas (2011) writes, the purpose of informatics is to “bring order to the cultural chaos by ferreting out the signal that exists amid all the noise” (para. 7). And while, like Hayles (1999) in the point highlighted in the previous chapter, Striphas concedes that such reduction (what Hayles termed the Platonic backhand) is often necessary for knowledge-production and theorization, he also argues that the livingness and inventiveness that get filtered out in algorithmic culture are a high cost. For “what is culture without noise? What is culture besides noise” (para. 10)? One might even ask: what is the world besides noise? As Gleick (2011) explains, in the context of Shannon’s work, the universe is marked by entropy, disorder, and noise, and Shannon’s equation for information is “a measure of unexpectedness” (p. 228), or “the average number of yes-no questions needed to guess the unknown message” (p. 281). Or as physicist and cyberneticist Heinz von Foerster wrote, “information can be considered as order wrenched from disorder” (as cited in Gleick, 2011, p. 248). Phrased differently, information is an attempt to reduce entropy, to impose order and structure, and to extract value from noise. The artist Allan Kaprow (1987) writes that the avant-garde musician John Cage famously introduced two concepts to contemporary art and music: chance and noise. Like Shannon’s conceptualization of noise, the two concepts are strongly linked in Cage’s thought. Kaprow writes that “as Cage brought the chancy and noisy world into the concert hall (following Duchamp, who did the same in the art gallery), a next step was simply to move right out into that uncertain world and forget the framing devices of concert hall, gallery, stage, and so forth” (p. 224), suggesting that Cagean thought pushes for the dissolution of structures, systems, and boundaries, and that the world and environment are inevitably characterized by noise and chance.
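The two characterizations Gleick cites refer to the same quantity. As a clarifying aside (the formula is Shannon’s, though the notation here is the standard modern shorthand rather than a reproduction of Gleick’s text), the entropy of a message source whose symbols occur with probabilities $p_i$ is

```latex
H = -\sum_{i} p_i \log_2 p_i
```

Measured in bits, $H$ equals the average number of optimally chosen yes-no questions needed to identify a message from that source, and rarer (more ‘unexpected’) symbols contribute more per occurrence – which is why the same equation can be glossed both as “a measure of unexpectedness” and as a count of yes-no questions.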
One might add ‘information’ to Kaprow’s list of framing devices, and in doing so highlight noise as the other, the rupture, the excluded but inevitable non-information in an encoded world. Linking noise to the anti-binary stance I have held so far, art historian Branden Joseph (2007) argues that Cage’s practice, the non-teleological, non-hierarchical “reconfiguration of the subject-object/listener-work relation into … a multidimensional, transformational field was an explicit challenge not only to abstraction but to dialectics” (p. 61). Framed as such, Cage’s work foregrounds immanence over transcendence, and highlights the noise that is the lived everyday. This anti-binary stance is echoed by the work of several sound theorists. Jacques Attali (1985) and Salomé Voegelin (2010) have both argued that vision can be linked to a totalizing tendency, abstracting phenomena into decontextualized output. Noise, on the other hand, is what permeates all, and offers an alternative understanding. Voegelin defines visuality as a drive for total and objective knowledge/truth, a propensity afforded by the ‘gap’ between the seeing subject and the seen object. Aurality, on the other hand, possesses no such gap, for hearing is everywhere, and the sonic offers a more contextual and inter-subjective methodology. Sound theorist Frances Dyson (2009) reinforces this anti-Cartesian perspective when she proposes that sound possesses a quality that “rattles the foundation of Western metaphysics, by questioning the status of the object and the subject” (p. 4). Like the argument advanced in Chapter 5 with its emphasis on embodied contexts, noise offers a different way of knowing, against the traditional epistemology which holds that a subject comes to know worldly objects by forming models/representations through detached abstraction. The artist Dan Lander (1990) writes that noise is the “sounds of life” (p. 10).
Noise, in this framing, comes to stand in for the irreducible multiplicity of the everyday, and for the material world within which it is embedded. Phrased differently, to attend to the performative materials of the everyday is also to acknowledge the omnipresence of noise. Understood this way, the concept of noise aptly encapsulates the various other concepts examined in this research so far, such as estrangement, originary technicity, the medium as an environment that operates in the background, and the destabilization and critique of the encoding operations of ICT assemblages. Further, the concept of noise also ties back to pedagogy. Kedrick James and Ernesto Peña’s (2016) proposal of ‘glitch pedagogy’ is instructive here. Bearing many similarities to the concept of noise, and perhaps an iteration of it, glitch is likewise often unwanted, suppressed, and unforeseeable – a troubling and disruptive deviation from normative operations and functions. As the authors write, glitches, framed as unexpected results that elude control, involve “disrupting the perceived affordances of an object or system” (p. 113), and such disruptions can be highly pedagogical in their provocations and interrogations of the object or system. Recognizing that accidents and errors are inherent to learning, the authors propose that “glitch is understood to be the formative source of learning” (p. 124). As noted in Chapter 3, pedagogy has been conceptualized as necessarily lived, embodied, and experiential (Aoki, 2005; Ellsworth, 2005). Defying predetermined outcomes, pedagogy has to contend with the multiplicity of the world. Such attunement to the possibility of being made different creates the conditions through which one comes to know differently. In creating the conditions for knowing otherwise, pedagogy makes space for noise, for that which cannot be predetermined and that which is ‘other’ than the present regime of knowledge and practice.
The focus on embodied lived experience, on that which is often overlooked or suppressed, links back to the immanent noise of the world. Tying the concept of noise back to the pedagogical and critical potential of visual/media art, one could position it as a prime example of estrangement, the unwanted factor that ruptures the dominant knowledge and practice, and therefore as possessing pedagogical potential.

2. Noise as the ‘Other’ of Information

Echoing Bolter and Grusin (1999) as articulated in Chapter 6, Stephen Crocker (2007) also elaborates that while the principle of information theory is that ‘medial noise’ is undesirable and hinders the efficient delivery of information, the medium and its noise are an inevitable part of communication. Extrapolating the lesson of the medium out of information theory and into society at large, Crocker writes that “the political is not so much about the definition of goals [messages], so much as the way that the medium in which our actions take place affects what we can be and do” (para. 8). Emphasizing, as McLuhan does, that the medium is an environment, and leaning on Michel Serres’ philosophy of means/relations, Crocker writes that “noise is the presence of the medium through which the message must pass … we can never eliminate the space of transmission” (para. 11). If the medium always persists, and noise is the presence of the medium, then noise is omnipresent and ineradicable. “There is always a context of communication or an environment and so there is always a noisy third term” (para. 11) beyond the sender and receiver. In the medium that is the world, noise is the grounding of all phenomena. Highlighting the noise of the medium is the productive force of estrangement in relation to ICT assemblages. Through McLuhan’s (1966) lens, if today information is environmental, then one might say noise is the anti-environment.
“Noise directs us away from the message itself toward the medium in which it occurs” (Crocker, 2007, para. 13). In Crocker’s terminology, borrowed from Giorgio Agamben, it introduces a state of exception that reveals the medium. Artist and theorist Joseph Nechvatal (2011) makes an important point when he notes that noise is not simply disruptive, and that the flipside of such disruption is that it also creates something new. “The notion of noise as creation itself is thus an important one” (p. 10). This point ties the argument back to Chapter 3: artistic potential such as estrangement deviates from and ruptures the dominant mode of knowledge and practice, but in doing so, it also creates the possibility for the new and other, something that is not yet known. As noted before, in this sense noise is not simply the intrusive or previously oppressed; it is also the unforeseen, the unprecedented. For him, the potency of noise is that it allows one “to move from what exists and is known to the limits of knowledge and experience, and therefore move into the realm of the unknown – a move from the familiar to the unconceived” (p. 20). This theorization of noise not only draws the concept close to estrangement, defamiliarization, and the unknown, but also links the argument back to Chapter 3, where the conceptualization of art’s potential to cultivate the unforeseeable and unprecedented is tied to Foucault’s (1981, 1997) theorization that critique is a transgression of the limits of knowledge. As Kaprow (1987) suggested above with the dissolution of structures through noise, Nechvatal (2011) likewise elaborates on the inevitability of noise seeping through. Using the terms of Georges Bataille, he notes that all stable structures (representations, identity, information, even concepts) are inevitably characterized by excess. “Systems form, but only ever imperfectly, around … noise” (Crocker, 2007, para. 9).
If noise is “an incoherent and multivalent excess that defeats attempts at reducing reality to the indexical level of representation” (Nechvatal, 2011, p. 19), then this is precisely the potency of noise as a concept in relation to art – "this non-representational counter ... which breaks us out of the fascination and complicity with the mass media mode of communication" (p. 26). Or as musician and theorist Mattin (2010) writes, “noise cannot be represented” (n.p.). The formlessness of noise is antithetical to the reduction-to-form of information. It cannot be fully encoded as information. Phrased differently, noise is the ‘other’ of information. Seen in this light, noise is the exemplary iteration of estrangement in information society, the making-strange of signal-processing and ICT operations. The overlooked, the suppressed, the unforeseen, the illegitimate, the unwanted – noise is that which emerges from the medium and has been excluded from the structure of signals, of the dominant information society. To amplify such noise is the pedagogical potential of media art. The artworks by Eva and Franco Mattes and Trevor Paglen below will explore such amplification. Nechvatal (2011) proposes that “noise can block, distort, or change the meaning of a message in both human and electronic communication” (p. 17). All three of the noted potentials entail a disruption of, or counter to, the established and dominant norms, such that something new emerges. ‘Blocking’ is a sub-tactic that has already been pointed out, as in the signal-blocking work of Julian Oliver in Chapter 4. Similarly, artists and designers such as Adam Harvey, with his project Computer Vision Dazzle (2010 – ongoing), and Zach Blas, with his Facial Weaponization Suite (2011 – 2014), tackle biometrics, facial recognition, and surveillance by creating masks and/or makeup that obfuscate the face and compromise facial recognition AI operations.
While such projects are important and respond to the current endemic situation of ICT-facilitated surveillance, I now turn to projects that do not so much block the signals as seek to highlight something unwanted or unforeseen, to such an extent that they change the way we make sense of information capitalism today. As Lash (2002) pointed out in the previous chapter, there is no outside to ICT, no position that has not been subsumed and therefore mediated by information. Therefore, one needs to confront information and its signal-processing head-on. Before delving into the case studies of artworks, it bears repeating a point introduced in Chapter 3. Like all other estrangement tactics, something that is considered noise at a given time is susceptible to the twin dangers of being incorporated into the mainstream and being made irrelevant (Joselit, 2003) – of becoming ‘signal.’ Hence the need for critique to remain mobile. Just as there is nothing inherently radical about media art, especially works that focus on the participatory and interactive, “noise does not have an unchanging, artistic unworthiness or worth … rather noise is what lies outside of our habitual comfort zone at any point in time … it suggests an outside other and points us elsewhere” (Nechvatal, 2011, p. 35). Noise is not necessarily static, obfuscation, or cacophony; rather, it is the affordance of deviation from the dominance of signals.

3. Noise of the Internet

In 2003, after having been wrongfully accused by the FBI of involvement in terrorist conspiracies and subjected to invasive scrutiny and interrogation, Hasan Elahi began working on a project in which he sent a barrage of personal information to the FBI on a daily basis, quantifiably detailing his behaviours and patterns. For him, the project became an exaggerated spam act aimed at the intelligence agency’s attempt to mine data.
The act of uploading everything, for Elahi, became conceptually equivalent to uploading nothing. The willing abandonment of privacy became a critical and potent statement on the lack thereof. Here is an example, quite literally, of an excess that sought to rupture the logic of surveillance and render it meaningless. By 2013, however, it was apparent that the enterprise of surveillance persists, as evidenced by Edward Snowden’s exposure of the NSA’s mass surveillance operations. Elahi’s project, now habituated and popularized in the form of social media, has not overloaded the network, but has only contributed to the increase in the quantity and intensity of data-mining. It is easy to frame Elahi’s project as exemplary of a ‘noise’ function, in the way it consists of a copious amount of unwanted and perhaps useless information, used to flood an organization designed to collect data. But I would argue that this form of noise – pointing out that the contemporary user, and the Internet in general, produces an unfathomable amount of information about the everyday – is no longer potent. As scholar of informatics and media Peter Krapp (2011) writes, “what is noise to one may be message to another” (p. xvi). The large amount of data Elahi wished would overload the system has only fed the justification for its expansion and operational budget. Informatics has only gotten more ubiquitous since 2003. As Snowden (2019) writes, the NSA’s surveillance program STELLARWIND thrives on generating patterns (metadata) on a macro scale, rather than on an individual’s data – in other words, on extracting value from ‘noise’ and turning it into information. The piece’s exacerbation of the FBI’s data-gathering is its conceptual strength, but in practice its output – the public display of a large collection of personal photographs – only replicates the social norm today. “For noise to be first noise, it must destabilize us. It must initially jar. It must challenge” (Nechvatal, 2011, p.
19), and the piece no longer accomplishes this. As Kate Crawford (2014) suggests, the anxiety of the big data industry is not that such operations would be unable to handle the enormous amount of data; rather, the fear is that they have not gotten everything yet and therefore must continue their operation, in an ever-expanding quest to be comprehensive.

Figure 7.1, Eva and Franco Mattes, Dark Content, 2016. Customized IKEA desks, monitors, videos, headphones, various cables. © Eva and Franco Mattes. Courtesy of the artists and Postmasters, New York. Installed at Carroll/Fletcher Gallery, London.

In this instance, increasing the volume of information production alone no longer appears to constitute noise for the information system. In fact, it constitutes signals, and directly feeds the system. Instead, this section introduces an artwork by net artists and hacktivists Eva and Franco Mattes that presents a different iteration of the noise of the Internet, one that centres not on the data being produced, but on the neglected material substrate that sustains the system. In previous chapters, arguably most of the noted artworks implicate the issue of digital labour, of producing data in our everyday lives and inadvertently providing surplus/value for those who capture and exploit this data. In particular, Ibghy and Lemmens’ (2013 – 2018) work highlights the inevitability of being subsumed within the system of immaterial labour, but nevertheless wishes to explore the inadequacies of models/representations that seek to capture and “contain the complexity and richness of human experience” (Cooley, 2014, para. 30). Similarly, John Gerrard’s (2015) work Farm acts as a foil to the ubiquity of immaterial labour and reminds viewers that amidst the all-encompassing purview of information capitalism and its myth of the cloud, the materiality of infrastructure and labour persists.
This section looks at the Mattes’ work that examines precisely the infrastructure and labour that persist as background noise in the global ICT system. In an article titled “Proxy Politics: Signal and Noise,” artist and theorist Hito Steyerl (2014) asks one of the crucial questions that has been repeated throughout this dissertation: if the operative assumption of ICT is that all phenomena can be encoded, who determines the distinction between signal and noise? Algorithms, software, and applications are inevitably political. Sustaining the way of knowing fostered by ICT – the logic of commensurability – benefits the stakeholders within information capitalism who profit by selling information and suppressing noise. As Steyerl illuminates, “this division corresponds to a much older social formula: to distinguish between noise and speech is to divide a crowd between citizens and rabble. If someone didn’t want to take someone else seriously, or to limit their rights and status, one pretends that their speech is just noise” (p. 6). Unless we question the neutrality of information and its capacity to represent the world, we risk legitimizing and perpetuating the dominant information systems and the ways they categorize, rank, and exploit the everyday. Upending the information order by amplifying the suppressed noise is crucial.

Figure 7.2, Eva and Franco Mattes, Dark Content, 2016. Customized IKEA desks, monitors, videos, headphones, various cables. © Eva and Franco Mattes. Courtesy of the artists and Postmasters, New York. Installed at BAK, Utrecht.

Steyerl (2014) begins her article by elaborating on computational photography, the mechanism of contemporary smartphone cameras whereby a significant portion of the photo is determined by an internal AI that has accessed and assessed the user’s image preferences and past photos to inform the way in which it takes future photographs.
The AI learns to distinguish between signal and noise based on internal programming and user data. The very same mechanisms, according to Steyerl, are also utilized to manage the content of online platforms. A variety of algorithms have been trained to assess and clean up the unwanted information present on platforms and search engines such as Google, Facebook, and Twitter – content that has been deemed noise rather than signal, meaning anything unsanctioned, ranging from automated proxy spam to fake Twitter accounts to advertisements. The army of bots cleaning up the web is matched by an army of humans. According to Steyerl, Facebook content, for example, is policed to ensure it follows strict rules. “Those rules are still policed by humans, or more precisely a global subcontracted workforce from Turkey, the Philippines, Morocco, Mexico, and India, working from home, earning around 4 USD per hour” (p. 2). This hidden material industry of the Internet is the topic of the Mattes’ (2015) piece Dark Content (Figures 7.1 and 7.2). Initially experiencing difficulty locating content moderators, the artists ended up posing as companies wishing to hire moderators, using the same recruiting service that Facebook and Google would likely have used, and successfully interviewed one hundred such individuals. The interviews were turned into videos, featuring generic avatars reciting the interview transcripts in deadpan robotic speech. These videos were played on monitors attached to scattered fragments of office furniture in unintuitive places – under the desk, on the desk legs, on an office chair – all forcing the viewers to bend and contort in order to view the videos. Speaking to the elusive nature of this industry, the artists comment that “big companies like Google or Facebook don’t want their users – us – to know that their content is being reviewed.
They want to be perceived as transparent tools of self-expression, where everyone can ‘share’ whatever they want, when in fact they have incredibly sophisticated ways to control content” (Mattes, 2015, para. 4). According to the interviews, the moderators reveal that the job is not just about removing gore and sex, but also about complying with special requests, such as the removal of all Bin Laden content during the US presidential campaign and Facebook’s removal of any sensitive material that would hinder its attempt to enter the Chinese market. Apart from the exhibitions, the videos have also been released online, but only on the Darknet. Appropriately, the Darknet is perhaps most emblematic of the excess that cannot be contained in the online world, which the ‘official’ Internet desperately tries to cover up using the content moderators. This account of the hidden industry highlights a few points: 1) information on the Internet is hardly the free and open terrain some might claim, but is instead highly policed; 2) the distinction between noise and signal is policed by those in power, be it the proprietors of the platforms or the ruling state; but most importantly, 3) while these employees work to remove the noise from the Internet, it is also these very labourers who are the noise of the global ICT system. As defined above, noise is that which is outside the logic of information systems, the irreducible and inevitable excess of the material world that cannot be contained within restricted and abstracted structures. Under this framing, certainly all the data that do not fit within the guidelines of the platforms – illegitimate ads, spam, dissent, violence, nudity, protest, etc. – would constitute noise, which is followed by swift censorship. In the politics of signal vs. noise, those with commercial and/or sovereign power dictate and control what can be sanctioned as signal – what can be seen and known.
The fact that content moderation exists undercuts the myth that information is free and democratic. It is also indicative of the inevitable rupture of the dominant structure of these platforms: within the channels of the Internet, noise will always persist and interfere with the communication of the legitimate messages. Dark Content makes this fact salient by pointing to the existence of content moderation, but more importantly, it also highlights a secondary degree of noise: the hidden labour of the Internet, the suppressed presence of its material infrastructure. Through these interviews, played through sanitized and anonymous avatars, the existence of a hidden workforce is emphasized by their lived experiences and material particularities. Herein lies the pedagogical potential of the project, its estrangement of the emancipatory, free, and participatory myth of information society. By inviting one to consume the videos’ content – the disclosure of the content moderators’ working conditions – amid bizarre configurations of office furniture, the artists create the conditions for knowing otherwise in relation to production and consumption on the net, and essentially, the online experience in general. The work reveals that information is controlled rather than simply made available and accessible – a piece of information that is itself actively suppressed. In addition, it stresses the presence of a significant but hidden working class, precarious wage labourers whose numbers amounted to more than a hundred thousand in 2014 (Wenger, 2016). By having viewers watch these videos within and around upturned fragments of office furniture, the project cultivates different ways of knowing in relation to the framing of the Internet as lightweight, free, and capable of abolishing all class distinctions.
Instead, it weighs down that assumption with the noise of the Internet’s infrastructure, the anonymous human labourers whose lived experiences and embodied contexts highlight their unacknowledged and marginalized status as noise. Playing with the distinction between signal and noise, the artists are “exposing the kinks in what Franco calls the shiny and sanitized version of the Internet we are all familiar with” (Wenger, 2016, para. 5). It is important to recognize that in parallel to digital labour – “the exploitation of users’ unpaid labour, who engage in the creation of content … that is at the heart of profit generation” (Fuchs, 2013, p. 237) – lies the hidden labour that scrubs the Internet of all the content that falls outside the information system of profit-generation. The question of labour is made explicit in the installations, where the viewers confront a smattering of office furniture parts. In viewing the video content, the viewers have to ‘work’: crouching down, sitting on the floor, sandwiching themselves between the wall and the piece, looking up at a video projecting downwards, etc. Not only is the hidden labour of the Internet directly exposed through these interviews, but it is also made clear that in information society, what these viewers are doing – consuming content – is work. Here the piece resonates with Ibghy and Lemmens’ work noted in Chapter 5 and its theme of inevitable complicity with information capitalism. In fact, the artists have often exploited the online transactional system for their work, paying users for their personal data. Nevertheless, like Ibghy and Lemmens, the Mattes resist the productivity of signal-processing by introducing a non-productive excess: an office in disarray, foregrounding the suppressed material reality of such labour and temporarily halting the work of the content moderators.
The piece also echoes Gerrard’s video animation noted in Chapter 6, reiterating the point that behind the façade of immateriality lies a material infrastructure of human labourers subject to global economic forces. As media studies scholar Sarah Roberts (2014) suggests, the myth that information is organized automatically and neutrally is linked to “our view of technology as being somehow magically not human” (as cited in Chen, para. 7). Like the never-ending content the moderators are hired to remove, the Mattes’ project itself functions as noise in the net ecology. Unwanted, suppressed, disruptive, but also potentially generative, the project tackles the limits of one’s knowledge of the digital landscape and moves, as much as possible, into the unforeseeable.

4. Estrangement Revisited

Estrangement makes strange the dominant and legitimized configurations of ICT assemblages, disrupting the information order by amplifying the ‘other’ of information: noise. Such tactics of displacement, deviation, and dysfunction take up precisely the call to examine technological assemblages (Heidegger, 1962, 1977; McLuhan, 1964; Feenberg, 1999). As noted above, noise does not only interfere; it is also a generative force. Chapter 3 advanced the argument that visual art has the capacity to cultivate different ways of knowing, a critical and pedagogical potential that creates the conditions for the unknown, unforeseeable, and unprecedented. Such potential is encapsulated in the tactic of estrangement. In all of the artworks examined so far, arguably the artists have created works that, by making the information configuration somehow unfamiliar and alien, cultivate different ways of knowing, relating to, and engaging with ICT operations. The artworks invite the audience to imagine how things could be otherwise. The last of the case studies, examined in this section, pushes this conceit one step further, for a doubling of estrangement.
Accompanying the 2010 exhibition Noise & Capitalism at CAC Brétigny, France, improvising musician and theorist Mattin (2011) writes that noise functions in a way that exceeds the logic of framing, the logic of calculability (to which I would add the logic of commensurability), making it impossible to capture and measure. “With its epistemic violence … noise alienates us” (n.p.). Such alienation positions it as a gesture of estrangement. That being said, Mattin is cautious, and laments how noise is often co-opted and implemented literally as a predictable music genre. Such caution echoes the point noted above and in Chapter 3: what was once considered noise may not be noise forever, and for critique to be potent, it needs to remain mobile and dynamic. What is potent about noise is not necessarily its affinity to a certain sonic quality, but rather its adversarial relationship to the dominant and its ability to make a situation strange. As a conceptual tool, noise is an invitation to rupture the normalized everyday, to cultivate the unknown. As Mattin (2011) writes, “the goal is to create an unprecedented situation – strange for everybody” (n.p.). For him, noise is not just an estrangement for the audience/student but for the performer/teacher as well. In a similar vein, education scholar David Lusted (1986) argues that what pedagogy addresses and examines is not a unilinear transmission of knowledge, but the transformative encounter between student, teacher, and knowledge. The encounter cannot be predetermined, such that it is also a venture into the unknown, the not-yet, and the unforeseeable for the artists themselves. Through such conceptualization, estrangement, with its potential for the otherwise, is doubled.
The pedagogical potential, the degree to which such an encounter is ‘transformative,’ cannot be prescribed or pre-determined like learning outcomes, since “what happens to the production of knowledge when the artwork engages or is engaged with by a member of the audience is hard to measure” (Snaebjornsdottir, 2012, p. 14). As media and pedagogy scholar Elizabeth Ellsworth (2005) reminds us, it is “impossible for an artist, designer, architect, or teacher to anticipate what form a learning will take” (p. 54). Bearing in mind this line of thinking, that “pedagogy holds the potential for an unknowable and unforeseeable” (p. 55) encounter, the final case study exemplifies the pedagogical potential of noise and estrangement, while being situated within the socio-political landscape of information capitalism. The artist and geographer Trevor Paglen is well-known for his artistic research that interrogates the surveillance state. The objects of his research range from secret military bases and covert surveillance programs to speculative satellite designs and the ubiquity of AI. His research focus on the socio-political landscape of the algorithmic surveillance state, together with his collaborative proximity to stakeholders such as Edward Snowden (through filmmaker Laura Poitras) and Wikileaks (through its spokesperson Jacob Appelbaum), makes him an appropriate artist with whom to conclude the study. The art historian John Jacob (2018) writes that Paglen’s practice can roughly be divided into two categories: one that attempts to render visible that which often remains invisible, such as photographs of surveillance satellites and classified military intelligence facilities, and one that considers the otherwise, and opens “a discursive space in which to imagine an alternative … imagine another and potentially different world” (p. 66). In this section I focus on one work that falls into the second category.
To understand the work in question, it is worthwhile to briefly look at Paglen’s practice in general. Much of Paglen’s earlier work focused on ‘revealing,’ through the medium of photography or digital image-making, that which has remained invisible, specifically the various material infrastructures of covert and classified government and military operations – NSA headquarters, surveillance satellites, drones, secret military training and research facilities, names of covert missions, etc. The same strategy has been applied to his more recent series focused on ‘machine vision,’ in which he and his team of developers attempt to visualize what object-recognition and facial-recognition algorithms ‘see’ when they decipher their data input. According to Jacob’s division, and Paglen’s own, his other group of projects falls into a category of ‘impossible objects’ that eschew an artefact’s initial corporate and state purpose, such as a useless satellite. It is likely in reference to these projects that Paglen (2018) says, “I want art that helps us see how the historical moment could be different” (p. 13). Like Feenberg’s (1999) urge to investigate and introduce alternative ‘technical code’ – the normative values and assumptions built in, prescribed, and sustained by specific technological regimes – Paglen envisions these impossible objects proposing alternative ‘scripts.’ One could argue that Paglen’s two strands of practice are simply different points on the same spectrum, as the argument advanced in Chapter 4 shows that revealing can be more fruitfully reconsidered not as literally making things visible but as the provision of conditions for the audience to encounter, confront, and engage ICT in a way that might allow for the emergence of alternative ways of knowing. Nevertheless, for this case study, the section focuses on the impossible objects.
It is in this series of projects that the pedagogical potential of Paglen’s practice is most pronounced, in his invitation for the viewers to reconsider, imagine, and come to know the world and its phenomena differently than its current configuration would allow or sanction – a different way of relating to and engaging with the ICT assemblages that have become so entrenched and normalized.

Figure 7.3, Trevor Paglen, Autonomy Cube, 2015. Plexiglass cube, computer components, 14” x 14” x 14”. © Trevor Paglen. Courtesy of the artist and Metro Pictures, New York.

The work this section focuses on is titled Autonomy Cube (Figures 7.3 and 7.4), a transparent cube, roughly 14” x 14” x 14”, which carefully houses what looks to be a circuit board assemblage. Developed with activist and Wikileaks associate Jacob Appelbaum, who is also a key member of the Tor organization, the piece creates a WiFi network within the institution in which it is situated (usually a museum), and reroutes the traffic through the Tor network, which grants the users anonymity. Tor, which stands for The Onion Router, is free and open-source communication software that reroutes internet traffic through thousands of volunteer relays, each peeling away one layer of encryption like the layers of an onion, which prevents one’s location and internet activity from being tracked. In practice, this allows any museum visitor to use the internet, access any webpage or contribute any content, free from surveillance and fear of state persecution or commercial exploitation. One way to interpret the piece positions it within the tradition of institutional critique, alongside the practices of artists such as Daniel Buren and Andrea Fraser. Paglen (2018) himself speaks to the project’s recognition that museums are becoming more Google-like in their attempts to obtain data from museum visitors and to conduct their own data-mining and usability testing.
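The onion-routing principle described above – the sender wrapping a message in one layer of encryption per relay, and each relay peeling away exactly one layer – can be illustrated with a minimal sketch. This is a conceptual toy, not Tor’s actual protocol: a trivial XOR operation stands in for real cryptography, and the relay keys are invented for illustration.

```python
# Toy illustration of layered ("onion") encryption. A real Tor circuit
# negotiates session keys and uses authenticated ciphers; here a simple
# repeating-key XOR stands in, purely to show the layering principle.

def xor_bytes(data: bytes, key: bytes) -> bytes:
    """Toy symmetric 'cipher': XOR each byte with a repeating key."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def wrap(message: bytes, relay_keys: list[bytes]) -> bytes:
    """The sender adds one encryption layer per relay, innermost first,
    so the exit relay's layer is applied before the entry relay's."""
    for key in reversed(relay_keys):
        message = xor_bytes(message, key)
    return message

def route(onion: bytes, relay_keys: list[bytes]) -> bytes:
    """Each relay in turn peels exactly one layer; no single relay
    sees both the sender and the final plaintext request."""
    for key in relay_keys:
        onion = xor_bytes(onion, key)
    return onion

relay_keys = [b"entry", b"middle", b"exit"]   # hypothetical volunteer relays
sealed = wrap(b"GET /page", relay_keys)
assert sealed != b"GET /page"                 # opaque while in transit
assert route(sealed, relay_keys) == b"GET /page"  # fully peeled at the exit
```

The point the sketch makes is structural: because each relay can remove only its own layer, the entry relay knows the sender but not the request, and the exit relay knows the request but not the sender – which is what grants the museum visitor anonymity.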
Positioning art environments as extensions of the state-commercial apparatus, with their ICT operations and information capitalist logic, is cogent, and the piece’s attempt to take art museums to task on internet privacy is no less important. However, this section engages in a broader reading that looks at the implications for ICT in general, rather than restricting them to the context of art institutions. As Jacob (2018) writes, Autonomy Cube asks a question that is relevant for any discussion of ICT assemblages: “what might the Internet, severed from its surveillance functions, look like” (p. 66)?

Figure 7.4, Trevor Paglen, Autonomy Cube, 2015. Plexiglass cube, computer components, 14” x 14” x 14”. © Trevor Paglen. Courtesy of the artist and Metro Pictures, New York.

I argue that this question is Paglen’s invitation for one to think otherwise about our relationship to ICT-facilitated surveillance, and that in asking this question the work offers a pedagogical encounter. According to the artist, the most important thing Autonomy Cube does is “to point out the fact that the idea of using the internet to look up information in a surveillance-free environment seems odd” (p. 15), and in doing so, it estranges the online experience. The fact that this alone makes browsing the internet ‘strange’ is itself a significant point. Ubiquitous surveillance, commercial search engines, not owning one’s data, and the exploitation of digital labour have all become the norm. And deviating from this norm is precisely the act of estrangement. Paglen (2018) acutely points out how our digital literacy has largely been shaped by the entrenched business ethos of large tech and telecom companies. “It’s gotten to the point where the idea of not tracking how people use communication devices, or creating spaces that are free from metadata collection and behavior tracking, seems almost inconceivable” (Paglen, 2018, p. 15).
At the end of Chapter 4, I wrote that Julian Oliver’s all-or-nothing approach, in which he created devices to either block the transmission of signals or surreptitiously obtain the data of others, functions as a useful theoretical foil to Paglen’s Autonomy Cube. While Oliver’s two projects took forceful approaches – either combative co-optation or a wholesale refusal to engage – Paglen’s stance here is more nuanced. Recognizing that there is no ‘outside’ to the information society, the piece engages with the issue head-on, but not in an extra-judicial way, as Transparency Grenade did. The piece is disruptive in its intention “to interrupt normal circuits of exchange and to imagine a different kind of internet” (Bryan-Wilson, 2018, p. 89), but it does so by imagining what the technology could achieve if used in a different manner, counter to state and commercial usage, and by opening up the terrain to the multifaceted and unpredictable nature of user engagement. Herein lies the doubling of estrangement, that of the artist himself. The oddity of a browsing experience and digital engagement free of commercial and state surveillance is the initial estrangement. But as this section has argued, noise – the irreducible multiplicity and indeterminacy of the material world within which it is embedded – introduces something unprecedented for everyone. If co-opted and commercialized data, the output of digital labour, becomes ‘information’ or ‘signals,’ then the data produced through the auspices of a free network certainly constitutes ‘noise’ from the perspectives of the state and commercial enterprises. In granting users the means to exist online without surveillance from the tendrils of information capitalism and the state, Paglen has opened up space for the unforeseeable.
As Jacob (2018) writes in relation to Paglen’s work, “although a space may be controlled by a hegemonic class or entity, [Henri] Lefebvre insisted that control may be countered by the creation of new, autonomous spaces” (p. 27). Paglen has created the radical conditions for a pedagogical encounter, the tools for autonomy, without any means of dictating and prescribing beforehand how users and museum visitors will take up this opportunity and potential. As in the improvisation methodology of Mattin, the creation of such conditions is one in which the artist is estranged as well. Creating the conditions for noise entails affordances of something unprecedented, without necessarily prescribing the structure and form of such noise. He has simply provided the means for one to imagine how the world can be different, and such potential is precisely the uncontainable performance of noise, rupturing through the imposed structure of data-mining and surveillance. Without dictating how, the piece insists “that our lives should be autonomous from the implicit power structure that govern our world … a constant reminder of a potential for liberated network freedom” (Kholeif, 2018, p. 116). It is important to note that this is not in any way an endorsement of the wholesale celebration of interactivity or participation that is rampant in the discourse of media art. This section argues that part of the potency of Autonomy Cube is Paglen’s open-ended invitation for the users to engage with it in unforeseeable ways, but its radicality lies in its productive subversion of internet protocols and the creation of an environment that is free of algorithm-facilitated surveillance and data-mining, not simply the fact that it is open to user engagement.
Numerous critical perspectives have been advanced against celebratory stances on interactive and participatory art, such as the critiques 1) that these practices are actually very prescribed and nowhere near as democratic as they claim to be (Stallabrass, 2003; Burnett, 2007), 2) that the process of art-consumption is already a dynamic and participatory engagement, and therefore interactive art is too literal and redundant (Burnett, 2007; Simanowski, 2011), and most importantly, 3) that interactive and participatory art that uses digital media simply repeats the now-ubiquitous mechanisms of data-driven ludic, commercial, and government initiatives in information society (Barney et al., 2016; Bishop, 2012; Galloway, 2015; jagodzinski, 2010; Simanowski, 2011; Stallabrass, 2003). As Barney, Coleman, Ross, Sterne, and Tembeck (2016) point out in The Participatory Condition in the Digital Age, the unprecedented contemporary pervasiveness of the participatory impulse and its attendant expectations can be traced to the conflation of two questionable assumptions: that an increase in communication is an increase in democracy/freedom, and that media naturally facilitate this participation. This critique is acutely argued by art historian Claire Bishop (2012) when she reminds us that the prevalence of the participatory in contemporary art is not an unequivocally laudable development. Characteristics which may have been considered radical, subversive, or transformative at one point in history are neither immutable in their status in relation to the socio-political landscape nor immune to being co-opted by the dominant mainstream. As she writes, “today, participation also includes social networking sites and any number of communication technologies relying on user-generated content” (p. 30).
Galloway (2015) summarizes the political stakes succinctly when he laments that “it’s frustrating to see art … that simply repeat the kind of tricks that Google or Amazon have co-opted … there’s nothing radical today about interactivity in art …. Interactivity is at best duplicitous if not reactionary” (para. 13). Therefore, it is crucial to note that the openness of Paglen’s piece is only meaningful and significant in the context that it is primarily interrogating the data-mining practices of commercial and state ICT assemblages and co-opting interactivity for autonomous ends. It is radical not because it is interactive, but rather because it reclaims interactivity from commerce and the state. Ending the case studies with this particular piece is intentional and crucial, for it brings us back to the points raised in Chapter 3 about the impossibility of predetermining outcomes with artworks (Ellsworth, 2005; jagodzinski, 2010; jagodzinski, 2017). While the four main chapters of the study conducted reciprocal analyses of artworks and concepts and argued that these artworks have the capacity to estrange one’s understanding of and relationship to ICT assemblages in information capitalism, it is important to once again emphasize that such an outcome – estrangement – cannot be predetermined and engineered. To that effect, Paglen’s piece is an important reminder for artists to check their positionality, and to recognize that, to a certain extent, the tactic of estrangement cuts both ways. If the noise of copious amounts of information, such as in Elahi’s work, is no longer sufficient in the post-Snowden era, Paglen seems to offer instead the noise of the irreducibility of human and inhuman contingency, embeddedness, slippage, and materiality.
For all of the NSA’s systematic and comprehensive efforts to enact algorithm-facilitated surveillance, and for all the processing power of machine learning and predictive algorithms, the NSA was unable to predict the actions of Snowden, as “the greatest and most unpredictable vulnerability of secrecy as a system of organizing human activity is human” (Jacob, 2018, p. 57). Creating an anti-environment of noise, Autonomy Cube estranges and disrupts the current configuration of ICT assemblages in relation to the average user, and creates space for embodied contexts and performative materials, beyond the encoding sovereignty of information capitalism.

5. Conclusion

This chapter has continued the argument advanced in the previous chapter, which insists on the presence of the medium, the material substrate (or environment) within which worldly phenomena are embedded. The chapter positions the concept of noise as a specific iteration of such presence, and the amplification of noise as a form of estrangement. Noise, being the ‘other’ of information, is an appropriate form of the estrangement tactic on which to conclude this research. Its resistance to the reduction-to-form of information, its rupture of the restricted boundaries of stable structures (including information), and its irreducible excess of the material world all question the way of knowing fostered by information – the logic of commensurability. Both the Mattes’ and Paglen’s works analyzed in this chapter resist the encoding logic of information by amplifying the noise – the overlooked, the suppressed, the unforeseen, the illegitimate. In the case of Dark Content, the Mattes bring to light the existence, the working conditions, and the insightful reflections of the content moderators, who not only scrub the internet but are also themselves scrubbed from the online landscape of sanctioned and legitimate information.
This project amplifies the noise of the hidden labourers and also the close link between being a digital user and the perpetual work that usage entails. Autonomy Cube, on the other hand, emphasizes the need for autonomy from state and commercial surveillance facilitated by ICT, and uses the same technology to create the means for users to consider the possibility of a different world. In doing so, it creates the affordance for unforeseeable noise to emerge. As Lusted (1986) writes, pedagogy entails a transformative encounter between the student, teacher, and knowledge. Autonomy Cube creates the conditions for a new type of relation to ICT, where the not-yet and unforeseeable can emerge, unknown in advance to users and artist alike. Perhaps this makes Autonomy Cube an exemplary work of the pedagogical potential of media art.

Chapter 8: Conclusion

1. Summary

I have used the interdisciplinary method of ‘cultural analysis’ (Bal, 2007) to conduct several analyses of artworks in relation to concepts related to digital media assemblages, specifically the socio-political and onto-epistemological configurations of information and communication technologies. To do so, the study proposed the theoretical framework of ‘estrangement’ as a tactic that encapsulates both the pedagogical potential of artworks to unsettle dominant regimes and the need to examine the technological medium through a disruption of its normative operations – an answer to the call for investigating the technical code. By applying the framework of estrangement to the case studies of artworks, as an instantiation of artistic potential for cultivating different ways of knowing in relation to ICT assemblages, the study argues that in various ways, the examined artworks subvert the epistemic model of ICT.
Specifically, each artwork in some way destabilizes the dominant way of knowing sustained by informatics – the logic of commensurability – by deviating from the normative function or foregrounding that which had been obscured by it, and in so doing creates a critical and pedagogical condition that invites the viewers to know otherwise. To recap, Chapter 4 began the examination of digital media and media art by diving into the concept of information. It outlined the problem whereby information capitalism extracts value from the everyday by encoding all worldly phenomena into information. Such an operation is legitimized and sustained by a way of knowing fostered by the digital, what I have termed the logic of commensurability. The chapter approached this notion by linking the digital – the binary of ones and zeros that underscores informatics – with a Cartesian way of knowing, and positioned such a worldview as that which media art needs to tackle, through the tactic of estrangement and its pedagogical potential for cultivating ways of knowing otherwise. Chapter 5 elaborated on what was advanced in Chapter 4, specifically the way of knowing fostered by informatics, drawing a link between contemporary models of AI (the worldview that the brain is an information-processor and the world consists of information) and traditional epistemology based on the metaphysical distinction and dominance of the knowing subject/mind over the knowable object/body. Crucially, this chapter argued against such a binary of domination and against the claim that information is capable of representing the phenomena of the world (Dreyfus, 1992; Stiegler, 1998; Hayles, 1999; Barad, 2007; Ingold, 2013; O’Neil, 2016; Gottlieb, 2018). Instead of such an abstracted and decontextualized way of knowing, these scholars offer alternatives that focus on context, contingency, embeddedness, and materiality.
This argument highlights information’s shortcomings both in the sense that ICT assemblages are vested with specific political interests (i.e. those of the proprietors of information platforms) and in the sense that such algorithmic models are inadequate at capturing the multiplicity of the embedded world. While this argument was being fleshed out, it was also emphasized that the study does not endorse technological determinism, and that arguments about ICT’s affordances and agency need to be understood as part of an assemblage or hybrid that includes the political, economic, social, and material. Chapter 6 brought the argument back to the ‘media’ of digital media to advance the perspective that the critique of ICT assemblages and of the way of knowing fostered by informatics must proceed with the acknowledgement that the human is always already mediated, such that this critique must come from within the mediation itself, not from an elusive ‘outside.’ Such a caveat is important, for it makes clear that techné is not inherently predisposed to control, nor something that one should simply remove; rather, it is inextricably entangled with the human. By not making ICT the ‘other,’ this research takes a non-binary stance that refuses the anthropocentrism of Cartesian rationalism and contemporary informatics, but also insists that the critical task for media art is to interrogate the medium itself, from within, without assuming that one can be without technology. This stance follows the perspective of the previous chapter, emphasizing the importance of the medium, which is aligned with objects, matter, and embodiment, beyond functioning merely as a passive vehicle for the rational and abstracting mind of the subject. The hierarchical relationship that exists between the two elements of such binaries (such as mind and body) means that these binaries are politically fraught, as noted by Haraway (1991) and Gunkel (2007).
This chapter also reinforces the call to action introduced in Chapter 1, that media art needs to examine media itself, specifically information and communication technology, rather than positioning the technology as neutral and instrumental. Chapter 7 argued for noise as an exemplary iteration of estrangement in the context of information capitalism, as the ‘other’ of information. As a concept, noise is another way to think about this dissertation’s focus on the performative materials and embodied contexts that cannot be, and refuse to be, encoded by information’s abstracting act. Following the previous chapter, it reiterates the persistence of the medium, and positions noise as the irreducible multiplicity of the material world that is at once disruptive and generative, capable of destabilizing the dominant configurations through its amplification and of creating space for the unforeseeable. In addition, each of the four previous chapters examined two artworks as part of the study’s reciprocal analysis of artworks and concepts, where each artwork takes on the theorization advanced in the chapter and adds to the analysis. Each chapter also makes the argument that the artworks, in various ways, hold the pedagogical potential for cultivating ways of coming to know differently in relation to ICT operations. This is accomplished through the tactic of estrangement, of visual art’s potential to cultivate the unknowable and unforeseeable, to allow the different and the other to emerge, to uncouple from the ordinary.
By analyzing the artworks noted in the previous chapters through the concept of estrangement, the imperative to scrutinize the media/technology itself – by excavating and destabilizing its technical code to create an anti-environment – is highlighted, rather than the common approaches listed in the introduction, such as ones that are essentialist and celebratory (the assumption that media art is always interactive and inherently democratic), innovation-driven (with a focus on developing more sophisticated technology for commercial purposes), or ones that simply utilize digital media to explore other themes. Examined in relation to digitality, information, mediation, and noise, the eight artworks each brought forth ICT-related issues specific to their contexts (i.e. search engines, Google Ads, data-mining, alternative servers, etc.) in a process of reciprocal analysis where the concepts and research objects informed one another. As part of the concept-based methodology of cultural analysis, the study moved through various scholarly terrains, in and out of art education, media studies, and media art discourse. It sought to create a theoretical constellation that goes beyond the logic of commensurability by emphasizing the non-neutrality and limits of information, undercutting its privileging of the rational human subject, and insisting on the embodied contexts and performative materials of the everyday and the world. The study examines and positions information (the binary digits that constitute the digital) as a way of knowing, and asserts that the materiality of mediation is an ineradicable component, one which needs to be foregrounded through estrangement in order to scrutinize the medium itself.
The discussion of media art is situated within the space of ‘ways of knowing’ primarily because of the argument that various technologies foster particular worldviews and that digital media technologies (where the binary logic of information shows a strong relationship to traditional epistemology) should be positioned as a way of knowing. This is crucial, as this re-positioning accomplishes two outcomes: 1) it offers a non-binary solution to the media art debate (Allen & Søndergaard, 2016) between revealing the mechanisms of the ICT assemblages and more active, direct artistic intervention, by moving the discussion to a broader level that encompasses both and recognizes that ICT has permeated all levels of the everyday (Mejias, 2013), and more importantly, 2) it leverages what art education scholars have argued to be art’s pedagogical capacity to cultivate ways of knowing otherwise to combat the way of knowing fostered by informatics – the logic of commensurability – and initiates the main argument of this research. As the dissertation moved through the concepts and artworks, the following emerged as a consistent point woven throughout: each artwork challenges the dominant ICT in question by intervening into its information system. For Scourti (2015), it was the creation of alternative information – resulting in a hyper-mediated self – which probed commercial data-mining and quantification by exacerbating the conventions of Google Ads and predictive algorithms. Oliver (2013, 2016), on the other hand, went in the opposite direction, directly confronting such exploitative and capitalist data-mining operations and creating information tools that both co-opt and deny these operations. Similar in its direct engagement, Mongrel (1999) hijacked the entire information-retrieval system altogether in order to inject their own counter-information.
Questioning the fundamental claims of data visualizations, Ibghy and Lemmens (2013-2018) produced non-productive information to subvert the truth claims of data representations, while contemplating whether one could ever really create non-productive information products that do not get subsumed into the capitalist system. For Gerrard (2015), it was an investigation into the often-neglected materiality of the digital industry, the hidden information (which is still material) that sustains the myth of immateriality within information capitalism. The Mattes (2016) likewise investigated the previously suppressed information regarding immaterial and digital labour, information that points to the ineradicable materiality that undergirds and ‘cleans’ the Internet. Lastly, Paglen (2015) created conditions for the radical possibility of engaging in information exchange free from state and commercial interests, on an alternative information platform altogether.

2. Conclusion and Key Points

All of the noted artworks cultivate different ways of coming to know and engage with ICT, specifically by interrogating and subverting:

1. The ways in which information capitalism sustains the myth that information is neutral and how it perpetuates power structures, which are often embedded within the design and deployment of algorithmic operations
2. The cultivation and sustaining of an information-based way of knowing – the logic of commensurability – which is structured by binaries such as subject/object, mind/body, abstract/embodied

These artworks (the list below goes beyond the case studies and includes artworks mentioned briefly) have demonstrated such pedagogical potential through the tactic of estrangement, specifically by ‘estranging’ and intervening into the current and dominant information configuration. In other words, they introduce and amplify the noise, disrupting the legitimized division between signal and noise, by:

1. Creating counter-information and/or alternative information, often by exacerbating the current operations of information (Scourti, Ibghy + Lemmens)
2. Hijacking and/or blocking information (Oliver, Bridle, Blas)
3. Producing too much information (Elahi)
4. Revealing and highlighting hidden and marginalized information (Mongrel, Mattes, Gerrard)
5. Hijacking the means through which one accesses and encounters information (Mongrel)
6. Setting the conditions that would allow one to create information without surveillance (Paglen)

Broadly speaking, each artwork examines particular facets of contemporary ICT and the logic of commensurability that undergirds the field, such as search engines, content moderation on the web, and data-mining for machine learning algorithms. In various ways, they revolve around and trace the contours of a landscape marked by algorithm-facilitated data-mining practices that encode the everyday and exploit user data in the big data economy for further entrenchment of decentralized control. By probing these operations of ICT, the artworks each examine and interrogate the reigning technical code that governs them, bringing it to the foreground by estranging its otherwise unexamined conventions and protocols. Through this Brechtian tactic, everyday digital conventions and unquestioned norms functioning outside of consciousness are ruptured, creating situations conducive to the emergence of new ways of knowing, seeing, and doing (in this case, specifically different ways of engaging and orienting oneself to ICT). As noted previously, such capacity for cultivating different ways of making sense of the world and the phenomena within it is precisely the pedagogical potential of artworks (Atkinson, 2008; jagodzinski, 2010; Garoian, 2015).
To return to the artistic potential as something that is simultaneously critical and pedagogical, the artworks analyzed here cultivate such potential through the subversion of the logic of commensurability by disrupting the specific medium’s informational order. Albeit reductive, the following table (Table 8.1) aims to summarize the medium being foregrounded and examined by each artwork, the ways in which they utilized the tactic of estrangement, and the resulting intervention into the information system of each medium.

| Artist | Medium being examined | Estrangement method | Informational intervention |
| --- | --- | --- | --- |
| Scourti (Life in AdWords) | Google Ads; commercial data-mining operations within Google | human/personal disclosure estranged through Google Ads algorithms, and vice versa | alternative information created by leveraging and exacerbating Google Ads conventions |
| Oliver | the Internet and the online experience; state and commercial data-mining and surveillance | the online experience estranged by reversing the roles, co-opting the strategies used by state and commercial interests; the online experience estranged by a complete negation, forcing one offline | both the aggressive extraction/capture of hidden information and the complete refusal of any informational exchange online |
| Mongrel | search engines and SEO | search engine estranged through hacking and manipulation of search results | co-optation of the information platform and provision of counter-information |
| Ibghy + Lemmens | data representations of algorithmic processes | data representations estranged through material process and lack of productive value | counter-graphs and counter-information (or non-productive information) |
| Scourti (Think You Know Me) | machine learning algorithms of smartphones and other applications | the self and algorithms estranged through a deliberate blurring of the two | exacerbating the function and exposing the limits of machine learning algorithms and their information claims |
| Gerrard | Google server farms | myth of the cloud and immateriality estranged by the presentation of the physicality of the digital | presentation (and creation) of hidden information that foregrounds the materiality of the Internet |
| Mattes | content moderation; consumption and production within information capitalism (digital labour) | browsing experience estranged by disclosure of content moderation and unconventionally-placed interfaces | revealing of information that brings to light the hidden labour force required to maintain the information society |
| Paglen | the Internet and its usage, being online in general; state and commercial data-mining and surveillance | standard surveillance process estranged by using alternative servers | provision of a platform that allows for the creation of information free from state and commercial extraction |
| All | all ICT platforms, services, and protocols; the logic of commensurability | some form of deviation from the normative functions, assumptions, and operations | disruption to and/or through information and information protocols |

Table 8.1, Summary of Case Studies

In all eight artworks examined in the study, the recurring tactic is some form of disruption of the informational order of the ICT assemblages the artworks are targeting. In other words, against the reduction-to-form of information and its capitalist flows, the inevitable excess of noise is amplified. As the irreducible multiplicity and indeterminacy of the everyday and the material world, noise is antithetical to information and its abstracting act. As I have shown throughout the previous chapters, information and its way of knowing – the logic of commensurability – presume information is capable of adequately and neutrally representing the phenomena of the material world.
And such representational and Cartesian onto-epistemology, which presumes one comes to know as a detached and rational subject abstracting and domesticating the world of objects, needs to be questioned (Dreyfus, 1992; Stiegler, 1998; Hayles, 1999; Barad, 2007; Ingold, 2013; Gottlieb, 2018). Noise, as a concept, refocuses on the material, embodied, embedded, and lived contexts, and emphasizes those which have been neglected or cannot be captured and predicted by information’s abstracting act. In the examples of the artworks examined, this would be the human particularities that cannot be captured by data-mining, the information and population that are inadvertently or deliberately excluded by search engines and databases, the material presence of infrastructure and labour that is lost in the myth of the cloud, the unforeseeable possibilities of a non-productive information platform, among others. As the ‘other’ of information, noise is the most appropriate iteration of the estrangement tactic in this context. The list above shows the diverse ways artworks can engage with ICT assemblages and cultivate ways of knowing otherwise. This diversity is necessary, for noise, as an estrangement tactic, does not have an inherent form that will always remain critical. Estrangement is adversarial, without dictating exactly how and in what form the tactics will be realized. And any further specification would be unproductive, as critique needs to be fluid and mobile. What I have outlined and proposed as radical and potent strategies to cultivate different ways of knowing will not always remain radical, in the same way that media and net art that focus on communication, exchange, and the author-less are no longer radical. As noted in Chapter 3, there are innumerable ways to enter these artworks and practices, and the effects of artworks cannot be prescribed or pre-determined (Ellsworth, 2005; jagodzinski, 2017).
Nevertheless, I have analyzed the artworks in relation to various media theories and philosophies of technology and articulated the various ways they create conditions for coming to know the world differently, invitations for one to consider, relate to, and engage with ICT assemblages in drastically different ways. Such a shift in orientation, worldview, and ways of knowing is arguably pedagogical. And as jagodzinski (2010) cogently writes, such pedagogical potential, while never something that can be guaranteed, is still worth pursuing: “we as artists and educators … should orient ourselves towards just such a potentiality” (p. 139).

3. Moving Forward

At the time of writing this conclusion, the European Commission in Brussels just released the European Data Strategy on February 19, 2020. According to the New York Times (Satariano, 2020), the EU’s cautionary approach to its ‘digital future’ and the industries of big data and AI is a major concern for American tech giants, who fear regulations would impede their stake in the market. As the article suggests, the EU’s perspective differs from the American approach in that “American lawmakers and regulators largely left Silicon Valley companies alone, allowing the firms to grow unimpeded and with little scrutiny of problems” (para. 17). In contrast, the European Data Strategy (2020) is concerned about the “accumulation of vast amounts of data by Big Tech companies [and] the role of data in creating or reinforcing imbalances in bargaining power” (p. 15), and the European Commission’s (2020) press release emphasizes strict EU rules, human-centred design, fundamental rights, non-discrimination, and the need “to give citizens better control over who can access their machine-generated data” (para. 17). However, even with its apparent focus on data ethics, the strategy also speaks to expanding the European AI industry, tapping into the unfulfilled potential of data, and becoming a leader in the data economy.
As the concluding summary above pointed out, the study argues that the eight artworks interrogate one or both of the following: 1) the non-neutral and politically-fraught nature of information, which is often baked into the design and deployment of algorithms, and 2) the information-based way of knowing, which abstracts lived and embodied material experience into decontextualized metrics. The European Data Strategy makes it clear that the topic of this research is still highly relevant and urgent today, that there is indeed a need to tackle the unchecked power of the tech giants and to introduce more regulations that would hinder the pernicious effects of commercial/state-driven machine learning and its weaponization. What the European Data Strategy does not touch upon is the fundamental suspicion towards information that this research proposes: that information’s claim to adequately and neutrally represent the world is suspect, and that the abstracting act of information is inherently violent and perpetuates hierarchical binaries. In that regard, the artworks analyzed in this dissertation have much to offer, as their pedagogical and critical potential to cultivate ways of knowing otherwise could invite one to consider different ways of orienting, relating to, and engaging with ICT assemblages. This could entail not only resistance to commercial exploitation of one’s data within information capitalism, but perhaps even a deeper questioning of the epistemic assumption that frames one as an ‘information person’ in the first place – the logic of commensurability.

4. Afterword: The Order of Things

This final section of the dissertation turns to my positionality and further contextualizes the research.
Art historian James Elkins (2005) proposes that there are four types of practice-based visual art PhDs: 1) one that takes the form of an extended MFA degree, where the artist creates work and a piece of writing intended to supplement the work, 2) one where the work and the writing are of equal merit and constitute the research together, 3) one that is a creative dissertation of some kind, where the dissertation is an artwork, and 4) one where the artwork(s) alone is the research, and no writing is necessary. I would suggest that my doctoral research differs from all of the above. The obvious difference is that I did not make work as part of the fulfillment of requirements for the doctoral program. I did, however, and still do, maintain an artistic practice that runs in parallel to the doctoral pursuit. Therefore, this doctoral research does not seek to explain my work in the manner of an artist statement, nor is it an artwork in itself; rather, it functions as a part of my larger artistic practice, as research that informs and constitutes my material practice. As such, it makes sense to end by talking about my latest project. This is not an attempt to present the project as the ultimate example of the estrangement tactic confronting digital politics, but simply a way of folding my position as a practicing media artist into the present research. Between 2018 and 2019 I received research and production funding from the Canada Council for the Arts for a media arts project. The piece, tentatively titled The Order of Things (Figures 8.1 and 8.2), is a continuation of my ongoing media installation practice with a research focus on interrogating information capitalism.
The title is taken from Michel Foucault’s book The Order of Things (1966), and draws a connection between the history of order, categorizations, and taxonomies and the ubiquity of the algorithmic data-mining processes that constitute subject-formation in contemporary information society – the “informational person” in Koopman’s (2019) terminology. As referenced in Chapter 5, the history of classification systems is symptomatic of society’s stratification of power, fortifying sanctioned knowledge and marginalizing the noise, a practice that continues in today’s information systems, such as databases and search engines (Noble, 2018).

Figure 8.1, Kevin Day, The Order of Things, 2019. Aluminum channels, speakers, plexiglass, computer components, dimensions variable. © Kevin Day. Courtesy of the artist.

Scholars such as Tung-Hui Hu (2015) trace the information-power complex from Foucault’s disciplinary society through Deleuze’s control society to today’s ‘cloud society,’ governed by the sovereignty of data. Koopman (2019) likewise draws from Foucault, and notes the similarity between infopower and Foucault’s concepts of discipline and biopower in their techniques of regulation, control, and governance. For Koopman, it is important to recognize that the informational turn did not emerge out of nowhere; rather, it struck a chord with so many disciplines when it emerged in 1948 through Shannon and Wiener’s work because the societal landscape was already receptive to such ideas. He writes, “we were already spooling the thread of data a century ago when we initialized universal formats for persons on humble forms, plain cards, ordinary dossiers, and unassuming documents” (Koopman, 2019, p. 155). There is a need to recognize that the digital act of information, as Chapter 4 has argued, is not restricted to digital media, but also has an affinity with traditional epistemology.
Information is not new, if one recognizes the long history of the subject domesticating and abstracting the world into discrete units, proxies, representations, and objects. The act of encoding is, of course, political. Related to this imbrication with power is the epistemic claim information makes: the claim that information can adequately and neutrally represent the phenomena of the world. It is, to go back to Heidegger (1977), a particular way of revealing the world. And once such a way of revealing has concretized, it becomes a challenge to know otherwise. When one views the world through such a lens, one naturally looks for information. However, “it would be a mistake to assume that the personal information in question was already lying around waiting to be collected” (Koopman, 2019, p. 157). Rather, it is the encoding process itself that “brought the personal information of informational persons into being” (p. 157). The world does not consist of informational objects waiting to be discovered (Dreyfus, 1991; Law, 2004); rather, informational objects are the products of such an epistemic model and its technique of encoding. “Information thus became political precisely when we became our information” (Koopman, 2019, p. 155). The act of encoding the everyday and the users, the reductive and inadequate nature of such encoding, and the question of who drives the operation of encoding are some of the topics that the project below tries to tackle.

Figure 8.2, Kevin Day, The Order of Things, 2019. Aluminum channels, speakers, plexiglass, computer components, dimensions variable. © Kevin Day. Courtesy of the artist.

The interactive sound installation consists of seven standing ‘scanning stations’ made of aluminum square channels, with the dimensions 3’ x 2’ x 7’, each equipped with a single-board computer, motion sensor, breadboard, amplifier, and speaker (pointing downwards), resting on a piece of plexiglass.
The structures are placed in a row, facing the audience, inviting them to step into the roughly human-sized structures. The stations are designed so that upon entering, the sensors activate the single-board computers to (pretend to) scan, analyze, describe, and quantify the audience members, and present these predictive and analytic statements used in data-mining and e-commerce via the speakers (for example, statements such as ‘you may also like…’ or ‘you have exhibited signs of…’ or ‘your data has indicated…’). In actuality, the computers are running a program that, once the sensors are triggered, plays a random combination of descriptions, narratives, predictions, and suggestions that have been formulated and recorded beforehand and stored in a database. In doing so, the scanning stations highlight the logic of human-as-data, and emphasize the fallibility, non-neutrality, and limits of informatics through the presentation of incoherent and nonsensical disinformation. The modular, standing, human-sized structures were conceived to resemble dystopic scanning booths, where the structures’ anthropocentric size emphasizes a history of measuring bodies and quantifying people. Their close affinity to the human form highlights a logic of mapping and measurement, leading to the potential of sequences, categories, and order. By implicating the body, the project foregrounds the hierarchical relationship whereby the epistemic model and operation of information subjugates the world of materials, including that of the body, echoing the binaries of mind vs. body, subject vs. object, abstraction vs. embodiment, among others. Furtively captured and inadequate at representing the users, information becomes the pertinent unit of economic exchange and the site of representational politics in information capitalism.
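The trigger-and-playback logic described above can be sketched in a few lines. The following is a minimal, illustrative sketch only, not the installation’s actual code: the category names and all phrases beyond the three quoted above are assumptions, and in the piece itself the selected lines would be routed as pre-recorded audio to each station’s speaker rather than returned as text.

```python
import random

# Hypothetical stand-in for the installation's database of pre-recorded
# statements; only the three quoted phrases come from the text above.
STATEMENTS = {
    "suggestion": ["you may also like...", "users similar to you have viewed..."],
    "diagnosis": ["you have exhibited signs of...", "your profile suggests..."],
    "analysis": ["your data has indicated...", "your behaviour has been classified as..."],
}

def on_motion_detected(rng=random):
    """Simulate one 'scan': pick one pre-recorded statement from each
    category and return them in random order, as the sensor-triggered
    program is described as doing."""
    lines = [rng.choice(options) for options in STATEMENTS.values()]
    rng.shuffle(lines)
    return lines

# In the installation, the motion sensor's callback would invoke
# on_motion_detected() and play each line through the station's speaker.
```

The arbitrariness of the selection is the point: the ‘analysis’ bears no relation to the person standing in the station, enacting the fallibility of the profiling it mimics.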
The project seeks to subvert such practices through the creation of an installation that highlights the process of data-extraction and the subsequent representation of users through this information, however decontextualized, inadequate, and inevitably shaped by the designer’s biases that information may be. This reductive and fallible act is made evident through the computers’ absurd and false attributes forcefully prescribed onto the audience members, negating the claim that information is capable of adequately and neutrally representing the phenomena of the world. What is exacerbated is the gap between the biased and manipulated information on the one hand, and the idiosyncratic and irreducible users/audiences on the other. The potency of the piece lies in this gap, which is the noise that amplifies the operations of data-mining and dislocates the norm of our expectations about their accuracy and neutrality. Like Scourti’s pieces, the tactic here is to exaggerate the entangled state of the user and informatics to point to the omnipresence of such operations, but also to amplify the inevitable noise of that which cannot be reduced to information – often through foregrounding the limits of information’s ability to capture and prescribe the material presence of the human. Through exaggerating the ubiquitous act of data-extraction, by explicitly linking this process to the quantification of human subjects and the prescription of disinformation (or, the amplification of noise), my intention is to estrange what is otherwise the very mundane ICT operation of capturing, representing, and prescribing the users’ behaviours and attributes.
What is hopefully accomplished through such estrangement is the emphasis that the extraction and prescription of information is a politically-fraught act, and furthermore, that the very process of abstracting from the embedded user to create information is inadequate at representing the embodied contexts and performative materials of the world. The original French title of Foucault’s book (1966) is Les mots et les choses – the words and the things – and as Barad (2007) decries, “representationalism … separates the world into ontologically disjunct domains of words and things” (p. 137). For Barad, words, like all other representations, cannot simply correspond to and represent things of the world like some self-contained and self-sufficient category with stable boundaries and properties, as if a knowing subject can come to know the (presumably stable) worldly objects in a detached manner. “This account refuses the representationalist fixation on words and things … advocating instead a relationality between specific material (re)configurings of the world … and specific material phenomena” (p. 139). Phrased differently, words, like information, are also material things. As such, information’s claim to represent the world in a static, abstract, and decontextualized manner has to be questioned. To echo Hayles’ (1999) and O’Neil’s (2016) points, representations (such as information) become pernicious when one does not realize they are reductions – models, proxies, heuristics – of the world’s noisy multiplicity and instead relies on them as the lens through which to make sense of the world. Artworks that create the pedagogical conditions to destabilize such lenses, to insist on the excess and multiplicity of the material world in the face of informatics, and to consider some unforeseeable configurations beyond the logic of commensurability, are vital, as we move even further into digitality and the sovereignty of data.

References

Adorno, T. (2007).
Aesthetics and politics. New York, NY: Verso.
Agamben, G. (2009). “What is an apparatus?” and other essays. Stanford, CA: Stanford University Press.
Allen, J., & Søndergaard, M. (2016, June 3). Acoustic infrastructures [Conference session]. Sound Art Matters. Symposium conducted at Aarhus University, Denmark.
Alsina, P. (Interviewer), & Galloway, A. R. (Interviewee). (2007). We are the gold farmers [Interview transcript]. Retrieved from http://cultureandcommunication.org/galloway/interview_barcelona_sept07.txt
Ananny, M., & Crawford, K. (2016). Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability. New Media & Society, 1-17.
AND (Interviewer), Mattes, E., & Mattes, F. (Interviewees). (2015). Interview / Eva and Franco Mattes [Interview transcript]. Retrieved from Abandon Normal Devices website: https://www.andfestival.org.uk/blog/interview-with-eva-and-franco-mattes/
Aoki, T., Pinar, W., & Irwin, R. (2005). Curriculum in a new key: The collected works of Ted T. Aoki. Mahwah, NJ: Lawrence Erlbaum Associates.
Atkinson, D. (2008). Pedagogy against the state. International Journal of Art & Design Education, 27(3), 226-240.
Attali, J. (1985). Noise: The political economy of music. Minneapolis, MN: University of Minnesota Press.
Badiou, A. (2013). Philosophy and the event. Cambridge, UK: Polity Press.
Bal, M., & Gonzales, B. (1999). The practice of cultural analysis: Exposing interdisciplinary interpretation. Stanford, CA: Stanford University Press.
Bal, M. (2002). Travelling concepts in the humanities: A rough guide. Toronto, ON: University of Toronto Press.
Bal, M. (2003). Visual essentialism and the object of visual culture. Journal of Visual Culture, 2(1), 5-32.
Bal, M. (2007). Working with concepts. In G. Pollock (Ed.), Conceptual odysseys: Passages to cultural analysis (pp. 1-10). London, UK: I. B. Tauris.
Barad, K. M. (2007).
Meeting the universe halfway: Quantum physics and the entanglement of matter and meaning. Durham, NC: Duke University Press.
Barbrook, R., & Cameron, A. (1995). The Californian ideology. Mute, 1(3).
Barney, D., Coleman, G., Ross, C., Sterne, J., & Tembeck, T. (Eds.). (2016). The participatory condition in the digital age. Minneapolis, MN: University of Minnesota Press.
Barrowman, N. (2018). Why data is never raw: On the seductive myth of information free of human judgement. The New Atlantis: A Journal of Technology and Society. Retrieved from https://www.thenewatlantis.com/publications/why-data-is-never-raw
Bataille, G. (1988). The accursed share: An essay on general economy. New York, NY: Zone Books.
Baudrillard, J. (1998). The consumer society: Myths and structures. London, UK: Sage.
Beckett, S. (1954). Waiting for Godot: Tragicomedy in 2 acts. New York, NY: Grove Press.
Beer, D. (2009). Power through the algorithm? Participatory web cultures and the technological unconscious. New Media & Society, 11(6), 985-1002.
Bey, H. (1993). The information war. CTheory, 22. Retrieved from http://ctheory.net/ctheory_wp/the-information-war/
Bishop, C. (2012). Artificial hells: Participatory art and the politics of spectatorship. New York, NY: Verso Books.
Bolter, J. D., & Grusin, R. A. (1999). Remediation: Understanding new media. Cambridge, MA: MIT Press.
Bradley, A. (2011). Originary technicity: The theory of technology from Marx to Derrida. New York, NY: Palgrave Macmillan.
Brecht, B., & Willett, J. (1964). Brecht on theatre: The development of an aesthetic. New York, NY: Hill and Wang.
Brown, L., & Hilder, J. (2017). To refuse/to wait/to sleep: M&A. Vancouver, BC: Morris and Helen Belkin Art Gallery, UBC.
Buck-Morss, S. (1995). Envisioning capital: Political economy on display. In L. Cooke & P. Wollen (Eds.), Visual display: Culture beyond appearances (pp. 110-141). Seattle, WA: Bay Press.
Burdick, A. (2012). Digital humanities. Cambridge, MA: MIT Press.
Burnett, R. (2007). Projecting minds. In O. Grau (Ed.), Media art histories (pp. 309-337). Cambridge, MA: MIT Press.
Carr, N. (2012, May 4). The economics of digital sharecropping [Web log post]. Rough Type. Retrieved from http://www.roughtype.com/?p=1600
Castells, M. (2010). The rise of the network society. Malden, MA: Wiley-Blackwell.
Certeau, M. d. (1984). The practice of everyday life. Berkeley, CA: University of California Press.
Chen, A. (2014, Oct. 23). The laborers who keep dick pics and beheadings out of your Facebook feed. Wired. Retrieved from https://www.wired.com/2014/10/content-moderation/
Cole, A. (2015). Those obscure objects of desire: Andrew Cole on the uses and abuses of object-oriented ontology and speculative realism. Artforum, Summer 2015, 318-323.
Coleman, G. (2014). Hacker, hoaxer, whistleblower, spy: The many faces of Anonymous. New York, NY: Verso.
Collins, H. (2018). Artifictional intelligence: Against humanity’s surrender to computers. Cambridge, UK: Polity Press.
Compton, N. (2015, Feb. 10). ‘John Gerrard: Farm’ at London’s Thomas Dane Gallery explores the unfathomable proportions of modern technology. Wallpaper. Retrieved from https://www.wallpaper.com/art/john-gerrard-farm-at-londons-thomas-dane-gallery-explores-the-unfathomable-proportions-of-modern-technology
Cook, S. (2016). Information. London, UK: Whitechapel Gallery + MIT Press.
Cooley, A. (Interviewer), Ibghy, R., & Lemmens, M. (Interviewees). (2014). Richard Ibghy and Marilou Lemmens on non-doing in art [Interview transcript]. Retrieved from Canadian Art website: https://canadianart.ca/interviews/richard-ibghy-marilou-lemmens-non-art/
Cornell, L. (Interviewer), & Paglen, T. (Interviewee). (2018). Lauren Cornell in conversation with Trevor Paglen [Interview transcript]. In Trevor Paglen. New York, NY: Phaidon Press.
Correa, M.
(Interviewer), & Galloway, A. R. (Interviewee). (2015). The philosophical origins of digitality [Interview transcript]. Retrieved from Triple Ampersand website: http://tripleampersand.org/the-philosophical-origins-of-digitality/
Crawford, K. (2014). The anxieties of big data. The New Inquiry. Retrieved from https://thenewinquiry.com/the-anxieties-of-big-data/
Crawford, K., & Paglen, T. (2016). Artificial intelligence is hard to see: Social and ethical impacts of AI [Interview transcript]. Retrieved from http://opentranscripts.org/transcript/ai-is-hard-to-see-social-ethical-impacts/
Critchley, S. (2009, June 22). Being and time, part 3: Being-in-the-world. The Guardian. Retrieved from https://www.theguardian.com/commentisfree/belief/2009/jun/22/heidegger-religion-philosophy
Crocker, S. (2007). Noises and exceptions: Pure mediality in Serres and Agamben. CTheory. Retrieved from http://ctheory.net/ctheory_wp/noises-and-exceptions-pure-mediality-in-serres-and-agamben/
D’Alleva, A. (2012). Methods and theories of art history. London, UK: Laurence King.
Dean, J. (2003). Why the net is not a public sphere. Constellations, 10(1), 95-112.
Dean, J. (2005). Communicative capitalism: Circulation and the foreclosure of politics. Cultural Politics: An International Journal, 1(1), 51-74.
Debord, G. (1994). The society of the spectacle. New York, NY: Zone Books.
Deleuze, G. (1992). Postscript on the societies of control. October, 59, 3-7.
Deleuze, G., & Guattari, F. (1980/1993). A thousand plateaus: Capitalism and schizophrenia. Minneapolis, MN: University of Minnesota Press.
Deleuze, G. (1994). Difference and repetition. London, UK: Athlone Press.
Derrida, J. (1976). Of grammatology. Baltimore, MD: Johns Hopkins University Press.
Derrida, J. (1978). Writing and di