UBC Theses and Dissertations


S.P.A.C.E.S.: Socio-Political Adaptative Communication Enabled Spaces (2009)



Full Text

S.P.A.C.E.S. Socio-Political Adaptative Communication Enabled Spaces

by Roberto Calderon
B.Arch., Universidad Iberoamericana, 2005

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF SCIENCE in The Faculty of Graduate Studies (Interdisciplinary Studies)

THE UNIVERSITY OF BRITISH COLUMBIA (Vancouver)

August 2009

© Roberto Calderon 2009

Abstract

The Socio-Political Adaptative Communication Enabled Spaces (SPACES) research proposes a model for conceptualizing, understanding and constructing Cyborg Environments. A Cyborg Environment is an autopoietic system of inter-acting humans and space cyborgs – entities that have enhanced their senses through technology – based on a politics of action and embodiment that results in social systems that allow for communication to take place. The present document presents this conceptual model, its foundation in Architecture, Human Computer Interaction and Cognitive Science, and a set of experiments conducted to test its validity.

Table of Contents

Abstract
Table of Contents
List of Tables
List of Figures
Acknowledgements
Dedication
1 Introduction
  1.1 Contributions
  1.2 Structure of this document
2 Related work
  2.1 Architectural perspectives
    2.1.1 De-localized architectures
    2.1.2 Inter-acting architectures
  2.2 Human computer interaction perspectives
    2.2.1 Ubiquitous computing
    2.2.2 Spatial information
  2.3 Cognitive science perspectives
    2.3.1 Space as an experience
    2.3.2 Space as cognitive relationship
3 Cyborg environments
  3.1 Definition of a cyborg environment
    3.1.1 Definition of a cyborg space
  3.2 Models for a cyborg environment
    3.2.1 Cyborg space
    3.2.2 Cyborg communication
    3.2.3 Cyborg system
4 Experiments
  4.1 Similar experiments
    4.1.1 Space as stimuli
    4.1.2 Space as information
    4.1.3 Agent-human interactions
    4.1.4 Technologically enhanced built architectures
  4.2 A pilot study of interactive spatial perception
    4.2.1 Device description
    4.2.2 Measurements
    4.2.3 Experimental design
    4.2.4 Results
  4.3 Training wheel study
    4.3.1 Device description
    4.3.2 Measurements
    4.3.3 Experimental design
    4.3.4 Results
  4.4 Spatial effects of visual stimuli
    4.4.1 Device description
    4.4.2 Measurements
    4.4.3 Experimental design
    4.4.4 Results
  4.5 Spatial effects of directional light intensity
    4.5.1 Device description
    4.5.2 Measurements
    4.5.3 Experimental design
    4.5.4 Results
  4.6 SENA prototype
    4.6.1 Device description
    4.6.2 Measurements
    4.6.3 Experimental design
    4.6.4 Results
5 Conclusions
  5.1 Recapitulation of findings
  5.2 Implications
  5.3 Future perspectives
  5.4 Conclusions
Bibliography

List of Tables

1.1 Hypotheses and experiments conducted.
4.1 Initial pilot study, tasks to be performed by test subjects.
4.2 First pilot study, stimuli used.
4.3 Measurements.
4.4 Experimental design for human training wheel experiment.
4.5 Stimuli tested and their h-s-v characteristics used to test spatiality of visual stimulation.
4.6 Scale used to measure spatial size.
4.7 Biased sizes primed during pre-test.
4.8 Experimental design for spatial size of visual stimuli measurements.
4.9 Length of virtual space.
4.10 Directional stimuli spatial perception, configurations.
4.11 Stimuli definition.
4.12 Treatments applied.
4.13 Experimental design for the SENA experiment.
4.14 Percentages of total stimulus-action correlations across all subjects.
5.1 Theoretical model and experiments conducted.

List of Figures

1.1 Structure of this document.
2.1 Space perception loop.
2.2 Stimuli evoke a spatial experience.
2.3 Navigation process based on Jul and Furnas.
3.1 Evoking a spatial experience through stimuli.
3.2 Models for a cyborg environment.
3.3 Cube model for a cyborg architecture taxonomy.
3.4 Serres’s model of communication.
3.5 Diamond model for a cyborg’s communication ability.
3.6 Communication process between two cyborgs.
3.7 Citizenship based on the access to different stages of action.
3.8 Paired space and human perceptive loops.
3.9 Human and space encoded agent forming an organic interactive system.
4.1 General setting and dimensions.
4.2 Stimulus presented.
4.3 Initial pilot study, example stimulus.
4.4 Experiment flow.
4.5 Motion patterns of a subject according to each randomized stimulus presented for 120 seconds.
4.6 Arrangement of a female participant with tags over color cubes.
4.7 Two theories of control based on reflective consciousness manipulation.
4.8 Coordinate system.
4.9 Interactive coffee table and power key.
4.10 A participant interacting with the space.
4.11 Space model and alterations performed.
4.12 Experiment flow.
4.13 A participant interacting with the power key.
4.14 Recorded paths with a latency of 120 seconds of a test subject in the semi-immersive task group.
4.15 Location iterations for the x axis of a test subject in the immersive task group.
4.16 Means of total distance of movement in both bidimensionally and tridimensionally altered color spaces.
4.17 Means of total distance of movement in each bidimensionally and tridimensionally altered color spaces.
4.18 Means of distance from the new home location to the hypothesized displaced home location.
4.19 A user rating a specific visual characteristic.
4.20 Software used to train users in spatial perception with both color and non-color primers.
4.21 Experiment flow.
4.22 Linear regression on perceived spatial size of color value alterations.
4.23 Means of spatial size perceived with different peripheral light stimulations.
4.24 Head mounted display and virtual world.
4.25 Treatments applied (highly illuminated surfaces are represented by red planes).
4.26 Effects of directional light intensity. Motion analyzer software.
4.27 Spatial effects of directional light intensity. Experiment flow.
4.28 Analysis of a participant.
4.29 Spatial stimuli used by the system.
4.30 Experiment setup.
4.31 Emotion selector.
4.32 Interaction Bayesian network.
4.33 Communication Bayesian network.
4.34 Experiment flow.
4.35 Means of actions that correlate to one, and only one, stimulus.
4.36 Count of appearances of human actions related to each spatial stimulus.
4.37 Means of behavior states recorded during each stimulus, across all groups.
4.38 Means of total count of overall motion high appearances.
4.39 Means of total count of overall position right appearances.
4.40 Means of high motion related to high valued stimuli.
4.41 Means of high motion related to low valued stimuli.
4.42 Effects of a-priori beliefs of an interactive system.
4.43 Means of accurately decoded messages.
4.44 Agreement on message transmission and decoding ease.

Acknowledgements

This research would not have been possible without the invaluable support of my supervisory committee. I am deeply grateful to Sidney Fels from the department of Electrical and Computer Engineering, Oliver Neumann from the department of Architecture and Lawrence Ward from the department of Psychology.

Dedication

To my hero, Martha Arámburu. To my inspiration, Cécile Beaufils.

Chapter 1 Introduction

Digital technology has become so ubiquitous that the very essence that makes our societies has begun to depend on the capabilities of the computing machine. We rely on the power of computers to do business, defend our nations or prevent worldwide catastrophes. In his article Transmitting Architecture: The Transphysical City, Marcos Novak writes that

...Cyberspace as a whole, and networked virtual environments in particular, allow us to not only theorize about potential architectures informed by the best of current thought, but to actually construct such spaces for human inhabitation in a completely new kind of public realm...[50]

Novak envisions an Architecture of cyberspace based on global networks and human interaction. His “liquid spaces” – molded by algorithms that interweave and interact when human variables exert force on them – are informational clusters that allow for human interaction that escapes the physical world.
Within this new realm, the designer stops being interested in the physical component of the architectural structure and focuses on the definition of variables that build coherent self-regulating and self-constructing architectural systems. The materiality of glass and concrete succumbs to information, while Architecture remains, grows and mutates to this new kind of spatiality. In Novak’s words,

This [cyberspace] does not imply a lack of constraint, but rather a substitution of one kind of rigor for another. When bricks become pixels, the tectonics of Architecture become informational. City planning becomes data structure design, construction costs become computational costs, accessibility becomes transmissibility, proximity is measured in numbers of required links and available bandwidth. Everything changes, but architecture remains.[50]

Architecture has always been the solidification of culture and the discipline of construction. Over the centuries architects have discussed and built the spatial realities where we perform our soliloquies and social interactions. Nevertheless, contemporary technology has radically affected how we interact with our world and other human beings. We have become inhabitants of the de-localized city and the enhanced living; our cities are becoming less dependent on their topography and humans are becoming self-designed cyborgs.1

What is the role of architectural practice in this reality? What will the Architecture of tomorrow be like? But above all, how should the architect of today and tomorrow address a society no longer in need of built spaces?

The Socio-Political Adaptative Communication Enabled Spaces (SPACES) research envisions a world where humans cohabit with their environments as they do with other human beings. It foresees an organic and complex correlation between humans and built spaces that can perceive the world and consciously act upon it.
This ideal forms the foundation of a new paradigm of Architecture that centers on the social, political and adaptative nature of an environment with the right to become a citizen of our societies. Five definitions form the core of such understanding:

• Social: A spatial entity able to interact with its inhabitants should become part of a social system of inter-action formed by, at least, itself and a human inhabitant.

• Political: Interactivity between entities with distinct action capabilities renders the need for a political structure – citizenship – based on the abilities to inter-act.

• Adaptative: Action-based and social entities can only arise through the adaptability that characterizes living things. Memory, cognition and use of control render an entity as adaptable.

• Communication: The inter-active social definition of these spaces is based on their capability to communicate meaning.

• Space: These entities are conceived as spatial experiences.

These concepts fit within an understanding of Architecture as an informational system that can be de-localized[7] and fragmented into the networked[62] nature of our present world. Such an understanding of Architecture as a relationship between humans and mutating interactive environments[6], conceptualized as prostheses of the human body[15] and the city[55], has already promoted built architectures that can be conceived as interfaces[33] to an informational realm, or prosthetic enhancements to everyday living[37]. Nevertheless, the SPACES definitions have attempted to expand this knowledge and recognize built space as a purely informational and cognitive process dependent on a complex system of co-relationships between human behavior and spatial perceptions driven by rough artificial intelligences.

1 These concepts will be discussed further in the sections that follow.
The present investigation has conceptualized an architectural entity that embodies humans, creates a social connection with them and acts intelligently on its perceived reality – i.e. its inhabitants.

By modeling this new paradigm we are achieving a deeper understanding of our technologically enhanced world and allowing for a more human implementation of such enhanced built environments. Above all, the present research aims at raising awareness of architectures that are both interfaces to sociality and active participants of a cyborg society. This should trigger a deeper discussion about self-modifying environments that embody and are embodied by humans in an endless process of social co-adaptation.

The following sections of this chapter present the main contributions of the investigation and the structure of this thesis.

1.1 Contributions

The present exploration has attempted to create a model of space that is coherent with the needs and technological knowledge of our contemporary societies. The present research has been founded on two concepts that have become widespread in our contemporary world:

• Spatial Experience: Space is a collection of cognitive processes that interact among themselves to create a spatial experience of the world. Therefore, space is not dependent on the objects in the real world, but on the relationships between cognitive events that arise when stimuli are perceived.

• Cyborg: A cyborg is an entity that has enhanced its natural abilities through prosthetic enhancements. There exist human cyborgs, humans that have been enhanced through any pharmaceutical or electronic technology, and space cyborgs, spaces that have been enhanced to perceive, synthetically undergo a cognition process, and act upon the world.

The present investigation has assumed that a human spatial experience is cognitive and independent of the objects that evoke it, and thus that a space can perceive a ‘space’ of its own if enhanced with a synthetic cognitive ability.
Such a space and its inhabitants form an autopoietic system of prosthetic nature, i.e. cyborg nature, in which its parts communicate, socialize and share control within a political structure of inter-action. This has allowed the present investigation to conceptualize a system of inter-relationships between a space enhanced with cognitive abilities and human beings, or a Cyborg Environment:

• Cyborg Environments: Both humans and cyborg spaces2 are entities that undergo spatial experiences that depend on one another, i.e. space cyborgs evoke spatial experiences in humans while humans evoke spatial experiences in space cyborgs. When paired, these two beings form self-regulating systems called Cyborg Environments that are based on perception and action, and that allow for sociality to arise. For instance, if one of the entities is altered the other entity suffers a correlated alteration that can be defined as social. Furthermore, these inter-action systems allow for meaningful communication to take place outside their self-regulatory relationship. Successful communication depends on a previously agreed semantic arrangement of actions, i.e. a code, that can be used to transmit a message.

The conceptualized model is based on the definition of a space enhanced with cognition, i.e. a cyborg space, its communication abilities and the systems formed of cyborg spaces and humans:

• Cyborg space: Embodiment, enclosure and alteration capabilities of a cyborg space.
• Cyborg communication: Communication capabilities of a cyborg space.
• Cyborg system: Self-regulatory (autopoietic) systems formed of cyborg spaces and humans.

This model has been validated by testing six hypotheses:

1. A cyborg space can embody humans and humans can embody space,
2. A cyborg space, i.e. an enclosure, can be evoked through stimuli,
3. The perception of a cyborg space can be manipulated by altering the stimuli that evoke it,
4.
Cyborg communication depends on an agreed code that lies outside the interactive process,
5. Cyborg systems depend on action,
6. Cyborg systems are autopoietic, i.e. self-regulating networks of relationships.

2 Cluster of stimuli that evokes a spatial experience of prosthetic and informational nature (within cyberspace) that forms a self-regulating network of relationships between itself and its inhabitants by being able to undergo spatial experiences of its own.

One pilot study and four experiments were conducted during a span of two years. Each one of the four experiments was designed to test the presented hypotheses according to table 1.1.

Hypothesis                                            Experiment(s)
Cyborg space can embody humans.                       Training Wheel Study.
Enclosures can be evoked through stimuli.             Effects of visual stimuli.
Enclosures can be altered through stimuli.            Effects of visual stimuli; Effects of directional light.
Cyborg communication lies outside the inter-action.   SENA prototype.
Cyborg systems depend on action.                      SENA prototype.
Cyborg systems are autopoietic.                       SENA prototype.

Table 1.1: Hypotheses and experiments conducted.

The first pilot study, A pilot study of interactive spatial perception, studied the behavioral and psychological effects of mutating spaces. Measurements of attention, immersion, motion patterns and gender were performed.

The second experiment, Training Wheel Study, measured the embodying capabilities of a changing and ‘perceiving’ space – a software agent controlling a visual characteristic space according to human movement. Measurements of immersion, motion patterns and embodiment were done.

The third experiment, Spatial effects of visual stimuli, measured the spatial capabilities of various color stimuli presented peripherally to human subjects. Measurement of spatial size perceived by human beings was performed.
The fourth experiment, Spatial effects of directional light intensity, measured the spatial and behavioral effects of various directional light intensities. Measurements of kinesthetic movement, head rotations and subjective perception of space were done.

The fifth experiment, SENA prototype, explored a simple implementation of a Cyborg Environment formed of a cyborg space and humans. Measurements of interactivity, stimuli-behavior correlation, message transmission, sociality, aesthetic perception and a-priori knowledge biases were performed.

The final contribution of the SPACES research is of a methodological nature. The experiments conducted in the present investigation have shown that a user-centered approach to spatial behaviors can and should be included in architectural practice. The general approach to Architecture as a static process of creation – i.e. user screening, construction and post-construction validation – can be replaced with an approach to design founded on user studies that investigate human perception and behavior through simple testing prototypes. This allows for the creation of concrete and replicable models that can be applied to several stages of architectural design.

Furthermore, the increasing flexibility and interactivity of built environments should allow for this process to be carried out throughout the lifespan of a building, enhancing it and altering it to the changing needs of its inhabitants. This proposed methodology is analogous to software development, where design is centered not on the hardware of the specific architecture of the computing machine, but on the human usage of such infrastructure. In other words, Architecture should focus on the development of states and interfaces that a specific building can provide, rather than on the specific structure that promotes such events; this can only be achieved through constant user-centered validations of such interactions.
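The code-dependence claim of hypothesis 4 – that cyborg communication relies on a semantic code agreed upon outside the interaction itself – can be sketched in a few lines. The following Python toy is not part of the SPACES or SENA software; its stimulus names and meanings are invented purely for illustration:

```python
# Hypothetical sketch (not thesis code): a message passes between a cyborg
# space and a human only when both sides hold the same previously agreed code.
# Stimulus names and meanings below are invented for illustration.

AGREED_CODE = {
    "red_pulse": "come closer",   # stimulus -> meaning
    "blue_fade": "move away",
    "dim_lights": "rest",
}

def encode(meanings, code):
    """Turn a message (a list of meanings) into spatial stimuli via the shared code."""
    inverse = {meaning: stimulus for stimulus, meaning in code.items()}
    return [inverse[m] for m in meanings]

def decode(stimuli, code):
    """Interpret perceived stimuli; stimuli outside the code decode to None."""
    return [code.get(s) for s in stimuli]

message = ["come closer", "rest"]
stimuli = encode(message, AGREED_CODE)

# With the shared code the message survives the round trip...
assert decode(stimuli, AGREED_CODE) == message
# ...but a receiver holding a different code cannot recover it.
assert decode(stimuli, {"green_glow": "come closer"}) == [None, None]
```

The point of the sketch is only that decoding fails as soon as sender and receiver stop sharing the code; in the SENA prototype the role of the code is played by agreed correspondences between spatial stimuli and meanings.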
1.2 Structure of this document

The present document is divided into four parts, presented in figure 1.1.

The first part, Introduction, contains chapters one and two. Chapter one presents the need for a new architectural perspective and introduces some concepts that will be developed later in the document. Chapter two presents related work that has been done in the fields of Architecture, Human Computer Interaction and Cognitive Science and that has served as a foundation for the conceptualization of Cyborg Environments.

The second part, Cyborg Environments, is formed of chapter three. This chapter presents the concept of Cyborg Environment and proposes a model for understanding and creating such entities. The model is based on the definition of a cyborg space, its communication abilities, and the autopoietic systems that result when humans and cyborg spaces interact.

The third part, Experiments, is formed of chapter four. Chapter four introduces similar experiments that have been conducted by other researchers and presents the experiments that were performed as part of the present research. Each experiment explores and tests one or several concepts proposed by chapter three.

Finally, the fourth part, Conclusions, is formed of chapter five and presents the implications of the experiments so far conducted and gives future perspectives on the topic of Cyborg Environments.

Figure 1.1: Structure of this document.
Chapter 2 Related work

2.1 Architectural perspectives

In a short article entitled Virtual Architecture - Real Space, Hani Rashid declares that the present society is in “the very early stages of a digital revolution whose direction we will not be certain of for some time”[57]. Affordable personal computing systems have become widespread, and their effects on society are strong. We live in a global village[44] of increasing interactivity and complexity[51]. What is the role of Architecture in this changed world? Schumacher believes that

Architecture has to react to societal and technological changes. It has to maintain its ability to deliver solutions. But its very problems are no longer predefined [today]. In fact, these problems are themselves a function of the ongoing autopoiesis of Architecture. Architectural experimentation has to leap into the dark, hoping that sufficient fragments of its manifold audience will throw themselves into architecture’s browsing trajectory.[62]

The current fragmentation of architectural practice into diversified and multifaceted practices makes an introduction to architectural thinking a task with a high risk of producing extremely simplified results. Nevertheless, a general taxonomy of the work that is relevant to the present research is needed. Two main architectural thought clusters have served as a foundation for the theorization of socio-political adaptative communication enabled spaces:

• Non-localized: Architectures focused on the de-localization of Architecture into informational networks.
• Inter-acting: Architectures exploring the inter-active3 capabilities of space as a real-time mutating entity.

3 The word inter-action is used in this document to emphasize the correlation between actions belonging to different entities within an interactive system.

2.1.1 De-localized architectures

The virtualization – conceptualization as information – of reality that has resulted from the informational revolution of our era has changed the way we understand and act upon our world. As Lucy Bullivant puts it, even “war has come to be fought and projected virtually as well as physically; commerce relies on the fourth dimension of the spatialization of time achieved through dislocated virtual connectivity.”[4] Such a “redefinition of human relations”[4] has resulted in a de-localization of Architecture into the informational networks of our present societies.

This de-localization of reality into communication is evident in Ben Rubin’s Listening Post structure. “Anyone who types a message in a chatroom... is calling out for a response”[7], Ben Rubin says. In response, he has created a structure that builds a visual/sound scape from a torrent of endless communication activity. His ephemeral work represents network communication as a tectonic matter that can be used to construct architectural space. In other words, “pliable and responsive digital environments potentially constitute specific new types of structures raising the haptic and intuitive threshold of public and private space.”[4]

Moreover, since the city itself can be conceived as an informational entity, it is possible to conceptualize an architectural network that can be spatially modeled. Contemporary efforts like Neil Leach’s swarm tectonics, which crystallize over a site, or veech media architecture’s vma-mobile environments, which aim at the “implantation of a temporary architectural environment into the urban setting which could activate and polarize the urban citizens”[62], demonstrate this conceptualization. This transparency between real (the city) and virtual (the network) has been carried into further explorations of informational and interactive nature.
Usman Haque’s Sky Ear explores the disruption of the “perceptual boundaries between the physical and virtual by encouraging people to become creative participants in a Hertzian performance, allowing [them] to see [their] daily interactions within the invisible topographies of Hertzian space.”[9] By releasing a colorful structure into the air, participants of this architecture were able to extend their senses into the invisible waves of telecommunications, generally hidden to the naked eye.

MIT’s Media House project is the architectural translation of such immateriality. Usually described by stating that the house is the computer, the structure is the network, the entity leaves behind any anthropomorphic or aesthetic definition in order to be “identifiable in social, psychological and sensorial dimensions”[8] and modeled and manipulated through LAN (Local Area Network) definitions. Concrete and glass are replaced by bits that can be translated into sensations combined and presented by different types of surfaces: screens, light, air, temperature.

The control of such informational bits is the main concern of these architectures of the intangible. Somlai-Fischer’s interactive spatial installations “talk about a new relationship between technology and design, in which the role and effect of technology reveals a more profound relation between design and design tools”[5] and exemplify the main objectives of what will later become inter-activity.

Networked relationships and their communicative channels form the language of these architectures. Architecture is understood as a network of human somatosensory experiences that can be evoked through the architectural entity according to the system needs. Jason Bruges, for example, “...redefines the role of the architect as maker of responsive environments”[6].
In his Memory Wall for the Puerta America Hotel in Madrid, Bruges uses a collection of computer vision and action algorithms to create a system where

The motions of the individuals inside [the space] act as a catalyst for an ambient visual projection, in which motion and form are captured, filtered and projected onto the wall surfaces in a continuous loop, with memories of the day building up on them.[6]

For Bruges, his “compositions are not complete without the interaction of an individual. . . [because] each person experiencing one of [his] works will have their own unique memory of it”[6]. This inter-relationship between a system’s state, human state and perceptual memory is the ultimate result of a de-localized Architecture.

2.1.2 Inter-acting architectures

Ubiquitous computing and mobile communication technologies have placed the emphasis of interaction on connectivity rather than on location-dependent structures. Place, as a point for inter-action, has been supplanted by the concept of interface or connection point. In this sense, Architecture as a built structure “becomes an embarrassment; it slows things down and moves attention away from [connectivity]. At the very least one could say it marginalizes... [the ability to] . . . connect to anybody, anywhere, any time through all the senses. . . ”[3]

However, according to Diller and Scofidio, Architecture can become a part of this liveness. For them, liveness is “the mechanism of interactivity that originated in broadcasting, where electronic news is the instantaneous relay of the world”[55] and thus a state reproducible through other media, e.g. space. In their words,

Real-time is key. Lag time, delay, search time, download time, feedback time are unwelcome mediations of liveness. Real time is the speed of computational performance, the ability of the computer to respond to the immediacy of an interaction without temporal mediation.
Un-mediated means im-mediate. But whether motivated by the desire to preserve the real or to fabricate it, liveness is synonymous with the real, and the real is an object of uncritical desire for both techno-extremes [technophilic and technophobic].[55]

Changing form or providing the inhabitants of their architecture with an augmented reality, Diller and Scofidio’s work from the 1990s responded to an ever-changing environment of interactions between human action, meaning and form through the manipulation of information. For them, Architecture was categorized according to its ability to produce and manipulate information. Furthermore, all activity in architectural space was conceived as inter-connection between humans and their surrounding space. In other words, between architecture and human flesh, or “the outermost surface of the body bordering all relations in space”.[15]

A similar approach was taken by Oosterhuis’s Polynuclear Landscape from 1998. Polynuclear Landscape is a programmable surface, a flesh or skin, that interacts in real time with its inhabitants. In this project Oosterhuis “. . . aims at designing a building body displaying real time behavior”[54]. This inter-active process is extended to both creators and users of the architectural entity: “co-designers and co-users work in real time both during the design process and in the life-cycle of the building body, the process never stops and leads to a mature time-based architecture.”[54]

Here, Architecture “engages and creates adaptive and tactile physical environments which surround and envelope our physical and conceptual bodies... result[ing] in a seamless integration of information, technology and its users, generating an endlessly infinite sensitive surface... [a] liquefaction of the post-urban environment.”[55]

This fluid nature of an Architecture preoccupied with temporal nature and liquidity had been depicted by Novak’s work in the 1980s and 1990s.
The hybrid nature, both real and virtual, hard and soft, fast and slow, of his projects is often described as “extreme provocation”[73]. Nevertheless, they crystallize a data-driven phenomenon centered in fluidity and departing from structure. Architectural “form follows neither function nor form”[73], but the complex tensions that arise within human-computer inter-actions.

2.2 Human computer interaction perspectives

Human Computer Interaction (HCI) studies focus on providing human-centered interface solutions to computing machines. However, computing systems are becoming diluted in our environment and their interfaces are turning more physical. As a result, HCI knowledge has become of a spatial nature and focused on the interaction between humans and their environments.

A categorization of some of the most important experiments relevant to the present research, done as Human Computer Interaction investigations to address said issues, follows. An attempt to be both concise and broad in the selection of examples clearly exposing the understanding of space and spatial relationships in HCI thought has been made by proposing the following categorization:

• Ubiquitous: Explorations of the spatial nature and social capabilities of an environment formed of computing machines.

• Spatial: Explorations of the physical, kinesthetic and haptic properties of spatial manipulation of information.

2.2.1 Ubiquitous computing

Ubiquitous Computing is a term coined by Mark D. Weiser[69] to describe a future where computing machines are hidden in everyday objects and interaction with them is seamless. The term does not allude to the standardization of the machine, but to its replication into a multitude of forms interacting among themselves to create an environment of computers.
The conceptualization of computing interfaces as spatial entities has been carried into the possibility of their inclusion in social and public systems of human interaction. Research by Eriksson et al. on public space enhanced by collaborative technology is one important example. Preoccupied with the fact that public space is becoming governed and purpose-oriented, and that private devices are effective isolators of the individual, the group has searched for ways to transform private devices into human-to-human interactive interfaces within public domains. The group proposes a democratic space where

. . . public users should be able to change [information]... towards a situation where the public can expose, comment and edit elements of the public space. Thereby, the space is formed and shaped by people passing by and not only by mimicking commercial interests.[23]

This conceptualization of space “created not only by the physical space, but more by the people present”[23] is built on the idea of the market place, i.e. a space where people are “able to come to. . . with their goods, trade, look around, play games, talk to each other, pick up stuff and leave again.”[23] The market place allows a democratization of space, generating uncontrolled and parallel social interactions otherwise inexistent.

This work predicts an era of community-oriented collaborative interfaces provided by architectural spaces, and the understanding of space as a networked informational interface. As Kerne writes, “an interface functions as means of contact, a border zone, a layer hosting exchange, a nexus where resources such as information and power are circulated and transformed, a channel through which interactors communicate, or a conduit for message passing.”[40]

2.2.2 Spatial information

Space cognition strongly depends on our interaction with space.
Because “natural viewing situations are usually kinetic, involving a moving observer and moving objects”[22], a wide range of perceptions take place in human environmental perception. Using vision we find figures and weights; through sound and smell we find recognizable ambients; and, if within reach, touch and taste allow us to intimately explore our surroundings. Haptic perception, provided by touch and kinesthetic movement, has proven to be strongly related to such spatial exploration and cognition. According to O’Neil,

People gain environmental understanding from tangible physical experience, from coming in contact with natural and built elements, and from moving through spaces, as well as from seeing objects in space . . . when reinforced with our visual perception these holistic systems form our phenomenological understanding of the environment so that the whole sensory envelope creates in us the sense of spatiality.[53]

Virtual reality has its roots in the human ability to represent reality in terms of a conceptual expression. Representation of three-dimensional objects began with mathematical expression, which evolved into the geometrical definitions that gave rise to the perspectival representation of the spatial world. The computer allowed perspectival computation to be done at high speed and thus created real-time exploration of a virtual (inexistent) reality. However, the disparity between the represented images that stimulate the eyes of a person experiencing virtual reality and the other perceptual senses used in space perception – haptic or kinesthetic – results in what is known as “partial immersion” and a sense of a falsified reality, an artifice. The work by Galyean et al. in 1991 investigated an intuitive tool for sculpting virtual volumes that tried to solve such “partial immersion”.
Haptic force and directional resistance, generally present in the real world, were achieved by the poor man’s force feedback unit, providing resistance mimicking real object alteration. This understanding of virtuality as the provision of stimulation resulting in a perception of reality components – light or forces – became the foundation for future explorations of spatial perception. Chen et al., for example, argue that

. . . interactive data exploration in virtual environments is mainly focused on vision-based and non-contact sensory channels such as visual/auditory displays. In addition to seeing and hearing, enabling users to touch, feel and manipulate virtual objects in a virtual environment provides a realistic sense of immersion in the environment that is otherwise not possible.[11]

Humans relate to space by visual, kinesthetic and haptic exploration. By moving through it we are able to explore its characteristics and form an accurate model of our surroundings. The Radial Arm Maze is an important paradigm in the study of space perception, introduced by David Olton in the 1970s [17]. The paradigm has helped analyze a wide range of spatial issues like “natural foraging behavior, short and long-term memory, spatial and nonspatial memory, working and reference memory, drug effects on behavior, ageing effects, strain and species differences in spatial competence, and strategic choice behavior.”[26] Furthermore, it has also helped analyze spatial experience in contemporary virtual reality[56] by providing valuable information on correlations between different brain areas and human motion in space. It is now known that kinesthetic exploration is an important factor in the creation of appropriate mental maps of the space that surrounds a human body.

2.3 Cognitive science perspectives

Spatial perception is the result of a complex system of perceptions and brain computations that results in a spatial experience.
Humans are equipped with sensors that gather information about both their body and their environment; this information is then structured into a coherent form by the brain, resulting in the experience humans call space.

2.3.1 Space as an experience

In his book Visual Space Perception, Maurice Hershenson describes the concept of the perceptual world as the foundation of any visual spatial cognition. In his words,

. . . the physical world exists outside the observer. The perceptual or visual world is experienced by the observer. It is produced by activity in the eye-brain system when patterned light stimulates the eyes of an observer.... The perceptual world is normally externalized, i.e., it is usually experienced as ’out there’. . . [22]

Spatiality is a cognitive process depending purely on the observer and detached from the object that evokes it. All objects are sources of information – stimuli; when these excite the sensory apparatus of a human being they have a probability of promoting a spatial experience. Not all objects evoke a spatial experience, but all objects perceived are located within a spatial experience.

For example, some of an object’s properties stimulate the retina of an observer, who in turn undergoes a cognitive process to identify the perceived stimuli as an object. Some of the object’s characteristics – color, size or position – help the human observer to categorize and structure his or her relationship to the perceived object. With a collection of several objects, in addition to the object previously perceived, the subject can achieve an enveloping sensation commonly known as an environment. That is, the observer considers himself or herself positioned in an envelope formed of interrelated objects – stimuli producers. This enveloping sensation is what will be called in this study a Spatial Experience.
This Spatial Experience is formed of what Nigel Foreman and Raphael Gillet consider egocentrically encoded space, or “discrimination of a spatial locus with reference to the body midline or vertical visual meridian,”[26] and a further cognitive process requiring a higher level of cognition to construct a model of the world through “memory for inter-relationships between objects in a more global spatial framework”[26], or what Foreman and Gillet name allocentric encoding and navigational spatial skill. Both processes work together to deliver a Spatial Experience to a perceiving being.

Such an experiential process is then linked back to the human sensory system, allowing for verification and calibration of the constructed mental model through kinesthetic, haptic and sensorial – e.g. audible stimuli corresponding to the same perceived object – perceptions. The outlined process (perception, spatial experience and calibration) will be named in this study a Space Perception Loop and is depicted in figure 2.1.

Stimuli are gathered continuously by the perceiving entity through its senses. By considering the senses as static systems adding little or no information to the stimuli perceived, we can hypothesize that certain stimuli can repeatedly excite the appropriate cognitive apparatus that in turn promotes a significant spatial experience. The spatial experience is separated from the object by at least three processes: the channel4, the senses and the human cognitive apparatus. Space is therefore not ‘outside’ and dependent on the objects that appear to form it, but ‘inside’ and dependent on the cognitive processes that evoke it. Space is an experience and depends on a cognitive process (figure 2.2).

4A channel is here conceptualized as the medium through which a collection of stimuli arrives at the human senses.

Figure 2.1: Space perception loop.

2.3.2 Space as cognitive relationship
Robert Harnish defines cognition as “the mental ‘manipulation’ (creation, transformation, deletion) of mental representations.”[30] For a human to undergo a spatial experience, a cognitive process is necessary. The information that is gathered through the senses is processed by the brain to fit a model of relations, a mental model. When perceivers – humans – gather information from the world, they arrange their perceptions into cognitive relationships. By comparing different stimuli, the brain can create models that can be used for ongoing or future processes related to its interaction with reality.

One of these processes is evident in how humans navigate their environment. Human beings have to gather information through their senses and use it in a navigational process of relationships in order to achieve their expected goals. Hirtle and Sorrows, in their article Navigation in Electronic Environments, assert that “many of the same cognitive principles that are important in navigation of physical spaces are also involved in navigation in electronic environments.”[32] Their model (figure 2.3) outlines the mental process that people undergo during spatial tasks. Centered on data acquisition and processing, the model bears a strong similarity to human interaction models, especially those promoted by Donald Norman in Human Computer Interaction.[49]

Figure 2.2: Stimuli evoke a spatial experience.

This understanding of spatial navigation as data management allows for the conceptualization of space as an information-based cognitive process of relationships between stimuli. Because neither perceivers nor stimuli are static, changes in either affect such inter-relationships and result in a perception of the world as four-dimensional. Accordingly, there exist two types of stimuli changes, or alterations, that result in spatial model updating.
• Stimuli: Stimulus-based changes are produced by modifications in stimulus characteristics. Altering one or several characteristics of a single stimulus obliges a reconsideration of the mental model that said stimulus forms part of. Reality is a complex collection of stimuli that can be described in simpler characteristics (e.g., for a visual stimulus: hue, saturation and value).

• Perceiver: Perceiver-based changes are produced by alterations in the stimulus observer. Changes in point of view, perceptual capabilities or brain lesions (e.g. hemispatial neglect) result in mental model updating.

Figure 2.3: Navigation process based on Jul and Furnas.

Chapter 3

Cyborg environments

Asymptote –Hani Rashid–’s definition of the E-gora well describes the increasing influence of contemporary technology on the cultural, economic and political components of our society. The E-gora is:

. . . [a] globally accessible non-place... [consisting] not only of Internet space and its substrata of e-mail, chat, virtual reality markup language (VRML), and CUSeeMe5, but also those familiar territories such as public access television, C-SPAN, court TV, and even the voyeuristic spectacles of ‘caught on tape’ programming that are so influential in the vast-electro-sphere we now call a community.[57]

Because of the increasing inclusion of communication technology and ubiquitous computing in our world, the spaces of the future will be based on the same blocks that define the E-gora. Architecture in a world of communication becomes dependent on the immateriality of its connectivity definition. Tectonics – the visible characteristics giving expression to the relationship between structure and force[63] – become dependent on forces of inter-activity that expand in various directions.

An example of this conceptualization of Architecture as an informational entity is veech media architecture’s ORF Schauplätze der Zukunft project from 1999.
Their structure proposes a “simultaneity between virtual and real [by allowing] the user an opportunity to actively alter his/her environment and communicate these modifications to a third party via invisible transmission.”[62] The real inhabitancy of such architectural space is not the structural beams that mark the location where the environmental transformation occurs, but the interactive process that arises between the experient of such alteration and the receiver of the sent modifications.

The participants of these architectures become not only inhabitants of a built structure, but symbiotic parts of the architectural system. Because these structures have been conceptualized as informational clusters, the humans that inhabit them become embodied parts of these de-territorialized and interacting architectures. In other words, these architectures can be conceived as systems of co-embodying entities – space and human – that interact between themselves to generate, manipulate and transmit information in a coherent way.

5Video conferencing technology.

3.1 Definition of a cyborg environment

A cyborg is an entity that has used biological or electronic technology to enhance its natural definition. The word cyborg was first used by Manfred Clynes in an article called Cyborgs and Space[12]; it is formed from the words cybernetic and organism, and depicts the radical technological enhancements needed by astronauts to survive in non-natural environments like outer space. Gray defines a cyborg as a

. . . self-regulating organism that combines the natural and artificial together in one system. Cyborgs do not have to be part human, for any organism/system that mixes the evolved and the made, the living and the inanimate, is technically a cyborg. This would include biocomputers based on organic processes, along with roaches with implants and bioengineered microbes.[28]

In this sense, a cyborg is an entity that has chosen to extend its abilities through prosthetic entities that, in a strong manner, challenge the natural evolution of its body. A prosthetic and reconfigured body, a cyborg, cannot conceive its reality as static. For this organism, the ambient can be re-codified and personalized in a multiplicity of suitable alternatives, depending on the prostheses used to interact with it. The cyborg itself can accelerate, or decelerate, its adaptation to any environment through technology. It is the original definition of a cyborg by Clynes and Kline that states that “humans could be modified with implants and drugs so that they could exist in space without space suits.”[28]

Donna Haraway writes in her Cyborg Manifesto that

High-tech culture challenges these dualisms [organism/machine, mind/body, animal/human, energy/fatigue, public/private, nature/culture, male/female, primitive/civilized] in intriguing ways. It is not clear who makes and who is made in the relation between humans and machine. It is not clear what is mind and what is body in machines that resolve into coding practices. In so far as we know ourselves in both formal discourse (e.g., biology) and in daily practice (e.g., the homework economy in the integrated circuit), we find ourselves to be cyborgs, hybrids, mosaics, chimeras. Biological organisms have become biotic systems, communication devices like others. There is no fundamental ontological separation in our formal knowledge of machine and organism, of technical and organic.[29]

This lack of distinction between machine and organism, or space and inhabitant, has created the need for architectural entities of a prosthetic nature.
Due to the fact that cybernetic entities are seamless and self-regulating organisms that can be enhanced at any moment, inter-action with them is only possible through a process of adaptative cyborgization, by becoming part of their prosthetic extensions. In other words, in order to attend to the volatile nature of the increasing number of human cyborgs, architectural space has to become a cyborg. According to Teyssot,

. . . the first task architecture ought to assume, therefore, is that of defining and imagining an environment not just for ‘natural’ bodies but for bodies projected outside themselves, absent and ecstatic, by means of their technologically extended senses. Far from assimilating the tool with the body according to the mechanistic tradition of Cartesian dualism, we must conceive tool and instrument ‘like a second sort of body, incorporated into and extending our corporal powers.’ [Leder, The Absent Body, p.34]”[15]

Similar in nature, both our bodies and their environment can be paired and connected to create a two-way cyborg system of ontological coupling[34]. Architecture becomes the extension of the body and the body becomes the extension of architecture, i.e. prosthetic. According to Leder, “incorporation is what enables us to acquire new abilities - these abilities can settle into fixed habits. As time passes, these repeated habits are definitely ‘incorporated’ and disappear from our view. They become enveloped within the interior of a body-structure”[42]; they become prosthetic.

This process of embodiment causes the body’s limits to “literally delaminate into the multiple surfaces and interfaces of cyberspace... [the body then undergoes] a mutation, becoming a living (and thus dying) machine.”[15] Herein lies the purpose of spatiality in a world of cyborg prosthetics: embodiment of the world.

A Cyborg Environment is a conceptual model that addresses this prosthetic relationship.
It is formed of humans and cyborg spaces that inter-act in a self-regulatory manner through their perception and action capabilities. The system that arises from this inter-action is based on a social structure of co-adaptation that allows for meaningful communication to take place if a code, outside the co-adaptative inter-action, is agreed upon by all parties of the system.

The conceptualized system, a Cyborg Environment, depends on the definition of a cyborg space, or a spatial reality of a cyborg, i.e. prosthetic, nature. The sections below explain the definition of such a cyborg space.

3.1.1 Definition of a cyborg space

A cyborg space is a cluster of stimuli that evokes a spatial experience of a prosthetic and informational nature (within cyberspace) and that forms a self-regulating network of relationships between itself and its inhabitants by being able to undergo spatial experiences of its own. In other words, a cyborg space is:

• defined as stimuli,
• dependent on presence,
• within cyberspace,
• autopoietic, and
• subject to spatial experiences.

A cyborg space is defined as stimuli

The head-mounted display is an example of the understanding of space – virtualized space, in this case – as an entity that can be embodied by the human body. Generally used to deliver highly immersive – i.e. closer to real – environmental stimulation, it has proven that correct emulation of spatial characteristics provides a strong sense of embodying space, or the virtual representation of oneself in the virtual world. For example, the research of Rothbaum et al. in Fear of Flying Exposure Therapy used virtual reality to simulate fear experiences. Their experiments have used
. . . an immersive head-mounted display (HMD) that consists of a display screen for each eye, earphones, and a head-tracking device, while sitting or standing on a low platform atop a bass speaker, thus placing the user within a multisensory, 360-degree environment that can provide visual, auditory, and kinesthetic cues (i.e., vibrations).[59]

to measure and study immersion. According to the researchers, “although the user’s experience is entirely computer-generated, the individual’s perception overlooks the role of technology in the experience”[59] and renders the experience as sufficiently real to be embodied by some of the human participants.

If we return to Warren Robinett’s original description of the use of the head-mounted display to “project our eyes, ears, and hands in robot bodies at distant and dangerous places. . . [or being] able to create synthetic senses that let us see things that are invisible to our ordinary senses,”[58] it is possible to understand this technology as an entity based on visual stimuli that provides the prosthetic effect of inhabiting the world.

Therefore, a conception of space based on three vectors – (x, y, z) – and empowered by a fourth vector t – time – to fit a thermodynamic world can be replaced with a more contemporary understanding of space as a sensory perception of events outside the human body, recorded as a stream[39] in the human mind. This definition allows the creation of a space science based on the world’s intrinsic changing nature and the physiologically-based perception of stimuli available to the observer. Therefore, a cyborg space is such a collection of perceivable stimuli that results in a sense of being there, or embodying space.

A cyborg space is dependent on presence

A cyborg is defined by its extended perception and action capabilities. Moreover, a cyborg is defined and constrained by its environment and can only become a cyborg in a space of prosthetic nature.
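The stream-based conception of space outlined above can be caricatured in a few lines of code. The sketch below is purely illustrative and uses invented names (Stimulus, StimulusStream); it records space not as (x, y, z, t) coordinates but as a stream of perceived events, from which the set of channels that have actually stimulated the perceiver can be read.

```python
from dataclasses import dataclass, field

# Hypothetical illustration (not from the thesis): space modeled not as
# (x, y, z, t) coordinates but as a recorded stream of perceivable stimuli.

@dataclass
class Stimulus:
    timestamp: float      # when the event reached the perceiver
    channel: str          # e.g. "visual", "auditory", "haptic"
    intensity: float      # normalized strength of the stimulus

@dataclass
class StimulusStream:
    """A cyborg space as a collection of perceivable stimuli."""
    events: list = field(default_factory=list)

    def perceive(self, stimulus: Stimulus) -> None:
        self.events.append(stimulus)

    def channels(self) -> set:
        # the spatial experience is built from whichever channels
        # have actually stimulated the perceiver
        return {e.channel for e in self.events}

space = StimulusStream()
space.perceive(Stimulus(0.0, "visual", 0.8))
space.perceive(Stimulus(0.1, "haptic", 0.3))
print(sorted(space.channels()))  # ['haptic', 'visual']
```

On this reading, "adding a sense" to the space is simply admitting a new channel into the stream, rather than altering any geometric description.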
In 2006, Tichon and Banks researched the differences of delivering virtual environments for exposure therapy with a desktop PC or a 150-degree screen. The study suggested6 “that porting a virtual exposure therapy environment from a semi-immersive interface to [a] desktop PC does not significantly impact presence”.[68] In other words, immersive and non-immersive visual stimulation can equally affect human perception if such stimulation is done appropriately. According to their description, “psychologically, a successful virtual experience will make the user become involved in the world to the point where he or she experiences a sense of presence in the virtual world of ‘really being there.’ ”[68]

Because a cyborg space is defined as a collection of stimuli that results in a spatial experience, a proper delivery of stimuli that creates a strong sense of presence is needed. Presence can be tested with the Presence Questionnaire developed by Witmer and Singer[71] in 1998, which measures four factors of accurate presence, or immersion:

1. The human control of the environment being presented,
2. Successful reduction of distraction factors,
3. Level of sensory stimulation conveying sufficient information to the beholder’s senses, and
4. The realism of the information being presented, correlating with the characteristics of the real world.

6The results of the experiment should be taken with caution. Virtual door control was made automatically in the 150-degree experiment, while it was done manually by the experimenter in the desktop PC environment; this could have potentially affected the control variable of the experiment. In the words of the experimenters, “presence is usually enhanced when the user can exert a greater level of control over the task environment or has an increased ability to interact in the environment.”[68]
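As a rough illustration of how a four-factor measure of this kind might be aggregated, the sketch below averages hypothetical Likert-style ratings per factor. The factor names, the 1-7 scale and the simple averaging are assumptions made for illustration only; they are not Witmer and Singer's actual questionnaire items or scoring procedure.

```python
# Illustrative only: the grouping into four factors mirrors the list above,
# but the item wording, scale (assumed 1-7) and plain averaging are
# assumptions, not the official Presence Questionnaire scoring.

FACTORS = ("control", "distraction", "sensory", "realism")

def presence_score(ratings: dict) -> float:
    """Average the mean rating of each factor into one presence score."""
    for factor in FACTORS:
        if factor not in ratings or not ratings[factor]:
            raise ValueError(f"missing ratings for factor: {factor}")
    factor_means = [sum(r) / len(r) for r in (ratings[f] for f in FACTORS)]
    return sum(factor_means) / len(factor_means)

ratings = {
    "control":     [6, 5, 6],   # control over the environment
    "distraction": [5, 4],      # freedom from distraction
    "sensory":     [6, 6, 5],   # richness of sensory stimulation
    "realism":     [4, 5],      # realism of the presented information
}
print(round(presence_score(ratings), 2))  # 5.08
```

Averaging per factor before combining keeps a factor with many items (here, sensory) from dominating one with few (here, realism), which matches the spirit of treating the four factors as equally important.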
Tichon and Banks’ experiments have provided proof, however, that said presence is not dependent on the apparatus presenting the stimuli, but on appropriate stimulation methodologies and the design of the stimuli. An architectural spatiality based on its ability to embody human inhabitants should be defined and measured through its capabilities to promote presence. This, as we have seen, is not dependent on the technology available, but on the appropriate implementation of such technology, and is scientifically measurable.

A cyborg space is within cyberspace

Humans have learned to shape their surroundings according to their needs. Environment modification can be achieved by appropriation of an existent space, alteration of an environment’s existing qualities, or by superimposing alien structures constructed from the environment’s parts. The technology of every era has allowed for different methods of creating these alterations of the environment, or the construction of structures within it.

Digital computers7 ignited cybernetics, a concept that “elaborated Descartes’s mechanistic view of the world and looked at humans as information processing machines”[16], creating the possibility of conceptualizing information processing machines as humanized entities. Robots are the outcome of the cybernetic concept, their function being that of enhancing human activities, but also freeing humans. In 1921, Karel Čapek’s play R.U.R. –Rossum’s Universal Robots– presented the idea of artificial people called robots. Robot, in Czech, means “both forced labor and worker”[10] and depicted an entity that would build a world where “everybody [would] be free from worry and liberated from the degradation of labor.”[10] The robot became an essential component of a society founded in performance and comfort.
The ultimate task of the computer, as Landauer explains, was “to replace humans in the performance of tasks”[13] and to automate human abilities, liberating and extending the capabilities of its users. With ubiquitous computing and cyborgization, cities began to become large-scale robots that would augment the living environment of humans by providing services otherwise nonexistent in the natural world – wireless connectivity, traffic control, mobile advertisement, etc.

⁷ “The race against German scientists to build an atomic bomb and the need to break the codes of the Nazi cipher machines were major forces behind the development of high-speed calculating machines. . . .”[16]

This network of services is cyberspace, or the ultimate robot surrounding. Created purely of non-material construction blocks – stimulation, communication abilities, information – it radically differs from its natural counterpart – real space – and can only be compared with similar extreme environments inhabited by cybernetic organisms. As Gray writes, “disembodiment in cyberspace is hyperbodiment in outer space, but both places are dependent on machines and therefore both places are inhabited only by machines - and cyborgs, of course.”[28]

Cyberspace overlaps reality; it enhances and overtakes it like a virus assimilating a new host into its collectivity. It is a fact supported by Hirtle and Sorrows that “many of the same cognitive principles that are important in navigation of physical space are also involved in navigation in electronic environments.”[18] This exchangeability of mental maps arising from both real space and cyberspace results in a tight relationship between the two. This allows cyberspace to co-exist with reality by embedding itself into it and providing the latter with de-territorialization and interacting capabilities.
For human cyborgs, how they apprehend and connect to the world around them mimics how they understand and manipulate their selves. For living cybernetic organisms the environment is an extension of their body, as much as their body is an extension of the world. They know that their actions tense and relax the streams of data that hold the world together, and accept that said actions are a result of the information clusters they have temporarily chosen to belong to.

A cyborg space is autopoietic

The concept of autopoiesis is a synthetic approach to modeling complex and interacting systems of relationships. It allows us to understand and theorize the structure and functioning of living entities, machines and organizations. Humberto Maturana and Francisco Varela define an autopoietic machine as,

A machine organized (defined as a unity) as a network of processes of production (transformation and destruction) of components that produces the components which: (i) through their interactions and transformations continuously regenerate and realize the network of processes (relations) that produced them; and (ii) constitute it (the machine) as a concrete unity in the space in which they (the components) exist by specifying the topological domain of its realization as such a network. It follows that an autopoietic machine continuously generates and specifies its own organization through its operation as a system of production of its own components, and does this in an endless turnover of components under conditions of continuous perturbations and compensation of perturbations. Therefore, an autopoietic machine is an homeostatic (or rather a relations-static) system which has its own organization (defining network of relations) as the fundamental variable which it maintains constant.[35]

Entities that belong to an autopoietic system can also be formed of autopoietic sub-entities.
A human being is an autopoietic system, in the same sense that a tiny brain stem cell in his or her head is an autopoietic, enclosed system. Within the walls of the stem cell a complex network of relationships allows for the complex processes that permit its existence. In the same manner, the human skin encloses an autopoietic system of networked relationships that allows a human to live. Furthermore, the skin that encloses a human system connects it to other entities that form part of either the environment or the human collective. This forms a network of inter-acting parts of autopoietic definition called environment.⁸

Autopoietic systems are “purposeless systems”[35] enclosed in their own self-regulation process. There exists no goal to achieve, yet the system is considered to be alive. The system’s regulatory rules are only self-evident to the parts of the system – moreover, these parts are unaware of the former. A lack of “inputs” and “outputs” isolates the system and creates a self-contained unobservable entity. In Maturana’s words,

Since the autopoietic machine has no inputs or outputs, any correlation between regularly occurring independent events that perturb it, and the state to state transitions that arise from these perturbations, which the observer may pretend to reveal, pertain to the history of the machine in the context of the observation, and not to the operation of its autopoietic organization.[35]

Cyborgs are autopoietic. They are part of a systematic interactivity of autopoietic form with their environment – from which they are both separated and to which they are linked – and with other cyborgs of the same collective[16].⁹ Their electric, biological and

⁸ The skin is not considered an input-output device, but a connection point between parts of an autopoietic system.

⁹ Human skin is an important connection with the environment, while language is an important connection with other equally capable humans.
Humans must have been seen as cyborgs or post-sapiens by other sapiens excluded from the human collective autopoietic system through their inaccessibility to spoken language.

pharmaceutical enhancements link them to a vast and complex network of cyborg relations.

By interacting with other parts of the autopoietic system these entities “regenerate and realize the network of processes” that make them cyborgs. In other words, cyborgs are cyborgs not by choice, but by definition. They were born cyborgs, in constant connection with their environment. Their actions are both a consequence of the system they belong to and a cause of the specific state of said system of networked relationships. There is no purpose in this inter-relationship, but a mere coherence in the flow of tightly related information.

Finally, the autopoietic relationship between a cyborg, other cyborgs and their environment is unmeasurable, due to its own enclosed definition. Understanding an autopoietic system is only possible by projecting the definitions of such a system onto another system[35]. By creating a simplified model and becoming part of its autopoiesis, we are able to describe, measure and replicate the original autopoietic system the simplified model relates to. An evident difference exists between the two systems, but it is only through this approximation that we can understand the nature of a cyborg space.

A cyborg space is subject to spatial experiences

Space can be conceptualized as a spatial experience resulting from a perceptual and cognitive process. By a process of synthesis over the cognitive models and processes outlined in the previous sections it is possible to hypothesize that spatial perception is a process formed of at least three parts,

1. A collection of stimuli
2. that are bound together by a cognitive relationship
3. resulting in a specific space map or representation of space.
This process could be synthetically achieved by any entity capable of such processes. It is possible to theorize that there exist different types of spatialities, each corresponding to the perceiving capabilities of any entity capable of perception, cognition and action. Each one of these entities would perceive the world in a different manner and would undergo a cognitive process within its own biological – or electronic – capabilities. In other words, if an entity is capable of perceiving stimuli of any kind, constructing a spatial model through a cognitive process, and acting upon the perceived stimuli by using said model, we can consider such an entity capable of spatial experiences.

Figure 3.1: Evoking a spatial experience through stimuli.

The present research focuses on two types of spatiality: human spatiality and space spatiality. A human spatiality is the spatial experience arising from the cognitive process undergone by a human after perceiving space stimuli (light, sound, etc.). A space spatiality is the spatial experience arising from a synthetic cognitive process undergone by a space after perceiving human stimuli (movement, position, etc.).

Furthermore, following the previous sections it is possible to hypothesize three states of perceivable stimuli, which in turn result in the perception of spatial experiences (figure 3.1):

• A fixed, properly arranged, collection of stimuli,
• An induced relationship between two or more stimuli,
• An alteration of one or several stimuli over time.

It is possible to design systems based purely on stimuli that result in controlled Spatial Experiences. The present experiment has focused on stimuli arrangement and stimuli alteration, while controlling any induced relationships between stimuli.
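The three-part process above can be sketched computationally. The following toy model is an assumption-laden illustration (one-dimensional positions, spatial proximity as the binding relationship), not the implementation used in the experiments.

```python
# Toy sketch of the three-part spatial-perception process:
# (1) a collection of stimuli, (2) a cognitive relationship binding them,
# (3) a resulting space map. All names and parameters are illustrative.

from dataclasses import dataclass

@dataclass
class Stimulus:
    kind: str         # e.g. "light", "sound", "movement"
    position: float   # 1-D placement, for simplicity
    intensity: float

def bind(stimuli, max_gap=1.0):
    """Binding relationship: group stimuli whose positions lie close together."""
    ordered = sorted(stimuli, key=lambda s: s.position)
    groups, current = [], [ordered[0]]
    for s in ordered[1:]:
        if s.position - current[-1].position <= max_gap:
            current.append(s)
        else:
            groups.append(current)
            current = [s]
    groups.append(current)
    return groups

def space_map(stimuli):
    """Space map: one perceived region per bound group, at its centroid."""
    return [sum(s.position for s in g) / len(g) for g in bind(stimuli)]

stimuli = [Stimulus("light", 0.0, 1.0),
           Stimulus("sound", 0.5, 0.8),
           Stimulus("light", 3.0, 0.6)]
# The first two stimuli bind into one perceived region; the third stands alone.
```

Any entity – biological or electronic – that can run some analogue of `bind` and `space_map` over its percepts would, in the terms above, be capable of a spatial experience.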
3.2 Models for a cyborg environment

Three models (figure 3.2) have been created to understand, taxonomize and construct cyborg environments:

• Cyborg space: Figure 3.2(a). This model represents the definition of a Space Cyborg through its embodiment, enclosing and alteration capabilities. It defines a single self-standing cyborg space entity.
• Cyborg communication: Figure 3.2(b). This model represents the communication capabilities of a Cyborg Space; it defines the methods used to interact with similar entities – space or human cyborgs.
• Cyborg systems: Figure 3.2(c). This model represents the cohesive system that arises when cyborgs – human or space cyborgs – interact.

Figure 3.2: Models for a cyborg environment. (a) Cyborg Space, (b) Cyborg Communication, (c) Cyborg System.

3.2.1 Cyborg space

A cyborg space is defined as a cluster of stimuli evoking a spatial experience of prosthetic and autopoietic nature. This definition can be modeled by measuring three capabilities, or dimensions, of cyborg spaces:

• Embodiment: Ability to achieve an autopoietic relationship with its inhabitants by promoting presence, or the sensation of “really being there”,
• Enclosure: Ability to evoke a spatial experience in its inhabitants through stimuli within cyberspace, and
• Alteration: Ability to change over time.

Embodiment

According to Sidney Fels, “people form relationships with objects external to their own self. These objects may be other people, devices, or other external entities. The types of relationships that form and the aesthetics of the relationships motivate the development of interaction skill with objects as well as bonding.”[25] When studying a spatiality of cyborg nature it is critical to be able to conceive the possibility of a bidirectional embodiment.
It is possible for any cyborg, due to its own nature, to ontologically become more machine or more organism in order to complement or dialogue with another cyborg. The process is simple and relates to the ability of said cyborg entity to take differential control over its two components. By doing so, it becomes part of – embodied by – the second cyborg and allows for dialogue to take place. Due to the equal nature of all participants within the autopoietic system the communication can be done reciprocally and both entities become self-embodied.

Fels proposes four prototypical relationships that have been named after their aesthetic definition:

1. In the first one, named Achieving, the human cyborg “stimulates the object [which] responds”[25]. Embodiment is achieved through the level of control that the stimulating entity can exert over the stimulated one. The human cyborg is satisfied by achieving its task by using a space cyborg as its tool.
2. In the second one, Doing, the controlling human cyborg has extended into the embodied space cyborg’s self, making it part of its own. The resulting intimacy is due to a transparency of the controlling device provided by the space cyborg’s nature, or its interface transparency, as well as the ability of the embedding cyborg to control the embodied one.
3. The third, entitled Contemplation, is the result of the space cyborg stimulating the human cyborg. According to Fels, “based on the person’s own knowledge and beliefs, the stimulus may be satisfying.”[25]
4. The last type of relationship is called Belonging and is manifested when the space cyborg intimately embeds the human cyborg. A sense of belonging to the space cyborg can be experienced by an embodied human cyborg.

Enclosure

Space can be conceptualized as formed of two parts: the spatial liquid, or mental representation of the space, and the spatial membrane that contains such liquid.
The spatial liquid concept represents the allocentric perception of space – mapping and navigation – as a purely cognitive process independent of reality. The membrane concept represents the egocentric characteristics of space perception – stereopsis and visual cues – that are dependent on objects that evoke stimuli of spatial nature. For example, when a human participant explores a virtual maze he or she uses visual cues presented by the maze’s membrane to understand the spatial liquid that represents the virtual environment. Furthermore, because human cyborgs interact with a very large environment – reality – throughout their lives, it is important to conceive architectural delimitation as lying within this larger spatial liquid.

Three main states of space enclosure, i.e. architectural delimitation of reality, can then be hypothesized:

1. Closed Cube. A fragment of existent liquid is perfectly enclosed with the use of a membrane; a new liquid is then formed through complete encapsulation.
2. Opened Cube. A semi-closed membrane partially delimits an existent liquid. The partiality of the enclosure generates an incomplete and flowing mixture of both the existent liquid and the new liquid. The continuous flow between both entities defines the relationship between them.
3. Exploded Cube. A fragmented membrane serves as a disintegrative/integrative connection between two sub-sets of existent liquid. The high level of fragmentation and the apparent null spatial relationship between both realities drives the interaction to a stronger semantic level. Both linked sub-sets recreate themselves as new liquids by semantic definition rather than by physical connectivity.

Alteration

Transformations of the Cyborg Space – or delimited space liquid as explained above – can take place through three main methods:

1. Membrane Alteration.
This kind of alteration involves any semantic and/or physical transformation of the enclosing membrane in order to obtain change in the spatial liquid.
2. Liquid Alteration. This alteration involves any morphologic alteration – i.e. bipartition, elongation, multiplication, expansion, compression – of the spatial liquid without the use of the spatial membrane. The semantic definition of the spatial liquid allows such transformation without physically altering the membrane that contains it.
3. Fusion Alteration. This alteration is defined by the progressive or conservative mixture of two spatial liquids resulting in a third homogeneous spatial liquid.

Model: the cube model

The model has three dimensions relating to the level of embodiment, type of spatial enclosure and type of alteration of which any space cyborg is capable. This model has been called The Cube Model (figure 3.3). The Cube Model is an indexing of possible states of a spatiality of cyborg nature. Cyborg spatial entities have three main qualities: they are able to embody other cyborgs for their own purposes or let themselves be embodied; they deal with space as an information generator and communication channel; and, finally, they are non-static entities always in change. The model creates a categorization of each one of these characteristics and joins the results in a three-dimensional model.

3.2.2 Cyborg communication

Serres’s communication theory

Communication is essential for the Space Cyborg. Due to its prosthetic nature it employs communication both within its own structure and between its prostheses and the entities that connect to them. Communication, however, depends on a constant opposition between the code used to en-code a message and the noise that both the channel and the interactive process produce.
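The opposition between code and noise can be illustrated with a toy channel. The sketch below is purely an assumption for illustration – a Caesar-style substitution code stands in for “code” and random symbol corruption for “noise” – showing that a shared code recovers the message only where noise has not displaced it.

```python
# Toy channel: a shared substitution code versus random symbol noise.
# The cipher and the noise model are illustrative assumptions only.

import random
import string

CODE = {c: string.ascii_lowercase[(i + 3) % 26]
        for i, c in enumerate(string.ascii_lowercase)}
DECODE = {v: k for k, v in CODE.items()}

def encode(message):
    """En-code a message using the shared code (non-letters pass through)."""
    return "".join(CODE.get(c, c) for c in message)

def channel(signal, noise_level, rng):
    """Replace each symbol with random noise with probability noise_level."""
    return "".join(rng.choice(string.ascii_lowercase)
                   if rng.random() < noise_level else c
                   for c in signal)

def decode(signal):
    return "".join(DECODE.get(c, c) for c in signal)

rng = random.Random(0)
# With zero noise the shared code behaves as a universal communicator:
# the message survives the channel intact.
assert decode(channel(encode("space"), 0.0, rng)) == "space"
```

As `noise_level` rises toward 1.0 the system drifts from pure-code toward pure-noise: the receiver still applies the shared code, but fewer and fewer symbols it decodes were ever en-coded by the sender.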
Michel Serres writes in his book Hermes: Literature, Science, Philosophy that

It can be shown easily enough that no method of communication is universal: on the contrary, all methods are regional, in other words, isomorphic to one language. The space of linguistic communication (which, therefore, is the standard model of any space of communication) is not isotropic. An object that is the universal communicator or that is universally communicated does, however, exist: the technical object in general. That is why we find, at the dawn of history, that the first diffusion belongs to it: its space of communication is isotropic. Let there be no misunderstanding: at stake here is a definition of prehistory. History begins with regional language and the space of anisotropic communication. Hence this law of three states: technological isotropy, linguistic anisotropy, linguistic-technical isotropy. The third state should not be long in arriving.[64]

Figure 3.3: Cube model for a cyborg architecture taxonomy.

For Serres, language is both limited and defined by parasites or bifurcations; in his words,

The story doesn’t yet tell of the banquet, but of another story that tells, not yet of the banquet, but of another story that again. . . And what is spoken of is what it is a question of: bifurcations and branchings. That is to say, parasites. The story indefinitely chases in front of itself what it speaks of.[65]

Figure 3.4: Serres’s model of communication.

Parasites are the interferences or bifurcations that occur when a message is transmitted. For Serres, all systems of communication have parasites – noise – that result in messages. In other words, noise is a fundamental part of the message and it gives existence to it. There is meaning because there exist bifurcations and branchings in the transmission of information.
Serres’s model of communication (figure 3.4) is based on the opposition of this parasite, or noise, to the message’s code. His model of communication is formed of four factors that interact with each other and shift over time, resulting in different communication-scapes. The universal communicator is achieved through the elimination of noise, while a non-communicator is defined by the exclusion of the code from the system.

Model: the diamond model

For Donna Haraway, “Biological organisms have become biotic systems, communication devices like others”.[29] Built to communicate, both human cyborg and space cyborg rely on their inter-connection for true existence. The Diamond Model is based on Serres’s model of communication and is formed of five entities forming the vertices of the geometric representation shown in figure 3.5, where (1) Human Cyborg A, (1’) Human Cyborg B, (2) Space Cyborg, (3) Code and (4) Noise.

Figure 3.5: Diamond model for a cyborg’s communication ability.

In a system of interacting cyborgs (figure 3.6), the code is defined as the proper arrangement of stimuli in a semantic manner that en-codes a message, while the noise is defined as all external stimuli or inadequately arranged stimuli negating such en-coding. Both code and noise vertices vary across time and dynamically affect the communication system, which fluctuates between pure-code and pure-noise within each conversation – as proposed by Serres. Furthermore, the code and noise vertices lie outside the autopoietic interaction. Since both code and noise are independent of the stimuli, i.e. are the semantic relationship between stimuli, they may or may not arise in an interactive system. Therefore, a code should be agreed upon – or learned – and used appropriately to reduce the noise that derives from any interactive activity.

3.2.3 Cyborg system

Cyborgs are constructed to inter-act with – embody – other cyborgs.
Due to their technological extensions, cybernetic entities both depend on and construct the environment they belong to. They become part of a system formed of interacting prosthetic abilities, i.e. both cyborg and environment create a structure of inter-activity based on communication: a societal system.

Figure 3.6: Communication process between two cyborgs.

Chris Hables Gray, interested in the political role of cyborgs in a post-human society, defines the idea of citizenship as a result of interaction between members of a community. In his words:

Currently, judgments about the suitability for citizenship of individual humans and cyborgs are made on the grounds of their ability to take part in the discourse of the polis[28].

This ability is acquired either by being ‘natural’ – being born in such a community – or by proving a ‘belonging’ to such a community. The cyborgs proposed by the present research are based on and defined by their prosthetic inter-acting abilities, i.e. their capabilities to use their technological extensions to act upon their already acting environment. Both human and space cyborgs, due to their extension upon each other, are part of a system of inter-action where dialogue can only be achieved through embodiment, as defined by Fels.[25] Therefore, such citizenship – or definition of the capabilities of dialogue – in a Cyborg Environment is defined by the ability to use said acting capabilities to achieve meaningful communication.

Cyborg citizenship

Access to the various levels of action within a cyborg collective is restricted to citizens with different capabilities. A structure of this nature allows the group to achieve bigger goals with minimum effort.
While some citizens of such a collective have access to higher levels of action, others are restricted to important but basic acting functions of the community. Both parts of the system are essential for the correct functioning of the group and rely on the interaction that arises between them.

Inter-action is the main asset of a cyborg citizen, i.e. the ability to connect its own actions with those belonging to other organisms of the collective. The capability of an entity to inter-act is based on the entity’s own ability to act upon the world and other citizens. Norman describes any human action upon the world through three main actions. For him, “to get something done, you have to start with some notion of what is wanted – the goal that is to be achieved. Then you have to do something to the world, that is, take action to move yourself or manipulate someone or something. Finally, you check to see that your goal was made.”[49] This cycle can be fully described in seven parts that Norman calls the “seven stages of user activity”:

• Establishing the Goal,
• Forming the Intention,
• Specifying the Action Sequence,
• Executing the Action,
• Perceiving the System State,
• Interpreting the State and
• Evaluating the system state with respect to the Goals and Intentions [48]

Norman’s definition of action upon the world can be used to define a structure – citizenship – based on access to different stages of action. The more action an entity has access to, the more power over the system it can achieve. This political structure is composed of four different citizenships (figure 3.7):

• Tools: Tools are entities that can execute an action sequence and/or perceive the state of the world. They are equipped with sensing and/or acting mechanisms that allow them to act upon the world.
• Actors: In addition to the capabilities of tools, actors can interpret the percepts gathered through their sensing systems and are able to construct sets of several commands – an ability for abstraction – in order to achieve higher commands.
• Agents: Agents are actors that have control over the intention to act and are able to evaluate the interpreted perceptions. Agents can be considered “intelligent” due to their high abstracting capabilities; however, these entities do not have access to or control over their goals. Agents are goal-oriented and can rarely modify this orientation.
• Super-agents: Super-agents have access to and control over their goals. These entities have superior control over their interacting capabilities and the system they belong to.

Figure 3.7: Citizenship based on the access to different stages of action.

This taxonomy yields a hierarchy of action potential that allows entities in higher citizenship strata to control lower entities or include them as part of their own functions. This process of layering allows an entity to delegate lower action functions to lower entities of the system and extend onto several systems of simultaneous activity. For example, a super-agent can belong to several action systems by delegating functions to automated agents under one or several goals; the super-agent can then focus on the definition and control of various goals. Layering also means that a citizen can deliberately set its highest action capability as static and become a lower entity. For example, a super-agent may set a fixed goal during a delimited period of time and focus on its lower-level functions; during this time the super-agent functions as an agent and can even be subject to super-agent control. This layering concept suggests that systems constructed for lower entities can be tested using higher entities – provided that the higher capabilities of the latter are properly controlled.
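The four citizenships, and the layering between them, can be sketched as a class hierarchy in which each stratum gains access to more of Norman's stages of action. All method names below are illustrative assumptions, not part of the thesis apparatus.

```python
# Hypothetical sketch: each citizenship stratum unlocks more stages of action.

class Tool:
    """Executes action sequences and/or perceives the world state."""
    def execute(self, action):
        return f"executed {action}"
    def perceive(self):
        return "raw world state"

class Actor(Tool):
    """Adds interpretation of percepts and composition of command sets."""
    def interpret(self, percept):
        return f"interpretation of {percept}"
    def compose(self, actions):
        return [self.execute(a) for a in actions]

class Agent(Actor):
    """Adds intention and evaluation, but its goal is fixed from outside."""
    def __init__(self, goal):
        self.goal = goal
    def intend(self):
        return f"intending actions toward {self.goal}"
    def evaluate(self, interpretation):
        return self.goal in interpretation

class SuperAgent(Agent):
    """Adds access to and control over its own goals."""
    def set_goal(self, goal):
        self.goal = goal

# Layering: a SuperAgent that never calls set_goal during a session is
# behaviourally indistinguishable from an Agent, so a system built for
# agents can be tested with (goal-fixed) super-agents.
subject = SuperAgent("navigate the room")
```

The inheritance chain mirrors the layering idea: every higher citizen *is* a lower one and can delegate the inherited lower-level functions.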
A system of agents can be tested with super-agents if the latter have fixed goals during the testing sessions.

Model: the cyborg system

The Space Perception Loop (figure 2.1) previously explained can be used to theorize an inter-connection between human cyborgs and space cyborgs. If both entities are considered as Agents, i.e. actors that have control over the intention to act and are able to evaluate the interpreted perceptions, it is possible to link their Space Perception Loops together to create a fluent system of constant inter-action.

Figure 3.8: Paired space and human perceptive loops.

This new system, depicted by figure 3.8, would be considered an autopoietic system with no other goal but to maintain the relationships that constitute it. In said network, both entities gather information about their world (light and direction in the case of the human, movement and position in the case of the space), create a spatial model (using egocentric and/or allocentric relationships) and act by updating their own state (movement and position in the case of the human, light and direction in the case of the space).

A Space Cyborg fitting the above model would have to possess perception, cognition and action capabilities. For testing purposes this entity can be modeled as an agent able to perceive the world, undergo a spatial experience, create a space model and act on the world to calibrate its spatial model. Because said agent is encoded into a space form – it can only be understood by human beings as space – it has been named a Space Encoded Agent. Figure 3.9 presents the parts of such an entity in relation to the previously explained interconnection of human and space perception loops. By using a Space Encoded Agent, i.e.
a cyborg citizen without access to the creation and manipulation of higher-order goals, specifically designed to interact through space with a human participant, it is possible to construct a prototypical interactive spatial environment formed of equally empowered entities that permits the measurement and analysis of Cyborg Environments.

Figure 3.9: Human and space encoded agent forming an organic interactive system.

Chapter 4

Experiments

4.1 Similar experiments

The following sections present the most relevant research that has been conducted by researchers interested in one or several parts of what has been defined as Cyborg Environments. The presented explorations have been organized in the following clusters:

• Stimuli: Those investigations interested in the conception of space as stimuli, or the capabilities of spatial stimuli to promote human activity and engagement;
• Information: Research interested in space as media-based or semantic systems of information;
• Agent interactions: Research interested in the relationship between agents and humans and how to create better interactive agents; and
• Cyborg architectures: Those investigations interested in the enhancement of built spaces through technology, and thus in creating cyborg-like architectures.

4.1.1 Space as stimuli

Environmental stimuli as interfaces

When speaking of interactive spaces we are immediately pointed to previous work on spaces that have been enhanced by technology to interact with human presence. Hiroshi Ishii's work at the MIT Media Lab has explored the haptic relationship between information and humans. His research group, the Tangible Media Group, follows one objective: “to blur the boundary between our bodies and cyberspace and to turn the architectural space into an interface between the people, bits, and atoms.”[36] A representative example of this effort is AmbientROOM.
By using “light, shadow, sound, airflow and water movement in an augmented architectural space”, the system aims to provide the architectural space with cyborg-like enhancements that explore background awareness and foreground activity[38].

The prototypical space depicted by AmbientROOM takes advantage of humans’ ability to process background information and uses projectors, fans and speakers to generate a stream of data that enhances the perceptive capabilities of its inhabitants. Displayed ripples allow the users to monitor the state of a distant living being, light patches represent human presence, sound is used to present “natural soundscapes”[38] and physical objects are used to modify the state of the room. The environment becomes a gateway to digital information and the objects become physical media controls.

Engaging properties of spatial stimuli

Ernest Edmonds’s[20] work in the Creativity and Cognition Studios at the University of Technology, Sydney defines a model for engagement with interactive art based on three attributes:

• Attractors, “those things that encourage the audience to take note of the system in the first place”[24],
• Sustainers, “those attributes that keep the audience engaged during an initial encounter”[24] and
• Relaters, “aspects that help a continuing relationship to grow so that the audience returns to the work on future occasions.”[24]

His studies suggest the possibility that predictable human responses can be obtained by visual and aural stimuli projected onto reality. Moreover, the properties of these stimuli can be of extremely rough definition – generally vertical color lines or sinusoidal wave sounds – but they depend on complex systems with the ability to perceive and engage their human observers.
In his work Shaping Form[19], shown in the Speculative Data and the Creative Imaginary exhibition in 2007, a set of canvassed simple visual stimuli mutate over time due to human presence and engage their observers in an invisible interactive process. The square plasma screens mounted on the gallery’s walls present a series of vertical color lines that are the representation over time of events gathered by cameras embedded in the same artworks. By interacting with the objects, the observers of these entities shape their form in unpredictable and interaction-engaging ways.

This understanding that an object’s perceived form can be detached from its physicality – and made dependent on underlying and complex logic structures – is more evident in Broadway One, presented in the SIGGRAPH Art exhibition in Los Angeles in 2004. In this project, Edmonds explores the possibility of a “synaesthetic work”[19] where visual and aural stimuli are dependent on the same generating logic. A generative algorithm produces two numbers that are then used by a presenter – in this case a plasma screen mounted on a wall and a set of speakers – to create a physical state perceivable by human beings. In Edmonds’s words: “The image display section [of the presenter] waits for a list of two integers. The first integer relates to a position, and the second a color. The audio output waits for the same two integers but treats the first as a position in time not space, and the second as sound.”[21]

The complex and almost chaotic relationship that arises between Edmonds’s interactive objects and their viewers results in an aesthetic experience that exceeds the possibilities of static artworks. This has led other researchers to believe that physical responses can be obtained through visual cues projected onto spatial realities.

Spatial stimuli and kinesthetic actions

Several artifacts have explored the relationship between computer-generated stimuli and kinesthetic responses.
Andrew Hieronymi’s MOVE is “an installation, computer vision and full-body interaction”[31] system where humans can playfully interact with images projected onto space. The prototype is composed of a computer, a camera and a projector mounted on a ceiling, with both camera and projector directed towards the floor. The camera detects the position of a human participant within a projected set of objects. This information is then transformed into a virtual representation of the human body that can be used to process collisions between the projected objects and the biological entity.

Based on avatar-based actions generally present in action games, six environments or modules were created: Jump, Avoid, Chase, Throw, Hide and Collect. Each of these actions aimed at promoting a specific way of interaction between humans and graphical information. The abstraction of the human body and its kinesthetic actions into virtualized objects with position over time has allowed Hieronymi to create a seamless connection between real and virtual physicality. Humans are both solid bodies and position-motion information, while objects are both images projected on the floor and virtual bodies.

MOVE has successfully achieved a bi-dimensional – virtual and real – ecosystem of inter-action. Human inhabitants of this reality have no choice but to interact with it, while the reality is inevitably affected by its human inhabitants. Although the system is constrained by its own definition – modules of specific interaction methods – it demonstrates that by understanding the physical and virtual worlds as informational systems it is possible to link both into a self-regulating system.

4.1.2 Space as information

Media-based environments

Bubble Cosmos, presented at SIGGRAPH 2006, explores the concept of Fantaraction.
“Fantaraction places emphasis on entertainment and art and imparts momentary surprise or pleasure” [46] to construct a surrealistic interaction between modifiable objects loaded with media. The prototype constructs smoke-filled bubbles that float in the air; a projector then shines light onto the contained smoke, giving the impression that images are contained within the bubbles. Breaking a bubble triggers a burst sound and a colorful effect of a spreading smoke-like image.

This bubble display system allows physical interaction with otherwise intangible media. By bursting the contained images into a colorful disappearance, participants in this reality achieve haptic control over digital information in a subtle, yet powerful, manner. A strong sense of control over the information that forms part of the system is provided by a simple act: deciding the fate of each informational bit. The physical actions involved in this judgment create a connection between the mediatized information and the haptic and proprioceptive abilities of the human participants. This interconnection of human abilities and media opens the possibility of interactions between physical stimuli – light or sound – and virtual information – location or state – with a further semantic component.

Semantic environments

Narumi et al. have explored the interaction between physical information – perceived by humans as stimuli – and semantic information. Their prototype Inter-glow is an interactive miniature world that “facilitates close interaction and communication among users in real spaces by using multiplexed visible-light communication technology”[47]. Inter-glow is a model of a dining table and four chairs corresponding to four members of a virtual family – father, mother, daughter and son. Four light sources flickering at different rates hang vertically from the miniature living room’s ceiling.
By directing each of these towards the center of the small table, human users can trigger the presence of a virtual participant. Twelve combinations of member co-existence exist, leading to twelve different conversations – e.g. between the father and the daughter – controllable through this light interface.

The human participants in this art installation can achieve control of the semantic relations between each virtual family member – and of the overall system – by discovering each virtual participant, uncovering the relationships between participants and controlling the flow of the developing story. This interactive exploration of the “relationship between characters”[47] allows the observers of Inter-glow to give meaning to an otherwise nonexistent world.

Although no significant physical alteration of the miniature space occurs during the interactive process, the environment radically changes in the mind of each human participant. Each person has a different mental model of the spatial and interpersonal relationships that compose the virtualized family. This has been achieved by co-relating virtualized semantic links and haptic spatial controls – one has to grab the light over the father’s seat to make him join or start a conversation.

The virtual family and its story exist only in the memory of the people who have inter-acted with the work, yet they are malleable only through spatial and physical alterations of the miniaturized living room. This creates a coherent and manipulable system of spatial – the physical miniature living room – and semantic – virtual relations between imaginary participants – components otherwise unachievable.

4.1.3 Agent-human interactions

Agents with social capabilities

Any truly interactive system involves two equally empowered parts in reciprocal informational exchange.
The resulting system is what Maturana calls an autopoietic system – a homeostatic entity dependent on its own organizational rules[35]. Some explorations have attempted to focus on this dialogical relation between humans and machine-based environments. Bickmore and Picard’s research on long-term human-computer relationships explores socially programmed relational agents based on dialogue. Their project consisted of an anthropomorphic animated exercise advisor for MIT FitTrack. The “embodied agent” was designed with both non-verbal behaviors “used for co-verbal communicative and interactional functions”[2] – i.e. gaze, hand gesture, body posture, facial expressions and presence – and verbal behaviors within four “conversational frames” – task-oriented, social, empathetic or encouraging. These verbal behaviors were either close forms of addressing the humans, exchanges of empathic information, dialogues of social information, dialogues of information related to training tasks, or behaviors seeking future interaction.

Bickmore and Picard’s evaluation of their agent suggested that “people will readily engage in relational dialogue with a software agent, and that this can have positive impacts on users’ perceived relationship with the agent.”[2] Furthermore, it showed that it is possible to design agents with social dialogue capabilities, given a clear understanding of the interactive process that arises from a bi-dimensional and conversational interaction.

Agents that control humans

Some works, such as Mackay’s, explore a more intricate relationship of control flow between the parts that build an interacting system. Their McPie Interactive Theater “explores an unusual style of interaction between human users and visual software agents”[43]. They propose an animated figure – the McPie Character – designed with a sole goal: “shape users’ behavior”[43].
In the experiment presented by Mackay, the system’s agent was instructed to prompt users to “tap their heads with their arms”. This was done by making the agent respond “interestingly” to humans moving their arms and “enthusiastically” to users tapping their heads.

The investigators analyzed a number of users of the McPie Interactive Theater and concluded that three types of interaction generally arose: trying to manipulate the agent, identifying with the character as a representation of oneself, and trying to establish a meaningful communication with the agent. Within these interacting strategies, “some [users] did tap their heads”, but as part of interactions that “were more complex”[43] and usually within social or random gestures. This showed that specific human responses can be controlled by agents if said actions lie within the social and random rules used by humans to control such an agent.

4.1.4 Technologically enhanced built architectures

Architecture as stage for information

Some researchers, especially architects, have hypothesized that the built environment can be used not only as an interface to information – as proposed by Hiroshi Ishii[36] – but as a medium and enhancement for communication. According to these investigators, the architectural structure of the built environment can be conceived as information and thus transformed into a canvas allowing informational interconnectivity between humans, agents and the urban reality. Kas Oosterhuis conceives a building as “a unibody, as an input-output device”[54] where the built structure – doors, windows, walls, etc. – allows the control of information to take place. In his words, all “processes run by buildings, products and users together play a key evolutionary role in the worldwide process of the information and transformation of information”[54]. This conception of the architectural structure led researchers to explore the concept of inhabitable interfaces.
Jeffrey Huang and Muriel Waldvogel present the swisshouse as an architectural implementation that “allows unsophisticated users to collaborate and be aware of each other over distance.”[33] The prototype is described as a choreography of “interactive elements in the space” that “allows users to instantly separate, combine, and customize environments for specific collaborative activities.”[33]

The 3,200 sqft swisshouse building has been equipped with lumen projectors, high-resolution plasma screens, panoramic cameras, ambient speakers, streaming hardware, an audioconferencing system, microphones and radio-frequency identification readers, all discreetly integrated into the architectural configuration of the space. After using this structure for remote conferencing and teaching, informational support of art exhibits and collaboration in project development, the researchers agreed that the

. . . power of an architecture driven communication interface seems to be partly due that it does not emphasize the deployment of ever newer or more sophisticated technologies to be embedded ubiquitously, but instead focuses on shaping new types of physical environments that have a relation to specific geographical places and that people can inhabit to communicate with other people in other geographical places.[33]

The success of the swisshouse lies mainly in architectural and functional capabilities that are independent of the technology used to enhance them. In other words, the media technology serves only as an enhancer of the informational flow already provided by the building’s architectural structure.

Architecture as fluctuating virtual-real form

However successful these models have proven to be for videoconferencing, tele-existence and the presentation of information supporting real data, they have underestimated the informational capabilities of the physical structure that contains them.
Because the previous explorations were done in areas of study focusing on the application and development of technology, they have rarely taken into account the perceptual and aesthetic component that architectural form adds to the interactive process.

Asymptote’s – Hani Rashid’s – participation in the 2000 Venice Biennale, the FluxSpace 2.0 Pavilion project, explored the aesthetic and informational capabilities of architectural structure. The construction, measuring “thirty meters in length and [rising] two stories in height”[57], was a pneumatic form sustained by a metal structure designed to create “a tangible oscillation between the physical exterior and the fluid continuously reconfigured state of [the] interior”[57].

Enclosed by the metallic frame, two rotating one-way mirrors constantly changed the visual characteristics of the space, while two 180-degree cameras simultaneously broadcast nearly 1.6 million variations of the mutating space to voyeuristic observers around the globe. In this project, technology was used “not only as a tool of production and representation but as a means of engagement via interactivity and connections to global networks.”[57]

In the words of Rashid, the FluxSpace 2.0 Pavilion “sought to engage an audience including but not limited to visitors to the Biennale by providing a simultaneous spatial experience for a virtual audience.”[57] Both real and virtual experients, however, perceived their observed reality as a fluctuating environment. This exploration proved to be a streaming informational system tied to a network of perceptions through time.

4.2 A pilot study of interactive spatial perception

A pilot study was performed to observe and measure how humans interact with static and changing spaces. The experiment was an initial approach to the theory of an autopoietic system composed of a human cyborg and a space cyborg.
4.2.1 Device description

The experiment consisted of a defined spatial location of 2.5 by 2.5 meters and a lateral projection on a wall screen provided by a projector located at a height of 1.5 meters from the floor and 5 meters from the wall. The space included 3 boxes of 45 by 45 by 45 centimeters, colored red, green and purple. Two spaces – private and public – were defined and marked with tape on the floor. Initially all the boxes were placed in the private section and the subjects in the public section of the space. Figure 4.2 presents the prototype being used.

A Polhemus Fastrack attached to a Linux computer read the (x, y, z) position of each box and of the human experients. Sensors were attached to the bottom center of each box and to the lower back of the subjects through a belt. A continuous recording of human and object positions was translated into a plan diagram showing the motion patterns and overall position during a specific stimulus.

4.2.2 Measurements

Measurements in this study focused on the analysis of

• the motion and position patterns recorded,
• the objects produced during a set of tasks during the experiment,
• the notes taken by the experimenter, and
• the experients’ subjective perception of the experiment.

Figure 4.1: General setting and dimensions. (a) General setting. (b) Dimensions.

A post-experiment questionnaire was handed to the experients to understand their subjective perception of the space; it consisted of a closed-ended section – targeted at measuring the embodying and perceptual differences between test subjects – and an open-ended part – aimed at understanding the personal perception of the space.

4.2.3 Experimental design

A group of 6 subjects – 3 males and 3 females, aged 20 to 39 – were asked to perform 6 tasks in about 20 minutes.
All experients were informed that each of these tasks would measure their ability to creatively solve problems under timed conditions and with limited resources. A plastic bag containing 3 pieces of paper and a pencil was provided to each experient before the experiment began. On the floor, within the public area of the space, 6 folded pieces of paper were placed. Subjects could only see the number on each piece of paper and had to pick up and unfold the paper to see its content (table 4.1). Experients were instructed to take each of the papers in ascending order and read the instructional phrase within. Each instruction had to be completed before continuing to the next one. Task completion was not a critical measurement; thus, the subjects were told that judgment about the completion of their tasks was to be made by themselves. Finally, the participants were not allowed to communicate with the experimenter or reach anything outside the pre-defined perimeter until the experiment was finished.

Figure 4.2: Stimulus presented.

The tasks created and listed in table 4.1 were intended to involve the users in a semi-awareness state fluctuating between awareness of their surroundings and introspection. Creative tasks were used to achieve this. However, creation can be of two natures:

• Semantic: where a strong relationship to mental construction is needed, and
• Physical: where mental effort is reduced and physical effort is encouraged.

An analysis of several tasks was performed in order to predict the type of creation that would result from each one. A short list was then created, based on the semantic or physical characteristics of the expected products, and 6 tasks were finally chosen for the present pilot study. Table 4.1 presents the tasks chosen.

Task No.  Instructional Phrase            Type of creation
1         The short story of a “pixel”    Semantic
2         Tree-house/Doll-house           Physical
3         Drawing diary                   Semantic
4         5 minutes sleep                 Physical
5         Shakespeare                     Semantic
6         Modern Sculpture                Physical

Table 4.1: Initial pilot study, tasks to be performed by test subjects.

Treatments applied

Nine randomized visual spatial stimuli were presented to the experients for 2 minutes each. Each stimulus consisted of a projected square area of 2.5 by 2.5 meters divided into two vertical areas, left and right. Each area could be illuminated with one of three colors – red, green or purple – creating a collection of nine stimuli.

Because the present experiment dealt with the measurement of cognitive and kinesthetic actions arising from a collection of visual spatial stimuli, it was imperative to make sure that each cognitive state to be measured was related to one, and only one, stimulus. It was hypothesized that by presenting an unconscious, yet perceived, transient stimulus between each experimental visual spatial stimulus, cognitive states would be separated by a third cognitive state belonging to said transient stimulus. As a result, the cognitive states belonging to the experimental conditions would be fundamentally different. According to Kirchner and Thorpe[41], at approximately 120 ms the brain can begin to determine the content of a flashed attended image without being conscious of the stimulus. Following this rationale, a transient image of approximately 200 ms was included in the transition from one stimulus to the next. Figure 4.3 presents an example stimulus with only the left area lit in red. Table 4.2 presents all the stimuli used in the experiment.

Experiment flow

The experiment lasted approximately 35 minutes (figure 4.4).
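The stimulus schedule just described – nine randomized two-minute stimuli separated by white transients of approximately 200 ms – can be sketched in a few lines of Python. This is an illustrative sketch only; the thesis does not publish its implementation, and all names here are hypothetical:

```python
import random

COLORS = ["red", "green", "purple"]
# Nine stimuli: each color shown on both areas, the left only, or the right only.
# None means the area receives no stimulus.
STIMULI = ([(c, c) for c in COLORS]
           + [(c, None) for c in COLORS]
           + [(None, c) for c in COLORS])

def build_schedule(seed=None):
    """Return ((left, right), duration_seconds) pairs: the nine stimuli in
    random order, with a ~200 ms white transient between consecutive ones."""
    rng = random.Random(seed)
    order = STIMULI[:]
    rng.shuffle(order)
    schedule = []
    for i, stimulus in enumerate(order):
        if i > 0:  # transient separator between experimental stimuli
            schedule.append((("white", "white"), 0.2))
        schedule.append((stimulus, 120.0))  # experimental stimulus, 2 minutes
    return schedule
```

A presentation loop would walk the schedule and render each (left, right) pair for its duration; the transient frames are intended to separate the cognitive states attached to consecutive experimental stimuli, as argued above.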
During the first 5 minutes the experimenter carefully explained the procedure of task selection and completion and helped the experients position the Polhemus sensor that would track their position. During the following 20 minutes, users would focus on task completion while the experimenter took observational notes. Finally, during the last 10 minutes, the participants would answer the post-experiment questionnaire and an informal interview would take place.

Figure 4.3: Initial pilot study, example stimulus. Note that for representation purposes the right area is colored gray, although in reality there was no visual stimulus presented in said area.

Left     Right    Stimulus type   Duration
Red      Red      Experimental    2 minutes
Red      –        Experimental    2 minutes
–        Red      Experimental    2 minutes
Green    Green    Experimental    2 minutes
Green    –        Experimental    2 minutes
–        Green    Experimental    2 minutes
Purple   Purple   Experimental    2 minutes
Purple   –        Experimental    2 minutes
–        Purple   Experimental    2 minutes
White    White    Transient       200 ms

Table 4.2: First pilot study, stimuli used (– indicates no stimulus in that area).

Figure 4.4: Experiment flow (introduction, 5 minutes; interaction, 20 minutes; assessment, 10 minutes).

4.2.4 Results

At first, subjects seemed not to respond to the stimuli in a controllable and measurable pattern; however, a closer analysis of the data showed that experients had strong reactions and repeated patterns. Analysis of movement and decision making, pairing each stimulus with the resulting patterns, showed that kinesthetic responses were not direct but involved complex perceptive structures.

Behavioral reactions to stimuli

Following Dehaene[14], the responses or reactions obtained in the experiment have been classified as follows:

• Conscious: Attention to the stimulus presented, with strong responses. Subjects react to color stimuli at a cognitive level.
Entering the space, subjects needed to create a categorization of both the images presented and the objects available. Because color differentiation was evident, subjects needing to move a box picked the one matching the visual stimulus.

• Pre-conscious: Attention to a previous stimulus, with strong responses. Subjects reacted to previously primed colors. Both color and direction were carried onto subsequent tasks.

• Subliminal-attended: Attention to the present stimulus, with weak responses. Subjects seemed to understand the intrinsic relationships between spaces: if an object was put in the private section it was kept there until proper appropriation of the privacy of that area had been performed. Primed colors and directions affected their interaction with such objects.

Figure 4.5 presents the motion patterns of a participant – black dots – and of the color cubes – colored dots – during each randomized stimulus. Visual analysis of the behavioral actions of this subject shows that during the first three stimuli the boxes selected correspond to the same color as the one being presented.

Figure 4.5: Motion patterns of a subject according to each randomized stimulus presented for 120 seconds.

Private-public distinctions

The initially provided public/private dichotomy was strongly effective. During the initial moments of the experiment, subjects interacted with the boxes within the private section; however, with time of interaction came an appropriation of the private space. The result was evident: the boxes could now move into the public space.

Creation of a Home Location

All subjects created a Home Location where they felt comfortable upon entering the prototype. Whenever introspection was needed they would return to that Home Location.
In that location, awareness of the space was reduced to a minimum, but stimuli presented during that reduction proved to be carried into the following creative tasks. That is, after introspection subjects would choose colors and motions related to the stimulus presented during introspection, and not to the stimulus presented during task completion.

Gendered space

Male and female users interacted differently with the space. Male users tended to play with objects, while female subjects tended to create more semantically oriented products by writing on or labeling the cubes (figure 4.6). Differences in posture were also evident; most male subjects kept a standing posture while female subjects tended to adopt comfortable sitting or crouching positions.

Figure 4.6: Arrangement of a female participant with tags over color cubes.

Qualitative analysis showed that female participants took longer to adapt to their new surroundings and cautiously took over their properties; however, once they had finished this appropriation they tended to protect it more than male participants did. This was revealed by most female participants asking the experimenter to properly record the position of the spatially arranged color boxes. On the contrary, male participants did not seem troubled when the boxes were returned to their original place at the end of the experiment.

4.3 Training wheel study

The previous pilot study showed that humans might respond predictably to spatial stimuli. The belief that these responses were decipherable was the main drive for a further study exploring this embodying relationship between humans and spaces. The initial interest of the training wheel study was to conceive an architectural space of cyborg-like nature that could promote controllable kinesthetic actions in human beings.

An autopoietic system is made of bi-directional relationships between its interacting parts.
Demonstrating that interactive and perceiving spaces are able to take control over their inhabitants is the first step in proving that such entities have the ability to affect, in a predictable way, their perceived world – formed of human inhabitants.

Findings from the previous pilot study showed that human perception of space is not purely spatial, but linked to semantic structures that interact and form while living in a space. It was hypothesized that by promoting previously measured mental states through spatial stimuli it could be possible to promote voluntary actions in human beings. By creating a model of such mental states, human kinesthetic actions could be controlled in quantity and direction. In this sense, the prototype would work as a human training wheel, promoting specific voluntary movement by suggesting it.

Control based on reflective consciousness manipulation

According to Zelazo’s[72] model of consciousness published in 2004, a conscious entity has scaling levels of reflection and grouping of information that allow it to interconnect meaningful entities and create complex mental associations. In his model, the most basic conscious iteration is called MinC – or minimal consciousness; this level of action-reaction gives any living being the ability to respond to external stimuli. As the organism evolves it begins to form higher levels of abstraction: on a second level it forms a recursive consciousness able to label its actions, and further re-entry iterations allow the organism to create groupings of labels in semantic constructions. More reflective levels are then added to achieve more complex mental structures.

Stimuli can be conceived as being of either textual or contextual nature. Textual stimuli are those where the focus of attention of a perceiving subject lies – the specific perceived stimulus that an animal is attending to at each moment.
Contextual stimuli are secondary stimuli that are related to the textual stimulus. Following this scheme, the present study proposes a theoretical model for controlling higher levels of consciousness, i.e. reflective consciousness, based on two actions (figure 4.7):

• Disruption: The cognitive model of consciousness and perception is disrupted by using a primed stimulus or by abruptly breaking the spatial mental map created through experience.

• Encapsulation: A textual stimulus is tightly related to one or several contextual stimuli.

A disruption action deals directly (figure 4.7(a)) with the mental representation of space. For example, if we could control the laws of physics and dramatically eliminate the force of gravity for a moment, it would be possible to disrupt the mental model that dictates that all things fall to the ground. In the present experiment the disruption was applied to the laws that control the state of the interactive space and give it its spatial representation. As will be shown later in this document, participants in the experiment are stimulated with color depending on their location within the prototype. Moving in a specific direction causes the projector to generate a specific stimulus – e.g. moving left causes a green stimulus and moving right a blue one. By inverting this directional control we can dramatically disrupt the mental representation of this spatial relationship – e.g. moving left now causes a blue stimulus while moving right causes a green one.

An encapsulation action (figure 4.7(b)) deals with related stimuli that otherwise would not be dependent on each other. For example, by making a kinesthetic effort we can move through space and explore its properties.
In order to walk forward we lean our body towards the direction we want to go and coordinate this motion with a complex movement of muscles in legs and arms that allows us to maintain equilibrium and balance our weight a step forward. The same effect of moving through space can be temporarily achieved by a system that allows participants to displace their position in the world without displacing their body, by slightly moving a mini-joystick with a thumb. The stimuli generally related to kinesthetic effort are then encapsulated into that thumb effort. In the present experiment such a tool has been created: by moving an LED on top of a coffee table, experients can move in space without any effort and receive the same stimuli as if they had.

Figure 4.7: Two theories of control based on reflective consciousness manipulation. (a) Disruption. (b) Encapsulation.

4.3.1 Device description

The prototype consists of a neutral space of 2.5 by 2.5 meters and a projection wall receiving a 2.5 by 2.5 meter lateral projection of a filled color square. This projection is the only quality of the space and its only source of light. A camera placed on the projector takes the high luminance values and converts them to colored pixels to be placed on top of the filled square. Proper alignment of the camera allows the system to “paint” white objects with any desired color. A second camera on the ceiling keeps track of any human participants and locates their center. This location point is then translated to an (x, y) coordinate within the camera’s view, as depicted by figure 4.8.

Figure 4.8: Coordinate system (x runs from 20 to 320, left to right; y runs from 40 to 240, farthest from the screen to closest to it).

In order to test the previously exposed encapsulation theory, a purple corrugated box of 45 by 45 by 45 centimeters was enhanced with a clear plastic window of 7 by 5 centimeters.
A camera was suspended inside and used to track the location of an LED placed on top of the clear window. The LED was attached to a 3V lithium battery and given to the subject as a Power Key to control the space through this interactive coffee table. Figure 4.9 presents these devices.

The position of either the human participant or the Power Key over the interactive coffee table was recorded in memory at a latency of 1 second. After 2 minutes the recorded positions were drawn onto an image and the contents of the memory deleted. A new collection of tracked points was then initialized. This produced a collection of clustered position recordings over time that can be easily analyzed and compared.

Figure 4.9: Interactive coffee table and Power Key. (a) Interactive coffee table. (b) Power Key.

Finally, the position sensed by either the ceiling-mounted camera (figure 4.8) or the interactive coffee table is mapped to a location in a BGR color space. Position values on the y axis represent a green value from 0 to 255 and position values on the x axis represent a blue value from 0 to 255. Z (the red component) is maintained at 0. The resulting (x, y, z) BGR value is then projected onto the wall (figure 4.10). A spatial mental map of this space can be represented by a BG plan, as represented by figure 4.11(a).

Figure 4.10: A participant interacting with the space.

4.3.2 Measurements

The present study focused on showing that it is possible to create a prosthetic connection between a human and a space. The theory suggests that by altering visual spatial stimuli, human kinesthetic responses can be controlled efficiently. In order to test these hypotheses several measurements were taken:

• Human Position: Human position over time was recorded and drawn onto images each containing 2 minutes of recordings. The collection of images was analyzed for trends and patterns.
• Power Key Position: The position of the Power Key on top of the interactive coffee table was recorded and drawn onto images each containing 2 minutes of recordings. The collection of images was analyzed for trends and patterns.

• Location Iterations: The count of total human locations over the span of the experiment was broken down into graphs representing the location iterations on both the x and y axes, i.e. the number of recorded seconds that a human spent at the same x or y location. The collection of graphs was analyzed for trends and patterns.

• Home Displacement: The geometrical distance between a found Home Location and a new location, to which subjects were expected to be moved through changes in space, was computed and analyzed using ANOVA. The rationale is explained below.

These measurements were created to test the presented theory of control based on reflective consciousness manipulation through two actions, namely disruption and encapsulation. Table 4.3 presents the measurements in relation to that control theory.

Control Method | Measurement
Disruption     | Human Position; Location Iterations; Home Displacement
Encapsulation  | Power Key Position

Table 4.3: Measurements.

Encapsulation was measured through qualitative analysis of the recorded states of a minimized model of the experimental space. The construction of an interactive coffee table with the same spatial map used for kinesthetic interaction allowed the encapsulation of kinesthetic movement into movement of a Power Key. Disruption, however, was analyzed by measuring specific alterations of the mental map formed by human perceivers. It was hypothesized that there exist two methods of modifying a mental map of spatial properties: altering the components of such a map, or inserting an external component to mutate it.
The spatial characteristics of the current prototype depend on a bidimensional BG model (figure 4.11(a)); therefore, two alterations of this model are possible:

• Bidimensional: Altering the BG components of the bidimensional model.

• Three-dimensional: Including a third component, in this case red values (R), to form a three-dimensional spatial model (BGR).

4.3.3 Experimental design

Treatments applied

The present prototype has a mental representation of bidimensional properties (figure 4.11(a)). Two types of alteration of such a BG plan have been previously defined, namely bidimensional and three-dimensional. The present study implemented these alterations as follows:

• Bidimensional: A bidimensional alteration is the result of switching the x and y values of the BG representation of the space. This results in a GB plan differing from the initial BG one. In the present implementation this transformation is instantaneous, i.e. the values are interchanged once.

• Three-dimensional: A three-dimensional alteration is the result of adding a third component R to the BG representation of the space. In the present prototype this transformation was performed gradually, i.e. the R value (0 - 255) increased over time and followed a bidimensional alteration. This results in a GBR cube.

According to previous findings, human subjects have two states while interacting with a space: a Home Location, defined at their first contact with their new environment, and a state related to action performance. Each state has specific qualities of kinesthetic activity, awareness and assumed reflective consciousness control.[11] When initially interacting with space, humans search for a safe location (Home) to which they return for introspection and creative activity.
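The position-to-color mapping of section 4.3.1 and the two alterations just defined can be sketched as follows. This is an illustrative reconstruction, not the prototype's actual code; the 320 by 240 frame comes from figure 4.8, and the function and parameter names are hypothetical.

```python
CAM_W, CAM_H = 320, 240  # ceiling-camera view (figure 4.8)

def position_to_bgr(x, y, swapped=False, red=0):
    """Map a tracked (x, y) position to the projected BGR color.

    x drives the blue channel and y the green channel, as in the
    original BG plan (figure 4.11(a)). `swapped=True` applies the
    bidimensional alteration (the x and y values of the BG
    representation are interchanged); a nonzero `red` adds the
    gradually ramped third component of the three-dimensional
    alteration.
    """
    blue = int(round(x / CAM_W * 255))
    green = int(round(y / CAM_H * 255))
    if swapped:
        blue, green = green, blue
    return (blue, green, min(red, 255))

# Unaltered space: fully right and closest to the screen is pure blue.
print(position_to_bgr(320, 0))                # (255, 0, 0)
# The same position after the bidimensional disruption becomes green.
print(position_to_bgr(320, 0, swapped=True))  # (0, 255, 0)
```

In the prototype the `red` ramp would be driven by elapsed time during the three-dimensional alteration, after the swap has been applied.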
This location can be determined during the first moments of interaction with space by finding the highest count of position iterations (the total count of seconds that a human subject spent at a given position). This procedure yields an (x, y) pair that represents the initial Home Location of a human being interacting with space. It has been hypothesized that the Home Location is representable both in (x, y) coordinates of real space and in (x, y) coordinates of a BG plan.

Because the interactive model implies that a location in space is linked to an (x, y) BG state of the space, an alteration (bidimensional or three-dimensional) of such a model implies that the previously computed Home Location has been translated to a new (x, y) coordinate belonging to the new mental map. Because the alteration of the model is known, it is possible to compute the exact translation of the Home Location through:

    BidisplacedHomeLocation = (y, x)

[11] Findings of the pilot study of interactive spatial perception.

Figure 4.11 presents the spatial model and its alterations. Figure 4.11(a) presents the original BG plan that represents the cognitive spatial model formed by the interacting participants. Moving towards the right of the space and farthest from the projection screen would result in the projection becoming blue. Moving towards the left of the space and closest to the projection screen would result in the projection becoming green.

Figure 4.11(b) presents a bidimensionally altered space map. The Cartesian coordinates have been interchanged, thus if a subject moves towards the left area closest to the screen he would experience a blue illumination, while moving to the right area farthest from the screen he would experience a green illumination. The theoretical Displaced Home Location is represented by the red dot and its translation by the red line. Figure 4.11(c) presents a three-dimensionally altered space map.
A third dimension consisting of red values has been added continuously to the color space. Figure 4.11(d) presents the spatial mental map as it would be conceived by a test subject.

Figure 4.11: Space model and alterations performed. (a) Original. (b) Bidimensional. (c) Tridimensional. (d) Tridimensional, as conceived by a test subject.

According to our previous findings, humans will always return to their Home Locations during introspection. It can be hypothesized that if a successful alteration of the mental map has been performed, the Home Location will correlate with such mental map. In other words, humans will return to the displaced Home Location. During the alteration of the spatial mental model a new most frequent location, presumed to be the new home location chosen by participants, was computed following the same methodology used to compute the initial Home Location: the highest count of position iterations was found on both the x and y axes.

This yielded a New Home Location that could be compared to the Displaced Home Location to test the present theory. The distance between each of these measurements was computed geometrically by finding the vector magnitude between each pair of points using equation 4.1:

    D = √((Xnewhome − Xhome)² + (Ynewhome − Yhome)²)    (4.1)

where (Xhome, Yhome) defines the Home Location or the Displaced Home Location, and (Xnewhome, Ynewhome) defines the New Home Location. Equation 4.1 was used in both bidimensional and three-dimensional alteration measurements.

Finally, the previous pilot studies showed that interaction with space is dependent on different levels of attention. The different tasks that humans undertake in a space determine the level of engagement and have strong effects on the cognition of the space, and thus on the interactive process that arises between humans and cyborg spaces.
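Equation 4.1 and the bidisplaced Home Location can be written as a short Python sketch. The function names and the coordinate values are illustrative only.

```python
import math

def bidisplaced(home):
    """The bidimensionally Displaced Home Location: because the BG
    plan is interchanged, a Home at (x, y) maps to (y, x)."""
    x, y = home
    return (y, x)

def home_displacement(expected, new_home):
    """Equation 4.1: the magnitude of the vector between the
    (Displaced) Home Location and the measured New Home Location,
    both (x, y) pixel pairs."""
    (xh, yh), (xn, yn) = expected, new_home
    return math.sqrt((xn - xh) ** 2 + (yn - yh) ** 2)

# Illustrative values only: a Home at (230, 55) in the 320 x 240 view.
home = (230, 55)
print(bidisplaced(home))                                # (55, 230)
print(home_displacement(bidisplaced(home), (58, 226)))  # 5.0
```

The same distance function applies to the three-dimensional case; only the expected Displaced Home Location changes.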
To measure the effects of engagement and attention three tasks were defined:

• Non-Immersive: A low level of engagement is promoted by driving participants' attention to a complex task requiring concentration. Users were given a task that required constant attention to a written tutorial and the creation of an academic summary.

• Semi-Immersive: An intermediate level of engagement is promoted by shifting the attention of participants between the space and a task requiring concentration. Users were given a task that required the creation of a drawing related to the subject's imagination.

• Immersive: A high level of engagement is promoted by driving participants' attention to the space. Users were given no task but to interact with the space.

Statistical analysis

The experiment was designed as a Completely Randomized Factorial ANOVA with two factors, CRia, where i = immersion level and a = alteration type. Alteration type, i.e. bidimensional and tridimensional alteration of the space, was considered a within-subjects factor, and immersion level was considered a between-subjects factor. The 3 x 2 design is presented in table 4.4.

Task group     | Bidimensional Alteration | Tridimensional Alteration
Non-Immersive  | ...                      | ...
Semi-Immersive | ...                      | ...
Immersive      | ...                      | ...

Table 4.4: Experimental design for the human training wheel experiment.

Nine subjects, 6 males and 3 females aged 20 to 39, were randomly assigned to one of the three immersion groups, i.e. each group had 3 subjects. The analyses used an alpha level of .05 for all statistical tests.

Experiment flow

The experiment lasted 45 minutes and was divided into three parts (figure 4.12(a)). During the first part, lasting 5 minutes, experients were instructed on the flow of the experiment and signed a consent form. The second part, lasting 18 minutes, was dedicated to interaction and measurements.
Finally, during the last 10 minutes test subjects answered a post-experiment questionnaire and participated in a filmed interview.

Figure 4.12: Experiment flow. (a) Experiment flow: Introduction, 5 minutes; Interaction, 18 minutes; Assessment, 10 minutes. (b) Interaction flow: Human, 2 minutes; Power Key, 6 minutes; Human with bidimensional alteration, 5 minutes; Human with tridimensional alteration, 5 minutes.

The second part of the experiment, lasting 18 minutes, was divided into four parts (figure 4.12(b)):

1. During the first 2 minutes the space would respond to users' kinesthetic actions. The space would then compute a Home Location – the (x, y) position where subjects stayed most of the time.

2. During the next 6 minutes the system would respond to movements of the Power Key on the surface of the interactive coffee table.

3. During the next 5 minutes the system would respond, again, to users' kinesthetic actions, but this time using a bidimensionally altered spatial mental map – interchanging the x and y motion controls. A New Home Location – the (x, y) position where subjects stayed most of the time – would then be computed.

4. Finally, during the last 5 minutes the system would respond, again, to users' kinesthetic actions, but this time the space was three-dimensionally altered. A New Home Location – the (x, y) position where subjects stayed most of the time – would then be computed.

4.3.4 Results

Human and Power Key position

Different task groups showed different directional movements. Subjects from the Non-Immersive group had marked Home Locations, and displaced locations were rare. Subjects from the Semi-Immersive group also had marked locations; however, location translations were radical, as shown by peak differences between graphs. Subjects from the Immersive group had a wider range of positions, but aligned towards a fixed area, depicted by clusters of peaks in the same area, before and after alterations.
Differences between back-front and left-right movements are evident and might have been affected by the task definition. There were marked differences between task groups in the appropriation of the space and use of the Power Key interface (figure 4.13). Non-Immersive task subjects began a rapid exploration of the spatial capabilities of the space – promoted by the tutorial – however, later explorations were rare, and use of the Power Key interface was reduced to the minimum and only used to select ambient colors related to the initially chosen Home Position. Semi-Immersive task subjects showed a higher degree of appropriation and exploration of space qualities through human movement; however, Power Key interactions were few and only used to select a color or pattern for "inspiration". Immersive task subjects performed a high level of exploration of the space through movement and an average exploration of the Power Key interface. Subjects of this group would select an ambient color using the interface and explore their movement's resulting patterns in different parts of the space.

Figure 4.14 presents the motion recordings of a subject in the Semi-Immersive task group. The recordings are collections of 120 seconds of positions as seen by the prototype. The unmodified color space lasts from second 0 to second 120 (figure 4.14(a)). The bidimensionally altered space lasts from second 120 to second 960 (figures 4.14(b) to 4.14(h)), while the three-dimensionally altered space is experienced from second 960 to 1320 (figures 4.14(i) to 4.14(k)).

By observing the recordings it is evident that the subject remained mostly static during the first 120 seconds (figure 4.14(a)). Upon gaining access to the Power Key at second 240, the subject successfully encapsulated her kinesthetic actions in the tool and explored other positions – states – of the space (figure 4.14(c)). These
positions would later be searched for by kinesthetic action, especially after second 720 (figure 4.14(g)). Comparing second 120 (figure 4.14(a)) and second 1200 (figure 4.14(j)), it is clear that the subject had displaced her home location to a new spatial state during the three-dimensional alteration.

Figure 4.13: A participant interacting with the Power Key.

Location iterations

Analysis of the location iteration graphs constructed with the data showed a clear difference between interaction groups. Visual inspection of the graphs showed an evident distinction between Immersive and Non-Immersive groups. Immersive groups showed location iteration graphs that were scattered across more locations on both the x and y axes, depicting constant motion across the whole experiment. Non-Immersive groups resulted in interaction graphs centered on one or two positions across the duration of the experiment and under all test conditions.

Figure 4.14: Recorded paths, with a latency of 120 seconds, of a test subject in the semi-immersive task group. Panels (a) through (k) show seconds 120 through 1320 in 120-second steps.

Furthermore, it was clear that color-space alterations – bidimensional and three-dimensional alterations of the spatial mental map – evoked motion in the experiment participants. The iteration graphs across all immersion groups showed that when the spatial model was altered, position was scattered on both the x and y axes. Users would move more and explore different locations in the space – depicted by a higher count of graph peaks.
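The location iteration graphs, and the Home Location computation built on them, amount to per-axis frequency counts over the one-sample-per-second position log. A minimal sketch, with hypothetical names and toy data:

```python
from collections import Counter

def location_iterations(positions, axis=0):
    """Count how many recorded samples (one per second) fall on each
    pixel along one axis -- the data behind the iteration graphs."""
    return Counter(p[axis] for p in positions)

def home_location(positions):
    """Home Location: the (x, y) pair built from the most-iterated
    x and y values, as computed during the initial 2-minute period."""
    xs = location_iterations(positions, axis=0)
    ys = location_iterations(positions, axis=1)
    return (xs.most_common(1)[0][0], ys.most_common(1)[0][0])

# Toy track: the subject lingers near (230, 120) and visits elsewhere.
track = [(230, 120)] * 90 + [(55, 40)] + [(175, 120)] * 29
print(home_location(track))  # (230, 120)
```

Plotting `location_iterations` per axis reproduces the peak-style graphs discussed here; the highest peak marks the (new) home.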
Figure 4.15 presents the iteration graphs of a subject in the Immersive group, constructed during the first part of the experiment (figure 4.15(a)), where the space was left unaltered and a home position was computed; the second part of the experiment (figure 4.15(b)), where the space was altered bidimensionally and a bidimensionally altered New Home Location was computed; and the third part of the experiment (figure 4.15(c)), where the space was altered three-dimensionally and a three-dimensionally altered New Home Location was computed.

The graphs clearly show that motion was more frequent during bidimensionally and three-dimensionally altered color spaces (figures 4.15(b) and 4.15(c)). This interactive effect was the result of the alterations performed on the spatial map of the environment. Furthermore, it is evident that although movement was performed during the first part of the experiment, most of the time was spent in a single location.

Figure 4.15: Location iterations for the x axis of a test subject in the immersive task group. (a) Unaltered space. (b) Bidimensionally altered space. (c) Three-dimensionally altered space.

The initially computed Home Location is depicted by the peak at pixel x ≈ 230. During the bidimensional alteration the computed New Home Location is depicted by the peak at pixel x ≈ 55. Finally, during the three-dimensional alteration the computed New Home Location is depicted by the peak at pixel x ≈ 175.

Home displacement

The distance between the first Home Location and the New Home Location computed during space alteration was analyzed. The total distance of movement in both bidimensionally and tridimensionally altered color spaces (figure 4.16) was found to be not statistically significant, F(2,15) = .258, p = .776, between treatment groups.
Further contrast analysis showed that even though the Semi-Immersive group appeared to score lower than the Non-Immersive group, the effect was not statistically significant, p = .483. This can mean either that there were not enough subjects to detect a statistically significant trend or that the space successfully achieved a displacement in the three different task groups without any difference.

Figure 4.16: Means of total distance of movement in both bidimensionally and tridimensionally altered color spaces.

Differences between bidimensional and tridimensional alterations (figure 4.17) were found to be not significant, F(1,12) = .128, p = .736. This suggests that bidimensional and tridimensional alterations have the same effect on the human and can be used interchangeably. A deeper analysis of each alteration showed that bidimensional alterations of the space were not significantly different, F(2,6) = .032, p = .968, between groups. In the same manner, the total movement achieved during tridimensional alterations of the space was not statistically significant, F(2,6) = 1.165, p = .374, between different task groups.

Figure 4.17: Means of total distance of movement in each of the bidimensionally and three-dimensionally altered color spaces.

The distances from both the bidisplaced and tridisplaced New Home Locations to the hypothesized Displaced Home Location, shown in figure 4.18, were analyzed. Analysis of the differences between task groups showed that for bidimensional alterations of the space the task treatments were not significantly different, F(2,6) = .226, p = .804, from one another.
Although tridimensional alterations showed a stronger effect, the differences between task groups proved to be not significantly different, F(2,6) = 2.793, p = .139. Contrast analysis showed that the apparent difference between tridimensionally altered Immersive and Non-Immersive subjects was not significant, p = .081. Analysis showed that the effects achieved across task groups through bidimensional alterations were not statistically different, F(1,12) = .014, p = .908, from the ones obtained through tridimensional alterations. This suggests that both spatial alterations achieve similar motion effects. Although no statistical significance was found between treatments, it is believed that moving a human being to a position 70 pixels away from a desired location – within a 320 x 240 pixel space – is quite successful.

Figure 4.18: Means of distance from the new home location to the hypothesized displaced home location.

Qualitative analysis

Abruptly changing the definition of the space proved not to be noticeable, yet the results were higher than initially expected. Subjects would try to re-gain control over their environment by finding the new location where the desired ambient visual stimulus was to be found. Some of them even felt frustrated by later changes in the color space.

Encapsulating a specific stimulus proved to be successful: users would try re-gaining control of the space and would usually use the tool to explore the spatial capabilities of their environment as if by moving in space. Most of the subjects would search for the initial home location using the new mental model. Since it was easier for subjects to explore the space through the key, they would engage in a moving spree to understand their space and thus gain a more complete mental map of their surroundings.
Most of the subjects (N = 6/9) found a prosthetic connection with their environment, feeling it comfortable and intuitive to control. However, only some (N = 4/9) felt that they maintained control of the spatial qualities. Most of the subjects (N = 8/9) agreed that all their kinesthetic actions were chosen of their own free will. Finally, most subjects (N = 7/9) understood the relationship between their location in space and the ambient color, although only some (N = 5/9) noticed a change in the color space, usually members of the Semi-Immersive and Immersive task groups. Only one subject found it difficult to re-create a plan section of the space.

Subjects in the Semi-Immersive and Immersive task groups were highly attracted by the primed color of white objects and by movement actions. They rapidly discovered that their movement would trigger a rewarding pattern and engaged themselves in motion activity to recreate, or even dissipate, the patterns. One subject commented "I couldn't get rid of the huge red blob", while another "loved the patterns that would appear by moving the white sheets of paper." The interactive definition of the primed stimuli engaged the subjects in motion, while ambient color was remembered most of the time and was linked to a location. Interactive visual stimuli promote movement, while static ambient visual stimuli promote location.

4.4 Spatial effects of visual stimuli

Spatial cognition is the result of a complex collection of physiological and cognitive processes that arise in egocentric and allocentric perception of the world. There are many theories on how humans create space and how it is stored in the brain for future use. These theories deal with stimuli that arise naturally or artificially in the world, e.g. texture, occlusion, stereopsis, etc., generally produced by objects situated in a three-dimensional or bi-dimensional reality.
However, the model for Cyborg Environments proposed by the present research theorizes the existence of an autopoietic system composed of interconnected space-perceiving entities (figure 3.9). There exists a large number of stimuli that could be used for this purpose and an infinite number of combinations and alterations that could be performed on such a collection. The need for a simple methodology to measure this inter-connection was the purpose of the present experiment.

Space has proven to have a strong relationship with human visual perception. Gibson suggests that "the basis of the so-called perception of space is the projection of its objects and the elements as an image, and the consequent gradual change of size and density in the image as the objects and elements recede from the observer"[27]. In Gibson's theory, texture plays an important role in defining the spatiality of the objects in the world: the farther away an object is from the perceiver, the denser its texture appears to be. These perceptions are used by the brain to form an organized and coherent surrounding. However, it is possible to theorize that even rougher stimuli could promote a spatial sensation comparable to the one arising from geometrical and textural means.

The present pilot study focused on the most simplified visual stimuli: light properties. Although several categorizations of light properties exist, the present study focused on measuring the spatial effect of hue, chromatic strength and contrast.

4.4.1 Device description

The device is composed of a darkened room of about 3 by 4 meters containing an e-lumens hemispherical display and projector that allows peripheral-view stimulation. A chair permitted experients to position their eyes at the same height as the projector's lens, and a mouse allowed them to control a pointer to make selections. The experiment setup is presented in figure 4.19.
An application was written in Python 2.5 that presented a seamless light stimulation across the whole display and allowed the experients to select among 6 buttons depicting spatial size. The selection interface was located at the far right of the screen – the visual field – and clear perception of it was only achievable by rotating the head 90 degrees to the right. All answers were stored in a text file for further analysis.

Figure 4.19: A user rating a specific visual characteristic.

4.4.2 Measurements

A large amount of research has been done on the effects that light properties have on perceived object distances. Work by Taylor and Sumner, for example, demonstrated that an "apparent nearness of bright colors"[67] is perceived under experimental conditions. Their explanations ranged from pupillary adjustments to perceived contour sharpness of the objects. However, the present study deals with the embodying sensation of space that an enclosure provides during its lifetime's alterations – i.e. the Cube Model for Cyborg Environments. Therefore, an approach that follows this definition is necessary. A decision was taken to explore a single condition of the Cube Model: a closed cube undergoing a membrane alteration. In other words, a surrounding stimulation that by changing its properties has an effect on the perceived form or scale of the enclosure. The measurements of the present pilot study would then have to focus on the perceived scale – size – of a surrounding composed of a simple visual stimulus.

Using the device described above, eight subjects were peripherally stimulated twice with 18 randomized configurations of light – a total of 36 stimuli. Table 4.5 presents the specific characteristics – hue, saturation and value – of the stimuli used. All values have been scaled to a normalized range from 1 to 255 to allow a comparison between them.
Tested characteristic | Test values                | Hue        | Saturation | Value
Hue                   | 5, 55, 105, 155, 205, 255  | test value | 255        | 255
Saturation            | 5, 55, 105, 155, 205, 255  | 125        | test value | 125
Value                 | 5, 55, 105, 155, 205, 255  | 125        | 125        | test value

Table 4.5: Stimuli tested and the h-s-v characteristics used to test the spatiality of visual stimulation.

Experients were asked to focus their vision on the center point of their frontal midline for at least 2 seconds and use a mouse to select a radio button on a scale – placed to the far right of their visual field – to answer the question: How big does your surrounding appear to be? The responses available were: tiny, very small, small, large, very large and huge. Each one had a value from 1 to 6, where larger numbers denote larger perceived spaces (table 4.6).

There are two types of spatiality defined by Foreman and Gillet: one is egocentric – referenced to the body midline – and the second is allocentric, or related to a model of the world. In the latter, space is constructed by understanding the relationship between its forming entities, which allows us to conceive extreme spaces like outer space or a rabbit's hole. Because the present experiment is interested in the spatial experiences developing from perceived stimuli, we will focus on measurements and priming related to inhabitable spaces. The researcher is aware, however, of other spatialities that might affect the results – like the conception of outer space or a hive.

Value | Response
1     | Tiny
2     | Very Small
3     | Small
4     | Large
5     | Very Large
6     | Huge

Table 4.6: Scale used to measure spatial size.

Constrained scaling

Following West and Ward's[70] methodology to account for idiosyncratic biases in psychophysical scales, the experients were trained to judge the scale of a space using a constrained scale.
All test subjects participated in a pre-test session where they were trained to judge the spatial size of a space depicted by a photograph. A software application was written in Python 2.5 that presented users with a pool of randomized images and provided a selection of 6 radio buttons following table 4.6. Users were asked to judge the size of the space shown by selecting a button and submitting their answer; appropriate feedback was given after each answer. Images of the software are shown in figure 4.20.

4.4.3 Experimental design

A-priori hue bias countermeasures

Humans tend to remember the characteristics of spaces they have visited in the past. Color vision is strongly linked to biological advantages in human beings, like pattern decoding or information recognition. The advantage is so strong that our perception of the world is both limited and enhanced by it, making previous experiences of color an important bias in the perception of space. It was hypothesized that hues – colors – remembered from previous spatial experiences could affect our measurements of spatial size. To counter this effect the experiment tested the effect of priming subjects with specific hue biases during the constrained scaling pre-test session.

Figure 4.20: Software used to train users in spatial perception with both color and non-color primers. (a) Non-color primed training software. (b) Color primed training software.

Two versions of the pre-test software were written, one for each of these testing conditions:

• The first group was trained to judge the spatial size represented by images in black and white, using a selection of 25 images (figure 4.20(a)).

• The second group was trained to judge the spatial size represented by images that were color-tinted according to five hue variations; only 5 images were used to train the experients (figure 4.20(b)).
The biases were randomized and are presented in table 4.7.

Value   Response
5       Small
105     Tiny
155     Very-Large
205     Huge
255     Small
Table 4.7: Biased sizes primed during pre-test.

It was hypothesized that if hue was indeed an important bias, the color priming used would have a significant effect on the second group, altering the participants' perceived spatial size in a predictable manner.

Treatments applied

Finally, in both hue-bias countermeasure conditions three types of stimuli were presented:

• Hue: Lars Sivik has suggested that it is not hue "which affects how exciting or calming a color is but the chromatic strength of each hue"[45]. Further work by Mikellides provides more evidence that saturation "is the key dimension affecting how exciting or calming a color is perceived"[45]. This suggests that the hue dimension of visual stimulation will not have a significant or controllable effect on spatial perception. Testing this condition allows the research to rule out any possible interaction where hue plays a role within other conditions.

• Chromatic Strength: Based on Mikellides' findings, the present research believes that changes in the saturation of a visual stimulus will have a significant effect on spatial perception. The stimuli will be modeled in an HSV space; while hue and value are kept constant, saturation will be varied.

• Contrast: Due to particles in the air, objects acquire "a reduction of [apparent] contrast in [their] proximal representation... depending on [their] distance from the viewer"[22]. This is called aerial perspective and is believed to provide the viewer with visual cues of distance. The research will test this with stimuli modeled in an HSV space with variance in value, keeping hue and saturation constant.
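Each treatment above varies exactly one HSV dimension while holding the other two fixed. A minimal sketch of how such stimulus colors could be generated, using Python's standard `colorsys` module (which works on 0–1 floats, so the 0–255 table values are scaled; the helper name is an illustration, not the original software):

```python
import colorsys

def stimulus_rgb(tested, test_value):
    """Return an (R, G, B) tuple in 0-255 for one stimulus,
    varying only the tested HSV dimension (cf. table 4.5)."""
    if tested == "hue":
        h, s, v = test_value, 255, 255
    elif tested == "saturation":
        h, s, v = 125, test_value, 125
    elif tested == "value":
        h, s, v = 125, 125, test_value
    else:
        raise ValueError(tested)
    # colorsys expects floats in [0, 1]
    r, g, b = colorsys.hsv_to_rgb(h / 255.0, s / 255.0, v / 255.0)
    return tuple(round(c * 255) for c in (r, g, b))

# A hue of 255 wraps around the hue circle back to red:
print(stimulus_rgb("hue", 255))  # (255, 0, 0)
```

Note that hue is circular, which is why stimuli at hue 5 and 255 are nearly identical colors.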
Statistical analysis

Characteristic   Values presented            Prime groups
Hue              5  55  105  155  205  255   Hue non-primed / Hue primed
Saturation       5  55  105  155  205  255   Hue non-primed / Hue primed
Value            5  55  105  155  205  255   Hue non-primed / Hue primed
Table 4.8: Experimental design for spatial size of visual stimuli measurements.

The experiment was designed as a completely randomized factorial ANOVA with three factors, CRpvc, where p = prime, v = value presented and c = stimulus characteristic. Prime was considered a between-groups factor; value presented and stimulus characteristic were considered within-groups factors. Eight subjects aged 19 to 29 were randomly assigned to one of the prime treatment groups, resulting in each group having 4 subjects. The 2 by 6 by 3 design is depicted in table 4.8. The analyses used an alpha level of .05 for all statistical tests.

Experiment flow

The experiment had a duration of 20 minutes (figure 4.21). During the first 5 minutes users were carefully instructed about the flow of the experiment and the tasks they would have to complete. During the following 5 minutes experients were asked to complete a training task where they would learn how to measure the size of a space using the constrained scaling software – the version used depended on the randomized hue-bias group the users were part of. After completion, experients would step into a darkened room where they would rate the spatial size of the stimuli being presented. Subjects' completion time varied, but never exceeded 10 minutes. No post-experiment interview or data collection was performed.

Figure 4.21: Experiment flow (Introduction, 5 minutes; Training, 5 minutes; Stimuli rating, 10 minutes).

4.4.4 Results

Figures 4.23(a) and 4.23(b) present the means of perceived spatial size according to each stimulus characteristic. The dotted lines present the color-primed group – primed with hue variances during the constrained scaling training session.
Finally, figure 4.23(c) presents the effect of color-priming on the perception of spatial size under hue variations.

A one-way ANOVA showed that hue had a significant effect on spatial size perception, F(5,90) = 13.360, p = .000. However, a-priori contrast analysis showed that values 5 and 205 were not significantly different, p = .144, and that values 5 and 255 were also not significantly different, p = .883. In other words, a significant effect on spatial size perception was only achieved from values 5 to 155. It is important to note that, due to the characteristics of the Hue-Saturation-Value space, hue values 255 and 5 are almost indistinguishable.

Light saturation did not prove to have any significant effect on spatial size perception, F(5,90) = 0.620, p = .685. Contrast analyses showed that no significant difference between values was achieved.

On the other hand, the value characteristic of the stimuli proved to have a significant effect on the spatial size perceived by human beings, F(5,90) = 76.367, p = .000. Furthermore, contrast analysis showed all values to be significantly different from each other. A look at the plotted means in figure 4.23(b) shows that a predictable effect was achieved by varying the stimulus intensity – value. A wider range of spatial sizes was obtained and the effects were sufficiently controllable. By performing a linear regression analysis (figure 4.22) we can state that the perceivable size of an environment is measurable, on a 1 to 6 scale, with the function:

S = 0.016v + 1.3

where S = size and v = color value; S is to be read by approximation to table 4.6. A further analysis with light intensity in lumens should be performed, but is outside the scope of the present document.

Figure 4.22: Linear regression on perceived spatial size of color value alterations (S = 0.016v + 1.3).
Color-priming effect

A univariate analysis was performed to look for a significant difference between the primed and non-primed groups for each of the light characteristics tested. There was a significant difference, F(1,84) = 11.397, p = .001, between primed and non-primed groups in the hue characteristic. There were also significant differences, F(1,84) = 8.262, p = .005, between primed and non-primed groups in the saturation characteristic. Finally, differences were significant, F(1,84) = 5.764, p = .019, in the value characteristic.

Although significant differences existed across all measurements between the group that was primed with color and the one that was not, an in-depth contrast analysis showed that these differences were not constant: only certain values were significantly different. No apparent and consistent effect was found.

An analysis of the effects that this hue-based priming had on hue perception supports this point. Figure 4.23(c) shows that although the color-primed group's spatial size perception differs from that of the non-color-primed group, the trend does not correspond to the trend imposed in the pre-test. Furthermore, it seems that experients perceived the space as bigger when they were color-primed.

Figure 4.23: Means of spatial size perceived with different peripheral light stimulations – (a) means over all tests, (b) means over non-color primed, (c) effect of hue values primed.
Contrast analysis showed that a significant difference was only achieved at value 55, F(1,14) = 12.444, p = .003, and value 155, F(1,14) = 5.211, p = .039, and the differences do not follow the trend imposed by the priming – value 55 should have been lower for color-primed groups. These inconsistencies suggest that the differences found in the analysis are due to a very small pool of users – only 8 persons were tested – and not a direct effect of color priming. The belief that the color of a remembered space could affect the perception of a space based on pure hue stimulation proved to be false. This liberates the present measurements from idiosyncratic and cultural effects that color might have on spatial perception. Further studies with more test subjects should be performed to strengthen this point.

4.5 Spatial effects of directional light intensity

The previously exposed pilot study on spatial effects of visual stimuli suggested that peripheral visual stimulation can evoke strong and controllable spatial experiences in human beings. However, humans are rarely in a fixed position.12 Head rotations, limb movements and displacement of the body within space result in the perception of stimulus variations that help the brain update the spatial mapping of the world where the body acts.[52] Humans rely on the fact that acting upon the world – e.g. moving the body to a different location – will result in correlated stimulus changes. This fact allows perceiving subjects to update their model of the world appropriately.

A study was performed to attend to the question of how much stimulus variation should be correlated to human movement in order to allow cognitive maps to develop.
There is a strong relationship between kinesthetic motion and spatial mapping [52] that arises in the navigation of real environments, and consistent evidence was found in the previous studies of the present research that movement is an important factor in the perception of, and interaction with, space. Human movement can be reduced, for the purposes of this pilot study, to:

• Body motion in relationship to a presented stimulus

• Head rotation in relationship to a presented stimulus

The previous pilot showed that strong and controllable spatial perception can be achieved through alterations in light intensity – value. Because peripheral stimulation rarely occurs in the real world, a study of directional stimulation – i.e. stimuli not covering the whole visual field of a human perceptor – was needed. It was hypothesized that a more complex perception, linked to the previously outlined human movements, would arise from these stimulus characteristics.

12This has been suggested by the initial pilot study on interactive spaces.

4.5.1 Device description

In order to test the effects of directional visual stimuli on spatial perception, a virtual environment was created (figure 4.24). This allowed full control of stimulus characteristics and removal of external stimuli and objectual properties – e.g. texture – that could bias the space perception measurements.

A virtual world was written in C++ and the OpenGL library. A Polhemus Fastrak tracking system was used to measure position change in a 2 by 2 meter area, and the data were correlated to camera changes in the virtual world. In the same way, head rotation in both the Z and X axes was transformed into changes in camera rotation and tilt. The result was presented to the experients wearing a VR Pro head mounted display. A Polhemus position and orientation recording was made with a latency of 1 second and written to a text file for future analysis.
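The 1 Hz recording described above can be sketched as a simple tab-separated text log; a minimal illustration (the field layout and function names are assumptions, not the original implementation):

```python
import io

def log_sample(out, t, x, y, yaw, tilt):
    """Append one tracker sample: time in seconds, position (x, y)
    in meters and head yaw/tilt in degrees, one sample per line."""
    out.write(f"{t}\t{x:.3f}\t{y:.3f}\t{yaw:.1f}\t{tilt:.1f}\n")

def parse_log(text):
    """Read the samples back for later motion analysis."""
    rows = []
    for line in text.strip().splitlines():
        t, x, y, yaw, tilt = line.split("\t")
        rows.append((int(t), float(x), float(y), float(yaw), float(tilt)))
    return rows

# Example: two samples taken one second apart.
buf = io.StringIO()
log_sample(buf, 0, 1.0, 0.5, 90.0, -5.0)
log_sample(buf, 1, 1.1, 0.5, 80.0, -5.0)
samples = parse_log(buf.getvalue())
```

A plain text format like this is what makes the later post-hoc analyzer software straightforward: each second of the session can be replayed as one row.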
The previous pilot study suggested that changes in light intensity – value – correlate to changes in perceived spatial size. According to our findings, the perceived spatial size is measurable with the function S = 0.016v + 1.3. Therefore, it is possible to analyze low and high values and interpolate the remaining values. Two stimulus values were then tested:

• An intensity value of 55, which is perceived as a dark space and sized 2.39 on the presented spatial scale (very small).

• An intensity value of 255, which is perceived as a bright space and sized 5.33 on the presented spatial scale (very large).

It was acknowledged that the virtual implementation of a space allowed immense flexibility in form and stimulus presentation. However, it was also hypothesized that this flexibility could be used to test real-life applications of the model of this research. By using a cubic virtual space the pilot study would be consistent with our previous experiments, fit conceptually with the enclosure level of the Cube Model outlined in the first part of this document, and allow the translation of any findings to a life-sized implementation. The design of the world was then simplified to a box with a flat floor, ceiling and straight walls providing concrete directionality – down, up, left, right, back and front.

(a) Head mounted display. (b) Virtual world (textures have been exaggerated).
Figure 4.24: Head mounted display and virtual world.

The box faces were then mapped using OpenGL with an almost imperceptible texture at one of the two previously outlined intensity values. Finally, the virtual space was proportioned according to table 4.9. This was done to account, as much as possible, for the screen ratio of the head mounted display.
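The intensity values above follow from inverting the pilot's regression for a target perceived size; a small sketch of that inversion in plain Python (the function name is illustrative):

```python
def value_for_size(size):
    """Pick a stimulus intensity (0-255) for a target perceived size
    on the 1-6 scale by inverting S = 0.016 * v + 1.3."""
    v = (size - 1.3) / 0.016
    # Clamp to the representable 0-255 intensity range.
    return max(0, min(255, round(v)))

print(value_for_size(2.5))  # 75
print(value_for_size(6.0))  # 255 (293.75 clamped to the valid range)
```

Note that target sizes above roughly 5.4 fall outside the representable intensity range and are clamped to 255, the brightest presentable stimulus.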
It was found that other proportions confused experients, because walls of the virtual space occupied more than two times the visual field provided by the VR Pro goggles' LCD displays, and extreme head tilts were needed to explore the space. The proportions used allowed a more natural exploration of the virtual reality space.

           Length   Width   Height
Floor      1        1       –
Wall       1        –       0.5
Ceiling    1        1       –
Table 4.9: Proportions of the virtual space.

4.5.2 Measurements

The present pilot study was focused on understanding how real-life applications of the previous findings on visual stimulation would affect spatial cognition and exploration. A virtual prototype was chosen to eliminate or control biases that could arise in a real-life prototype – specifically, perspectival cues arising from stimulus limits in directional stimuli presentation, and textural cues provided by materials in the real world. Furthermore, the difference between peripheral and directional stimulation, and the differences between directional stimulus configurations, had to be measured and understood.

A quantitative and in-depth analysis was outside the scope of the present study and could not have fit in the extremely reduced time span allotted for the pilot study. Therefore, a careful qualitative analysis was done in search of initial findings that could serve as pointers for further explorations and implementations. With this in mind, the present study acknowledges that further measurements and analyses could have been performed; these have been scheduled for future explorations. Four qualitative analyses were done to understand:

• the effect of non-light, or external, cues in space navigation and exploration,

• the effect of stimulus limits in directional stimulation,

• the difference between horizontal and vertical stimulation, and
• the difference between peripheral stimulation and directional stimulation.

4.5.3 Experimental design

Treatments applied

Using the device outlined above, eight subjects were presented three times with a set of 6 randomized stimulus configurations. Subjects experienced a total of 18 stimuli, each lasting 20 seconds. Table 4.10 presents each configuration as seen initially by a test subject. Figure 4.25 presents their graphical representation.

Name                            Value (Intensity)   Presented on
Low stimulation                 55                  all surfaces
High stimulation                255                 all surfaces
Vertical single stimulation     255                 ceiling
Vertical bi-stimulation         255                 floor and ceiling
Horizontal single stimulation   255                 front wall
Horizontal bi-stimulation       255                 right and left wall
Table 4.10: Directional stimuli spatial perception, configurations.

Qualitative analysis

Readings of experient position and head orientation were saved to a text file every second. These data were analyzed through an application written in Python 2.5 for this purpose (figure 4.26). The software permits an experimenter to visualize the experient's position in relation to previous and future states, head rotation, and head tilt. It also presents the spatial configuration of the virtual prototype through a plan and section drawing, marking walls with high values – a value of 255 – over the experiment's timespan. A plot of scattered points marks the recorded (x, y) positions of the human perceiver and a marker shows the subject's position at a specific time. Finally, a slider allows the experimenter to set the desired time to analyze, or browse through the experiment's flow with ease.

Experiment flow

The experiment had a total duration of 15 minutes (figure 4.27). During the first 5 minutes the experimenter explained their tasks to the test subjects and helped them set the VR ProView head mounted display on their heads. During the next 6 minutes users would interact with the virtual environment.
Users were told that the virtual world would change across time and that they would experience different spatial states. Finally, during the last 4 minutes an informal interview took place.

Figure 4.25: Treatments applied – (a) low stimulation, (b) high stimulation, (c) vertical single stimulation, (d) vertical bi-stimulation, (e) horizontal single stimulation, (f) horizontal bi-stimulation (highly illuminated surfaces are represented by red planes).
Figure 4.26: Effects of directional light intensity. Motion analyzer software.
Figure 4.27: Spatial effects of directional light intensity. Experiment flow (Introduction, 5 minutes; Interaction, 6 minutes; Interview, 4 minutes).

4.5.4 Results

Users of the virtual world seemed fascinated by the high-resolution image of the ProView head mounted display and by the correlation between their kinesthetic movement and the seen world. Most of the experients – 6 of the 8 subjects tested – had never before tried virtual reality technology; this caused a sense of enchantment that faded away after a few seconds of being inside the world.

Users seemed to choose an initial location to which they would return after exploring the virtual world. This was evident from the data. Furthermore, movements seemed to have been made mostly in one direction, back and forth, due to technological constraints: the limits of the head mounted display screens were the cause of this bias. For example, experients searching for the limits of a stimulus – i.e. the limits of an illuminated wall – would need to move backwards to fit the perception within the visual field provided by the displays.

Stimulus limits – perspectival cues – and texture provided the most important cues for spatial perception.
When the space had none of these cues readily available to the perceiving humans, a considerable subjective change in spatial size was noted. However, kinesthetic movements and careful visual exploration of almost imperceptible textural cues helped subjects correct their spatial model. A more detailed exposition of the results of this experiment follows.

Figure 4.28 presents the analysis of one participant's interaction data. Under low stimulation (figure 4.28(a)) head rotations were limited and movement was rare. Under high stimulation (figure 4.28(b)) movements were more common, and usually in search of the limits of the virtual space. Under horizontal bi-stimulation (figure 4.28(c)) the participant made strong head rotations but remained in the center of the space. During horizontal single stimulation (figure 4.28(d)) the user paid close attention to the presented plane, but moved backwards in order to find its visual limits. Finally, the average position of this experient, like that of most participants, was close to the center of the space, with few movements in one or two directions.

Figure 4.28: Analysis of a participant – (a) low stimulation, (b) high stimulation, (c) horizontal bi-stimulation, (d) horizontal single stimulation.

Non-light and external cues

The surface of the virtual world was made to have a barely noticeable texture. Most users – 7 out of 8 – commented that this texture cue helped them "see how close to the wall [they] were". Although their kinesthetic actions allowed them to deduce their position [52] in the virtual world, users generally looked for visual cues that would allow them to "be sure [they] were close to the wall".
If a stimulus configuration – generally when all walls were at the low (55) value – did not provide any cues, subjects would kinesthetically approach the space limits in search of textural cues that could allow them to see if they were "next to the wall or at the corner". Texture seemed to provide closeness and perspectival information to the human perceivers when perspectival cues from stimulus limits were unavailable.

Stimulus limits

In the present experiment stimuli are placed within a perspectival reality. Stimulus directions were constrained to the faces of a virtual box, and thus stimulus limits were defined by the cube's edges. Subjects commented – and it was later observed in the data – that when a new stimulus appeared they would scan for its limits in order to find the "limits of the space". It was evident that the bias caused by display size and the reduced visual field of view was the result of this search for limit comprehension.

Initial perception of a low or high stimulation covering all surfaces (table 4.10) resulted in disoriented subjects and either strong head rotations or no movement at all. Some subjects – 2 out of 8 – seemed paralyzed during these stimulations, generally waiting for the experience "to be over", while most subjects – 6 out of 8 – engaged in searching for cues that would help them construct the limits of the space. All subjects, however, noted that the space "seemed to have changed" when full low or high stimulation was provided. One subject commented that "the space seemed like a big room when it was all illuminated", while another noted that "when the lights were turned off the room felt very small".

Horizontal and vertical stimulation

As we have seen, stimulus limits played an important role in spatial perception; therefore, vision was oriented in the direction of the lit surfaces in search of spatial cues.
Bi-stimulation (table 4.10) resulted in alternating head rotations or tilts – left-right, up-down – in the directions of the presented stimuli, while single stimulation resulted in single head rotations or tilts – up, down, left, right – of an attentional nature.

Movement seemed to be strongly affected by the direction of the stimulus presented. Horizontal stimulation (table 4.10) – walls – resulted in apparently higher rates of movement, while vertical stimulation – floor and/or ceiling – resulted in a low rate of movement and a high count of head tilts.

Peripheral stimulation and directional stimulation

Perceived spatial size seemed to be considerably affected during peripheral stimulation due to a lack of perspectival, textural and kinesthetic cues. This effect was not observed during directional stimulation because the present implementation left some of these cues evident, especially perspectival ones. It was proven that small, almost inexistent, pieces of information help human perceivers form a mental model of their surroundings, and bias our previous findings on spatial size perception based on pure visual peripheral stimulation.

Strong differences were found between peripheral stimulation and directional stimulation, the main factor being that directional stimuli generally provide perspectival cues that help human perceivers construct a mental representation of their environment.

4.6 SENA prototype

A Space Encoded Agent is a simplified interactive agent designed specifically to test the existence of a Cyborg Environment. Its components, as explained at the beginning of this document, are meant to interconnect with human users in a seamless manner by fitting human perception and cognition of space. The present experiment is an initial implementation of a Space Encoded Agent based on the findings of the previous studies of this research.
The prototype was built to gather comprehensive data that would support the presented model for Cyborg Environments, and is based on the understanding of the:

1. Behavioral effects of spatial environment interactivity,

2. Effects of knowledge or belief of an interactive system in the interaction process, and

3. Inter-relational and social capabilities of interactive spatial environments.

4.6.1 Device description

SENA – an acronym for Space ENcoded Agent – is a basic Space Encoded Agent that perceives human motion and position, creates a spatial model and acts upon its world – i.e. its human inhabitants – by producing directional light stimuli.

A space of 4 by 5 meters was delimited using plain white plastic construction tarps. Two projectors were then mounted outside the delimited space. One projector was used to create a rear projection – a wall stimulus, depicted in figure 4.29(a) – of a white square with one of two light intensity values. The second projector was used to create a ceiling-down projection – a floor stimulus, depicted in figure 4.29(b) – of a white square with one of two light intensity values. The setup of the experiment is depicted in figure 4.30. Light values are presented in table 4.11 and are based on our previous findings on visual stimuli and their perceived spatiality in a Hue-Saturation-Value definition. The selection of intensity values was done using the function S = 0.016v + 1.3 to provide one of two spatial sizes according to table 4.6:

• A spatial size of 2.5 (between very small and small), using an intensity value of 75.

• A spatial size of 6 (huge), using an intensity value of 255.

Name         Projected onto   (R, G, B) value
Wall Low     Wall             (75, 75, 75)
Wall High    Wall             (255, 255, 255)
Floor Low    Floor            (75, 75, 75)
Floor High   Floor            (255, 255, 255)
Table 4.11: Stimuli definition.

A camera mounted on the ceiling was used to detect the rate of motion and position of a human test subject.
The motion algorithm was designed following Bevilacqua et al.'s [1] change detection algorithm to account for high noise levels and the rapid changes in illumination resulting from stimulus changes.

An Emotion Selector (figure 4.31) was constructed using a keyboard containing four keys labeled with four emotions – Stress, Excitement, Depression and Relaxation – taken from Russell's Affect Grid [60] and used in the measurement of message transmission, explained in detail in a section below.

A Pentium III computer running Ubuntu 7.04 hosted the Space Encoded Agent and controlled both projectors and the camera. Every second the system read its state (the direction and value of the stimulus being presented) and its world (the rate of motion and position of a human) and recorded the total count of states so far observed into a binary database. This total count of states would then be used in Bayesian inference, following Russell [61]. The overall count of states, recorded with a 1 second latency, was also written to a binary file for future analysis.

Figure 4.29: Spatial stimuli used by the system – (a) wall stimulus, (b) floor stimulus.
Figure 4.30: Experiment setup.
Figure 4.31: Emotion selector.

The previously presented definition of autopoiesis states that an autopoietic system is fundamentally purposeless. However, the parts that form the network of interrelationships constituting it are determined by internal goals that define their position in the system. It was acknowledged that the system could not be designed to perform a specific outcome or goal if a true autopoietic system was to be constructed. Nevertheless, each part would have to have pre-determined goals that would allow and maintain its connectivity with other members of the structure.
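The per-second state counting described above can be sketched as a simple tally from which conditional probabilities are later estimated; a minimal illustration in plain Python (the state encoding and example counts are assumptions for illustration, not the original binary database format):

```python
from collections import Counter

# Each observation joins the agent's state (direction, value) with
# its perceived world (motion, position), once per second.
counts = Counter()

def observe(direction, value, motion, position):
    counts[(direction, value, motion, position)] += 1

def p_state_given_world(direction, value, motion, position):
    """Estimate P(Direction, Value | Motion, Position) from the tally."""
    joint = counts[(direction, value, motion, position)]
    marginal = sum(n for (d, v, m, p), n in counts.items()
                   if (m, p) == (motion, position))
    return joint / marginal if marginal else 0.0

# Example observations: fast motion on the left mostly co-occurs
# with a high wall stimulus.
for _ in range(3):
    observe("wall", "high", "fast", "left")
observe("floor", "low", "fast", "left")
print(p_state_given_world("wall", "high", "fast", "left"))  # 0.75
```

Cumulative counts like these are exactly what a discrete Bayesian network needs to fill its conditional probability tables as experience accumulates.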
According to the presented autopoietic system based on a human cyborg and a space cyborg (figure 3.9), any node of the network has as its main goal to gather information, undergo a spatial experience and produce information. Furthermore, it was hypothesized that such a purposeless agent could learn a code not related to its own autopoietic description and use it to communicate a message. The latter would lie outside the autopoietic description and would allow testing the inter-relational and social capabilities of interactive spatial environments. Two existence goals13 were the base of the design of the SENA prototype:

• Autopoiesis: The main goal of the agent that allows it to become autopoietic. The goal is formed of three sub-goals, namely: perceive the world, create a spatial model and act accordingly.

• Communication: A test goal measuring the inter-relational and social capabilities of an agent. The goal is to learn a code by observation and use it to convey a message.

13A goal not determining an action task but the management of information that results in an entity being part of a system – alive.

Autopoiesis goal

Cyborg space perception is straightforward. The agent perceives the world, every second, through a motion detection algorithm that returns a set of world states. This information, along with the knowledge of its own state, is memorized into a mental spatial model. A Space Encoded Agent uses a Bayesian network to accomplish the latter. The network is designed – following the findings of the previous pilot studies and the definition of an autopoietic system between space cyborgs and human cyborgs – to represent the spatial relationships between a space cyborg's world and its own state. Figure 4.32 presents this Bayesian network. Using this spatial mental map – the Bayesian network – the agent is able to infer the probability of a light intensity (Value) and its wall or floor direction (Direction)
being the cause of a specific human activity (Motion) and location on the left or right side of the delimited space (Position), given the agent's state of learning a code (Receiver) or transmitting a message (Transmitter).

Figure 4.32: Interaction Bayesian network – nodes: Communication (Transmit/Receive), Direction (Wall/Floor), Value (High/Low), Position (Left/Right), Motion (Fast/Slow).

The collection of inferences, or conditional probability tables, that arises from the spatial mental map is the representation of the world as perceived by the agent. Actions to be taken by the agent are then based on the analysis of this representation, towards maintaining the autopoietic nature of the system. In other words, the conditional probability tables are used to decide a state that maintains an interactive process – a flow of information within the autopoietic network. A careful analysis of the autopoietic system, based on the findings of the initial studies and the definition of the system itself, yields the following logic:

1. If human actions truly correlate to specific space states – as found by the spatial effects of visual stimuli study – test subjects should respond in a predictable manner to randomized stimuli, making the probability of such correlations higher.

2. A change in human state when a correlation to a stimulus is strong should imply that such a correlation is now irrelevant, and thus a search for a new correlation should follow. Because we are testing the correlation hypothesis, a randomized state should be selected.

3. If a change in space state is triggered but human actions remain the same, we can either infer that the unchanged human actions are the result of the newly selected spatial stimulus, or that there exists no correlation between the two and the human actions are affected by neither the previous stimulus nor the newly selected state.
In either case, human actions will eventually add up to cause the system to again believe in a strong correlation between the new stimulus and the added human actions. In other words, a space cyborg will construct an accurate mental model of the world. If it believes that the actions of the human cyborg (its world) are the result of its present state and such actions change, then a new randomized state will be produced. The space cyborg will then learn the human actions resulting from its new state. This logic was then translated into the following algorithm [61]:

repeat every second
    if P(Direction,Value|Motion,Position,Communication) > 0.50
        if Motion and/or Position != those from previous loop
            set random Direction other than present Direction
            set random Value other than present Value
        else continue
    else continue

Communication goal

An interacting entity can be shown to communicate if it has the ability to engage in meaningful dialogue. For the present research, dialogue is defined as the ability to promote third-order couplings [34] – self-imposed changes that result in the autopoietic system14 suffering a shift of state. If, by changing its own state, an entity causes other members of the autopoietic system to suffer a co-related change, there exists dialogue.

Communication, therefore, can be expressed in terms of message transmission. When an entity sends a message it changes its own state according to its own experience. This produces a “structural congruence” [34] between entities of the autopoietic network, which in turn results in members of the system suffering fundamental changes of co-adaptation. If this co-relation between the entity that sent a message and the entities that received the message is strong, a message decoding will occur.
For example, if the spatial system is able to feel sad by setting its own state to one that results in other members of the system feeling sad, we can assert that there has been a message decoding – a third-order coupling promoted by a dialogue – and thus a social act. Social co-relation is then a function of message deciphering accuracy.

14 The network the entity belongs to.

Following the methodology used by Smith and MacLean [66] in their article Communicating emotion through a haptic link: Design space and methodology, a communication goal was included in the present agent. Such a goal lies outside the autopoietic system and thus, theoretically speaking, does not affect the purposelessness of the network.

A second Bayesian network – a spatial model of a world of semantic nature – was created to allow the agent to infer the probability of its light (Value) and wall or floor direction (Direction) states being the cause of a specific emotional human state from Russell's [60] bi-dimensional Affect Grid, based on unique pairs of arousal (Arousal) and pleasantness (Pleasantness). Figure 4.33 presents this Bayesian network.

Figure 4.33: Communication Bayesian network – nodes: Direction (Wall/Floor), Arousal (High/Low), Pleasantness (High/Low), Value (High/Low).

The agent would use this spatial model to create a representation of the semantic world being perceived in the form of conditional probability tables – learn a code through experience. Once sufficient experience was gathered, the agent would be given a randomized list of messages to transmit to a human being. The agent would then access its representation of the world and set its light (Value) and wall or floor direction (Direction) states according to it.
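This learn-then-transmit cycle can be sketched in code. The following is a minimal illustration, not the thesis implementation: simple counts of (Direction, Value) states observed alongside each (Arousal, Pleasantness) pair stand in for the conditional probability tables of the actual Bayesian network, and transmission picks the state with the highest conditional probability. All class and state names here are hypothetical.

```python
from collections import Counter

class CommunicationModel:
    """Toy sketch of the communication model: counts of (direction, value)
    states observed for each (arousal, pleasantness) pair stand in for
    conditional probability tables."""

    def __init__(self):
        self.counts = Counter()  # ((arousal, pleasantness), (direction, value)) -> n
        self.totals = Counter()  # (arousal, pleasantness) -> n

    def observe(self, arousal, pleasantness, direction, value):
        """Learn the code: record which space state accompanied an emotion."""
        self.counts[((arousal, pleasantness), (direction, value))] += 1
        self.totals[(arousal, pleasantness)] += 1

    def p(self, direction, value, arousal, pleasantness):
        """P(Direction, Value | Arousal, Pleasantness) estimated from counts."""
        total = self.totals[(arousal, pleasantness)]
        if total == 0:
            return 0.0
        return self.counts[((arousal, pleasantness), (direction, value))] / total

    def transmit(self, arousal, pleasantness):
        """Send a message: choose the space state most strongly associated
        with the target emotion."""
        states = [(d, v) for d in ("wall", "floor") for v in ("high", "low")]
        return max(states, key=lambda s: self.p(s[0], s[1], arousal, pleasantness))

# Example: after mostly observing (wall, high) during a high-arousal,
# high-pleasantness emotion, the model transmits that state for it.
m = CommunicationModel()
for _ in range(8):
    m.observe("high", "high", "wall", "high")
m.observe("high", "high", "floor", "low")
print(m.transmit("high", "high"))  # ('wall', 'high')
```

The argmax over the learned table corresponds to the "find highest P(Direction,Value|Arousal,Pleasantness)" step of the prototype's action algorithm.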
The action algorithm is straightforward:

from experience find learned Arousal/Pleasantness pairs
create a randomized list of 16 learnt (Arousal,Pleasantness)
for each Arousal/Pleasantness in the list
    find highest P(Direction,Value|Arousal,Pleasantness)
    set Direction and Value
    wait for response and store

4.6.2 Measurements

Analysis can be performed by observing and comparing the mental spatial map – Bayesian network – and representation of the world – conditional probability tables – of each autopoietic system that evolved during interaction. Because the overall count of states recorded can be easily extracted from a binary file, it is possible to reconstruct the conditional probability tables for analysis. Furthermore, statistical analysis can be performed on the total count of states – for example, the total seconds that humans were moving when the space was at a wall low stimulation, or the total seconds that humans were in the right part of the space during the whole experiment.

Behavioral effect of interactive systems

It was found, in the previous experiments, that spatial stimuli have behavioral effects on the humans that perceive and interact with them. However, a deeper understanding of the actions that would arise in an autopoietic system formed of a space cyborg and a human cyborg was needed. The measures performed to achieve such a comprehension were:

1. Number of actions that correlate to one, and only one, stimulus.
2. Behaviors related to each stimulus.
3. Behaviors related to each interactive treatment.

The first measurement tests the accuracy of the interactive rationale of the prototype in accordance with the human actions measured during the experiment. Following the rationale used to design the Autopoiesis Goal of the presented space encoded agent, it is possible to deduce a logical true correlation between space states and human states.
The Autopoiesis Goal algorithm is designed so that the cyborg state changes only if its mental model is disrupted.15 Logically, if humans act in a predictable manner, each space state will produce one, and only one, human state. This can be found by searching the mental representation of the world – conditional probability tables – of the space cyborg for space state pairs (direction, value) that correlate to a single (position, motion) state.

15 If it believes that the actions of the human cyborg are the result of its present state and such actions change, a new random state will be produced.

The second measurement is targeted at understanding the overall behavioral effects that each stimulus had on human participants. Analysis of the means of total counts of position and motion states related to every (direction, value) pair was performed.

Finally, the third measurement deals with a deeper understanding of the overall behavior resulting from different interactive conditions. Counts of position and motion states were compared across all treatments and depict the overall interaction activity. The analysis focused on high motion, i.e. the result of interaction activity, and right position, i.e. being on the right part of the space.

Effects of a-priori beliefs of an interactive system

Four variables were measured to understand the participants' perception of the interactive system, namely:

• Beauty: Measured by agreement with the statement "My experience today was beautiful".

• Pleasantness: Measured by agreement with the statement "My experience today was pleasant".

• Comfort: Measured by agreement with the statement "I felt comfortable interacting through this system".

• Involvement: Measured by agreement with the statement "I felt involved in the experiment".
The data were gathered through the post-experiment questionnaire, which allowed our subjects to rate their agreement with each measured variable from 1 to 5.

Communication ability of an interactive spatial environment

If the system has both efficiently learned a code and used it to convey a message, a correlation between the system's experience and the subject's decoding of the message should exist. Since the experients are blind to the fact that they have provided the code to the system, selecting an (Arousal, Pleasantness) pair to decode a message that equals the cyborg space's beliefs implies that the space encoded agent has successfully sent a decipherable message.

Research done by Smith and MacLean [66] on emotion communication through a simple bi-dimensional haptic link provided a credible benchmark and methodology to analyze the accuracy of such message decoding. Furthermore, previous research done by Mikellides [45] on the relationship between color and physiological arousal showed that only small psychological effects on emotion can be achieved through chromatic strength (saturation); in his research, light intensity and hue affected the emotions of human subjects unpredictably and insignificantly.

Measurement of this communication ability was done by analyzing the number of accurate message decodings done by human subjects in the second part of the experiment. Finally, by contrasting the findings of the present experiment with the results of Smith and MacLean [66], it is possible to measure the performance of the cyborg space in comparison to human beings.

Qualitative analysis

A qualitative analysis of the questionnaire, post-experiment interview and notes taken by the experimenter was performed.
The analysis focused on users'

• belief that the system can be used to communicate intimate feelings with a loved one,
• decisions to choose movements and locations during the experiment,
• aesthetic perception of the prototype,
• attempts to guess the rules of the system,
• proposals of use for the present interactive system, and
• the experimenter's observations,

and was performed over each of the treatment groups.

4.6.3 Experimental design

Applied treatments

In order to measure the behavioral effects of a-priori knowledge and understand the interactive process that arises between humans and their environments, two experimental conditions were created: a non-interactive one and an interactive one – exposed to the agent outlined here. Furthermore, within the interactive condition, three sub-conditions were created in order to test the effect of a-priori knowledge on the interaction process: namely, the belief of a non-interactive environment, the belief of an interactive environment manipulated by a machine, and the belief of an interactive environment manipulated by a human being. Table 4.12 presents the arrangement of the treatments applied. This arrangement resulted in four test groups:

• Non-interactive Control: A first control group was exposed to a non-interactive system formed of a single randomized fixed directional light value.

• Interactive Control: A second control group was exposed to an interactive system and told that changes were randomized across time. No interaction was expected from the subjects of this group.

• Interactive A.I.: A first treatment group was exposed to an interactive system and told that the space was controlled by an artificial intelligence attempting to interact with them. Interaction was expected from the users of this group.
• Interactive Human: A second treatment group was exposed to an interactive system and told that the space was controlled by the actions of a second test subject in a hidden replica of the system. Subjects were expected to interact with this remote participant.

Non-interactive Control. Interactive Control. Belief of Human-Computer Interaction. Belief of Human-Human Interaction.
Table 4.12: Treatments applied.

Non-Interactive Control . . .
Interactive Control . . .
Interactive A.I. . . .
Interactive Human . . .
Table 4.13: Experimental design for the SENA experiment.

Statistical analysis

The experiment was designed as a Completely Randomized Factorial ANOVA with One Factor, CRi, where i = interactivity. The design is presented in table 4.13. Twenty subjects – 13 females and 7 males, aged 18 to 38 – were randomly assigned to one of the groups. This design was applied to all our analyses of the data recorded by the Bayesian network, and to the analysis of the responses to the post-experiment questionnaire.

Experiment flow

The experiment had a total duration of 45 minutes (figure 4.34). During the first 5 minutes subjects were introduced to the experiment and a consent form was signed. The experimenter would carefully explain the tasks to be performed and answer any questions.

Due to its limited access to the formation of goals, as exposed by the previously discussed cyborg citizenship structure (figure 3.7), an agent can only truly interact with another agent.16 It was theorized that a human being, a super-agent, can become an agent if sufficient control is exerted over his or her goal-formation capabilities. By removing specific tasks in an experimental condition it is possible to construct an agent-agent autopoietic system. Following this logic, during the second part of the experiment subjects would be required to spend thirty minutes inside the delimited space.
Participants were not allowed to bring any portable devices, magazines or books, and were told to live in the delimited space. No interaction tasks were given to the participating subjects, nor instructions to manipulate the state of the system, in an attempt to eliminate any goal-oriented activity. As their only task, subjects were asked to use the provided Emotion Selector to update their emotion whenever a shift towards any of the given emotions occurred. Because this last task did not have any apparent effect on the interactive process, it was not considered a goal-forming task.

During the next 5 minutes, subjects in the interactive groups were asked to decode 16 messages supposedly being sent by the experimenter, the artificial intelligence or the hidden experient through the characteristics of the room. Subjects were asked to select their responses using the same keyboard they had previously used to input their emotions. During the last 5 minutes participants were asked to answer a questionnaire and a short interview took place.

Figure 4.34: Experiment flow – Introduction (5 minutes), Interaction (30 minutes), Decoding (5 minutes), Assessment (5 minutes).

16 Super-agent to agent interaction is incoherent.

4.6.4 Results

It is important to note that subjects in the interactive human group seemed to believe that they were interacting with a hidden participant. All of them made strong efforts to communicate, and were sometimes disappointed that the hidden participant seemed not as active as them. However, some of the participants commented that after a few minutes of interaction with the system it was clear to them that there was no hidden subject. This fact should be taken into consideration when reading the following results.

Number of actions that correlate to one, and only one, stimulus

Figure 4.35: Means of actions that correlate to one, and only one, stimulus (groups: Control, I. control, I. A.I., I. human).
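The count summarized in figure 4.35 can be illustrated with a toy sketch. All probabilities below are invented for illustration: a (direction, value) space state counts as a true correlation when exactly one (motion, position) human state passes the 0.5 threshold in the conditional probability tables.

```python
# Toy conditional probability table for one session: for each human state
# (motion, position), a distribution over space states (direction, value).
# All numbers are invented for illustration.
cpt = {
    ("low",  "left"):  {("wall", "low"): 0.62, ("wall", "high"): 0.18,
                        ("floor", "low"): 0.12, ("floor", "high"): 0.08},
    ("high", "right"): {("wall", "low"): 0.25, ("wall", "high"): 0.55,
                        ("floor", "low"): 0.10, ("floor", "high"): 0.10},
    ("low",  "right"): {("wall", "low"): 0.40, ("wall", "high"): 0.30,
                        ("floor", "low"): 0.20, ("floor", "high"): 0.10},
    ("high", "left"):  {("wall", "low"): 0.45, ("wall", "high"): 0.15,
                        ("floor", "low"): 0.30, ("floor", "high"): 0.10},
}

def count_unique_correlations(cpt, threshold=0.5):
    """Count space states (direction, value) that pass the threshold for one,
    and only one, human state (motion, position)."""
    correlations = 0
    space_states = {s for dist in cpt.values() for s in dist}
    for space_state in space_states:
        hits = [h for h, dist in cpt.items() if dist[space_state] > threshold]
        if len(hits) == 1:
            correlations += 1
    return correlations

print(count_unique_correlations(cpt))  # 2: (wall, low) and (wall, high)
```

A fully correlated system would score 4, matching the scoring described for this measurement.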
A logical correlation between space states and human states was analyzed with the query [61]:

P(Direction, Value | Motion, Position, Communication)

done over all possible Direction/Value pairs corresponding to each Motion/Position pair. Following the design and implementation of the system's Autopoiesis Goal, where a hypothesis is considered to be strong if it exceeds a threshold of P > 0.5, true hypotheses were selected if the query resulted in a value greater than 0.5, i.e. a 50% probability of being true. Then, if for any Direction/Value pair there existed one and only one Motion/Position true hypothesis, a true correlation between space state and human state was counted. A fully correlated system (each one of all four possible human states correlated to only one of the four possible space states) would score 4. The findings are depicted in figure 4.35.

ANOVA analysis of the number of logical correlations showed a significant effect, F(3,16) = 3.658, p = .035, of a-priori knowledge on the correlation between space stimuli and human behaviors. The interactive A.I. group scored the lowest, while the interactive control scored the highest. An analysis of contrasts between the interactive groups (control, A.I. and human) and the non-interactive control showed that a significant effect was only achieved by the interactive control, p = .009, and the interactive human, p = .018, groups. A non-significant difference, p = .207, against the non-interactive group was achieved by the interactive A.I. group.

The results suggested that non-interactive spaces, i.e. static ones, were not able to achieve any logical correlations between spatial stimuli and human actions. Interactive spaces, except when humans are told that they're interacting with an artificial intelligence, seemed to achieve a higher count of logical correlations. Contrast analysis between the interactive groups showed that the interactive A.I.
group was not significantly different from the interactive control group, p = .120, that the interactive human group was not significantly different from the interactive control group, p = .747, and that the interactive human group was not significantly different from the interactive A.I. group, p = .207. This analysis suggested that the logical algorithm implemented in the Autopoiesis Goal of the prototype was successful in achieving equal effects on stimuli-behavior correlations across interactive groups. Furthermore, it shows that neither the A.I. nor the human group achieved a significant effect on logical correlations in comparison to the interactive control group.

These findings demonstrate that the designed interactive agent successfully constructed an autopoietic system, i.e. a network of co-relationships, with its human inhabitants. Further analyses exposed below will help provide additional proof of this statement.

Behaviors related to each stimulus

Correlations between spatial stimuli and human behaviors were analyzed using a Chi-Square Test for Goodness of Fit. It was found that the human activities resulting from changes in spatial stimuli were statistically significantly different, χ2(9, N = 72320) = 2473.666, p = .000, from each other. This suggested that human actions varied due to spatial stimuli changes, and not due to variances between participants' actions. Table 4.14 presents the percentages of the total count of correlations between all spatial stimuli and all human actions. Figure 4.36 presents the total count of appearances of specific behavior actions according to each stimulus.

The statistical differences depicted in figure 4.36 between stimuli, and the percentages of correlated stimuli-action pairs shown in table 4.14, show a clear dependency between stimuli and human behavior. This suggested that human subjects
and the stimuli-based spaces formed a self-regulating network of stimuli-action relationships, i.e. an autopoietic system.

             Position Left  Position Right  Motion Low  Motion High
Floor Low        5.8%           4.3%           7.9%        2.2%
Floor High       7.5%           4.3%           9.8%        1.9%
Wall Low        12.2%           3.7%          13.8%        2.1%
Wall High        5.9%           6.3%          11.2%        1.0%
Table 4.14: Percentages of total stimulus-action correlations across all subjects.

Figure 4.36: Count of appearances of human actions related to each spatial stimulus.

In order to understand the nature of these correlations, an in-depth analysis was performed for each different behavioral state, using the means of human activity recorded during each stimulus, shown in figure 4.37:

• Position left: The human behavior of being on the left part of the space did not seem to vary significantly, F(3,76) = 1.932, p = .132, during different spatial stimuli. However, analysis of contrasts showed that wall low stimulations resulted in an effect significantly different, p = .023, from the ones obtained by other stimulations.

• Position right: The human behavior of being on the right part of the space did not seem to vary significantly, F(3,76) = 0.657, p = .581, during different spatial stimuli. Contrast analysis showed that the various stimulations had no significant effect on this behavior. Although this behavior seemed to be affected by wall high stimulations, contrast analysis showed that this effect was not significantly different, p = .176, from the ones obtained by other stimuli.

• Motion low: The human behavior of being stationary did not seem to vary significantly, F(3,76) = 0.924, p = .433, during different spatial stimuli. Contrast analysis showed that the various stimuli had no significant effect on this behavior.
The apparent effect of wall low stimulations showed not to be significantly different, p = .165, from the ones obtained by other stimuli.

• Motion high: The human behavior of moving did not seem to vary significantly, F(3,76) = 1.258, p = .295, during different spatial stimuli. However, contrast analysis showed that floor low, floor high and wall low were almost significantly higher than a wall high stimulation, p = .061. This effect is not apparent or significant and should be discarded as a real effect.

Figure 4.37: Means of behavior states recorded during each stimulus, across all groups.

A second analysis was then performed for each stimulation state:

• Floor low: During floor low stimulations, differences between behavioral states proved to be not significant, F(3,76) = 2.506, p = .065. Contrast analysis showed that none of the behaviors were significantly different. This showed that under floor low stimulations no significant effect can be achieved.

• Floor high: During floor high stimulations, differences between behavioral states were significant, F(3,76) = 3.592, p = .017. Contrast analysis showed that position right and left did not differ from each other, p = .227, meaning that floor high stimulations did not have an effect on human position. However, contrast analysis showed that motion high and motion low differed significantly, p = .003, suggesting that floor high stimuli have an effect on motion.

• Wall low: During wall low stimulations, differences between behavioral states were significant, F(3,76) = 8.024, p = .000. Contrast analysis showed that a significant difference existed between motion low and motion high, p = .000, as well as between position right and position left, p = .005. In other words, wall low stimulations promoted low motion and left positions.
• Wall high: During wall high stimulations, differences between behavioral states were significant, F(3,76) = 4.847, p = .004. Contrast analysis showed that differences between position left and position right were not significant, p = .866, while differences between motion high and motion low were significant, p = .003. This suggested that wall high stimuli had effects only on motion.

Both sets of findings suggested two strong links between human behaviors, namely:

• Left Static: By analyzing the means plot depicted in figure 4.37 it is possible to see that motion low and position left – a human not moving and staying in the left part of the space – were strongly linked. However, this link seems to have been disrupted during wall high stimulations: contrast analysis showed that under wall high stimulations motion low and position left were significantly different, p = .050.

• Right Moving: Position right and motion high were also strongly related – humans moving when on the right part of the space. Although an apparent disruption occurred during wall high stimulations, contrast analysis showed that during such stimulus position right and motion high were not significantly different, p = .363, and thus their link remained unaffected.

Such strong links might have been the result of an important bias caused by the Emotion Selector device. The device was placed in the left area of the space, and notes taken from the experimenter's observations show that experients who chose to interact with the device generally remained static, causing an interaction effect between the keyboard position and human position.

Only one significant behavioral effect was achieved by the various stimuli: a wall low stimulus resulted in subjects remaining in the left part of the space.
A strong link found between subjects being on the left area of the space and an increase of low motion – remaining static – has been attributed to the location of the Emotion Selector device. It has been hypothesized that during this stimulus subjects were concentrating on updating their emotion, and this affected their movement patterns. Another relevant effect on motion was achieved by floor low, floor high and wall low stimulations, which caused humans to move more than during a wall high stimulus. In other words, wall high stimulations resulted in human subjects moving less. Even if this behavior was not statistically significant it is noteworthy. A strong disruption of the link between motion low and position left17 caused by the high wall stimuli suggests that said stimulus caused humans to move less and stay in the right part of the space – unable to interact with the Emotion Selector. Therefore, this stimulus has a non-significant effect strictly on motion.

By analyzing each spatial stimulus individually it was found that floor low stimulations did not have significantly different effects on human behavior, i.e. there was no apparent behavioral pattern. Floor high and wall high stimulations had significant effects only on motion behaviors, while wall low stimuli had significantly different effects on both motion and position. These results seem to be aligned with the previous findings on the number of actions that correlate to one, and only one, stimulus. In that logical analysis we had found that the maximum mean of correlations is 1.8; in the present analysis we have found that only one unique correlation between stimulus and behavior exists within the SENA prototype (motion low in the left area of the space due to wall low stimulation), while a second possible behavior seems to have almost evolved (reduced motion on the right area of the space due to high wall stimulation).
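The stimulus-action dependency reported in this section rests on a chi-square statistic over a stimulus × action contingency table. The statistic can be computed by hand as follows; the counts below are invented for illustration and are not the recorded experimental data.

```python
def chi_square(table):
    """Chi-square statistic for a contingency table (list of rows of counts):
    sum of (observed - expected)^2 / expected, where expected counts are
    derived from row and column totals under the independence hypothesis."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    n = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / n
            stat += (observed - expected) ** 2 / expected
    return stat

# Invented stimulus (rows: floor low/high, wall low/high) x action
# (columns: position left/right, motion low/high) counts.
observed = [
    [580, 430, 790, 220],
    [750, 430, 980, 190],
    [1220, 370, 1380, 210],
    [590, 630, 1120, 100],
]
print(chi_square(observed))  # large value: stimulus and action are dependent
```

A table whose rows are proportional to each other (i.e. action independent of stimulus) yields a statistic of zero; the large value reported in the thesis, χ2(9, N = 72320) = 2473.666, indicates strong dependency.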
Behaviors related to the non-interactive treatment and the interactive treatments

17 Result of the Emotion Selector device.

An analysis of the specific human actions affected by the interactive treatments was then performed. Fast motion – most of the time a result of a change of position and a will to interact with or control the environment – was more frequent in the interactive groups (figure 4.38).

Figure 4.38: Means of total count of overall motion high appearances.

Although the effect was not significantly different overall, F(3,16) = 1.489, p = .255, between groups, contrast analyses showed that the human group had an almost significant effect against the non-interactive control group, p = .055. No significant difference from the non-interactive control group was found for either the interactive control group, p = .455, or the interactive A.I. group, p = .256. Further contrast analyses were performed between the interactive groups. There was no statistically significant difference between the interactive control group and either the interactive A.I. group, p = .685, or the interactive human group, p = .210.

These findings suggested that although overall motion was observed to be higher in interactive groups, the effect was not statistically significant in comparison with a non-interactive treatment. Moreover, interactive treatments did not differ from the interactive control group, showing that no significant interaction – as measured by motion rate – can be achieved by a-priori knowledge of the system.

No correlations between human position and interactivity were found (figure 4.39). A human being on the left part of the space did not seem to be significantly affected, F(3,16) = .621, p = .612, by the interactivity treatments applied.
Parallel to this result, human position in the right part of the space did not seem to be significantly affected, F(3,16) = .621, p = .612, by the interactive treatments tested. In other words, the interactiveness of the environment did not have an effect on the location that human participants would choose.

Figure 4.39: Means of total count of overall position right appearances.

It was noticed that the level of interactivity – high motion counts – related to each stimulus seemed to be affected by the interactive treatments. Analyzing the data showed that fast motion as the result of a wall high stimulus (figure 4.40) was significantly different, F(3,16) = 4.944, p = .013, between the non-interactive and interactive treatment groups. Contrast analyses, however, showed that, compared to the non-interactive control group, the only significantly different effect was achieved by the interactive human group, p = .002. Further contrast analysis performed within the interactive groups showed that, compared to the interactive control group, the interactive A.I. group was not significantly different, p = .620, while the interactive human group was significantly different, p = .031. This showed that, within interactive groups, the belief of interacting with a human being affected the interactive process – depicted by higher motion rates.

Interactivity – fast motion – as the result of a floor high stimulus (figure 4.40) seemed to have been affected by the interactive treatments; however, statistical analysis showed that the groups were not significantly different, F(3,16) = 2.632, p = .086, from each other. Contrast analysis showed that only the interactive human group achieved a significant effect, p = .014, in comparison to the non-interactive control group.
Further contrast analysis done within the interactive groups showed that both the interactive A.I. group, p = .962, and the interactive human group, p = .291, were not statistically different from the interactive control group. This suggested that no difference in interactivity – motion rate – was achieved through a-priori differences within interactive groups.

Low values in both floor and wall forms had unpredictable effects (figure 4.41). The difference between all non-interactive and interactive groups in high motion due to wall low stimulations was not significant, F(3,16) = 1.174, p = .351. High motion due to floor low stimulations was also not significantly different, F(3,16) = .97, p = .897, between the non-interactive and interactive treatment groups. Analysis of contrasts showed that no group had a significant difference with the non-interactive group, and contrasts between interactive groups showed that no group was significantly different from the interactive control group.

Figure 4.40: Means of high motion related to high valued stimuli.

Figure 4.41: Means of high motion related to low valued stimuli.

Effects of a-priori beliefs of an interactive system

The results in the previous section already show that the a-priori beliefs that users may have of an interactive spatial environment can affect their interaction with it. Recapitulating the results from the previous sub-section, we can confidently assert that the belief of interacting with a human significantly affects the will to interact, and has unpredictable effects when low stimulations are provided. Further analyses of questionnaire responses complement these findings (figure 4.42).
Figure 4.42: Effects of a-priori beliefs of an interactive system (agreement ratings for Beauty, Pleasantness, Comfort and Involvement across groups).

Beauty

A marked difference (figure 4.42) in the aesthetic perception of the space was found to depend on previous beliefs about the system. Analysis of users' agreement with the statement "My experience today was beautiful" showed that there existed a significant difference, F(3,16) = 3.690, p = .034, between the interactive and non-interactive treatment groups. However, contrast analysis between all groups showed that the only significant difference was achieved by the A.I. group, which was significantly different from both the interactive control group, p = .018, and the non-interactive control group, p = .008. No other significant differences between groups were found.

Although test subjects in the interactive A.I. group seem to have found their experience more beautiful than those in the interactive human group, no statistical difference, p = .150, between them was found; furthermore, the interactive human group was not significantly different from the interactive control group, p = .274. In other words, subjects in both interactive (human and A.I.) groups found their experience significantly more beautiful than those in the non-interactive control group, but people who believed they were interacting with a human did not have a more beautiful experience than those in the interactive control group.

Pleasantness

A significant difference (figure 4.42), F(3,16) = 3.269, p = .049, between the non-interactive and interactive groups was found in the users' agreement with the statement "My experience today was pleasant". Contrast analysis showed that the interactive A.I. group was the only one significantly different, p = .011, from the non-interactive control group. Furthermore, a significant difference, p = .023, was found between the interactive A.I. group and the interactive human group.
Since the latter was not different, p = .724, from the non-interactive control group, we can confidently assert that a pleasant experience was reported only by subjects who believed they were interacting with an artificial intelligence – a machine. That test subjects in the interactive human group did not have a pleasant experience proved, by qualitative analysis, to result from a lack of understanding of the interactive process.

Comfort A significant difference (figure 4.42), F(3,16) = 5.754, p = .007, between treatment groups was found in participants’ agreement with the statement “I felt comfortable interacting through this system”. This time, however, the interactive A.I. group was not significantly different, p = .653, from the non-interactive control group. Furthermore, the interactive control group scored lower, though not significantly, p = .085, than the non-interactive control group, while the interactive human group scored significantly lower than the non-interactive control group, p = .005. This suggested that interactivity did not make a spatial environment more comfortable than a non-interactive one. On the contrary, it risked becoming significantly less comfortable when users felt a loss of control. Knowledge of the interactive controls, and of the system itself, did not make the system more comfortable.

Involvement Although a difference in involvement between groups was noted in some of the questionnaires and interviews (figure 4.42), there was no significant effect, F(3,16) = .722, p = .553, between the interactive and non-interactive treatment groups. Users from the various treatment groups did not agree differently with the statement “I felt involved in the experiment”. Contrast analysis did not show any particular difference between groups. Interactive environments did not promote involvement more than static, non-interactive ones.

Communication ability of an interactive spatial environment

The three interactive groups – control, A.I. and human – were not significantly different, F(2,12) = 0.338, p = .720, in decoding a message through space (figure 4.43). Contrast analysis showed that even though the A.I. group appeared to perform better, it was not significantly different from the interactive control group, p = .427, or from the interactive human group, p = .688.

Figure 4.43: Means of accurately decoded messages.

The interactive A.I. group had a mean of 7.8 correct responses out of 16 (48% accuracy), followed by the interactive human group with a mean of 7 out of 16 (43% accuracy), a small improvement over the interactive control group, which achieved a mean of 6.2 out of 16 (38% accuracy). These results are in line with Smith and MacLean’s [66] research, where accuracy ranged between 48.3 and 59.5 percent.

Analysis of the questionnaire (figure 4.44) found a significant difference, F(3,16) = 3.493, p = .040, between the non-interactive and interactive treatment groups. However, contrast analyses showed that the only statistically significant difference between groups was between the interactive human group and the interactive A.I. group, p = .006. Compared to the interactive control group, neither the interactive A.I. group, p = .129, nor the interactive human group, p = .129, was significantly different. A similar, but not statistically significant, F(3,16) = 2.426, p = .103, difference was found between the interactive and non-interactive treatment groups in the ease of decoding a message (figure 4.44). Contrast analysis showed that the only statistically significant difference between interactive groups was between the interactive A.I. and the interactive human group, p = .033.
Compared to the interactive control group, neither the interactive A.I. group, p = .063, nor the interactive human group, p = .743, was significantly different.

Figure 4.44: Agreement on message transmission and decoding ease between non-interactive control, interactive control, interactive A.I. and interactive human groups.

Qualitative analysis

Non-interactive control group Two users did not think that the system could be used to communicate intimate feelings; three users thought that it could, but would need improvements. One subject commented that feelings would depend on participants’ previous experiences, introducing an important bias into communication. Two subjects moved when in distress or discomfort, while three chose a comfortable position near the keyboard; one subject even commented choosing this location “mostly because it was something [she] could interact with.” Subjects chose locations where they could better experience the space around them. A subject commented choosing the center of the space to “have a clear view all around” or the corner because he “realized [he] could see more.” Only four users found the space aesthetically pleasant, even though their questionnaire responses showed they had a pleasant experience. One subject qualified the space as “nothing special” and “neutral”. Two subjects felt connected to the space. Three subjects didn’t feel any emotional connection with the space; one even considered “pouring [his] emotions” into the keyboard but didn’t feel that a connection existed between the controller and these emotions. No subjects were able to understand the rules of the system; although one subject guessed it had something to do with movement, he couldn’t articulate it as a rule of the system.
Notably, two users would use the system as a very intimate companion. Most of them proposed emotion-related alternatives: one proposed an interrogation room and another proposed using it as a “journal, so that [he could know] what spaces best suit [his] moods”. One subject would use the system for relaxation and two wouldn’t use it on a daily basis. Further observation by the experimenter showed that users were quite aware of their emotional changes at the beginning of the experiment. After approximately 10 minutes users would not use the emotional keyboard as often. One user, however, updated his emotion quite frequently.

Interactive control group Only one subject believed that the system could communicate intimate feelings. A subject commented not knowing if the system could communicate emotions but could “certainly make [your loved ones] feel uncomfortable/comfortable”. Most of the subjects chose their actions in space in order to be comfortable. A subject found “light shinning down [...] annoying” and when he “wanted to be alone [he] went to the corners/sides”. Only one subject found the space to be pleasant; the rest of the group agreed that the space was not pleasant or only slightly pleasant. One of these subjects commented that it was not “particularly pleasant”, while another found it “austere and plain”. No subjects developed an understanding of the rules of the system. One subject commented that he “did not feel [he] had much control over [the system]”, another subject guessed “body movements”, and a third subject believed the space was controlled by the experimenter. Furthermore, two out of five subjects were very annoyed by “sudden” changes in space and high illumination stimuli. They both resolved this issue by moving to a more comfortable position. One subject would use this system to communicate with family, friends or employers, and another subject would use it as a relaxation or meditation space.
Another subject answered that she didn’t “see how [the system] should translate to an every-day thing”. Observations from the experimenter indicated that subjects became frustrated when changes in space occurred in what they believed was a randomly triggered manner. Users felt an upsetting and strong lack of control, generally reflected in marked changes in position, motion or emotional state. Only one out of five subjects did not show strong motion or position changes when a space change was triggered; instead, this subject updated his emotion on almost every spatial change.

Interactive A.I. group Two users did not think that the system could be used to communicate emotions. One subject commented that “you need more complicated spaces for communicating that”, while a second added that “the projected feelings aren’t refined enough for [her] to do so.” One subject said that the system could “somewhat” be used for this purpose, while another participant added that it could only if the interlocutor knows her “well enough”. Only one subject thought that the system could “definitely” be used to communicate intimate feelings. Three subjects remarked that their position and motion in the space were determined by the location of the keyboard used to update their emotion. One of these subjects remembered choosing “somewhere near the keyboard [emotion controller] and [sitting] in a position that [she] felt comfortable”. Two other users commented moving randomly; one of them wrote that he moved “just once every few minutes, [without] particular pattern.” Four out of five subjects found the prototype aesthetically pleasant. Three subjects out of five felt emotionally connected to the space, of whom one commented that “the lighting definitely affected [her] emotions”. All the subjects saw a strong connection between their emotions and their environment, leading two subjects to believe that the space was controlled by updating their emotion several times.
One subject, nevertheless, claimed to know the rules of the system through movement and added that he “could predict the next change, but could not control it as [he] wanted it to be.” Four out of five subjects would use the system on a daily basis, to help them relax, daydream or control their stress. Observations from the experimenter showed that all subjects updated their emotion carefully during the whole experiment. These updates generally took place when a change in space occurred. Subjects tried hard at the beginning of the experiment to control the environment by moving to different locations in space. After not being able to find a clear correlation between their actions and space changes, most subjects lost interest in this alternative and remained static until a further change in space occurred. Changes in stimuli were generally triggered by minute changes in the subjects’ state. Finally, users in this group didn’t seem more active than those in the other interactive groups. After a while most users of this group remained static in one position. The occasional changes in space triggered minor position adjustments and the search for more comfortable positions, but did not result in radically strong reactions.

Interactive human group Three users out of five thought that the system could be used to communicate intimate feelings; they all agreed that such feelings would have to be very intimate and general – not specific. One subject commented that this would only happen if a language was “established through time and explicitly”. One subject was “curious whether the buttons [she] pressed ... [regarding her] feelings were also implicitly affecting the other person’s space”. When asked about the aesthetics of the space, two subjects commented that bright floor stimuli made them feel under the spotlight and were sometimes annoying. One subject even commented that when stimulation came from the ceiling “it felt wrong to sit there”.
Low stimulations were seen as calming or as rising bubbles. Three out of five subjects believed that the space was aesthetically pleasant. Only two subjects remarked that their movements were made according to what they “felt” at the moment. One subject commented that “it didn’t seem like [his] actions were affecting [his space] much, so [he] just sat down for most of [the experiment]”. Another subject agreed that she “was more reacting to the space than trying to convey a message to the other person.” One subject out of five strongly believed that the space was controlled by the hidden experimenter. Only one subject out of five was aware that the changes in space were controlled by motion. Four subjects didn’t find a connection between the stimuli and their actions; of them, one commented that changes “seemed random”. When asked if they would use the system on a daily basis, only one subject didn’t find an application for the system. The rest of the subjects agreed that it could be used for relaxation purposes. A coffee shop, an aquarium and a yoga room were examples proposed by the participants. Observations from the experimenter showed that users were at times trying to communicate with the “hidden participant”. Only one subject tried passionately to communicate with said hidden subject. Although all participants strongly believed that a person was interacting with them, they couldn’t create a mental connection between the space changes and the other person. A user commented that it was “hard to decide if the space was controlled by a person or by a machine”. All subjects agreed that sometimes the changes seemed triggered by themselves and not by a human being. Exploration of the space took place at the beginning in most cases, and later explorations were rare. One subject even decided to sleep during the experiment.
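The between-group comparisons reported throughout this section are one-way ANOVAs (e.g. the F(3,16) values above, from four treatment groups of five subjects). A minimal pure-Python sketch of the F statistic behind such comparisons – with hypothetical data, not the original analysis scripts, which would additionally need the F distribution to obtain p-values:

```python
def one_way_anova_f(groups):
    """Compute the one-way ANOVA F statistic for a list of samples.

    Returns (F, df_between, df_within). With four treatment groups of
    five subjects each, the degrees of freedom are (3, 16), matching
    the F(3,16) values reported in the text.
    """
    k = len(groups)                      # number of treatment groups
    n = sum(len(g) for g in groups)      # total number of subjects
    grand_mean = sum(sum(g) for g in groups) / n

    # Between-group sum of squares (treatment effect).
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                     for g in groups)
    # Within-group sum of squares (error).
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g)
                    for g in groups)

    df_between, df_within = k - 1, n - k
    f = (ss_between / df_between) / (ss_within / df_within)
    return f, df_between, df_within
```

A p-value would then be read from the F(df_between, df_within) distribution, e.g. via a statistics package.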
Chapter 5

Conclusions

The present research has proposed a model for the conceptualization, understanding and design of Cyborg Environments. This conceptual model is formed of three parts that describe a system of cyborg nature based on communication and interactivity:

• Cyborg Space: A cyborg space is defined by its embodiment, enclosure and alteration capabilities.

• Cyborg Communication: A cyborg space can communicate with other cyborgs by controlling the relationship between code – stimuli that are produced by its activity – and the noise that affects it.

• Cyborg System: A system of cyborg spaces and cyborg humans is an autopoietic system of interactivity based on a citizenship of action.

A series of experiments was conducted to collect scientific data with which to test this theoretical model. The sections that follow recapitulate the findings of these investigations and discuss them within a more global framework.

5.1 Recapitulation of findings

The experiments conducted during this research were designed to test specific parts of the conceptual model for Cyborg Environments. Table 5.1 presents the relationship between each of these experiments and the hypotheses that arise from said model, previously presented in table 1.1. The findings of these experiments suggest that the model is successful in describing a new spatiality based on intimate interaction between human beings and architectural spaces enhanced for interaction and spatial perception. The sections below present a summary of the results belonging to each topic of the presented model.

Model part              Model sub-part          Experiments
Cyborg Space            Embodiment              Training Wheel Study
                        Enclosure type          Effects of visual stimuli
                        Alteration type         Effects of visual stimuli;
                                                Effects of directional light
Cyborg Communication    Message transmission    SENA prototype
Cyborg Interactions     Cyborg citizenship      SENA prototype
                        Autopoiesis             SENA prototype

Table 5.1: Theoretical model and experiments conducted.

Limitations The experiments conducted in the present research were performed under tight time and budget constraints. Software implementations were deployed without proper testing, spatial setups were often done in shared facilities, and hardware, e.g. projectors, was often not calibrated between different experiments. This reduces replicability, a limitation that future explorations can eliminate by paying attention to these details. Additionally, due to time constraints, the number of participants in each experiment was low. This considerably reduced the statistical power of the analyses performed and had a detrimental effect on the findings, which should be interpreted with caution. Larger sample sizes should correct this limitation and allow for more in-depth analyses of the relationship between cyborg spaces and humans.

Embodiment An analysis of the kinesthetic actions taken by human beings during their interaction with mutating built space suggested that motion patterns can be modeled and predicted with a high degree of accuracy. A pilot study of interactive spatial perception offered insight into the fact that the behavioral activity of human beings within interactive environments depends on attentional variations to changing stimuli, private and public distinctions within the space, appropriation through what was defined as a “home location”, and gender. The data collected by the Training Wheel Study suggest that a complex process of embodiment arises between human beings and interactive environments.
A theory of control based on the manipulation of reflective consciousness showed that, by manipulating the spatial relationships of a system’s controls, it is possible to evoke controllable kinesthetic movements in human subjects. This space–human embodiment is the result of a tight connection between human cognitive models of space and spatially designed interactive controls.

Enclosure type A study of the spatial effects of different characteristics of peripheral visual stimuli showed that spatial size perception can be evoked in a predictable manner by altering the value component of an HSV – Hue, Saturation, Value – color space. The rate of change in perceived spatial size, relative to changes in value, can be computed by the function:

S = 0.016v + 1.3

where S = spatial size measured on a scale from 1 to 6 (tiny, very small, small, large, extra-large and huge), and v = color intensity value. Although these results are initial findings that need deeper investigation, they show that it is possible to conceive an enclosed space changing in size purely through visual stimulation.

Alteration type The correlation found between visual stimulus intensity – Value in an HSV color space – and humanly perceived spatial size leads the present research to consider the possibility of evoking a spatial experience through non-stereoscopic stimuli without spatial visual cues. In addition, it suggests that by altering the characteristics of the visual stimulus it is possible to alter such spatial perception in a predictable manner. These findings show that a spatial liquid – the mental representation of the space – can be altered without the physical modification of spatial membranes – physical limits, e.g. walls. This corroborates the part of the proposed conceptual model that conceives perceived space as a spatial liquid independent of the objects that enclose such perceived spatiality.
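The size–value relation reported under Enclosure type, S = 0.016v + 1.3, can be sketched as a small function. This is a hedged illustration: the text does not fix the range of the value channel, so a 0–255 scale is assumed here.

```python
# Sketch of the reported relation S = 0.016*v + 1.3 between HSV value
# and perceived spatial size. Assumption (not stated in the text): the
# value channel v ranges over 0-255.
SIZE_LABELS = ["tiny", "very small", "small", "large", "extra-large", "huge"]

def perceived_size(v: float) -> float:
    """Predicted spatial-size rating on the 1-6 scale for intensity v."""
    return 0.016 * v + 1.3

def size_label(v: float) -> str:
    """Nearest categorical label for the predicted rating, clamped to 1-6."""
    s = min(6, max(1, round(perceived_size(v))))
    return SIZE_LABELS[s - 1]
```

Under this assumed range, the dimmest stimulus predicts a rating of 1.3 (“tiny”) and the brightest 5.38 (“extra-large”): the linear fit never quite reaches “huge”.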
Further explorations of the spatial effects of directional light intensity showed that the alteration of spatial membranes18 enclosing a space results in human behavioral changes – motion rate and attention – dependent on the cognitive process of space perception. Perspective and texture cues proved to give rise to strong spatial perceptions, overriding the previous findings of spatial size alteration through the manipulation of stimulus intensity. However, directional stimulation proved to affect head rotations and directional kinesthetic motion linked to bodily assessment of the cognitive spatial map created by space-perceiving human beings. This strong dependency between the spatial membrane’s physical definition and the humanly created spatial model suggests that alterations of the perceived space can be performed by modifying the former.

18 Ego-centric – stereopsis and visual cues – characteristics of space perception that are dependent on objects that evoke stimuli of spatial nature.

The results obtained in both experiments show that space strongly depends on the cognitive processes of stimulus perception, mental representation and kinesthetic assessment undergone by humans perceiving space. Through an understanding of this process and appropriate stimulus manipulation it is possible to evoke spatial alterations according to the theorized model of space alteration.

Message transmission The proposed conceptual model of cyborg message transmission states that space cyborgs can communicate with cyborgs of both human and spatial nature. The code for such transmission is theorized to exist outside the inter-active process between cyborgs and in constant opposition to the noise of the communication channel. Experimental data gathered through the SENA prototype on message encoding – the proper arrangement of stimuli – suggested that it is possible for a space cyborg to learn a code, i.e. the semantic relationships between chosen stimuli, that lies outside the realm of its interactive definition. Additional analysis of message transmission accuracy showed that simple space cyborgs have the same communication performance as human beings using a constrained communication channel of only four degrees of freedom – stimuli. These findings support the proposed conceptual model of cyborg communication capabilities and allow the theorization of Cyborg Environments as systems capable of social interaction.

Cyborg citizenship and autopoiesis It was hypothesized that a system composed of two space-perceiving cyborgs would be of autopoietic nature. The SENA prototype was conceived and designed as an autopoietic space cyborg belonging to an autopoietic system of human–space interaction. Analysis of the correlations between space stimuli and human behaviors showed that a strong link existed between the two, forming enclosed self-regulating networks of state relationships. This showed that Cyborg Environments can be designed as systems of co-organizing coupled relationships that, according to Maturana, are defined as social.

The effect of a-priori knowledge of a Cyborg Space on interactivity was analyzed using the SENA prototype. Although previously gained beliefs about a Space Cyborg had significant effects on interactivity measures, e.g. motion, they played no apparent role in the correlation between specific space stimuli and human actions. The analysis showed that even if said knowledge had significant effects on perceptions of the beauty and pleasantness of the interactive process, it had no effect on the comfort of, and involvement in, such interaction. This suggests that information about a Space Cyborg can only affect the subjective perception of the interaction and its rate, without affecting the nature, i.e. the correlation between entities, of the autopoietic system.
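The stimulus–behavior coupling described above rests on correlation analysis. A minimal sketch of such a check – a Pearson correlation between a stimulus-intensity series and a motion-rate series, with hypothetical variable names and data, not the thesis’ actual measurements:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-interval series: wall-stimulus intensity vs. motion rate.
stimulus = [10, 40, 80, 120, 200]
motion = [2, 5, 9, 14, 22]
r = pearson_r(stimulus, motion)  # a value near 1 indicates strong coupling
```

A network of such pairwise correlations between stimuli and behaviors is one way the self-regulating state relationships described above could be quantified.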
Furthermore, the cyborg citizenship model, based on an entity’s selective access to action, proved to provide a successful methodology for the creation of autopoietic systems formed of comparable, i.e. equally action-capable, entities. Future developments in artificial intelligence should make possible the existence of super-agent to super-agent – human – autopoietic systems of interaction.

5.2 Implications

The findings of the present study address a small part of the definitions provided by the theorized models for Cyborg Environments. Further theoretical investigations and experiments should be performed in order to assess this conceptualization. Nevertheless, the present explorations have already uncovered important implications for Architecture, Human Computer Interaction and Cognitive Science.

Architecture Ubiquitous computing, communication networks and artificial intelligence have radically affected how humans use and perceive their architecturally constructed spaces. Architectural thought needs a new paradigm in order to conceptualize, design and construct built environments that are coherent with this reality. The SPACES research has demonstrated that a new paradigm for understanding technologically enhanced inhabitable spaces is possible. Space has proven to be a cognitive process dependent on stimuli that can be delivered synthetically to human perceivers. By performing simple alterations of visual stimuli it is possible to evoke controllable spatial perceptions that can be considered architectural. Furthermore, the predictable behavioral patterns that arise from this cognitive process can be used to construct action-based systems of autopoietic and social nature prone to communication.
It is possible to conceive an approach to architectural design based on the conscious and creative manipulation of the components of this cognitive process, and thus to hypothesize an architectural practice of Spatial Experience creation. The architectures that can be created through this process have demonstrated a level of embodiment, spatiality and sociality rarely experienced by human beings in static buildings.

Architectural thought will have to critically include the possibility of these non-objectual, social and embodying spatialities in its discourse in order to understand and attend to their inevitable appearance in our built environment. The present investigations should raise questions regarding the understanding of space as an object-dependent phenomenon of static nature and point towards new paradigms of space creation based on perception and action. Furthermore, these new paradigms can be addressed with moderately complex, contemporary technology readily available to researchers and institutions. Most of the experiments in these investigations were implemented using hardware and software that is easily available and can be adapted to the built environment without excessive and unnecessary costs.

Finally, technology should be a promoter of architectural thought, and not an obstacle to it. Technological advances should be used in the investigation of spatial paradigms in a scientific manner. Research – i.e. user testing, cognitive measurements, simulations, etc. – should be done before blindly applying new technologies to the construction and design of spaces enhanced with technology.

Human computer interaction Human Computer Interaction studies are already being affected by widely available ubiquitous computing. Computers are no longer a commodity, but part of our daily environment.
Interaction with machines has become invisible, and users often find themselves using the built environment to interact with information. Work has previously been done to understand how humans interact among themselves and with their technologically built environment. Prototypes using architectural features – interactive walls, tables, built and virtual environments – are becoming more common in both academia and industry. Nevertheless, the behavioral and interactive effects of these implementations are most of the time unknown. The present study has demonstrated that – at least for visual information – by defining information as a collection of simplified stimuli, i.e. light properties, it is possible to measure and model their effect on the interactive process that arises between humans and their technologized environment.

The computing machines of the future will not only have to be designed from a human-centered usability point of view, but as parts of autopoietic – self-regulating – and sociable systems of co-adaptation with the capability of becoming semantic. Furthermore, closer attention should be paid to the cognitive spatial perception of environment-like solutions for data manipulation and presentation, and to the a-priori knowledge that users have of their environment.

Finally, the present research has shown that strong behavioral effects arise from spatial stimulation. These could be counterproductive, if not properly predicted, in systems where kinesthetic movement or attention becomes a bias in the interaction with information, e.g. large screen displays.

Space cognition According to the various areas of study that make up the field of space cognition, space is the result of the perceptive, cognitive and action processes undergone by human beings.
A large collection of theories has focused on modeling various parts of these processes independently, but no compelling unifying theory has presented them as parts of the same event of space perception. The creation of a global theory of space is outside the scope of the present research. However, the present experiments support the idea that various perceptual, cognitive and activity processes interact among themselves to result in what can be called spatial perception. There is no unique process – or serial collection of processes – that results in said perception of space, but a network of interacting and non-fixed processes used by human beings to inhabit the world. This interactive collection of brain events can be modeled and understood in a global framework if the system is understood as a network of relationships, an autopoietic system, between the processes that participate in said spatial perception. Each process within this network is a self-regulating entity that may or may not be related to other processes within spatial cognition. Nevertheless, when the system becomes active, a structural coupling inevitably takes place and all parts become coherently dependent, resulting in an articulate process of spatial perception of the world.

Using this conceptualization of space cognition, the SPACES research has hypothesized the existence of entities with access to spatial perception that are neither biological nor within space.19 A successful implementation, by the present research, of a space with spatial perception has shown that spatial cognition is independent of the intrinsic characteristics of the world, and depends instead on the relationships between them and the cognitive processes that arise from perceiving them, mapping them and acting upon them.
This suggests that if all properties of the world were suddenly changed – inverted or exchanged with different ones – human beings would continue to perceive a spatial experience of their surroundings.

19 A space with spatial perception cannot be considered to be within space if space is perceived as out there.

5.3 Future perspectives

As our environment becomes more technologized it is possible to visualize a world formed of Cyborg Environments. It is imperative to understand the capabilities of these entities and their implications for the human life taking place within them. Of the various paths that must be taken in order to measure and investigate these plausible futures, we must first explore

• the societal possibilities of Cyborg Environments,

• their implications for Architecture, and

• the theorization of a city of networked space and human cyborgs, i.e. an urban Cyborg Environment.

Cyborg Environments have proven to be self-regulating systems that can be conceived as social structures. The present investigation has explored only the foundation of this fact, but a deeper understanding of the inter-relationships that arise between humans and Space Cyborgs is needed. This knowledge should give birth to a new conceptualization of built space as a self-regulating system of stimuli and actions. Finally, because Cyborg Environments can be thought of as autopoietic entities within an autopoietic system, it is important to conceptualize, investigate and model an urban reality formed of Cyborg Environments.

5.4 Conclusions

The SPACES research has explored the initial issues of what could be labeled architectural cybernetics. The experiments presented have explored the relationship between cyborgized humans and cyborgized spaces that form inter-active systems. A theoretical model of space has been created based on space liquidity, human perception and human–computer inter-actions.
Furthermore, the systems formed by various cyborgs of human or spatial nature have proven to be autopoietic and to lead to social and communication-enabled entities.

New questions have arisen from these findings. How can emotionally empowered space cyborgs socialize with humans? What are the semantics of a linguistic structure based on perception and embodiment? What are the consequences of a society formed of human and space cyborgs? What are the rights and obligations of space-encoded agents within our society? How will cities be shaped when both their inhabitants and their building blocks can communicate autopoietically among themselves? Finding answers to these questions is the objective of future explorations and the ongoing SPACES research.

