UBC Theses and Dissertations


S.P.A.C.E.S.: Socio-Political Adaptative Communication Enabled Spaces. Calderon, Roberto. 2009.


Full Text

S.P.A.C.E.S.
Socio-Political Adaptative Communication Enabled Spaces

by

Roberto Calderon
B.Arch., Universidad Iberoamericana, 2005

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF
THE REQUIREMENTS FOR THE DEGREE OF
MASTER OF SCIENCE
in
The Faculty of Graduate Studies
(Interdisciplinary Studies)

THE UNIVERSITY OF BRITISH COLUMBIA
(Vancouver)

August 2009

© Roberto Calderon 2009

Abstract

The Socio-Political Adaptative Communication Enabled Spaces (SPACES) research proposes a model for conceptualizing, understanding and constructing Cyborg Environments. A Cyborg Environment is an autopoietic system of inter-acting humans and space cyborgs – entities that have enhanced their senses through technology – based on a politics of action and embodiment that results in social systems that allow for communication to take place. The present document presents this conceptual model, its foundation in Architecture, Human Computer Interaction and Cognitive Science, and a set of experiments conducted to test its validity.

Table of Contents

Abstract
Table of Contents
List of Tables
List of Figures
Acknowledgements
Dedication

1 Introduction
  1.1 Contributions
  1.2 Structure of this document

2 Related work
  2.1 Architectural perspectives
    2.1.1 De-localized architectures
    2.1.2 Inter-acting architectures
  2.2 Human computer interaction perspectives
    2.2.1 Ubiquitous computing
    2.2.2 Spatial information
  2.3 Cognitive science perspectives
    2.3.1 Space as an experience
    2.3.2 Space as cognitive relationship

3 Cyborg environments
  3.1 Definition of a cyborg environment
    3.1.1 Definition of a cyborg space
  3.2 Models for a cyborg environment
    3.2.1 Cyborg space
    3.2.2 Cyborg communication
    3.2.3 Cyborg system

4 Experiments
  4.1 Similar experiments
    4.1.1 Space as stimuli
    4.1.2 Space as information
    4.1.3 Agent-human interactions
    4.1.4 Technologically enhanced built architectures
  4.2 A pilot study of interactive spatial perception
    4.2.1 Device description
    4.2.2 Measurements
    4.2.3 Experimental design
    4.2.4 Results
  4.3 Training wheel study
    4.3.1 Device description
    4.3.2 Measurements
    4.3.3 Experimental design
    4.3.4 Results
  4.4 Spatial effects of visual stimuli
    4.4.1 Device description
    4.4.2 Measurements
    4.4.3 Experimental design
    4.4.4 Results
  4.5 Spatial effects of directional light intensity
    4.5.1 Device description
    4.5.2 Measurements
    4.5.3 Experimental design
    4.5.4 Results
  4.6 SENA prototype
    4.6.1 Device description
    4.6.2 Measurements
    4.6.3 Experimental design
    4.6.4 Results

5 Conclusions
  5.1 Recapitulation of findings
  5.2 Implications
  5.3 Future perspectives
  5.4 Conclusions

Bibliography

List of Tables

1.1 Hypotheses and experiments conducted.
4.1 Initial pilot study, tasks to be performed by test subjects.
4.2 First pilot study, stimuli used.
4.3 Measurements.
4.4 Experimental design for human training wheel experiment.
4.5 Stimuli tested and their H-S-V characteristics used to test spatiality of visual stimulation.
4.6 Scale used to measure spatial size.
4.7 Biased sizes primed during pre-test.
4.8 Experimental design for spatial size of visual stimuli measurements.
4.9 Length of virtual space.
4.10 Directional stimuli spatial perception, configurations.
4.11 Stimuli definition.
4.12 Treatments applied.
4.13 Experimental design for the SENA experiment.
4.14 Percentages of total stimulus-action correlations across all subjects.
5.1 Theoretical model and experiments conducted.

List of Figures

1.1 Structure of this document.
2.1 Space perception loop.
2.2 Stimuli evoke a spatial experience.
2.3 Navigation process based on Jul and Furnas.
3.1 Evoking a spatial experience through stimuli.
3.2 Models for a cyborg environment.
3.3 Cube model for a cyborg architecture taxonomy.
3.4 Serres's model of communication.
3.5 Diamond model for a cyborg's communication ability.
3.6 Communication process between two cyborgs.
3.7 Citizenship based on the access to different stages of action.
3.8 Paired space and human perceptive loops.
3.9 Human and space encoded agent forming an organic interactive system.
4.1 General setting and dimensions.
4.2 Stimulus presented.
4.3 Initial pilot study, example stimulus.
4.4 Experiment flow.
4.5 Motion patterns of a subject according to each randomized stimulus presented for 120 seconds.
4.6 Arrangement of a female participant with tags over color cubes.
4.7 Two theories of control based on reflective consciousness manipulation.
4.8 Coordinate system.
4.9 Interactive coffee table and power key.
4.10 A participant interacting with the space.
4.11 Space model and alterations performed.
4.12 Experiment flow.
4.13 A participant interacting with the power key.
4.14 Recorded paths with a latency of 120 seconds of a test subject in the semi-immersive task group.
4.15 Location iterations for the x axis of a test subject in the immersive task group.
4.16 Means of total distance of movement in both bidimensionally and tridimensionally altered color spaces.
4.17 Means of total distance of movement in each bidimensionally and three-dimensionally altered color spaces.
4.18 Means of distance from the new home location to the hypothesized displaced home location.
4.19 A user rating a specific visual characteristic.
4.20 Software used to train users in spatial perception with both color and non-color primers.
4.21 Experiment flow.
4.22 Linear regression on perceived spatial size of color value alterations.
4.23 Means of spatial size perceived with different peripheral light stimulations.
4.24 Head mounted display and virtual world.
4.25 Treatments applied (highly illuminated surfaces are represented by red planes).
4.26 Effects of directional light intensity. Motion analyzer software.
4.27 Spatial effects of directional light intensity. Experiment flow.
4.28 Analysis of a participant.
4.29 Spatial stimuli used by the system.
4.30 Experiment setup.
4.31 Emotion selector.
4.32 Interaction Bayesian network.
4.33 Communication Bayesian network.
4.34 Experiment flow.
4.35 Means of actions that correlate to one, and only one, stimulus.
4.36 Count of appearances of human actions related to each spatial stimulus.
4.37 Means of behavior states recorded during each stimulus, across all groups.
4.38 Means of total count of overall motion high appearances.
4.39 Means of total count of overall position right appearances.
4.40 Means of high motion related to high valued stimuli.
4.41 Means of high motion related to low valued stimuli.
4.42 Effects of a-priori beliefs of an interactive system.
4.43 Means of accurately decoded messages.
4.44 Agreement on message transmission and decoding ease.

Acknowledgements

This research would not have been possible without the invaluable support of my supervisory committee. I am deeply grateful to Sidney Fels from the department of Electrical and Computer Engineering, Oliver Neumann from the department of Architecture and Lawrence Ward from the department of Psychology.

Dedication

To my hero, Martha Arámburu.
To my inspiration, Cécile Beaufils.

Chapter 1
Introduction

Digital technology has become so ubiquitous that the very essence of our societies has begun to depend on the capabilities of the computing machine. We rely on the power of computers to do business, defend our nations or prevent worldwide catastrophes. In his article Transmitting Architecture: the Transphysical City, Marcos Novak writes that

    . . . Cyberspace as a whole, and networked virtual environments in particular, allow us to not only theorize about potential architectures informed by the best of current thought, but to actually construct such spaces for human inhabitation in a completely new kind of public realm. . . [50]

Novak envisions an Architecture of cyberspace based on global networks and human interaction. His "liquid spaces", molded by algorithms that interweave and interact when human variables exert force on them, are informational clusters that allow for human interaction that escapes the physical world. Within this new realm, the designer stops being interested in the physical component of the architectural structure and focuses on the definition of variables that build coherent self-regulating and self-constructing architectural systems. The materiality of glass and concrete succumbs to information, while Architecture remains, grows and mutates to this new kind of spatiality.
In Novak's words,

    This [cyberspace] does not imply a lack of constraint, but rather a substitution of one kind of rigor for another. When bricks become pixels, the tectonics of Architecture become informational. City planning becomes data structure design, construction costs become computational costs, accessibility becomes transmissibility, proximity is measured in numbers of required links and available bandwidth. Everything changes, but architecture remains.[50]

Architecture has always been the solidification of culture and the discipline of construction. Over the centuries architects have discussed and built the spatial realities where we perform our soliloquies and social interactions. Nevertheless, contemporary technology has radically affected how we interact with our world and other human beings. We have become inhabitants of the de-localized city and the enhanced living; our cities are becoming less dependent on their topography and humans are becoming self-designed cyborgs.[1]

What is the role of architectural practice in this reality? What will the Architecture of tomorrow be like? But above all, how should the architect of today and tomorrow address a society no longer in need of built spaces?

The Socio-Political Adaptative Communication Enabled Spaces (SPACES) research envisions a world where humans cohabit with their environments as they do with other human beings. It foresees an organic and complex correlation between humans and built spaces that can perceive the world and consciously act upon it. This ideal forms the foundation of a new paradigm of Architecture that centers on the social, political and adaptative nature of an environment with the right to become a citizen of our societies. Five definitions form the core of such understanding:

- Social: A spatial entity able to interact with its inhabitants should become part of a social system of inter-action formed by, at least, itself and a human inhabitant.
- Political: Interactivity between entities with distinct action capabilities renders the need for a political structure – citizenship – based on the abilities to inter-act.

- Adaptative: Action-based and social entities can only arise through the adaptability that characterizes living things. Memory, cognition and use of control render an entity adaptable.

- Communication: The inter-active social definition of these spaces is based on their capability to communicate meaning.

- Space: These entities are conceived as spatial experiences.

These concepts fit within an understanding of Architecture as an informational system that can be de-localized[7] and fragmented into the networked[62] nature of our present world. Such an understanding of Architecture as a relationship between humans and mutating interactive environments[6], conceptualized as prostheses of the human body[15] and the city[55], has already promoted built architectures that can be conceived as interfaces[33] to an informational realm, or prosthetic enhancements to everyday living[37]. Nevertheless, the SPACES definitions have attempted to expand this knowledge and recognize built space as a purely informational and cognitive process dependent on a complex system of co-relationships between human behavior and spatial perceptions driven by rough artificial intelligences. The present investigation has conceptualized an architectural entity that embodies humans, creates a social connection with them and acts intelligently on its perceived reality – i.e. its inhabitants.

By modeling this new paradigm we are achieving a deeper understanding of our technologically enhanced world and allowing for a more human implementation of such enhanced built environments. Above all, the present research aims at raising awareness of architectures that are both interfaces to sociality and active participants of a cyborg society.

[1] These concepts will be discussed further in the sections that follow.
This should trigger a deeper discussion about self-modifying environments that embody and are embodied by humans in an endless process of social co-adaptation.

The following sections of this chapter present the main contributions of the investigation and the structure of this thesis.

1.1 Contributions

The present exploration has attempted to create a model of space that is coherent with the needs and technological knowledge of our contemporary societies. The present research has been founded on two concepts that have become widespread in our contemporary world:

- Spatial Experience: Space is a collection of cognitive processes that interact with one another to create a spatial experience of the world. Therefore, space is not dependent on the objects in the real world, but on the relationships between cognitive events that arise when stimuli are perceived.

- Cyborg: A cyborg is an entity that has enhanced its natural abilities through prosthetic enhancements. There exist human cyborgs, humans that have been enhanced through any pharmaceutical or electronic technology, and space cyborgs, spaces that have been enhanced to perceive, synthetically undergo a cognition process, and act upon the world.

The present investigation has assumed that a human spatial experience is cognitive and independent of the objects that evoke it, and thus that a space can perceive a 'space' of its own if enhanced with a synthetic cognitive ability. Such a space and its inhabitants form an autopoietic system of prosthetic nature, i.e. cyborg nature, in which its parts communicate, socialize and share control within a political structure of inter-action. This has allowed the present investigation to conceptualize a system of inter-relationships between a space enhanced with cognitive abilities and human beings, or a Cyborg Environment:

- Cyborg Environments: Both humans and cyborg spaces[2] are entities that undergo spatial experiences that depend on one another, i.e.
space cyborgs evoke spatial experiences in humans while humans evoke spatial experiences in space cyborgs. When paired, these two beings form self-regulating systems called Cyborg Environments that are based on perception and action, and that allow for sociality to arise. For instance, if one of the entities is altered, the other entity suffers a correlated alteration that can be defined as social. Furthermore, these inter-action systems allow for meaningful communication to take place outside their self-regulatory relationship. Successful communication depends on a previously agreed semantic arrangement of actions, i.e. a code, that can be used to transmit a message.

The conceptualized model is based on the definition of a space enhanced with cognition, i.e. a cyborg space, its communication abilities and the systems formed of cyborg spaces and humans:

- Cyborg space: Embodiment, enclosure and alteration capabilities of a cyborg space.

- Cyborg communication: Communication capabilities of a cyborg space.

- Cyborg system: Self-regulatory (autopoietic) systems formed of cyborg spaces and humans.

This model has been validated by testing six hypotheses:

1. A cyborg space can embody humans and humans can embody space.
2. A cyborg space, i.e. an enclosure, can be evoked through stimuli.
3. The perception of a cyborg space can be manipulated by altering the stimuli that evoke it.
4. Cyborg communication depends on an agreed code that lies outside the interactive process.
5. Cyborg systems depend on action.
6. Cyborg systems are autopoietic, i.e. self-regulating networks of relationships.

[2] A cluster of stimuli that evokes a spatial experience of prosthetic and informational nature (within cyberspace) and that forms a self-regulating network of relationships between itself and its inhabitants by being able to undergo spatial experiences of its own.

One pilot study and four experiments were conducted over a span of two years.
Each one of the four experiments was designed to test the presented hypotheses according to table 1.1.

Hypothesis                                            Experiments
Cyborg space can embody humans.                       Training Wheel Study.
Enclosures can be evoked through stimuli.             Effects of visual stimuli.
Enclosures can be altered through stimuli.            Effects of visual stimuli.
                                                      Effects of directional light.
Cyborg communication lies outside the inter-action.   SENA prototype.
Cyborg systems depend on action.                      SENA prototype.
Cyborg systems are autopoietic.                       SENA prototype.

Table 1.1: Hypotheses and experiments conducted.

The first pilot study, A pilot study of interactive spatial perception, studied the behavioral and psychological effects of mutating spaces. Measurements of attention, immersion, motion patterns and gender were performed.

The second experiment, Training Wheel Study, measured the embodying capabilities of a changing and 'perceiving' space – a software agent controlling a visual characteristic of the space according to human movement. Measurements of immersion, motion patterns and embodiment were done.

The third experiment, Spatial effects of visual stimuli, measured the spatial capabilities of various color stimuli presented peripherally to human subjects. Measurement of the spatial size perceived by human beings was performed.

The fourth experiment, Spatial effects of directional light intensity, measured the spatial and behavioral effects of various directional light intensities. Measurements of kinesthetic movement, head rotations and subjective perception of space were done.

The fifth experiment, SENA prototype, explored a simple implementation of a Cyborg Environment formed of a cyborg space and humans.
Measurements of interactivity, stimuli-behavior correlation, message transmission, sociality, aesthetic perception and a-priori knowledge biases were performed.

The final contribution of the SPACES research is of a methodological nature. The experiments conducted in the present investigation have shown that a user-centered approach to spatial behaviors can and should be included in architectural practice. The general approach to Architecture as a static process of creation – i.e. user screening, construction and post-construction validation – can be replaced with an approach to design founded on user studies that investigate human perception and behavior through simple testing prototypes. This allows for the creation of concrete and replicable models that can be applied to several stages of architectural design.

Furthermore, the increasing flexibility and interactivity of built environments should allow for this process to be carried out throughout the lifespan of a building, enhancing it and altering it to the changing needs of its inhabitants. This proposed methodology is equivalent to software development, where design is centered not on the hardware of the specific architecture of the computing machine, but on the human usage of such infrastructure. In other words, Architecture should focus on the development of the states and interfaces that a specific building can provide, rather than on the specific structure that promotes such events; this can only be achieved through constant user-centered validations of such interactions.

1.2 Structure of this document

The present document is divided into four parts, as presented in figure 1.1.

The first part, Introduction, contains chapters one and two. Chapter one presents the need for a new architectural perspective and introduces some concepts that will be developed later in the document.
Chapter two presents related work that has been done in the fields of Architecture, Human Computer Interaction and Cognitive Science and that has served as foundation for the conceptualization of Cyborg Environments.

The second part, Cyborg Environments, is formed of chapter three. This chapter presents the concept of a Cyborg Environment and proposes a model for understanding and creating such entities. The model is based on the definition of a cyborg space, its communication abilities, and the autopoietic systems that result when humans and cyborg spaces interact.

The third part, Experiments, is formed of chapter four. Chapter four introduces similar experiments that have been conducted by other researchers and presents the experiments that were performed as part of the present research. Each experiment explores and tests one or several concepts proposed in chapter three.

Finally, the fourth part, Conclusions, is formed of chapter five. It presents the implications of the experiments so far conducted and gives future perspectives on the topic of Cyborg Environments.

[Figure 1.1: Structure of this document.]

Chapter 2
Related work

2.1 Architectural perspectives

In a short article entitled Virtual Architecture - Real Space, Hani Rashid declares that the present society is in "the very early stages of a digital revolution whose direction we will not be certain of for some time"[57]. Affordable personal computing systems have become widespread, and their effects on society are strong.
We live in a global village[44] of increasing interactivity and complexity[51].

What is the role of Architecture in this changed world? Schumacher believes that

    Architecture has to react to societal and technological changes. It has to maintain its ability to deliver solutions. But its very problems are no longer predefined [today]. In fact, these problems are themselves a function of the ongoing autopoiesis of Architecture. Architectural experimentation has to leap into the dark, hoping that sufficient fragments of its manifold audience will throw themselves into architecture's browsing trajectory.[62]

The current fragmentation of architectural practice into diversified and multifaceted practices makes an introduction to architectural thinking a task with a high risk of producing extremely simplified results. Nevertheless, a general taxonomy of the work that is relevant to the present research is needed. Two main clusters of architectural thought have served as foundation for the theorization of socio-political adaptative communication enabled spaces:

- Non-localized: Architectures focused on the de-localization of Architecture into informational networks.

- Inter-acting: Architectures exploring the inter-active[3] capabilities of space as a real-time mutating entity.

[3] The word inter-action is used in this document to emphasize the correlation between actions belonging to different entities within an interactive system.

2.1.1 De-localized architectures

The virtualization – conceptualization as information – of reality that has resulted from the informational revolution of our era has changed the way we understand and act upon our world.
As Lucy Bullivant puts it, even "war has come to be fought and projected virtually as well as physically; commerce relies on the fourth dimension of the spatialization of time achieved through dislocated virtual connectivity."[4] Such a "redefinition of human relations"[4] has resulted in a de-localization of Architecture into the informational networks of our present societies.

This de-localization of reality into communication is evident in Ben Rubin's Listening Post structure. "Anyone who types a message in a chatroom... is calling out for a response"[7], Ben Rubin says. In response, he has created a structure that builds a visual/sound scape from a torrent of endless communication activity. His ephemeral work represents network communication as a tectonic matter that can be used to construct architectural space. In other words, "pliable and responsive digital environments potentially constitute specific new types of structures raising the haptic and intuitive threshold of public and private space."[4]

Moreover, since the city itself can be conceived as an informational entity, it is possible to conceptualize an architectural network that can be spatially modeled. Contemporary efforts like Neil Leach's swarm tectonics, which crystallize over a site, or veech media architecture's vma-mobile environments, which aim at the "implantation of a temporary architectural environment into the urban setting which could activate and polarize the urban citizens"[62], demonstrate this conceptualization.

This transparency between the real (the city) and the virtual (the network) has been carried into further explorations of informational and interactive nature.
Usman Haque's Sky Ear explores the disruption of the "perceptual boundaries between the physical and virtual by encouraging people to become creative participants in a Hertzian performance, allowing [them] to see [their] daily interactions within the invisible topographies of Hertzian space."[9] By releasing a colorful structure into the air, participants of this architecture were able to extend their senses into the invisible waves of telecommunications, generally hidden to the naked eye.

MIT's Media House project is the architectural translation of such immateriality. Usually described by stating that the house is the computer, the structure is the network, the entity leaves behind any anthropomorphic or aesthetic definition in order to be "identifiable in social, psychological and sensorial dimensions"[8] and to be modeled and manipulated through LAN (Local Area Network) definitions. Concrete and glass are replaced by bits that can be translated into sensations combined and presented by different types of surfaces: screens, light, air, temperature.

The control of such informational bits is the main concern of these architectures of the intangible. Somlai-Fischer's interactive spatial installations "talk about a new relationship between technology and design, in which the role and effect of technology reveals a more profound relation between design and design tools"[5] and impersonate the main objectives of what will later become inter-activity.

Networked relationships and their communicative channels form the language of these architectures. Architecture is understood as a network of human somatosensory experiences that can be evoked through the architectural entity according to the system's needs. Jason Bruges, for example, "...redefines the role of the architect as maker of responsive environments"[6].
In his Memory Wall for the Puerta America Hotel in Madrid, Bruges uses a collection of computer vision and action algorithms to create a system where

    The motions of the individuals inside [the space] act as a catalyst for an ambient visual projection, in which motion and form are captured, filtered and projected onto the wall surfaces in a continuous loop, with memories of the day building up on them.[6]

For Bruges, his "compositions are not complete without the interaction of an individual... [because] each person experiencing one of [his] works will have their own unique memory of it"[6]. This inter-relationship between a system's state, human state and perceptual memory is the ultimate result of a de-localized Architecture.

2.1.2 Inter-acting architectures

Ubiquitous computing and mobile communication technologies have placed the emphasis of interaction on connectivity rather than on location-dependent structures. Place, as a point of inter-action, has been supplanted by the concept of the interface or connection point. In this sense, Architecture as a built structure "becomes an embarrassment; it slows things down and moves attention away from [connectivity]. At the very least one could say it marginalizes... [the ability to]... connect to anybody, anywhere, any time through all the senses..."[3]

However, according to Diller and Scofidio, Architecture can become a part of this liveness. For them, liveness is "the mechanism of interactivity that originated in broadcasting, where electronic news is the instantaneous relay of the world"[55] and thus a state reproducible through other media, e.g. space. In their words,

    Real-time is key. Lag time, delay, search time, download time, feedback time are unwelcome mediations of liveness. Real time is the speed of computational performance, the ability of the computer to respond to the immediacy of an interaction without temporal mediation. Un-mediated means im-mediate.
    But whether motivated by the desire to preserve the real or to fabricate it, liveness is synonymous with the real, and the real is an object of uncritical desire for both techno-extremes [technophilic and technophobic].[55]

Changing form or providing the inhabitants of their architecture with an augmented reality, Diller and Scofidio's work from the 1990s responded to an ever-changing environment of interactions between human action, meaning and form through the manipulation of information. For them, Architecture was categorized according to its ability to produce and manipulate information. Furthermore, all activity in architectural space was conceived as inter-connection between humans and their surrounding space; in other words, between architecture and human flesh, or "the outermost surface of the body bordering all relations in space."[15]

A similar approach was taken by Oosterhuis's Polynuclear Landscape from 1998. Polynuclear Landscape is a programmable surface, a flesh or skin, that interacts in real time with its inhabitants. In this project Oosterhuis "...aims at designing a building body displaying real time behavior"[54]. This inter-active process is extended to both creators and users of the architectural entity: "co-designers and co-users work in real time both during the design process and in the life-cycle of the building body, the process never stops and leads to a mature time-based architecture."[54]

Here, Architecture "engages and creates adaptive and tactile physical environments which surround and envelope our physical and conceptual bodies... result[ing] in a seamless integration of information, technology and its users, generating an endlessly infinite sensitive surface... [a] liquefaction of the post-urban environment."[55]

This fluid nature of an Architecture preoccupied with temporality and liquidity had been depicted by Novak's work in the 1980s and 1990s.
The hybrid nature of his projects, at once real and virtual, hard and soft, fast and slow, is often described as "extreme provocation"[73]. Nevertheless, they crystallize a data-driven phenomenon centered in fluidity and departing from structure. Architectural "form follows neither function nor form"[73], but the complex tensions that arise within human-computer inter-actions.

2.2 Human computer interaction perspectives

Human Computer Interaction (HCI) studies focus on providing human-centered interface solutions to computing machines. However, computing systems are becoming diluted in our environment and their interfaces are turning more physical. As a result, HCI knowledge has become spatial in nature and focused on the interaction between humans and their environments.

A categorization of some of the most important experiments relevant to the present research, done as Human Computer Interaction investigations to address said issues, follows. An attempt to be both concise and broad in the selection of examples clearly exposing the understanding of space and spatial relationships in HCI thought has been made by proposing the following categorization:

- Ubiquitous: Explorations of the spatial nature and social capabilities of an environment formed of computing machines.

- Spatial: Explorations of the physical, kinesthetic and haptic properties of spatial manipulation of information.

2.2.1 Ubiquitous computing

Ubiquitous Computing is a term coined by Mark D. Weiser[69] to describe a future where computing machines are hidden in everyday objects and interaction with them is seamless.
The term does not allude to the standardization of the machine, but to its replication into a multitude of forms interacting among themselves to create an environment of computers.

The conceptualization of computing interfaces as spatial entities has been carried into the possibility of their inclusion in social and public systems of human interaction. Research by Eriksson et al. on public space enhanced by collaborative technology is one important example. Preoccupied with the fact that public space is becoming governed and purpose-oriented, and that private devices are effective isolators of the individual, the group has searched for ways to transform private devices into human-to-human interactive interfaces within public domains. The group proposes a democratic space where

    ...public users should be able to change [information]... towards a situation where the public can expose, comment and edit elements of the public space. Thereby, the space is formed and shaped by people passing by and not only by mimicking commercial interests.[23]

This conceptualization of space "created not only by the physical space, but more by the people present"[23] is built on the idea of the market place, i.e. a space where people are "able to come to... with their goods, trade, look around, play games, talk to each other, pick up stuff and leave again."[23] The market place allows a democratization of space, generating uncontrolled and parallel social interactions otherwise nonexistent.

This work predicts an era of community-oriented collaborative interfaces provided by architectural spaces, and an understanding of space as a networked informational interface.
As Kerne writes, "an interface functions as means of contact, a border zone, a layer hosting exchange, a nexus where resources such as information and power are circulated and transformed, a channel through which interactors communicate, or a conduit for message passing."[40]

2.2.2 Spatial information

Space cognition strongly depends on our interaction with space. Because "natural viewing situations are usually kinetic, involving a moving observer and moving objects"[22], a wide range of perceptions take place in human environmental perception. Using vision we find figures and weights, through sound and smell we find recognizable ambiences, and, if within reach, touch and taste allow us to intimately explore our surroundings. Haptic perception, provided by touch and kinesthetic movement, has proven to be strongly related to such spatial exploration and cognition. According to O'Neil,

    People gain environmental understanding from tangible physical experience, from coming in contact with natural and built elements, and from moving through spaces, as well as from seeing objects in space... when reinforced with our visual perception these holistic systems form our phenomenological understanding of the environment so that the whole sensory envelope creates in us the sense of spatiality.[53]

Virtual reality has its roots in the human ability to represent reality in terms of a conceptual expression. Representation of three-dimensional objects began with mathematical expression, which evolved into the geometrical definitions that gave rise to the perspectival representation of the spatial world. The computer allowed perspectival computation to be done at high speed and thus created real-time exploration of virtual (inexistent) reality.
However, the disparity between the represented images that stimulate the eyes of a person experiencing virtual reality and the other perceptual senses used in space perception – haptic or kinesthetic – results in what is known as "partial immersion" and a sense of a falsified reality, an artifice.

The work of Galyean et al. in 1991 investigated an intuitive tool for sculpting virtual volumes that tried to solve such "partial immersion". Haptic force and directional resistance, generally present in the real world, were achieved by the "poor man's force feedback unit", which provided resistance mimicking real object alteration. This understanding of virtuality as the provision of stimulation resulting in a perception of reality's components – light or forces – became the foundation for future explorations of spatial perception. Chen et al., for example, argue that

    ...interactive data exploration in virtual environments is mainly focused on vision-based and non-contact sensory channels such as visual/auditory displays. In addition to seeing and hearing, enabling users to touch, feel and manipulate virtual objects in a virtual environment provides a realistic sense of immersion in the environment that is otherwise not possible.[11]

Humans relate to space by visual, kinesthetic and haptic exploration. By moving through it we are able to explore its characteristics and form an accurate model of our surroundings. The Radial Arm Maze is an important paradigm in the study of space perception, introduced by David Olton in the 1970s.[17]
The paradigm has helped analyze a wide range of spatial issues such as "natural foraging behavior, short and long-term memory, spatial and nonspatial memory, working and reference memory, drug effects on behavior, ageing effects, strain and species differences in spatial competence, and strategic choice behavior."[26] Furthermore, it has also helped analyze spatial experience in contemporary virtual reality[56] by providing valuable information on the correlations between activity in different brain areas and human motion in space. It is now known that kinesthetic exploration is an important factor in the creation of appropriate mental maps of the space that surrounds a human body.

2.3 Cognitive science perspectives

Spatial perception is the result of a complex system of perceptions and brain computations that results in a spatial experience. Humans are equipped with sensors that gather information about both their body and their environment; this information is then structured in a coherent form by the brain to result in the experience humans call space.

2.3.1 Space as an experience

In his book Visual Space Perception, Maurice Hershenson describes the concept of the perceptual world as the foundation of any visual spatial cognition. In his words,

    ...the physical world exists outside the observer. The perceptual or visual world is experienced by the observer. It is produced by activity in the eye-brain system when patterned light stimulates the eyes of an observer.... The perceptual world is normally externalized, i.e., it is usually experienced as 'out there'...[22]

Spatiality is a cognitive process depending purely on the observer and detached from the object that evokes it. All objects are sources of information – stimuli; when these excite the sensory apparatus of a human being they have a probability of promoting a spatial experience.
Not all objects evoke a spatial experience, but all perceived objects are located within a spatial experience.

For example, some of an object's properties stimulate the retina of an observer, who in turn undergoes a cognitive process to identify the perceived stimuli as an object. Some of the object's characteristics – color, size or position – help the human observer categorize and structure his or her relationship to the perceived object. With a collection of several objects, in addition to the object previously perceived, the subject can achieve an enveloping sensation commonly known as environment. That is, the observer considers himself or herself positioned in an envelope formed of interrelated objects – stimulus producers. This enveloping sensation is what will be called in this study a Spatial Experience.

This Spatial Experience is formed of what Nigel Foreman and Raphael Gillett consider egocentrically encoded space, or "discrimination of a spatial locus with reference to the body midline or vertical visual meridian,"[26] and a further cognitive process requiring a higher level of cognition to construct a model of the world through "memory for inter-relationships between objects in a more global spatial framework"[26], or what Foreman and Gillett name allocentric encoding and navigational spatial skill. Both processes work together to deliver a Spatial Experience to a perceiving being.

Such an experiential process is then linked back to the human sensory system, allowing for verification and calibration of the constructed mental model through kinesthetic, haptic and sensorial perceptions – e.g. audible stimuli corresponding to the same perceived object. The outlined process (perception, spatial experience and calibration) will be named in this study a Space Perception Loop and is depicted in figure 2.1.

Stimuli are gathered continuously by the perceiving entity through its senses.
By considering the senses as static systems adding little or no information to the stimuli perceived, we can hypothesize that certain stimuli can repeatedly excite the appropriate cognitive apparatus, which in turn promotes a significant spatial experience. The spatial experience is separated from the object by at least three processes: the channel (conceptualized here as the way in which a collection of stimuli arrives at the human senses), the senses and the human cognitive apparatus. Space is therefore not 'outside' and dependent on the objects that appear to form it, but 'inside' and dependent on the cognitive processes that evoke it. Space is an experience and depends on a cognitive process (figure 2.2).

Figure 2.1: Space perception loop.

2.3.2 Space as cognitive relationship

Robert Harnish defines cognition as "the mental 'manipulation' (creation, transformation, deletion) of mental representations."[30] For a human to undergo a spatial experience, a cognitive process is necessary. The information that is gathered through the senses is processed by the brain to fit a model of relations, a mental model. When perceivers – humans – gather information from the world they arrange their perceptions in cognitive relationships. By comparing different stimuli, the brain can create models that can be used for ongoing or future processes related to its interaction with reality.

One of these processes is evident in how humans navigate their environment. Human beings have to gather information through their senses and use it in a navigational process of relationships in order to achieve their expected goals.
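The Space Perception Loop described above (stimuli arriving through a channel, a spatial experience being constructed, and the resulting mental model being verified and calibrated against further stimulation) can be illustrated with a minimal simulation. The sketch below is purely illustrative and not part of the thesis model: the class names, the one-dimensional stimulus "positions", the Gaussian channel noise and the weighted-average calibration rule are all assumptions made for the example.

```python
import random

random.seed(0)  # deterministic for the example

class Observer:
    """Minimal sketch of the Space Perception Loop:
    perceive stimuli -> construct a spatial experience -> calibrate."""

    def __init__(self):
        self.mental_model = {}  # stimulus id -> estimated 1-D position

    def perceive(self, stimuli):
        # The channel and the senses never deliver the object itself,
        # only noisy stimuli (Gaussian noise stands in for the channel).
        return {sid: pos + random.gauss(0, 0.1) for sid, pos in stimuli.items()}

    def calibrate(self, percepts):
        # Calibration: each new percept is blended with the existing
        # estimate (an illustrative weighted-average update rule).
        for sid, pos in percepts.items():
            old = self.mental_model.get(sid, pos)
            self.mental_model[sid] = 0.7 * old + 0.3 * pos

world = {"door": 2.0, "window": 5.0}  # 'objects' as 1-D stimulus sources
observer = Observer()
for _ in range(50):  # the loop runs continuously; 50 iterations here
    observer.calibrate(observer.perceive(world))

print(observer.mental_model)
```

The point of the sketch is the claim made above: space is 'inside'. The observer's model is built entirely from stimuli crossing the channel, never from direct access to the objects, yet repeated passes through the loop stabilize it into a usable spatial experience.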
Hirtle and Sorrows, in their article Navigation in Electronic Environments, assert that "many of the same cognitive principles that are important in navigation of physical spaces are also involved in navigation in electronic environments."[32] Their model (figure 2.3) outlines the mental process that people undergo during spatial tasks. Centered on data acquisition and processing, the model bears a strong similarity to human interaction models, especially those promoted by Donald Norman in Human Computer Interaction.[49]

Figure 2.2: Stimuli evoke a spatial experience.

This understanding of spatial navigation as data management allows for the conceptualization of space as an information-based cognitive process of relationships between stimuli. Because neither perceivers nor stimuli are static, changes in any of them affect such inter-relationships and result in a perception of the world as four-dimensional. Accordingly, there exist two types of stimulus changes or alterations that result in spatial model updating:

- Stimuli: Stimulus-based changes are produced by modifications in stimulus characteristics. Altering one or several characteristics of a single stimulus obliges a reconsideration of the mental model that said stimulus forms part of. Reality is a complex collection of stimuli that can be described in simpler characteristics (e.g., for a visual stimulus: hue, saturation and value).

- Perceiver: Perceiver-based changes are produced by alterations in the stimulus observer. Changes in point of view, perceptual capabilities or brain lesions (e.g. hemispatial neglect) result in mental model updating.

Figure 2.3: Navigation process based on Jul and Furnas.

Chapter 3

Cyborg environments

Asymptote's (Hani Rashid) definition of the E-gora aptly captures the increasing influence of contemporary technology on the cultural, economic and political components of our society. The E-gora is:
    [a] globally accessible non-place... [consisting] not only of Internet space and its substrata of e-mail, chat, virtual reality markup language (VRML), and CUSeeMe [a video conferencing technology], but also those familiar territories such as public access television, C-SPAN, court TV, and even the voyeuristic spectacles of 'caught on tape' programming that are so influential in the vast electro-sphere we now call a community.[57]

Because of the increasing inclusion of communication technology and ubiquitous computing in our world, the spaces of the future will be based on the same blocks that define the E-gora. Architecture in a world of communication becomes dependent on the immateriality of its connectivity definition. Tectonics – the visible characteristics giving expression to the relationship between structure and force[63] – become dependent on forces of inter-activity that expand in various directions.

An example of this conceptualization of Architecture as an informational entity is veech media architecture's ORF Schauplatze der Zukunft project from 1999. Their structure proposes a "simultaneity between virtual and real [by allowing] the user an opportunity to actively alter his/her environment and communicate these modifications to a third party via invisible transmission."[62] The real inhabitancy of such an architectural space is not the structural beams that mark the location where the environmental transformation occurs, but the interactive process that arises between the experiencer of such alteration and the receiver of the sent modifications.

The participants in these architectures become not only inhabitants of a built structure, but symbiotic parts of the architectural system. Because these structures have been conceptualized as informational clusters, the humans that inhabit them become embodied parts of these de-territorialized and interacting architectures. In other words, these architectures can be conceived as systems of co-embodying entities – space and human – that interact among themselves to generate, manipulate and transmit information in a coherent way.

3.1 Definition of a cyborg environment

A cyborg is an entity that has used biological or electronic technology to enhance its natural definition. The word cyborg was first used by Manfred Clynes in an article called Cyborgs and Space[12]; formed of the words cybernetic and organism, it depicted the radical technological enhancements needed by astronauts to survive in non-natural environments such as outer space. Gray defines a cyborg as a

    ...self-regulating organism that combines the natural and artificial together in one system. Cyborgs do not have to be part human, for any organism/system that mixes the evolved and the made, the living and the inanimate, is technically a cyborg. This would include biocomputers based on organic processes, along with roaches with implants and bioengineered microbes.[28]

In this sense, a cyborg is an entity that has chosen to extend its abilities through prosthetic entities that strongly challenge the natural evolution of its body. A prosthetic and reconfigured body, a cyborg, cannot conceive of its reality as static. For this organism, the ambient can be re-codified and personalized into a multiplicity of suitable alternatives, depending on the prostheses used to interact with it. The cyborg itself can accelerate, or decelerate, through technology its adaptation to any environment.
Indeed, the original definition of a cyborg by Clynes and Kline states that "humans could be modified with implants and drugs so that they could exist in space without space suits."[28] Donna Haraway writes in her Cyborg Manifesto that

    High-tech culture challenges these dualisms [organism/machine, mind/body, animal/human, energy/fatigue, public/private, nature/culture, male/female, primitive/civilized] in intriguing ways. It is not clear who makes and who is made in the relation between human and machine. It is not clear what is mind and what is body in machines that resolve into coding practices. In so far as we know ourselves in both formal discourse (e.g., biology) and in daily practice (e.g., the homework economy in the integrated circuit), we find ourselves to be cyborgs, hybrids, mosaics, chimeras. Biological organisms have become biotic systems, communication devices like others. There is no fundamental ontological separation in our formal knowledge of machine and organism, of technical and organic.[29]

This lack of distinction between machine and organism, or space and inhabitant, has created the need for architectural entities of a prosthetic nature. Because cybernetic entities are seamless, self-regulating organisms that can be enhanced at any moment, inter-action with them is only possible through a process of adaptative cyborgization, by becoming part of their prosthetic extensions. In other words, in order to attend to the volatile nature of the increasing number of human cyborgs, architectural space has to become a cyborg. According to Teyssot,

    ...the first task architecture ought to assume, therefore, is that of defining and imagining an environment not just for 'natural' bodies but for bodies projected outside themselves, absent and ecstatic, by means of their technologically extended senses.
    Far from assimilating the tool with the body according to the mechanistic tradition of Cartesian dualism, we must conceive tool and instrument 'like a second sort of body, incorporated into and extending our corporal powers.' [Leder, The Absent Body, p. 34][15]

Similar in nature, our bodies and their environment can be paired and connected to create a two-way cyborg system of ontological coupling[34]. Architecture becomes the extension of the body and the body becomes the extension of architecture, i.e. prosthetic.

According to Leder, "incorporation is what enables us to acquire new abilities - these abilities can settle into fixed habits. As time passes, these repeated habits are definitely 'incorporated' and disappear from our view. They become enveloped within the interior of a body-structure"[42]; they become prosthetic. This process of embodiment causes the body's limits to "literally delaminate into the multiple surfaces and interfaces of cyberspace... [the body then undergoes] a mutation, becoming a living (and thus dying) machine."[15] Herein lies the purpose of spatiality in a world of cyborg prosthetics: embodiment of the world.

A Cyborg Environment is a conceptual model that addresses this prosthetic relationship. It is formed of humans and cyborg spaces that inter-act in a self-regulatory manner through their perception and action capabilities. The system that arises from this inter-action is based on a social structure of co-adaptation that allows for meaningful communication to take place if a code, outside the co-adaptative inter-action, is agreed upon by all parties of the system.

The conceptualized system, a Cyborg Environment, depends on the definition of a cyborg space, or a spatial reality of a cyborg, i.e. prosthetic, nature.
The sections below explain the definition of such a cyborg space.

3.1.1 Definition of a cyborg space

A cyborg space is a cluster of stimuli that evokes a spatial experience of a prosthetic and informational nature (within cyberspace) and that forms a self-regulating network of relationships between itself and its inhabitants by being able to undergo spatial experiences of its own. In other words, a cyborg space is:

- defined as stimuli,
- dependent on presence,
- within cyberspace,
- autopoietic, and
- subject to spatial experiences.

A cyborg space is defined as stimuli

The head-mounted display is an example of the understanding of space – virtualized space, in this case – as an entity that can be embodied by the human body. Generally used to deliver highly immersive – i.e. closer to real – environmental stimulation, it has proven that correct emulation of spatial characteristics provides a strong sense of embodying space, or of the virtual representation of oneself in the virtual world. For example, the research of Rothbaum et al. on Fear of Flying Exposure Therapy used virtual reality to simulate fear experiences. Their experiments have used

    ...an immersive head-mounted display (HMD) that consists of a display screen for each eye, earphones, and a head-tracking device, while sitting or standing on a low platform atop a bass speaker, thus placing the user within a multisensory, 360-degree environment that can provide visual, auditory, and kinesthetic cues (i.e., vibrations).[59]

to measure and study immersion. According to the researchers, "although the user's experience is entirely computer-generated, the individual's perception overlooks the role of technology in the experience"[59] and renders the experience as sufficiently real to be embodied by some of the human participants.

If we return to Warren Robinett's original description of the use of the head-mounted display to "project our eyes, ears, and hands in robot bodies at distant and dangerous places..
. [or being] able to create synthetic senses that let us see things that are invisible to our ordinary senses,"[58] it is possible to understand this technology as an entity based on visual stimuli that provides the prosthetic effect of inhabiting the world.

Therefore, a conception of space based on three vectors – (x, y, z) – augmented by a fourth vector, t (time), to fit a thermodynamic world can be replaced with a more contemporary understanding of space as a sensory perception of events outside the human body, recorded as a stream[39] in the human mind. This definition allows the creation of a space science based on the world's intrinsically changing nature and the physiologically based perception of stimuli available to the observer. A cyborg space, therefore, is such a collection of perceivable stimuli that results in a sense of being there, or embodying space.

A cyborg space is dependent on presence

A cyborg is defined by its extended perception and action capabilities. Moreover, a cyborg is defined and constrained by its environment and can only become a cyborg in a space of a prosthetic nature.

In 2006, Tichon and Banks researched the differences between delivering virtual environments for exposure therapy on a desktop PC and on a 150-degree screen. The study suggested "that porting a virtual exposure therapy environment from a semi-immersive interface to [a] desktop PC does not significantly impact presence".[68] In other words, immersive and non-immersive visual stimulation can equally affect human perception if such stimulation is done appropriately. According to their description, "psychologically, a successful virtual experience will make the user become involved in the world to the point where he or she experiences a sense of presence in the virtual world of 'really being there.'"[68]

Because the cyborg space is defined as a collection of stimuli that results in a spatial experience, a proper delivery of stimuli that creates a strong sense of presence is needed.
Presence can be tested with the Presence Questionnaire developed by Witmer and Singer[71] in 1998, which measures four factors of accurate presence, or immersion:

1. The human control of the environment being presented,
2. Successful reduction of distraction factors,
3. A level of sensory stimulation conveying sufficient information to the beholder's senses, and
4. The realism of the information being presented, correlating with the characteristics of the real world.

[The results of the Tichon and Banks experiment should be taken with caution. Virtual door control was automatic in the 150-degree experiment, while it was done manually by the experimenter in the desktop PC environment; this could have affected the control variable of the experiment. In the words of the experimenters, "presence is usually enhanced when the user can exert a greater level of control over the task environment or has an increased ability to interact in the environment."[68]]

Tichon and Banks' experiments have provided proof, however, that said presence is not dependent on the apparatus presenting the stimuli, but on appropriate stimulation methodologies and the design of the stimuli. An architectural spatiality based on its ability to embody human inhabitants should be defined and measured through its capabilities to promote presence. This, as we have seen, is not dependent on the technology available, but on the appropriate implementation of such technology, and it is scientifically measurable.

A cyborg space is within cyberspace

Humans have learned to shape their surroundings according to their needs. Environment modification can be achieved by appropriation of an existing space, by alteration of an environment's existing qualities, or by superimposing alien structures constructed from the environment's parts.
The technology of every era has allowed for different methods of creating these alterations of the environment, or of constructing structures within it.

Digital computers ignited cybernetics, a concept that "elaborated Descartes's mechanistic view of the world and looked at humans as information processing machines"[16], creating the possibility of conceptualizing information-processing machines as humanized entities. ("The race against German scientists to build an atomic bomb and the need to break the codes of the Nazi cipher machines were major forces behind the development of high-speed calculating machines...."[16])

Robots are the outcome of the cybernetic concept, their function being that of enhancing human activities, but also of freeing humans. In 1921, Karel Čapek's play R.U.R. – Rossum's Universal Robots – presented the idea of artificial people called robots. Robot, in Czech, means "both forced labor and worker"[10] and depicted an entity that would build a world where "everybody [would] be free from worry and liberated from the degradation of labor."[10]

The robot became an essential component of a society founded on performance and comfort. The ultimate task of the computer, as Landauer explains, was "to replace humans in the performance of tasks"[13] and to automate human abilities, liberating and extending the capabilities of its users. With ubiquitous computing, and cyborgization, cities began becoming large-scale robots that would augment the living environment of humans by providing services otherwise nonexistent in the natural world – wireless connectivity, traffic control, mobile advertisement, etc.

This network of services is cyberspace, or the ultimate robot surrounding.
Created purely of non-material construction blocks – stimulation, communication abilities, information – it radically differs from its natural counterpart – real space – and can only be compared with similar extreme environments inhabited by cybernetic organisms. As Gray writes, "disembodiment in cyberspace is hyperbodiment in outer space, but both places are dependent on machines and therefore both places are inhabited only by machines - and cyborgs, of course."[28]

Cyberspace overlaps reality; it enhances and overtakes it like a virus assimilating a new host into its collectivity. It is a fact supported by Hirtle and Sorrows that "many of the same cognitive principles that are important in navigation of physical space are also involved in navigation in electronic environments."[18] This exchangeability of mental maps arising from both real space and cyberspace results in a tight relationship between the two. This allows cyberspace to co-exist with reality by embedding itself into it and providing the latter with de-territorialization and interacting capabilities.

For human cyborgs, the world around them – how they apprehend and connect to it – mimics how they understand and manipulate their self. For living cybernetic organisms the environment is an extension of their body, as much as their body is an extension of the world. They know that their actions tense and relax the streams of data that hold the world together, and accept that said actions are a result of the information clusters they have temporarily chosen to belong to.

A cyborg space is autopoietic

The concept of autopoiesis is a synthetic approach to model complex and inter-acting systems of relationships. It allows us to understand and theorize the structure and functioning of living entities, machines and organizations.
Humberto Maturana and Francisco Varela define an autopoietic machine as,

A machine organized (defined as a unity) as a network of processes of production (transformation and destruction) of components that produces the components which: (i) through their interactions and transformations continuously regenerate and realize the network of processes (relations) that produced them; and (ii) constitute it (the machine) as a concrete unity in the space in which they (the components) exist by specifying the topological domain of its realization as such a network. It follows that an autopoietic machine continuously generates and specifies its own organization through its operation as a system of production of its own components, and does this in an endless turnover of components under conditions of continuous perturbations and compensation of perturbations. Therefore, an autopoietic machine is an homeostatic (or rather a relations-static) system which has its own organization (defining network of relations) as the fundamental variable which it maintains constant.[35]

Entities that belong to an autopoietic system can also be formed of autopoietic sub-entities. A human being is an autopoietic system, in the same sense that a tiny brain stem cell in his or her head is an autopoietic, enclosed system. Within the walls of the stem cell a complex network of relationships allows for complex processes that permit its existence. In the same manner, the human skin encloses an autopoietic system of networked relationships that allow a human to live. Furthermore, the skin that encloses a human system connects it to other entities that form part of either the environment or the human collective. This forms a network of inter-acting parts of autopoietic definition called environment (footnote 8).

Autopoietic systems are "purposeless systems"[35] enclosed into their own self-regulation process. There exists no goal to achieve, yet the system is considered to be alive.
The system's regulatory rules are only self-evident to the parts of the system – moreover, these parts are unaware of the former. A lack of "inputs" and "outputs" isolates the system and creates a self-contained, unobservable entity. In Maturana's words,

Since the autopoietic machine has no inputs or outputs, any correlation between regularly occurring independent events that perturb it, and the state to state transitions that arise from these perturbations, which the observer may pretend to reveal, pertain to the history of the machine in the context of the observation, and not to the operation of its autopoietic organization.[35]

Cyborgs are autopoietic. They are part of systematic interactivity of autopoietic form with their environment – from which they are both separated and to which they are linked – and other cyborgs of the same collective[16] (footnote 9). Their electric, biological and pharmaceutical enhancements link them to a vast and complex network of cyborg relations.

Footnote 8: The skin is not considered an input-output device, but a connection point between parts of an autopoietic system.

Footnote 9: Human skin is an important connection with the environment, while language is an important connection with other equally capable humans. Humans must have been seen as cyborgs or post-sapiens by other sapiens excluded from the human collective autopoietic system through their inaccessibility to spoken language.

By interacting with other parts of the autopoietic system these entities "regenerate and realize the network of processes" that make them cyborgs. In other words, cyborgs are cyborgs not by choice, but by definition. They were born cyborgs, in constant connection with their environment. Their actions are both a consequence of the system they belong to, and a cause of the specific state of said system of networked relationships.
There is no purpose in this inter-relationship, but a mere coherence in the flow of tightly related information.

Finally, the autopoietic relationship between a cyborg, other cyborgs and their environment is unmeasurable, due to its own enclosed definition. Understanding an autopoietic system is only possible by projecting the definitions of such a system onto another system[35]. By creating a simplified model and becoming part of its autopoiesis, we are able to describe, measure and replicate the original autopoietic system the simplified model relates to. An evident difference exists between the two systems, but it is only through this approximation that we can understand the nature of a cyborg space.

A cyborg space is subject to spatial experiences

Space can be conceptualized as a spatial experience resulting from a perceptual and cognitive process. By a process of synthesis over the cognitive models and processes outlined in the previous sections it is possible to hypothesize that spatial perception is a process formed of at least three parts:

1. A collection of stimuli
2. that are bound together by a cognitive relationship,
3. resulting in a specific space map or representation of space.

This process could be synthetically achieved by any entity capable of such processes. It is possible to theorize that there exist different types of spatialities, each corresponding to the perceiving capabilities of any entity capable of perception, cognition and action. Each one of these entities would perceive the world in a different manner and would undergo a cognitive process within its own biological – or electronic – capabilities. In other words, if an entity is capable of perceiving stimuli of any kind, constructing a spatial model through a cognitive process, and acting upon the perceived stimuli by using said model, we can consider such an entity capable of spatial experiences.
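This three-part capability test (perceive, bind into a model, act) can be sketched as a minimal loop. The class, method names and toy update rule below are illustrative assumptions, not part of the thesis:

```python
# Illustrative sketch: an entity counts as "capable of spatial experiences"
# if it can (1) perceive a collection of stimuli, (2) bind them into a
# space map through a cognitive relationship, and (3) act upon the
# perceived world using that map. All names here are hypothetical.

class SpatialEntity:
    def __init__(self):
        self.space_map = {}          # the entity's representation of space

    def perceive(self, stimuli):
        """(1) Collect raw stimuli, e.g. {'light': 0.8, 'sound': 0.2}."""
        return dict(stimuli)

    def bind(self, percepts):
        """(2) Bind stimuli together by a cognitive relationship."""
        self.space_map = {k: v for k, v in percepts.items()}
        return self.space_map

    def act(self, world):
        """(3) Act on the perceived world using the space map."""
        strongest = max(self.space_map, key=self.space_map.get)
        world[strongest] = world.get(strongest, 0) * 0.5  # dampen strongest
        return world

entity = SpatialEntity()
world = {'light': 0.8, 'sound': 0.2}
entity.bind(entity.perceive(world))
world = entity.act(world)
```

The same skeleton applies to either type of spatiality discussed below: only the stimuli perceived and the state updated differ between a human and a space.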
Figure 3.1: Evoking a spatial experience through stimuli.

The present research focuses on two types of spatiality: human spatiality and space spatiality. A human spatiality is the spatial experience arising from the cognitive process undergone by a human after perceiving space stimuli (light, sound, etc.). A space spatiality is the spatial experience arising from a synthetic cognitive process undergone by a space after perceiving human stimuli (movement, position, etc.).

Furthermore, following the previous sections it is possible to hypothesize three states of perceivable stimuli that in turn result in the perception of spatial experiences (figure 3.1):

- A fixed, properly arranged collection of stimuli,
- An induced relationship between two or more stimuli,
- An alteration of one or several stimuli over time.

It is possible to design systems based purely on stimuli that result in controlled spatial experiences. The present experiment has focused on stimuli arrangement and stimuli alteration, while controlling any induced relationships between stimuli.

3.2 Models for a cyborg environment

Three models (figure 3.2) have been created to understand, taxonomize and construct cyborg environments:

- Cyborg space (figure 3.2(a)): This model represents the definition of a Space Cyborg through its embodiment, enclosure and alteration capabilities. It defines a single self-standing cyborg space entity.
- Cyborg communication (figure 3.2(b)): This model represents the communication capabilities of a Cyborg Space; it defines the methods used to interact with similar entities – space or human cyborgs.
- Cyborg system (figure 3.2(c)): This model represents the cohesive system that arises when cyborgs – human or space cyborgs – interact.

Figure 3.2: Models for a cyborg environment
3.2.1 Cyborg space

A cyborg space is defined as a cluster of stimuli evoking a spatial experience of prosthetic and autopoietic nature. This definition can be modeled by measuring three capabilities, or dimensions, of cyborg spaces:

- Embodiment: the ability to achieve an autopoietic relationship with its inhabitants by promoting presence, or the sensation of "really being there",
- Enclosure: the ability to evoke a spatial experience in its inhabitants through stimuli within cyberspace, and
- Alteration: the ability to change over time.

Embodiment

According to Sidney Fels, "people form relationships with objects external to their own self. These objects may be other people, devices, or other external entities. The types of relationships that form and the aesthetics of the relationships motivate the development of interaction skill with objects as well as bonding."[25]

When studying a spatiality of cyborg nature it is critical to be able to conceive the possibility of a bidirectional embodiment. It is possible for any cyborg, due to its own nature, to ontologically become more machine or more organism in order to complement or dialogue with another cyborg. The process is simple and relates to the ability of said cyborg entity to take differential control over its two components. By doing so, it becomes part of – embodied by – the second cyborg and allows for dialogue to take place. Due to the equal nature of all participants within the autopoietic system the communication can be done reciprocally and both entities become self-embodied.

Fels proposes four prototypical relationships that have been named after their aesthetic definition:

1. In the first one, named Achieving, the human cyborg "stimulates the object with responds"[25]. Embodiment is achieved by the level of control that the stimulating entity can achieve over the stimulated one. The human cyborg is satisfied by achieving its task by using a space cyborg as its tool.
2. In the second one, Doing, the controlling human cyborg has extended into the embodied space cyborg's self, making it part of its own. The resulting intimacy is due to a transparency of the controlling device provided by the space cyborg's nature, or its interface transparency, as well as the ability of the embedding cyborg to control the embodied one.

3. The third, entitled Contemplation, is the result of the space cyborg stimulating the human cyborg. According to Fels, "based on the person's own knowledge and beliefs, the stimulus may be satisfying."[25]

4. The last type of relationship is called Belonging and is manifested when the space cyborg intimately embeds the human cyborg. A sense of belonging to the space cyborg can be experienced by an embodied human cyborg.

Enclosure

Space can be conceptualized as formed of two parts: the spatial liquid, or mental representation of the space, and the spatial membrane that contains such liquid. The spatial liquid concept represents the allocentric perception of space – mapping and navigation – as a purely cognitive process independent of reality. The membrane concept represents the egocentric characteristics of space perception – stereopsis and visual cues – that are dependent on objects that evoke stimuli of spatial nature. For example, when a human experient explores a virtual maze he or she uses visual cues presented by the maze's membrane to understand the spatial liquid that represents the virtual environment. Furthermore, because human cyborgs interact with a very large environment – reality – throughout their lives, it is important to conceive architectural delimitation to be within this larger spatial liquid.

Three main states of space enclosure, i.e. architectural delimitation of reality, can then be hypothesized:

1. Closed Cube. A fragment of existent liquid is perfectly enclosed with the use of a membrane; a new liquid is then formed through complete encapsulation.

2. Opened Cube.
A semi-closed membrane partially delimits an existent liquid. The partiality of the enclosure generates an incomplete and flowing mixture of both the existent liquid and the new liquid. The continuous flow between both entities defines the relationship between them.

3. Exploded Cube. A fragmented membrane serves as a disintegrative/integrative connection between two sub-sets of existent liquid. The high level of fragmentation and the apparent null spatial relationship between both realities drives the interaction to a stronger semantic level. Both linked sub-sets re-create themselves as new liquids by semantic definition rather than by physical connectivity.

Alteration

Transformations of the Cyborg Space – or delimited space liquid as explained above – can take place through three main methods:

1. Membrane Alteration. This kind of alteration involves any semantic and/or physical transformation of the enclosing membrane in order to obtain change in the spatial liquid.

2. Liquid Alteration. This alteration involves any morphologic alteration – i.e. bipartition, elongation, multiplication, expansion, compression – of the spatial liquid without the use of the spatial membrane. The semantic definition of the spatial liquid allows such transformation without physically altering the membrane that contains it.

3. Fusion Alteration. This alteration is defined by the progressive or conservative mixture of two spatial liquids resulting in a third homogeneous spatial liquid.

Model: the cube model

The model has three dimensions relating to the level of embodiment, the type of spatial enclosure and the type of alteration of which any space cyborg is capable. This model has been called The Cube Model (figure 3.3).

The Cube Model is an indexing of possible states of a spatiality of cyborg nature.
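The three dimensions of this index could be sketched as enumerations. The mapping of the embodiment axis to Fels's four relationships is one plausible reading, and the `CubeState` container is a hypothetical convenience, not part of the thesis:

```python
# Sketch of the Cube Model's three dimensions as an index of possible
# states of a cyborg space. Enum members follow the taxonomy in the text;
# the CubeState container and the embodiment mapping are assumptions.
from enum import Enum
from typing import NamedTuple

class Embodiment(Enum):
    ACHIEVING = 1      # human stimulates the space cyborg as a tool
    DOING = 2          # human extends into the space cyborg
    CONTEMPLATION = 3  # space cyborg stimulates the human
    BELONGING = 4      # space cyborg intimately embeds the human

class Enclosure(Enum):
    CLOSED_CUBE = 1    # complete encapsulation of a spatial liquid
    OPENED_CUBE = 2    # partial delimitation, flowing mixture
    EXPLODED_CUBE = 3  # fragmented membrane, semantic connection

class Alteration(Enum):
    MEMBRANE = 1       # transform the enclosing membrane
    LIQUID = 2         # morph the liquid without touching the membrane
    FUSION = 3         # mix two liquids into a third

class CubeState(NamedTuple):
    embodiment: Embodiment
    enclosure: Enclosure
    alteration: Alteration

# One cell of the cube's index:
state = CubeState(Embodiment.DOING, Enclosure.OPENED_CUBE, Alteration.LIQUID)
n_states = len(Embodiment) * len(Enclosure) * len(Alteration)
```

Under this reading the cube indexes 4 × 3 × 3 = 36 possible states of a cyborg spatiality.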
Cyborg spatial entities have three main qualities: they are able to embody other cyborgs for their own purposes, or to let themselves be embodied; they deal with space as an information generator and communication channel; and, finally, they are non-static entities, always in change. The model creates a categorization of each one of these characteristics and joins the result in a three-dimensional model.

Figure 3.3: Cube model for a cyborg architecture taxonomy.

3.2.2 Cyborg communication

Serres's communication theory

Communication is essential for the Space Cyborg. Due to its prosthetic nature it employs communication both within its own structure and between its prostheses and the entities that connect to them. Communication, however, depends on a constant opposition between the code used to en-code a message and the noise that both the channel and the interactive process produce. Michel Serres writes in his book Hermes: Literature, Science, Philosophy that

It can be shown easily enough that no method of communication is universal: on the contrary, all methods are regional, in other words, isomorphic to one language. The space of linguistic communication (which, therefore, is the standard model of any space of communication) is not isotropic. An object that is the universal communicator or that is universally communicated does, however, exist: the technical object in general. That is why we find, at the dawn of history, that the first diffusion belongs to it: its space of communication is isotropic. Let there be no misunderstanding: at stake here is a definition of prehistory. History begins with regional language and the space of anisotropic communication.
Hence this law of three states: technological isotropy, linguistic anisotropy, linguistic-technical isotropy. The third state should not be long in arriving.[64]

For Serres, language is both limited and defined by parasites, or bifurcations. In his words,

The story doesn't yet tell of the banquet, but of another story that tells, not yet of the banquet, but of another story that again. . . And what is spoken of is what it is a question of: bifurcations and branchings. That is to say, parasites. The story indefinitely chases in front of itself what it speaks of.[65]

Parasites are the interferences or bifurcations that occur when a message is transmitted. For Serres, all systems of communication have parasites – noise – that result in messages. In other words, noise is a fundamental part of the message and it gives existence to it. There is meaning because there exist bifurcations and branchings in the transmission of information.

Serres's model of communication (figure 3.4) is based on the opposition of such parasite, or noise, to the message's code. His model of communication is formed of four factors that interact with each other and shift over time, resulting in different communication scapes. The universal communicator is achieved through the elimination of noise, while a non-communicator is defined by the exclusion of the code from the system.

Figure 3.4: Serres's model of communication.

Model: the diamond model

For Donna Haraway, "Biological organisms have become biotic systems, communication devices like others"[29]. Built to communicate, both human cyborg and space cyborg rely on their inter-connection for true existence.
The Diamond Model is based on Serres's model of communication and is formed of five entities forming the vertices of the geometric representation of figure 3.5, where (1) is Human Cyborg A, (1') Human Cyborg B, (2) the Space Cyborg, (3) the Code and (4) the Noise.

Figure 3.5: Diamond model for a cyborg's communication ability.

In a system of interacting cyborgs (figure 3.6), the code is defined as the proper arrangement of stimuli in a semantic manner that en-codes a message, while the noise is defined as all external stimuli or inadequately arranged stimuli negating such en-coding. Both code and noise vertices vary across time and dynamically affect the communication system, which fluctuates between pure code and pure noise within each conversation – as proposed by Serres. Furthermore, the code and noise vertices lie outside the autopoietic interaction. Since both code and noise are independent of the stimuli, i.e. are the semantic relationship between stimuli, they may or may not arise in an interactive system. Therefore, a code should be agreed upon – or learned – and used appropriately to reduce the noise that derives from any interactive activity.

3.2.3 Cyborg system

Cyborgs are constructed to inter-act with – embody – other cyborgs. Due to their technological extensions, cybernetic entities both depend on and construct the environment they belong to. They become part of a system formed of interacting prosthetic abilities, i.e. both cyborg and environment create a structure of inter-activity based on communication: a societal system.

Figure 3.6: Communication process between two cyborgs.

Chris Hables Gray, interested in the political role of cyborgs in a post-human society, defines the idea of citizenship as a result of interaction between members of a community.
In his words:

Currently, judgments about the suitability for citizenship of individual humans and cyborgs are made on the grounds of their ability to take part in the discourse of the polis.[28]

This ability is acquired by being either 'natural' – being born in such a community – or by proving a 'belonging' to such community. The cyborgs proposed by the present research are based on and defined by their prosthetic inter-acting abilities, i.e. their capability to use their technological extensions to act upon their already acting environment. Both human and space cyborgs, due to their extension upon each other, are part of a system of inter-action where dialogue can only be achieved through embodiment, as defined by Fels.[25] Therefore, such citizenship – or definition of the capabilities of dialogue – in a Cyborg Environment is defined by the ability to use said acting capabilities to achieve meaningful communication.

Cyborg citizenship

Access to the various levels of action within a cyborg collective is restricted to citizens with different capabilities. A structure of this nature allows the group to achieve bigger goals with minimum effort. While some citizens of such a collective have access to higher levels of action, others are restricted to important and basic acting functions of the community. Both parts of the system are essential for the correct functioning of the group and rely on the interaction that arises between them.

Inter-action is the main asset of a cyborg citizen, i.e. the ability to connect its own actions with those belonging to other organisms of the collective. The capability of an entity to inter-act is based on the entity's own ability to act upon the world and other citizens. Norman describes any human action upon the world through three main actions. For him, "to get something done, you have to start with some notion of what is wanted – the goal that is to be achieved.
Then you have to do something to the world, that is, take action to move yourself or manipulate someone or something. Finally, you check to see that your goal was made."[49] This cycle can be fully described in seven parts that Norman calls the "seven stages of user activity":

1. Establishing the Goal,
2. Forming the Intention,
3. Specifying the Action Sequence,
4. Executing the Action,
5. Perceiving the System State,
6. Interpreting the State and
7. Evaluating the system state with respect to the Goals and Intentions.[48]

Norman's definition of action upon the world can be used to define a structure – citizenship – based on access to different stages of action. The more action an entity can get access to, the more power over the system it can achieve. This political structure is formed of four different citizenships (figure 3.7):

- Tools: Tools are entities that can execute an action sequence and/or perceive the state of the world. They are equipped with sensing and/or acting mechanisms that allow them to act upon the world.
- Actors: In addition to the capabilities of tools, actors can interpret the percepts gathered through their sensing systems and are able to construct a set of several commands – an ability of abstraction – in order to achieve higher commands.
- Agents: Agents are actors that have control over the intention to act and are able to evaluate the interpreted perceptions. Agents can be considered "intelligent" due to their high abstracting capabilities; however, these entities do not have access to or control over their goals. Agents are goal-oriented and can rarely modify this.
- Super-agents: Super-agents have access to and control over their goals. These entities have superior control over their interacting capabilities and the system they belong to.
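The four citizenships can be read as nested access to Norman's seven stages. The exact stage-to-citizenship mapping below is a plausible reading of the descriptions above, not the thesis's own formalization:

```python
# Sketch: citizenship as access to subsets of Norman's seven stages of
# user activity. The stage-to-citizenship mapping is an illustrative
# assumption based on the descriptions in the text.
STAGES = [
    "establish_goal", "form_intention", "specify_sequence",
    "execute_action", "perceive_state", "interpret_state", "evaluate_state",
]

CITIZENSHIP = {
    # Tools: execute an action sequence and/or perceive the world.
    "tool": {"execute_action", "perceive_state"},
    # Actors: additionally interpret percepts and build command sequences.
    "actor": {"execute_action", "perceive_state",
              "interpret_state", "specify_sequence"},
    # Agents: additionally control intention and evaluate interpretations.
    "agent": {"execute_action", "perceive_state", "interpret_state",
              "specify_sequence", "form_intention", "evaluate_state"},
    # Super-agents: full access, including goal creation and control.
    "super_agent": set(STAGES),
}

def rank(citizen: str) -> int:
    """More accessible stages means more power over the system."""
    return len(CITIZENSHIP[citizen])
```

Because each stratum's stage set contains the one below it, the hierarchy of action potential described next falls out of simple set inclusion.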
Figure 3.7: Citizenship based on access to different stages of action.

This taxonomy yields a hierarchy of action potential that allows entities in higher citizenship strata to control lower entities or include them as part of their own functions. This process of layering allows an entity to delegate lower action functions to lower entities of the system and extend onto several systems of simultaneous activity. For example, a super-agent can belong to several action systems by delegating functions to automated agents under one or several goals; the super-agent can then focus on the definition and control of various goals. Layering also means that a citizen can deliberately set its highest action capability as static and become a lower entity, for example, when a super-agent sets a fixed goal during a delimited period of time and focuses on its lower-level functions; during this time the super-agent functions as an agent and can even be subject to super-agent control. This layering concept suggests that systems constructed for lower entities can be tested using higher entities – provided that the higher capabilities of the latter are properly controlled. A system of agents can be tested with super-agents if the latter have fixed goals during the testing sessions.

Model: the cyborg system

The Space Perception Loop (figure 2.1) previously explained can be used to theorize an inter-connection between human cyborgs and space cyborgs. If both entities are considered as Agents, i.e. actors that have control over the intention to act and are able to evaluate the interpreted perceptions, it is possible to link their Space Perception Loops together to create a fluent system of constant inter-action.

Figure 3.8: Paired space and human perceptive loops.

This new system, depicted by figure 3.8, would be considered an autopoietic system with no other goal but to maintain the relationships that constitute it.
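The paired loops of figure 3.8 could be sketched as two coupled agents, each perceiving the other's output as its stimuli: the human perceives the space's light, the space perceives the human's position. The one-dimensional state and the toy update rules below are assumptions meant only to show the closed loop:

```python
# Sketch of paired perception loops: the human agent perceives the space's
# state (light) and acts by choosing a position; the space agent perceives
# the human's position and acts by updating its light. The update rules
# are toy assumptions, not part of the thesis.

def human_step(light: float) -> float:
    """Perceive light, update spatial model, act by choosing a position."""
    return 1.0 - light          # e.g. move away from bright areas

def space_step(position: float) -> float:
    """Perceive position, update spatial model, act by emitting light."""
    return position * 0.5       # e.g. dim light near the occupied area

light, position = 1.0, 0.0
for _ in range(20):             # let the coupled system run
    position = human_step(light)
    light = space_step(position)
```

For these toy rules the loop settles near a stable mutual state (position 2/3, light 1/3), mirroring the claim that the coupled system has no goal other than maintaining its own relationships.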
In said network, both entities gather information about their world (light and direction in the case of the human; movement and position in the case of the space), create a spatial model (using egocentric and/or allocentric relationships) and act by updating their own state (movement and position in the case of the human; light and direction in the case of the space).

A Space Cyborg fitting the above model would have to possess perception, cognition and action capabilities. For testing purposes this entity can be modeled as an agent able to perceive the world, undergo a spatial experience, create a space model and act on the world to calibrate its spatial model. Because said agent is encoded into a space form – it can only be understood by human beings as space – it has been named a Space Encoded Agent. Figure 3.9 presents the parts of such an entity in relation to the previously explained interconnection of human and space perception loops. By using a Space Encoded Agent, i.e. a cyborg citizen without access to the creation and manipulation of higher-order goals, specifically designed to interact through space with a human participant, it is possible to construct a prototypical interactive spatial environment formed of equally empowered entities that permits the measurement and analysis of Cyborg Environments.

Figure 3.9: Human and space encoded agent forming an organic interactive system.

Chapter 4: Experiments

4.1 Similar experiments

The following sections present the most relevant research that has been conducted by researchers interested in one or several parts of what has been defined as Cyborg Environments.
The presented explorations have been organized into the following clusters:

- Stimuli: investigations interested in the conception of space as stimuli, or in the capability of spatial stimuli to promote human activity and engagement;
- Information: research interested in space as mediatic or semantic systems of information;
- Agent interactions: research interested in the relationship between agents and humans and in how to create better interactive agents; and
- Cyborg architectures: investigations interested in the enhancement of built spaces through technology, and thus in creating cyborg-like architectures.

4.1.1 Space as stimuli

Environmental stimuli as interfaces

When speaking of interactive spaces we are immediately pointed to previous work on spaces that have been enhanced by technology to interact with human presence. Hiroshi Ishii's work at the MIT Media Lab has experimented with the haptic relationship between information and humans. His research group, the Tangible Media Group, follows one objective: "to blur the boundary between our bodies and cyberspace and to turn the architectural space into an interface between the people, bits, and atoms."[36] A representative example of this effort is AmbientROOM. By using "light, shadow, sound, airflow and water movement in an augmented architectural space", the system aims to provide the architectural space with cyborg-like enhancements that explore "the background awareness and foreground activity"[38].

The prototypical space depicted by AmbientROOM takes advantage of humans' ability to process background information and uses projectors, fans and speakers to generate a stream of data that enhances the perceptive capabilities of its inhabitants. Displayed ripples allow the users to monitor the state of a distant living being, light patches represent human presence, sound is used to present "natural soundscapes"[38] and physical objects are used to modify the state of the room.
The environment becomes a gateway to digital information and the objects become physical media controls.

Engaging properties of spatial stimuli

Ernest Edmonds's[20] work in the Creativity and Cognition Studios at the University of Technology, Sydney defines a model for engagement with interactive art based on three attributes:

- Attractors, "those things that encourage the audience to take note of the system in the first place"[24],
- Sustainers, "those attributes that keep the audience engaged during an initial encounter"[24] and
- Relaters, "aspects that help a continuing relationship to grow so that the audience returns to the work on future occasions."[24]

His studies suggest the possibility that predictable human responses can be obtained by visual and aural stimuli projected onto reality. Moreover, the properties of these stimuli can be of extremely rough definition – generally vertical color lines or sinusoidal wave sounds – but they depend on complex systems with the ability to perceive and engage their human observers.

In his work Shaping Form[19], in the Speculative Data and the Creative Imaginary exhibition in 2007, a set of canvassed simple visual stimuli mutate over time due to human presence and engage their observers in an invisible interactive process. The square plasma screens mounted on the gallery's walls present a series of vertical color lines that are the representation over time of events gathered by cameras embedded in the same artworks. By interacting with the objects, the observers of these entities shape their form in unpredictable and interaction-engaging ways.

This understanding that an object's perceived form can be detached from its physicality – and made dependent on underlying and complex logic structures – is more evident in Broadway One, presented in the SIGGRAPH Art exhibition in Los Angeles in 2004. In this project, Edmonds explores the possibility of a "synaesthetic work"[19] where visual and aural stimuli are dependent on the same
generating logic. A generative algorithm produces two numbers that are then used by a presenter (footnote 10) to create a physical state perceivable by human beings. In Edmonds's words: "The image display section [of the presenter] waits for a list of two integers. The first integer relates to a position, and the second a color. The audio output waits for the same two integers but treats the first as a position in time not space, and the second as sound."[21]

The complex and almost chaotic relationship that arises between Edmonds's interactive objects and their viewers results in an aesthetic experience that exceeds the possibilities of static artworks. This has led other researchers to believe that physical responses can be obtained by visual cues projected on spatial realities.

Spatial stimuli and kinesthetic actions

Several artifacts have explored the relationship between computer-generated stimuli and kinesthetic responses. Andrew Hieronymi's MOVE is "an installation, computer vision and full-body interaction"[31] system where humans are able to playfully interact with images projected onto space. The prototype is composed of a computer, a camera and a projector, mounted on a ceiling and both directed towards the floor. The camera is used to detect the position of a human participant within a projected set of objects. This information is then transformed into a virtual representation of the human body that can be used to process collisions between the objects being projected and the biological entity.

Based on avatar-based actions generally present in action games, six environments or modules were created: Jump, Avoid, Chase, Throw, Hide and Collect. Each one of these actions aimed at promoting a specific way of interaction between humans and graphical information.
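MOVE's core coupling – a camera-tracked body position tested for collision against projected objects – can be sketched as follows. The coordinates, radii and names are illustrative assumptions, not details of Hieronymi's implementation:

```python
# Sketch of MOVE-style interaction: the camera yields the participant's
# floor position; projected objects are modeled as circles; an overlap
# between the virtual body and an object drives the module's response.
# All numbers and names here are illustrative assumptions.
import math

def collides(body_xy, obj_xy, body_r=0.3, obj_r=0.2):
    """True when the virtual body overlaps a projected object."""
    dx = body_xy[0] - obj_xy[0]
    dy = body_xy[1] - obj_xy[1]
    return math.hypot(dx, dy) <= body_r + obj_r

# One frame of a hypothetical "Collect" module:
participant = (1.0, 1.0)                # position detected by the camera
objects = [(1.2, 1.1), (3.0, 0.5)]      # positions projected on the floor
collected = [o for o in objects if collides(participant, o)]
```

The point of the sketch is the dual representation the next paragraph describes: the same participant exists both as a solid body on the floor and as position-motion information inside the collision test.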
The abstraction of the human body and its kinesthetic actions into virtualized objects with position over time has allowed Hieronymi to create a seamless connection between real and virtual physicality. Humans are both solid bodies and position-motion information, while objects are both images projected on the floor and virtual bodies.

MOVE has successfully achieved a bi-dimensional – virtual and real – ecosystem of inter-action. Human inhabitants of this reality have no choice but to interact with it, while the reality is inevitably affected by its human inhabitants. Although the system is constrained to its own definition – modules of specific interaction methods – it demonstrates that by understanding the physical and virtual world as informational systems it is possible to link both into a self-regulating system.

10 In this case a plasma screen mounted on a wall and a set of speakers.

4.1.2 Space as information

Media-based environments

Bubble Cosmos, presented at SIGGRAPH 2006, explores the concept of Fantaraction. "Fantaraction places emphasis on entertainment and art and imparts momentary surprise or pleasure"[46] to construct a surrealistic interaction between modifiable objects loaded with media. The prototype constructs smoke-filled bubbles that float in the air; a projector is then used to shine light onto the contained smoke, giving the impression that images are contained within the bubbles. Breaking a bubble triggers a burst sound and a colorful effect of a spreading smoke-like image.

This bubble display system allows a physical interaction with media otherwise intangible. By bursting the contained images into a colorful disappearance, participants of this reality achieve a haptic control over digital information in a subtle, yet powerful, manner. A strong sense of control over the information that forms part of the system is provided by a simple act: deciding the fate of each informational bit.
The physical actions involved with this judgment create a connection between the mediatized information and the haptic and proprioceptive abilities of the human participants. This interconnection of human abilities and media allows the possibility of interactions between physical stimuli – light or sound – and virtual information – location or state – with a further semantic component.

Semantic environments

Narumi et al. have explored the interaction between physical information – perceived by humans as stimuli – and semantic information. Their prototype Inter-glow is an interactive miniature world that "facilitates close interaction and communication among users in real spaces by using multiplexed visible-light communication technology"[47].

Inter-glow is a model of a dining table and four chairs corresponding to four members of a virtual family – father, mother, daughter and son. Four light sources flickering at different rates hang vertically from the miniature living room's ceiling. By directing each one of these towards the center of the small table, human users can trigger the presence of a virtual participant. Twelve combinations of member co-existence exist. They lead to 12 different conversations – e.g. between the father and the daughter – controllable through this light interface.

The human participants of this art installation can achieve control of the semantic relations between each virtual family member – and the overall system – by discovering each virtual participant, uncovering the relationships between participants and controlling the flow of the developing story. This interactive exploration of the "relationship between characters"[47] allows the observers of Inter-glow to give meaning to an otherwise nonexistent world.

Although no significant physical alteration of the miniature space occurs during the interactive process, the environment radically changes in the mind of each human participant.
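The twelve combinations of member co-existence can be read as ordered pairs of the four family members (4 × 3 = 12). That reading is an assumption about how the count arises; the dictionary and conversation labels below are illustrative, not Narumi et al.'s implementation.

```python
from itertools import permutations

members = ["father", "mother", "daughter", "son"]

# Ordered pairs of co-present members: 4 x 3 = 12 combinations, each of
# which the installation maps to a distinct conversation.
conversations = {
    pair: f"conversation between the {pair[0]} and the {pair[1]}"
    for pair in permutations(members, 2)
}

print(len(conversations))                        # 12
print(conversations[("father", "daughter")])
```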
Each person has a different mental model of the spatial and inter-personal relationships that constitute the virtualized family. This has been achieved by co-relating virtualized semantic links and haptic spatial controls – one has to grab the light over the father's seat to make him join or start a conversation.

The virtual family and its story exist only in the memory of the people that have inter-acted with the work, yet they are only malleable through spatial and physical alterations of the miniaturized living room. This creates a coherent and manipulable system of spatial – the physical miniature living room – and semantic – the virtual relations between imaginary participants – components otherwise unachievable.

4.1.3 Agent-human interactions

Agents with social capabilities

Any truly interactive system involves two equally empowered parts in reciprocal informational exchange. The resulting system is what Maturana calls an autopoietic system – a homeostatic entity dependent on its own organizational rules[35].

Some explorations have attempted to focus on this dialogical relation between humans and machine-based environments. Bickmore and Picard's research on long-term human-computer relationships explores socially programmed relational agents based on dialogue. Their project consisted of an anthropomorphic animated exercise advisor for MIT FitTrack. The "embodied agent" was designed with both non-verbal behaviors "used for co-verbal communicative and interactional functions"[2] – i.e. gaze, hand gesture, body posture, face expressions and presence – and verbal behaviors within four "conversational frames" – task-oriented, social, empathetic or encouraging.
These verbal behaviors were either close forms of addressing the humans, exchanges of empathic information, dialogues of social information, dialogues of information related to training tasks, or behaviors searching for future interaction.

Bickmore and Picard's evaluation of their agent suggested that "people will readily engage in relational dialogue with a software agent, and that this can have positive impacts on users' perceived relationship with the agent."[2] Furthermore, it proved that it is possible to design agents with social dialogue capabilities, given a clear understanding of the interactive process that arises from a bi-dimensional and conversational interaction.

Agents that control humans

Some works by researchers like Mackay explore a more intricate relationship of control flow between the parts that build an interacting system. Their McPie Interactive Theater "explores an unusual style of interaction between human users and visual software agents"[43]. They propose an animated figure – the McPie Character – designed with a sole goal: to "shape users' behavior"[43]. In the experiment presented by Mackay, the system's agent was instructed to prompt users to "tap their heads with their arms". This was done by making the agent respond "interestingly" to humans moving their arms and "enthusiastically" to users tapping their heads.

The investigators analyzed a number of users of the McPie Interactive Theater and concluded that three types of interactions generally arose: trying to manipulate the agent, identifying with the character as a representation of themselves, and trying to establish a meaningful communication with the agent.

Within these interacting strategies, "some [users] did tap their heads" but as part of interactions that "were more complex"[43] and usually within social or random gestures.
This proved that specific human responses can be controlled by agents if said actions lie within the social and random rules used by humans to control such agents.

4.1.4 Technologically enhanced built architectures

Architecture as stage for information

Some researchers, especially architects, have hypothesized that the built environment can be used not only as an interface to information – as proposed by Hiroshi Ishii[36] – but as a medium and enhancement for communication. According to these investigators, the architectural structure of the built environment can be conceived as information and thus transformed into a canvas allowing informational interconnectivity between humans, agents and the urban reality. Kas Oosterhuis conceives a building as "a unibody, as an input-output device"[54] where the built structure – doors, windows, walls, etc. – allows for the control of information to take place. In his words, all "processes run by buildings, products and users together play a key evolutionary role in the worldwide process of the information and transformation of information"[54].

This conception of the architectural structure led researchers to explore the concept of inhabitable interfaces. Jeffrey Huang and Muriel Waldvogel present the swisshouse as an architectural implementation that "allows unsophisticated users to collaborate and be aware of each other over distance."[33] The prototype is described as a choreography of "interactive elements in the space" that "allows users to instantly separate, combine, and customize environments for specific collaborative activities."[33]

The 3,200 sqft swisshouse building has been equipped with lumen projectors, high-resolution plasma screens, panoramic cameras, ambient speakers, streaming hardware, an audioconferencing system, microphones and radio-frequency identification readers, all discreetly included in the architectural configuration of the space. By using this structure for remote conferencing and teaching, support of information for art exhibits and collaboration in project development, the researchers have agreed that the

. . . power of an architecture driven communication interface seems to be partly due that it does not emphasize the deployment of ever newer or more sophisticated technologies to be embedded ubiquitously, but instead focuses on shaping new types of physical environments that have a relation to specific geographical places and that people can inhabit to communicate with other people in other geographical places.[33]

The success of the swisshouse lies mainly in architectural and functional capabilities that are independent of the technology used to enhance them. In other words, the media technology serves only as an enhancer of the informational flow already provided by the building's architectural structure.

Architecture as fluctuating virtual-real form

However successful these models have proven to be for videoconferencing, tele-existence and the presentation of information supporting real data, they have underestimated the informational capabilities of the physical structure that contains them.
Because the previous explorations have been done in areas of study focusing on the application and development of technology, they have rarely taken into account the perceptual and aesthetic component that architectural form adds to the interactive process.

Asymptote's – Hani Rashid's – participation at the 2000 Venice Biennale, the FluxSpace 2.0 Pavilion project, explored the aesthetic and informational capabilities of architectural structure. The construction, measuring "thirty meters in length and [rising] two stories in height"[57], was a pneumatic form sustained by a metal structure designed to create "a tangible oscillation between the physical exterior and the fluid continuously reconfigured state of [the] interior"[57].

Enclosed by the metallic frame, two rotating one-way mirrors constantly changed the visual characteristics of the space, while two 180-degree cameras simultaneously broadcasted nearly 1.6 million variations of the mutating space to voyeuristic observers around the globe. In this project, the technology was used "not only as a tool of production and representation but as a means of engagement via interactivity and connections to global networks."[57]

In the words of Rashid, the FluxSpace 2.0 Pavilion "sought to engage an audience including but not limited to visitors to the Biennale by providing a simultaneous spatial experience for a virtual audience."[57] Both real and virtual experients, however, perceived their observed reality as a fluctuating environment. This exploration proved to be a streaming informational system tied to a network of perceptions through time.

4.2 A pilot study of interactive spatial perception

A pilot study was performed to observe and to measure how humans interact with static and changing spaces.
The experiment was an initial approach to the theory of an autopoietic system composed of a human cyborg and a space cyborg.

4.2.1 Device description

The experiment consisted of a defined spatial location of 2.5 by 2.5 meters and a lateral projection on a wall screen provided by a projector located at a height of 1.5 meters from the floor and 5 meters from the wall. The space included 3 boxes of 45 by 45 by 45 centimeters colored in red, green and purple. Two spaces – private and public – were defined and marked with tape on the floor. Initially all the boxes were put in the private section and subjects were put in the public section of the space. Figure 4.2 presents the prototype being used.

A Polhemus Fastrak attached to a Linux computer read the (x, y, z) position of each box and of the human experients. Sensors were attached to the bottom center of each box and to the lower back of the subjects through a belt. A continuous recording of human and object positions was translated into a plan diagram showing the motion patterns and overall position during a specific stimulus.

4.2.2 Measurements

Measurements in this study focused on the analysis of:

- motion and position patterns recorded,
- objects produced during a set of tasks during the experiment,
- notes taken by the experimenter, and
- experients' subjective perception of the experiment.

Figure 4.1: General setting and dimensions. (a) General setting. (b) Dimensions.

A post-experiment questionnaire was handed to the experients to understand the subjective perception of the space; it consisted of a close-ended section – targeted at measuring the embodying and perceptual differences between test subjects – and an open-ended part – aimed at understanding the personal perception of the space.

4.2.3 Experimental design

A group of 6 subjects – 3 males and 3 females of ages 20 to 39 – were asked to perform 6 tasks in about 20 minutes.
All experients were informed that each of these tasks would measure their ability to creatively solve problems under timed situations and limited resources. A plastic bag containing 3 pieces of paper and a pencil was provided to each of the experients before the experiment began.

On the floor, within the public area of the space, 6 folded pieces of paper were placed. Subjects could only see the number on each piece of paper and would have to pick up and unfold the paper to see the content (table 4.1). Experients were instructed to take, in ascending order, each one of the papers and read the instructional phrase within. Each instruction would have to be completed before continuing to the next one. Task completion was not a critical measurement; thus, the subjects were told that judgment about the completion of their tasks was to be done by themselves. Finally, the participants were not allowed to communicate with the experimenter or reach anything outside the pre-defined perimeter until the experiment was finished.

Figure 4.2: Stimulus presented.

The tasks created and depicted in table 4.1 were intended to involve the users in a semi-awareness state fluctuating between awareness of their surroundings and introspection. Creative tasks were used to achieve this. However, creation can be of two natures:

- Semantic: where a strong relationship to mental construction is needed, and
- Physical: where mental effort is reduced and physical effort is encouraged.

An analysis of several tasks was performed in order to predict the type of creation that would result in each one of them. A short list was then created, based on the semantic or physical characteristics of the expected products, and 6 tasks were finally chosen for the present pilot study. Table 4.1 presents the tasks chosen.

Task No. | Instructional Phrase           | Type of creation
1        | The short story of a "pixel"   | Semantic
2        | Tree-house/Doll-house          | Physical
3        | Drawing diary                  | Semantic
4        | 5 minutes sleep                | Physical
5        | Shakespeare                    | Semantic
6        | Modern Sculpture               | Physical

Table 4.1: Initial pilot study, tasks to be performed by test subjects.

Treatments applied

Nine randomized visual spatial stimuli were presented to the experients for 2 minutes each. Each stimulus consisted of a projected square area of 2.5 by 2.5 meters divided into two vertical areas, left and right. Each area – left and right – was illuminated with one of three colors – red, green or purple – creating a collection of nine stimuli.

Due to the fact that the present experiment dealt with the measurement of cognitive and kinesthetic actions arising from a collection of visual spatial stimuli, it was imperative to make sure that each cognitive state to be measured was related to one, and only one, stimulus. It was hypothesized that by presenting an unconscious, yet perceived, transient stimulus between each experimented visual spatial stimulus, cognitive states would be separated by a third cognitive state belonging to said transient stimulus. As a result, the cognitive states belonging to the experimental conditions would be fundamentally different.

According to Kirchner and Thorpe[41], at approximately 120 ms the brain can begin to determine the content of a flashed attended image without being conscious of the stimulus. Following this rationale, a transient image of approximately 200 ms was included in the transition from one stimulus to the next.

Figure 4.3 presents an example stimulus with only the left area lit in red. Table 4.2 presents all the stimuli used in the experiment.

Experiment flow

The experiment lasted for approximately 35 minutes (figure 4.4). During the first 5 minutes the experimenter carefully explained the procedure of task selection and completion and helped the experients in positioning the Polhemus sensor that would track their position.
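The stimulus schedule described under Treatments applied – nine randomized two-minute stimuli separated by approximately 200 ms white transients – could be generated as in the following sketch. The left-only/right-only composition of the nine stimuli is one reading of table 4.2, and all names are illustrative rather than taken from the experimental software.

```python
import random

COLORS = ["red", "green", "purple"]

def build_schedule(seed=None):
    """Nine randomized stimuli (each colour on both areas, the left area
    only, or the right area only), separated by 200 ms white transients."""
    stimuli = [(left, right)
               for c in COLORS
               for left, right in [(c, c), (c, None), (None, c)]]
    random.Random(seed).shuffle(stimuli)
    schedule = []
    for i, (left, right) in enumerate(stimuli):
        schedule.append({"left": left, "right": right, "ms": 120_000})
        if i < len(stimuli) - 1:          # transient only *between* stimuli
            schedule.append({"left": "white", "right": "white", "ms": 200})
    return schedule

schedule = build_schedule(seed=1)
print(len(schedule))    # 9 stimuli + 8 transients = 17 entries
```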
During the following 20 minutes, users would focus on task completion while the experimenter would take observational notes. Finally, during the next 10 minutes, the participants would answer the post-experiment questionnaire and an informal interview would take place.

Figure 4.3: Initial pilot study, example stimulus. Note that for representation purposes the right area is colored gray, although in reality there was no visual stimulus presented in said area.

Left   | Right  | Stimulus type | Duration
Red    | Red    | Experimental  | 2 minutes
Red    | –      | Experimental  | 2 minutes
–      | Red    | Experimental  | 2 minutes
Green  | Green  | Experimental  | 2 minutes
Green  | –      | Experimental  | 2 minutes
–      | Green  | Experimental  | 2 minutes
Purple | Purple | Experimental  | 2 minutes
Purple | –      | Experimental  | 2 minutes
–      | Purple | Experimental  | 2 minutes
White  | White  | Transient     | 200 ms

Table 4.2: First pilot study, stimuli used.

Figure 4.4: Experiment flow. Introduction, 5 minutes; Interaction, 20 minutes; Assessment, 10 minutes.

4.2.4 Results

At first, subjects seemed not to respond to the stimuli given to them in a controllable and measurable pattern; however, a closer analysis of the data showed that experients had strong reactions and repeated patterns. Analysis of movement and decision taking, pairing each stimulus with the resulting patterns, showed that kinesthetic responses were not direct, but involved complex perceptive structures.

Behavioral reactions to stimuli

Following Dehaene[14], the responses or reactions obtained in the experiment have been classified as follows:

- Conscious: Attention to the stimulus presented with strong responses. Subjects react to color stimuli at a cognitive level. Entering the space, subjects needed to create a categorization of both the images presented and the objects available. Because color differentiation was evident, subjects needing to move a box picked the one matching the visual stimulus.
- Pre-conscious: Attention to a previous stimulus with strong responses. Subjects reacted to previously primed colors. Both color and direction were carried onto subsequent tasks.

- Subliminal-attended: Attention to the present stimulus with weak responses. Subjects seemed to understand the intrinsic relationships between spaces: if an object was put in the private section it was kept there until proper appropriation of the privacy of such area was performed. Primed colors and directions affected their interaction with such objects.

Figure 4.5 presents the motion patterns of a participant – black dots – and the color cubes – color dots – during each randomized stimulus. Visual analysis of the behavioral actions of this subject shows that during the first three stimuli the boxes selected correspond to the same color as the one being presented.

Figure 4.5: Motion patterns of a subject according to each randomized stimulus presented for 120 seconds.

Private-public distinctions

The initially provided public/private privacy dichotomy was strongly effective. During the initial moments of the experiment subjects interacted with the boxes only within the private section; however, with time of interaction came an appropriation of the private space. The result was evident: the boxes could now move into the public space.

Creation of a Home Location

All subjects created a Home Location where they felt comfortable upon their entrance to the prototype. Whenever introspection was needed they would return to such Home Location. In that location, awareness of the space was reduced to a minimum, but stimuli presented during that reduction proved to be carried to following creative tasks.
That is, after introspection subjects would choose colors and motions related to the stimulus presented during introspection, and not to the stimulus presented during task completion.

Gendered space

Male and female users interacted differently with the space. Male users tended to play with the objects, while female subjects tended to create more semantically oriented products by writing on or labeling the cubes (figure 4.6). A difference in posture was also evident; most male subjects kept a standing posture while female subjects tended to adopt comfortable sitting or crouching positions.

Figure 4.6: Arrangement of a female participant with tags over color cubes.

Qualitative analysis showed that females took a longer time to adapt to their new surroundings and cautiously took over their properties; however, once they had finished such appropriation they tended to protect it more than male participants did. This was uncovered by most female participants asking the experimenter to properly record the position of the spatially arranged color boxes. On the contrary, male participants did not seem to be troubled when the boxes were returned to their original place at the end of the experiment.

4.3 Training wheel study

The previous pilot study showed that humans might have predictable behaviors in response to spatial stimuli. A belief that this was decipherable was the main drive for a further study that would explore this embodying relationship between humans and spaces. The initial interest of the training wheel study was to conceive an architectural space of cyborg-like nature that could promote controllable kinesthetic actions in human beings.

An autopoietic system is made of bi-directional relationships between its interacting parts.
Demonstrating that interactive and perceiving spaces are able to take control over their inhabitants is the first step in proving that such entities have the ability to affect, in a predictable way, their perceived world – formed of human inhabitants.

Findings from the previous pilot studies showed that human perception of space is not purely spatial, but linked to semantic structures that interact and form while living in a space. It was hypothesized that by promoting previously measured mental states through spatial stimuli it could be possible to promote voluntary actions in human beings. By creating a model of such mental states, human kinesthetic actions could be controlled in quantity and direction. In this sense, the prototype would work as a human training wheel, promoting specific voluntary movement by suggesting it.

Control based on reflective consciousness manipulation

According to Zelazo's[72] model of consciousness published in 2004, a conscious entity has scaling levels of reflection and grouping of information that allow it to interconnect meaningful entities and create complex mental associations. According to his model, the most basic conscious iteration is called MinC – or minimal consciousness; this level of action-reaction provides any living being with the ability to respond to external stimuli. As the organism evolves it begins to form higher levels of abstraction: on a second level, it forms a recursive consciousness able to label its actions, and further re-entry iterations allow the organism to create groupings of labels in semantic constructions. More reflective levels are then added to achieve more complex mental structures.

Stimuli can be conceived to be of either textual or contextual nature. Textual stimuli are the ones where the focus of attention of a perceiving subject lies – the specific perceived stimulus that an animal is attending to at each moment.
Contextual stimuli are secondary stimuli that are related to the textual stimulus. Following this scheme, the present study proposes a theoretical model for controlling higher levels of consciousness, i.e. reflective consciousness, based on two actions (figure 4.7):

- Disruption: The cognitive model of consciousness and perception is disrupted using a primed stimulus or by abruptly breaking the spatial mental map created through experience.

- Encapsulation: A textual stimulus is tightly related to one or several contextual stimuli.

A disruption action (figure 4.7(a)) deals directly with the mental representation of space. For example, if we could control the laws of physics and could dramatically eliminate the force of gravity for a moment, it would be possible to create a disruption of the mental model that dictates that all things fall to the ground. In the case of the present experiment the disruption has been done on the laws that control the state of the interactive space and give it its spatial representation. As will be shown later in the document, participants of the experiment are stimulated with color depending on their location within the prototype. Moving in specific directions causes the projector to generate a specific stimulus – e.g. moving left causes a green stimulus and moving right a blue one. By inverting this directional control we can dramatically disrupt the mental representation of such spatial relationship – e.g. moving left causes a blue stimulus while moving right causes a green stimulus.

An encapsulation action (figure 4.7(b)) deals with related stimuli that otherwise would not be dependent on each other. For example, by making a kinesthetic effort we can move through space and explore its properties. In order to walk forward we balance our body towards the direction we want to walk to and coordinate this motion with a complex movement of muscles in legs and arms that allows us to maintain equilibrium and balance our weight a step forward.
This same effect of moving through space can be temporarily achieved by a system that allows the participants to displace their position in the world without displacing their body, by slightly moving a mini-joystick with a thumb. The stimuli generally related to kinesthetic effort are now encapsulated into said thumb effort. In the present experiment a tool has been created: by moving an LED on top of a coffee table, experients can move in space without any effort and receive the same stimuli as if they had.

Figure 4.7: Two theories of control based on reflective consciousness manipulation. (a) Disruption. (b) Encapsulation.

4.3.1 Device description

The prototype consists of a neutral space of 2.5 by 2.5 meters and a projection wall receiving a 2.5 by 2.5 meters lateral projection of a filled color square. Such projection is the only quality of the space and its only source of light. A camera placed on the projector takes the high luminance values and converts them to colored pixels to be placed on top of the filled square. Proper alignment of the camera allows the system to "paint" white objects with any desired color. A second camera on the ceiling keeps track of any human participants and locates their center. This location point is then translated to an (x, y) coordinate within the camera's view as depicted by figure 4.8.

Figure 4.8: Coordinate system. The X axis runs from left to right and the Y axis from farthest to closest to the screen.

In order to test the previously exposed encapsulation theory, a 45 by 45 by 45 centimeters corrugated purple box was enhanced with a clear plastic window of 7 by 5 centimeters. A camera was suspended inside and used to track the location of an LED placed on top of the clear window. The LED was attached to a 3V lithium battery and was given to the subject as a Power Key to control the space through this interactive coffee table.
Figure 4.9 presents these devices.

Figure 4.9: Interactive coffee table and Power Key. (a) Interactive coffee table. (b) Power Key.

Positions of either the human beings or the Power Key over the interactive coffee table were recorded in memory at a latency of 1 second. After 2 minutes the recorded positions were drawn on an image and the contents of the memory deleted. A new collection of tracked points was then initialized. This allowed a collection of clustered position trackings over time that can be easily analyzed and compared.

Finally, position sensing by either the ceiling-mounted camera (figure 4.8) or the interactive coffee table is mapped to a location in a BGR color space. Position values on the Y axis represent a green value from 0 to 255 and position values on the X axis represent a blue value from 0 to 255. Z is maintained at 0. The resulting (x, y, z) BGR value is then projected onto the wall (figure 4.10). A spatial mental map of this space can be represented by a BG plan, as represented by figure 4.11(a).

Figure 4.10: A participant interacting with the space.

4.3.2 Measurements

The present study focused on proving that it is possible to create a prosthetic connection between a human and a space. This theory suggested that by altering visual spatial stimuli, human kinesthetic responses could be controlled efficiently. In order to test these hypotheses several measurements were made:

- Human Position: Human position over time was recorded and drawn on images containing 2 minutes of recordings. The collection of images was analyzed for trends and patterns.

- Power Key Position: The position of the Power Key on top of the interactive coffee table was recorded and drawn on images containing 2 minutes of recordings. The collection of images was analyzed for trends and patterns.
- Location Iterations: The count of total human locations over the span of the experiment was broken down into graphs representing the location iterations on both the x and y axes, i.e. the number of times that a human spent in the same x or y location. The collection of graphs was analyzed for trends and patterns.

- Home Displacement: The geometrical distance between a found home location and a new location to which subjects were expected to be moved through changes in the space was computed and analyzed using ANOVA. The rationale is explained below.

These measurements were created to test the presented theory of control based on reflective consciousness manipulation through two actions, namely disruption and encapsulation. Table 4.3 presents the above outlined measurements in relation to the control theory exposed above.

Control Method | Measurement
Disruption     | Human Position
Disruption     | Location Iterations
Disruption     | Home Displacement
Encapsulation  | Power Key Position

Table 4.3: Measurements.

Encapsulation was measured through qualitative analysis of the recorded states of a minimized model of the experimental space. The construction of an interactive coffee table with the same spatial map used for kinesthetic interaction allowed the encapsulation of kinesthetic movement into movement of the Power Key.

Disruption, however, was analyzed by measuring specific alterations of the mental map formed by the human perceivers. It was hypothesized that there exist two methods of modifying a mental map of spatial properties: altering the components of such map, or inserting an external component to mutate said map. The spatial characteristics of the current prototype depend on a bidimensional BG model (figure 4.11(a)); therefore, two alterations of this model are possible:

- Bidimensional: Altering the BG components of the bidimensional model.

- Three-dimensional: Including a third component, in this case red values (R), to form a three-dimensional spatial model (BGR).
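The position-to-color mapping and its two alterations can be sketched as follows. The use of normalized (0.0–1.0) coordinates and the exact scaling are assumptions – the prototype maps raw camera coordinates instead – and the function name is illustrative.

```python
def space_color(x, y, swap_axes=False, red=0):
    """Map a normalized position to the BGR colour projected on the wall.

    X drives the blue channel and Y the green channel (0-255); red stays
    at 0 in the unaltered space. swap_axes implements the 'bidimensional'
    disruption (X and Y contributions interchanged); a non-zero red value
    implements the 'three-dimensional' (BGR) alteration.
    """
    if swap_axes:
        x, y = y, x
    blue = round(x * 255)
    green = round(y * 255)
    return (blue, green, red)

print(space_color(1.0, 0.0))                   # far right: pure blue (255, 0, 0)
print(space_color(1.0, 0.0, swap_axes=True))   # same spot, disrupted: (0, 255, 0)
print(space_color(1.0, 0.0, red=128))          # three-dimensional alteration
```

The disruption is thus a one-line change to the mapping, while the participant's mental map of the space must be rebuilt from scratch.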
4.3.3 Experimental design

Treatments applied

The present prototype has a mental representation of bidimensional properties (figure 4.11(a)). Two types of alteration of such a BG plan have been previously defined, namely bidimensional and three-dimensional. The present study implemented these alterations as follows:

- Bidimensional: a bidimensional alteration is the result of switching the X and Y values of the BG representation of the space. This results in a GB plan differing from the initial BG one. In the present implementation, this transformation is instantaneous, i.e. the values are interchanged once.

- Three-dimensional: a three-dimensional alteration is the result of adding a third component R to the BG representation of the space. In the present prototype, this transformation was performed gradually, i.e. the R value (0-255) increased over time and followed a bidimensional alteration. This results in a GBR cube.

According to previous findings, human subjects have two states while interacting with a space: a Home Location, defined at their first contact with their new environment, and a state related to action performance. Each state has specific qualities of kinesthetic activity, awareness and assumed reflective consciousness control.[11]

When initially interacting with space, humans search for a safe location (Home) to which they return for introspection and creative activity. This location can be determined during the first moments of interaction with space by finding the highest count of position iterations (total count of seconds that a human subject spent in a given position). This procedure yields an (x, y) pair that represents the initial Home Location of a human being interacting with space.
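The Home Location computation just described, i.e. the highest per-axis iteration count over one-second position samples, can be sketched as follows (an illustrative helper, not the original code):

```python
from collections import Counter

def home_location(positions):
    """Return the (x, y) whose coordinates have the highest iteration
    counts, i.e. where the subject spent the most seconds; counts are
    tallied independently on the x and y axes."""
    x_counts = Counter(x for x, _ in positions)
    y_counts = Counter(y for _, y in positions)
    return (x_counts.most_common(1)[0][0],
            y_counts.most_common(1)[0][0])

# A subject hovering mostly around (230, 120) with a brief excursion:
track = [(230, 120)] * 40 + [(55, 175)] * 10
# home_location(track) -> (230, 120)
```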
It has been hypothesized that the Home Location is representable both in (x, y) coordinates of real space and in (x, y) coordinates of a BG plan.

Because the interactive model implies that a location in space is linked to an (x, y) BG state of the space, an alteration (bidimensional or three-dimensional) of such a model implies that the previously computed Home Location has been translated to a new (x, y) coordinate belonging to the new mental map. Because the alteration of the model is known, it is possible to compute the exact translation of the Home Location through:

BidisplacedHomeLocation = (y, x)

[11] Findings of the pilot study of interactive spatial perception.

Figure 4.11 presents the spatial model and its alterations. Figure 4.11(a) presents the original BG plane that represents the cognitive spatial model formed by the interacting participants. Moving towards the right of the space and farthest from the projection screen would result in the projection becoming blue. Moving towards the left of the space and closest to the projection screen would result in the projection becoming green.

Figure 4.11(b) presents a bidimensionally altered space map. The cartesian coordinates have been interchanged; thus if a subject moves towards the left area closest to the screen he would experience a blue illumination, while moving to the right area farthest from the screen he would experience a green illumination. The theoretical Displaced Home Location is represented by the red dot and its translation by the red line.

Figure 4.11(c) presents a three-dimensionally altered space map. A third dimension consisting of red values has been added continuously to the color space. Figure 4.11(d) presents the spatial mental map as it would be conceived by a test subject.
Figure 4.11: Space model and alterations performed. (a) Original; (b) bidimensional; (c) tridimensional; (d) tridimensional as conceived by a test subject.

According to our previous findings, humans will always return to their Home Locations during introspection. It can be hypothesized that if a successful alteration of the mental map has been performed, the Home Location will correlate with such a mental map. In other words, humans will return to the displaced Home Location.

During the alteration of the spatial mental model a new most frequent location, presumed to be the new home location chosen by participants, was computed following the same methodology used to compute the initial Home Location. The highest count of position iterations was found in both x and y.

This yielded a New Home Location that could be compared to the Displaced Home Location to test the present theory. The distance between these measurements was computed geometrically by finding the vector magnitude between each pair of points using equation 4.1:

D = sqrt((Xdisp - Xnewhome)^2 + (Ydisp - Ynewhome)^2)    (4.1)

Where (Xdisp, Ydisp) defines the Home Location or the Displaced Home Location, and (Xnewhome, Ynewhome) defines the New Home Location. Equation 4.1 was used in both bidimensional and three-dimensional alteration measurements.

Finally, the previous pilot studies showed that interaction with space is dependent on different levels of attention. The different tasks that humans undertake in a space determine the level of engagement and have strong effects on the cognition of the space, and thus on the interactive process that arises between humans and cyborg spaces. To measure the effects of engagement and attention, three tasks were defined:

- Non-Immersive: a low level of engagement is promoted by driving participants' attention to a complex task requiring concentration.
Users were given a task that required constant attention to a written tutorial and the creation of an academic summary.

- Semi-Immersive: an intermediate level of engagement is promoted by shifting the attention of participants between the space and a task requiring concentration. Users were given a task that required the creation of a drawing related to the subject's imagination.

- Immersive: a high level of engagement is promoted by driving participants' attention to the space. Users were given no task but to interact with the space.

Statistical analysis

Table 4.4: Experimental design for the human training wheel experiment.

                   Bidimensional Alteration   Tridimensional Alteration
  Non-Immersive    ...                        ...
  Semi-Immersive   ...                        ...
  Immersive        ...                        ...

The experiment was designed as a Completely Randomized Factorial ANOVA with two factors, CRia, where i = immersion level and a = alteration type. Alteration type, i.e. bidimensional or tridimensional alteration of the space, was considered a within-subjects factor, and immersion level was considered a between-subjects factor. The 3 x 2 design is presented in table 4.4. Nine subjects, 6 males and 3 females aged 20 to 39, were randomly assigned to one of the three immersion groups, i.e. each group had 3 subjects. The analyses used an alpha level of .05 for all statistical tests.

Experiment flow

The experiment lasted 45 minutes and was divided into three parts (figure 4.12(a)). During the first part, lasting 5 minutes, experients were instructed on the flow of the experiment and signed a consent form. The second part, lasting 18 minutes, was dedicated to interaction and measurements. Finally, during the last 10 minutes test subjects answered a post-experiment questionnaire and participated in a filmed interview.
The second part of the experiment, lasting 18 minutes, was divided into four parts (figure 4.12(b)):

Figure 4.12: Experiment flow. (a) Experiment flow: introduction (5 minutes), interaction (18 minutes), assessment (10 minutes); (b) interaction flow: human (2 minutes), Power Key (6 minutes), human bidimensional alteration (5 minutes), human tridimensional alteration (5 minutes).

1. During the first 2 minutes the space would respond to users' kinesthetic actions. The space would then compute a home location – the (x, y) position where subjects stayed most of the time.

2. During the next 6 minutes the system would respond to movements of the Power Key on the surface of the interactive coffee table.

3. During the next 5 minutes the system would respond, again, to users' kinesthetic actions, but this time using a bidimensionally altered spatial mental map – interchanging x and y motion controls. A New Home Location – the (x, y) position where subjects stayed most of the time – would then be computed.

4. Finally, during the last 5 minutes the system would respond, again, to users' kinesthetic actions, but this time being three-dimensionally altered. A New Home Location – the (x, y) position where subjects stayed most of the time – would then be computed.

4.3.4 Results

Human and power key position

Different task groups had different directional movements. Subjects from the Non-Immersive group had marked Home Locations, and displaced locations were rare. Subjects from the Semi-Immersive group also had marked locations; however, location translations were radical, as shown by peak differences between graphs. Subjects from the Immersive group had a wider range of positions, but aligned towards a fixed area, depicted by clusters of peaks in the same area, before and after alterations.
Differences between back-front and left-right movements are evident and might have been affected by the task definition.

There were marked differences between task groups in the appropriation of the space and use of the Power Key interface (figure 4.13). Non-Immersive task subjects began a rapid exploration of the spatial capabilities of the space – promoted by the tutorial – however, later explorations were rare, and use of the Power Key interface was reduced to the minimum and only used to select ambient colors related to the initially chosen Home Position. Semi-Immersive task subjects showed a higher degree of appropriation and exploration of space qualities through human movement; however, Power Key interactions were few and only used to select a color or pattern for "inspiration". Immersive task subjects performed a high level of exploration of the space through movement and an average exploration of the Power Key interface. Subjects of this group would select an ambient color using the interface and explore their movement's resulting patterns in different parts of the space.

Figure 4.14 presents the motion recordings of a subject in the semi-immersive task group. The recordings are collections of 120 seconds, marked as seen by the prototype. The unmodified color space lasts from second 0 to second 120 (figure 4.14(a)). The bidimensionally altered space lasts from second 120 to second 960 (figures 4.14(b) to 4.14(h)), while the three-dimensionally altered space is experienced from second 960 to 1320 (figures 4.14(i) to 4.14(k)).

By observing the recordings it is evident that the subject remained mostly static during the first 120 seconds (figure 4.14(a)). By gaining access to the Power Key at second 240, the user successfully encapsulated her kinesthetic actions in the tool and explored other positions – states – of the space (figure 4.14(c)). These
positions would later be searched for by kinesthetic action, especially after second 720 (figure 4.14(g)). Comparing second 120 (figure 4.14(a)) and second 1200 (figure 4.14(j)), it is clear that the subject had displaced her home location to a new spatial state during the three-dimensional alteration.

Figure 4.13: A participant interacting with the power key.

Location iterations

Analysis of the location iteration graphs constructed with the data showed a clear difference between interaction groups. Visual inspection of the graphs showed an evident distinction between immersive and non-immersive groups. Immersive groups showed location iteration graphs that were scattered across more locations on both x and y axes, depicting constant motion across the whole experiment. Non-immersive groups resulted in interaction graphs centered on one or two positions across the duration of the experiment and under all test conditions.

Figure 4.14: Recorded paths with a latency of 120 seconds of a test subject in the semi-immersive task group. Panels (a) to (k) correspond to seconds 120 to 1320 in 120-second steps.

Furthermore, it was clear that color-space alterations – bidimensional and three-dimensional alterations of the spatial mental map – evoked motion in the experiment participants.
The iteration graphs across all immersion groups showed that when the spatial model was altered, position was scattered on both x and y axes. Users would move more and explore different locations in the space – depicted by a higher count of graph peaks.

Figure 4.15 presents the iteration graphs – of a subject in the immersive group – constructed during the first part of the experiment (figure 4.15(a)), where the space was left unaltered and a home position was computed; the second part of the experiment (figure 4.15(b)), where the space was altered bidimensionally and a bidimensionally altered New Home Location was computed; and the third part of the experiment (figure 4.15(c)), where the space was altered three-dimensionally and a three-dimensionally altered New Home Location was computed.

Figure 4.15: Location iterations for the x axis of a test subject in the immersive task group. (a) Unaltered space; (b) bidimensionally altered space; (c) three-dimensionally altered space.

The graphs clearly show that motion was more frequent during bidimensionally – and three-dimensionally – altered color spaces (figures 4.15(b) and 4.15(c)). This interactive effect was the result of the alterations performed on the spatial map of the environment. Furthermore, it is evident that although movement was performed during the first part of the experiment, most of the time was spent in a single location.

The initially computed Home Location is depicted by the peak at pixel (x ≈ 230). During a bidimensional alteration the New Home Location computed is depicted by the peak at pixel (x ≈ 55). Finally, during a three-dimensional alteration the New Home Location computed is depicted by the peak at pixel (x ≈ 175).

Home displacement

The distance between the first Home Location and a New Home Location computed during space alteration was analyzed.
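The displacement distance just introduced is the Euclidean magnitude of equation 4.1; a minimal sketch, with hypothetical coordinates:

```python
import math

def home_displacement(displaced, new_home):
    """Vector magnitude between a (Displaced) Home Location and the
    New Home Location, per equation 4.1."""
    (xd, yd), (xn, yn) = displaced, new_home
    return math.hypot(xd - xn, yd - yn)

# e.g. a displaced home at (55, 230) and an observed new home at (58, 226):
# home_displacement((55, 230), (58, 226)) -> 5.0
```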
Total distance of movement in both bidimensionally and tridimensionally altered color spaces (figure 4.16) was not statistically significantly different between treatment groups, F(2, 15) = 0.258, p = .776. Further contrast analysis showed that even though the semi-immersive group appeared to score lower than the non-immersive group, this effect was not statistically significant, p = .483. This can mean either that there were not enough subjects to detect a statistically significant trend, or that the space successfully achieved a displacement in the three different task groups without any difference.

Figure 4.16: Means of total distance of movement in both bidimensionally and tridimensionally altered color spaces.

Differences between bidimensional and tridimensional alterations (figure 4.17) were found to be not significantly different from each other, F(1, 12) = 0.128, p = .736. This suggests that bidimensional and tridimensional alterations have the same effect on the human and can be used interchangeably. A deeper analysis of each alteration showed that bidimensional alterations of the space were not significantly different between groups, F(2, 6) = 0.032, p = .968. In the same manner, the total movement achieved during tridimensional alterations of the space was not statistically significantly different between task groups, F(2, 6) = 1.165, p = .374.

Figure 4.17: Means of total distance of movement in each bidimensionally and three-dimensionally altered color space.

Distances from both the Bidisplaced and Tridisplaced New Home Locations to the hypothesized Displaced Home Location, shown in figure 4.18, were analyzed.
Analysis of the differences between task groups showed that for bidimensional alterations of the space the task treatments were not significantly different from one another, F(2, 6) = 0.226, p = .804. Although tridimensional alterations showed a stronger effect, the differences between task groups proved to be not significantly different, F(2, 6) = 2.793, p = .139. Contrast analysis showed that the apparent difference between tridimensionally altered immersive subjects and non-immersive ones under the same spatial alteration was not significant, p = .081.

Analysis showed that the effects achieved across task groups through bidimensional alterations were not statistically significantly different, F(1, 12) = 0.014, p = .908, from the ones obtained through tridimensional alterations. This suggested that both spatial alterations achieve similar motion effects. Although no statistical significance was found between treatments, it is believed that moving a human being to a position 70 pixels away from a desired location – within a 320 x 240 pixel space – is quite successful.

Qualitative analysis

Abruptly changing the definition of the space proved not to be noticeable, yet the results were greater than initially expected. Subjects would try to regain control over their environment by finding the new location where the desired ambient visual stimulus was to be found. Some of them even felt frustrated by later changes in the color space.

Figure 4.18: Means of distance from the new home location to the hypothesized displaced home location.

Encapsulating a specific stimulus proved to be successful: users would try regaining control of the space and would usually use the tool to explore the spatial capabilities of their environment as if moving in space. Most of the subjects would search for the initial home location using the new mental model.
Since it was easier for subjects to explore the space through the key, they would engage in a moving spree to understand their space and thus gain a more complete mental map of their surroundings.

Most of the subjects (N = 6/9) found a prosthetic connection with their environment, finding it comfortable and intuitive to control. However, only some (N = 4/9) felt that they maintained control of the spatial qualities. Most of the subjects (N = 8/9) agreed that all their kinesthetic actions were chosen of their own free will. Finally, most subjects (N = 7/9) understood the relationship between their location in space and the ambient color, although only some (N = 5/9) noticed a change in the color space, usually members of the Semi-Immersive and Immersive task groups. Only one subject found it difficult to re-create a plan section of the space.

Subjects in the Semi-Immersive and Immersive task groups were highly attracted by the primed color of white objects and movement actions. They rapidly discovered that their movement would trigger a rewarding pattern and engaged themselves in motion activity to recreate, or even dissipate, the patterns. One subject commented that "I couldn't get rid of the huge red blob", while others commented that they "loved the patterns that would appear by moving the white sheets of paper." The interactive definition of the primed stimuli engaged the subjects in motion, while ambient color proved to be remembered most of the time and was linked to a location. Interactive visual stimuli promote movement, while static ambient visual stimuli promote location.

4.4 Spatial effects of visual stimuli

Spatial cognition is the result of a complex collection of physiological and cognitive processes that arise in egocentric and allocentric perception of the world. There are many theories on how humans create space and how it is stored in the brain for future use.
These theories deal with stimuli that arise naturally or artificially in the world, e.g. texture, occlusion, stereopsis, etc., and that are generally produced by objects situated in a three-dimensional or bi-dimensional reality. However, the model for Cyborg Environments proposed by the present research theorizes the existence of an autopoietic system composed of interconnected space-perceiving entities (figure 3.9). There exists a large number of stimuli that could be used for this purpose, and an infinite number of combinations and alterations that could be performed on such a collection. The need for a simple methodology to measure this inter-connection was the purpose of the present experiment.

Space has proven to have a strong relationship with human visual perception. Gibson suggests that "the basis of the so-called perception of space is the projection of its objects and the elements as an image, and the consequent gradual change of size and density in the image as the objects and elements recede from the observer"[27]. In Gibson's theory, texture plays an important role in defining the spatiality of the objects in the world. The farther away an object is from the perceiver, the denser its texture appears to be. These perceptions are used by the brain to form an organized and coherent surrounding. However, it is possible to theorize that even rougher stimuli could promote a spatial sensation comparable to the one arising from geometrical and textural means.

The present pilot study focused on the most simplified visual stimuli: light properties. Although several categorizations of light properties exist, the present study focused on measuring the spatial effect of Hue, Chromatic Strength and Contrast.

4.4.1 Device description

The device is composed of a darkened room of about 3 by 4 meters containing an e-lumens hemispherical display and projector that allows peripheral view stimulation.
A chair permitted experients to position their eyes at the same height as the projector's lens, and a mouse allowed them to control a pointer to make selections. The experiment setup is presented in figure 4.19.

An application was written in Python 2.5 that allowed for the presentation of a seamless light stimulation across the whole display and allowed the experients to select among 6 buttons depicting spatial size. The selection interface was located at the far right of the screen – the visual field – and clear perception of it was only achievable by rotating the head 90 degrees to the right. All answers were stored in a text file for further analysis.

Figure 4.19: A user rating a specific visual characteristic.

4.4.2 Measurements

A large amount of research has been done on the effects that light properties have on perceived object distances. Work by Taylor and Sumner, for example, demonstrated that an "apparent nearness of bright colors"[67] is perceived under experimental conditions. Their theories ranged from pupillary adjustments to perceived contour sharpness of the objects. However, the present study deals with the embodying sensation of space that an enclosure provides during its lifetime's alterations – i.e. the Cube Model for Cyborg Environments. Therefore, an approach that follows this definition is necessary.

A decision was taken to explore a single condition of the Cube Model: a closed cube undergoing a membrane alteration. In other words, a surrounding stimulation that, by changing its properties, has an effect on the perceived form or scale of the enclosure. The measurement of the present pilot study would then have to focus on the perceived scale – size – of a surrounding composed of a simple visual stimulus.

Using the device described above, eight subjects were peripherally stimulated twice with 18 randomized configurations of light – a total of 36 stimuli.
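The 18 stimulus configurations (presented twice, for 36 trials) can be generated programmatically. The randomization routine of the original Python 2.5 application is not documented, so the following is only an illustrative sketch of the design laid out in table 4.5:

```python
import random

LEVELS = [5, 55, 105, 155, 205, 255]

def build_trials(seed=None):
    """Build the 18 HSV configurations of table 4.5 (six levels for
    each varied characteristic), duplicate them, and shuffle into a
    single 36-trial sequence."""
    configs = []
    for level in LEVELS:
        configs.append(("hue", (level, 255, 255)))         # vary hue
        configs.append(("saturation", (125, level, 125)))  # vary saturation
        configs.append(("value", (125, 125, level)))       # vary value
    trials = configs * 2                                   # each shown twice
    random.Random(seed).shuffle(trials)
    return trials
```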
Table 4.5 presents the specific characteristics – hue, saturation and value – of the stimuli used. All values have been scaled to a normalized range from 1 to 255 to allow a comparison between them.

Table 4.5: Stimuli tested and their H-S-V characteristics used to test spatiality of visual stimulation.

  Tested       Test Value   Hue   Saturation   Value
  Hue          5            5     255          255
               55           55    255          255
               105          105   255          255
               155          155   255          255
               205          205   255          255
               255          255   255          255
  Saturation   5            125   5            125
               55           125   55           125
               105          125   105          125
               155          125   155          125
               205          125   205          125
               255          125   255          125
  Value        5            125   125          5
               55           125   125          55
               105          125   125          105
               155          125   125          155
               205          125   125          205
               255          125   125          255

Experients were asked to focus their vision on the center point of their frontal midline for at least 2 seconds and use a mouse to select a radio button scale – placed to the far right of their visual field – to answer the question: How big does your surrounding appear to be? Responses available were: tiny, very-small, small, large, very-large and huge. Each one had a value from 1 to 6, where larger numbers denote larger perceived spaces (table 4.6).

There are two types of spatiality defined by Foreman and Gillet: one is egocentric – referenced to the body midline – and the second allocentric, or related to a model of the world. In the latter, space is constructed by understanding the relationship between its forming entities and allows us to conceive extreme spaces like outer space or a rabbit's hole. Because the present experiment is interested in the spatial experiences developing from perceived stimuli, we will focus on measurements and priming related to inhabitable spaces.
The researcher is aware, however, of other spatialities that might affect the results – like the conception of outer space or a hive.

Table 4.6: Scale used to measure spatial size.

  Value   Response
  1       Tiny
  2       Very Small
  3       Small
  4       Large
  5       Very Large
  6       Huge

Constrained scaling

Following West and Ward's[70] methodology to account for idiosyncratic biases in psychophysical scales, the experients were trained to judge the scale of a space using a constrained scale. All test subjects participated in a pre-test session where they were trained to judge the spatial size of a space depicted by a photograph.

A software application was written in Python 2.5 that presented users with a pool of randomized images and provided a selection of 6 radio buttons following table 4.6. Users were asked to judge the size of the space shown by selecting a button and submitting their answer; appropriate feedback was given after each of their answers. Images of the software are shown in figure 4.20.

Figure 4.20: Software used to train users in spatial perception with both color and non-color primers. (a) Non-color primed training software; (b) color primed training software.

4.4.3 Experimental design

A-priori hue bias countermeasures

Humans tend to remember the characteristics of spaces they have visited in the past. Color vision is strongly linked to biological advantages in human beings, like pattern decoding or information recognition. The advantage is so strong that our perception of the world is both limited and enhanced by it, making previous experiences of color an important bias in the perception of space. It was hypothesized that hues – colors – remembered from previous spatial experiences could affect our measurements of spatial size. To counter this effect, the experiment tested the effect of priming subjects with specific hue biases during the constrained scaling pre-test session.
Two versions of the pre-test software were written, one for each of these testing conditions:

- The first group was trained to judge the spatial size represented by images in black and white, using a selection of 25 images (figure 4.20(a)).

- The second group was trained to judge the spatial size represented by images that were color-tinted according to five hue variations, and only 5 images were used to train the experients (figure 4.20(b)). The biases were randomized and are presented in table 4.7.

Table 4.7: Biased sizes primed during pre-test.

  Value   Response
  5       Small
  105     Tiny
  155     Very-Large
  205     Huge
  255     Small

It was hypothesized that if hue were to be considered an important bias, the color priming used would have a significant effect on the second group, altering, in a predictable manner, the participants' perceived spatial size.

Treatments applied

Finally, in both hue-bias countermeasure conditions three types of stimuli were presented:

- Hue: It has been suggested by Lars Sivik that it is not hue "which affects how exciting or calming a color is but the chromatic strength of each hue"[45]. Further work by Mikellides provides more evidence that saturation "is the key dimension affecting how exciting or calming a color is perceived"[45]. This suggests that the hue dimension of visual stimulation will not have a significant or controllable effect on spatial perception. Testing this condition allows the research to rule out any possible interaction where hue plays a role within other conditions.

- Chromatic Strength: Based on Mikellides' findings, the present research believes that changes in saturation of a visual stimulus will have a significant effect on spatial perception. The stimuli will be modeled from an HSV space. While hue and value will be kept the same, saturation will be changed.

- Contrast: Due to the particles in the air, objects acquire "a reduction of [apparent] contrast in [their] proximal representation...
depending on [their] distance from the viewer"[22]. This is called Aerial Perspective and is believed to provide the viewer with visual cues of distance. The research will test this with stimuli modeled from an HSV space with variance in value, maintaining hue and saturation static.

Statistical analysis

Table 4.8: Experimental design for spatial size of visual stimuli measurements.

  Characteristic   Values presented            Prime groups
  Hue              5, 55, 105, 155, 205, 255   Hue non-primed; Hue primed
  Saturation       5, 55, 105, 155, 205, 255   Hue non-primed; Hue primed
  Value            5, 55, 105, 155, 205, 255   Hue non-primed; Hue primed

The experiment was designed as a Completely Randomized Factorial ANOVA with three factors, CRpvc, where p = prime, v = value presented and c = stimulus characteristic. Prime was considered a between-groups factor, and value presented and stimulus characteristic were considered within-groups factors. Eight subjects aged 19 to 29 were randomly assigned to one of the prime treatment groups, resulting in each group having 4 subjects. The 2 by 6 by 3 design is depicted in table 4.8. The analyses used an alpha level of .05 for all statistical tests.

Experiment flow

The experiment had a duration of 20 minutes (figure 4.21). During the first 5 minutes users were carefully instructed about the flow of the experiment and the tasks they would have to complete. During the following 5 minutes experients were asked to complete a training task where they would learn how to measure the size of a space using the constrained scaling software – the version used depended on the randomized hue-bias group the users were part of. After completion, experients would step into a darkened room where they would rate the spatial size of the stimuli being presented. Subjects' completion time varied, but never exceeded 10 minutes.

Figure 4.21: Experiment flow. Introduction (5 minutes), training (5 minutes), stimuli rating (10 minutes).
No post-experiment interview or data collection was performed.

4.4.4 Results

Figures 4.23(a) and 4.23(b) present the means of perceived spatial size according to each stimulus characteristic. The dotted lines present the color-primed group – primed with hue variances during the constrained scaling training session. Finally, figure 4.23(c) presents the effect of color-priming on the perception of spatial size under hue variations.

A one-way ANOVA showed that hue had a significant effect on spatial size perception, F(5, 90) = 13.360, p < .001. However, a-priori contrast analysis showed that values 5 and 205 were not significantly different, p = .144, and that values 5 and 255 were also not significantly different, p = .883. In other words, a significant effect on spatial size perception was only achieved from values 5 to 155. It is important to note that due to the characteristics of the Hue-Saturation-Value space, values 255 and 5 are almost indistinguishable.

Light saturation did not prove to have any significant effect on spatial size perception, F(5, 90) = 0.620, p = .685. Contrast analyses showed that no significant difference between values was achieved.

On the other hand, the value characteristic of the stimuli proved to have a significant effect on the spatial size perceived by human beings, F(5, 90) = 76.367, p < .001. Furthermore, contrast analysis showed all values to be significantly different from each other.

A look at the plotted means in figure 4.23(b) shows that a predictable effect was achieved by varying the stimulus intensity – value. A wider range of spatial size was obtained and the effects were sufficiently controllable.

By performing a linear regression analysis (figure 4.22) we can state that the perceivable size of an environment is then measurable, on a 1 to 6 scale, with the function:

S = 0.016v + 1.3

Where S = size and v = color value; S is to be read by approximation to table 4.6.
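The fitted relation can be evaluated directly; a small sketch mapping a colour value to the 1-6 size scale of table 4.6:

```python
def perceived_size(value):
    """Predicted spatial size on the 1-6 scale for a colour value v
    in 0-255, from the regression S = 0.016v + 1.3."""
    return 0.016 * value + 1.3
```

For instance, perceived_size(5) is approximately 1.38 (near "tiny"), while perceived_size(255) is approximately 5.38 (between "very large" and "huge").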
A further analysis with light intensity in lumens should be performed, but is outside the scope of the present document.

Figure 4.22: Linear regression on perceived spatial size of color value alterations.

Color-priming effect

A univariate analysis was performed to look for significant differences between the primed and non-primed groups for each of the light characteristics tested. There was a significant difference, F(1, 84) = 11.397, p = .001, between primed and non-primed groups in the Hue characteristic. There were also significant differences, F(1, 84) = 8.262, p = .005, between primed and non-primed groups in the Saturation characteristic. Finally, differences were significant, F(1, 84) = 5.764, p = .019, in the Value characteristic.

Although significant differences existed across all measurements between the group that was primed with color and the one that was not, an in-depth contrast analysis showed that these differences were not constant: only certain values were significantly different, and no apparent and consistent effect was found.

An analysis of the effects that this hue-based priming had on hue perception supports this point. Figure 4.23(b) shows that although the color-primed group's spatial size perception differs from the non-color-primed group's, the trend does not correspond to the trend imposed in the pre-test priming. Furthermore, it seems that experients perceived the space as bigger when they were color-primed.
Figure 4.23: Means of spatial size perceived with different peripheral light stimulations. (a) Means over all tests. (b) Means over non-color-primed tests. (c) Effect of hue values primed.

Contrast analysis showed that a significant difference was only achieved at value 55, F(1, 14) = 12.444, p = .003, and value 155, F(1, 14) = 5.211, p = .039, and these differences do not follow the trend imposed by the priming: value 55 should have been lower for color-primed groups. These inconsistencies suggest that the differences found in the analysis are due to a very small pool of users – only 8 persons were tested – and not a direct effect of color priming.

The belief that the color of a remembered space could affect the perception of a space based on pure hue stimulation was not supported. This liberates the present measurements from idiosyncratic and cultural effects that color might have on spatial perception. Further studies, with more test subjects, should be performed to strengthen this point.

4.5 Spatial effects of directional light intensity

The previously exposed pilot study on spatial effects of visual stimuli suggested that peripheral visual stimulation can evoke strong and controllable spatial experiences in human beings.
However, humans are rarely in a fixed position.12 Head rotations, limb movements and displacement of the body within space result in the perception of stimulus variations that help the brain update the spatial mapping of the world where the body acts.[52] Humans rely on the fact that acting upon the world – e.g. moving the body to a different location – will result in correlated stimulus changes. This allows perceiving subjects to update their model of the world appropriately.

A study was performed to address the question of how much stimulus variation should be correlated to human movement in order to allow cognitive maps to develop. There is a strong relationship between kinesthetic motion and spatial mapping [52] that arises in the navigation of real environments, and consistent proof was found in the previous studies of the present research that movement is an important factor in space perception and in interaction with space. Human movement can be reduced, for the purposes of this pilot study, to:

- body motion in relationship to a presented stimulus, and
- head rotation in relationship to a presented stimulus.

The previous pilot showed that strong and controllable spatial perception can be achieved through alterations in light intensity (value). Because peripheral stimulation rarely occurs in the real world, a study of directional stimulation – i.e. stimuli not covering the whole visual field of a human perceiver – was needed. It was hypothesized that a more complex perception, linked to the previously outlined human movements, would arise from these stimulus characteristics.

12 This has been suggested by the initial pilot study on interactive spaces.

4.5.1 Device description

In order to test the effects of directional visual stimuli on spatial perception, a virtual environment was created (figure 4.24). This allowed full control of stimulus characteristics and removal of external stimuli and objectual properties – e.g.
texture – that could bias the space perception measurements.

A virtual world was written in C++ using the OpenGL library. A Fastrack Polhemus tracking system was used to measure position change in a 2 by 2 meter area, and the data were correlated to camera changes in the virtual world. In the same way, head rotations about both the Z and X axes were transformed into changes in camera rotation and tilt. The result was presented to experients wearing a VR Pro head mounted display. A Polhemus position and orientation recording was made with a latency of 1 second and written to a text file for future analysis.

The previous pilot study suggested that changes in light intensity (value) correlate to changes in perceived spatial size. According to our findings, the perceived spatial size is measurable with the function S = 0.016v + 1.3. Therefore, it is possible to analyze low and high values and interpolate the remaining ones. Two stimulus values were then tested:

- Intensity value 55, perceived as a dark space and sized 2.39 on the presented spatial scale (very small).
- Intensity value 255, perceived as a bright space and sized 5.33 on the presented spatial scale (very large).

It was acknowledged that the virtual implementation of a space allowed immense flexibility in form and stimulus presentation. It was also hypothesized that this flexibility could be used to test real-life applications of the model of this research. By using a cubed virtual space, the pilot study would be consistent with our previous experiments, fit conceptually with the enclosure level of the Cube Model outlined in the first part of this document, and allow the translation of any findings to a life-sized implementation.
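The coupling between tracker samples and the virtual camera described above can be sketched as a pure mapping. Only the 2 by 2 meter tracked area comes from the text; the normalization convention, the degree units and the function name are assumptions made for illustration, not the original C++ implementation.

```python
def polhemus_to_camera(x_m, y_m, rot_z_deg, rot_x_deg):
    """Map a tracked position (meters, within the 2 x 2 m area) and head
    rotation about the Z and X axes (degrees) to camera parameters:
    a position re-centered on the middle of the area, plus yaw (camera
    rotation) and tilt."""
    if not (0.0 <= x_m <= 2.0 and 0.0 <= y_m <= 2.0):
        raise ValueError("position outside the 2 x 2 m tracked area")
    # Center of the tracked area becomes the virtual-world origin,
    # so positions fall in [-1, 1] on each axis.
    return {"x": x_m - 1.0, "y": y_m - 1.0,
            "yaw": rot_z_deg, "tilt": rot_x_deg}
```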
The design of the world was then simplified to a box with a flat floor, a ceiling and straight walls providing concrete directionality – down, up, left, right, back and front.

Figure 4.24: Head mounted display and virtual world. (a) Head mounted display. (b) Virtual world (textures have been exaggerated).

The box faces were then mapped using OpenGL with an almost imperceptible texture in one of the two previously outlined intensity values.

Finally, the virtual space was proportioned according to table 4.9. This was done to account, as much as possible, for the screen ratio of the head mounted display. It was found that different proportions confused experients, because the walls of the virtual space occupied more than two times the visual field provided by the VR Pro Goggles LCD displays, and extreme head tilts were needed to explore the space. The proportions used allowed a more natural exploration of the virtual reality space.

Table 4.9: Proportions of the virtual space.
          Length  Width  Height
Floor     1       1      -
A wall    1       -      0.5
Ceiling   1       1      -

4.5.2 Measurements

The present pilot study focused on understanding how real-life applications of the previous findings on visual stimulation would affect spatial cognition and exploration. A virtual prototype was chosen to eliminate or control biases that could arise in a real-life prototype: specifically, perspectival cues arising from stimulus limits in directional stimulation, and textural cues provided by materials in the real world. Furthermore, the difference between peripheral and directional stimulation, and the differences between directional stimulus configurations, had to be measured and understood.

A quantitative and in-depth analysis was outside the scope of the present study and could not have fit in the extremely reduced time span allotted for the pilot study.
Therefore, a careful qualitative analysis was done in search of initial findings that could serve as pointers to further explorations and implementations. With this in mind, the present study acknowledges that further measurements and analyses could have been performed; they have been scheduled for future explorations. Four qualitative analyses were done to understand:

- the effect of non-light, or external, cues in space navigation and exploration,
- the effect of stimulus limits in directional stimulation,
- the difference between horizontal and vertical stimulation, and
- the difference between peripheral stimulation and directional stimulation.

4.5.3 Experimental design

Treatments applied

Using the device outlined above, eight subjects were presented three times with a set of 6 randomized stimulus configurations. Subjects experienced a total of 18 stimuli, each lasting 20 seconds. Table 4.10 presents each configuration as seen initially by a test subject. Figure 4.25 presents their graphical representation.

Table 4.10: Directional stimuli spatial perception, configurations.
Name                           Value (intensity)  Presented on
Low stimulation                55                 all surfaces
High stimulation               255                all surfaces
Vertical single stimulation    255                ceiling
Vertical bi-stimulation        255                floor and ceiling
Horizontal single stimulation  255                front wall
Horizontal bi-stimulation      255                right and left walls

Qualitative analysis

Readings of experient position and head orientation were saved to a text file every second. These data were analyzed through an application written in Python 2.5 for this purpose (figure 4.26). The software permits an experimenter to visualize the experient's position in relation to previous and future states, head rotation, and head tilt. It also presents the spatial configuration of the virtual prototype through a plan and a section drawing, marking walls with high values (value of 255) over the experiment timespan.
A plot of scattered points marks the recorded (x, y) positions of the human perceiver, and a marker shows the subject's position at a specific time. Finally, a slider allows the experimenter to set the desired time to analyze, or to browse through the experiment's flow with ease.

Experiment flow

The experiment had a total duration of 15 minutes (figure 4.27). During the first 5 minutes the experimenter explained their tasks to the test subjects and helped them put on the VR ProView head mounted display. During the next 6 minutes users interacted with the virtual environment; they were told that the virtual world would change across time and that they would experience different spatial states. Finally, during the last 4 minutes an informal interview took place.

Figure 4.27: Spatial effects of directional light intensity. Experiment flow (introduction, 5 minutes; interaction, 6 minutes; interview, 4 minutes).

4.5.4 Results

Users of the virtual world seemed fascinated by the high-resolution image of the ProView head mounted display and by the correlation between their kinesthetic movement and the seen world. Most of the experients – 6 of the 8 subjects tested – had never before tried virtual reality technology; this caused a sense of fascination that faded away after a few seconds of being inside the world.

Users seemed to choose an initial location to which they would return after exploring the virtual world. This was evident from the data.
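The per-second recording and the slider lookup used by the motion analyzer can be sketched as follows. The whitespace-separated "time x y rotation tilt" line layout is an assumption for illustration, since the original log format is not specified in the text.

```python
def parse_log(lines):
    """Parse per-second position/orientation recordings into dicts."""
    records = []
    for line in lines:
        t, x, y, rot, tilt = line.split()
        records.append({"t": int(t), "x": float(x), "y": float(y),
                        "rot": float(rot), "tilt": float(tilt)})
    return records


def state_at(records, t):
    """Return the last record at or before time t -- the slider lookup."""
    eligible = [r for r in records if r["t"] <= t]
    return eligible[-1] if eligible else None
```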
Furthermore, movements seem to have been made mostly in one direction, back and forth, due to technological constraints: the limits of the head mounted display screens were the cause of this bias. For example, experients searching for the limits of a stimulus – i.e. the limits of an illuminated wall – needed to move backwards to fit the percept within the visual field provided by the displays.

Stimulus limits (perspectival cues) and texture provided the most important cues for spatial perception. When the space had none of these cues readily available to the perceiving humans, a considerable subjective change in spatial size was noted. However, kinesthetic movements and careful visual exploration of almost imperceptible textural cues helped subjects correct their spatial model. A more detailed account of the results of this experiment follows.

Figure 4.28 presents the analysis of one participant's interaction data. Under low stimulation (figure 4.28(a)) head rotations were limited and movement was rare. Under high stimulation (figure 4.28(b)) movements were more common and usually in search of the limits of the virtual space. Under horizontal bi-stimulation (figure 4.28(c)) the participant made strong head rotations but remained in the center of the space. During horizontal single stimulation the user paid close attention to the presented plane, but moved backwards in order to find its visual limits. Finally, the average position of this experient, as of most participants, was close to the center of the space, with few movements in one or two directions.

Figure 4.28: Analysis of a participant. (a) Low stimulation. (b) High stimulation. (c) Horizontal bi-stimulation. (d) Horizontal single stimulation.
Non-light and external cues

The surface of the virtual world was made to have a barely noticeable texture. Most users – 7 out of 8 – commented that this texture cue helped them "see how close to the wall [they] were". Although their kinesthetic actions allowed them to deduce their position [52] in the virtual world, users generally looked for visual cues that would allow them to "be sure [they] were close to the wall". If a stimulus configuration – generally when all walls were at the low (55) value – did not provide any cues, subjects would kinesthetically approach the space limits in search of textural cues that could let them see whether they were "next to the wall or at the corner". Texture seemed to provide closeness and perspectival information to the human perceivers when perspectival cues from stimulus limits were unavailable.

Stimulus limits

In the present experiment, stimuli are placed within a perspectival reality. Stimulus directions were constrained to the faces of a virtual box, and thus stimulus limits were defined by the cube's edges. Subjects commented, and it was later observed in the data, that when a new stimulus appeared they would scan for its limits in order to find the "limits of the space". It was evident that the bias caused by display size and reduced field of view was the result of this search for limit comprehension.

Initial perception of a full low or high stimulation (table 4.10) resulted in disoriented subjects and either strong head rotations or no movement at all. Some subjects – 2 out of 8 – seemed paralyzed during these stimulations, generally waiting for the experience "to be over", while most subjects – 6 out of 8 – engaged in searching for cues that would help them construct the limits of the space. All subjects, however, noted that the space "seemed to have changed" when full or low stimulation was provided.
One subject commented that "the space seemed like a big room when it was all illuminated", while another noted that "when the lights were turned off the room felt very small".

Horizontal and vertical stimulation

As we have seen, stimulus limits played an important role in spatial perception; therefore, vision was oriented in the direction of the lit surfaces in search of spatial cues. Bi-stimulations (table 4.10) resulted in alternating head rotations or tilts – left-right, up-down – in the directions of the presented stimuli, while single stimulations resulted in single head rotations or tilts – up, down, left, right – of an attentional nature.

Movement seemed to be strongly affected by the direction of the stimulus presented. Horizontal stimulation (table 4.10) – walls – resulted in apparently higher rates of movement, while vertical stimulation – floor and/or ceiling – resulted in a low rate of movement and a high count of head tilts.

Peripheral stimulation and directional stimulation

Perceived spatial size seemed to be considerably affected during peripheral stimulation due to a lack of perspectival, textural and kinesthetic cues. This effect was not observed during directional stimulation, because the present implementation allowed some of these cues to be evident, especially perspectival ones. It was shown that small, almost nonexistent, pieces of information help human perceivers form a mental model of their surroundings, and that such cues bias our previous findings on spatial size perception based on pure peripheral visual stimulation.

Strong differences were found between peripheral and directional stimulation, the main factor being that directional stimuli generally provide perspectival cues that help human perceivers construct a mental representation of their environment.

4.6 SENA prototype

A Space Encoded Agent is a simplified interactive agent designed specifically to test the existence of a Cyborg Environment.
Its components, as explained at the beginning of this document, are meant to interconnect with human users in a seamless manner by fitting human perception and cognition of space. The present experiment is an initial implementation of a Space Encoded Agent based on the findings of the previous studies of this research. The prototype was built to gather comprehensive data that would support the presented model for Cyborg Environments, and is based on the understanding of the

1. behavioral effects of spatial environment interactivity,
2. effects of knowledge or belief of an interactive system on the interaction process, and
3. inter-relational and social capabilities of interactive spatial environments.

4.6.1 Device description

SENA – an acronym for Space ENcoded Agent – is a basic Space Encoded Agent that perceives human motion and position, creates a spatial model, and acts upon its world – i.e. its human inhabitants – by producing directional light stimuli.

A space of 4 by 5 meters was delimited using plain white plastic construction tarps. Two projectors were then mounted outside the delimited space. One projector was used to create a rear projection – the wall stimulus depicted in figure 4.29(a) – of a white square with one of two light intensity values. The second projector was used to create a ceiling-down projection – the floor stimulus depicted in figure 4.29(b) – of a white square with one of two light intensity values. The setup of the experiment is depicted in figure 4.30.

Light values are presented in table 4.11 and are based on our previous findings on visual stimuli and their perceived spatiality in a Hue-Saturation-Value definition. The selection of intensity values was done using the function S = 0.016v + 1.3 to provide one of two spatial sizes according to table 4.6:

- Spatial size of 2.5 (between very small and small), using an intensity value of 75.
- Spatial size of 6 (huge), using an intensity value of 255.

Table 4.11: Stimuli definition.
Name        Projected onto  (R, G, B) value
Wall Low    Wall            (75, 75, 75)
Wall High   Wall            (255, 255, 255)
Floor Low   Floor           (75, 75, 75)
Floor High  Floor           (255, 255, 255)

A camera mounted on the ceiling was used to detect the rate of motion and position of a human test subject. The motion algorithm was designed following the change detection algorithm of Bevilacqua et al.[1] to account for high noise levels and rapid changes in illumination resulting from stimulus changes.

An Emotion Selector (figure 4.31) was constructed using a keyboard containing four keys labeled with four emotions – Stress, Excitement, Depression and Relaxation – taken from Russell's Affect Grid[60] and used in the measurement of message transmission, explained in detail in a section below.

A Pentium III computer running Ubuntu 7.04 hosted the Space Encoded Agent and controlled both projectors and the camera. Every second the system read its state (direction and value of the stimulus being presented) and its world (the rate of motion and position of a human) and recorded the total count of states observed so far into a binary database. This total count of states, recorded with a 1 second latency into a binary file for future analysis, was then used in Bayesian inference, following Russell [61].

Figure 4.29: Spatial stimuli used by the system. (a) Wall stimulus. (b) Floor stimulus.
Figure 4.30: Experiment setup.
Figure 4.31: Emotion selector.

The previously presented definition of autopoiesis states that an autopoietic system is fundamentally purposeless. However, the parts that form the network of interrelationships constituting it are determined by internal goals that define their position in the system.
It was acknowledged that the system could not be designed to perform a specific outcome or goal if a true autopoietic system was to be constructed. Nevertheless, each part would have to have predetermined goals that would allow and maintain its connectivity with other members of the structure.

According to the presented autopoietic system based on a human cyborg and a space cyborg (figure 3.9), any node of the network has as its main goal to gather information, undergo a spatial experience and produce information. Furthermore, it was hypothesized that such a purposeless agent could learn a code not related to its own autopoietic description and use it to communicate a message. The latter would lie outside the autopoietic description and would allow testing the inter-relational and social capabilities of interactive spatial environments.

Two existence goals13 were the base of the design of the SENA prototype:

- Autopoiesis: the main goal of the agent, which allows it to become autopoietic. The goal is formed of three sub-goals: perceive the world, create a spatial model, and act accordingly.
- Communication: a test goal measuring the inter-relational and social capabilities of an agent. The goal is to learn a code by observation and use it to convey a message.

Autopoiesis goal

Cyborg space perception is straightforward. The agent perceives the world, every second, through a motion detection algorithm that returns a set of world states. This information, along with the knowledge of its own state, is memorized into a mental spatial model. A Space Encoded Agent uses a Bayesian network to accomplish the latter. The network is designed – following the findings of previous pilot studies and the definition of an autopoietic system between space cyborgs and human cyborgs – to represent the spatial relationships between a space cyborg's world and its own state.
Figure 4.32 presents this Bayesian network.

13 A goal not determining an action task but the management of information that results in an entity being part of a system – alive.

Figure 4.32: Interaction Bayesian network (nodes: Communication (Transmit/Receive), Direction (Wall/Floor), Value (High/Low), Position (Left/Right), Motion (Fast/Slow)).

Using this spatial mental map – the Bayesian network – the agent is able to infer the probability of a light intensity (Value) and its wall or floor direction (Direction) being the cause of a specific human activity (Motion) and location on the left or right side of the delimited space (Position), given the agent's state of learning a code (Receiver) or transmitting a message (Transmitter). The collection of inferences, or conditional probability tables, that arises from the spatial mental map is the representation of the world as perceived by said agent.

Actions to be taken by the agent are then based on the analysis of this representation, towards maintaining the autopoietic nature of the system. In other words, the conditional probability tables are used to decide on a state that maintains an interactive process – a flow of information within the autopoietic network. A careful analysis of the autopoietic system, based on the findings of the initial studies and on the definition of the same system, yields the following logic:

1. If human actions truly correlate to specific space states – as found by the spatial effects of visual stimuli study – test subjects should respond in a predictable manner to randomized stimuli, making the probability of such correlations higher.

2. A change in human state when a correlation to a stimulus is strong should imply that such correlation is now irrelevant, and thus a search for a new correlation should follow. Because we are testing the correlation hypothesis, a randomized state should be selected.

3.
If a change in space state is triggered but human actions remain the same, we can infer either that the unchanged human actions are the result of the newly selected spatial stimulus, or that there exists no correlation between the two and the human actions are affected by neither the previous stimulus nor the newly selected state. In either case, human actions will eventually add up to cause the system to again believe in a strong correlation between the new stimulus and the accumulated human actions.

In other words, a space cyborg will construct an accurate mental model of the world. If it believes that the actions of the human cyborg (its world) are the result of its present state, and such actions change, then a new randomized state will be produced. The space cyborg will then learn the human actions resulting from its new state. This logic was translated into the following algorithm[61]:

    repeat every second
        if P(Direction,Value|Motion,Position,Communication) > 0.50
            if Motion and/or Position != those from previous loop
                set random Direction other than present Direction
                set random Value other than present Value
            else continue
        else continue

Communication goal

An interacting entity can be shown to communicate if it has the ability to engage in meaningful dialogue. For the present research, dialogue is defined as the ability to promote third-order couplings [34] – self-imposed changes that result in the autopoietic system14 suffering a shift of state. If, by changing its own state, an entity causes other members of the autopoietic system to suffer a correlated change, there exists dialogue.

Communication, therefore, can be expressed in terms of message transmission. When an entity sends a message it changes its own state according to its own experience. This produces a "structural congruence"[34] between entities of the autopoietic network that in turn results in members of the system suffering fundamental changes of co-adaptation.
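The autopoiesis loop presented above can be exercised with a count-based sketch. The state names, the Counter-backed probability table and the class layout are illustrative assumptions; the original prototype used a binary database and a Bayesian network, not this toy.

```python
import random
from collections import Counter

DIRECTIONS = ("wall", "floor")
VALUES = ("high", "low")


class AutopoiesisGoal:
    """Toy version of the autopoiesis loop: keep joint counts of
    (direction, value, motion, position) and re-randomize the agent's
    own state when the believed correlation is strong (P > 0.5) but the
    human state has changed since the previous loop."""

    def __init__(self):
        self.counts = Counter()
        self.direction, self.value = "wall", "low"
        self.last_world = None

    def p_state_given_world(self, motion, position):
        """P(direction, value | motion, position) from the counts."""
        world_total = sum(c for (d, v, m, p), c in self.counts.items()
                          if m == motion and p == position)
        if world_total == 0:
            return 0.0
        return self.counts[(self.direction, self.value,
                            motion, position)] / world_total

    def step(self, motion, position):
        """One 1-second perceive/decide/act cycle."""
        self.counts[(self.direction, self.value, motion, position)] += 1
        changed = self.last_world not in (None, (motion, position))
        if self.p_state_given_world(motion, position) > 0.5 and changed:
            self.direction = random.choice(
                [d for d in DIRECTIONS if d != self.direction])
            self.value = random.choice(
                [v for v in VALUES if v != self.value])
        self.last_world = (motion, position)
```

With two binary factors, "random state other than the present one" is deterministic, which keeps the sketch easy to follow; with richer state spaces the random choice would matter.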
If this correlation between the entity that sent a message and the entities that received it is strong, a message decoding will occur. For example, if the spatial system is able to "feel sad" by setting its own state to one that results in other members of the system feeling sad, we can assert that there has been a message decoding – a third-order coupling promoted by a dialogue – and thus a social act. Social correlation is then a function of message-deciphering accuracy.

14 The network the entity belongs to.

Following the methodology used by Smith and MacLean [66] in their article Communicating emotion through a haptic link: Design space and methodology, a communication goal was included in the present agent. Such a goal lies outside the autopoietic system and thus, theoretically speaking, does not affect the purposelessness of the network.

A second Bayesian network – a spatial model of a world of semantic nature – was created to allow the agent to infer the probability of its light (Value) and wall or floor direction (Direction) states being the cause of a specific emotional human state from Russell's [60] bi-dimensional Affect Grid, based on unique pairs of arousal (Arousal) and pleasantness (Pleasantness). Figure 4.33 presents this Bayesian network.

Figure 4.33: Communication Bayesian network (nodes: Direction (Wall/Floor), Arousal (High/Low), Pleasantness (High/Low), Value (High/Low)).

The agent would use this spatial model to create a representation of the semantic world being perceived, in the form of conditional probability tables – learning a code through experience. Once sufficient experience was gathered, the agent would be given a randomized list of messages to transmit to a human being. The agent would then access its representation of the world and set its light (Value) and wall or floor direction (Direction) states accordingly.
The action algorithm is straightforward:

    from experience
        find learned Arousal/Pleasantness pairs
        create a randomized list of 16 learnt (Arousal, Pleasantness)
    for each Arousal/Pleasantness in the list
        find highest P(Direction,Value|Arousal,Pleasantness)
        set Direction and Value
        wait for response and store

4.6.2 Measurements

Analysis can be performed by observing and comparing the mental spatial map (Bayesian network) and the representation of the world (conditional probability tables) of each autopoietic system that evolved during interaction. Because the overall count of states can be easily extracted from the recorded binary file, it is possible to reconstruct the conditional probability tables for analysis. Furthermore, statistical analysis can be performed on the total counts of states: for example, the total seconds that humans were moving while the space presented a low wall stimulation, or the total seconds that humans spent in the right part of the space during the whole experiment.

Behavioral effect of interactive systems

It was found in the previous experiments that spatial stimuli have behavioral effects on the humans who perceive and interact with them. However, a deeper understanding of the actions that would arise in an autopoietic system formed by a space cyborg and a human cyborg was needed. The measures performed to achieve such a comprehension were:

1. Number of actions that correlate to one, and only one, stimulus.
2. Behaviors related to each stimulus.
3. Behaviors related to each interactive treatment.

The first measurement tests the accuracy of the interactive rationale of the prototype with respect to the human actions measured during the experiment. Following the rationale used to design the Autopoiesis Goal of the presented Space Encoded Agent, it is possible to deduce a logical true correlation between space states and human states. The Autopoiesis Goal algorithm is designed so that the cyborg state changes only if its mental model is disrupted.
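The transmission step of the communication goal described in section 4.6.1 can be sketched from joint counts of the kind the agent records. The count layout and the helper names are assumptions made for illustration, not the original implementation.

```python
import random
from collections import Counter


def best_state(counts, arousal, pleasantness):
    """(direction, value) maximizing P(direction, value | arousal,
    pleasantness) in a Counter of (direction, value, arousal,
    pleasantness) observations, or None if the pair was never learnt."""
    candidates = {(d, v): c for (d, v, a, p), c in counts.items()
                  if a == arousal and p == pleasantness}
    return max(candidates, key=candidates.get) if candidates else None


def transmission_plan(counts, n=16, seed=None):
    """Randomized list of n learnt (arousal, pleasantness) messages,
    paired with the state the agent would present to convey each."""
    learnt = sorted({(a, p) for (d, v, a, p) in counts})
    rng = random.Random(seed)
    messages = [rng.choice(learnt) for _ in range(n)]
    return [(m, best_state(counts, *m)) for m in messages]
```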
[15] Logically, if humans act in a predictable manner, each space state will produce one, and only one, human state. This can be found by searching the mental representation of the world – conditional probability tables – of the space cyborg for space state pairs (direction, value) that correlate to a single (position, motion) state.

The second measurement is targeted at understanding the overall behavioral effects that each stimulus had on human participants. An analysis of the means of total counts of position and motion states related to every (direction, value) pair was performed.

[15] If it believes that the actions of the human cyborg are the result of its present state, and such actions change, a new random state will be produced.

Finally, the third measurement deals with a deeper understanding of the overall behavior resulting from different interactive conditions. The numbers of position and motion counts were compared across all treatments and depict the overall interaction activity. The analysis focused on high motion, i.e. the result of interaction activity, and right position, i.e. being in the right part of the space.

Effects of a-priori beliefs of an interactive system

Four variables were measured to understand the participants' perception of the interactive system, namely:

- Beauty: Measured with the agreement to the statement "My experience today was beautiful".
- Pleasantness: Measured with the agreement to the statement "My experience today was pleasant".
- Comfort: Measured with the agreement to the statement "I felt comfortable interacting through this system".
- Involvement: Measured with the agreement to the statement "I felt involved in the experiment".

The data were gathered through the post-experiment questionnaire, which allowed our subjects to rate their agreement with each measured variable from 1 to 5.

Communication ability of an interactive spatial environment

If the system has both efficiently learned a code and used it to convey a message, a correlation between the system's experience and the subject's decoding of the message should exist. Since the experients are blind to the fact that they have provided the code to the system, selecting an (Arousal, Pleasantness) pair to decode a message that equals the cyborg space's beliefs implies that the space encoded agent has successfully sent a decipherable message.

Research done by Smith and MacLean [66] on emotion communication through a simple bi-dimensional haptic link provided a credible benchmark and methodology to analyze the accuracy of such message decoding. Furthermore, previous research done by Mikellides [45] on the relationship between color and physiological arousal showed that only small psychological effects on emotion can be achieved through chromatic strength (saturation). In his research, light intensity and hue affected the emotions of human subjects unpredictably and insignificantly.

Measurement of this communication ability was done by analyzing the number of accurate message decodings done by human subjects in the second part of the experiment. Finally, by contrasting the findings of the present experiment with the results of Smith and MacLean [66], it is possible to measure the performance of the cyborg space in comparison to human beings.

Qualitative analysis

A qualitative analysis of the questionnaire, post-experiment interview and notes taken by the experimenter was performed.
The analysis focused on:

- users' belief that the system can be used to communicate intimate feelings with a loved one,
- decisions to choose movements and locations during the experiment,
- aesthetic perception of the prototype,
- attempts to guess the rules of the system,
- proposals of use for the present interactive system, and
- the experimenter's observations,

and was performed over each of the treatment groups.

4.6.3 Experimental design

Applied treatments

In order to measure the behavioral effects of a-priori knowledge and understand the interactive process that arises between humans and their environments, two experimental conditions were created: a non-interactive one and an interactive one – exposed to the agent outlined here. Furthermore, within the interactive condition, three subconditions were created in order to test the effect of a-priori knowledge on the interaction process, namely: the belief in a non-interactive environment, the belief in an interactive environment manipulated by a machine, and the belief in an interactive environment manipulated by a human being. Table 4.12 presents the arrangement of the treatments applied.

This arrangement resulted in four test groups:

- Non-interactive Control: A first control group was exposed to a non-interactive system formed of a single randomized fixed directional light value.
- Interactive Control: A second control group was exposed to an interactive system and told that changes were randomized across time. No interaction was expected from the subjects of this group.
- Interactive A.I.: A first treatment group was exposed to an interactive system and told that the space was controlled by an artificial intelligence attempting to interact with them. Interaction was expected from the users of this group.
- Interactive Human: A second treatment group was exposed to an interactive system and told that the space was controlled by the actions of a second test subject in a hidden replica of the system.
Subjects were expected to interact with this remote participant.

Table 4.12: Treatments applied. [Treatments: Non-interactive Control; Interactive Control; belief of Human-Computer Interaction; belief of Human-Human Interaction.]

Table 4.13: Experimental design for the SENA experiment. [Rows: Non-interactive Control, Interactive Control, Interactive A.I., Interactive Human.]

Statistical analysis

The experiment was designed as a Completely Randomized Factorial ANOVA with One Factor, CR-i, where i = interactivity. The design is presented in table 4.13. Twenty subjects – 13 females and 7 males, aged 18 to 38 – were randomly assigned to one of the groups. This design was applied to all our analyses of the data recorded by the Bayesian network, and to analyze the responses to the post-experiment questionnaire.

Experiment flow

The experiment had a total duration of 45 minutes (figure 4.34). During the first 5 minutes subjects were introduced to the experiment and a consent form was signed. The experimenter would carefully explain the tasks to be performed and answer any questions.

Due to its limited access to the formation of goals, as exposed by the previously discussed cyborg citizenship structure (figure 3.7), an agent can only truly interact with another agent.[16] It was theorized that a human being, a super-agent, can become an agent if sufficient control is exerted on his or her goal-formation capabilities. By removing specific tasks in an experimental condition it is possible to construct an agent-agent autopoietic system. Following this logic, during the second part of the experiment subjects were required to spend thirty minutes inside the delimited space. Participants were not allowed to bring any portable devices, magazines or books and were told to live in the delimited space. No interaction tasks were given to the participating subjects, nor instructions to manipulate the state of the system, in an attempt to eliminate any goal-oriented activity.
As their only task, subjects were asked to use the provided Emotion Selector to update their emotion whenever a shift towards any of the given emotions occurred. Because this last task did not have any apparent effect on the interactive process it was not considered a goal-forming task.

During the next 5 minutes, subjects in the interactive groups were asked to decode 16 messages supposedly being sent by the experimenter, the artificial intelligence or the hidden experient through the characteristics of the room. Subjects were asked to select their responses using the same keyboard they had previously used to input their emotions.

During the last 5 minutes participants were asked to answer a questionnaire, and a short interview took place.

Figure 4.34: Experiment flow. [Introduction, 5 minutes; Interaction, 30 minutes; Decoding, 5 minutes; Assessment, 5 minutes.]

[16] Super-agent to agent interaction is incoherent.

4.6.4 Results

It is important to note that subjects in the interactive human group seemed to believe that they were interacting with a hidden participant. All of them made strong efforts to communicate, and were sometimes disappointed that the hidden participant seemed not as active as they were. However, some of the participants commented that after a few minutes of interaction with the system it was clear to them that there was no hidden subject. This fact should be taken into consideration when reading the following results.

Number of actions that correlate to one, and only one, stimulus
Figure 4.35: Means of actions that correlate to one, and only one, stimulus.

A logical correlation between space states and human states was analyzed with the query [61]:

P(Direction, Value | Motion, Position, Communication)

done over all possible Direction/Value pairs corresponding to each Motion/Position pair. Following our design and implementation of the system's Autopoiesis Goal, where a hypothesis is considered to be strong if it exceeds a threshold of Probability > 0.5, true hypotheses were selected if such a query resulted in a value greater than 0.5, i.e. a 50% probability of being true. Then, if for any Direction/Value pair there existed one and only one true Motion/Position hypothesis, a true correlation between space state and human state was counted. A fully correlated system (each one of all four possible human states correlated to only one of the four possible space states) would score 4. The findings are depicted in figure 4.35.

ANOVA analysis of the number of logical correlations showed a significant effect, F(3,16) = 3.658, p = .035, of a-priori knowledge on the correlation between space stimuli and human behaviors. The interactive A.I. group scored the lowest, while the interactive control scored the highest. An analysis of contrasts between the interactive groups (control, A.I. and human) and the non-interactive control showed that a significant effect was only achieved by the interactive control, p = .009, and the interactive human, p = .018, groups. A non-significant difference, p = .207, against the non-interactive group was achieved by the interactive A.I. group.

The results suggested that non-interactive spaces, i.e. static ones, were not able to achieve any logical correlations between spatial stimuli and human actions. Interactive spaces, except when humans are told that they are interacting with an artificial intelligence, seemed to achieve a higher count of logical correlations.

Contrast analysis between the interactive groups showed that the interactive A.I.
group was not significantly different from the interactive control group, p = .120, that the interactive human group was not statistically different from the interactive control group, p = .747, and that the interactive human group was not statistically different from the interactive A.I. group, p = .207. This analysis suggested that the logical algorithm implemented in the Autopoiesis Goal of the prototype proved successful in achieving equal effects on stimuli-behavior correlations across interactive groups. Furthermore, it proves that no significant effect on logical correlations is achieved by either the A.I. or human groups in comparison to the interactive control group.

These findings demonstrate that the designed interactive agent successfully constructed an autopoietic system, i.e. a network of co-relationships, with its human inhabitants. Further analyses exposed below will help provide additional proof of this statement.

Behaviors related to each stimulus

The correlation between spatial stimuli and human behaviors was analyzed using a Chi-Square Test for Goodness of Fit. It was found that the human activities resulting from changes in spatial stimuli were statistically significantly different, χ²(9, N = 72320) = 2473.666, p = .000, from each other. This suggested that human actions varied due to spatial stimuli changes, and not due to variances between participants' actions. Table 4.14 presents the percentages of the total count of correlations between all spatial stimuli and all human actions. Figure 4.36 presents the total count of appearances of specific behavior actions according to each stimulus.

The statistical differences between stimuli depicted in figure 4.36 and the percentages of correlated stimuli-action pairs shown in table 4.14 show a clear dependency between stimuli and human behavior. This suggested that human subjects and the stimuli-based spaces formed a self-regulating network of stimuli-action relationships, i.e.
an autopoietic system.

             Position Left  Position Right  Motion Low  Motion High
Floor Low    5.8%           4.3%            7.9%        2.2%
Floor High   7.5%           4.3%            9.8%        1.9%
Wall Low     12.2%          3.7%            13.8%       2.1%
Wall High    5.9%           6.3%            11.2%       1.0%

Table 4.14: Percentages of total stimulus-action correlations across all subjects.

Figure 4.36: Count of appearances of human actions related to each spatial stimulus.

In order to understand the nature of these correlations, an in-depth analysis was performed for each different behavioral state, using the means of human activity recorded during each stimulus and shown in figure 4.37:

- Position left: The human behavior of being in the left part of the space did not seem to vary significantly, F(3,76) = 1.932, p = .132, during different spatial stimuli. However, analysis of contrasts showed that wall low stimulations resulted in an effect significantly different, p = .023, from the ones obtained by other stimulations.

- Position right: The human behavior of being in the right part of the space did not seem to vary significantly, F(3,76) = 0.657, p = .581, during different spatial stimuli. Contrast analysis showed that the various stimulations had no significant effect on this behavior. Although this behavior seemed to be affected by wall high stimulations, contrast analysis showed that this effect was not significantly different, p = .176, from the ones obtained by other stimuli.

- Motion low: The human behavior of being stationary did not seem to vary significantly, F(3,76) = 0.924, p = .433, during different spatial stimuli. Contrast analysis showed that the various stimuli had no significant effect on this behavior. The apparent effect of wall low stimulations showed not to be significantly different, p = .165, from the ones obtained by other stimuli.
- Motion high: The human behavior of moving did not seem to vary significantly, F(3,76) = 1.258, p = .295, during different spatial stimuli. However, contrast analysis showed that floor low, floor high and wall low stimulations were almost significantly higher than a wall high stimulation, p = .061. This effect is not apparent or significant and should be discarded as a real effect.

Figure 4.37: Means of behavior states recorded during each stimulus, across all groups.

A second analysis was then performed for each stimulation state:

- Floor low: During floor low stimulations, differences between behavioral states proved to be not significant, F(3,76) = 2.506, p = .065. Contrast analysis showed that none of the behaviors were significantly different. This showed that under floor low stimulations no significant effect can be achieved.

- Floor high: During floor high stimulations, differences between behavioral states were significant, F(3,76) = 3.592, p = .017. Contrast analysis showed that position right and position left did not differ from each other, p = .227, meaning that floor high stimulations did not have an effect on human position. However, contrast analysis showed that motion high and motion low differed significantly, p = .003, suggesting that floor high stimuli have an effect on motion.

- Wall low: During wall low stimulations, differences between behavioral states were significant, F(3,76) = 8.024, p = .000. Contrast analysis showed that a significant difference existed between motion low and motion high, p = .000, as well as between position right and position left, p = .005. In other words, wall low stimulations promoted low motion and left positions.

- Wall high: During wall high stimulations, differences between behavioral states were significant, F(3,76) = 4.847, p = .004.
Contrast analysis showed that differences between position left and position right were not significant, p = .866, while differences between motion high and motion low were significant, p = .003. This suggested that wall high stimuli had effects only on motion.

Both sets of findings suggested two strong links between human behaviors, namely:

- Left Static: By analyzing the means plot depicted in figure 4.37 it is possible to see that motion low and position left – a human not moving and staying in the left part of the space – were strongly linked. However, this link seemed to have been disrupted during wall high stimulations. Contrast analysis showed that under wall high stimulations motion low and position left were significantly different, p = .050.

- Right Moving: Position right and motion high were also strongly related – humans moving when in the right part of the space. Although an apparent disruption occurred during wall high stimulations, contrast analysis showed that during such a stimulus position right and motion high were not significantly different, p = .363, and thus their link remained unaffected.

Such strong links might have been the result of an important bias caused by the Emotion Selector device. The device was placed in the left area of the space, and notes taken from the experimenter's observations show that experients who chose to interact with the device generally remained static, causing an interaction effect between the keyboard position and human position.

Only one significant behavioral effect was achieved by the various stimuli: a wall low stimulus resulted in subjects remaining in the left part of the space. A strong link found between subjects being in the left area of the space and an increase of low motion – remaining static – has been attributed to the location of the Emotion Selector device.
It has been hypothesized that during this stimulus subjects were concentrating on updating their emotion, and this affected their movement patterns.

Another relevant effect on motion was achieved by floor low, floor high and wall low stimulations, which caused humans to move more than during a wall high stimulus. In other words, wall high stimulations resulted in human subjects moving less. Even if this behavior was not statistically significant, it is noteworthy. A strong disruption of the link between motion low and position left[17] caused by wall high stimuli suggests that said stimulus caused humans to move less and stay in the right part of the space – unable to interact with the Emotion Selector. Therefore, this stimulus has a non-significant effect strictly on motion.

By analyzing each spatial stimulus individually it was found that floor low stimulations did not have significantly different effects on human behavior, i.e. there was no apparent behavioral pattern. However, floor high and wall high stimulations had a significant effect only on motion behaviors, while wall low stimuli had significantly different effects on both motion and position.

These results seemed to be aligned with the previous findings on the number of actions that correlate to one, and only one, stimulus. In said logical analysis we had found that the maximum mean of correlations was 1.8; in the present analysis we have found that only one unique correlation between stimulus and behavior exists within our SENA prototype (motion low in the left area of the space due to wall low stimulation), while a possible behavior seems to have almost evolved (reduced motion in the right area of the space due to wall high stimulation).

Behaviors related to the non-interactive treatment and the interactive treatments

An analysis of the specific human actions affected by the interactive treatments was then performed.
Fast motion – most of the time a result of a change of position and a will to interact with or control the environment – was more frequent in the interactive groups (figure 4.38).

[17] Result of the Emotion Selector device.

Figure 4.38: Means of total count of overall motion high appearances.

Although the effect was not overall significantly different, F(3,16) = 1.489, p = .255, between groups, contrast analyses showed that the human group had an almost significant effect against the non-interactive control group, p = .055. No significant difference from the non-interactive control group was found for either the interactive control group, p = .455, or the interactive A.I. group, p = .256. Further contrast analyses were performed between the interactive groups. There was no statistically significant difference between the interactive control group and either the interactive A.I. group, p = .685, or the interactive human group, p = .210.

These findings suggested that although overall motion was observed to be higher in interactive groups, the effect was not statistically different from a non-interactive treatment. Moreover, interactive treatments did not differ from the interactive control group, proving that no significant interaction – as measured by motion rate – can be achieved through a-priori knowledge of the system.

No correlations between human position and interactivity were found (figure 4.39). A human being in the left part of the space did not seem to be significantly affected, F(3,16) = .621, p = .612, by the interactivity treatments applied.
Parallel to this result, human position in the right part of the space did not seem to be significantly affected, F(3,16) = .621, p = .612, by the interactive treatments tested. In other words, the interactivity of the environment did not have an effect on the location that human participants would choose.

Figure 4.39: Means of total count of overall position right appearances.

It was noticed that the level of interactivity – high motion counts – related to each stimulus seemed to be affected by the interactive treatments. Analyzing the data showed that fast motion as the result of a wall high stimulus (figure 4.40) was significantly different, F(3,16) = 4.944, p = .013, between the non-interactive and interactive treatment groups. Contrast analyses, however, showed that, compared to the non-interactive control group, the only significantly different effect was achieved by the interactive human group, p = .002. Further contrast analysis performed within interactive groups showed that, compared to the interactive control group, the interactive A.I. group was not significantly different, p = .620, while the interactive human group was significantly different, p = .031. This proved that within interactive groups the belief of interacting with a human being affected the interactive process – depicted by higher motion rates.

Interactivity – fast motion – as the result of a floor high stimulus (figure 4.40) seemed to have been affected by the interactive treatments; however, statistical analysis showed that the groups were not statistically different, F(3,16) = 2.632, p = .086, from each other. Contrast analysis showed that only the interactive human group achieved a significant effect, p = .014, in comparison to the non-interactive control group.
Further contrast analysis done within interactive groups showed that both the interactive A.I. group, p = .962, and the interactive human group, p = .291, were not statistically different from the interactive control group. This suggested that no difference in interactivity – motion rate – was achieved through a-priori differences within interactive groups.

Low values in both floor and wall forms had unpredictable effects (figure 4.41). The difference between all non-interactive and interactive groups in high motion due to wall low stimulations was not significant, F(3,16) = 1.174, p = .351. High motion due to floor low stimulations was also not significantly different, F(3,16) = .97, p = .897, between the non-interactive and interactive treatment groups. Analysis of contrasts showed that no group had a significant difference with the non-interactive group, and contrasts between interactive groups showed that no group was significantly different from the interactive control group.

Figure 4.40: Means of high motion related to high valued stimuli.

Figure 4.41: Means of high motion related to low valued stimuli.

Effects of a-priori beliefs of an interactive system

The results in the previous section already show that the a-priori beliefs that users may have of an interactive spatial environment can affect their interaction with it. Recapitulating the results from the previous sub-section, we can confidently assert that the belief of interacting with a human significantly affects the will to interact and has unpredictable effects when low stimulations are provided. Further analyses of questionnaire responses complement these findings (figure 4.42).
Figure 4.42: Effects of a-priori beliefs of an interactive system. [Agreement with the Beauty, Pleasantness, Comfort and Involvement statements per group.]

Beauty: A marked difference (figure 4.42) in the aesthetic perception of the space was found to depend on previous beliefs about the system. Analysis of users' agreement to the statement "My experience today was beautiful" showed that there existed a significant difference, F(3,16) = 3.690, p = .034, between the interactive and non-interactive treatment groups. However, contrast analysis between all groups showed that the only significant difference was achieved by the A.I. group, which was significantly different from both the interactive control group, p = .018, and the non-interactive control group, p = .008. No other significant differences between groups were found.

Although test subjects in the interactive A.I. group seem to have found their experience more beautiful than those in the interactive human group, no statistical difference, p = .150, between them was found; furthermore, the interactive human group was not significantly different from the interactive control group, p = .274. In other words, subjects in both interactive (human and A.I.) groups found their experience significantly more beautiful than those in the non-interactive control group, but people who believed they were interacting with a human did not have a more beautiful experience than those in the interactive control group.

Pleasantness: A significant difference (figure 4.42), F(3,16) = 3.269, p = .049, between all non-interactive and interactive groups was found in the users' agreement to the statement "My experience today was pleasant". Contrast analysis showed that the interactive A.I. group was the only one significantly different, p = .011, from the non-interactive control group. Furthermore, a significant difference, p = .023, was found between the interactive A.I. group and the interactive human group.
Since the latter was not different, p = .724, from the non-interactive control group, we can confidently assert that a pleasant experience was only had by subjects who believed they were interacting with an artificial intelligence – a machine. The fact that test subjects in the interactive human group did not have a pleasant experience proved, by qualitative analysis, to be a result of a lack of understanding of the interactive process.

Comfort: A significant difference (figure 4.42), F(3,16) = 5.754, p = .007, between treatment groups was found in participants' agreement to the statement "I felt comfortable interacting through this system". This time, however, the interactive A.I. group was not significantly different, p = .653, from the non-interactive control group. Furthermore, the interactive control group scored lower, but not significantly so, p = .085, than the non-interactive control group, while the interactive human group scored significantly lower than the non-interactive control group, p = .005. This suggested that interactivity did not make a spatial environment more comfortable than a non-interactive one. On the contrary, it risked becoming significantly less comfortable when users felt a loss of control. Knowledge of the interactive controls, and of the system itself, did not make the system more comfortable.

Involvement: Although a difference in involvement between groups was noted in some of the questionnaires and interviews (figure 4.42), there was no significant effect, F(3,16) = .722, p = .553, between the interactive and non-interactive treatment groups. Users from the various treatment groups did not agree differently to the statement "I felt involved in the experiment". Contrast analysis did not show any particular difference between groups. Involvement was not promoted by interactive environments any more than by static non-interactive environments.
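The F statistics reported throughout these analyses (e.g. F(3,16) for four treatment groups of five subjects each) come from the completely randomized one-factor ANOVA design described earlier. The mechanics can be sketched by hand in Python; this is an illustration only, the helper is not the analysis software actually used, and the group scores below are invented.

```python
from statistics import mean

def one_way_anova(*groups):
    """Compute the F ratio and degrees of freedom for a one-way ANOVA."""
    observations = [x for g in groups for x in g]
    grand_mean = mean(observations)
    # Between-group sum of squares: spread of group means around the grand mean.
    ss_between = sum(len(g) * (mean(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares: spread of observations around their group mean.
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    df_between = len(groups) - 1
    df_within = len(observations) - len(groups)
    f_ratio = (ss_between / df_between) / (ss_within / df_within)
    return f_ratio, df_between, df_within

# Four treatment groups of five invented agreement scores each.
control   = [2, 3, 2, 3, 2]
i_control = [3, 3, 4, 3, 3]
i_ai      = [4, 5, 4, 5, 4]
i_human   = [3, 4, 3, 4, 3]

f, df_b, df_w = one_way_anova(control, i_control, i_ai, i_human)
print(f"F({df_b},{df_w}) = {f:.3f}")  # degrees of freedom match the F(3,16) reports
```

The planned contrasts reported alongside each omnibus F would then compare specific group pairs; in practice a statistics package (e.g. SPSS or scipy.stats) computes both the F ratio and the contrast p-values.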
Communication ability of an interactive spatial environment

All interactive groups – control, A.I. and human – were not significantly different, F(2,12) = 0.338, p = .720, in decoding a message through space (figure 4.43). Contrast analysis showed that even if the A.I. group appeared to have performed better, it was not significantly different from the interactive control group, p = .427, or the interactive human group, p = .688.

Figure 4.43: Means of accurately decoded messages.

The interactive A.I. group had a mean of 7.8 responses out of 16 (48% accuracy), followed by the interactive human group with a mean of 7 responses out of 16 (43% accuracy), a small improvement over the interactive control group, which achieved a mean of 6.2 responses out of 16 (38% accuracy). These results are in line with Smith and MacLean's [66] research, where the accuracy achieved ranged from 48.3 to 59.5 percent.

Analyzing the questionnaire (figure 4.44), a significant difference, F(3,16) = 3.493, p = .040, was found between non-interactive and interactive treatment groups. However, contrast analyses showed that the only statistically significant difference was between the interactive human group and the interactive A.I. group, p = .006. Compared to the interactive control group, neither the interactive A.I. group, p = .129, nor the interactive human group, p = .129, was significantly different.

A similar, but not statistically significant, F(3,16) = 2.426, p = .103, difference was found between interactive and non-interactive treatment groups in the ease of decoding a message (figure 4.44). Contrast analysis showed that the only statistically significant difference between interactive groups was between the interactive A.I. and the interactive human group, p = .033. Compared to the interactive control group, neither the interactive A.I. group, p = .063, nor the interactive human group, p = .743, was significantly different.
Figure 4.44: Agreement on message transmission and decoding ease between the non-interactive control, interactive control, interactive A.I. and interactive human groups.

Qualitative analysis

Non-interactive control group: Two users did not think that the system could be used to communicate intimate feelings; three users thought that the system could, but would need improvements. One subject commented that feelings would be dependent on participants' previous experiences, introducing an important bias into communication. Two subjects moved when in distress or discomfort, while three chose a comfortable position near the keyboard; one subject even commented choosing this location "mostly because it was something [she] could interact with." Subjects chose locations where they could experience the space around them better: a subject commented choosing the center of the space to "have a clear view all around", or the corner because he "realized [he] could see more." Only four users found the space aesthetically pleasant, even if their questionnaire responses showed they had a pleasant experience. One subject described the space as "nothing special" and "neutral". Two subjects felt connected to the space. Three subjects did not feel any emotional connection with the space; one even considered "pouring [his] emotions" into the keyboard but did not feel that a connection existed between the controller and these emotions. No subjects were able to understand the rules of the system; although one subject guessed it had something to do with movement, he could not place it as a rule of the system. It is noticeable that two users would use the system as a very intimate companion; most of them proposed emotion-related alternatives, one proposed an interrogation room and another proposed using it as a "journal, so that [he could know] what spaces best suit [his] moods". One subject
One subject would use the system for relaxation and two would not use it on a daily basis. Further observation by the experimenter shows that users were quite aware of their emotional changes at the beginning of the experiment. After approximately 10 minutes users would not use the emotional keyboard as often. One user, however, updated his emotion quite frequently.

Interactive control group

Only one subject believed that the system could communicate intimate feelings. A subject commented not knowing if the system could communicate emotions but that it could “certainly make [your loved ones] feel uncomfortable/comfortable”. Most of the subjects chose their actions in space in order to be comfortable. A subject found “light shinning down [...] annoying” and when he “wanted to be alone [he] went to the corners/sides”. Only one subject found the space to be pleasant; the rest of the group agreed that the space was not pleasant or only slightly pleasant. One of these subjects commented that it was not “particularly pleasant”, while another found it “austere and plain”. No subjects developed an understanding of the rules of the system. One subject commented that he “did not feel [he] had much control over [the system]”, another subject guessed “body movements”, and a third subject believed the space was controlled by the experimenter. Furthermore, two out of five subjects were very annoyed by “sudden” changes in space and high illumination stimuli. They both resolved this issue by moving to a more comfortable position. One subject would use this system to communicate with family, friends or employers, and another subject would use it as a relaxation or meditation space. Another subject answered that she didn't “see how [the system] should translate to an every-day thing”. Observations from the experimenter indicated that subjects became frustrated when changes in space occurred in what they believed was a randomly triggered manner.
Users felt an upsetting and strong lack of control, generally reflected in marked changes in position, motion or emotional state. Only one out of five subjects did not show strong motion or position changes when a space change was triggered; instead, this subject updated his emotion on almost every spatial change.

Interactive A.I. group

Two users did not think that the system could be used to communicate emotions. One subject commented that “you need more complicated spaces for communicating that”, while a second added that “the projected feelings aren't refined enough for [her] to do so.” One subject said that the system could “somewhat” be used for this purpose, while another experient added that it could only if the interlocutor knew her “well enough”. Only one subject thought that the system could “definitely” be used to communicate intimate feelings. Three subjects remarked that their position and motion in the space were determined by the location of the keyboard used to update their emotion. One of these subjects remembered choosing “somewhere near the keyboard [emotion controller] and [sitting] in a position that [she] felt comfortable”. Two other users commented that they moved randomly, one of them writing that he moved “just once every few minutes, [without] particular pattern.” Four out of five subjects found the prototype aesthetically pleasant. Three subjects out of five felt emotionally connected to the space, of whom one commented that “the lighting definitely affected [her] emotions”. All the subjects saw a strong connection between their emotions and their environment, leading two subjects to believe that the space could be controlled by updating their emotion several times.
One subject, nevertheless, claimed to know the rules of the system through movement and added that he “could predict the next change, but could not control it as [he] wanted it to be.” Four out of five subjects would use the system on a daily basis, to help them relax, daydream or control their stress. Observations from the experimenter show that all subjects updated their emotion carefully during the whole experiment. These updates generally took place when a change in space occurred. Subjects tried hard at the beginning of the experiment to control the environment by moving to different locations in space. After not being able to find a clear correlation between their actions and the space changes, most subjects lost interest in this alternative and remained static until a further change in space occurred. Changes in stimuli were generally triggered by minute changes in the subjects' state. Finally, users in this group did not seem more active than those in other interactive groups. After a while most users of this group remained static in one position. The occasional changes in space triggered minor position adjustments and the search for more comfortable positions, but did not result in radically strong reactions.

Interactive human group

Three users out of five thought that the system could be used to communicate intimate feelings; they all agreed that such feelings would have to be very intimate and general – not specific. One subject commented that this would only happen if a language was “established through time and explicitly”. One subject was “curious whether the buttons [she] pressed ... [regarding her] feelings were also implicitly affecting the other person's space”. When asked about the aesthetics of the space, two subjects commented that bright floor stimuli made them feel under the spotlight and were sometimes annoying. One subject even commented that when stimulation came from the ceiling “it felt wrong to sit there”.
Low stimulations were seen as calming or as rising bubbles. Three out of five subjects believed that the space was aesthetically pleasant. Only two subjects remarked that their movements were made according to what they “felt” at the moment. One subject commented that “it didn't seem like [his] actions were affecting [his space] much, so [he] just sat down for most of [the experiment]”. Another subject agreed that she “was more reacting to the space than trying to convey a message to the other person.” One subject out of five strongly believed that the space was controlled by the hidden experimenter. Only one subject out of five was aware that the changes in space were controlled by motion. Four subjects did not find a connection between the stimuli and their actions; of them, one commented that changes “seemed random”. When asked if they would use the system on a daily basis, only one subject did not find an application for the system. The rest of the subjects agreed that it could be used for relaxation purposes. A coffee shop, an aquarium or a yoga room were among the participants' proposals. Observations from the experimenter show that users at times tried to communicate with the “hidden participant”. Only one subject tried passionately to communicate with said hidden subject. Although all experients strongly believed that a person was interacting with them, they could not create a mental connection between the space changes and the other person. A user commented that it was “hard to decide if the space was controlled by a person or by a machine”. All subjects agreed that sometimes the changes seemed triggered by themselves and not by a human being. In most cases, exploration of the space took place at the beginning of the experiment, and later explorations were rare.
One subject even decided to sleep during the experiment.

Chapter 5

Conclusions

The present research has proposed a model for the conceptualization, understanding and design of Cyborg Environments. This conceptual model is formed of three parts that describe a system of cyborg nature based on communication and interactivity:

Cyborg Space: A cyborg space is defined by its embodiment, enclosure and alteration capabilities.

Cyborg Communication: A cyborg space can communicate with other cyborgs by controlling the relationship between code – stimuli that are produced by its activity – and the noise that affects it.

Cyborg System: A system of cyborg spaces and cyborg humans is an autopoietic system of interactivity based on a citizenship of action.

A series of experiments has been conducted to collect scientific data to test this theoretical model. The sections that follow recapitulate the findings of these investigations and discuss them within a more global framework.

5.1 Recapitulation of findings

The experiments conducted during this research were designed to test specific parts of the conceptual model for Cyborg Environments. Table 5.1 presents the relationship between each of these experiments and the hypotheses that arise from said model, previously presented in table 1.1.

The findings of these experiments suggest that the model is successful in describing a new spatiality based on intimate interaction between human beings and architectural spaces enhanced for interaction and spatial perception. The sections below present a summary of the results belonging to each topic of the presented model.
Model Part             Model sub-part         Experiments
Cyborg Space           Embodiment             Training Wheel Study
                       Enclosure type         Effects of visual stimuli
                       Alteration type        Effects of visual stimuli;
                                              Effects of directional light
Cyborg Communication   Message transmission   SENA prototype
Cyborg Interactions    Cyborg citizenship     SENA prototype
                       Autopoiesis            SENA prototype

Table 5.1: Theoretical model and experiments conducted.

Limitations

The experiments conducted by the present research were performed under tight time and budget constraints. Software implementations were deployed without proper testing, spatial setups were often done in shared facilities, and hardware, e.g. projectors, was often not calibrated between different experiments. This results in reduced replicability, which can be addressed in future explorations by paying attention to these details.

Additionally, due to time constraints, the number of participants within each experiment was low. This considerably reduced the statistical power of the analyses performed and had a detrimental effect on the findings, which should be interpreted with caution. Larger sample sizes should correct this limitation and allow for more in-depth analyses of the relationship between cyborg spaces and humans.

Embodiment

An analysis of the kinesthetic actions taken by human beings during their interaction with mutating built space suggested that motion patterns can be modeled and predicted with a high degree of accuracy.
A pilot study of interactive spatial perception offered insight into the fact that the behavioral activity of human beings within interactive environments depends on attentional variations to changing stimuli, private and public distinctions within the space, appropriation through what was defined as a “home location”, and gender.

The data collected by the Training Wheel Study suggests that a complex process of embodiment arises between human beings and interactive environments. A theory of control based on the manipulation of reflective consciousness showed that by manipulating the spatial relationships of a system's control it is possible to evoke controllable kinesthetic movements in human subjects. This space-human embodiment is the result of a tight connection between human cognitive models of space and spatially designed interactive controls.

Enclosure type

A study on the spatial effects of different characteristics of peripheral visual stimuli showed that spatial size perception can be evoked in a predictable manner by altering the value component of an HSV – Hue, Saturation, Value – color space. The rate of change in perceived spatial size, related to changes in value, can be computed by the function:

S = 0.016v + 1.3

where S = spatial size measured on a scale from 1 to 6 (tiny, very small, small, large, extra-large and huge), and v = color intensity value.

Although these results are part of initial findings that need deeper understanding, they have proved that it is possible to conceive an enclosed space changing in size purely through visual stimulation.

Alteration type

The findings of the correlation between visual stimulus intensity – Value in an HSV color space – and humanly perceived spatial size lead the present research to consider the possibility of evoking a spatial experience through non-stereoscopic stimuli without spatial visual cues. In addition, they suggest that by altering the characteristics of the visual stimulus it is possible to alter such spatial perception in a predictable manner. These findings prove that a spatial liquid – the mental representation of the space – can be altered without the physical modification of spatial membranes – physical limits, e.g. walls. This corroborates the part of the proposed conceptual model that conceives perceived space as a spatial liquid independent of the objects that enclose such perceived spatiality.

Further explorations on the spatial effects of directional light intensity showed that the alteration of spatial membranes (18) enclosing a space results in human behavioral changes – motion rate and attention – dependent on the cognitive process of space perception. Perspective and texture cues proved to give rise to strong spatial perceptions, overriding the previous findings of spatial size alteration through the manipulation of stimulus intensity. However, directional stimulation proved to affect head rotations and directional kinesthetic motion linked to bodily assessment of the cognitive spatial map created by space-perceiving human beings. This strong dependency between the spatial membrane's physical definition and the humanly created spatial model suggests that alterations of the perceived space can be performed by modifying the former.

(18) Ego-centric – stereopsis and visual cues – characteristics of space perception that are dependent on objects that evoke stimuli of spatial nature.

The results obtained in both experiments prove that space strongly depends on the cognitive process of stimuli perception, mental representation and kinesthetic assessment undergone by humans perceiving space.
Through an understanding of this process and appropriate stimuli manipulation it is possible to evoke spatial alterations according to the theorized model of space alteration.

Message transmission

The proposed conceptual model of cyborg message transmission states that space cyborgs can communicate with cyborgs of both human and spatial nature. The code for such transmission is theorized to exist outside the inter-active process between cyborgs and in constant opposition to the noise of the communication channel.

Experimental data gathered through the SENA prototype on message encoding – the proper arrangement of stimuli – suggested that it is possible for a space cyborg to learn a code, i.e. the semantic relationships between chosen stimuli, that lies outside the realm of its interactive definition. Additional analysis of message transmission accuracy proved that simple space cyborgs have the same communication performance as human beings using a constrained communication channel of only four degrees of freedom – stimuli.

These findings validate the proposed conceptual model of cyborg communication capabilities and allow the theorization of Cyborg Environments as systems capable of social interaction.

Cyborg citizenship and autopoiesis

It was hypothesized that a system composed of two space-perceiving cyborgs would be of autopoietic nature. The SENA prototype was conceived and designed as an autopoietic space cyborg belonging to an autopoietic system of human-space interaction. Analysis of the correlations between space stimuli and human behaviors showed that a strong link existed between the two, forming enclosed self-regulating networks of state relationships. This proved that Cyborg Environments can be designed as systems of co-organizing coupled relationships that, according to Maturana, are defined as social.

The effects of a-priori knowledge of a Cyborg Space on interactivity were analyzed using the SENA prototype.
Although previously gained beliefs about a Space Cyborg had significant effects on interactivity measures, e.g. motion, they played no apparent role in the correlation between specific space stimuli and human actions. The analysis proved that even if said knowledge had significant effects on perceptions of the beauty and pleasantness of the interactive process, it had no effect on the comfort and involvement in such interaction. This suggested that information about a Space Cyborg can only affect the subjective perception of the interaction and its rate, without affecting the nature, i.e. the correlation between entities, of the autopoietic system.

Furthermore, the cyborg citizenship model, based on an entity's selective access to action, proved to provide a successful methodology for the creation of autopoietic systems formed of comparable, i.e. equally action-capable, entities. Future developments in artificial intelligence should make possible the existence of super-agent to super-agent – human – autopoietic systems of interaction.

5.2 Implications

The findings of the present study attend to a small part of the definitions provided by the theorized models for Cyborg Environments. Further theoretical investigations and experiments should be performed in order to assess such conceptualization. Nevertheless, the present explorations have already uncovered important implications for Architecture, Human Computer Interaction and Cognitive Science.

Architecture

Ubiquitous computing, communication networks and artificial intelligence have radically affected how humans use and perceive their architecturally constructed spaces. Architectural thought needs a new paradigm coherent with this reality in order to conceptualize, design and construct built environments that reflect it.
The SPACES research has demonstrated that a new paradigm for understanding technologically enhanced inhabitable spaces is possible.

Space has proven to be a cognitive process dependent on stimuli that can be delivered synthetically to human perceivers. By performing simple alterations of visual stimuli it is possible to evoke controllable spatial perceptions that can be considered architectural. Furthermore, predictable behavioral patterns that arise from this cognitive process can be used to construct action-based systems of autopoietic and social nature prone to communication. It is possible to conceive an approach to architectural design based on the conscious and creative manipulation of the components of this cognitive process, and thus hypothesize an architectural practice of Spatial Experience creation. The architectures that can be created through this process have demonstrated a high level of embodiment, spatiality and sociality rarely experienced by human beings in static buildings.

Architectural thought will have to critically include the possibility of these non-objectual, social and embodying spatialities in its discourse in order to understand and attend to their inevitable appearance in our built environment. The present investigations should raise questions regarding the understanding of space as an object-dependent phenomenon of static nature and point towards new paradigms of space creation based on perception and action. Furthermore, these new paradigms can be addressed with moderately complex, contemporary technology easily available to all researchers and institutions. Most of the experiments implemented in these investigations were done using hardware and software that is readily available and can be adapted to the built environment without excessive and unnecessary costs.

Finally, technology should be a promoter of architectural thought, and not an obstacle to it.
Technological advances should be used in the investigation of spatial paradigms in a scientific manner. Research should be done – i.e. user testing, cognitive measurements, simulations, etc. – before blindly applying new technologies to the construction and design of spaces enhanced with technology.

Human computer interaction

Human Computer Interaction studies are already being affected by easily available ubiquitous computing. Computers are no longer a commodity, but part of our daily environment. Interaction with machines has become invisible, and users often find themselves using the built environment to interact with information.

Work has previously been done to understand how humans interact with each other and with their technologically built environment. Prototypes using architectural features – interactive walls, tables, built and virtual environments – are becoming more common in both academia and industry. Nevertheless, the behavioral and interactive effects of these implementations are, most of the time, unknown.

The present study has demonstrated that – at least for visual information – by defining information as a collection of simplified stimuli, i.e. light properties, it is possible to measure and model their effect on the interactive process that arises between humans and their technologized environment.

The computing machines of the future will not only have to be designed from a human-centered usability point of view, but as part of autopoietic – self-regulating – and sociable systems of co-adaptation with the capability of becoming semantic. Furthermore, closer attention should be paid to the cognitive spatial perception of environment-like solutions for data manipulation and presentation, and to the a-priori knowledge that users have of their environment.

Finally, the present research has proved that strong behavioral effects arise from spatial stimulation.
These could be counterproductive, if not properly predicted, in systems where kinesthetic movement or attention becomes a bias in the interaction with information, e.g. large screen displays.

Space cognition

According to the various areas of study that make up the field of space cognition, space is the result of the perceptive, cognitive and action processes undergone by human beings. A large collection of theories has focused on modeling various parts of these processes independently, but no compelling unifying theory has been able to present them as parts of the same event of space perception.

The creation of a global theory of space is outside the scope of the present research. However, the present experiments support the idea that various perceptual, cognitive and activity processes interact among themselves to result in what can be called spatial perception. There is no unique process – or serial collection of processes – that results in said perception of space, but a network of interacting and non-fixed processes used by human beings to inhabit the world.

This interactive collection of brain events can be modeled and understood in a global framework if the system is understood as a network of relationships, an autopoietic system, between the processes that participate in said spatial perception. Each process within this network is a self-regulating entity that might or might not be related to other processes within spatial cognition. Nevertheless, when the system becomes active, a structural coupling inevitably takes place and all parts become coherently dependent, resulting in an articulate process of spatial perception of the world.

Using this conceptualization of space cognition, the SPACES research has hypothesized the existence of entities with access to spatial perception that are neither biological nor within space (19).
A successful implementation of a space with spatial perception by the present research has proved that spatial cognition is independent of the intrinsic characteristics of the world, and depends on the relationships between them and the cognitive processes that arise from perceiving them, mapping them and acting upon them. This suggests that if all properties of the world were suddenly changed – inverted or exchanged with different ones – human beings would continue to perceive a spatial experience of their surroundings.

(19) A space with spatial perception cannot be considered to be within space if space is perceived as out there.

5.3 Future perspectives

As our environment becomes more technologized it is possible to visualize a world formed of Cyborg Environments. It is imperative to understand the capabilities of these entities and their implications for the human life taking place within them. Of the various paths that have to be taken in order to measure and investigate these plausible futures, we must first explore the societal possibilities of Cyborg Environments, their implications for Architecture, and the theorization of a city of networked space and human cyborgs, i.e. an urban Cyborg Environment.

Cyborg Environments have proven to be self-regulating systems that can be conceived as social structures. The present investigation has explored only the foundation of this fact, but a deeper understanding of the inter-relationships that arise between humans and Space Cyborgs is needed. This knowledge should give birth to a new conceptualization of built space as a self-regulating system of stimuli and actions. Finally, because Cyborg Environments can be thought of as autopoietic entities within an autopoietic system, it is important to conceptualize, investigate and model an urban reality formed of Cyborg Environments.

5.4 Conclusions

The SPACES research has explored the initial issues of what could be labeled as architectural cybernetics.
The experiments presented have explored the relationship between cyborgized humans and cyborgized spaces that form inter-active systems. A theoretical model of space has been created based on space liquidity, human perception and human-computer inter-actions. Furthermore, the systems formed by various cyborgs of human or spatial nature have proven to be autopoietic and to lead to social and communication-enabled entities. New questions have arisen from these findings. How can emotionally empowered space cyborgs socialize with humans? What are the semantics of a linguistic structure based on perception and embodiment? What are the consequences of a society formed of human and space cyborgs? What are the rights and obligations of space-encoded agents within our society? How will cities be shaped when both their inhabitants and building blocks can communicate autopoietically between them? Finding answers to these questions is the objective of future explorations and the ongoing SPACES research.

Bibliography

[1] A. Bevilacqua, L. Di Stefano, and A. Lanza. An efficient change detection algorithm based on a statistical nonparametric camera noise model. In Image Processing, 2004. ICIP '04. 2004 International Conference on, pages 2347–2350, 2004.

[2] Timothy W. Bickmore and Rosalind W. Picard. Establishing and maintaining long-term human-computer relationships. ACM Trans. Comput.-Hum. Interact., 12(2):293–327, 2005.

[3] Ole Bouman. Architecture, liquid, gas. Architectural Design, 75(1):14–22, Jan 2005.

[4] Lucy Bullivant. 4dspace: interactive architecture - introduction. Architectural Design, 75(1):5–7, Jan 2005.

[5] Lucy Bullivant. Induction house: aether architecture - Adam Somlai-Fischer. Architectural Design, 75(1):97–98, Jan 2005.

[6] Lucy Bullivant. Jason Bruges: light and space explorer. Architectural Design, 75(1):79–81, Jan 2005.

[7] Lucy Bullivant. The listening post: Ben Rubin and Mark Hansen. Architectural Design, 75(1):91–93, Jan 2005.

[8] Lucy Bullivant.
Media house project: the house is the computer, the structure is the network. Architectural Design, 75(1):51–53, Jan 2005.

[9] Lucy Bullivant. Sky ear, Usman Haque. Architectural Design, 75(1):8–11, Jan 2005.

[10] Paul Allen Carter. The creation of tomorrow: fifty years of magazine science fiction. Columbia University Press, New York, 1977.

[11] Hui Chen and Hanqiu Sun. Real-time haptic sculpting in virtual volume space. In VRST '02: Proceedings of the ACM Symposium on Virtual Reality Software and Technology, pages 81–88, New York, NY, USA, 2002. ACM Press.

[12] Manfred E. Clynes and Nathan S. Kline. Cyborgs and space. Astronautics, September 1960.

[13] Dale Purves and R. Beau Lotto. Why we see what we do: an empirical theory of vision. Sinauer Associates, Sunderland, Mass., 2003.

[14] S. Dehaene, J.-P. Changeux, L. Naccache, J. Sackur, and C. Sergent. Conscious, preconscious, and subliminal processing: a testable taxonomy. Trends in Cognitive Sciences, 2006.

[15] Elizabeth Diller and Ricardo Scofidio. Flesh: architectural probes. With Georges Teyssot. Princeton Architectural Press, New York, 1994.

[16] Daniel Dinello. Technophobia!: science fiction visions of posthuman technology. University of Texas Press, Austin, 2005.

[17] Paul A. Dudchenko. An overview of the tasks used to test working memory in rodents. Neuroscience and Biobehavioral Reviews, 28(7):699–709, 2004.

[18] Gary L. Allen, editor. Applied spatial cognition: from research to cognitive technology. Lawrence Erlbaum Associates, Mahwah, N.J., 2007.

[19] Ernest Edmonds. http://www.ernestedmonds.com.

[20] Ernest Edmonds. On creative engagement with interactive art. In GRAPHITE '05: Proceedings of the 3rd International Conference on Computer Graphics and Interactive Techniques in Australasia and South East Asia, pages 297–297, New York, NY, USA, 2005. ACM.

[21] Ernest Edmonds and Mark Fell. Broadway one.
In SIGGRAPH '04: ACM SIGGRAPH 2004 Art Gallery, page 30, New York, NY, USA, 2004. ACM.

[22] J. T. Enright. The eye, the brain, and the size of the moon: toward a unified oculomotor hypothesis for the moon illusion. In The moon illusion, pages 59–121. Erlbaum, Hillsdale, NJ, 1989.

[23] Eva Eriksson, Thomas Riisgaard Hansen, and Andreas Lykke-Olesen. Reclaiming public space: designing for public interaction with private devices. In TEI '07: Proceedings of the 1st International Conference on Tangible and Embedded Interaction, pages 31–38, New York, NY, USA, 2007. ACM Press.

[24] Ernest Edmonds, Lizzie Muller, and Matthew Connell. On creative engagement. Visual Communication, 5(307), 2006.

[25] Sidney Fels. Designing intimate experiences. In IUI '04: Proceedings of the 9th International Conference on Intelligent User Interfaces, pages 2–3, New York, NY, USA, 2004. ACM.

[26] N. Foreman and R. Gillet. A handbook of spatial research paradigms and methodologies. Psychology Press, East Sussex, U.K., 1997.

[27] James Jerome Gibson. The perception of the visual world. Greenwood Press, Westport, Conn., 1974. First published 1950.

[28] Chris Hables Gray. Cyborg citizen: politics in the posthuman age. Routledge, New York, 2001.

[29] Donna J. Haraway. A cyborg manifesto. In David M. Kaplan, editor, Readings in the philosophy of technology. Rowman & Littlefield Publishers, Lanham, Md., 2004.

[30] Robert M. Harnish. Minds, brains, computers: an historical introduction to the foundations of cognitive science. Blackwell Publishers, Malden, MA; Oxford, 2002.

[31] Andrew Hieronymi and Togo Kida. Move. In SIGGRAPH '06: ACM SIGGRAPH 2006 Emerging Technologies, page 23, New York, NY, USA, 2006. ACM.

[32] Stephen Hirtle and Molly Sorrows. Navigation in electronic environments. In Gary L. Allen, editor, Applied spatial cognition: from research to cognitive technology. Lawrence Erlbaum Associates, Mahwah, N.J., 2007.

[33] Jeffrey Huang and Muriel Waldvogel.
The swisshouse: an inhabitable interface for connecting nations. In DIS '04: Proceedings of the 5th Conference on Designing Interactive Systems, pages 195–204, New York, NY, USA, 2004. ACM.

[34] Humberto Maturana and Francisco Varela. The tree of knowledge. Shambhala Publications, Inc., 1992.

[35] Humberto R. Maturana and Francisco J. Varela. Autopoiesis and cognition: the realization of the living. D. Reidel Pub. Co., Dordrecht, Holland; Boston, 1980.

[36] Hiroshi Ishii. Tangible bits: designing the seamless interface between people, bits, and atoms. In IUI '03: Proceedings of the 8th International Conference on Intelligent User Interfaces, pages 3–3, New York, NY, USA, 2003. ACM.

[37] Hiroshi Ishii, Craig Wisneski, Scott Brave, Andrew Dahley, Matt Gorbet, Brygg Ullmer, and Paul Yarin. ambientROOM: integrating ambient media with architectural space. In CHI '98: CHI 98 Conference Summary on Human Factors in Computing Systems, pages 173–174, New York, NY, USA, 1998. ACM.

[38] Hiroshi Ishii, Craig Wisneski, Scott Brave, Andrew Dahley, Matt Gorbet, Brygg Ullmer, and Paul Yarin. ambientROOM: integrating ambient media with architectural space. In CHI '98: CHI 98 Conference Summary on Human Factors in Computing Systems, pages 173–174, New York, NY, USA, 1998. ACM.

[39] William James. Psychology: briefer course, chapter 11, pages 166–188. Collier Books, New York, 1962. An abridgment of the author's Principles of psychology.

[40] Andruid Kerne. Interface ecology. interactions, 5(1):64, 1998.

[41] H. Kirchner and S. Thorpe. Ultra-rapid object detection with saccadic eye movements: visual processing speed revisited. Vision Research, 46:1762–1776, 2006.

[42] Drew Leder. The absent body. University of Chicago Press, Chicago, 1990.

[43] Wendy E. Mackay, Ernest Holm Svendsen, and Bjarne Horn. Who's in control?: exploring human-agent interaction in the McPie interactive theater project.
In CHI ’01: CHI ’01 extended abstracts on Human factors in com-puting systems, pages 363–364, New York, NY, USA, 2001. ACM.[44] Marshall McLuhan. Understanding media : the extensions of man. McGraw-Hill, New York, 1964.[45] B. Mikellides. Color and physiological arousal. Journal of Architectural andPlanning Research, 7(1):13–20, SPR 1990.[46] Masahiro Nakamura, Go Inaba, Jun Tamaoki, Kazuhito Shiratori, andJun’ichi Hoshino. Bubble cosmos. In SIGGRAPH ’06: ACM SIGGRAPH2006 Emerging technologies, page 3, New York, NY, USA, 2006. ACM.[47] Takuji Narumi, Atsushi Hiyama, Tomohiro Tanikawa, and Michitaka Hirose.inter-glow. In SIGGRAPH ’07: ACM SIGGRAPH 2007 posters, page 145,New York, NY, USA, 2007. ACM.134Bibliography[48] Donald Norman. Cognitive Engineering., pages 31–61. User Centered Sys-tem Design: new perspectives on human-computer interaction. L. ErlbaumAssociates, Hillsdale, N.J, 1986.[49] Donald A. Norman. The psychology of everyday things. Basic Books, NewYork, 1988.[50] Marcos Novak. Transmitting architecture: the transphysical city. fromhttp://www.ctheory.net/articles.aspx?id=76, 11/29/1996 1996.[51] Piergiorgio Odifreddi. The mathematical century: the 30 greatest problemsof the last 100 years. Princeton University Press, Princeton, N.J., 2004.[52] John O’Keefe. The hippocampal cognitive map. In Jaques Paillard, editor,Brain and Space. Oxford University Press, Oxford, 1991.[53] Mire Eithne O’Neill. Corporeal experience: a haptic way of knowing. Journalof architectural education, 55(1):3–12, Sept 2001.[54] Kas Oosterhuis. [Hypercorpi.] Hyperbodies : toward an e-motive architec-ture. Birkhauser, Basel ; Boston, 2003.[55] Christian Pongratz. [Nati con il computer. English] Natural born CAADe-signers : young American architects. Birkhauser Pub., Basel ; Boston, 2000.Other Contributor(s): Perbellini, Maria Rita. Translated from the Italian: Naticon il computer.[56] L. Pugnetti, L. Mendozzi, E. Barbieri, F. D. Rose, and E. A. Attree. 
Nervoussystem correlates of virtual reality experience. Proc. of the 1st European Con-ference on Disability, Virtual Reality and Associated Technologies. Reading,UK: The University of Reading, pages 239–246, 1996.[57] Hani Rashid. Asymptote : flux. Phaidon Press, London ; New York, NY,2002. Other Contributor(s): Couture, Lise Anne, 1959-; Variant Title: Flux,Asymptote.[58] Warren Robinnet. Technological augmentation of memory, perception, andimagination., 1991. Banff Centre for the Arts.[59] BarbaraOlasov Rothbaum, Page Anderson, Elana Zimand, Larry Hodges,Delia Lang, and Jeff Wilson. Virtual reality exposure therapy and standard (invivo) exposure therapy in the treatment of fear of flying. Behavior Therapy,37(1):80–90, 3 2006.135Bibliography[60] James A. Russell, Anna Weiss, and Gerald A. Mendelsohn. Affect grid: Asingle-item scale of pleasure and arousal. Journal of personality and socialpsychology, 57(3):493–502, 09 1989.[61] Stuart J. Russell. Artificial intelligence : a modern approach. PrenticeHall/Pearson Education, Upper Saddle River, N.J., 2003. Other Contribu-tor(s): Norvig, Peter.[62] P. Schumacher and Z. Hadid. Latent utopias : experiments within contempo-rary architecture. Springer, Wien, 2002 2002.[63] Eduard F. Sekler. Structure, construction. tectonics. In Gyorgy Kepes, editor,Structure in Art and in Science. Georges Braziler Inc., New York, 1965.[64] Michel Serres. Hermes: literature, science, philosophy. The Johns HopkinsUniversity Press, London, 1982.[65] Michel Serres. The parasite. Baltimore: Johns Hopkins University Press,1982.[66] J. Smith and K. MacLean. Communicating emotion through a haptic link:Design space and methodology. International Journal of Human-ComputerStudies, 65(4):376–387, APR 2007.[67] Inez L. Taylor and F.C. Sumner. Actual brightness and distance of individualcolors when their apparent distance is held constant. The Journal of Psychol-ogy, 19:79–85, 1945.[68] J. Tichon and J. Banks. 
Virtual reality exposure therapy: 150-degree screento desktop pc. Cyberpsychology & behavior : the impact of the Internet,multimedia and virtual reality on behavior and society, 9(4):480–489, Aug2006.[69] Mark Weiser. Some computer science issues in ubiquitous computing. Com-mun. ACM, 36(7):75–84, 1993.[70] Robert L. West, Lawrence M. Ward, and Rahul Khosla. Constrained scaling:the effect of learned psychophysical scales on idiosyncratic response bias.Perception & Psychophysics, 62(1), 2000.[71] Bob G. Witmer and Michael J. Singer. Measuring presence in virtual envi-ronments: A presence questionnaire. Presence, 7(3):225–240, 1998.[72] Philip David Zelazo. The development of conscious control in childhood.Trends in Cognitive Sciences, 8(1):12–17, 2004.136Bibliography[73] Peter Zellner. Hybrid space : new forms in digital architecture. Thames &Hudson, London, 1999.137

