Counterfeiting Daily: An Exploration of the Use of Generative Adversarial Neural Networks in the Architectural Design Process

by SEAN WALLISH
BFA, Emily Carr University of Art and Design, 2016

Submitted in partial fulfillment of the requirements for the degree of Master of Architecture in The Faculty of Applied Science

Committee Chair: Joseph Dahmen
Committee Members: Amber Frid-Jimenez, Chris Macdonald

© April 2019

Abstract

Architectural representation has changed greatly over time, from the tools it is created with to the contexts it is created in. In the digital world, the creation and consumption of architectural ideas has shifted as the prevalence of data has allowed architecture to take on new forms. It has also created an over-reliance on technologies without a proper understanding of how they function and the biases they carry with them. As artificial intelligence develops, and architectural processes are further augmented by the machine, it is important to understand how these technologies produce what they produce. The following project explores the idea of architectural representation as cultural data and the possibility for algorithms to create based on this data. The project starts by tracing the history of technology and architecture from the introduction of the pencil, to the invention of photography, to the digital world that is now so prevalent. It looks at images as a form of cultural and architectural data and explores the possibility of AI creating images. It questions what it means for an algorithm to create without knowledge of the cultural histories that the image carries. It hypothesizes the role of the architect in a world where optimization has taken over many traditional tasks, leaving the architect to work in realms that cannot be optimized, like the realm of style and the historical, cultural, ethical, and political realms that are tied to it.

Table of Contents

Abstract
Acknowledgments
List of Figures
GP1 Report
    Background and Context
    Methodology
    Results
    Analysis
    Discussion
GPII Design Project
    Experiment 2 - Elements of Architecture
    Experiment 3 - Counterfeiting Daily
Bibliography

Acknowledgments

First I would like to thank my committee members. Joe Dahmen, thanks for chairing this committee and allowing me to explore this topic. At the beginning we both had little understanding of the topic and no idea where this project would end up. You encouraged me to keep working, to let the project unfold and see where it would lead. When I needed to work out the next stage you were always there to engage and question where the project was going and the implications that different decisions would have on its outcome. Amber Frid-Jimenez, thank you for bringing an understanding of neural networks and their impacts on individuals and society to the project. In our first meeting I realized that I was no longer talking to architects about AI; I was talking to someone who knew the technology better than I did and who would force me to gain a deeper understanding. Your conversations at our committee meetings challenged me to think not just about how to use neural networks, but about how bringing them into the design process would have an impact on the individuals who would come in contact with the results being pushed by the algorithm. Chris Macdonald, thank you for trying to be a voice of reason and grounding the project in architecture.
You were never fooled by the promises of the tech and what it could be; you pushed me to question and root the project in architectural history. You also taught me to not take myself too seriously and just produce work; it doesn't matter if I think it is good or bad, because if I am trying things out I am learning and developing, and that is what matters. Thanks to the countless members of my cohort who had to listen to me drone on and on about my project; I feel like it is all I talked about for the last year. I specifically want to thank Chris Smith and Felix Lavergne: you two were always willing to go for a beer and help me work through tough thoughts about my project, or to receive text messages of me being overly excited about the results I got out of the AI. Thanks to my non-architecture friends who provided me with an outlet that allowed me to escape the madness of the UBC campus. Dave Wiebe, Louis Dombowsky, and many others, you helped me get through a challenging time in life that was being compounded by the challenges of a thesis, and I will forever be thankful for that. Thank you to my family. Mom and Dad, there is no way that this would have ever been possible without you; you provided support in whatever way I needed it. Thank you for your prayers. Thank you for our conversations, whether they were meaningful or just about our frustrations with the Oilers. It all helped. Danielle and Amy, you two are great; our group chats were always encouraging, and I am glad that we started them during this last semester so I could keep up on what is going on in your lives. Thanks for the messages and thanks for your prayers.

List of Figures

1. Palladio, Andrea. Villa Almerico (Villa Rotunda), from I quattro libri dell'architettura di Andrea Palladio
2. Vermeer, Johannes. The Kitchen Maid (The Milkmaid)
3. Ruskin, John. "Figure 4 Problem 1"
4. Ruskin, John. "Figure 25 Problem X"
5. Le Corbusier. Plan Voisin
6. Daguerre, Louis-Jacques-Mandé. View of the diorama of the Boulevard des Capucines
7. Marville, Charles. Rue au lard
8. Atget, Eugène. Parisian rooftops
9. Stieglitz, Alfred. Fountain
10. Plug-in University Node, project Elevation
11. Generator, project White Oak, Florida Perspective
12. Guggenheim Museum Bilbao
13. Frei Otto Experimenting with Soap Bubbles
14. Sagrada Familia Church
15. Screenshot of provinggrounds.io
16. Screenshot of algorithms.design
17. Neuron Diagram
18. Perceptron Diagram
19. Neural Network Diagram
20. 1024 × 1024 images generated using the CELEBA-HQ dataset
21. Generative Adversarial Neural Network
22. Heydar Aliyev Cultural Center
23. Selection From Zaha Dataset Batch Downloaded From Google #1
24. Selection From Zaha Dataset Batch Downloaded From Google #2
25. Results From Zaha-Trained GAN, Session 1, Epochs 1-30
27. Enlarged Selection From Results #1
28. Enlarged Selection From Results #2
29. Selection of Windows Dataset Batch Downloaded from Google/Flickr
30. Selection of Doors Dataset Batch Downloaded from Google/Flickr
31. Selection of Window Results
32. Selection of Door Results
33. GAN Process Diagram
34. Traditional Design Process Diagram
35. GAN Collaborative Design Process Diagram
36. Selection from Site Dataset
37. Selection from Site Results
38. Selection from Site Interpretations
39. Selection from Exteriors Dataset
40. Selection from Exteriors Results
41. Selection from Exteriors Interpretations
42. Selection from Refining Axonometrics
43. Site #9 Axonometric
44. Program Diagram
45. Selection from Interiors Dataset
46. Selection from Interiors Results
47. Interior Program Placement Diagram
48. Site Plan
49. Second Floor Plan
50. Long Section
51. Short Section
52. Elevation A
53. Elevation B
54. Elevation C
55. Elevation D
56. Exterior (above) and Interior (below) pix2pix pairs
57. Exterior Line Drawings For Algorithm Input
58. Interior Line Drawings For Algorithm Input
59. Pix2pix Renders

GP1 Report - Background and Context

Much has been discussed about architectural drawing, one of the foremost examples being Robin Evans's essay Translations from Drawing to Building, which presents the myth of the invention of drawing and how it progressed to shape architectural history. With the invention of the first analog drawing device, the precursor to more sophisticated pencils, representation took on a new role in the design process, one that required the "suspension of critical disbelief… for architects to do their jobs" (Evans, 154). Design moved on from the collecting of materials and combining them into a structure; the reference could now exist before and after the physical object. The pencil became the technological development that allowed the orthographic era to become the primary way of thinking and communicating, a time "in which thought was structured by rule-bound lines with beginnings and ends" (May, 14). Architectural drawing developed to privilege "geometric gestures structured by the laws of scale and proportion" that "represented the silence of lived spatial experience, thus placing form and materiality at the center of thought" (15). Drawings were guided by ideas of planarity and projection; elevations, plans, and sections arose as the architectural drawings best suited to communicate the context of the orthographic. Through the invention of the pencil, architects had a way to pass on spatial and cultural data, a way that no longer existed only in physical buildings but now in the realm of the reference. Throughout the orthographic era the reference became the data through which architects learned what came before them and how to build for the future, and as the development of drawing continued, recognizable styles were developed that came to hold a wealth of knowledge ranging from the cultural to the technical.

Some of the most recognizable drawings from history belong to Andrea Palladio (Fig. 1). The pencil allowed him to visualize, develop, and represent the spatial ideals that he intended his buildings to portray. Ideas of symmetry and proportion are amplified in the top-down plans that have come to be used as precedents by the architects who followed him. Much of the influence of Palladio is not shown in his architectural drawings alone; they were augmented by another form of representation. In 1439 Johannes Gutenberg invented the printing press, which allowed the marks made by the hand to be reproduced faster than previous methods. Palladio took to the written word to further communicate the theories of his drawings and built work. His treatises and engravings traveled Europe and spread his ideas of architecture (Purdy, 193). The lack of context in his drawings allowed the focus to be placed on the proportion and aesthetics of the building, and resulted in translations of his work being built outside Italy, in more northern regions of Europe.

Fig. 1 Villa Almerico (Villa Rotunda), Andrea Palladio
As time continued, other technologies began to be introduced that influenced drawing styles and modes of architectural representation. It is not known when the camera obscura came into existence, but it created a new way of perceiving the world that became a precursor for future innovation. The technology is rather simple: light reflecting off objects passes through a tiny hole and projects those objects on the other side. Early instances of the camera obscura exist in references to the study of light, but over time the projections became useful in entertainment and eventually got used in the practice of drawing. This allowed the surrounding environment to be traced in high detail and compositions to be created that referenced the physical world. It has been proposed that Vermeer created his paintings (Fig. 2) of interior scenes in the 17th century aided by the camera obscura, creating a unique form of representation that differed from the perspectives that came before (Steadman, 287).

Fig. 2 The Kitchen Maid, Johannes Vermeer

Perspectives copy how the eye perceives a scene but move it to the plane of the orthographic. Because of the legibility of the perspective, it is an integral way of communicating the qualities of space in a less technical and more emotional way, emphasized by its relation to the human body. In The Elements of Perspective, John Ruskin guides readers through a series of problems that teach them how to construct perspectives in relation to plans of objects in space. The book is introduced by an illustration of a viewer looking out the window at the framed view. If this image could be traced on the glass the result would be a perspective, but this is only true if the viewer remains perfectly still; by moving the point of the eye, the view and perception of space can be changed (Ruskin, 1). The problems presented in the book relate to the way in which objects in space can be projected from plan into perspective (Fig. 3). As the problems progress, the shapes in perspective become more complex and start to resemble architectural elements (Fig. 4). Ruskin's intention with the technicality of his teachings was to provide architects who could already understand other forms of orthographic drawing with the proper tools to expand their means of communication.

Fig. 3 Problem 1 Figure 4, John Ruskin
Fig. 4 Problem X Figure 25, John Ruskin

For Ruskin the perspective was a series of dots and lines existing in relation to a plane; while the orthographic drawing was influenced by many technologies, it was still held hostage by the pencil. Richard Difford traces Ruskin's teachings on perspective to Le Corbusier's paintings of the Plan Voisin for Paris (Fig. 5). Corbusier's knowledge of how to manipulate the sight point and line, the vanishing point, and the station point and line allowed for the creation of perspectives that attempted to push the limits of the relationship between perspectives and the body. Difford proposes that Corbusier did not come up with these unique perspectives on his own, but that they were based on the diorama technology of Louis Daguerre. Daguerre's dioramas (Fig. 6) placed the perspective at so great a distance that the vanishing points of both eyes disappeared at the same place, removing the sense of convergence from the drawings (Difford, 834).
When done correctly this creates flattened perspectives that mirror the mechanics of the eyes. Daguerre's dioramas used this perspective with paintings on translucent media that, when combined with controlled artificial light, could increase the illusion of reality in the pictured scene. Corbusier took advantage of this perspective to further his illusion of the ideal view of Paris: one that, when viewers were confronted with it, would convey spatial and ideological data that could be understood in relation to the human body, even though the perspectives appear from a viewpoint so high up that it would be impossible for the body to occupy.

Fig. 5 Plan Voisin, Le Corbusier
Fig. 6 View of the diorama of the Boulevard des Capucines, Louis Daguerre

Daguerre's influence would not end with the dioramas. In 1839, seventeen years after the first exhibition of the dioramas in Paris, Daguerre perfected the ability to chemically fix the reaction of the sun on a silver-plated sheet, and the photograph was born. For the first time the hand was removed from creation, and though it would be a long time before the pencil would be replaced, a technology was now in place that prophesied a future where it was no longer needed. Early photographs required long exposure times and large amounts of light, which meant that immovable subjects were ideal to photograph. Infrastructure suited these conditions, and photographers took to the streets. At this time Haussmannization was in full swing in Paris; the streets were rapidly changing, and photographers such as Charles Marville (Fig. 7) and Eugène Atget (Fig. 8) captured and preserved images of old and new Paris (Sramek, 10). It is through these photographs that an image of Paris was constructed, and photography began to play its part in the preservation of architectural and cultural data that before was only done by the hand of the architect and the artist.

Fig. 7 Rue au Lard, Charles Marville
Fig. 8 Paris Rooftops, Eugène Atget

Photography did not stop there, as technological innovations made the process of reproduction faster and cheaper. Reproductions of drawings, paintings, and photographs dispersed throughout every region the way the written word had after the printing press. Images gained wider reach than physical objects, and they became the main way in which cultural data gained influence. It was with the production of Marcel Duchamp's Fountain (Fig. 9) in 1917 that these conditions were fully exploited. While the original ready-made urinal has disappeared, the piece has always been able to maintain its myth, as the Stieglitz photograph that Duchamp commissioned for it was widely circulated. This fulfilled the musings of Oliver Holmes, who in 1859 suggested "that we only need a few negatives of an object worth seeing, taken from different perspectives, and that's all. The object may then be destroyed" (On the Risk of Images, 43). Fountain may have disappeared, "but its semi-fictional documentation and narrative produced a guarantee, a shortcut to history through photography and writing" (Not Objects so Much as Images, 277).

Fig. 9 Fountain, Alfred Stieglitz

In 1949 the Xerox Corporation was trademarked, and the photocopier was ready to make its influence felt. Allowing a new ease of reproduction, the dispersion of objects quickly increased, and generations of architects began to graduate focused on developing their ideas and theory at a speed suited to the world of media instead of the slow pace of physical buildings.
Archigram came to popularity in the 1960s, seeking to propose fictional projects that critiqued the current state of architecture and the role that media played in turning buildings into easily consumable objects. Influenced by media and pop art, Archigram's drawings (Fig. 10) were free from the limitations of the physical world and exhibited "unfettered creativity, usually enjoyed by artists" that allowed them to create representations that "rank as the most memorable of the 1960's and among the most remarkable ever made" (Sadler, 3). The cultural data of architecture no longer needed, or even wanted, to be in built form. Archigram and their contemporaries looked to challenge previous notions about how architecture was thought of and created. This way of thinking relied heavily on proposing technological innovations that were cutting edge or impossible. This leaning in their work also allowed them to be at the forefront of the representational technologies that were slowly making their way into architecture. Amongst this group was Cedric Price, who in his talk Technology is the Answer but what was the Question shows his work Generator (Fig. 11), which contains some of the early computer drawings his firm produced. With the slide showing, Price makes a very underwhelming statement about the computer's role within the design process, saying, "but the use of computer in this form, was merely to cut the time - the office time - that it would take us to show a variety of possible plans and re-plans on the existing site" (Price, 22). The computer had begun to make its impact in the world of architectural representation by replacing the orthographic way of thinking and supplanting the role of the pencil as a tool of creation connected to the hand.

Fig. 10 Plug-in University Node, project Elevation, Peter Cook
Fig. 11 Generator, project White Oak, Florida Perspective, Cedric Price

The computer paved the way for the electrical drawing and the electrical image, the technological innovation that transitioned the orthographic world into the post-orthographic: a world on whose surfaces we see "simulated representations – electrical simulations of orthographic formats that once represented the world". John May presents these representations in the post-orthographic as separated from drawing in that the image "does not want(s) to be a representation of the world, it wants to be a presentation of the world, an automatic and perceptually up-to-date, real-time model of the world" (May, 19). The post-orthographic world is no longer one where the architect draws but a world where they "process images"; we live in a world where there is no longer a need for drawings in the production or representation of a building, and as architects further explore this avenue of production the "psychological-gestural residue of orthography is rapidly disappearing from … architectural culture" (May, 21). In other words, we have left the age in which drawing was the main form of communication and fully entered an era where the image contains all the cultural data related to architectural representation. In the post-orthographic world there is a heightened importance placed on the 3D model. May defines models as "images that "refresh" at a speed anterior to perception" (21), and it took a while for these images to gain significance as a form of worthwhile architectural data.
In the early 1990’s Frank Gehry looked to 3d-modeling software to solve the answer for the complex geography(Fig. 12) for which the tradition orthographic drawing was providing problems. His answer was not found in the field of architecture but outside of it in the realm or aerospace. Catia proved instrumental in helping the firm explore ways to represent and create formal gestures that were difficult in an orthographic sense. Gehry seeing the possibilities for what the tools made possible rebranded a version of Catia for architects called Digital Project (A History of Parametric) and while it was slow in uptake it would not be long before 3D models became an essential part of every workflow in the search to create new forms.GP1 Report - Background and Context5Architects have continually looked to the world around them for inspiration to help them make informed design decisions. Frei Otto famously studied soap bubbles (Fig. 13) to understand tensile structures and Gaudi hung weighted chains (Fig. 14) and strings as analog methods of form finding. Digital modeling has created a separation from the physical in this field as well. It was easy for Gaudi to see the result of adding a weight to a chain, the reaction is immediate and directly in response to natural forces. In the post orthographic world this ceases to exist as the physical laws that govern are simulated and can be adjusted and controlled in a way that is not possible in the physical world. It is a world that reduces everything to bits of code, 0’s and 1’s acted as the new points and lines, and in many cases the architect was provided tools, allowing the creation and manipulation of code without the need to fully understand it. This also reduced all types of architectural representation down to numerical data that could be used to create representations.Fig. 12 Guggenheim Bilbao,Frank GehryFig. 13 Frei Otto Experimenting with Soap Bubbles,Frei OttoFig. 14 Sagrada Familia Church,Antoni GaudiCoding has seemed to place an increased importance on efficiency in architecture. One of the major outcomes of this culture of efficiency is the development of BIM software. In the case of programs like Revit there is an interesting contradiction between the orthographic way of production on a non-orthographic interface. Revit offers the ability for the architect to work in simulations of traditional forms that all update each other instantly as the techniques we would recognize from orthographic drawing are just an interface used to update mathematical data in the background that is informing the code. The ability to then add more layers of data to the legibility of the pseudo-orthographic simulations makes BIM a powerful form of communication understood throughout the conventions of the building industry. This has allowed for engineers and other consultants to be brought in to the process and to collaborate in a direct way but has caused the visualization of numerical data to gain prevalence as a form of representation. Spreadsheets have risen as an essential part of the design process as they carry the answers to what decisions become the most efficient and most economical.For those that fight against the recontextualization of the orthographic mentality the data of architecture comes to play a different role. For this group efficiency is still important but it is in the optimization of form finding. 
Patrik Schumacher has been one of the lead champions of this movement, which tries to use the power of scripting to create architecture that aesthetically explores the "elegance of ordered complexity and the sense of seamless fluidity, akin to natural systems" (Schumacher, 16). For Schumacher, computation has allowed the "cumulative build-up of virtuosity, resolution and refinement facilitated by the simultaneous development of parametric design tools and scripts that allow the precise formulation and execution of intricate correlations between elements and subsystems" (15). Data has become a new language, one that is used to rationalize formal moves.

The rise of data is not only connected to the rise of the computer and the ability to process data rapidly; it is largely connected to the ability to share that data amongst a larger group. The printing press, photography, and other innovative tools of reproduction created massive changes in how cultural data is shared, and the Internet has stepped in to continue their role at an accelerated pace. The transfer of information continues to speed up as new ways are found to translate sound and sight into compressible series of numbers that can be uncompressed and reprocessed into a version with little variation from the original form. This has placed more information at the fingertips of designers than ever before, and not just the typical data that architects are used to, but forms of data from other fields.

Zeynep Çelik Alexander unpacks the influence of consumable data with the illustration of architecture schools whose walls used to be filled with historical precedents but are "covered instead with flow diagrams on energy consumption, images of brain scans, maps of transportation networks, or models of thermal distribution" (Alexander, 23). For Alexander this focus on the world of data has allowed two things to happen: architecture has been able to break free of the historical boundaries placed on it by becoming interdisciplinary, but in doing so it has also lost the connection to the historical and cultural data that cannot be stored as ones and zeros. The political, ethical, and cultural arguments that designers make are being forgotten in the reduction of architecture to data. In many cases this data exists in a form that is beyond the technical understanding of architects, removing the ability to critique the technological conditions that create from this data (30).

As artificial intelligence and machine learning are deployed in every field, the separation between user and data is seemingly getting wider. The tools processing the data are getting to the point where the casual user has an idea of the frameworks the AI uses, but the specifics are contained inside the inaccessible black box of the algorithm. These algorithms have enhanced designers' toolkits, and for most it is not important to understand how these tools work, only that they work. The developers of Lunchbox for Grasshopper recently introduced machine learning components that a user can download and employ in very little time. On their website, one suggestion for how to use this tool is to have the AI suggest façade designs for a building based on the orientation of the sun (Proving Ground). It is not hard to see a day when an architect's justification for the façade they designed (Fig. 15)
is that the AI told them this one was best, a statement with very little critical thought that leads to little opportunity for actual discourse about design.

Fig. 15 Screenshot Provinggrounds.io, Proving Grounds
Fig. 16 Screenshot algorithms.design

AI is moving into design fields (Fig. 16): layouts for websites are being designed by AI, user experience is being personalized by AI, and AI can generate entirely fake photorealistic drawings and videos. With the research being done in AI and the willingness of researchers to share their findings, more and more decisions are being justified by the acceptance of whatever an AI spits out. Ignorance of the technological context is not an option, and the role of the critical designer will require an understanding of how an AI is set up to make decisions. An AI is only as good as the information it is given, and it is important that these systems are open to critique. This becomes especially important when AIs start to analyze and produce cultural objects. An AI functions on data, and that means breaking the long history of cultural data contained in images and drawings down into numerical pixel values. An AI will interpret and create without the historical and cultural understanding that architects would see in the same dataset, leaving the role of critique to the designer as they use the tools they are given access to in guiding decisions that come to have cultural and ethical implications for eventual users.

GP1 Report - Methodology

There are multiple research directions in the field of artificial intelligence, and each claims that its direction is the way to superintelligence. One of the research tracks that has recently taken large steps forward takes a biological approach by trying to digitize the functions of the human brain. While little is known about the complex functions of the brain, the basics center on the functions of neurons. At the simplest level a neuron (Fig. 17) is a processor: it receives ions as stimuli, and when the stimulus value reaches the threshold level, the neuron fires and passes an electrical signal on in the brain. When this electrical activity happens in conjunction with others, the brain recognizes certain patterns of inputs and associates them with a corresponding output.

Fig. 17 Neuron Diagram

The programming for a neuron was first proposed in 1943 by Warren McCulloch and Walter Pitts. Their neuron performed the basic task of firing, but the threshold values had to be preprogrammed, giving the device no ability other than deciding whether or not to fire. Frank Rosenblatt continued the idea in the 1950s by building devices called perceptrons (Fig. 18). His devices had the ability, through mechanical means, to update the threshold value of the neurons; this gave the neurons the ability to start to adapt to different tasks and showed how the concept of learning could be approached (Domingos, 96-97). The perceptron still had a major problem that resulted in it being ignored: a single perceptron can only separate data into two outcomes with a single linear boundary, firing OR not firing. It did not have the ability to learn compound, multi-dimensional classifications such as the exclusive-or (firing on one input or the other, but not both).
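To make the mechanics above concrete, the sketch below implements a single Rosenblatt-style perceptron in Python. It is a hypothetical illustration, not code from this project: a weighted sum tested against a threshold, with the weights nudged only when the unit fires wrongly. It learns the linearly separable OR relationship but can never settle on the exclusive-or, the limitation described above.

```python
import numpy as np

def train_perceptron(samples, targets, epochs=25):
    """Rosenblatt's rule: nudge the weights and threshold only on a wrong firing."""
    w, b = np.zeros(2), 0.0
    for _ in range(epochs):
        for x, t in zip(samples, targets):
            fired = int(w @ x + b > 0)   # all-or-nothing threshold response
            w += (t - fired) * x         # no change when the unit is correct
            b += (t - fired)
    return w, b

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])

w, b = train_perceptron(X, [0, 1, 1, 1])        # OR: linearly separable
print([int(w @ x + b > 0) for x in X])          # -> [0, 1, 1, 1]

w, b = train_perceptron(X, [0, 1, 1, 0])        # XOR: no single line separates it
print([int(w @ x + b > 0) for x in X])          # never matches [0, 1, 1, 0]
```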
It was clear that the way to address this problem was to link multiple perceptrons together and thus create the neural network; while the idea was clear, it was unclear how to program multiple layers to update themselves based on the results of the output layer (103). The next update to the neural network was the introduction of the sigmoid function. From the beginning, the nodes of the network had received and fired in an all-or-nothing response; the sigmoid function allowed the input data to be mapped to values between -1 and 1. This provided differing levels of response and changed neural networks from deterministic to probabilistic. This subtlety allowed for more complex mapping of the layers of neurons and the ability to see how important individual nodes were in relating input and output values (105). In 1986 David Rumelhart developed the algorithm for backpropagation, allowing nonlinear problems to be solved in statistical analysis; his algorithm has since been widely applied, and one of those applications was the neural network. Backpropagation gave each node the ability to analyse and update its threshold values and not get stuck on non-linear classifications of data (113). This allowed the neural network to be applied in different fields, with many breakthroughs coming in fields where the network learns to recognize individual features within a larger dataset, such as object, speech, and character recognition.

Fig. 18 Perceptron Diagram

Stepping back from the historical development of the neural network (Fig. 19), it is important to understand how the network functions and processes data. The neural network consists of three parts: the input layer, the hidden layers, and the output layer. The input and output layers are easy to understand, as they are the values that directly relate to the input and output data. If a grayscale image is being input and output by the network, the values of the individual pixels, between 0 and 255, would be the values inside the layers. If the input image is sized at 24 x 24 pixels, the input layer is made up of 576 individual values. The output layer is the exact same, with its number of values relating to the determined size of the output image. The layers in between the input and output become more complex, and often little is known about the specifics of how they function in each application; these middle layers are known as the hidden layers. The hidden layers contain a determined number of nodes. The values from the previous layer are passed on to each node, and as they are passed on they are multiplied by a weight (w) that is updated through backpropagation. At the node the weighted values are summed together and passed through the sigmoid function, resulting in a value between -1 and 1. This value is then passed on to the next layer, and the process continues through all the layers until the output values are reached. In the training phase for neural networks, pairs of data are often provided: the input values and the expected output. Through these pairs the algorithm updates the values of the weights to recognize the patterns inside the network and develop the most accurate weights it can.

Fig. 19 Neural Network Diagram
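The forward pass just described can be written out in a few lines. The sketch below is a hypothetical numpy illustration, not code from the project; the sizes mirror the 24 x 24 example above, and the squashing function that maps values into (-1, 1) as described is tanh (the classic logistic sigmoid maps into (0, 1) instead).

```python
import numpy as np

def squash(v):
    # tanh maps any weighted sum into (-1, 1), the range described above.
    return np.tanh(v)

# A 24 x 24 grayscale image flattens into 576 input values; pixel values
# 0-255 are scaled to 0-1 so the squashing function is not saturated.
x = np.random.randint(0, 256, 576) / 255.0

rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.1, size=(64, 576))   # weights into a 64-node hidden layer
W2 = rng.normal(scale=0.1, size=(576, 64))   # weights into the 576 output values

h = squash(W1 @ x)   # each hidden node: weighted sum of all inputs, squashed
y = squash(W2 @ h)   # output layer: the same operation on the hidden values

# Training would compare y against the expected output of an (input, output)
# pair and backpropagate the error to adjust W1 and W2.
print(y.shape)       # (576,) -- one value per output pixel
```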
Using the optimized values of the weights, the network can then take input values on their own and predict output values based on what it has learned to be the probable outcome. This basic framework of the neural network has been modified and expanded to work within desired applications, many of which have been used in technologies that are capturing the imagination of the public, from IBM's Watson to self-driving cars. While these applications offer a glimpse of the future, they have also allowed for more awareness of the limitations. Neural networks are set up to go through large amounts of data and learn; once this learning is done, the algorithm can only regurgitate the same information over and over (119). They are not able to make new inferences from the data. They provide a façade of intelligence but ultimately further highlight the sentiment that "computers are the ultimate idiot savants" (71), and that while they may seem to hold vast amounts of knowledge, there is still a lack of wisdom. Reasoning and logic are important to the human decision-making process and are relied upon by the designer in understanding the human elements of a building; these qualities are not contained in the neural network.

When using a neural network for the purposes of creation, the built-in logic of the algorithm sets up a systematic problem, as it is a process that does not have the ability to create new inferences or use logic to inform the products it is creating. If a neural network on its own could produce a design for a building, the lack of logic and new knowledge would cause it to create variations of the past without the ability to evaluate the impact of its creations. If the network is supposed to replace the designer, this has significant ramifications for the design process; but if it is paired with a designer, who is trained to create only what is logical, its ability to process information in a way that is fundamentally different from how the natural brain works can be seen as a strength, as the neural network offers a diverse and challenging way of creating through the arrangement of pixels. As the scope of artificial intelligence changes, it is not out of the question that other research directions will provide ways of addressing the fields of logic and reason, as well as other shortfalls that will arise as neural network technology is further explored. As this happens, the designer is not removed from understanding the decisions made by algorithms; more responsibility is placed on the designer to understand the encoded process that governs how the AI will make decisions. It will not be long before these systems become more and more a part of the processes we employ in the designing of architecture. If we disregard the technology in its beginnings, it is moving at a pace that will quickly leave us behind.
It is important to understand the current state of the algorithms that can mimic human function, so that when they become better at what they do, their outcomes are not blindly accepted but understood, allowing for high-level critique. NVIDIA has become one of the largest researchers in the neural network realm, as they produce and develop the GPU technology needed to perform the calculations efficiently in a relatively small amount of time. A group from NVIDIA led by Tero Karras became widely known amongst the public when they took the framework of the Generative Adversarial Neural Network (GAN) and built upon it to process high-quality images of celebrities. The outcomes of their experiment were high-quality photorealistic images of celebrities that never existed (Fig. 20).

Fig. 20 1024 × 1024 images generated using the CELEBA-HQ dataset, Tero Karras

The GAN (Fig. 21) is the development of pitting two neural networks against each other and having them compete. The two networks are usually classified as the generator and the discriminator. The generator is a neural network that takes input values and compares them to output values with the hope of creating images that resemble the initial image dataset. The discriminator uses the original dataset to learn to distinguish between images in the dataset and images created by the generator. While the network is training, this competition influences the adjusting of the weights as both networks continue to get better, the generator producing more photorealistic images and the discriminator distinguishing between images that are converging in similarity (a schematic sketch of this loop appears in code below). It is not completely understood how this happens inside the black box of the neural network; the intent is that the network is able to mathematically identify and create pixel relationships that form patterns and can simulate objects.

Fig. 21 Generative Adversarial Neural Network Diagram: Generator (Artist) vs. Discriminator (Critic)

For architects this offers a line of inquiry different from the typical uses of AI within the design process. Algorithms are being used to create designs that optimize data in many forms, whether through the optimization of form based on how humans move through a site or through analyzing sun paths in greater detail. While the neural network in its base form is based on optimization, the optimization of pixel relationships seen within a dataset, its outcomes are difficult to define as optimum or efficient. They are reinterpreting cultural and historical data that is largely subjective.

The largest source of easily available data that is focused on architecture is images. With one Google search, plans, sections, perspectives, and many other forms of architectural representation can be accessed that reveal ways of designing and presenting buildings. Websites that curate architecture projects show how groups and cultures can accomplish goals by how they position the data found in images. Comparing two sources of representational architectural data will reveal very different outcomes: the images that form the Instagram feed of Archdaily are very different from the image database of Artstor. With such easy access to a high-quality form of data, it would be ignorant to say that architects are not already being influenced by the curation bias created by the sources from which they consume their architectural imagery.
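Returning to the adversarial loop described above, a minimal schematic sketch follows. It is a hypothetical PyTorch illustration, not the TensorFlow DCGAN implementation this project actually used; the layer sizes, learning rates, and flattened-image representation are placeholder assumptions made only to show the generator/discriminator competition.

```python
import torch
import torch.nn as nn

IMG = 174 * 174 * 3   # flattened size of a 174 x 174 RGB image, as in the experiment
NOISE = 100           # assumed size of the generator's random input vector

# Generator ("artist"): noise in, fake image out.
G = nn.Sequential(nn.Linear(NOISE, 512), nn.ReLU(),
                  nn.Linear(512, IMG), nn.Tanh())

# Discriminator ("critic"): image in, probability-of-real out.
D = nn.Sequential(nn.Linear(IMG, 512), nn.LeakyReLU(0.2),
                  nn.Linear(512, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

def train_step(real_batch):
    """One adversarial round: the critic learns to separate real from fake,
    then the artist learns to fool the updated critic."""
    n = real_batch.size(0)
    real_labels, fake_labels = torch.ones(n, 1), torch.zeros(n, 1)

    # 1. Train the discriminator on real images and on generated fakes.
    fakes = G(torch.randn(n, NOISE)).detach()   # detach: do not update G here
    loss_d = bce(D(real_batch), real_labels) + bce(D(fakes), fake_labels)
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # 2. Train the generator: it "wins" when D labels its fakes as real.
    fakes = G(torch.randn(n, NOISE))
    loss_g = bce(D(fakes), real_labels)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()

# Demo with a random stand-in batch of 25 "images" scaled to tanh's (-1, 1) range.
print(train_step(torch.rand(25, IMG) * 2 - 1))
```

Each call pits the two networks against one another once; in a real run this step repeats over every batch of every epoch, which is the competition the training sessions below trace.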
Biases already exist through the systems that distribute images, and these systems become important as algorithms act and contribute within them without responsibility for what they create.

Experiment 1

To understand the implications of a neural network producing architectural images, an experiment was designed with the hope of gaining a greater understanding of the GAN. NVIDIA ran their algorithm on top-of-the-line GPUs that are far beyond the capabilities of the machines I have access to. Fortunately, many neural networks have been rewritten to be less computationally intensive. The celebrity experiment is no exception: Taehoon Kim (@carpedm20) has brought the basic framework to the Python language in his project DCGAN (Kim). The project removes the need for a GPU and moves the computation to the CPU, allowing anyone to download and run the code. The input and output sizes in this algorithm are limited to 174 x 174 pixels instead of the 1024 x 1024 pixels the NVIDIA algorithm processed. The limitations of this algorithm result in lost image quality that ends up affecting the photorealism of the results.

The parameters of the technology cause some problems when thinking about the size and scale of architectural data. The digital world is increasingly dimensionless, which contradicts the orthographic world of architectural representation. There are ways this could be addressed by standardizing the scale of the input, but for the sake of an initial test, perspectives seem to fit and are the most accessible form of representational architectural data. They are scale-less and have a loose relationship to space that might pair well with a system that creates without logic. With these parameters in mind, an experiment needed to be set up that would allow for a greater understanding of how the neural network functions, whether and how it can be controlled by the architect, and whether the results reveal anything about reducing architecture to the pixel relationships contained in images.

The limits of the algorithm mandated that it run on input images of architectural perspectives sized at 174 x 174 pixels. One of the things revealed through others' DCGAN experiments is that the initial datasets needed to have similarity in the overall aspects of the input images but variances that made each individual image unique. With the celebrity database, the images were cropped in on the face, providing a basic shape and structure that all the images followed. The differences were found in the variance of unique elements such as eyes, mouths, and other facial features. This becomes much more difficult in the curation of architectural databases, as the basic forms of buildings differ greatly in most cases and unique details are lost when entire buildings are viewed at small resolutions. To try to overcome this, an architect with a unique style was chosen, with the hope that the algorithm would attempt to create unique building forms amongst a landscape. Without knowing how the algorithm would function on the dataset, it was time to create a database with the hope of learning how it could be improved in the future. Zaha Hadid Architects was chosen to be the focus of the database. Images play a unique role in her practice, including the paintings that became recognizable in her competition proposals.
In later works the practice's unique style became key as many nations developed cultural buildings that helped create national identities by being advertisements to bring in the rich and famous. One such example is the Heydar Aliyev Cultural Center in Baku (Fig. 22). The recognizable form of the building is featured in many perspective images that have become more important than the building itself. The building was used to present a new vision of Azerbaijan and the identity the country was creating through architecture. Large developments were being created with iconic cultural buildings whose role was to cater to the high-end foreign market. The Heydar Aliyev Cultural Center was not the only example of ZHA designing buildings for this purpose, as the recognizable style was purchased by many wealthy clients who wanted the recognition of having a Zaha.

Fig. 22 Heydar Aliyev Cultural Center, Photographer: Iwan Baan

The importance of images to ZHA's work means that visualizations and images are very prevalent in different contexts online and could be easily gathered. This was important when collecting the images for the database, as there is a need for large amounts of unique data. To curate the dataset, every built ZHA project was searched in Google. A batch downloader was then used to download all the relevant Google results. After going through all 34 Zaha projects, a total of 3352 images were collected (Fig. 23). The size of this dataset is tiny relative to other applications of the DCGAN. To build out the numbers of the dataset, Patrik Schumacher's YouTube page was used as a source of images coming from the design massing visualization videos. The videos were downloaded, and a selection of video frames was then converted to JPEGs (Fig. 24) to bring the dataset up to 8428 images. To convert the images to a usable format for the algorithm, all images were then batch cropped to a square format and downsized to 174 x 174 pixels (a sketch of this preprocessing step appears in code below). The database was now ready for the experiment to commence. While the intention of the algorithm is to create photorealistic images related to the ones in the dataset, the limited data and heavy biases within the database did not give me confidence that photorealistic versions of fake ZHA projects were attainable with this version of the algorithm. With the dataset curated, there is the possibility that objects will appear that start to have some of the characteristics found within ZHA projects. This will provide the possibility of looking at the formal gestures found within the projects in a new context and ideally offer up the possibility to learn about the qualities of the algorithm and about the relationship of the built environment to the images that represent it.

Fig. 23 Selection From Dataset #1, Batch Downloaded From Google
Fig. 24 Selection From Dataset #2, Batch Downloaded From Google

GP1 Report - Results

The training was broken into 6 sessions over 2 weeks, because the intensity of the computation required me to run the algorithm during periods when I would not need my computer for other purposes. Training consisted of the algorithm taking a variable number of images based on a preprogrammed batch size; in the case of this experiment, 25 was chosen. The algorithm takes 25 images from the dataset and trains the network on those 25 images, and it does this until it makes its way through all the images in the dataset. A complete run through the dataset is called an epoch.
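The batch cropping and downsizing described above, and the batch arithmetic that follows from the chosen batch size, could look something like the sketch below. The folder names and the use of the Pillow library are hypothetical illustrations, not a record of the actual workflow.

```python
from pathlib import Path
from PIL import Image

SRC, DST, SIZE = Path("zaha_raw"), Path("zaha_174"), 174   # hypothetical paths
DST.mkdir(exist_ok=True)

for i, src in enumerate(sorted(SRC.glob("*.jpg"))):
    img = Image.open(src).convert("RGB")
    # Centre-crop to a square so nothing is distorted, then downsample to
    # the 174 x 174 input size the CPU version of the DCGAN can handle.
    side = min(img.size)
    left, top = (img.width - side) // 2, (img.height - side) // 2
    img = img.crop((left, top, left + side, top + side))
    img.resize((SIZE, SIZE)).save(DST / f"{i:05d}.jpg")

# With 8428 images and a batch size of 25, one epoch is 337 full batches
# (plus a 3-image remainder), so an output every 100 batches falls roughly
# three times per epoch.
print(8428 // 25, 8428 % 25)   # -> 337 3
```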
Session 1

The first of the 6 sessions was completed over a weekend, and therefore a total of 30 epochs was run in this session. The other five sessions were run overnight in less time, so 10 epochs were run in each. During each session, every 100 batches a series of 25 images would be output showing the latest results of the training. This would also happen at the end of each session, with the algorithm producing 2500 images from the last checkpoint, giving an indication of what it could produce at that time.

For the first 300 batches the training produced very pixelated images. One of the first developments that can be seen is the black bars located at the top and bottom of each image. This can be attributed to the large number of film stills in the database: when the videos were converted to images, the conversion of the aspect ratio meant that black bars were included in the images. This is a feature that continues to show up for the rest of the training period. The other element that is prominent for the first 300 batches is that the compositions all feature a green/yellow/brown object surrounded by blue. At this point the algorithm is learning to distinguish between land and sky; it has realized that images usually contain an object amongst a background, and it is trying to figure out how to create that in a very general sense.

At batch 400 we enter the second epoch. It is here that groups of black spots start to develop on the images. Patterns are developing throughout the image, and smaller groups of pixels within the larger image have become more related to each other. This continues until the end of the 3rd epoch. Over these two epochs the other differences seen are in the changing background colors, but also in the introduction of white, which seems to be imitating clouds, though at this point it is tough to make any judgment of content. The colors in the images seem to have strong correlations to certain colors in the dataset, with projects that have lots of images influencing the results more than anything else.

At the beginning of the 4th epoch the black patches within the images seem to be developing into more condensed forms rather than being distributed throughout the entire image. After some batches they seem to get smaller, but they soon return to a larger size. The color outlining the black patches is no longer only white; browns and reds are now showing up. The backgrounds at this time are still changing the same way they were in earlier results, but they are less pixelated than before and have gained a smoothness.

Between the 4th and the middle of the 6th epoch things stay very similar, although the yellow/green/brown ground colors have formed a more significant separation from the blue at the top of the images. The middle of the 6th epoch brings the development of mounds within the images. These are objects that are centered in the frame, with the ground below them and a sky in the background. It seems as if rudimentary figures are forming, many resembling pyramid-shaped objects with wide bases and narrow tops. Some are isolated, but iterations have appeared that contain multiples. Around the 12th epoch the objects start to gain a smoothness, and the black spots are disappearing. The color of these objects changes quite a bit, as some take on greys but others become the same color as the landscape.
Occasionally the colors will make large jumps to a red for a couple of batch results, but within a few hundred batches the results return to more neutral colors. The 14th epoch brings two developments. The objects are starting to be less like singular blobs and are developing differentiation into multiple parts, though at the size and quality of the images these parts mostly consist of small black dots and no details. The other development is the arrival of subtitles on some of the images. The dataset contained film stills from a video with black subtitle bars and white text, and these are starting to appear over a few of the images. They do not contain actual words, only lines that resemble text. In the 18th epoch the images start to develop a depth that was not contained in them before. This reveals itself through the illusion of some kind of spatial relationship between the objects, instead of just flat patches of color developed over top of each other. This seems to be the last major development in session one. Throughout the remaining 12 epochs differences are very minor, as the objects within each batch evolve minor details that add a little more complexity to the forms. Overall the training has slowed down significantly at this point.

During the 30th epoch my computer crashed; this resulted in the first session ending with no result images being output. The only images were the training images described above, so from those all the observations about session 1 have been derived.

[Figs.: Session 1 training output grids, batches 100-9000, epochs 1-30]

Sessions 2-5

Later training sessions continued the results that started to show at the end of session 1. No longer are big jumps forming within a couple of epochs, and it gets much harder to distinguish variations between the training results. There are small moments in the result images where progression is seen, but they are few and far between. One of the first improvements that shows up is the ability to read words in the subtitles; this may not be an architectural detail, but it shows that further detail is being recognized by the generator and discriminator. While the words become readable in the second session, in later sessions they start to appear clearer. One of the other improvements is in the repetition of details. It seems as if many of the images are starting to develop lines across the objects. This is a result of the database containing many projects with panelized facades that are repeated over flowing surfaces. Because these panels differ in structure and color, the algorithm is having a tough time figuring out how to apply these details.
They become rougher and smoother at various points, with session 3 containing the more pixelated areas and sessions 4 and 5 fighting to resolve these patches of roughness a little more. When looking at the output results at the end of each session, forms from the dataset are being recognized, and elements of the projects are starting to make their way into the results. Images can be seen containing elements from a single project, imitating its form and materials, but also combining elements of different projects. There is a tendency for the objects to take on neutral colors of grey and white. This is probably because of the large number of visualizations in the dataset that show generic massings without materials, as explorations of site and form alone. The other reason could be that the prevalent panels of ZHA projects carry very little material difference and are often monotone, usually white or silver.

[Figs.: Sessions 2-5 training outputs and end-of-session results]

GP1 Report - Analysis

If the goal was to create photorealistic images that resemble features of ZHA projects, the experiment is a failure. Of the thousands of images produced, there is not even one that could fool a viewer into thinking it was not a simulation created by a computer. While there was not a photorealistic result, there were some promising developments in the results that are starting to become more believable simulations of architecture. In looking at the successes and failures in the results, the influences and flaws of the dataset can be understood to reveal more about the workings of the technology.

One of the first developments in the results was the separation of ground and sky; there is something almost biblical about this development. In the first moments of creating difference, the algorithm recognized that there were two main elements of context that showed up in perspective images of architectural objects. It is tough to find an image that does not contain both ground and sky. For photographers these two elements become important tools for framing the architectural scene. One of the first rules that photographers learn in composition is the rule of thirds. This rule suggests that the frame is separated by third lines, and on those lines the main elements of the composition are placed. Often this results in images with one third ground and two thirds sky, or vice versa depending on emphasis. The way the compositions of sky and ground are formed by the algorithm suggests how prevalent this is, as the algorithm has recognized it and is using it in the way it has distributed the early pixel formations.

The other photographic element that plays a role in the development of the images has to do with the development of objects in the middle of the frame. Not only in photography, but in almost all perspective images, the scene is set up in a way that is meant to reveal an element. If we look at the elements of perspectives, such as vanishing points and the horizon line, when they are thoughtfully set up they put focus specifically on the element the author was trying to bring focus to. In many of the images from the dataset this has to do with the formal gesture of the building amongst the landscape. This suggests that the appearance of a form on the landscape is the next dominant feature specific to these images.
The two dominant elements of perspective images, the context and the object, have formed, and on their own these can start to be recognized as architectural objects.

Fig. 27 Enlarged Selection From Results #1. Created by Author.

The next level of development that would push the images further towards legibility would be in the details. As suggested in the setting up of the experiment, the limits of the technology result in the loss of image detail. This is further amplified by the scale of the perspectives in the dataset. The perspective images were not created in a way that focused on the small details of the element. While details do play a role in ZHA's practice, the initial visualizations that are used to convey ideas to clients are more about developing an iconic shape or form that flows out of the landscape. This means that many of the visualizations are massing models, or only slightly more developed than initial massings; they do not contain many details, and to expect details to show up in the results would be a lot to ask of the algorithm, as they are hard even for a human onlooker to distinguish in the precedent images. This problem could probably be addressed, but it would only be greatly improved if images were curated to focus on similar details, so that the algorithm could work to simulate details based on a more focused, curated dataset.

The lack of details also brings up another issue with the resulting images, the one that deals the biggest blow to the legibility of the architectural images in the results: the separation of the ground and the object. This may in fact be a positive element in analyzing the intent of ZHA's work. Patrik Schumacher has claimed that one of the goals of parametric architecture is to "enhance the overall sense of organic integration by means of correlations that favour deviation amplification" (Schumacher, 17). The algorithm has a tough time distinguishing between landforms and building forms, and I do think this has some relation to the design intent of ZHA, as the fluidity of the curves bears more resemblance to organic forms than to traditional building forms. Curation of the dataset could have been done with more traditional building forms, though I have my doubts that this would result in significant changes. It would also mean that certain building types would not lend themselves to the algorithm, resulting in the technology playing a further role in editing the histories of architecture through what it is able to produce.

My suggestion here would be to convert the original dataset to black and white. This would privilege formal elements of the image and enhance the contrast between elements within the dataset. The algorithm would also be able to work faster, as black and white images have 1 value for each pixel instead of 3. This would further remove some details in areas of the same tone, but it would be interesting to see if the removal of this detail would result in more distinct forms that are easier to recognize as architectural massings.
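As a rough sketch of the conversion proposed above, the dataset could be reduced to single-channel images with a few lines of Python; the folder paths here are hypothetical stand-ins for the actual dataset locations.

```python
from pathlib import Path
from PIL import Image

src = Path("dataset/zaha_rgb")        # hypothetical folder of curated RGB images
dst = Path("dataset/zaha_greyscale")
dst.mkdir(parents=True, exist_ok=True)

for path in src.glob("*.jpg"):
    # Mode "L" stores 1 luminance value per pixel instead of 3 color values,
    # privileging form and contrast over material difference.
    Image.open(path).convert("L").save(dst / path.name)
```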
The successes and the failures within the results point towards one part of the process having a great deal of influence over the whole: the building and curation of the dataset. The results of the algorithm have shown how similar the forms in the results are to the forms found in the dataset, and this begins to point out a crucial role for designers who engage with the technology. Even though the algorithm has been run on only one architectural database, the results produced set up a choice about how the database can be manipulated in future experiments. For the algorithm to produce more realistic results, the database will have to become more specific and restricted in its inputs. While this will produce more recognizable results, it runs the risk of producing results that are very similar to the dataset and offer no room for creative interpretation. While this would be a technological success, it leaves the architect beholden to the inherent problems of the algorithm, chief among them that it blindly reproduces the qualities of the dataset with no logical interpretation of the qualities found in the results.

Fig. 28 Enlarged Selection From Results #2. Created by Author.

GP1 Report - Discussion

The resurgence of neural networks can be attributed to two factors: the increase in computing power and the availability of data. With every aspect of our lives digitally integrated and connected through social networks, online shopping, and the internet of things, a larger picture of how we individually and collectively exist in the world is being recorded. Most of this data has become accessible and is used to inform how the world responds and reacts. Facebook advertising allows the advertiser to upload a profile as a stand-in for their intended audience; an algorithm then uses this one profile to comb through Facebook's many users and create a subgroup made up of individuals similar to the uploaded profile. The advertiser can then filter through the subgroup using other defining data points to target their ads to a specific group of people. To advertisers, the way in which an individual consumes online media is predictive of the goods and services they will purchase; using data to predict future behavior has become a lucrative part of the industry.

In architecture it is not as easy to see how the recording of data is impacting design. Architects have usually moved on to new projects and new ideas by the time post-occupancy data can start being collected from buildings, and when post-occupancy has been assessed it is often years after the design process and can only manifest itself in future projects, instead of influencing the design of the projects that are directly related to the data. Daniel Davis is trying to address this problem through his research at WeWork, through the evaluation of meeting rooms (Davis, 119). Information, including occupancy and user ratings, is collected by an app. This and other information are then used to evaluate the performance of the room and update designs across their 200 locations, including changes to room layout, wallpaper, types of furniture, and so on. Davis' experiment is isolated and limited in its function and approach, but it does show a fundamental search to use data to define groups of people, and the belief that through the analysis of more parameters designers can approach optimized design solutions that work for everyone.

As data becomes available to designers and new ways of recording data are employed, design approaches are developing that seek to use data as "the fuel for new forms of automation and machine intelligence that will take over many of the tasks of traditional architectural practice" (Marble, 128).
With this approach there is an underlying belief that the systematic and logical govern all elements of design and should be privileged above all: if the statistical analysis of available data can be shown to justify the design decision, then that must be the correct solution. Yet people are increasingly less convinced by statistics, as the media is filled with different viewpoints using the same statistic, through different forms of manipulated visualization, to align with whatever effect they desire to create. Acknowledging this subjective use of statistics brings a greater call for evaluation of the systems and technologies being used to analyze large datasets. As data becomes more prevalent in the evaluation of design, the role of the designer will include being "proactively engaged not only in the use of this data but also in the development of the technologies, processes, and policies that determine what data is measured and how it gets used" (129).

The recognition that the systems that define artificial intelligence are subjective even when using empirical data is causing many to question the role that algorithms have in shaping the future. How much more does this shift when the tools of statistics are used on sources of input data that are harder to define by an optimum? The use of the DCGAN on an architectural dataset contained an underlying juxtaposition: a system that was using images to create new images. The digitization of representation has turned representation into a collection of values that, when interpreted with the proper tool, results in an image that can be read by the viewer. To the computer an image is a package of numbers relating to pixel values, stripped of the larger context of meanings that are often associated with images. For the computer an image is not worth a thousand words; it is worth a million numbers.

The potential of AI is to create images that mimic the systems of representation present throughout our histories while removing the entire structure of logic from their formation. This does not remove the algorithm from the production of histories. An image is defined by the fact that its "origins…are already distanced," which creates a reality where "the capacity of the image to signify autonomously becomes of even greater importance" (Gronlund, 22). Images have become a massive part of the social environment through which we exist and communicate. They flow easily throughout the world as simulations of everything from the retelling of events in our lives to substitutes for our emotional responses. It is not new that these images are manipulated and constructed from a wide range of sources, but this has always been done through creation processes controlled in large part by a human guiding every step. Creation has happened in a way that, even in its lowest form, combines source material with a communicative logic in the choices of the creator. With this logic removed from the tool of creation, human interaction is split into the processes of curation and interpretation, both of which revolve around the interpretation of images and the cultural knowledge that they carry with them.
In his seminal work Dispersion, Seth Price sums up this sentiment by writing "with more and more media readily available through this unruly archive, the task becomes one of packaging, producing, reframing, and distributing; a mode of production analogous not to the creation of material goods, but to the production of social contexts, using existing material" (Dispersion). It stands to reason that as technology distances the architect from the tasks of creating traditional architectural representations, it becomes more important to understand the histories and practices that have led to the tropes of representation that the algorithms produce.

For the experiment on the Zaha dataset, relatively little thought was given to the individual pieces of data that were collected. This was partly because the workings of the algorithm were unknown, but it was also a factor of the large amount of data that needed to be collected. The result was the grabbing of everything that had even a small relationship to Zaha. This is especially illustrated by the large number of very similar video stills that populated the database; making up almost half of the database, they ended up having a much greater effect on the final forms, as the relationships between the pixels in these images skewed the data towards their forms. The output images showed this skewing by reproducing subtitles, aspect ratios, and geometric shapes that matched these input images. With more nuanced control over the images during the process of creation, the results could have been manipulated to produce outcomes with different intents. Also largely influencing the database was the metadata associated with each image. As I searched the name of each Zaha project, only images associated with the words typed into the search engine came up. If I had chosen to blindly accept all the results that came up on each page, the dataset would have included further images such as portraits of Zaha, photographs of stakeholder meetings, logos, and many other images that were not architectural. The removal of these images was done to gain some similarity, but the process illustrates the "unruly archive" of the internet: while it is a large source of data, it is filled with disregard for the disposable image and the translations that happen through sharing and repurposing. The internet is a flawed database, more telling of our treatment of the contexts of images than of the content within the image. By accessing it blindly we accept the biases that underlie the archive.

It has not taken long for algorithms to be unleashed on the internet and reveal troubling results about how algorithms develop bias. Microsoft created a bot to learn from Twitter interactions with 18-24 year olds and respond with hopes of fitting in to this demographic. Within a day the bot, named Tay, "went from upbeat conversationalist to foul-mouthed, racist Holocaust denier who said feminists "should all die and burn in hell" and that the actor "ricky gervais learned totalitarianism from adolf hitler, the inventor of atheism"" (Garcia, 111). The unfiltered bias of the algorithm quickly revealed that even though the technology of the bot worked as expected, the data the bot consumed greatly influenced what the bot created, with no understanding of or responsibility for the content it was producing, even though that content could have a great effect on those who interacted with it.
Tay may be an extreme example of how bias plays out, and it may seem as if bias matters less when outputting large amounts of poor-quality architectural images, but bias does have an architectural parallel in the idea of style. Because of market forces, and architects being required to sell themselves as a recognizable brand, style has arisen as one of the most important objectives of an architectural work (In What Style Shall We Build?). Images are then used on websites, blogs, and social media feeds to communicate this personal style to those who consume them. Once an architect develops a style there is less and less reason for them to break the mold and develop further; instead they exist to provide the service of supplying the client with an icon that contains the signifiers of the architect. This leads to the possibility of architects doing very little to push their own practices, or the design profession, once they become successful. The AI algorithm runs into the same problem, as it cannot produce relationships that do not present themselves in the dataset. The algorithm can only repeat the systems of the past.

In the Zaha dataset this is shown in the fact that, in trying to create a new Zaha Hadid Architects project, the algorithm was combining elements from the projects within the input dataset. This produces the illusion of progress and development, but it functions in a way that does not push any aspect of design; it creates objects that can be recognized by their associations and assigned a value that makes them icons, but offers not much more. In Charles Jencks' work Architecture 2000, he sets out to predict the future of architecture over the last thirty years of the 20th century. Style is presented within the context of classifying architectural styles to predict where they might be headed, and it is given much more worth as the way of understanding and portraying the intentions and goals of an architectural design. For Jencks this becomes important, for "without specifying the goals and values inherent in creation, we have no way of specifying the responsibility of an architect nor of judging whether a trend is positive or negative" (Architecture 2000, 48). If style is looked at more critically than as merely the means of developing an icon, it becomes a main part of architectural purpose, one that must be communicated in all forms of architectural representation and, most importantly, reach beyond the digital simulation and into the physical results.

If AI continues to exert its influence on the field of architecture, the easily quantifiable tasks will soon become obsolete, leaving architects to work in the areas of the field that are more subjective and harder, or impossible, to optimize. If this is the future, the realm of style has the potential to be one where architects spend a majority of their time. Technologies will continue to be introduced that influence this realm, the beginnings of which can be seen in the DCGAN. This will place key importance on the architect being able to understand what defines style and how it is portrayed and curated through the use of images. By understanding how images define aspects of style, architects will be able to use them as a powerful way to create discourse around the influence of style in the realm of architecture.
Curation is not solely a product of the visual world; especially when collecting thousands of images, language also becomes a key factor. Language is a key starting point when describing projects: clients and architects have discussions, developers create goals and advertisements, architects hold talks about their works, and critics write and shape how projects are perpetuated. This language forms the descriptors of the symbol and becomes the language of style. Simplifications of this language become the tags which classify the images that display these elements of style. As this exists in the realm of the internet, context shifts these meanings. Inadequate thought about the systems of curation can result in a disconnect between the written language and the language of images. The systems employed in curating a bias or style for the algorithm to create within gain importance, but they must be related back to the outputs.

With AI in its initial stages of creating cultural material it is open for experimentation, even with the current state of algorithms producing what are far from believable results. It is within the context of low-quality images that most of the experimentation happens. High-end AI research requires a large amount of capital to run the computationally heavy algorithms. In many cases this research is being done by companies that have responsibilities to shareholders to profit. Some large tech companies have started to move toward releasing their tech as open-source research, but only when it does not affect their bottom line. In Hito Steyerl's In Defence of the Poor Image, the low-quality image is presented as the product of the public system of the internet, where an image is "distributed for free, squeezed through slow digital connections, compressed, reproduced, ripped, remixed, as well as copied and pasted into other channels of distribution" (Steyerl). It is a setting that destroys the fetish of the high-quality reproduction in favor of a visual language that tends to "express all the contradictions of the contemporary crowd: its opportunism, narcissism, desire for autonomy and creation, its inability to focus or make up its mind, its constant readiness for transgression and simultaneous submission" (Steyerl). With the technology and input data expressing themselves in the realm of the poor image, the products have become abstractions of the original forms that existed in a higher visual realm, a realm that is inaccessible and has been used, in its current conception, to develop a language of icon and profit. The DCGAN's results bring the image "towards abstraction: it is a visual idea in its very becoming" (Steyerl). By creating in the realm of the poor image, a context can be explored that accepts the benefits and drawbacks of a more public, more diverse conversation about the role and creation of style in the realm of architecture.

Looking at the trend of how this could play out in the future practice of architecture, one seems to hark back to the early myth of the architect: the lone genius creating a masterpiece. As AI replaces the basic jobs of the architect there is the potential for firms to shrink in size, as fewer people are needed to produce a design than before. In Nicholas Negroponte's early hypothesis of AI from the 1970s, Architecture Machine, he creates a future where the architect is sitting and talking to the computer about the design.
The computer responds in a way that is human-like, and one is left to question why the architect would want to replace the human with a thing that is essentially just replicating human function. I suggest the answer could be that the AI is the perfect underling: it learns only what it is told, it produces consistently, it doesn't argue, and it doesn't need to eat or go home. For the arrogant it provides the ability to replicate one's own ideal in a posthuman form that can function in a predictable way and consistently turn out designs that align with one's principles. While this aligns with the idea of the posthuman "who has been enhanced by new technologies that his or her abilities exceed that of a natural human" (Gronlund, 89), it promotes a way of working that only entrenches one in one's bias and does not offer challenge. Diversity becomes lost in the design process, and the potential for the development of ideas goes down as AI perpetuates the learned principles without even the smallest deviation. The illusion of diversity will be maintained, as the datasets that are trained on can still be manipulated to challenge each other. But if the methods of interpretation are not open to systems of diversity, the products will never extend beyond an individual's bias. The consolidation of the architectural process has the potential to further remove critique from design by further isolating us as individuals creating within society.

With the DCGAN there is a potential for AI to operate in fields that are not traditionally about optimization. Instead AI can help produce cultural objects, but the machine is removed from the social responsibility for what it creates; it is here that the architect has the opportunity to collaborate in a way that pushes the critique and use of style beyond the creation of recognizable icons. Too easily the illusion is created that the algorithm will be able to process more and more data and output neutral results. This is never a possibility, as the role of bias is always present. If this is not understood, AI has the potential to further isolate the designer and allow their bias to progress unchallenged through the design process. This runs the risk of losing the diversity within the design profession that is integral to individual and collective growth.

GPII - Elements of Architecture

Experiment 2

Experiment 2 looked to build upon some of the failures of experiment 1 by attempting to create outputs from the algorithm with more clarity. To accomplish this, it was determined that the dataset should move away from trying to recreate an entire building and find a focus that is easier to display in 178x178 pixels. By breaking the building into elements, the datasets would have a more explicit formal intent that would enhance the similarity between images. This similarity would work better with the constraints of the algorithm and hopefully create results that were more recognizable. Breaking architecture down to its elements is not a novel approach. For the 2014 Venice Biennale, Rem Koolhaas proposed a view of architecture as a sum of its parts. He theorized a way of understanding the basic building blocks of architecture (doors, walls, windows, etc.) and the social, economic, and cultural forces that produced them.

By bringing the GAN to the realm of the element, a reduction down to pixel relationships would take place, removing the contextual information that designers associate with these elements.
It would then be up to the architect interpreting them to reassign this contextual data back onto the images.

I decided that little curation would be done in deciding the content that would be considered for each category. The categories were formed by image scraping Google, Flickr, and other image search engines with a generic search term and using the images associated with that tag. This began with the terms "window" and "door", as they were the two categories in Koolhaas' elements whose forms could be most easily matched up within a dataset.

After image scraping, the images were processed and formatted for use with the algorithm. This involved square cropping the images so that the door or window was at the center of the frame and roughly the same size, then outputting them into two resulting datasets at a resolution of 178x178 pixels. A selection from each of the resulting datasets is shown in Fig. 29 and Fig. 30.

Fig. 29 Selection of Windows Dataset Batch Downloaded from Google/Flickr. Created by Author.

Fig. 30 Selection of Doors Dataset Batch Downloaded from Google/Flickr. Created by Author.
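A minimal sketch of this centre-crop-and-resize step, assuming the scraped images already sit in a local folder; the paths and naming scheme are hypothetical, and in practice the window or door would still need to be manually centred or filtered.

```python
from pathlib import Path
from PIL import Image

SIZE = 178  # output resolution used throughout these experiments

def center_square(img):
    """Crop the largest centred square from an image."""
    w, h = img.size
    side = min(w, h)
    left, top = (w - side) // 2, (h - side) // 2
    return img.crop((left, top, left + side, top + side))

src = Path("scraped/windows")   # hypothetical folder of raw search-engine results
dst = Path("dataset/windows")
dst.mkdir(parents=True, exist_ok=True)

for i, path in enumerate(sorted(src.glob("*.jpg"))):
    img = Image.open(path).convert("RGB")
    center_square(img).resize((SIZE, SIZE), Image.LANCZOS).save(dst / f"{i:05d}.jpg")
```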
Results

The results the algorithm produced moved closer to accomplishing the goal of producing photorealistic images. Tailoring the dataset towards the strengths of the algorithm created results that were more recognizable, and in doing so more was revealed about the workings of the algorithm. The first thing recognized within the training results was the edges within the images. If the original dataset contained strong connected edges, those would develop first in the output images and would always be maintained, while the forms and colours around them continuously changed.

Fig. 31 Selection of Windows Results. Created by Author.

Fig. 32 Selection of Doors Results. Created by Author.

While formally the images could be viewed as more of a success than experiment one, the element images lost a lot as they became too recognizable. I hypothesize that this is in part a result of the curation process. The populist approach of accepting every image removed an architectural intent from the dataset and placed it within how the public, and specifically the image-uploading public, views windows. The dataset became full of what seem to be travel photographs of windows. The results echo this, as a generic window with a "Barcelona" twist dominates the output. The same happens with the doors, but with a less recognizable aesthetic. Instead there is a generic output that offers little to take from it, as the algorithm produces without an intent, trying instead to satisfy all the pixel data. This reveals a key split in methods of creation between the human and the GAN. It became clear that to move forward with the GAN, a process would have to be developed where I, as the designer, was setting up the GAN to function within a more intentional design setting and using it to stimulate production in a very specific way.

GPII - Counterfeiting Daily

Experiment 3 - Design Process

The first two experiments had provided valuable knowledge about the workings of the GAN. The best outcomes were the results that engaged the designer critically in the process of curating and interpreting the images that the GAN received and produced. Through this knowledge the GAN could become part of a design process, and the next logical step was to explore what that design process could be.

The GAN already contained an adversarial design process within the system its code set up: the generator versus discriminator relationship. This is a relationship that was earlier described as artist versus critic, but after experimenting with the algorithm and researching the technology it became clear that labeling the generator an artist may be too generous in terms of creativity. Instead the generator functions much more like a counterfeiter, in the sense that it is trying to create within a certain set of constraints that attempt to fool the critic into thinking that the output image is something it is not. This relationship can be seen in Fig. 33, which shows two actors collaborating in an adversarial process: the generator (counterfeiter) synthesizes images and updates its understanding of what fools the critic based on feedback from the discriminator, while the discriminator (critic) compares images to the dataset, ranks them as real or fake, and updates its understanding of real and fake images.

Fig. 33 GAN Process Diagram. Created by Author.

Although we do not often term the traditional design process adversarial, it could be thought of in the same way. In more traditional design processes there is an adversarial relationship between designer and critic, whether through engaging with others or within an individual's own mind. Fig. 34 lays out a comparison showing how designers propose design solutions and then critically engage with what they put forward. They then update their understanding based on the feedback provided and revise their solution to the problem accordingly. This process continues at least until the design is finished, and in many cases afterwards, in understanding the design once it has been implemented.

Fig. 34 Traditional Design Process Diagram. Created by Author.

The two processes, though different in what they perceive and accomplish, overlap; to move forward with a GAN design process, a process that uses the strengths of both the designer and the algorithm must be proposed. A process where the GAN challenges the designer to engage with areas that they would not move towards without the collaboration of the GAN. Figure 35 outlines a proposal for the beginnings of a collaborative design process, with the designer curating data and processing the GAN's output while the GAN trains on the curated images and produces image iterations for the designer, the two working towards a common design goal.

Fig. 35 GAN Collaborative Design Process Diagram. Created by Author.
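The counterfeiter-critic loop of Fig. 33 can be made concrete in a short sketch. This is a generic PyTorch illustration rather than the DCGAN-Tensorflow implementation actually used in these experiments; `generator` and `discriminator` stand in for the two convolutional networks, and the discriminator is assumed to output one real/fake logit per image.

```python
import torch
import torch.nn as nn

bce = nn.BCEWithLogitsLoss()
Z_DIM = 100  # length of the random noise vector fed to the generator

def train_step(generator, discriminator, g_opt, d_opt, real_images):
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # Critic's turn: rank dataset images as real and synthesized images as fake.
    fakes = generator(torch.randn(batch, Z_DIM)).detach()
    d_loss = bce(discriminator(real_images), real_labels) + \
             bce(discriminator(fakes), fake_labels)
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Counterfeiter's turn: update the generator so its images fool the critic.
    g_loss = bce(discriminator(generator(torch.randn(batch, Z_DIM))), real_labels)
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

    return d_loss.item(), g_loss.item()
```

Each pass through the dataset (an epoch) repeats this step batch by batch, which is why the training grids earlier in this report evolve gradually rather than jumping to finished images.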
Experiment 3 - Design Problem

Experiment 3 - Site

The Dataset - Designer Output/Algorithm Input

The previous experiments with the GAN revealed that, to produce usable results, the initial inputs needed to maintain similarity in form; they also needed to be curated with an architectural intent. For this I went to the main source that designers turn to when in need of architectural precedent images: archdaily.com. Over the past couple of years I have engaged in conversation with many colleagues and professors who question how we deal with precedent. A practice that many seem to engage in is finding an image based on what is classified as visual interest, dropping it on a board with your project, and claiming that the qualities found in the image are those of one's own project. Often these are perspective images that are not thought of in terms of scale or concept but strictly in terms of the visual qualities they suggest. Many discussions question whether this is a valid use of precedent, but that does not stop it from happening; images show up and are often preferred in presentations over plan and section. The GAN is the ideal tool for exploring this image-centric way of working. It takes the process of designing through images to the extreme. Instead of looking at one or two images it can process thousands, taking a process already engaged in by designers further than we could ever take it on our own. For this reason archdaily, the supermarket of image precedent, is the logical place from which to construct a dataset. Its images are created by architects to show off aspects of design, and they are also curated to push forward a certain design bias by those deciding what gets shown on the site and what does not. Every year this bias is on full display as archdaily releases its best-of categories, summing up that year's designs.

As data for the GAN I decided to use the best-of category Best Houses of 2018, in an attempt to understand and create/counterfeit what archdaily would consider a "best" single-family home. Data collection began with downloading all the images from the 80 projects in this category. These images would form the basis for the design decisions in the project by becoming smaller categories that I would feed to the GAN, to get results that would further the design process and address the decisions that I, as a designer, would need to make.

To start, I needed to develop a site that would fit within the requirements of an archdaily site. Of the images downloaded from archdaily there was a total of 62 site plans. This presented a small problem, as the DCGAN algorithm does not function well on small datasets. To increase the number of images, I augmented the dataset by rotating the images 90, 180, and 270 degrees, as well as mirroring and flipping them; each original thus yielded eight variants, bringing the dataset to a total of 496 images, which is still small but much more workable than the original 62.

Fig. 36 Selection from Site Dataset. Created by Author.
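The eightfold augmentation described above amounts to generating the mirror-and-rotation symmetries of each plan; a minimal sketch, with hypothetical folder names, might look like this.

```python
from pathlib import Path
from PIL import Image

src = Path("dataset/site_plans")      # hypothetical folder holding the 62 originals
dst = Path("dataset/site_plans_aug")
dst.mkdir(parents=True, exist_ok=True)

ROTATIONS = (None, Image.ROTATE_90, Image.ROTATE_180, Image.ROTATE_270)

for path in src.glob("*.jpg"):
    img = Image.open(path)
    # Two mirror states times four rotations gives the eight variants per plan.
    for tag, base in (("o", img), ("f", img.transpose(Image.FLIP_LEFT_RIGHT))):
        for k, rot in enumerate(ROTATIONS):
            out = base if rot is None else base.transpose(rot)
            out.save(dst / f"{path.stem}_{tag}{k * 90}.jpg")

# 62 originals x 2 mirror states x 4 rotations = 496 training images
```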
The Results - Algorithm Output/Designer Input

The small dataset caused problems, with the algorithm quickly overtraining and producing outputs that all looked the same and were heavily influenced by the last site plan it had trained on. I attempted to stop this by running short training runs on the algorithm and then selecting images from the results that differed from one another.

Fig. 37 Selection from Site Results. Created by Author.

Interpretation - Algorithm Input/Designer Output

I then took 9 of these resulting images and turned them into the beginnings of site plans, attempting to interpret the differing combinations of form and colour within the results into aspects such as building massing, roads, and trees. A total of 9 potential sites were initially developed, 4 of which are shown below.

Fig. 38 Selection from Site Interpretations. Created by Author.

Experiment 3 - Exterior Massings

The Dataset - Designer Output/Algorithm Input

The interpreted site massings found within the site plans needed to be refined based on the exterior perspectives, the forms they suggested, and how they engaged with the site. As the exterior images were mainly focused on the building form, this became much more of a driving factor than site relationship. To create the dataset, I went back to the original archdaily images and selected all images that contained the building exteriors. These were then cropped down to square format, in an attempt to make the buildings roughly the same size across the images and so provide better results. The resulting dataset contained 554 images that were then passed along to the algorithm.

Fig. 39 Selection from Exteriors Dataset. Created by Author.

The Results - Algorithm Output/Designer Input

After 300 epochs of training, the algorithm produced images with enough clarity and enough difference to work on. These forms differed from earlier experiments in that they contained enough information to suggest actual buildings but did not move towards being generic. The images have taken on the quality of the GAN and the poor-resolution images it works on. Within the images there are suggestions that mimic aspects of the original dataset. The fact that the GAN only understands pixel relationships means that it suggests forms with elements that do not make sense and are hard to interpret. Forms contain impossible perspectives and impossible structures, because the contexts that a designer would bring to a project are not even slightly considered by the GAN. It is in these images that the aspect of the glitch and the GAN aesthetic have begun to take shape.

Fig. 40 Selection from Exteriors Results. Created by Author.

Interpretation - Algorithm Input/Designer Output

The GAN had provided many inputs for me as a designer to interpret and bring to the site massings. I began the process by trying to interpret the forms found within the images. These initial explorations took the form of small sketches (Fig 41), based off a selection of the GAN outputs that I found to contain an idea that was evocative or could inspire a design concept.

Fig. 41 Selection from Exteriors Interpretations. Created by Author.

I then matched the images based on the conceptual sketches as well as other visual aspects, such as formal and colour similarities. Bringing the new image combinations back to the site plans, I began to match the images with the potential massings that could work within each site and with what I, as the designer, read into the potential that the rough site massings suggested. From these images rough building plans were formed and combined with the sites to turn them into site axonometrics, providing a sense of what each potential combination so far was suggesting.

Fig. 42 Selection from Refining Axonometrics. Created by Author.

Site Selection

Moving forward it was decided to refine site #9. The site places the building mass in a flat section just before the site drops off into a surrounding valley.
On the side of the house nearest the drop, a second storey extends out, providing a vantage point above the trees. This massing, located on this site, provides potential for further refinement and is the best candidate for counterfeiting an archdaily best home.

Fig. 43 Site #9 Refining Axonometric. Created by Author.

Experiment 3 - Interior/Plan Development

The Dataset - Designer Output/Algorithm Input

The next step in furthering the design was to start refining the massing based on the interior program. For this, a program needed to be developed. I went back to the original archdaily dataset and began to curate all the interior images. In total the dataset contained 687 images of interiors. For analysis I further broke these images down into more specific datasets. This breakdown gave me exact percentages, which became the program areas (Fig 44): Kitchen 13%, Living 19%, Dining 12%, Bedroom 11%, Bathroom 7%, Generic 38%. From the image data I was able to ascertain that, to be an archdaily best house, the space should be allotted according to what the image data provided. When feeding the images to the algorithm I went back to the complete dataset, as it was the only way to maintain a dataset with a usable number of images.

Fig. 44 Program Diagram. Created by Author.

Fig. 45 Selection from Interiors Dataset. Created by Author.
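The program areas in Fig. 44 follow directly from counting the hand-sorted interior images; a trivial sketch of the derivation, with illustrative per-category counts (the true counts are not recorded here, but these sum to the 687 interiors and round to the published percentages).

```python
# Illustrative per-category counts from hand-sorting the 687 interior images.
counts = {"kitchen": 89, "living": 130, "dining": 82,
          "bedroom": 76, "bathroom": 48, "generic": 262}

total = sum(counts.values())  # 687
for room, n in counts.items():
    # Each category's share of the images becomes its share of the program area.
    print(f"{room:>8}: {n / total:5.1%}")
```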
The Results - Algorithm Output/Designer Input

The interior output images were not as big a success in terms of clarity as the exterior images, as the interiors contained more formal difference within the dataset images. As a result, the images produced by the algorithm push the GAN aesthetic seen in the exteriors dataset even further. In the resulting images, forms can be seen that elicit connections to conventional forms within the original dataset, but the interpretation of the GAN is much more at the forefront, creating impossible spaces that contain many possibilities for interpretation.

Fig. 46 Selection from Interiors Results. Created by Author.

Interpretation - Algorithm Input/Designer Output

Using the concepts found within the images, I interpreted which program space each image would be associated with. Combining this with the program areas, the images began to be placed around the rough plan (Fig 47) to define the character and concept behind each of the individual spaces, but also the relationships through which the spaces could connect with each other.

Fig. 47 Interior Program Placement Diagram. Created by Author.

Experiment 3 - Plans/Sections/Elevations

Bringing the interior information together with the exterior and site interpretations from the GAN, I synthesized the collaborative results into more traditional design drawings, and the resulting single-family home was designed.

Fig. 48 Site Plan. Created by Author.
Fig. 49 Second Floor Plan. Created by Author.
Fig. 50 Long Section. Created by Author.
Fig. 51 Short Section. Created by Author.
Fig. 52 Elevation A. Created by Author.
Fig. 53 Elevation B. Created by Author.
Fig. 54 Elevation C. Created by Author.
Fig. 55 Elevation D. Created by Author.

Experiment 3 - Renders

For the process of creating renders I looked back to an earlier version of a GAN I had used, the pix2pix algorithm. The algorithm is trained by feeding in two corresponding images, and it learns to map what is found in one image to the other. For my purposes I created two datasets, for interiors and exteriors, by taking the outcomes previously produced by the GAN and running them through an image-to-line-tracing script. I then paired the images with their corresponding line drawings (Fig 56) and trained the pix2pix algorithm on the image pairs.

When looking at the traditional design drawings on their own, it was clear that by focusing on the hand of the designer, the aesthetic and qualities of the GAN that had informed much of the process had been lost. Very few aspects of the aesthetic were left over. By reverting to traditional design drawing I had straightened all the lines and lost the imaginative qualities that had been so prominent in the creation of the project. I had, in a sense, over-trained or over-corrected. The next step would be to see if I could feed the drawings back to the algorithm and have it fight back against the hand of the designer.

Fig. 56 Exterior (above) and Interior (below) pix2pix Pairs. Created by Author.

I then prepped the elevations to become inputs for the pix2pix model by converting the traditional CAD line drawings into versions that resembled the line-tracing outputs that pix2pix was training on. The elevations were then used to create perspective line drawings of different exterior locations (Fig 57). For the interiors I pulled line-drawing perspectives from a Rhino model using Make2D. I then brought the images into Photoshop and changed the straight vector linework into forms found in the interior GAN output line-tracings (Fig 58).

Fig. 57 Exterior Line Drawings for Algorithm Input. Created by Author.

Fig. 58 Interior Line Drawings for Algorithm Input. Created by Author.

The line drawings were then fed into the pix2pix algorithm as individual images, and the resulting images were returned as renders, in an attempt to pull the images back towards the aesthetic of the GAN.

Fig. 59 Pix2pix Renders. Created by Author.
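The pairing step described above, coupling each image with a line tracing of itself in a single side-by-side frame (the format pix2pix implementations commonly train on), could be sketched roughly as follows; the edge filter stands in for whatever image-to-line-tracing script was actually used, and the paths are hypothetical.

```python
from pathlib import Path
from PIL import Image, ImageFilter, ImageOps

src = Path("gan_outputs/exteriors")    # hypothetical folder of GAN result images
dst = Path("pix2pix/exterior_pairs")
dst.mkdir(parents=True, exist_ok=True)

for path in src.glob("*.png"):
    img = Image.open(path).convert("RGB")
    # Stand-in line tracing: detect edges, then invert so the lines read as
    # dark strokes on a light ground, like a drawing.
    trace = ImageOps.invert(img.convert("L").filter(ImageFilter.FIND_EDGES)).convert("RGB")
    # pix2pix trains on {input, target} pairs stored side by side in one image.
    pair = Image.new("RGB", (img.width * 2, img.height))
    pair.paste(trace, (0, 0))
    pair.paste(img, (img.width, 0))
    pair.save(dst / path.name)
```

Once trained, the model is fed the prepared line drawings on their own and returns renders in the aesthetic it learned from the pairs.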
Experiment 3 - Conclusion

Through using GANs to collaborate in the design process, I brought myself into a design process that challenged my more traditional way of thinking about design. In many ways the critiques associated with images were maintained in this recontextualization of them. In the collaborative process, little focus is given to the important elements of human experience other than the visual. While this critique does apply, it would be too simple to say that the GAN offers nothing valuable because it only works in one aspect of design while disregarding others. The strength of the GAN lies in its ability to remove images from the typical contextual elements with which designers approach them. The removal of these contexts is evident in the images it creates and then hands back to the designer. For the designer this creates a relationship with images that is different from the one that currently dominates image culture. While the GAN presents us with enough images to mimic the infinite scroll we are used to, we are not able to pass through the images in the same way. The images are abstract enough that they speak a visual language that, at this point, I as a designer do not comprehend. To read into the images I must engage with each of them in a more critical way, using them to push my creativity and imagination. It has created an experience with images that was not there when endlessly scrolling through the pages of design blogs, hoping something would catch my eye so I could place it in my project. By thinking critically about the GAN and its inputs and outputs, I have regained a critical respect for images, one that allows me to see a future that is not dominated by disengagement because of images, but one in which design drawing can thoughtfully and fully embrace a digital, image-dominated culture.

Bibliography - Works Cited

Alexander, Zeynep Ç. "Neo-Naturalism." Log, no. 31, 2014, pp. 23-30.

Davis, Daniel. A History of Parametric. http://www.danieldavis.com/a-history-of-parametric/. 2014.

Davis, Daniel. "Evaluating Buildings With Computation and Machine Learning". POSTHUMAN FRONTIERS: Data, Designers, and Cognitive Machines [Proceedings of the 36th Annual Conference of the Association for Computer Aided Design in Architecture (ACADIA)]. ACADIA, 2016.

Difford, Richard. "Infinite Horizons: Le Corbusier, the Pavillon De l'Esprit Nouveau Dioramas and the Science of Visual Distance." The Journal of Architecture, vol. 22, no. 5, 2017, pp. 825-853.

Evans, Robin. Translations from Drawing to Building. MIT Press, Cambridge, Mass, 1997.

Funke, Bettina. "Not Objects So Much as Images". Aesthetics in the 21st Century, Speculations V. Punctum Books, Brooklyn, 2014.

Funke, Bettina. "On the Risk of Images". Guyton Price Smith Walker: Kunsthalle Zurich. 38th Street Publishers, New York, 2008.

Garcia, Megan. "Racist in the Machine." World Policy Journal, vol. 33, no. 4, 2016, pp. 111-117.

Gronlund, Melissa. Contemporary Art and Digital Culture. Routledge, Taylor & Francis Group, London; New York, 2017.

Jencks, Charles. Architecture 2000: Predictions and Methods. Praeger, New York, 1971.

Jencks, Charles. In What Style Shall We Build? EMAP Architecture, London, 2015.

Kim, Taehoon. DCGAN-Tensorflow. https://carpedm20.github.io/faces/.

Marble, Scott. "Everything that can be Measured Will be Measured." Technology|Architecture + Design, vol. 2, no. 2, 2018, pp. 127-129.

May, John. "Everything is Already an Image". Log 40, Spring/Summer 2017. Anyone Corporation, New York, 2017.

Price, Cedric. Technology is the Answer But What Was the Question?. Pidgeon Digital, 1979.

Price, Seth. Dispersion. http://www.distributedhistory.com/Dispersion2016.pdf. 2002-2016.

Proving Ground. New Machine Learning Examples with LunchBoxML. https://provingground.io/2018/03/12/new-machine-learning-examples-with-lunchboxml/. 2018.

Purdy, Daniel L. On the Ruins of Babel: Architectural Metaphor in German Thought. Cornell University Press, Ithaca, N.Y, 2011. doi:10.7591/j.ctt7zh96.

Ruskin, John. The Elements of Perspective, Arranged for the use of Schools, and Intended to be Read in Connexion with the First Three Books of Euclid. New York, United States, 1873.

Sadler, Simon. Archigram: Architecture without Architecture. MIT Press, Cambridge, Mass, 2005.

Schumacher, Patrik. "Parametricism." Architectural Design, vol. 79, no. 4, 2009, pp. 14-23.

Sramek, Peter, et al. Piercing Time: Paris After Marville and Atget 1865-2012. Intellect, Bristol; Chicago, IL, 2014.

Steadman, Philip. "Allegory, Realism, and Vermeer's use of the Camera Obscura." Early Science and Medicine, vol. 10, no. 2, 2005, pp. 287-314.
Steyerl, Hito. "In Defence of the Poor Image". E-flux Journal #10, November 2009.

Bibliography - Image Sources

1. Palladio, Andrea. Villa Almerico (Villa Rotunda), from I quattro libri dell'architettura di Andrea Palladio (Book 2, page 19). https://www.metmuseum.org/art/collection/search/698054. 1570.
2. Vermeer, Johannes. The Kitchen Maid (The Milkmaid). Artstor, library-artstor-org.ezproxy.library.ubc.ca/asset/ARIJKMUSEUMIG_10313628004. 1660.
3. Ruskin, John. "Figure 4 Problem 1". The Elements of Perspective, Arranged for the use of Schools, and Intended to be Read in Connexion with the First Three Books of Euclid. New York, United States. 1873.
4. Ruskin, John. "Figure 25 Problem X". The Elements of Perspective, Arranged for the use of Schools, and Intended to be Read in Connexion with the First Three Books of Euclid. New York, United States. 1873.
5. Le Corbusier. Plan Voisin. Artstor, library-artstor-org.ezproxy.library.ubc.ca/asset/AWSS35953_35953_34647162. 1925.
6. Daguerre, Louis-Jacques-Mandé. View of the diorama of the Boulevard des Capucines. Artstor, library-artstor-org.ezproxy.library.ubc.ca/asset/ARMNIG_10313259326. Early 19th century.
7. Marville, Charles. Rue au lard. Artstor, library-artstor-org.ezproxy.library.ubc.ca/asset/AMICO_CLARK_103906701. 1860-1870.
8. Atget, Eugène. Parisian rooftops. Artstor, library-artstor-org.ezproxy.library.ubc.ca/asset/ARTSTOR_103_41822001205127. 1900.
9. Stieglitz. Fountain. 1917. Artstor, library-artstor-org.ezproxy.library.ubc.ca/asset/AMCADIG_10310847706.
10. Cook, Peter. Plug-in University Node, project Elevation. Artstor, library-artstor-org.ezproxy.library.ubc.ca/asset/MOMA_4350008. 1965.
11. Price, Cedric. Generator, project White Oak, Florida Perspective. Drawing date: 1978-80, Project date: 1978-80. Artstor, library-artstor-org.ezproxy.library.ubc.ca/asset/MOMA_24580003.
12. Gehry, Frank. Guggenheim Museum Bilbao, exterior, facade and tower from northwest. 1991-1997. Artstor, library-artstor-org.ezproxy.library.ubc.ca/asset/ALIEBERMANIG_10313050483. 2009.
13. Frei Otto Experimenting with Soap Bubbles. Frei Otto. https://www.researchgate.net/figure/Frei-Otto-Experimenting-with-Soap-Bubbles_fig2_318103333. 1961.
14. Gaudí, Antoni. Sagrada Familia Church. Artstor, library-artstor-org.ezproxy.library.ubc.ca/asset/AWSS35953_35953_29401015. 1882.
15. Screenshot of provingground.io. https://provingground.io/2018/10/29/aec-tech-2018-lunchboxml-workshop-examples/. 2018.
16. Screenshot of algorithms.design. https://algorithms.design/. 2018.
17. Neuron Diagram. Made by Author.
18. Perceptron Diagram. Made by Author.
19. Neural Network Diagram. Made by Author.
20. Karras, Tero. 1024 × 1024 images generated using the CELEBA-HQ dataset. https://arxiv.org/pdf/1710.10196.pdf. 2018.
21. Generative Adversarial Neural Network. Made by Author.
22. Baan, Iwan. Heydar Aliyev Cultural Center. http://www.zaha-hadid.com/architecture/heydar-aliyev-centre/#. 2012.
23. Selection From Zaha Dataset Batch Downloaded From Google #1. Made by Author.
24. Selection From Zaha Dataset Batch Downloaded From Google #2. Made by Author.
25. Results From Zaha Trained GAN, Session 1, Epochs 1-30. Made by Author.
26. Training Sessions 2-5 Results. Made by Author.
27. Enlarged Selection From Results #1. Made by Author.
28. Enlarged Selection From Results #2. Made by Author.
29. Selection of Windows Dataset Batch Downloaded from Google/Flickr. Made by Author.
30. Selection of Doors Dataset Batch Downloaded from Google/Flickr. Made by Author.
31. Selection of Window Results. Made by Author.
32. Selection of Door Results. Made by Author.
33. GAN Process Diagram. Made by Author.
34. Traditional Design Process Diagram. Made by Author.
35. GAN Collaborative Design Process Diagram. Made by Author.
36. Selection from Site Dataset. Made by Author.
37. Selection from Site Results. Made by Author.
38. Selection from Site Interpretations. Made by Author.
39. Selection from Exteriors Dataset. Made by Author.
40. Selection from Exteriors Results. Made by Author.
41. Selection from Exteriors Interpretations. Made by Author.
42. Selection from Refining Axonometrics. Made by Author.
43. Site #9 Refining Axonometric. Made by Author.
44. Program Diagram. Made by Author.
45. Selection from Interiors Dataset. Made by Author.
46. Selection from Interiors Results. Made by Author.
47. Interior Program Placement Diagram. Made by Author.
48. Site Plan. Made by Author.
49. Second Floor Plan. Made by Author.
50. Long Section. Made by Author.
51. Short Section. Made by Author.
52. Elevation A. Made by Author.
53. Elevation B. Made by Author.
54. Elevation C. Made by Author.
55. Elevation D. Made by Author.
56. Exterior (above) and Interior (below) pix2pix Pairs. Made by Author.
57. Exterior Line Drawings for Algorithm Input. Made by Author.
58. Interior Line Drawings for Algorithm Input. Made by Author.
59. Pix2pix Renders. Made by Author.
