RISK MINIMIZATION IN RTS, WITH APPLICATION TO FFTT TIMBER CONSTRUCTION

by

Alfred Larsen

B.A.Sc., The University of British Columbia, 2012

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF APPLIED SCIENCE in THE FACULTY OF GRADUATE AND POSTDOCTORAL STUDIES (Civil Engineering)

THE UNIVERSITY OF BRITISH COLUMBIA (Vancouver)

January 2015

© Alfred Larsen, 2015

Abstract

The seismic risk posed to a structure may be minimized by varying the design characteristics of the structure to find the optimal design. A risk measure, taken in this thesis as the mean value of the cost functions, can be determined using reliability methods to construct a loss curve. This formulation includes the effect of uncertainty in all aspects of the cost, including construction and repair given an event. The risk model requires no prior information to determine the mean cost and does not define a discrete "failure"; instead, it uses a continuum of possible outcomes in determining the mean of the cost functions. The optimization model allows for different search directions and step sizes in the search for the minimum cost, with steepest descent and BFGS search directions currently implemented. These analyses are performed using the Rts software, which is capable of performing the optimization, risk, and reliability analyses on input structural models. The functionality of risk minimization is demonstrated with two example structures, with the framework provided for a third. The first is an example previously solved in Rt, which confirms the functionality of the implementations in Rts. The second uses an analytical model of a single-storey timber-steel hybrid frame based on the novel "Finding the Forest Through the Trees" (FFTT) structural design concept that has been proposed in Vancouver and studied at UBC. The minimum mean cost of this structure, subject to the cost functions and structural simplifications, was determined by optimizing two decision variables that represent the fundamental geometry of the frame. Optimization of this frame converged to one point across many analyses, using both the steepest descent and BFGS search methods. Finally, the framework for a future 6-storey FFTT example was developed. This example is inspired by modern tall timber design concepts, which are discussed in a literature review, and demonstrates unique features within Rts, including deep parameterization and a nested model structure.

Preface

This thesis consists of unpublished work. The FORM and sampling models included in Rts are based on the models in Rt developed by Dr. Mojtaba Mahsuli and Dr. Terje Haukaas. The risk and optimization models in Rts were developed in collaboration with Dr. Haukaas, who developed the basic framework. I performed the testing of these models, fixed bugs, and ensured functionality. Example 1 and Example 2 were developed by me. The example presented in Chapter 5 includes an input file that was expanded upon by me, with the basic inputs written by Dr. Haukaas. The example in Appendix A was developed by me, with the methods taught to me by Dr. Haukaas and Saied Allahdadian. The OpenSees file was modified to its present form from a freely available example provided on the OpenSees wiki, while the Rt input file is original work. This manuscript was written by me and revised with my advisor, Dr. Haukaas. Dr. Tannert also helped with additional revision and editing.
All figures and graphs are my work, with the exception of those with a reference provided.

Table of Contents

Abstract
Preface
Table of Contents
List of Tables
List of Figures
List of Algorithms
List of Abbreviations
Glossary
Acknowledgements
Chapter 1 Introduction
  1.1 Objective and Scope
  1.2 Background
    1.2.1 Reliability-based Design Optimization
    1.2.2 Tall Wood Building Concepts
  1.3 Overview of Contributions
    1.3.1 Risk and Optimization Model Implementation
    1.3.2 Development of Risk Minimization Examples
Chapter 2 Literature Review
  2.1 Timber as an Engineering Material
    2.1.1 Cross-Laminated Timber
    2.1.2 Glued-Laminated Timber
    2.1.3 Seismic Design Concept of Timber
  2.2 State of the Art: Tall Timber Buildings
    2.2.1 Tall Timber Structural Concepts
    2.2.2 Realized Tall Timber Structures
    2.2.3 Future Projects
  2.3 Mass Timber Research
    2.3.1 Flooring and Diaphragm Systems
    2.3.2 CLT Panel Testing
    2.3.3 Steel-Timber Hybrid Testing
    2.3.4 System Testing
  2.4 Research Into Reliability Analysis and Design Optimization
    2.4.1 Timber Reliability
    2.4.2 Tall Building Reliability Analysis
  2.5 Software Overview
    2.5.1 OpenSees
    2.5.2 Rt
Chapter 3 Orchestrating Models in Rts
  3.1 Reliability Analysis
    3.1.1 Definition of a Limit State Function
      3.1.1.1 Classical Reliability Analysis
      3.1.1.2 Performance-based Reliability Analysis
    3.1.2 Definition of Random Variables
    3.1.3 Results from a Reliability Analysis
    3.1.4 Reliability Methods
      3.1.4.1 First-Order Second Moment
      3.1.4.2 First-Order Reliability Method
      3.1.4.3 Second-Order Reliability Method
      3.1.4.4 Sampling Methods
        3.1.4.4.1 Monte Carlo Sampling
        3.1.4.4.2 Importance Sampling
  3.2 Risk Analysis
    3.2.1 Introduction to Risk Models
    3.2.2 Determination of the Mean Cost
    3.2.3 Risk Analysis Algorithm
  3.3 Optimization Analysis
    3.3.1 Introduction to Optimization Analysis
    3.3.2 Reliability-Based Design Optimization
    3.3.3 Risk Minimization
    3.3.4 Optimization Tools
      3.3.4.1 General Algorithm for Optimization Analysis
      3.3.4.2 Steepest Descent Method
      3.3.4.3 BFGS Method
Chapter 4 Optimization Examples
  4.1 Example 1: One-Dimensional, Algebraic Optimization
    4.1.1 Problem Formulation
    4.1.2 Rts Settings Formulation
    4.1.3 Example 1 Results
    4.1.4 Risk Model Tuning
    4.1.5 Discussion of Results
  4.2 Example 2: Two-Dimensional, Algebraic Optimization
    4.2.1 Problem Formulation
      4.2.1.1 Structural Model
      4.2.1.2 Cost Models
        4.2.1.2.1 Cost of Materials
        4.2.1.2.2 Cost of Drift-related Damage
        4.2.1.2.3 Cost of Structural Components
        4.2.1.2.4 Cost of Capacity Design
        4.2.1.2.5 Total Costs Using First-Order Approximation
    4.2.2 Rts Settings Formulation
    4.2.3 Example 2 Results
    4.2.4 Discussion of Results
Chapter 5 Future Structural Example
  5.1 Model Overview
  5.2 Load Models
  5.3 Component Response Models
    5.3.1 Mesh Parameterization
    5.3.2 Beam Model
    5.3.3 Panel Model
    5.3.4 Connection Models
  5.4 Damage and Cost Models
  5.5 Orchestrating Models
    5.5.1 Reliability Models
    5.5.2 Risk Models
    5.5.3 Optimization Models
  5.6 Other Considerations
Chapter 6 Conclusion
  6.1 Summary of Research
  6.2 Future Work
References
Appendices
  Appendix A: Reliability Analysis in Rt Implementing an OpenSees Structural Model
    A.1 Probabilistic Input Parameters
    A.2 File Response
    A.3 Command Response
    A.4 Example Code
    A.5 Rt Code
    A.6 OpenSees Code

List of Tables

Table 2.1 Summary of modern tall timber structures
Table 4.1 Example 1 decision variable
Table 4.2 Example 1 random variables
Table 4.3 Example 1 FORM model settings
Table 4.4 Example 1 sampling model settings
Table 4.5 Example 1 risk model settings
Table 4.6 Example 1 optimization model settings
Table 4.7 Results of Example 1 of steps to optimum
Table 4.8 Comparison of cost functions in Example 2 using first-order approximations
Table 4.9 Example 2 decision variables
Table 4.10 Example 2 random variables
Table 4.11 Example 2 constants
Table 4.12 Example 2 FORM settings
Table 4.13 Example 2 sampling settings
Table 4.14 Example 2 risk model settings
Table 4.15 Example 2 optimization model settings
Table 4.16 Example 2 cost minimization results using steepest descent
Table 4.17 Example 2 cost minimization results using BFGS
Table 4.18 Example 2 steps taken to 0.5% of minimum cost
Table A.1 Example parameters

List of Figures

Figure 2.1 A CLT specimen with 5 layers (Stürzenbecher et al. 2010)
Figure 2.2 Glulam specimens (Tri-State Forest Products n.d.)
Figure 2.3 Example concept of FFTT design with a core (Green and Karsh 2012)
Figure 2.4 Examples of modern timber construction (left to right): e3 (Architecture in Development n.d.), Stadthaus (Techniker 2010), Forte (Lend Lease n.d.)
Figure 3.1 Diagram of model structure and inputs
Figure 3.2 Example exceedance probability curve
Figure 3.3 Example exceedance probability curve with annotated characteristics
Figure 4.1 Mean cost at each optimization step in Example 1
Figure 4.2 Decision variable at each optimization step in Example 1
Figure 4.3 Risk at different values of the design variable in Example 1
Figure 4.4 Exceedance probability at three decision variable values in Example 1
Figure 4.5 Risk model accuracy
Figure 4.6 Risk model precision
Figure 4.7 Computational efficiency of risk measures
Figure 4.8 Effect on precision of the initial sampling iterations (probability requirements 0.5-99.5%)
Figure 4.9 Single-storey FFTT frame
Figure 4.10 Example 2 beam panel connection
Figure 4.11 Single-storey FFTT frame idealization
Figure 4.12 Example 2 optimal configuration for minimum mean cost
Figure 4.13 Example 2 mean cost minimization using steepest descent
Figure 4.14 Detail of Example 2 mean cost minimization using steepest descent
Figure 4.15 Example 2 mean cost minimization using BFGS
Figure 4.16 Detail of Example 2 cost minimization using BFGS
Figure 4.17 Mean cost around the optimum determined using a first-order approximation
Figure 4.18 Mean cost near the optimum determined from Rts
Figure 4.19 Percent Difference of the First-Order Approximation to Rts Mean Cost Values
Figure 4.20 Decision variables throughout optimization using steepest descent
Figure 4.21 Decision variables throughout optimization using BFGS
Figure 4.22 Optimization of the beam clear span in Example 2 using steepest descent
Figure 4.23 Optimization of the beam clear span in Example 2 using BFGS
Figure 4.24 Optimization of the panel depth in Example 2 using steepest descent
Figure 4.25 Optimization of the panel depth in Example 2 using BFGS
Figure 5.1 Schematic of the structural model
Figure 5.2 Cross-section of the steel beam
Figure 5.3 Panel-beam connection detail
Figure 5.4 Detail at panel-ground connection
Figure A.1 Interaction between Rt and OpenSees
Figure A.2 Schematic of the example structure

List of Algorithms

Algorithm 3.1 FORM Algorithm
Algorithm 3.2 Sampling Algorithm
Algorithm 3.3 Risk Algorithm
Algorithm 3.4 Optimization Algorithm

List of Abbreviations

• BFGS algorithm – Broyden-Fletcher-Goldfarb-Shanno algorithm
• COV – Coefficient of Variation
• CLT – Cross-Laminated Timber
• DV – Decision Variable
• EP – Exceedance Probability
• FFTT – Finding the Forest Through the Trees
• FORM – First-Order Reliability Method
• HLRF algorithm – Hasofer-Lind-Rackwitz-Fiessler algorithm
• IS – Importance Sampling
• LSF – Limit-State Function
• LSL – Laminated Strand Lumber
• LVL – Laminated Veneer Lumber
• MCS – Monte Carlo Sampling
• PDF – Probability Density Function
• RBDO – Reliability-Based Design Optimization
• RM – Risk Minimization
• RV – Random Variable
• SORM – Second-Order Reliability Method

Glossary

• Coefficient of variation: a parameter describing a probability density function; the standard deviation divided by the mean
• Constant variable: a parameter that is taken at a single value, with no probability distribution
• Cross-Laminated Timber (CLT): according to FPInnovations (2012), an element with at least three glued layers of boards placed in orthogonally alternating orientation to the neighbouring layers
• Decision variable: a parameter that is taken as a constant during an analysis, but may be varied between optimization iterations
• Finding the Forest Through the Trees (FFTT): a tall timber-steel hybrid building design concept presented by Green and Karsh (2012)
• Random variable: a parameter that is assigned a probability distribution and statistics associated with that distribution (typically mean and coefficient of variation)
• Tall timber: a primarily timber mid- to high-rise structure, taken as above 6 storeys in this thesis

Acknowledgements

I would like to thank my supervisor, Dr. Terje Haukaas, for all of the help and support with this thesis. I am also thankful for my classmates Lisa Tobber, Jeremy Atkinson, Michael Fairhurst, David Optyker, Ben Ticknor, Sai Pai, Abbas Javaherian, and Vasantha Ramani for their help while working on this project and other parts of my degree. I would also like to thank Dr. Thomas Tannert and the other professors in the department for their help with this thesis and other parts of my degree. I would also like to thank my parents, Gwen and Mark, who supported and advised me throughout my time at university. Finally, I must thank all of my friends and roommates at UBC for helping make my time in Vancouver and the surroundings great. Special thanks to Tianna Sturdy and the countless people in the Varsity Outdoor Club who provided many weekends of extracurricular activities throughout, and far beyond, SW BC.

Chapter 1 Introduction

1.1 Objective and Scope

The performance of structures in earthquakes involves many uncertainties in the engineering of the structure. These uncertainties may be included in a risk analysis that implements reliability algorithms to determine exceedance probabilities, expressed in terms of the cost of the structure. These costs include the construction and material cost, the cost of damage, and the cost of a structure failing to perform as designed. The result is a risk measure, which may then be optimized with established optimization techniques to find the configuration of a structure that reduces total costs.
This analysis combines many nested models that contribute to the final cost, and it is novel in the sense that it encapsulates a continuum of structural behaviour, unlike past Reliability-Based Design Optimization (RBDO), which defined a discrete failure event.

The first objective of this thesis is to develop software models in Rts that allow this risk minimization to be performed, and to test these models incrementally. The optimization analysis requires two models to be implemented. The first model is a risk tool, which determines the mean cost arising from a load applied to a model with probabilistic inputs. This model takes input from two reliability models: the First-Order Reliability Method (FORM) and sampling. By first performing a brief sampling analysis, the model determines an approximate mean value of the total cost without needing prior knowledge of its distribution. Once cost thresholds are defined, reliability methods are used to determine the probability of exceeding each threshold, and these probabilities are used to construct an exceedance probability curve, otherwise known as a loss curve. The risk measure is determined from the loss curve; here it is the mean value of the cost functions.
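To make the risk measure concrete: for a nonnegative cost C, the mean equals the area under the loss curve, and a quadrature over discrete cost thresholds, as described above, approximates this integral. This identity is standard probability, stated here for clarity rather than taken from the thesis:

```latex
\mu_C = \mathrm{E}[C]
      = \int_0^\infty P(C > c)\,\mathrm{d}c
      \approx \sum_{i=1}^{n} P(C > c_i)\,\Delta c_i
```

where the thresholds c_i are the cost levels at which the reliability analyses are run, and each P(C > c_i) is one point on the loss curve.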
The second model takes existing optimization techniques and applies them to the risk model. It uses the mean cost output by the risk model and determines its gradient with respect to the decision variables, which are the aspects of the design that may be changed. The gradient is determined by perturbing these decision variables. Optimization techniques then use this information to establish a search direction and determine new values for the decision variables; this is iterated until the minimum risk measure is reached. The risk model thus allows the mean cost of the structure under the given loading to be minimized, yielding the optimal structural design. Note that the cost functions here represent only the cost due to the loading considered, not the entire life cycle of the building.

Once these models are operational, the second objective is to test them with incrementally complex example structural models. The first example is a confirmation of results, using a simple algebraic equation with a single decision variable from a conference paper (Haukaas et al. 2013). The concept of risk minimization is then applied to the "Finding the Forest Through the Trees" (FFTT) tall wood structural concept through a more complex example with two decision variables, many nested variables, and an analytical approximation of a single-storey frame. Finally, the framework for a six-storey FFTT frame with many decision variables and an in-house structural model is provided. This structure has deeply embedded decision variables, which define the basic meshing parameters of the finite element structure. When performed, this example will provide insight into the FFTT structural concept.

1.2 Background

1.2.1 Reliability-based Design Optimization

Reliability analysis has been around since the 1970s (Enevoldsen and Sorensen 1994). Applications to timber structures have been limited; however, Rosowsky (2013) provides a good overview of the literature to date. Much of this work has focused on meeting target reliability indices, or probabilities of failure, of either single components or small assemblies. Some analyses went further, creating loss curves by determining the exceedance probabilities at many thresholds for light-frame timber buildings (Lee and Rosowsky 2006; Yin and Li 2011). To date, there have been no reliability analyses of modern tall timber structures, although tall building analyses have been performed on reinforced concrete structures (Koduru and Haukaas 2010; Soares et al. 2002).

Using reliability methods to optimize structures is also established. RBDO is a method to determine the optimum cost over a series of designs by varying the design. It determines an optimal value using a defined, discrete failure state; the probability of failure follows from this failure state, and the two are combined to determine a cost (Enevoldsen and Sorensen 1994). In contrast, a new method of optimizing a risk measure based on the loss curve of a structure was developed to encapsulate the effects of uncertainty in all aspects of the analysis instead of only a failure state (Haukaas et al. 2013).
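For reference, the reliability index β used as a target in this literature maps to a failure probability through the standard normal cumulative distribution function Φ; this is the standard relation in structural reliability, added here for context:

```latex
p_f = P\left[ g(\mathbf{x}) \le 0 \right] = \Phi(-\beta)
```

where g is the limit-state function, negative in the failure domain. For example, a target index of β = 4.3, as in the truss studies cited in Chapter 2, corresponds to p_f ≈ 8.5 × 10⁻⁶.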
1.2.2 Tall Wood Building Concepts

In recent years, there has been a resurgence of interest in tall timber. In Vancouver, old timber post-and-beam structures are still in use, standing up to 9 storeys (Koo 2013). In Europe and Australia, there have been several modern cross-laminated timber (CLT) buildings up to 10 storeys. Most of these buildings contain apartments and are located in low seismic hazard zones. The building designs vary, implementing either concrete or CLT cores, many with CLT gravity systems.

In addition to these realized buildings, many different concepts have been proposed for future timber-hybrid structures. These vary, from using primarily timber, to using cantilever buildings, concrete cores, and CLT infill panels in steel moment frames (Dickof 2013; Falk 2005; Van de Kuilen et al. 2011; Skidmore Owings & Merrill 2013; Timmer 2011). In particular, Green and Karsh (2012) proposed a tall timber concept for seismic regions, called FFTT. This concept uses CLT panels vertically, linked with steel beams to provide a ductile lateral load-resisting mechanism.

Incremental research has been performed on the inclusion of CLT in seismic regions, much of it focusing on buildings designed with CLT wall components with ductile connections. Component-level research and testing was performed in several studies (Dujic et al. 2008; Hristovski et al. 2012; Popovski et al. 2010; Rinaldin et al. 2013). These studies have been expanded to look at the system response of CLT primary structures, both experimentally and with detailed models, including 3-storey and 7-storey building configurations tested on two shake tables in Japan (Ceccotti et al. 2010; Rinaldin et al. 2013).

In addition to this research, the FFTT building system has itself been the subject of studies. The connection has been investigated both experimentally and numerically at UBC (Azim 2014; Bhat 2013; Fairhurst 2014).

1.3 Overview of Contributions

1.3.1 Risk and Optimization Model Implementation

Two models were developed and implemented in Rts in this thesis and tested to ensure that they work as intended. The first is the full development of the risk model, which receives cost function and probability input from the reliability models and gives the mean cost of a structure under a defined loading. No prior knowledge of the cost distribution is necessary: the model automatically adjusts the cost thresholds as appropriate to adequately estimate the loss curve and provides the mean cost using quadrature.

The optimization module takes existing optimization methods and implements them in Rts. It is formulated such that it calls the input model to determine the current value and the gradient, using finite difference methods. Only a fixed step size is currently implemented, but two search direction methods are available in Rts: steepest descent and BFGS. Each of these methods is used in conjunction with the risk model and tested with two examples.
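As an illustration of the loop just described (finite-difference gradients of the risk measure with respect to the decision variables, followed by a fixed-size step along a search direction), here is a minimal sketch in Python. It is a generic stand-in, not the Rts implementation: the objective below is an arbitrary placeholder for the mean cost returned by the risk model, and only the steepest descent direction is shown; BFGS would instead build a quasi-Newton approximation of the Hessian and step along its inverse applied to the gradient.

```python
import numpy as np

def finite_difference_gradient(f, x, h=1e-4):
    """Approximate the gradient of f at x by forward differences,
    perturbing one decision variable at a time."""
    f0 = f(x)
    grad = np.zeros_like(x)
    for i in range(len(x)):
        xp = x.copy()
        xp[i] += h
        grad[i] = (f(xp) - f0) / h
    return grad

def steepest_descent(f, x0, step=0.1, tol=1e-6, max_iter=100):
    """Fixed-step steepest descent: move opposite the gradient until
    the gradient norm is small or the iteration budget is spent.
    (A fixed step size mirrors the current Rts limitation noted above.)"""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = finite_difference_gradient(f, x)
        gnorm = np.linalg.norm(g)
        if gnorm < tol:
            break
        x = x - step * g / gnorm  # normalized search direction, fixed step
    return x

# Usage: minimize a hypothetical stand-in "mean cost" of two decision variables.
mean_cost = lambda x: (x[0] - 3.0)**2 + 2.0 * (x[1] - 1.5)**2
print(steepest_descent(mean_cost, [1.0, 1.0]))  # converges near (3.0, 1.5)
```

In Rts, each evaluation of the objective corresponds to a full risk analysis, so the cost of every perturbation is one complete loss-curve construction; this is why the number of decision variables strongly affects runtime.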
1.3.2 Development of Risk Minimization Examples

Risk minimization was applied to two examples of increasing complexity, implementing the risk and optimization orchestrating models in Rts. The first example, a single beam, is from a conference paper by Haukaas et al. (2013). This example provided confirmation that the orchestrating models were functioning properly and reproduced the published results. The model was simple, with an algebraic objective function containing five random variables and one decision variable.

The second example was designed as a sub-section of the FFTT system. It consists of an elastic, analytical 2-D model of a single-storey FFTT frame with a lateral load applied. The model contains many nested algebraic models and two decision variables that represent dimensions of the final design, with other parameters implemented as random variables. The orchestrating models were used in this analysis to provide the optimal configuration, i.e., the lowest cost for the cost functions included. This was performed with both search methods implemented.

Finally, the framework of a third example was provided for future development in Rts. This is currently a six-storey FFTT frame, comprised of CLT panels modeled as finite element shells for the columns, with linear steel linking beams. This example demonstrates how structural parameters in the model may be deeply embedded, such that the structural points are based on decision variables, which in turn are the basis of the finite elements. This allows the entire mesh to be varied throughout the optimization process.

Chapter 2 Literature Review

2.1 Timber as an Engineering Material

For the purpose of this thesis, the definition of mass timber follows Green and Karsh (2012): engineered products that include CLT, laminated veneer lumber (LVL), and laminated strand lumber (LSL). In addition, glulam beams are considered within this category. These engineered products benefit from the timber defects being distributed throughout the section, increasing the uniformity and performance of the product despite the inclusion of lower grade timber (Lam 2001). Only CLT and glulam are discussed significantly in this thesis, with the focus on CLT.

2.1.1 Cross-Laminated Timber

CLT was first developed in Austria in the 1990s and has become popular in Europe, yet is relatively new to North America (FPInnovations 2012). CLT is an engineered wood product constructed from several layers of cut timber sections, shown in Figure 2.1. These layers alternate board directions, typically perpendicularly, and are generally composed of an odd number of layers such that the exterior layers are oriented parallel to each other. CLT is manufactured by gluing the timber layers together under a combination of heat and pressure.

Figure 2.1 A CLT specimen with 5 layers (Stürzenbecher et al. 2010)

The size of CLT panels is limited by transportation requirements and manufacturing facilities to approximately 18 m long by 3 m wide. The thickness generally varies between 3 and 7 layers, which can reach upwards of 500 mm. The finished panels provide a very stiff plate, both in and out of plane, and allow for two-way action similar to a concrete slab (FPInnovations 2012). There are two main approaches for designing and analyzing CLT: modified beam theory, as in the CLT Handbook (FPInnovations 2012), or orthotropic plate theories (Guggenberger and Moosbrugger 2006; Stürzenbecher et al. 2010), but no method has been universally accepted.

2.1.2 Glued-Laminated Timber

Glued-laminated timber (glulam) is another engineered wood product, constructed by aligning smaller laminates along the same axis and pressing them with glue to build up a large member. Examples are shown in Figure 2.2. This product is not proprietary and may be used in many applications. One benefit is that the beams may be designed in many shapes and cross-sections as required, allowing great flexibility in use (Lam 2001). Typically, glulam is used as beams and columns in timber frame structures. Ductility in glulam designs is typically ensured by oversizing the beams relative to the connections and by designing a ductile connection. However, by modifying the laminate orientation and grading, or by adding reinforcement, ductility may be created in the member itself (Tomasi et al. 2010).

Figure 2.2 Glulam specimens (Tri-State Forest Products n.d.)

2.1.3 Seismic Design Concept of Timber

In general, the seismic design concept for timber structures concentrates the inelastic action in ductile zones that provide a mechanism for dissipating energy within the system. These ductile areas are typically steel components in the connections, though configurations may vary greatly. How this is implemented is a key point in discussing new tall timber concepts, addressed in Section 2.2.1. These are important considerations, particularly in areas with risk of seismic activity. To protect the brittle components, ductile connections must be designed in conjunction with preventing premature brittle failure modes through capacity design (Jorissen and Fragiacomo 2011). Other work investigated the robustness of structures during seismic loading and discussed good design practices and prescriptive rules to increase system robustness in seismic areas (Branco and Neves 2011). These studies provide a general overview of seismic design and a basis of design, in addition to design texts. FPInnovations has also released a tall wood technical guide (FPInnovations 2014).

2.2 State of the Art: Tall Timber Buildings

Timber structures have been used throughout history for all types of structures, from ancient tall Buddhist temples to modern housing and shops (Smith and Snow 2008). Prior to the modern building codes of the mid-20th century, tall timber structures were built extensively in Canada. These structures were primarily post-and-beam buildings with masonry exteriors. Today, many buildings up to 9 storeys are still in use in Vancouver's Gastown and Yaletown neighbourhoods. For a change in use, these structures must meet new fire and structural upgrade requirements, which include masonry strengthening and improvements to diaphragm-wall connections (Koo 2013).
Despite these tall wood structures, modern timber construction in North America has primarily been for low-rise residential and commercial use. While traditionally these have been constructed with light timber framing, it has become increasingly common to use mass timber products. In recent years, however, the limits of timber buildings have been re-evaluated, with many concepts for mid- and high-rise structures being presented and many structures realized. While the structural focus of the examples in this thesis is the FFTT concept, the following provides a comparison and a background in tall timber, both in Canada and worldwide.

2.2.1 Tall Timber Structural Concepts

Recently, there has been renewed interest in tall timber structures. In British Columbia and Quebec, regulations were changed to allow 6-storey timber construction, up from the 4-storey maximum that is common throughout the rest of Canada. In addition, studies done in the United States and Europe have confirmed the structural and architectural feasibility of tall timber structures (Falk 2005; Van de Kuilen et al. 2011; Skidmore Owings & Merrill 2013; Timmer 2011).

Falk (2005) explores the architectural aspects as well as different structural systems for massive timber structures. The focus is on low- and mid-rise structures, primarily in Sweden, with several case studies of realized buildings. It does not, however, focus on the structural aspects of the buildings, nor include any seismic considerations.

Van de Kuilen et al. (2011) examine 30-40 storey concrete-timber composite towers constructed to resist wind loads in Shanghai, China. The system investigated includes a concrete core with outrigger floors, with CLT walls and tension bars that provide stiffness and extra load capacity for wind forces. This provides a concept for a hybrid timber system, but does not use any innovative new concepts in the primary (reinforced concrete) shear wall.

Timmer (2011) analyzed the feasibility of tall timber office buildings up to 100 m, considering several building configurations. The lateral deflection, fire safety, wind, and dynamics of the structure are considered and modeled. This research included the basis of design for tall timber structures in wind-governed areas, but did not include the effect of seismic excitation on structures.

In 2008, engineers at Techniker performed a feasibility analysis of an 8-storey CLT residential tower, which included checking robustness, structural movement, fire, and acoustics. This report found that the building was feasible and included possible connection details and floor layouts (Yates et al. 2008). In 2009, Techniker completed a 9-storey building with a CLT lateral load system, Stadthaus. Later, Techniker (2010) also provided a conceptual design for a 30-storey tower. That analysis found that above 25 storeys, the most economical configuration would be a hybrid structure utilizing a concrete core.

In Chicago, Skidmore Owings & Merrill (2013) compared a timber-composite building to a benchmark 42-storey concrete building, a concrete apartment tower the company had previously built. The concept discussed for the composite structure included a CLT core and floors, but used concrete link beams as ballast to reduce uplift due to the governing wind forces.

These reports concern buildings that are not located in seismic regions and are designed primarily for wind loads. Green and Karsh (2012) performed a study that looked into the feasibility of a tall timber structure in Vancouver, BC, up to 30 storeys.
These conceptual designs consist generally of CLT panel walls and glulam columns, with CLT floors, and cores consisting of CLT panels connected with steel sections for the lateral load system. An example of one configuration, implementing a core to reach 12 storeys, is shown in Figure 2.3. The report also describes feasible sequencing for the construction of such structures. To date, there have been no realized FFTT designs.

Figure 2.3 Example concept of FFTT design with a core (Green and Karsh 2012)

2.2.2 Realized Tall Timber Structures

Modern tall timber structures are not simply concepts, however. In the past decade, many tall timber structures have been constructed; a selection is summarized in Table 2.1. These are primarily residential buildings in Europe.

Table 2.1 Summary of modern tall timber structures

Name                               | Year completed | Location           | Total storeys
Holzhausen                         | 2006           | Steinhausen, CH    | 6
e3                                 | 2008           | Berlin, DE         | 7
Limnologen                         | 2009           | Växjö, SE          | 8
Stadthaus                          | 2009           | London, UK         | 9
Bridport House                     | 2011           | London, UK         | 8
Forté                              | 2012           | Melbourne, AU      | 10
LifeCycle Tower 1                  | 2012           | Dornbirn, AT       | 8
Wood Innovation and Design Centre  | 2014           | Prince George, BC  | 6

Figure 2.4 Examples of modern timber construction (left to right): e3 (Architecture in Development n.d.), Stadthaus (Techniker 2010), Forté (Lend Lease n.d.)

The first tall timber building in Switzerland was the Holzhausen apartments, finished in 2006. Of its 6 storeys, only the top 4 are constructed of timber, above a concrete podium. The building's core is concrete, but it has CLT walls and beam floors (Lehmann 2012).

The e3 building in Berlin, Germany was completed in 2008. While this 7-storey structure included two concrete cores, the vertical system was CLT. The flooring consisted of glulam beams with a concrete surface. The building also has an external fire escape staircase, made of concrete, that is separate from the building (Lehmann 2012).

The Limnologen project consisted of four apartment buildings in Växjö, Sweden and was completed in 2009 as part of a timber building program in the region. Each of these buildings is eight storeys in total, with a concrete first floor and CLT for the remaining seven. The concrete base allowed for anchoring of the upper storeys, which consist of CLT load-bearing walls and floors. These structures have been instrumented and used as a test case for many studies, which Serrano (2009) summarizes, with additional studies by Bard et al. (2010) on the transmission and attenuation of walking vibrations through the walls and floors of the structure.

The nine-storey apartment building Stadthaus was constructed in London, UK in 2009. Then the tallest mass timber building in the world, it consisted of CLT, rather than a hybrid system, above the ground floor. Of interest, this building used a CLT lateral load resisting system in addition to timber gravity systems, but had a concrete first floor (Lehmann 2012).

In the same London borough as Stadthaus, Hackney, is a second tall timber structure. Finished in 2011, Bridport House is an 8-storey apartment building that consists only of CLT, with no concrete ground floor (Lehmann 2012).

In 2012, the newly constructed Forté, designed by Lend Lease, became the tallest CLT apartment building at 10 storeys. It uses a CLT lateral load resisting system above a concrete first storey.
Notably, RMIT University performed an analysis of the life cycle impact of this building, compared to a reference   15 building, for Forest & Wood Projects Australia and found that it had reduced environmental impact in most of the considered categories (Durlinger et al. 2013). This apartment building also consists of a CLT lateral load resisting system and has a concrete first storey. In Dornbirn, Austria, CREE developed LCT ONE, was constructed in 2012 to 8-storeys using the LifeCycle Tower (LCT) system of prefabrication. The LCT ONE system consists of a prefabricated platform construction using a cast-in-place concrete core, glulam columns, and a concrete-wood composite slab system. In this system, the slab rests on the column tops, which both separates the storeys for fire protection and prevents loading timber perpendicular to the grain. The LCT system developed is projected to be feasible to 30 stories (Zangerl and Tahan 2012). The Wood Innovation and Design Centre in Prince George, BC is the tallest recent timber structure in British Columbia at 6-stories. It consists of a CLT core and floors, with glulam columns. It was completed in October 2014 and will be occupied by the University of Northern British Columbia, as well as offices (Government of British Columbia n.d.). 2.2.3 Future Projects In the summer of 2014, the University of British Columbia released an Expression of Interest for architectural services for a tall wood student residence. It is for a maximum 53 m tall building, planned for construction starting in 2015 and occupancy in 2017. This is intended as a pilot project, and will influence the NBCC 2020 (UBC Properties Trust 2014).  Another project in London, Banyan Wharf, is under construction by Regal Homes. This 10-storey luxury apartment building will be constructed with CLT (York 2014). In Bergen, Norway another tall apartment building is underway, called Treet. Treet consists of a glulam   16 truss structure and is planed to reach 14-storeys; construction was started in 2014 (The Local 2014). 2.3 Mass Timber Research The design and testing of individual components for timber structures has been performed worldwide. First, in this section, several components that have been studied will be looked at, including the flooring, diaphragms, CLT and steel-timber hybrid testing. While the focus in this thesis is upon the seismic aspects of timber systems, relevant tall timber systems are also included as part of the work to date. 2.3.1 Flooring and Diaphragm Systems Integral to any lateral load resisting system, the flooring diaphragm provides a load path for seismic forces to be transmitted to the walls and foundation. While this is the main point of interest, other non-seismic considerations of the flooring systems are included here to provide a brief overview of flooring systems. A recent literature review on timber-concrete composite flooring design was performed by Yeoh et al. (2011), focusing primarily on timber-concrete composite. This system typically comprises of a timber base connected to a concrete slab by some sort of shear connection. This system provides several benefits over traditional timber floors including: minimizing deflections, vibration, acoustic, and fire characteristics. In comparison to a typical concrete slab, it also reduces mass (important for both gravity and seismic designs), allows for faster erection, and contains lower embodied energy. 
There are two general methods of design: the linear elastic method (governed by timber failure) or elastoplastic design (governed by shear connectors yielding). Yeoh et al. also provide a review of various composite connection   17 systems, influence of concrete properties, testing programs that have taken place, finite element modeling, prefabrication, and considerations including fire, acoustics, and vibrations.   The use of CLT flooring systems has been studied at the University of New Brunswick. This has included the broad study of building systems and the inclusion of hybrid timber slab systems that satisfy design requirements (Weckendorf and Smith 2012a). Work was also performed to show that screw-type fasteners provide sufficient strength for connecting CLT flooring (Asiz and Smith 2011) and the identification of the issues due to the CLT slab local dynamics on occupancy serviceability (Weckendorf and Smith 2012b). Additional research on diaphragms subjected to seismic loads was performed at the UBC by Ashtari (2012). This research included calibrated ANSYS models to determine the relative in-plane stiffness of the CLT floor slabs to the longitudinal connections between them. This also included a parametric analysis to determine the influence of various factors on the diaphragms. This research provided a flowchart to help guide designers of CLT floor slabs.  2.3.2 CLT Panel Testing While all timber seismic construction will are composite structures to some degree, this section considers structures made primarily out of CLT panels, with ductile connectors between panels. This is in contrast to Section 2.3.3, which considers systems that have a significant number of steel components relative to timber. At the University of Ljubljana, CLT walls were tested for their shear capacity due to a cyclic lateral load. This testing was performed on both panel segments with and without openings. These experimental tests were then compared to finite element models. The testing   18 determined the stiffness reduction due to openings in the panel (Dujic et al. 2008). Later, CLT panels were tested on a shake table and the results were found to be consistent with the quasi-static cyclic testing results (Hristovski et al. 2012). FPInnovations performed a series of 32 tests on many configurations of CLT panels. These configurations varied the connections, including brackets and hold-downs, including using rivets, screws, and nails. In addition, the wall configurations were varied, with some tall walls, two storey walls, long walls, and walls with openings. From these monotonic and cyclic tests, the hysteric loops and behaviour of the systems could be determined. These walls, with the considered base connections, were determined to have satisfactory seismic behaviour (Popovski et al. 2010).  The behaviour of CLT panels and their connections under earthquake excitation was modeled by Rinaldin et al. (2013). These models used elastic shells for the timber elements, while non-linear spring elements were used for the connections, complete with hysteretic properties defined for the shear and axial motion. These models, which were developed for a single panel, coupled walls, and a single-storey building, were verified with experimental testing give a method to model CLT structures using finite element analysis. 2.3.3 Steel-Timber Hybrid Testing Testing of steel-timber hybrid systems has been an active research area at the University of British Columbia (UBC). 
In particular, two concepts have been actively researched recently at UBC: CLT infill in steel moment frames, and the FFTT structural system.

Dickof (2013) investigated a system where a steel moment frame had CLT infill panels. This work was performed using the OpenSees software for three-, six-, and nine-storey non-linear models. These detailed models combined several different infill panel configurations and moment frame ductilities. The investigation found that CLT infill panels increased strength and stiffness, in addition to reducing the ultimate interstorey drift of the structure. From these results, the ductility and behaviour factors of the system were determined. This system, however, is primarily a steel moment frame that utilizes timber, not a timber-primary system.

The FFTT panel-beam connection was first tested at UBC after the concept was introduced. Bhat (2013) performed material bearing tests on CLT, as well as monotonic and cyclic tests of cantilevered beams embedded in CLT panels. The embedment depth of these beams was varied, in addition to using wide flange and HSS sections. The results showed potential for the weak-beam, strong-column mechanism desired in the FFTT concept. However, the wide flange sections were susceptible to uplift from the CLT section and to out-of-plane buckling, and the HSS sections tested were small relative to practical load demands.

Building on the experimental findings of Bhat, Azim (2014) continued the analysis of FFTT connections at UBC. This included first creating numerical models, which were compared to the testing. Azim then performed a numerical parametric analysis of these connections, determining that CLT crushing at the connection is a significant consideration, even for small beams. Finally, some new connection configurations with longer embedment length and reduced depth were tested experimentally. These new configurations improved the performance, although testing on larger beams is still required for practical applications.

2.3.4 System Testing

In addition to the testing of single components, some large-scale tests were performed during the Construction System Fiemme (SOFIE) project. The SOFIE project was a large experimental project supported by both Trento University and IVALSA-CNR of Italy. The project included several stages that extensively explored the building characteristics, fire resistance, and seismic response of CLT buildings. This was done in stages, starting from individual component testing and culminating with full-scale 3-D shake table testing of 3- and 7-storey CLT buildings (Ceccotti et al. 2010).

In the 3-storey building test, three configurations were tested, with differing numbers and sizes of openings. The design of the building followed the equivalent lateral force procedure of Eurocode 8, with the design location in Italy. When tested, the structure showed no residual displacement at the "near collapse" state, defined as the point where connections had started failing (Ceccotti et al. 2010).

The design of the 7-storey structure used the results from the previous 3-storey testing to determine the behaviour factor, q (similar to the product RdRo in the NBCC). Only one configuration was tested; it also showed no residual displacement of the structure and did not require major repair. This testing demonstrated that CLT structures are feasible in seismic zones (Ceccotti et al. 2010).
(2012) constructed a finite element model of a 10-storey CLT building. This structure was modeled with CLT panels as walls, implementing brackets, designed using Direct Displacement Design. This model was then tested using many scaled ground motions. This allowed an approximation of a behaviour   21 factor for the structure and demonstrated that this type of structure is viable in seismic regions.  Much of the research to date of tall CLT structures in the Pacific NW was summarized by Pei et al. (2014). This includes a definition of a goal of constructing a 10-storey building and showing defined performance targets, in addition to identifying challenges that must be overcome and a plan on how to reach these goals.  At UBC, analytical simulations of tall wooden buildings have been performed. Fairhurst (2014) performed a suite of non-linear time history analysis on several building models, based on varying seismic hazards and heights. In addition, he has worked on the wind loading of the models. This research utilized nonlinear 3-D models in OpenSees to perform the analyses and determined that the structures may be designed to meet seismic requirements; however, wind loading may govern taller structures. 2.4 Research Into Reliability Analysis and Design Optimization To the present date, there have not been any reliability analyses of tall timber structures. However, there are a few reliability studies on either timber structures or on tall buildings.  2.4.1 Timber Reliability Reliability studies have been performed for different timber structures. Some of these structures are a single element reliability analysis, while others look at both linear and non-linear roof truss systems. In addition, the interaction of primary to secondary roof systems on robustness has been performed. While many of these analyses have been studied for snow and live loads, others have been performed with seismic loading.    22 Rosowsky (2013) performed a review of probabilistic modeling and reliability of timber structures. This review discusses the introduction and inclusion of limit state design in codes and then discusses the different approaches of coupled and uncoupled load-resistance reliability analysis. This study then looks at the future of timber reliability in the context of performance-based seismic design and primarily for light frame timber houses.  Toratti et al. (2007) performed a reliability analysis of a single glulam beam, considering the critical section of a simply supported tapered beam in a Finnish supermarket. This analysis was not based upon a failed beam. It compares the reliability of the Eurocode 5 and Finnish Building Code designs, while also considering the failure probability during a 30- and 60-minute fire. The probability of exceeding the maximum bending stress was calculated throughout the year based on different snow loading. Two studies in Sweden investigated the system effect of a light wooden roof truss to gravity loads. These analyses considered variation in loading and strength of the timber and found the spacing required to reach a target reliability index of 4.3. The first study by Hansson and Thelandersson (2002) considered a linear structure, while the second by Hansson and Ellegaard (2006) extended the analysis to consider non-linear nail connections. It was determined that the additional refinement of the analysis did not change the system effect significantly. A similar study was performed in France on a wooden truss by Riahi et al. 
(2011), but included the effects of seismic forces on the truss’s reliability. This model was constructed with non-linear stamped plate connections and run with the 1997 Kobe ground excitation. The First-Order Reliability Method (FORM) limit state was determined by defining a failure   23 displacement. The analysis involved using a finite element model to predict structural responses. The failure displacement was varied to show the sensitivity of the failure probability at each displacement. Due to the failure of several arenas and long-span timber structures in Europe, there was work done to understand the failures and the robustness of these systems. A survey of failed structures identified the critical issues (Dietsch 2011) and a detailed analysis of the design and possible flaws in two collapsed arenas (Munch-Andersen and Dietsch 2011). The analyses considered the primary structural elements (typically large glulam beams) with a long span and the secondary structures (purlins). The reliability analysis of the roof structure was performed and then extended to the robustness of the building. This explored the possibility of propagation of structural failure throughout the system and gave the probability of failure of the structures given errors in the design. These analyses identified the preferable configuration of the structure to consist of an indeterminate and redundant primary structure to allow load redistribution prior to component failure, with a statically determinant secondary structure that prevent the propagation of failure from one component to the next (Dietsch 2011; Miraglia et al. 2011; Sørensen 2011). These studies all consist of system analyses of timber buildings and provide a more pragmatic approach when compared to many of the pedagogical examples of simple structures. Further detailed analyses on similar structures were performed using seismic loading by Branco and Neves (2011), where the effects of added redundancy and ductility were noted to improve robustness.  A non-linear reliability analysis by Li et al. (2011a; b) using experimentally models was performed for the seismic evaluation of two and three-storey post-and-beam structures. In   24 this analysis, the damage to the building was quantified by the peak-interstorey drift experienced by the structure through a library of ground motions. Using the ground motion and design characteristics (peak ground acceleration, structural mass, etc.), response surfaces were generated for the structure. These response surfaces were used to perform the FORM and importance sampling analyses to give the structural system reliability. These were used as indicators of the performance of the structure, in terms of expected drifts and reliability indices. Multiple studies have been performed on the reliability analysis of low light-frame structures, similar to the housing common in North America. Van de Lindt and Walz (2003) developed a hysteretic response for wood shear walls based on experimental data for ten single panel walls. This response utilized in a reliability analysis by running ten ground motions on each wall. This study estimated earthquake return periods and the reliability indices of different drifts at several earthquake return periods. Another study by Li and Ellingwood (2007) looked at the influence of openings in the walls. The considered residential structures were analyzed with three hazard levels and used with twenty ground motions for each hazard level and three wall configurations. 
The probability of failure for different damage states was then identified.

Further research on light-frame buildings combined snow and seismic loading for reliability analyses. Lee and Rosowsky (2006) looked at this load combination at three US sites and used a convolution integral to combine the multiple hazards. This study developed seismic fragility curves after running non-linear time history analyses for different percentages of snow loads. A later study looked at this topic as well, but with a different methodology for combining the snow and seismic loads. Yin and Li (2011) considered two separate models for the earthquake loading, a Poisson process for occurrence and a Type II distribution for intensity, together with a filtered Poisson process for snow loads that increased the load in each event. These models were then combined in the structural analysis to predict the interstorey drift response and create loss curves in terms of monetary value.

All of the research on timber reliability to date has focused on short timber buildings, both light-frame and heavy construction. There are studies that consider both snow and seismic loading, for both individual elements and structural assemblies. However, no research was found on the few tall timber structures built to date. Additionally, most of these analyses focus on achieving a reliability index; only a few construct a loss curve to determine the risk to the structure. Many of these analyses did, however, consider complete structural systems and used several methods to determine the reliability, including the union of many limit states, a single indicator limit state, and aggregated building states to determine loss curves.

2.4.2 Tall Building Reliability Analysis

A reliability analysis performed by Koduru and Haukaas (2010) considered a realized 15-storey reinforced concrete shear wall building in Vancouver. This probabilistic analysis used synthetic ground motions with variable modulating function characteristics for crustal, subcrustal, and subduction earthquakes. The analysis was performed in OpenSees using Monte Carlo simulations that gave monetary loss curves based on the damage models for the structural and non-structural components, to demonstrate a unified reliability analysis.

Another system reliability analysis was performed by Soares et al. (2002) on reinforced concrete frame structures that included non-linearities in the system. These geometric and material non-linearities were modeled with a response surface using least squares regression to provide an explicit limit state function in terms of the member's force capacity. This reliability analysis also included a comparison of safety factors to reliability indices and a parametric analysis that demonstrated the method with a practical example.

While not aimed at timber reliability and risk analyses, these two studies provide a basis for a reliability and risk analysis of tall buildings. In contrast to many of the timber reliability analyses, these studies focused on the global system behaviour of the structure. In particular, Koduru and Haukaas (2010) focused on the cost performance of the structure, not on component responses. This is in line with the focus of this thesis, which attempts to minimize the global cost of the structure given an event, rather than to meet target reliability indices.

2.5 Software Overview

Three programs are central to the analyses in this thesis.
OpenSees and Rt are important for background information and previous work and are discussed briefly here. The third program, Rts, is a new program built on the foundation of Rt; it is used in the remaining chapters for orchestrating models and examples.

2.5.1 OpenSees

OpenSees is an object-based program first developed at the Pacific Earthquake Engineering Research (PEER) Centre in Berkeley, California. This open source program provides a platform for non-linear simulation of structural responses under earthquake excitation. OpenSees may be utilized in both Rt and Rts. Rt and Rts can call an OpenSees analysis, then use the response from the structural analysis as an input to other models, as discussed in Appendix A.

2.5.2 Rt

Rt is a program developed by Mahsuli and Haukaas (2013a; b) at UBC to provide a reliability analysis platform. It is an object-oriented application that focuses on performance-based earthquake analyses of large systems or regions, using simplified models for damage to individual buildings. As mentioned above, it also allows OpenSees to be used as an external model, allowing a detailed structural model to be used in a reliability analysis; a guide for this is included in Appendix A. Further information on Rt and its operation may be found in the development references.

Chapter 3 Orchestrating Models in Rts

Rts implements a multi-model platform, with many models intended for running simulations. Four orchestrating models have been implemented in this new program, which use responses from other models, such as algebraic expressions or structural models. These are the reliability (FORM and sampling), risk, and optimization models. The assembled structure of these orchestrating models is given in Figure 3.1. An overview of the operation of the reliability models is given in this chapter. The current implementation is consistent with prior models, with minor changes. In contrast, the risk model is a novel algorithm and implementation, and the optimization model is a new implementation in Rts using established techniques.

Figure 3.1 Diagram of model structure and inputs

Rts is based upon the framework of Rt, but extends the functionality. A key element of this program is the deep parameterization available when defining both random and decision variables, and the use of these variables in other aspects of the program. This includes basing the structural mesh upon parameters, which then change during analyses. In addition, a variety of models may be nested, with each model running those it depends upon when called.

Rts is still under development, but future plans include a repair manager to categorically decide upon repair actions and costs, while also estimating the amount of visible damage. Development will also include ground motion models, as well as non-linear structural elements. Additionally, direct differentiation procedures are implemented (with the exception of the risk model) that allow for quick gradient calculations. For these reasons, this program will provide a good foundation for structural design optimization in a reliability-based context. Here, the focus is on the orchestrating models and how they have been implemented in Rts; a conceptual sketch of the nested evaluation follows.
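To make the nested model structure of Figure 3.1 concrete, the following Python sketch mimics the call chain from the optimization model down to the cost model. All class names, cost expressions, and numerical values are hypothetical illustrations; the actual Rts implementation is in C++ and replaces the crude sampling below with the risk algorithm of Section 3.2.

```python
import numpy as np

# All class names and cost expressions below are hypothetical illustrations of
# the nested model structure in Figure 3.1, not the actual Rts (C++) API.
rng = np.random.default_rng(0)

class CostModel:
    """Innermost model: total cost for one set of realizations x at design y."""
    def response(self, x, y):
        return (x[0] + x[1] * y**2) + x[2] / y**4

class RiskModel:
    """Orchestrating model: mean cost at design y. A crude sampling estimate
    stands in for the EP-curve construction of Section 3.2."""
    def __init__(self, cost_model):
        self.cost_model = cost_model
    def response(self, y, n=20_000):
        x = rng.lognormal(np.log([[200.0], [600.0], [50.0]]), 0.1, (3, n))
        return self.cost_model.response(x, y).mean()

class OptimizationModel:
    """Outermost model: steepest descent on the risk model's response."""
    def __init__(self, risk_model):
        self.risk = risk_model
    def run(self, y, step=1e-4, iters=30):
        for _ in range(iters):
            h = 0.10 * y                      # relative perturbation (FDM)
            grad = (self.risk.response(y + h) - self.risk.response(y)) / h
            y -= step * grad                  # Eq. (3.19)-style update
        return y

# Calling the outer model triggers the whole nested chain at every iteration
print(OptimizationModel(RiskModel(CostModel())).run(0.5))
```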
3.1 Reliability Analysis

Reliability analysis is the study and prediction of the probability of an event occurring for a structural system. In the past, a reliability analysis compared loading with resistance to determine the probability of failure of a system, i.e., the probability that the load exceeds the resistance. In the context of this thesis, however, a reliability analysis is used as a model to quantify the probability of exceeding a cost threshold. This analysis may then be used within other risk or optimization analyses. The goal of reliability analysis is to take the uncertainties in the model into account when determining the response of the system, through two key components: the limit-state function and the random variables.

The major benefit of a reliability analysis is the inclusion of probabilistic information about the model, such as load intensity, geometry, or material properties. This makes it a powerful tool for determining the probability of a response exceeding a threshold. However, a reliability analysis only gives this probability; it is not possible to directly identify a target reliability and work backwards to determine a design. In addition, the robustness of the algorithms used varies, and convergence issues may occur for very rare events, as well as for highly non-linear limit state functions.

3.1.1 Definition of a Limit State Function

The limit state function (LSF), which defines the state of the system, is one of the two basic ingredients of a reliability analysis; the random variables are the second. When the LSF is greater than zero, it represents an intact state, while a value less than zero represents a failed system. Using an LSF, it is possible to determine the failure probability of the system for the chosen limit state criteria using the limit state surface, where the LSF has a value of zero. The LSF allows the failure domain to be defined for the reliability problem (Der Kiureghian 2005a). The criteria vary depending on the problem, but examples include load demand and resistance, deflection, or cost. The formulation of the LSF depends upon the formulation of the reliability analysis performed, as discussed below.

3.1.1.1 Classical Reliability Analysis

A classical reliability analysis is typically performed with the limit state identified as an engineering parameter. This engineering parameter is generally a physical measurement that may be taken to represent the structure. Examples of such representative limit state parameters are the interstorey drift of the building, the deflection of a component or structure, or the stress or strain state of a specific component. Both these parameters and their limiting values must be chosen with good judgment, since the reliability analysis depends upon these assumptions. Two example LSF formulations could be:

$$g(\mathbf{x}) = \delta_{\max} - \delta(\mathbf{x}) \qquad (3.1)$$

$$g(\mathbf{x}) = 1 - \frac{\sigma(\mathbf{x})}{f_y} \qquad (3.2)$$

Eq. (3.1) gives an LSF that compares a deflection against a maximum allowable deflection, while the LSF in Eq. (3.2) evaluates the observed stress as a ratio of the yield stress. In either case, failure occurs when the LSF takes a negative value.

There are two approaches to the reliability analysis of a system. One utilizes an indicator response for the LSF, which allows the response to be summarized in one parameter (interstorey drift, for instance). In contrast, a System Reliability model could be used that checks many LSFs.
The failure state could consist of several limit states of separate components occurring in parallel, or occurring singly in series, or a combination of both (Thoft-Christensen 2005). This is not to be confused with the reliability of structural systems, as Systems Reliability is a field of study in its own right. While research has been done on System Reliability methods and is discussed by Thoft-Christensen (2005) for different problems, individual component limit states are not pursued in this thesis in favour of a single aggregated cost limit state.

3.1.1.2 Performance-based Reliability Analysis

In comparison to "classical reliability analysis," performance-based analysis goes beyond the failure probability. A "classical analysis" may focus on one parameter as an indicator of the building state and try to reach a target failure probability. Performance-based reliability analyses, in the context of this thesis, aggregate all of the possible costs of the building, including construction and repair for all components. There are many benefits to approaching the reliability problem with this method. It removes the assumption that one engineering response is representative of the whole structure, by taking account of all components in the structural model. In addition, it does not define discrete states for individual components. Rather, each component is given a continuous cost of repair based on the structural response. This cost of repair is compiled for all components in the structure and implemented in the LSF below:

$$g(\mathbf{x}) = C_{threshold} - C_{construction}(\mathbf{x}) - C_{repair}(\mathbf{x}) \qquad (3.3)$$

The formulation in Eq. (3.3) includes costs from all aspects of interest in the structure, subtracted from a threshold cost of interest. This formulation also allows a reliability analysis to be easily extended into the Risk Minimization analyses discussed in this thesis. By considering several magnitudes of cost, a probability of exceedance curve may be constructed from many analyses, and risk measures may be obtained from it.

3.1.2 Definition of Random Variables

The other key input in a reliability analysis is the random variables. Since the random variables are defined with a probabilistic distribution and the requisite information (including mean, coefficient of variation (COV), etc.), it is possible to solve for an event's probability of occurrence. Random variables may be included in the LSF either explicitly or implicitly. Examples of random variables include geometric and material parameters, load intensities, or cost values. These are typically given in a vector format, such as:

$$\mathbf{X} = \begin{bmatrix} E_s \\ f_y \\ P_{wind} \\ M \\ \vdots \end{bmatrix} \qquad (3.4)$$

There are several different distributions for characterizing random variables using continuous probability density functions. The distribution used, along with its parameters (including mean, standard deviation, or other model parameters), may be based on experimental results, historical results, or engineering judgment.

3.1.3 Results from a Reliability Analysis

A reliability analysis gives an estimate of the probability of an event occurring, as defined by the LSF. This result may then be used as a parameter for further models, such as a risk or optimization analysis (discussed further in Sections 3.2 and 3.3). A minimal sketch of evaluating a cost-based limit state of the form of Eq. (3.3) is given below.
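In this sketch, the probability that the limit state of Eq. (3.3) is negative, i.e., that the total cost exceeds one threshold on the loss curve, is estimated by brute-force sampling; the cost expressions and distribution parameters are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(0)

def limit_state(x, c_threshold):
    """Eq. (3.3): g > 0 means the total cost stays below the threshold."""
    c_construction, c_repair = x
    return c_threshold - c_construction - c_repair

# Realizations of lognormal construction and repair costs (assumed parameters)
n = 100_000
x = np.exp(np.array([[np.log(500.0)], [np.log(120.0)]])
           + 0.15 * rng.standard_normal((2, n)))

# Probability of exceeding one cost threshold on the loss curve
print(np.mean(limit_state(x, 800.0) < 0.0))
```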
The failure probability may also be given interchangeably as the reliability index, β, which is related to the failure probability through the standard normal distribution function, Φ, shown here:

$$p_f = \Phi(-\beta) \qquad (3.5)$$

In addition to the failure probability, the reliability analysis also determines the alpha vector. This may be used as an importance measure to rank the relative importance of the random variables.

As mentioned earlier, a reliability analysis on its own cannot be used to directly determine a design for a given failure probability criterion. This is because a fixed design enters the analysis and may not be changed within a single analysis. However, with iterative designs and analyses, an appropriate optimal design may be determined.

3.1.4 Reliability Methods

Once a limit state function has been chosen and the random variables defined, a reliability analysis may be performed. There are many methods for performing this analysis, each with benefits and shortcomings. Regardless of the technique, all provide the operator with the failure probability or reliability index. What follows is a brief summary of several techniques: First-Order Second Moment, First-Order Reliability Method, Second-Order Reliability Method, and Sampling. This does not include the full procedures or derivations, which are well accepted and found in reliability textbooks.

3.1.4.1 First-Order Second Moment

The First-Order Second Moment (FOSM) method allows calculation of the reliability index using only the mean and standard deviation (the second moment) information of the limit state function. This reliability analysis, also known as the Mean Value FOSM method, does not account for any other aspects of the distributions, using only the probability transformations to determine the second moment information and a first-order linear approximation of the LSF. Using this information, the reliability index may be calculated with:

$$\beta = \frac{\mu_g}{\sigma_g} \qquad (3.6)$$

Eq. (3.6) determines the reliability index from the mean, µg, and standard deviation, σg, of the LSF (Haukaas 2014a). This allows the failure probability to be calculated for normally distributed problems using Eq. (3.5). The approach makes a first-order approximation of the LSF, which is exact for linear LSFs. However, this approximation results in the "invariance problem" for non-linear LSFs, where equivalent LSFs in different algebraic formulations will not give the same result.

While FOSM is capable of quick calculations due to its simple formulation, it has several disadvantages. It is inappropriate for non-linear LSFs, as the first-order approximation does not account for the non-linearities (Choi et al. 2007). Because of this, it is also not a suitable check for other analyses. Additionally, the failure probability is not explicitly calculated and may only be determined from the reliability index. In practice, FOSM is not broadly useful, as it cannot be used for non-linear problems; for linear problems, FORM may be used instead and will converge in a single step. As a result, FOSM is not implemented in Rts. A minimal numerical sketch of Eqs. (3.5) and (3.6) is nonetheless given below.
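The sketch below computes the Mean Value FOSM reliability index for an assumed linear capacity-minus-demand LSF with independent normal variables, a case for which the first-order result is exact; all numbers are illustrative.

```python
import numpy as np
from scipy.stats import norm

# Mean Value FOSM for an assumed linear LSF g(x) = x1 - x2 (capacity minus
# demand) with independent normal variables; the numbers are illustrative.
mu = np.array([10.0, 6.0])          # means of the random variables
sigma = np.array([1.5, 1.0])        # standard deviations

grad = np.array([1.0, -1.0])        # dg/dx, exact for this linear LSF
mu_g = grad @ mu                    # mean of the LSF
sigma_g = np.sqrt(np.sum((grad * sigma) ** 2))

beta = mu_g / sigma_g               # Eq. (3.6): reliability index
pf = norm.cdf(-beta)                # Eq. (3.5): failure probability
print(beta, pf)                     # beta ~ 2.22, pf ~ 1.3%
```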
3.1.4.2 First-Order Reliability Method

The First-Order Reliability Method (FORM) determines the reliability index by transforming the variables into a standard normal space. This transformation gives the variables a mean of zero and a unit standard deviation. In this standard space, the transformed LSF may also be plotted, with the reliability index found as the shortest distance from the origin to the limit state surface (Choi et al. 2007). This method solves the invariance problem of FOSM, as all equivalent LSFs will have the same surface (Haukaas 2014b).

FORM uses a linearization of the limit state surface and assumes that it extends perpendicularly from the chord segment from the origin to the design point. It is beyond this linear surface that the failure probability is determined. For non-linear LSFs, this may result in errors in the failure probability determined by FORM. In addition, analysis of highly non-linear LSFs may cause issues with convergence.

A single-constraint optimization algorithm performs the search for the design point, the closest point on the limit state surface to the origin in the standard normal space. The constraint is that the limit state function is set to zero, while the distance from the origin to the design point is minimized (Haukaas 2014b). This results in two convergence criteria: one to test that the design point is sufficiently close to the limit state surface, and a second to test that the point is the closest one to the origin.

Algorithm 3.1 FORM Algorithm
1. Select starting point in SNS (standard normal space)
2. Transform into original space
3. Evaluate LSF
4. Evaluate gradient of LSF
5. Check convergence criteria
   5.1. If converged, give design point in original space and failure probability
   5.2. If not converged, compute next design point in SNS (step 1) and repeat until converged

The FORM algorithm gives the general steps required in the analysis. Typically, the origin is used as the starting point in step 1. Step 2 implements probability transformations to convert back to the original random variable space in order to determine the value of the LSF. The gradient of the LSF in step 4 is more involved and may be computed using the finite difference method (FDM) or the direct differentiation method (DDM), if implemented. Step 5 checks that the convergence criteria are within a given tolerance defined for the analysis. If the analysis has converged, the design point is known and the failure probability may be easily calculated. Conversely, if the analysis iterates further, the search for the next design point commences, using the Armijo step size and HLRF search direction algorithms.

A FORM analysis provides a computationally efficient method of determining the failure probability accurately, while also providing information about the importance factors for the analysis. In addition, it may use DDM sensitivities in the calculation of the gradient, saving computational effort. Inclusion of DDM capabilities also allows it to be used by other governing models (such as the risk model). The main downside of FORM analyses is that the algorithm lacks robustness and will not converge quickly for all problems. A minimal sketch of the FORM iteration with the HLRF search direction is given below.
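The following sketch implements Algorithm 3.1 with the basic HLRF update. Independent normal random variables are assumed, so the probability transformation reduces to a simple scaling; the mildly non-linear LSF is an assumed toy example, and no Armijo step size rule is included.

```python
import numpy as np
from scipy.stats import norm

# FORM with the basic HLRF search direction (Algorithm 3.1). Independent
# normal variables are assumed, so the probability transformation is a simple
# scaling; the mildly non-linear LSF is a toy example.
mu, sigma = np.array([10.0, 6.0]), np.array([1.5, 1.0])

def g(x):
    return x[0]**2 / 20.0 - x[1] + 4.0

def grad_g(x, h=1e-6):              # forward finite-difference gradient
    g0 = g(x)
    return np.array([(g(x + h * np.eye(2)[i]) - g0) / h for i in range(2)])

u = np.zeros(2)                                # step 1: start at the SNS origin
for _ in range(20):
    x = mu + sigma * u                         # step 2: transform to original space
    gval, dgdx = g(x), grad_g(x)               # steps 3-4: LSF and its gradient
    dgdu = dgdx * sigma                        # chain rule through the transform
    norm_dg = np.linalg.norm(dgdu)
    alpha = -dgdu / norm_dg                    # negative normalized gradient
    on_surface = abs(gval) / norm_dg < 1e-6    # step 5: convergence checks
    aligned = np.linalg.norm(u - (alpha @ u) * alpha) < 1e-6
    if on_surface and aligned:
        break
    u = (alpha @ u + gval / norm_dg) * alpha   # HLRF update of the trial point

beta = np.linalg.norm(u)
print(beta, norm.cdf(-beta))                   # reliability index and pf
```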
3.1.4.3 Second-Order Reliability Method

The Second-Order Reliability Method (SORM) is an extension of the FORM analysis, including a curvature approximation around the design point. This curvature corrects the failure probability calculated by FORM, which assumes a tangent line at the design point on the limit state surface. The curvature may be determined by differentiating the LSF at the design point, which is only practical for explicit LSFs or LSFs with readily available higher-order derivatives. Alternatively, the curvature may be approximated by using two of the trial design points in conjunction with the final design point (Der Kiureghian 2005a).

SORM analysis provides a more refined version of FORM, with little additional analysis necessary. In practice, for highly non-linear LSFs, SORM may not provide adequate results despite correcting the FORM analysis. In these cases, a sampling method may be preferred. SORM is not currently implemented in Rts, but may be added in the future as a modifier for a FORM analysis.

3.1.4.4 Sampling Methods

Sampling analyses use many realizations to approximate the failure probability. There are several different methods to produce the samples, two of which are mentioned here. Monte Carlo sampling produces samples according to the distribution of the variables and does not require any numerical correction. In comparison, Importance Sampling centres the sampling around a chosen point and then corrects the failure probability accordingly. Sampling methods are theoretically simple and easy to implement. In addition, they are robust and give results for non-linear limit states that may not converge with other algorithms. A downside is that DDM sensitivities may not be used in sampling analyses.

3.1.4.4.1 Monte Carlo Sampling

Monte Carlo Sampling (MCS, or generally referred to simply as sampling) is a method of generating many realizations to determine the probability of occurrence. This is a computationally heavy, but theoretically simple, method. It is performed by determining the values of the random variables from their probability distributions and then computing the response (Choi et al. 2007). The algorithm for sampling used in Rts is below.

Algorithm 3.2 Sampling Algorithm
1. Generate random number outcomes
2. Transform realizations to original random variable space
3. Solve LSF and collect result in an indicator function
4. Update the failure probability from observed results
5. Iterate until desired coefficient of variation or iteration count is reached

The key element of a sampling analysis is the probability transformation from the random number generator to the random variable distributions. After the LSF is solved, the response is tallied using an indicator function, I(x), which is given a unit value if it is in the failure region and is otherwise zero. This gives an estimate of the failure probability after N samples using:

$$p_f = \frac{1}{N}\sum_{i=1}^{N} I(\mathbf{x}_i) \qquad (3.7)$$

As implemented in Rts, sampling will continue until either the maximum number of iterations or a target COV is reached, with the current COV given by:

$$\delta = \frac{1}{\sqrt{N}}\,\frac{\sqrt{\mathrm{Var}_{sampling}}}{\mathrm{Mean}_{sampling}} \qquad (3.8)$$

MCS is advantageous in that it can be applied to all problems relatively simply and is a good method for checking other analyses. This comes at a price, however: MCS is very computationally expensive, particularly for determining results with small target coefficients of variation or events with a low probability of occurrence (Haukaas 2014c). Additionally, MCS of high probability events (almost "sure thing") requires a low target COV; otherwise, a single failure may achieve the target COV without yielding a satisfactory analysis. In particular, MCS is useful for highly non-linear LSFs that may not be adequately analyzed by FORM or SORM. MCS is also useful in providing robustness to the Rts risk model, as it provides a method to analyze cost thresholds that did not converge with FORM. A minimal sketch of Algorithm 3.2 follows.
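The following sketch implements Algorithm 3.2 with the COV stopping rule of Eq. (3.8); the lognormal limit state inputs are assumed values.

```python
import numpy as np

# Monte Carlo sampling per Algorithm 3.2, with the COV stopping rule of
# Eq. (3.8); the lognormal limit state inputs are assumed values.
rng = np.random.default_rng(0)

def g(x):                            # toy capacity-minus-demand limit state
    return x[0] - x[1]

max_samples, target_cov = 200_000, 0.05
fails = 0
for n in range(1, max_samples + 1):
    # steps 1-2: standard normal outcomes, transformed to lognormal variables
    u = rng.standard_normal(2)
    x = np.exp(np.array([np.log(10.0), np.log(5.0)]) + 0.25 * u)
    fails += g(x) < 0                # step 3: indicator function I(x)
    if fails >= 10:                  # steps 4-5: update pf and check the COV
        pf = fails / n
        if np.sqrt((1.0 - pf) / (n * pf)) < target_cov:
            break

print(fails / n, n)                  # Eq. (3.7) estimate and samples used
```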
3.1.4.4.2 Importance Sampling

Another sampling method is Importance Sampling (IS), a refined form of MCS. Unlike MCS, where the sampling is centred around the origin (of the standard normal space), the IS samples are centred around another point. This point is typically the design point determined from a FORM analysis. If the distribution centre is selected appropriately, many more of the sampled points fall into the failure domain. The final failure probability may be determined by adjusting the results based on the known sampling centre (Choi et al. 2007). This method provides better results for fewer sampling iterations compared to MCS and is significantly more efficient for events with small failure probabilities.

IS follows the Sampling Algorithm in Section 3.1.4.4.1, with some modifications to the steps, using the notation from Haukaas (2014c). The random numbers are transformed to a modified probability distribution h(x) instead of the random variable distribution φ(x). This gives modified expressions for the indicator function and failure probability, now formulated as:

$$q(\mathbf{x}) = I(\mathbf{x})\,\frac{\varphi(\mathbf{x})}{h(\mathbf{x})} \qquad (3.9)$$

$$p_f = \frac{1}{N}\sum_{i=1}^{N} q(\mathbf{x}_i) \qquad (3.10)$$

IS is not currently implemented in Rts, but provides an additional reliability method that could be implemented in the future. It could be beneficial for determining low probability events in the risk model. A minimal sketch of the reweighting in Eqs. (3.9) and (3.10) is given below.
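In this sketch, the samples are centred at an assumed FORM design point of a toy linear LSF expressed directly in the standard normal space, then reweighted by the density ratio of Eq. (3.9).

```python
import numpy as np
from scipy.stats import multivariate_normal as mvn

# Importance sampling per Eqs. (3.9)-(3.10): samples are centred at an assumed
# FORM design point u_star in the standard normal space, then reweighted.
rng = np.random.default_rng(0)

def g(u):                             # toy linear LSF expressed in the SNS
    return 2.2 - u[:, 0] - 0.5 * u[:, 1]

u_star = np.array([1.76, 0.88])       # design point of this LSF (beta ~ 1.97)
N = 5_000
u = u_star + rng.standard_normal((N, 2))   # samples from h = N(u_star, I)

# Eq. (3.9): indicator times the ratio of the true to the sampling density
q = (g(u) < 0) * mvn.pdf(u, mean=np.zeros(2)) / mvn.pdf(u, mean=u_star)
print(q.mean())                       # Eq. (3.10): pf ~ Phi(-1.97) ~ 0.024
```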
3.2 Risk Analysis

A risk analysis provides a method for characterizing the risk in a probabilistic model. Rather than considering one particular threshold or limit state, as in a reliability analysis, it compiles many analyses and provides an overall risk measure. As currently implemented in Rts, only the mean cost is used as a risk measure.

3.2.1 Introduction to Risk Models

Risk modeling is a tool for quantifying the cost of a structure, which in this thesis provides an objective for an optimization. A risk analysis includes constructing an exceedance probability (EP) or loss curve, such as the one in Figure 3.2. The EP curve is constructed by performing a reliability analysis at many thresholds. Based on an EP curve, a risk measure may then be evaluated. In this case the mean cost is determined using the area under the curve, as derived in Section 3.2.2.

Figure 3.2 Example exceedance probability curve

Using risk measures provides the benefit of quantifying the costs of an event occurring, instead of using a single criterion as calculated in a reliability analysis. This allows decisions to be made based on various risk measures. These risk measures may be chosen as appropriate for the stakeholders, allowing flexibility in quantifying risk. In addition, different risk measures allow the inclusion of the effects of rare tail events (Haukaas 2008). Most importantly, the use of risk models provides an objective function for an optimization analysis. This implementation also requires no prior knowledge of the system, as a preliminary sampling analysis is performed when determining the risk measure.

3.2.2 Determination of the Mean Cost

There is currently only one risk measure implemented in the risk model: the mean cost. As described in the literature, the mean cost may be determined from the area under the EP curve (Haukaas 2014d; Haukaas et al. 2013; Der Kiureghian 2005b). Following the notation of Haukaas (2014d), the mean cost is the expected cost from the PDF of the cost distribution, fC(c), given by the relation:

$$\mu_C = E[C] = \int_{-\infty}^{\infty} c \cdot f_C(c)\,dc \qquad (3.11)$$

However, the EP curve is the complementary cumulative distribution function (CCDF), GC(c), which is related to the cumulative distribution function, FC(c), and the PDF, fC(c), as shown below:

$$G_C(c) = 1 - F_C(c) = 1 - \int_{-\infty}^{c} f_C(s)\,ds \qquad (3.12)$$

Using this relation, and noting that fC(c) is the negative derivative of GC(c), the expected cost of a non-negative cost variable may be expressed in terms of the CCDF as:

$$E[C] = -\int_{0}^{\infty} c \cdot G_C'(c)\,dc \qquad (3.13)$$

It is possible to integrate this by parts and drop a vanishing boundary term to determine:

$$E[C] = -\big[c \cdot G_C(c)\big]_0^{\infty} + \int_{0}^{\infty} G_C(c)\,dc = \int_{0}^{\infty} G_C(c)\,dc \qquad (3.14)$$

As a result, the mean cost may be determined as the integral of the EP curve, and the response of the risk model may be given as the mean by numerically integrating the EP curve.

3.2.3 Risk Analysis Algorithm

The risk analysis implements reliability methods, including FORM and sampling, to develop an EP curve and determine risk measures. This is performed by varying the LSF using different cost thresholds to determine the probability of exceeding each cost (Haukaas 2008). The algorithm currently implemented in Rts is presented here. More information on the selection of its parameters is given in Section 4.1.4.

Algorithm 3.3 Risk Algorithm
1. Perform initial sampling
   1.1. Save the user-entered sampling information
   1.2. Perform a sampling analysis on the input parameter (for 200 samples)
   1.3. Return mean and standard deviation from sampling data
   1.4. Reset the sampling model with the user-entered information
2. Determine thresholds
   2.1. Attempt 5.0 std. dev. spread from mean
      2.1.1. Check that mean divided by the std. dev. is greater than the spread
         2.1.1.1. If yes, use this number of std. dev. spread from the mean
         2.1.1.2. If not, reduce the number of std. dev. spread by 0.25 and check again
   2.2. Determine thresholds, dividing the interval equally among 31 thresholds from the sampled mean over the std. dev. spread
3. Determine probabilities
   3.1. While "upper threshold is below max probability requirement" is false, loop:
      3.1.1. Set threshold (starting from first vector position)
      3.1.2. Run reliability analysis in FORM to determine the probability of exceeding this cost threshold
         3.1.2.1. If FORM returns an error, run sampling to determine the probability of exceeding the cost threshold
      3.1.3. If "lower threshold is above minimum probability requirement" is false
         3.1.3.1. If the probability exceeds the probability requirement OR the next threshold is below 0 cost
            3.1.3.1.1. Set "lower threshold is above minimum probability requirement" to true
            3.1.3.1.2. Skip to the threshold at the second place in the initial threshold vector
         3.1.3.2. Otherwise add a new threshold
            3.1.3.2.1. Store threshold and probability values in a temporary vector
            3.1.3.2.2. Resize the threshold and probability vectors for one more threshold, a constant interval below the initial threshold
            3.1.3.2.3. Move data into the resized vectors
      3.1.4. Check if at the final specified threshold AND the probability is below the required threshold
         3.1.4.1. If true, set "upper threshold is below max probability requirement" to true
4. Determine risk measure
   4.1. For all thresholds and probabilities, sum the area under the loss curve using the trapezoid rule
   4.2. Return risk measure

Initially, the thresholds for the risk analysis are determined using a sampling analysis, which allows the risk analysis to be performed without any prior information on the cost distribution. This provides a mean and standard deviation for the cost parameter used in the risk analysis. This analysis uses 200 samples centred about the mean of the random variables, then resets the sampling settings previously entered by the user. Using these data, Rts defines thresholds at equally spaced intervals (represented by the blue circles in Figure 3.3). This is performed at a preset minimum number of points, defined in the code, over 5.0 standard deviations from the mean. If necessary, Rts reduces the distance from the mean to the lowest threshold, such that all thresholds are positive. The intention is to keep the thresholds around the mean, where the probability varies rapidly, to provide the best result from quadrature. The initial spread of 5.0 standard deviations was chosen because it was expected to encompass the entire region of interest for most problems; it may be reduced as necessary for different problems. In cases where these intervals are not sufficient, additional thresholds are automatically added to extend the region of interest to the case at hand. This adds robustness to the algorithm and covers situations where a 5.0 standard deviation spread is insufficient.

Figure 3.3 Example exceedance probability curve with annotated characteristics

A FORM analysis is then run at each of these thresholds to determine the probability of exceedance. If a FORM analysis does not converge, Rts attempts a sampling analysis. This exploits the computational efficiency of FORM, but also keeps the risk model robust by implementing sampling whenever necessary, rather than returning an error and a failed analysis. The reliability analyses are repeated for all of the thresholds. For both the lowest and highest thresholds, Rts requires that the probabilities meet requirements encoded in Rts. At the low threshold, this is done by adding additional points at the same interval (represented by the red circles in Figure 3.3) until the requirement is met (or the cost becomes zero). The same occurs at the upper threshold; once there is a sufficiently low probability of occurrence, Rts determines the risk measure. As a result, the actual number of thresholds depends upon the cost probability distribution and can often be up to, and exceed, twice the original number of points. Requiring that the probabilities meet the end-point requirements is important, as the area determined underneath the graph is sensitive to the end values, particularly when approaching 100% probability, due to the significant area encompassed under the curve from that point to zero cost.

Currently, the only risk measure implemented in Rts is the mean cost. This is determined using the trapezoid numerical integration method below:

$$\mu_c = \sum_{n=1}^{N} \frac{1}{2}\,(c_{n+1} - c_n)\,\big[p(c_n) + p(c_{n+1})\big] \qquad (3.15)$$

The numerical integration in Eq. (3.15) determines the mean cost, µc, using the determined cost thresholds, cn, and their respective exceedance probabilities, p(cn). The mean cost is then given as the response of the risk model. It is important to note that this is the only response: two cost distributions with different probability density functions but equal means are equivalent in this analysis. In particular, this means that the optimization will only reduce the mean, irrespective of other characteristics of the PDF. A minimal sketch of the threshold selection and trapezoidal integration is given below.
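The following sketch mimics the threshold selection of Algorithm 3.3 and the quadrature of Eq. (3.15); a known lognormal cost distribution stands in for the FORM and sampling analyses that Rts would run at each threshold, and the distribution parameters are assumed.

```python
import numpy as np

# Threshold selection and quadrature per Algorithm 3.3 and Eq. (3.15); a known
# lognormal cost distribution stands in for the FORM/sampling runs at each
# threshold that Rts would perform.
rng = np.random.default_rng(0)

initial = rng.lognormal(np.log(600.0), 0.2, 200)   # step 1: initial sampling
mu, sd = initial.mean(), initial.std()

spread = 5.0                                       # step 2: threshold spread
while spread * sd > mu:                            # keep all thresholds positive
    spread -= 0.25
c = np.linspace(mu - spread * sd, mu + spread * sd, 31)

# step 3: exceedance probability at each threshold (here by plain sampling)
samples = rng.lognormal(np.log(600.0), 0.2, 200_000)
p = np.array([(samples > ci).mean() for ci in c])

# step 4 / Eq. (3.15): area under the EP curve by the trapezoid rule, plus the
# strip from zero cost to the first threshold, where p is essentially 1.0
mean_cost = c[0] * p[0] + np.sum(0.5 * np.diff(c) * (p[:-1] + p[1:]))
print(mean_cost, samples.mean())                   # both ~ 612
```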
To optimize other aspects of the PDF, other risk measures may be added to Rts; these are discussed in Haukaas et al. (2013).

3.3 Optimization Analysis

An optimization analysis searches for the minimum of an objective by variation of the decision variables, allowing the determination of the best solution for the system. In the analyses considered here, various design parameters serve as the decision variables used to determine the optimal solution. The basics of optimization analysis are introduced here. Reliability-Based Design Optimization (RBDO), performed in the past, is contrasted with Risk Minimization (RM), a new method for probabilistic design optimization. The algorithm and methods currently implemented in Rts are also discussed.

3.3.1 Introduction to Optimization Analysis

An optimization analysis is the search for the minimum of the objective of a system. Decision variables are inputs which, unlike the random variables discussed earlier, are deterministic quantities that are varied while searching for the optimum. An objective is defined, in this case taken as the mean cost determined from the risk model. Using different methods, a search direction and step size are determined and the next step towards the optimum is taken.

It should be noted that there are several ways of optimizing a probability density function (PDF). The decision of which characteristic of the PDF is minimized lies in the risk measure. Currently, only the mean value risk measure is implemented in Rts. As a result, only the expected response is minimized; other measures of the PDF, such as the variance or the cost at a given percentile, are not considered. Any two PDFs with equal means are equivalent in this analysis.

3.3.2 Reliability-Based Design Optimization

In the 1990s, an optimization scheme for structural engineering, RBDO, was developed; it is discussed by Enevoldsen and Sorensen (1994). RBDO provides a methodology to use reliability analysis to approximate the total costs of a system, and then to use non-linear optimization techniques to minimize the objective function. The reliability techniques are as discussed in Section 3.1, and any suitable optimization algorithm may be used. The objective function is subject to bounds, typically a specified reliability index that provides a suitable failure probability to satisfy local codes. RBDO considers minimizing an objective function given by the total cost, consisting of the cost of construction and the cost of failure. The probability of failure is determined from the reliability analysis given the decision variables. These components are combined to form the objective function, typically of the form:

$$C_{total} = C_{construction}(\mathbf{y}) + p_f(\mathbf{x}, \mathbf{y}) \times C_{failure}(\mathbf{y}) \qquad (3.16)$$

This cost formulation uses decision variables y and random variables x. Eq. (3.16) includes a non-deterministic probability of failure, but this is the only inclusion of probabilistic data in the objective function. The costs of construction and failure for the structure are completely deterministic, based on the decision variables, and include no uncertainty. In addition, the probability is only that of a given discrete "failure" event. This results in two binary states: intact and failed.

Enevoldsen and Sorensen discussed the importance of pre-evaluation and post-evaluation of the system in the scope of the design process. The pre-evaluation provides an opportunity to find the failure modes, while also determining the important decision variables. After the final design, a post-evaluation confirms whether the design remains governed by the failure modes identified in the pre-evaluation. This also allows a sensitivity analysis of the optimal design (Enevoldsen and Sorensen 1994). Both of these analyses are important for a complete design.

This RBDO analysis also implements different models for the reliability analysis, including reliability at the element level or the system level. In addition, a finite-element model was included to provide a structural response model (Enevoldsen and Sorensen 1994). This module is combined with other aspects of the optimization analysis in a nested approach. This nested approach is similar to what is done in the current analysis.

3.3.3 Risk Minimization

A new method of design optimization that implements reliability analysis is presented here and hereby coined Risk Minimization (RM). This analysis is a step forward from past RBDO because it includes non-deterministic input in all aspects of the analyses in a nested model approach. A possible objective function is:

$$C_{total} = C_{construction}(\mathbf{x}, \mathbf{y}) + C_{damage}(\mathbf{x}, \mathbf{y}) + \ldots \qquad (3.17)$$

In this formulation the random variables, x, and decision variables, y, are included in all terms. The objective function formulation in Eq. (3.17) includes the effect of uncertainty in all aspects of the problem. This is due to the inclusion of random variables not just in the reliability analysis (as in classical RBDO), but in all parameters of the cost model. In addition, no particular "failure" point or state is defined; there is rather a continuum of building states. This provides a more sophisticated analysis that encompasses uncertainty in all aspects of the search for the optimal solution, from construction to repair costs. Further applications could contain other costs, including life-cycle costs.

This objective function formulation is also easily adapted to the nested models in Rts that call upon successive dependent analyses. For example, an optimization model uses a risk model as the objective function to determine the mean of the costs. This causes the risk model to run, which calls all other models that have variables feeding into it, such as a reliability model. In turn, the reliability model calls the cost models, damage models, structural models, etc. This allows the mean cost to be determined for a given event. All models that are nested within the encompassing optimization model are run when a response is required from them, following the schematic shown earlier in Figure 3.1. It should be noted that this model initialization occurs with all models, including reliability and risk models, not only the optimization model.

3.3.4 Optimization Tools

In Rts, there is one algorithm for an optimization analysis, which may utilize different methods for determining the search direction and step size. An ideal optimization method balances robustness, efficiency, and accuracy. A robust algorithm works for many problems without modification and may self-correct if errors occur, while efficient algorithms are not prohibitively computationally expensive. Finally, an accurate algorithm gives the correct answer with appropriate precision (Nocedal and Wright 2000). While there are many established optimization methods, only the ones implemented in Rts are discussed here. Before the individual methods are presented, a minimal sketch contrasting the objectives of Eqs. (3.16) and (3.17) is given below.
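The following sketch evaluates both objectives on an assumed toy problem: in Eq. (3.16), uncertainty enters only through the failure probability of a discrete event, while in Eq. (3.17), every cost term depends on the random variables. All cost expressions, thresholds, and distribution parameters are illustrative assumptions.

```python
import numpy as np

# Contrast of the RBDO objective, Eq. (3.16), with the RM objective,
# Eq. (3.17), on an assumed toy problem; all numbers are illustrative.
rng = np.random.default_rng(0)
N = 100_000
y = 0.6                                       # a trial design
x = rng.lognormal(np.log([[200.0], [600.0], [50.0]]), 0.1, (3, N))

# RBDO, Eq. (3.16): deterministic costs; uncertainty enters only through pf
c_construction_det = 200.0 + 600.0 * y**2
damage = x[2] / y**4                          # uncertain damage cost
pf = np.mean(damage > 500.0)                  # discrete "failure" event
rbdo_objective = c_construction_det + pf * 500.0

# RM, Eq. (3.17): every cost term is uncertain; the objective is a mean cost
rm_objective = np.mean(x[0] + x[1] * y**2 + damage)

print(rbdo_objective, rm_objective)
```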
3.3.4.1 General Algorithm for Optimization Analysis

The basic algorithm in Rts does not depend upon the particular methods used in the analysis. It performs the iterations, but calls upon other methods to determine the step size, step direction, and convergence. This algorithm will find a local optimum, but will not ensure global optimality.

Algorithm 3.4 Optimization Algorithm
1. Determine objective function value at current decision variables
2. Determine the gradient of the objective function at the current value
3. Check convergence
   3.1. If converged, local optimum achieved
   3.2. If not converged, update for next iteration
      3.2.1. Determine step direction
      3.2.2. Determine step size
      3.2.3. Update decision variables
4. Iterate until converged

While this algorithm does not have many steps, the models required for it to run may make determining both the value and the gradient of the objective function very computationally heavy. In particular, direct differentiation methods are not applied in the risk model, since they may not be applied to sampling analyses. As a result, finite difference methods are used, which require running the analysis once at the current values of the decision variables, then perturbing each decision variable individually and running the risk analysis again.

To check convergence, the gradient is checked to be sufficiently close to zero, which indicates a local minimum. The method currently employed for this is the gradient norm, calculated as:

$$e > \sqrt{\left(\frac{\partial f}{\partial y_1}\right)^2 + \left(\frac{\partial f}{\partial y_2}\right)^2 + \ldots + \left(\frac{\partial f}{\partial y_n}\right)^2} \qquad (3.18)$$

This checks that the norm of the gradient of the objective function, f, with respect to the decision variables, y, is below the tolerance, e. The tolerance is not required to be as small as in other search algorithms (such as FORM), as there is often some small variability between steps.

The optimization algorithm takes a step using the equation:

$$\mathbf{y}_{n+1} = \mathbf{y}_n - s_n\,\mathbf{p}_n \qquad (3.19)$$

This equation updates the values of the decision variables using the step size, sn, and direction, pn. Both the step size and the direction calculation depend upon the optimization methods chosen. Of note, Eq. (3.19) is formulated for minimization, as the subtraction sign moves the design away from the direction of increase. The methods described in the next section do not require further evaluations of the objective function; rather, they utilize only the first-order information already calculated, saving computational resources.

A move limit on the change to the decision variables is implemented in Rts as well, allowing a maximum change of 20% from the current value at each step. This is intended to prevent erroneously large steps that take the decision variables out of the region of interest. The move limit is only applied to one decision variable and does not scale the steps of the other decision variables.

The optimization model is only capable of using the objective function values given to it. Therefore, for best results, the risk model should be as precise as possible. This additional computation in the risk model greatly improves the performance of the optimization models. In determining the gradient of the objective function, the analysis currently implements FDM gradients. This method perturbs each decision variable individually and recomputes the objective for each. The perturbation factor is constant and has been set to 10% of the current value; this value appears to work well through the risk model. A minimal sketch of this gradient computation is given below.
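The following sketch shows a forward finite-difference gradient with the 10% relative perturbation described above; the smooth stand-in objective is an assumption, replacing what would be a (noisy) risk-model evaluation in Rts.

```python
import numpy as np

# Forward finite-difference gradient with the 10% relative perturbation used
# by the Rts optimization model; the smooth stand-in objective below replaces
# what would be a (noisy) risk-model evaluation.
def fdm_gradient(f, y, rel_step=0.10):
    f0, grad = f(y), np.zeros_like(y)
    for i in range(len(y)):
        yp = y.copy()
        yp[i] += rel_step * y[i]               # perturb one decision variable
        grad[i] = (f(yp) - f0) / (rel_step * y[i])
    return grad

def objective(y):                              # assumed stand-in objective
    return 200.0 + 600.0 * y[0]**2 + 50.0 / y[0]**4

y = np.array([0.5])
grad = fdm_gradient(objective, y)
print(grad, np.linalg.norm(grad))              # gradient norm for Eq. (3.18)
```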
Higher perturbations can cause issues with the gradient by skipping over minima. In contrast, lower values tend to get diffused in the risk calculation and do not give as clear a gradient; in turn, the error in the calculation becomes more prominent, which also causes issues with the optimization search.

3.3.4.2 Steepest Descent Method

The Steepest Descent Method provides a conceptually simple, but inefficient, optimization technique. It utilizes only the gradient to determine the search direction for the next point. Following Haukaas (2014d), the step is updated using:

$$\mathbf{y}_{n+1} = \mathbf{y}_n - s_n\,\frac{\partial f}{\partial \mathbf{y}_n} \qquad (3.20)$$

This updates step n to step n+1 using the step size, sn, and the vector of derivatives with respect to each of the decision variables. This method has the benefit of being easily implemented and calculated. However, it is very computationally intensive and converges slowly. In particular, as the decision variables approach the optimum, the gradient approaches zero and, for a fixed step size, each iteration moves very little. In addition, a small step size is required (on the order of 10^-3 or 10^-4) to prevent the algorithm from taking too large a step and passing over local minima of interest. The step size used depends upon the objective function and starting point and will require some experimentation. Although this method is computationally intensive, which limits its usefulness for large-scale optimization analyses, it is a simple and robust search method. It is also not very susceptible to minor errors in calculations and self-corrects readily.

3.3.4.3 BFGS Method

BFGS (an acronym of the developers Broyden, Fletcher, Goldfarb, and Shanno) is a common Quasi-Newton optimization method. Unlike the Newton method, which calculates the Hessian matrix directly, this method uses only the value and gradient of the objective function to provide an approximation of the Hessian matrix. The Hessian matrix, or its approximation, contains the curvature information of the objective function with respect to the decision variables (Nocedal and Wright 2000).

The BFGS algorithm provides updates to the search direction only, using the first-order information of the objective function and updating an approximated Hessian matrix at each step. The first iteration is determined using a steepest-descent approach with a significantly smaller step size, to prevent an excessive first step that would cause issues with the optimization search and might move the search outside of the area of interest. This also defines the inverse Hessian for the initial step as a diagonal matrix, as given below:

$$\mathbf{H}_0 = \begin{bmatrix} 0.001 & & \\ & \ddots & \\ & & 0.001 \end{bmatrix} \qquad (3.21)$$

For subsequent iterations, k, the BFGS algorithm is applied as described below using the notation from Nocedal and Wright (2000). Three auxiliary quantities are defined:

$$\mathbf{s}_k = \mathbf{y}_k - \mathbf{y}_{k-1} \qquad (3.22)$$

$$\mathbf{g}_k = \nabla f_k - \nabla f_{k-1} \qquad (3.23)$$

$$\rho_k = \frac{1}{\mathbf{g}_k^T \mathbf{s}_k} \qquad (3.24)$$

Eq. (3.22) determines sk, the difference between the current and past decision variables. Eq. (3.23) defines gk, the change between the current and past values of the objective function's gradient. Of note, the index k refers to values from the current step, while k+1 refers to the next step, and sk in Eq. (3.22) is not the same term as sn in Eq. (3.20). Once these values are determined, the inverse Hessian, Hk+1, is determined by:

$$\mathbf{H}_{k+1} = \left(\mathbf{I} - \rho_k\,\mathbf{s}_k\mathbf{g}_k^T\right)\mathbf{H}_k\left(\mathbf{I} - \rho_k\,\mathbf{g}_k\mathbf{s}_k^T\right) + \rho_k\,\mathbf{s}_k\mathbf{s}_k^T \qquad (3.25)$$

The result from Eq. (3.25) may then be used to determine the search direction:

$$\mathbf{p}_{k+1} = \mathbf{H}_{k+1}\,\nabla f_{k+1} \qquad (3.26)$$

A minimal sketch of this update is given below.
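The sketch below applies the update of Eqs. (3.21) to (3.26), with the fixed step size of 0.5 and a scaled steepest descent first step, to an assumed quadratic objective with an analytic gradient; in Rts the gradient would instead come from finite differences of the risk model.

```python
import numpy as np

# BFGS inverse Hessian update of Eqs. (3.21)-(3.26) with the fixed step size
# of 0.5 and a scaled steepest descent first step, on an assumed quadratic
# objective f(y) = (y1 - 1)^2 + 2*(y2 + 0.5)^2 with minimizer (1.0, -0.5).
def grad_f(y):
    return np.array([2.0 * (y[0] - 1.0), 4.0 * (y[1] + 0.5)])

H = 0.001 * np.eye(2)                 # Eq. (3.21): small initial inverse Hessian
y_old = np.array([0.0, 0.0])
g_old = grad_f(y_old)
y = y_old - H @ g_old                 # first step: scaled steepest descent

for _ in range(50):
    g_new = grad_f(y)
    s = y - y_old                     # Eq. (3.22): change in decision variables
    dg = g_new - g_old                # Eq. (3.23): change in the gradient
    rho = 1.0 / (dg @ s)              # Eq. (3.24)
    E = np.eye(2)
    H = (E - rho * np.outer(s, dg)) @ H @ (E - rho * np.outer(dg, s)) \
        + rho * np.outer(s, s)        # Eq. (3.25): inverse Hessian update
    p = H @ g_new                     # Eq. (3.26): search direction
    y_old, g_old = y, g_new
    y = y - 0.5 * p                   # Eq. (3.19) with fixed step size 0.5
    if np.linalg.norm(grad_f(y)) < 1e-8:
        break

print(y)                              # converges toward (1.0, -0.5)
```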
While the step size is not determined by the BFGS algorithm, the algorithm is sensitive to it. From experience using the model in Rts, a fixed step size of 0.5 appears to be most reliable. Of note, the initial step using the steepest descent method is much smaller, on the order of 10^-3 or 10^-4. This is required, as otherwise the step taken would likely be too large, depending upon the gradient. Future work and implementations could include providing a variable step size that meets the Wolfe conditions to aid quick convergence.

The BFGS algorithm, as with other Quasi-Newton methods, has the benefit of reasonably fast convergence while remaining relatively computationally efficient. The convergence rate is not as fast as true Newton optimization methods, which require full calculation of the Hessian. This small difference in the number of iterations is largely offset by the significantly lower computational effort in comparison to Newton methods. Another benefit of BFGS is that a new Hessian approximation is not required for each iteration; instead, the approximation is updated in each step. In addition, the BFGS algorithm has been found through experience to be robust and self-correcting (Nocedal and Wright 2000). For a large-scale problem with many decision variables, however, maintaining the Hessian matrix may be computationally intensive; this is not an anticipated issue in these minimization analyses. These characteristics make the BFGS method a powerful optimization tool.

In the experience from this thesis, BFGS generally provides fast convergence compared to steepest descent. In some cases, however, the BFGS algorithm makes an erroneous step. This is largely remedied by providing precise risk measurements. In the case of an erroneous step, the algorithm will attempt to self-correct. This works on some occasions, but on others the decision variables may become unrealistic (such as negative geometry). In addition, BFGS will typically approach the optimum quickly, but may be slow to satisfy the convergence criterion. It provides a useful tool, but is generally not as robust as the steepest descent search direction.

Chapter 4 Optimization Examples

The risk-based design optimization techniques described in Chapter 3 will now be applied to some simple optimization examples. First, an algebraic objective function with one decision variable, presented in Haukaas et al. (2013), is considered. This provides an opportunity to confirm the results while also demonstrating the analysis with a simple example. A second optimization example is then presented: a one-storey FFTT frame, represented by an analytical model with two decision variables. Example 2 consists of many nested algebraic models that are aggregated into a final total cost. This example is intended as a stepping-stone to the future large-scale, 6-storey FFTT frame structure discussed in Chapter 5.

4.1 Example 1: One-Dimensional, Algebraic Optimization

A simple example from Haukaas et al. (2013) is used to compare and verify the analysis performed in Rts. This analysis uses only one decision variable and an algebraic objective function.
In the conference proceedings, results are found and compared using sampling, Taylor approximation, and quadrature using FORM (Haukaas et al. 2013). The results here match what was found in that paper.

4.1.1 Problem Formulation

Example 1 represents a multi-storey building, idealized as a cantilever beam by Haukaas et al. (2013). This gives a simple cost formulation:

$c = \left( x_1 + x_2 L d^2 \right) + \left( \dfrac{x_3 P L^3}{d^4} \right)$   (4.1)

The formulation in Eq. (4.1) represents the total cost as a function of the decision variable, d. In the equation from Haukaas et al., the first parenthesis represents the "construction cost," a function of d and the building height, L. The second parenthesis represents the "repair cost," which is a function of d, L, and the lateral load, P.

4.1.2 Rts Settings Formulation

Example 1 contains a single decision variable and five random variables, described in Table 4.1 and Table 4.2. There are no constants in this example. The decision variable, d, is given initial values on either side of the optimum. The random variable x3 was defined with a value of 1.5 in the Rts formulation, as issues with step sizes and probability transformations occurred with a very small value; to obtain the desired value, the factor of 10^-5 was moved into Eq. (4.1).

Table 4.1 Example 1 decision variable
Parameter | Initial Values
d | 0.25, 1.0

Table 4.2 Example 1 random variables
Parameter | Distribution | Mean | COV
x1 | Lognormal | 200 | 10%
x2 | Lognormal | 200 | 10%
x3 | Lognormal | 1.5·10^-5 | 10%
L | Lognormal | 3 | 10%
P | Lognormal | 2000 | 10%

The settings of the orchestrating functions are mostly the Rts defaults. Table 4.3 shows the particular FORM settings for this example, and the sampling settings are in Table 4.4. Of note, sampling is centred around the origin in the standardized normal space, which represents the mean values of the random variables. The risk model settings for the example are shown in Table 4.5; these settings are internally coded into Rts, and a discussion of how they were found is given in Section 4.1.4. Finally, the optimization settings used for each method are presented in Table 4.6. All analyses converged with the gradient norm criterion. Of note, this analysis involved no step length limitation in the optimization analysis, unlike Example 2, which is limited to a step of 20% of the current decision variable value.

Table 4.3 Example 1 FORM model settings
Parameter | Value
Gradient Method | Finite Difference
Maximum Steps | 10
Search Direction | HLRF
Step Size | Armijo Step Size

Table 4.4 Example 1 sampling model settings
Parameter | Value
Maximum Samples | 5000
Target C.O.V. | 0.5%
Sampling Centre | Origin

Table 4.5 Example 1 risk model settings
Parameter | Value
Minimum No. Thresholds | 31
Initial Sampling Iterations | 200
Probability Requirements | 0.5%–99.5%

Table 4.6 Example 1 optimization model settings
Method | Step Size | Gradient Norm Convergence
Steepest Descent | 0.0001 | 0.9
BFGS | 0.5 | 0.9
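The target-c.o.v. stopping rule in Table 4.4 can be pictured with the following minimal sketch, which estimates the exceedance probability of one cost threshold by sampling; it assumes the standard coefficient-of-variation expression for a binomial proportion estimator, and the function and argument names are illustrative rather than Rts API.

```python
import numpy as np

def exceedance_probability(cost_fn, draw_rvs, threshold,
                           max_samples=5000, target_cov=0.005):
    """Sample realizations of the random variables until the c.o.v.
    of the exceedance-probability estimate meets the target, or the
    sample budget of Table 4.4 is exhausted."""
    rng = np.random.default_rng(1)
    hits, n = 0, 0
    for n in range(1, max_samples + 1):
        if cost_fn(draw_rvs(rng)) > threshold:
            hits += 1
        if hits > 0:
            p = hits / n
            cov = np.sqrt((1.0 - p) / (n * p))  # c.o.v. of the estimator
            if cov < target_cov:
                break
    return hits / max(n, 1)
```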
4.1.3 Example 1 Results

These models were run both for the probabilistic parameters and for a first-order approximation using constants with values equal to the mean. The results are shown in Table 4.7. All of the methods reached very similar results; however, the values determined using the first-order approximation were slight overestimations. For the purposes of these results, the number of steps to the optimum cost is the number taken to reach within 1.0% of the final converged cost. The number of steps to reach the minimum cost, shown in Table 4.7, is similar for both probabilistic and first-order analyses. Computation times, using all methods, were in the order of 30 seconds for approximately 15 steps to fully reach the convergence criteria in Table 4.6.

Table 4.7 Results of Example 1 (steps to optimum)
Analysis | Initial d | c (risk model) | d (risk model) | Steps | c (first-order) | d (first-order) | Steps
Steepest descent | 0.25 | 324.934 | 0.3571 | 3 | 326.451 | 0.3561 | 3
Steepest descent | 1.0 | 324.992 | 0.3573 | 8 | 326.456 | 0.3561 | 8
BFGS | 0.25 | 324.991 | 0.3572 | 8 | 326.474 | 0.3560 | 7
BFGS | 1.0 | 324.950 | 0.3567 | 6 | 326.454 | 0.3561 | 4
Average | – | 324.997 | 0.3571 | – | 326.459 | 0.3561 | –

Figure 4.1 Mean cost at each optimization step in Example 1
Figure 4.2 Decision variable at each optimization step in Example 1

In both the BFGS and steepest descent optimization analyses, the optimal value of d was quickly determined from either side of the optimum, as shown in Figure 4.1 and Figure 4.2. The mean costs for several values of d have been computed using the risk analysis and are compared to the first-order (mean value) approximation in Figure 4.3. For each of these points, a loss curve may be calculated. Three curves of interest, shown in Figure 4.4, correspond to the two starting values of the decision variable and the optimum. It is clear that the mean cost of the system is much lower at the optimum than at the other points in this plot, which may also be confirmed by comparison with Figure 4.3. In this example, the first-order approximation is in good agreement with the results from Rts.

Figure 4.3 Risk at different values of the design variable in Example 1
Figure 4.4 Exceedance probability at three decision variable values in Example 1

4.1.4 Risk Model Tuning

The risk algorithm discussed in Section 3.2.3 has several hardcoded parameters that may be changed. These include the basic number of thresholds into which to divide the analysis, and the maximum and minimum probabilities (tail probabilities) allowed at the ends of the thresholds. Finally, the maximum number of samples used in the initial sampling analysis was varied for one set of the probability requirements. This presents an optimization problem of its own, with goals of improved accuracy and precision while not unnecessarily increasing computation time.

In this analysis, risk samples were taken at a specific value of the decision variable, d = 0.3. This point was chosen since it is near the optimum, yet far enough away to have a meaningful gradient, while not nearing an asymptote. Each combination of tail probability requirement and specified minimum number of thresholds was sampled 100 times. For an increased number of thresholds, the spacing between the thresholds throughout the analysis is reduced, as given in the risk algorithm. This analysis allowed the accuracy, precision, and approximate computation time to be calculated over a large number of samples.
Figure 4.5 Risk model accuracy
Figure 4.6 Risk model precision

The accuracy, a measure of how closely the quadrature approximation matches the "true value," is important here. This "true value" is taken as the mean of 100 risk measures computed with 501 specified thresholds and tail probability requirements of 0.01% and 99.9%. The other accuracies are compared to it in Figure 4.5. It is apparent that the tail probability requirements are the most influential parameter, much more significant than the specified number of thresholds.

The precision of the risk measure approximation, measured here as the coefficient of variation of the risk measures for a given number of thresholds and tail probability, must be high (that is, the COV must be low) to ensure both that the approximation of the risk measure is consistent and that the gradient is calculated properly while optimizing the function. As can be seen in Figure 4.6, for all of the tail probability requirements, the COV reduces significantly up to about 31 initial thresholds; beyond this point, the gain in precision diminishes. The most significant parameter in increasing precision, however, is stricter probability requirements.

Figure 4.7 Computational efficiency of risk measures
Figure 4.8 Effect on precision of the initial sampling iterations (probability requirements 0.5%–99.5%)

Finally, the approximate computation time per risk measure calculation is important for computational efficiency. Figure 4.7 shows the relationship of the specified thresholds and probability requirements to computation time. As expected, stricter requirements demand more computational effort.

The final parameter entered into the risk analysis is the initial sampling analysis (performed in step 1 of the risk model algorithm). This sampling analysis provides the mean and standard deviation used in the assignment of risk thresholds. It was found to have little effect on the precision of the analyses, as shown in Figure 4.8. This figure was determined only at the probability requirement of 0.5% to 99.5%. Interestingly, the results are the same regardless of whether 350, 500, or 1000 initial samples are used. This indicates either that the initial sampling is already sufficient, or that the algorithm is robust enough that minor variations in the determination of the initial mean are recovered in the remaining analysis. The computational cost of this sampling is minimal in any case, as it is run only once and is minor compared to the remaining steps of the algorithm.
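All of these thresholds and tail probabilities feed the quadrature step in which the mean cost is obtained as the area under the loss curve. The following is a minimal sketch of that step, assuming the thresholds span only the 0.5%–99.5% probability requirements; the function name is illustrative, not the Rts implementation.

```python
import numpy as np

def mean_cost(thresholds, exceedance_probs):
    """Quadrature sketch: for a non-negative cost C, the mean is the
    integral of P(C > c) over c, i.e. the area under the loss curve.
    The rectangle left of the first threshold is added explicitly;
    the truncated upper tail beyond the last threshold is neglected."""
    below_first = thresholds[0] * exceedance_probs[0]
    return below_first + np.trapz(exceedance_probs, thresholds)
```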
Based on this analysis, the settings encoded in Rts are tail probability requirements of 0.5% to 99.5% in the risk analysis, an initial 31 thresholds, and an initial sampling analysis of 200 samples. This provides the most accurate and precise calculations while minimizing additional computational effort. It was also found that although the computation time in the risk model increased to improve precision, this resulted in improved optimization model performance. These settings are used for all examples in this document.

In selecting a tail probability requirement of 0.5%–99.5%, it is important to note that this is also a good limit for feasibility of implementation. Sampling does not perform well at high-probability events, as the target COV may be reached after a single failure or success. Having a lower target COV, however, results in more computational effort at all other sampling thresholds as well. Both of these offset the additional accuracy and precision that would be attained with more exacting probability requirements. It may also be beneficial to implement importance sampling for low-probability sampling.

4.1.5 Discussion of Results

This model provides an excellent starting point for risk-based design optimization techniques and a platform for debugging and refining them. Using a one-dimensional problem provides easy visualization, and this analysis also allows a comparison to known results. The risk model tuning included running a suite of risk measures, with different encoded parameters, to determine a balance of precision, accuracy, and efficiency.

The results of this analysis agreed with those presented in Haukaas et al. (2013), while also allowing a comparison of the full probabilistic risk problem to a mean value approximation. Both of the analyses presented here provided full agreement. It also became clear that, while steepest descent may converge more slowly upon the optimum, it is the most robust method, particularly for smaller step sizes. In contrast, BFGS may step close to the optimum quickly, but can be finicky. It will typically self-correct, but may not converge to the same local minimum. In particular, in problems with asymptotes, the gradients become very large and can cause issues with the search algorithms.

In addition, step sizes are difficult to choose and the objective may be very sensitive to them. For steepest descent minimization, a smaller step size is generally preferable, as it is more stable. This has a trade-off: the gradient often becomes small near the optimum, and convergence then requires additional computational effort. A step size in the order of 10^-4 appears to be reasonable. It could also be feasible to run subsequent analyses and increase the step size as the optimum is approached (sketched below). For BFGS in Rts, the first step is determined by steepest descent; afterwards, a fixed step size of 0.5 appears reasonable.

The development of this analysis also provided a platform to tune the risk analysis. During the initial analyses, it became apparent that precision was required, as well as improvements in the algorithm. Without these, the optimization algorithms were not functioning properly, and although convergence could occur, it required many steps. With the improvements to the risk model, the optimization analyses provide much better and faster convergence. As a result, the overall computational demand is reduced with the increased precision of the risk analysis.
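One way to realize the step-size restart suggested above is sketched below, assuming the fixed-step recursion of Eq. (3.20); the staged step sizes and tolerance are illustrative choices, not Rts settings.

```python
import numpy as np

def staged_steepest_descent(grad, y0, steps=(1e-4, 1e-3),
                            gnorm_tol=0.9, max_iter=200):
    """Run the fixed-step recursion of Eq. (3.20) in stages: a small,
    stable step first, then a restart from the result with a larger
    step once the gradient has flattened near the optimum."""
    y = np.asarray(y0, dtype=float)
    for s in steps:
        for _ in range(max_iter):
            g = grad(y)
            if np.linalg.norm(g) < gnorm_tol:
                return y
            y = y - s * g              # Eq. (3.20)
    return y
```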
4.2 Example 2: Two-Dimensional, Algebraic Optimization

In the next simple example of risk-based design optimization, an analytical approximation of a single-storey FFTT frame is used. While this is still a simple structure, it is a step up in complexity from the previous example. In particular, it uses many nested models and two decision variables. These models are all algebraic, in particular the structural model, which allows for fast computation times.

4.2.1 Problem Formulation

This model is of a simple FFTT frame, as shown in Figure 4.9. It consists of two CLT panels and a steel beam connecting them. Each CLT panel is a 5-layer laminate with three layers running vertically and 34-mm-thick laminates. The steel wide-flange beam is fully fixed to the top of the panel, with a connection bearing plate whose length is a fixed ratio of the panel depth. For a lateral load applied at the steel beam, the objective is to minimize the mean cost of the structure. This includes material costs, damage and repair due to the load, and penalties due to poor design (ensuring capacity design principles and sufficient strength). The two decision variables that will be varied are the panel depth, w, and the beam clear span, L.

Figure 4.9 Single-storey FFTT frame
Figure 4.10 Example 2 beam-panel connection

This example also approximates the capacity of the connection using a bearing plate that is the full CLT panel width and whose length is a constant ratio of the depth, w, as shown in Figure 4.9 and Figure 4.10. An assumption arising from the bearing plate is that the shear force concentrated at the connection is spread evenly along the ends of the vertical laminates under the bearing plate. The remaining details of the connection are not elaborated here; it is assumed that the beam and panel are adequately fixed to remain together in tension.

4.2.1.1 Structural Model

The FFTT frame was simplified into a stick model to simplify computation, as shown in Figure 4.11. The section of beam embedded in CLT is assumed to act as a rigid body, as the CLT is very stiff. In addition, this model does not include the shear deformation of the CLT panel. Finally, the connection to the ground is assumed to act as an ideal pinned connection, with no rocking or rotational stiffness. This also allows easy calculation of the bending moment and shear at any location on the structure.

Figure 4.11 Single-storey FFTT frame idealization

Using virtual work, the lateral displacement of the beam may be determined as:

$\Delta = \dfrac{F H^3}{6 E_{CLT} I_{CLT}} + \dfrac{F H^2 L}{12 E_s I_s} \left( \dfrac{L}{L+w} \right)^2$   (4.2)

This gives an algebraic expression for the displacement as a function of all of the considered parameters. In addition, the bending moments and shear in the steel beam are determined using the analytical equations given in Section 4.2.1.2. Of interest, this structure is completely determinate and the maximum bending moment applied to the CLT panel is completely independent of the decision variables. The maximum beam bending moment and shear, however, do vary with the panel width, due to the rigid connection approximation.
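A minimal sketch of Eq. (4.2) as a reusable response function is given below; consistent units are assumed, and the function and argument names are illustrative.

```python
def frame_displacement(F, H, L, w, E_clt, I_clt, E_s, I_s):
    """Sketch of Eq. (4.2): lateral displacement of the idealized
    single-storey frame (panel bending plus beam flexibility, with
    panel shear deformation neglected and a pinned base assumed)."""
    panel_term = F * H**3 / (6.0 * E_clt * I_clt)
    beam_term = F * H**2 * L / (12.0 * E_s * I_s) * (L / (L + w))**2
    return panel_term + beam_term
```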
4.2.1.2 Cost Models

Unlike the previous example, this model contains more refined cost models and nested models. While all algebraic, these models take input from other models to determine the cost. This is an example of the multi-model capability of Rts, allowing successive responses as inputs. It also allows several considerations to be applied individually as a cost, which is then summed to determine the total cost:

$C_{total} = C_{CLT} + C_{steel} + C_{drift} + C_{CLT,shear} + C_{steel,shear} + C_{steel,bending} + C_{capacity}$   (4.3)

In this example, there are seven cost terms that make up the total cost in Eq. (4.3). The first two, C_CLT and C_steel, are the cost of the materials and relate to the geometry and volume of the materials. A cost related to the drift of the model, C_drift, represents the damage caused by excessive deflection under the force, as a function of the interstorey drift. Three models penalize poor design of the shear capacity of the CLT connection, C_CLT,shear, the shear capacity of the beam, C_steel,shear, and the bending capacity of the beam, C_steel,bending. Finally, C_capacity ensures that the CLT panel's bending capacity is greater than that of the steel beam, in accordance with capacity design principles. In this formulation, the uncertainty and error in the costs is contained in each individual cost, represented by θ, rather than in an additional term in the cost summation. No economies of scale are included in this model.

4.2.1.2.1 Cost of Materials

These models relate the geometry and volume of the materials to a cost. Broadly, this is a cost of construction, but unlike traditional reliability-based design optimization, it includes probabilistic modeling that helps account for the uncertainty in the problem. Generally, these costs increase with additional material in the component (panel width or beam length), but are not strictly related to volume, as given for the CLT panels and steel beams below:

$C_{CLT} = 2 \theta_{CLT} t H w^2$   (4.4)

$C_{steel} = \theta_{steel} I_s \left( L + w \right)^2$   (4.5)

4.2.1.2.2 Cost of Drift-related Damage

This model quantifies the cost of damage in general. It assumes that all damage is related to the interstorey drift of the structure, as given below:

$C_{drift} = \theta_{drift} \left( \dfrac{\Delta}{H} \right)^2$   (4.6)

This model prices the drift such that small displacements do not damage other parts of the building significantly, but at larger drifts the cost of damage becomes significant.

4.2.1.2.3 Cost of Structural Components

One of the main considerations for a seismic system is to ensure that forces do not exceed the capacity and that deformation occurs in a ductile manner. In this cost formulation, the desired mechanism is yielding of the beams in flexure. This is promoted by the cost functions below, which result in a high cost unless this mechanism is attained, reflecting the expense of a brittle failure under this load. There is no formulation of the actual capacity of the full CLT panel here; that is considered in Section 4.2.1.2.4 to ensure capacity design principles are met.

The first cost function is related to the shear capacity of the beam-panel connection. It is undesirable to have a failure in the CLT here, as it will result in a brittle fracture.
The moment and shear applied to the connection at the panel edge, where they are maximum, are taken as:

$M_{steel,applied} = \dfrac{F H}{2} \left( \dfrac{L}{L+w} \right)$   (4.7)

$V_{applied} = \dfrac{2 M_{steel,applied}}{L}$   (4.8)

The resistance of the connection is taken as the strength of the bearing area near the end of the connection:

$V_{CLT,resist} = \left( r_{bearing} w \right) \left( 3 t_{laminate} \right) \left( k_3 f_{CLT} \right)$   (4.9)

This is anticipated to be where the bulk of the shear is taken, and may be characterized by a steel bearing plate across the full width of the panel that would be included in the design with a length equal to a ratio of the panel depth. This formulation is the yield force of the three parallel-to-grain laminate layers in the CLT under the bearing plate. Finally, the cost of the connection damage is taken as:

$C_{CLT,shear} = \theta_{CLT,shear} \left( \dfrac{V_{applied}}{V_{CLT,resist}} \right)^4$   (4.10)

It should be noted that this cost is an approximation that has not been calibrated to testing.

The steel beam has two limit states of interest, a shear failure and a bending failure. As discussed earlier, the desired mechanism of the frame is the yielding of the beam in flexure. While it is possible to yield in shear in a ductile manner, this is not the intended mechanism in this configuration. The maximum bending moment and shear force applied to the beam, for the structural model discussed in Section 4.2.1.1, are given in Eq. (4.7) and Eq. (4.8). These are then entered into the cost models that compare the applied forces to the resistance of the steel:

$C_{steel,shear} = \theta_{steel,shear} \left( \dfrac{V_{applied}}{V_{steel,resist}} \right)^4$   (4.11)

$C_{steel,moment} = \theta_{steel,bending} \left( \dfrac{M_{steel,applied}}{M_{steel,resist}} \right)^4$   (4.12)

In these cost models, the shear resistance, V_steel,resist, and the plastic bending moment, M_steel,resist, are determined by the beam size and taken as random variables. In addition, the coefficient of the flexural cost equation is somewhat smaller for a bending failure; although this is the desired mechanism, the structure must still maintain enough capacity. This formulation would be relevant for an equivalent elastic force procedure for a seismic load.

4.2.1.2.4 Cost of Capacity Design

As discussed earlier, this frame has a desired mechanism of the beam yielding in flexure. While the cost functions in Section 4.2.1.2.3 ensure that the individual components have sufficient capacity to resist the applied forces, they do not quantify the high cost of the mechanism not being attained. The moment of inertia of the panel is determined as:

$I_{CLT} = \dfrac{3 t_{laminate} w^3}{12}$   (4.13)

Using this result, the panel resistance to bending, using beam theory, may be determined as:

$M_{CLT,resist} = \dfrac{2 k_3 f_{CLT} I_{CLT}}{w}$   (4.14)

These equations are for a 5-layer CLT panel with three laminates in the vertical direction. The cost is then determined by comparing the expected plastic moment of the beam to the bending capacity of the panel:

$C_{capacity} = \theta_{capacity} \left( \dfrac{M_{steel,resist}}{M_{CLT,resist}} \right)^4$   (4.15)

This cost model represents the high cost of a structural failure due to the undesired mechanism in a seismic event. As a result, it punishes poor designs with increased costs.

4.2.1.2.5 Total Costs Using First-Order Approximation

A comparison of the cost functions is shown in Table 4.8, using a first-order approximation for all of the random variables. This allows easy calculation and comparison of the functions, showing the cost functions individually in relation to each decision variable. A comparison of this approximation to a region determined in Rts is discussed in Section 4.2.3, and an assembly of the cost models into a single total-cost function is sketched below.
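The following is a minimal sketch of Eqs. (4.2)-(4.15) assembled into one function for such a first-order (mean value) evaluation. It assumes consistent units and reuses the panel inertia of Eq. (4.13) as the I_CLT in Eq. (4.2); the dictionary keys are illustrative shorthand for the thesis symbols, not Rts parameter names.

```python
def total_cost(L, w, p):
    """Evaluate the total cost of Eq. (4.3) at one realization of the
    parameters in `p` (e.g. the mean values of Tables 4.10-4.11)."""
    i_clt = 3.0 * p["t_lam"] * w**3 / 12.0                 # Eq. (4.13)
    # Material costs, Eqs. (4.4)-(4.5)
    c_clt = 2.0 * p["th_clt"] * p["t_lam"] * p["H"] * w**2
    c_steel = p["th_steel"] * p["I_s"] * (L + w)**2
    # Drift-related damage, Eqs. (4.2) and (4.6)
    delta = (p["F"] * p["H"]**3 / (6.0 * p["E_clt"] * i_clt)
             + p["F"] * p["H"]**2 * L / (12.0 * p["E_s"] * p["I_s"])
             * (L / (L + w))**2)
    c_drift = p["th_drift"] * (delta / p["H"])**2
    # Connection and beam limit states, Eqs. (4.7)-(4.12)
    m_app = 0.5 * p["F"] * p["H"] * L / (L + w)            # Eq. (4.7)
    v_app = 2.0 * m_app / L                                # Eq. (4.8)
    v_clt = p["r_bear"] * w * 3.0 * p["t_lam"] * p["k3"] * p["f_clt"]
    c_clt_shear = p["th_clt_shear"] * (v_app / v_clt)**4
    c_steel_shear = p["th_steel_shear"] * (v_app / p["V_r"])**4
    c_steel_moment = p["th_steel_bend"] * (m_app / p["M_r"])**4
    # Capacity design, Eqs. (4.14)-(4.15)
    m_clt = 2.0 * p["k3"] * p["f_clt"] * i_clt / w         # Eq. (4.14)
    c_capacity = p["th_cap"] * (p["M_r"] / m_clt)**4
    return (c_clt + c_steel + c_drift + c_clt_shear
            + c_steel_shear + c_steel_moment + c_capacity)
```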
Notably, the asymptotes in C_CLT,shear and C_capacity had great influence over the total cost in the region around the optimum. The construction-related costs, C_CLT and C_steel, increase with more material, but are generally minor. The drift-related cost, C_drift, increases for long clear spans and small panel widths; this is expected, as these reduce the stiffness of the structure. Finally, the limit states for shear and moment in the steel beam are most important for small panel widths. A unique characteristic of this problem, however, is that the beam does not change in the design and the structure is statically determinate. This means that the moment in the CLT panel does not change with the relative stiffness of the frame members, and that the moment at the beam ends is only a function of the panel width, as a result of the rigid connection assumption shown in Figure 4.11.

Table 4.8 Comparison of cost functions in Example 2 using first-order approximations (surface plots of C_CLT, C_steel, C_drift, C_CLT,shear, C_steel,shear, C_steel,moment, C_capacity, and C_total against the decision variables L (m) and w (m))

4.2.2 Rts Settings Formulation

This model was set up according to the analytical and cost models discussed above. The parameters and values used are discussed here. Notably, all of the equations must use compatible units. Unit conversion equations were included for both convenience and practicality: in some analyses, very large or very small numbers appear to cause issues with models in Rts, particularly with FORM steps. These may be avoided by providing a variable in a convenient unit and converting later.

The two decision variables used are given in Table 4.9: the beam clear span, L, and the panel width, w. These have each been given three initial values, one below, one near, and one above the optimum. These were then combined for 9 total analyses, each with a different set of initial values. An analysis was run for each set using both steepest descent and BFGS search directions.

Table 4.10 describes the continuous random variables in Example 2, grouped according to how they are organized in the model. The beam is a W250x33, 350W steel section. All random variables are given a nominal COV of 10%, with the exception of the wood strength, the wood modulus of elasticity, and the loading applied on the frame; these are expected to be more uncertain and have COVs of 20%. The distributions are either normal or lognormal. It is important to note that some distributions were required to be lognormal to prevent issues with the algebraic expressions.
This is because the normal distribution has zero as a possible outcome, while lognormal distributions are always positive. Using lognormal distributions removes the possibility of dividing by zero.

This analysis focused on including most parameters as random variables to help encapsulate the uncertainty. Table 4.11 includes the three parameters that were taken as constants. The storey height was taken as a fundamental constant, while k3 is a modification factor for the wood strength used in Eq. (4.14), determined from the CLT Handbook (FPInnovations 2012); while it is a constant, it is meant to modify the wood strength, which is itself a random variable. The bearing panel ratio is also taken as a constant, as it is a design parameter that would be detailed in a final design.

Table 4.9 Example 2 decision variables
Parameter | Initial Values | Unit
L | 1.5, 2.5, 3.5 | m
w | 1.5, 2.5, 3.5 | m

Table 4.10 Example 2 random variables
Group | Parameter | Distribution | Mean | COV | Units
Geometry | t_laminate | Lognormal | 34 | 10% | mm
Geometry | I_beam | Lognormal | 48.9·10^6 | 10% | mm^4
Material | E_wood | Normal | 9.5 | 20% | GPa
Material | f_wood | Normal | 5.5 | 20% | MPa
Material | E_steel | Normal | 200 | 10% | GPa
Material | M_r,beam | Lognormal | 132 | 10% | kNm
Material | V_r,beam | Lognormal | 323 | 10% | kN
Loads | F | Lognormal | 200 | 20% | kN
Cost | θ_CLT | Normal | 750·10^3 | 10% | –
Cost | θ_steel | Normal | 200 | 10% | –
Cost | θ_drift | Normal | 1500 | 10% | –
Cost | θ_CLT,shear | Normal | 500 | 10% | –
Cost | θ_steel,shear | Normal | 500 | 10% | –
Cost | θ_steel,bending | Normal | 100 | 10% | –
Cost | θ_capacity | Normal | 500 | 10% | –

Table 4.11 Example 2 constants
Parameter | Value | Unit
H | 3.5 | m
r_bearing | 15 | %
k3 | 0.603 | –

The orchestrating function settings are generally the default settings in Rts; the settings of note are included here. The FORM settings are shown in Table 4.12. Table 4.13 shows the settings for the sampling model, which were taken as sufficient for the probability requirements in the risk model; the analysis uses the Rts in-house probability distributions and random number generators. The sampling centre is the origin in the standard normal space, i.e. the mean of the random variables. The risk model settings are shown in Table 4.14, and Table 4.15 gives the settings used for both optimization methods. A notable difference from Example 1 is that the total step is limited to 20% of the current value of each decision variable, to prevent large steps that move the variables out of the region of interest (a sketch of this cap follows the tables below).

Table 4.12 Example 2 FORM settings
Parameter | Value
Gradient Method | Finite Difference
Maximum Steps | 10
Search Direction | HLRF
Step Size | Armijo Step Size

Table 4.13 Example 2 sampling settings
Parameter | Value
Maximum Samples | 50 000
Target C.O.V. | 0.5%
Sampling Centre | Origin

Table 4.14 Example 2 risk model settings
Parameter | Value
Minimum No. Thresholds | 31
Initial Sampling Iterations | 200
Probability Requirements | 0.5%–99.5%

Table 4.15 Example 2 optimization model settings
Method | Step Size | Gradient Norm Convergence
Steepest Descent | 0.0001 | 10.0
BFGS | 0.5 | 30.0
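A minimal sketch of this 20% cap is given below, assuming a componentwise clip of the proposed update; the names are illustrative, not the Rts implementation.

```python
import numpy as np

def capped_update(y, dy, max_frac=0.20):
    """Limit each decision-variable step to 20% of its current value,
    per the Example 2 optimization settings, keeping the search
    within the region of interest."""
    cap = max_frac * np.abs(y)
    return y + np.clip(dy, -cap, cap)
```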
4.2.3 Example 2 Results

The results of the minimization analyses are given in Table 4.16 and Table 4.17, while a figure of the optimized frame is given in Figure 4.12. Regardless of the initial values of the decision variables, all analyses converged to the same minimum mean cost, beam clear span, and panel depth. These are the results to the full convergence criteria, as given in the problem definition. An exception is noted where the convergence criterion was nearly met, but an erroneous step was then taken and the total number of steps doubled while the search self-corrected and converged. The average time per step is also given, and is quite variable; it depends heavily on whether FORM converges in the risk analysis, which is much faster than the sampling analyses.

Figure 4.12 Example 2 optimal configuration for minimum mean cost

The steps in Table 4.16 and Table 4.17 are those needed to reach convergence, as defined in Table 4.15. This is not an ideal comparison metric between analyses, however, since the error in the gradient becomes most apparent near the optimum, where gradients are small and the error is a more significant proportion. Much of the minimization is performed in the early steps, with the final convergence taking time to reach. In particular, BFGS generally approaches the optimum quickly, but has trouble meeting strict convergence criteria. Using the average of the minimum costs determined by steepest descent, the steps taken in each analysis to reach within 0.5% of this value are given in Table 4.18. In most cases, BFGS reached this threshold faster than steepest descent.

Table 4.16 Example 2 cost minimization results using steepest descent
Initial L (m) | Initial w (m) | Final cost ($) | Final L (m) | Final w (m) | Steps | Average time per step (s)
1.5 | 1.5 | 5329.1 | 2.116 | 2.701 | 32 | 71.7
1.5 | 2.5 | 5315.6 | 2.042 | 2.722 | 25 | 58.0
1.5 | 3.5 | 5327.2 | 2.096 | 2.707 | 58 | 52.6
2.5 | 1.5 | 5325.1 | 2.100 | 2.707 | 80 | 51.8
2.5 | 2.5 | 5336.5 | 2.141 | 2.695 | 24 | 52.9
2.5 | 3.5 | 5332.4 | 2.135 | 2.685 | 14 | 79.0
3.5 | 1.5 | 5326.4 | 2.117 | 2.698 | 57 | 49.4
3.5 | 2.5 | 5322.4 | 2.099 | 2.703 | 63 | 41.7
3.5 | 3.5 | 5330.8 | 2.129 | 2.697 | 43 | 51.1
Average | – | 5327.3 | 2.108 | 2.702 | 44.0 | 56.5

Table 4.17 Example 2 cost minimization results using BFGS
Initial L (m) | Initial w (m) | Final cost ($) | Final L (m) | Final w (m) | Steps | Average time per step (s)
1.5 | 1.5 | 5325.8 | 2.094 | 2.703 | 23 | 127.1
1.5 | 2.5 | 5325.8 | 2.091 | 2.708 | 13 | 59.8
1.5 | 3.5 | 5326.1 | 2.108 | 2.710 | 26* | 60.5
2.5 | 1.5 | 5330.9 | 2.111 | 2.699 | 17 | 129.5
2.5 | 2.5 | 5327.6 | 2.106 | 2.700 | 6 | 68.3
2.5 | 3.5 | 5322.0 | 2.085 | 2.708 | 15 | 60.4
3.5 | 1.5 | 5321.3 | 2.083 | 2.710 | 14 | 93.0
3.5 | 2.5 | 5326.4 | 2.086 | 2.709 | 14 | 63.7
3.5 | 3.5 | 5326.3 | 2.118 | 2.704 | 16 | 37.2
Average | – | 5325.8 | 2.098 | 2.706 | 16.0 | 77.7
Notes: *Used gradient norm < 35.0 as the convergence criterion

Table 4.18 Example 2 steps taken to reach within 0.5% of the minimum cost
Initial L | w = 1.5 (SD / BFGS) | w = 2.5 (SD / BFGS) | w = 3.5 (SD / BFGS)
1.5 | 14 / 12 | 1 / 3 | 2 / 10
2.5 | 25 / 9 | 13 / 4 | 3 / 6
3.5 | 29 / 9 | 21 / 10 | 23 / 12

These analyses minimized the mean cost, the measure of risk determined using the risk model. This was performed from 9 different starting coordinates, using both the steepest descent (Figure 4.13 and Figure 4.14) and BFGS (Figure 4.15 and Figure 4.16) search methods. As discussed above, both methods were satisfactory in all cases, converging to the same mean cost of about $5325.
Figure 4.13 Example 2 mean cost minimization using steepest descent
Figure 4.14 Detail of Example 2 mean cost minimization using steepest descent
Figure 4.15 Example 2 mean cost minimization using BFGS
Figure 4.16 Detail of Example 2 cost minimization using BFGS

It would be desirable to determine the optimum using a first-order approximation, as this would remove the need for additional analyses with the reliability and risk models. The region around the optimum determined using the first-order approximation of Section 4.2.1.2.5 is shown in Figure 4.17; this approximation was used to estimate the cost functions and responses ahead of the final minimization analyses. A surface obtained using values from the Rts risk model is shown in Figure 4.18, and the percent difference of the approximation from the Rts values is shown as a surface in Figure 4.19. It is apparent that while the general trend and attributes of the first-order approximation are shared with Rts, the values are consistently underestimated in this example and the difference is highly variable, from little error up to 50%.

Figure 4.17 Mean cost around the optimum determined using a first-order approximation
Figure 4.18 Mean cost near the optimum determined from Rts
Figure 4.19 Percent difference of the first-order approximation to Rts mean cost values

The trajectories of the decision variables in these analyses may also be plotted, as both analyses converge upon the optimal values. Figure 4.20 shows the decision variables during the search using steepest descent. These follow the direction of the steepest gradient at any given point, converging to the optimum near a clear span of 2.1 m and a panel depth of 2.7 m. The trajectories of the BFGS search, however, are much less intuitive and may be seen in Figure 4.21. In particular, one analysis (starting at L = 1.5 m, w = 3.5 m) approaches the optimal values, but then steps away, self-correcting later.
Figure 4.20 Decision variables throughout optimization using steepest descent
Figure 4.21 Decision variables throughout optimization using BFGS

The results of the optimization of the beam clear span for both search methods are shown in Figure 4.22 and Figure 4.23. All analyses converge to the optimal value of L of approximately 2.1 m. The search for the optimal panel width of 2.7 m for each method is similarly displayed in Figure 4.24 and Figure 4.25. Again, the steps taken by BFGS are much less intuitive than those taken by steepest descent.

Figure 4.22 Optimization of the beam clear span in Example 2 using steepest descent
Figure 4.23 Optimization of the beam clear span in Example 2 using BFGS
Figure 4.24 Optimization of the panel depth in Example 2 using steepest descent
Figure 4.25 Optimization of the panel depth in Example 2 using BFGS

4.2.4 Discussion of Results

This analysis was meant as an incremental step toward the full 6-storey FFTT structural analysis discussed in Chapter 5; as a result, it was performed on a single-storey FFTT frame. The analysis did, however, converge from several initial values to a single point for each decision variable at the same minimum cost. While this configuration of frame is not the intended FFTT design in the concept, it performed adequately in this analysis and had reasonable optimal values.

The cost coefficients in Table 4.10 were not calibrated to represent a real cost. While the cost itself therefore does not have a particular physical value, some relevant conclusions may still be drawn about the static probabilistic design of this frame. The cost bounds for structural integrity were asymptotic; as a result, the actual values of these cost coefficients are not of utmost importance, just their values relative to the others. Similarly, the material costs were relatively low compared to the cost of insufficient structural integrity. This results in a model whose optimum reflects performance under the loading case and the selected design, which includes a 5-layer CLT panel and the specified steel beam.
The capacity of the CLT panel and connection, relative to the steel beam, were critical cost models, and the specified steel beam was relatively small, represented by a W250x33 section. For this single-storey FFTT structure, the response depended greatly on the base support. This is particularly important for the one-storey frame, where a stiffer base connection would have greatly reduced the moment demand on the structure and stiffened the frame. The effect of the base conditions is expected to diminish as the structure increases in height. In addition, the strength of the CLT panel is a limiting factor in this analysis: the cost functions associated with failure of the CLT at the beam-panel connection and with ensuring that the beam-yielding mechanism is obtained governed this example.

A broad optimum is ideal for structural engineering, as it allows small design changes without affecting the structure as a whole and shows that the design is resilient with respect to that design variable. Highly sensitive design variables would require much more consideration during design, as they have a profound effect on the response; this may require much more specific modeling or a change of concept. In this analysis, it was found that there is a large region of relatively little cost variance, which may be inferred from the first-order approximations given in Section 4.2.1.2.5 and from Figure 4.18.

This analysis also allowed risk minimization to be performed on a more complicated model with several layers of nested models, including algebraic expressions. It also demonstrates that, within each analysis for a given optimization method, the decision variables are optimized independently. For each set of initial values of the decision variables, the analyses converge to nearly the same values, with slight differences that likely stem from the differing calculations in each. While both methods converged, the BFGS method used fewer steps in most analyses to get within 0.5% of the optimum. However, while it reaches the optimality region efficiently, it does not identify the exact optimum well and is very sensitive to errors in the risk model. While requiring more steps to reach the optimum, steepest descent offers a robust method that will reach the precise optimum.

The convergence criterion used is very sensitive to the number of random variables and the error in the risk model, and may need to be adjusted to suit the problem to help define when the minimum cost is met. This is due to the error in the risk model, which becomes a more substantial part of the gradient when the objective function flattens out and gradients become small. While increasing precision could allow the convergence criterion to be tightened, this provides little added value to the analysis.

The results of cost minimization in Rts were compared to the first-order approximation made earlier for this example. These show that although the trends appear similar, the values disagree significantly and inconsistently. As a result, the first-order approximation would be inappropriate for a final analysis.

A future model is discussed in Chapter 5 and incorporates some of these suggestions for improved modeling. In particular, this model used an approximate structural analysis of the frame that did not include shear deformation, which is important behaviour for the squat CLT panels.
In addition to including this, a finite element model would allow the inclusion of a more sophisticated model for CLT crushing at the beam-panel connection than an assumed bearing area dependent only on shear force. This refinement is important, as the CLT strength, both at the beam-panel connection and in bending, was important in the final optimization of the structure. This model also assumed that the base was an ideal pin connection, which is a rough approximation, particularly for a one-storey frame; future analyses could include varied connection stiffness or a full contact model. For a single-storey frame, this connection point significantly affects the frame's response. Finally, the cost model coefficients were taken as rough estimates and must be refined for a future analysis.

Chapter 5 Future Structural Example

The two risk minimization examples in Chapter 4 provided a basis for future analyses. Chapter 5 describes a proposed model, which includes a full Rts structural analysis, many nested models, and a large-scale optimization problem. The structure currently modeled is a 6-storey FFTT frame, although it could be extended to a 9-storey (or taller) structure. This chapter is intended as a guide for reasonable modeling with this framework in the future, with possible refinements that may be included. All descriptions included here are as-is for the current framework of the example.

5.1 Model Overview

This example model is intended to showcase a new method of performance-based earthquake structural optimization, in addition to applying a new design concept for tall timber structures. It is possible to contain all of this analysis within Rts, using several nested models that allow implementation of reliability-based risk analysis. The analysis would be novel in its implementation of the deep parameterization and flexibility provided by Rts.

Figure 5.1 Schematic of the structural model

A working structural model has been constructed for this analysis. The structural model itself represents an FFTT frame consisting of CLT panels with embedded steel beams, which are designed to be the yielding members of the lateral load system. Prospective decision variables are the panel depth and beam clear span (as in Example 2), with additional decision variables including the wide-flange beam dimensions.

5.2 Load Models

As the currently available option, the structure is loaded laterally for a pushover analysis. The load is applied as a point load on the exterior of the CLT, on either side of the CLT panels, which spreads the load to both sides of the structure. While the load is currently applied instantaneously, it is possible to provide a ramp load when using a non-linear analysis. Further load models for Rts are at present under development. In the future, the intention is that the program will be extended to use earthquake modeling to apply recorded and synthetic, random variable-based ground motions to the structures in dynamic structural analysis.

5.3 Component Response Models

For this example, a component model using Rts was implemented to demonstrate the program's capability of integrating many models. It is also possible to use an OpenSees model to obtain structural responses in either Rt or Rts, which has been done in the past by Koduru and Haukaas (2010). A guide for implementing an OpenSees model is given in Appendix A.
The implemented model uses three main components: steel beam elements, CLT panel shells, and the connections between them. The connection detailing for each component is important and is also discussed; in particular, the embedment of the steel components into the CLT panels is important to prevent high stress concentrations. This model is based upon decision variables and random variables, with the geometry constantly changing.

5.3.1 Mesh Parameterization

Using the deep parameterization in Rts, it is possible to include the whole geometry of the structure as variables. In this example, the basic geometry of the frame is expressed as a function of the variables for panel depth, beam clear span, and height of the structure. The mesh parameterization is performed by including the formulas in the point model in Rts, RPoints, while the remaining components are then based on the RPoint locations. It is important that these RPoints are "prerun": prerun calculates the RPoint values as the input file is read, which ensures that the intended initial values are used.

5.3.2 Beam Model

The beams are modeled by RSteelIBeamComponent, a component included in Rts. These consist of a linear material, as determined through the Mesh Option in the input file. The beams span the clear distance between the panels (a decision variable) and are fully embedded for the depth of the CLT panel. While the steel material properties are random variables automatically generated by Rts, the geometric properties of the beams may all be decision variables, as identified in Figure 5.2. In this example, the beams at all five levels are identical to each other. For details about the beam-panel connection, refer to Section 5.3.4. Future implementations could include non-linear fibre modeling of the beams.

Figure 5.2 Cross-section of the steel beam

5.3.3 Panel Model

The CLT panels used as columns in this model are modeled by elastic, isotropic Quad4 and Mindlin elements, implemented in Rts as RCLTPanelComponent. This model generates the material properties as random variables and the node locations as "generic algebraic models." The thickness of the CLT and the mesh options are user inputs, while the depth of the panel is a decision variable. Using these parameters, Rts meshes the Quad4 and Mindlin elements with a user-selected number of elements in each direction of the panel. The Quad4 elements are of primary importance in this analysis, as they capture the in-plane axial and bending response of the structure; their use also accounts for both bending and shear deformation of the loaded CLT panels. Mindlin elements are also generated to provide nominal out-of-plane stiffness, which is required because of the lateral stiffness in the beams, which prevents Rts from locking these degrees of freedom (global DOF with no stiffness are automatically locked). Unless the model applies out-of-plane loading, it is not necessary to model CLT's complicated out-of-plane response.

5.3.4 Connection Models

The beam-panel connection has no additional members within the model; a schematic is shown in Figure 5.3. The beams are currently meshed to the CLT panel only at the two outside edges of the panel; the embedded portion of these beams could then be discretized to have points coincide with the CLT finite element mesh. Rts automatically attaches coincident points to ensure a consistent deformation.
The coincidence and subsequent meshing of the beam and panel throughout the structure is important to prevent stress concentrations in the model that do not simulate actual behaviour. Connection bearing plates could be considered here.

Figure 5.3 Panel-beam connection detail

To approximate a "pinned" base, the CLT panel bases are offset from the ground plane by a distance of 50 mm. This is necessary to allow rotation, as Rts will otherwise mesh coincident nodes into the fixed ground plane, resulting in a "fixed" base. An RSteelIBeamComponent connects the ground to the panel along the centreline of the panels. This anchor, shown in Figure 5.4, represents a relatively flexible connection to the ground, approximating a pinned connection; by changing the properties of the anchor component, a stiffer connection may be modeled. In addition to this anchor, a baseplate is used at the bottom of the CLT panel. This baseplate meshes with both the panel and the anchor, which reduces stress concentrations at the anchor-panel connection location. As with the beam-panel connection, the baseplate should mesh with the panel throughout its length.

Figure 5.4 Detail at panel-ground connection

Further design considerations could focus on this connection and its influence on the structure for mid- and high-rise models. Depending on these results, this may become a location to refine, or it may be left as an appropriate simple approximation. In addition, this connection could be varied to determine its influence and, if significant, made a decision variable.

5.4 Damage and Cost Models

Using the structural model responses, damage and cost models may be used to quantify the responses in terms of money. Rts currently allows this to be performed using generic algebraic formulations. The intention is that a Repair Manager will be implemented in the future. This will use enhanced finite elements, which will contain the damage and cost information within the components, and would change the method by which the damage is aggregated: instead of adding formulas as functions of the structural responses, this information could be included as part of the element. In addition, more sophisticated cost models may be employed in the future for more advanced modeling of non-linear and dynamically loaded structures.

In the case of this example, it is assumed that the enhanced finite elements are not fully implemented. As a result, the damage and cost models are treated as one, using algebraic expressions as functions of the structural model's responses. These are similar to those employed in the example discussed in Section 4.2 and will not be repeated here; however, some additional refinements are discussed below, and the cost coefficients should be calibrated to represent physical realities.

The cost of construction and materials may be related to the quantities of material used, as performed in Example 2. As a simplification for the model in Chapter 4, however, no economies of scale are considered as additional models. These could be included in a large-scale analysis, which would discount large quantities of similar materials. In addition, these models could be updated to use specific aspects of the geometry, depending upon the selection of decision variables.

The drift-related damage to the building in Example 2 was a simple polynomial function that related increasing drift to higher cost. For a larger model under a pushover analysis, a summation of the drift-related costs for each floor may be used, as sketched below.
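A minimal sketch of such a summation is given below, assuming one drift-cost term of the form of Eq. (4.6) per storey; the function and argument names are illustrative.

```python
def multi_storey_drift_cost(floor_displacements, storey_heights, theta_drift):
    """Sum a drift-related cost of the form of Eq. (4.6) over the
    interstorey drift ratio of each floor of a pushover model."""
    cost, disp_below = 0.0, 0.0
    for disp, h in zip(floor_displacements, storey_heights):
        drift_ratio = (disp - disp_below) / h   # interstorey drift ratio
        cost += theta_drift * drift_ratio**2
        disp_below = disp
    return cost
```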
If, however, a dynamic analysis is possible, the damage and repair cost could also be formulated as a function of the acceleration experienced by each floor, something that is not possible in a static model.

In Example 2, three cost equations were used to push the optimal structure toward having sufficient capacity by penalizing poor designs with high costs. This was a result of the elastic static analysis and analytical model, which exhibit no change in behaviour when structural capacities are exceeded. These three costs were formulated for the CLT shear capacity at the panel-beam connection, the beam shear capacity, and the beam moment capacity.

The panel-beam connection, in the analytical model, was formulated as a function of the beam's shear acting on a bearing area of the CLT panel. This is an idealization of the concentrated stress in this area. The use of a finite element model will allow these stresses to be computed for each element. This may require additional discretization of the CLT at these concentrated locations; in addition, a stress response may need to be added to the RCLTPanelComponent model.

The cost formulations of the steel beams may also be changed, depending on the structural implementation. The shear capacity of the beam is important for small clear spans, but has little influence at longer spans. The moment capacity is important for a linear elastic beam component, to prevent the capacity from being exceeded. With a non-linear model, however, a moment capacity cost may not be necessary, as the additional deflection from plastic deformation would be captured in the interstorey-drift-related costs.

The capacity design of the structure was important in Example 2, as it ensured that the CLT panel had greater capacity than the steel beam in the optimal model (or penalized the design with a prohibitive cost). This is important for an elastic model, which does not indicate structurally when a member's capacity is exceeded. Depending on the final configuration of the model, there are several ways this cost function could be formulated.

There are other costs that may be considered in a structural model. These could include costs that represent the utility of the structure, such as architectural considerations like the beam clear span. Other possible costs include life cycle costs, which are not included to date, as Example 2 considered the structure under an earthquake load.

5.5 Orchestrating Models

The orchestrating models may require some adjustment for this model. Possible considerations for each are discussed below.

5.5.1 Reliability Models

Both FORM and sampling may need some initial testing for suitability to these problems. FORM generally works without much additional tweaking, but does not always converge; as a result, sampling may be called from the risk model. The sampling model requires more attention than the FORM model, due to the maximum number of iterations it requires, in addition to the target COV. As discussed in Section 4.1.4, it is important to keep the target COV low, particularly at high probabilities, to prevent too few samples from being taken. The maximum number of samples can be estimated through preliminary testing.

The random variable distributions may also be an important part of the analyses. In particular, it is key to remember that zero is a possible realization of any normally distributed random variable. This can cause problems with the cost functions if they divide by such a random variable. This may be avoided by defining lognormal random variables as necessary, as sketched below.
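The conversion from the (mean, COV) pairs of the input tables to lognormal parameters is a standard moment match, sketched below; a lognormal realization is strictly positive, so it cannot produce the zero that breaks the cost expressions.

```python
import numpy as np

def lognormal_params(mean, cov):
    """Moment matching: convert a (mean, c.o.v.) pair to the
    parameters of the underlying normal distribution of ln X."""
    zeta = np.sqrt(np.log(1.0 + cov**2))   # standard deviation of ln X
    lam = np.log(mean) - 0.5 * zeta**2     # mean of ln X
    return lam, zeta

# e.g. a strictly positive load F with mean 200 and 20% c.o.v.
lam, zeta = lognormal_params(200.0, 0.20)
F = np.random.default_rng(0).lognormal(lam, zeta)
```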
5.5.2 Risk Models

The risk model used in this analysis may need additional tuning for this example. In particular, if computation time is prohibitive, using probability thresholds of 1-99% may provide sufficiently precise results for the optimization model. It is imperative that the risk model provides precise calculations for the optimization model to function properly. Additional risk measures may also be desired to change what the optimization minimizes. It would also be of great benefit if DDM sensitivities were implemented, as this would reduce the computational demands and allow DDM gradients to be determined through all models.

5.5.3 Optimization Models

The optimization model in this analysis may require some extra consideration, particularly in the determination of the step size. The details depend upon the search method used but could include a line search to determine a variable step size. Another consideration is the gradient-norm convergence criterion, which requires an appropriate value that depends upon the final number of decision variables.

During the course of the analyses, some decision variables may be found to have little influence on the final cost of the structure. These may be removed to improve computational efficiency, keeping in mind that they could have significant impact in a different region of the problem space. Conversely, other decision variables could be added if they have a significant effect. If a decision variable does not show a significant gradient in the optimization model, the perturbation factor may be increased when FDM gradients are used.

5.6 Other Considerations

In the analysis of the one-storey FFTT frame in Section 4.2, the bases of the walls were assumed to be pin-ended. This was a simple assumption for a significant aspect of the model. While the panel base connection is expected to become less important as the frame grows taller, the modeling of this connection may warrant more consideration and analysis.

In this example's current formulation, there are no defined constraints on any of the decision variables; instead, every consideration is quantified in terms of costs. One reason is to avoid unnecessary complexity in the optimization algorithm; this also prevents convergence issues caused by highly non-linear constraint functions. Finally, the analysis is an attempt to find the best response of a system, not a design directly. A good design will ideally have a plateau around the optimum, meaning it is not highly susceptible to minor tweaks in the design. For these reasons, no specific bounds are placed on the optimization problems.

These models are currently formulated for a "once off" analysis, meaning that all models are run in every analysis. For structures that are analyzed frequently with slightly differing geometries, however, construction of a response surface model may be beneficial. This shifts the computational expense from analyzing every model to the initial construction of the response surface. It is also useful for determining the sensitivity of the response to a variable in highly implicit problems. A common quadratic form is sketched below.
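As an illustration, a quadratic response surface in the decision variables is a common choice for such a surrogate. The form below is a generic sketch, not an existing Rts model:

\hat{\mu}(\mathbf{d}) = a_0 + \sum_{i} b_i \, d_i + \sum_{i \le j} c_{ij} \, d_i d_j

where d_i are the decision variables and the coefficients a_0, b_i, and c_{ij} are fitted, for example by least squares, to mean costs computed at a set of trial designs. Subsequent geometries can then be evaluated and differentiated on the surrogate at negligible cost.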
Chapter 6 Conclusion

6.1 Summary of Research

By implementing risk and optimization models in Rts, it is now possible to perform risk minimization on a structure by varying deeply embedded parameters. This minimization reduces aggregated cost functions, which implement nested models, to optimize the mean cost of an event on a structure. The use of a risk measure allows the uncertainty of random variables to be included in all aspects of the cost functions, including construction and damage. In addition, the formulation of the cost function in this risk minimization specifies neither discrete events nor a failure probability; the risk measure determines the mean cost from a continuum of possible cost outcomes, determined using reliability methods.

The risk model is new within Rts and determines the mean value of a cost function. It accomplishes this by determining the loss curve with reliability techniques and requires no prior information. The mean cost is then determined using quadrature to compute the area under the loss curve. This value is used as the objective function of the optimization model, a new implementation in Rts using existing techniques. By varying the decision variables, the optimal design configuration for minimum mean cost may be determined.

This new formulation of risk minimization was tested using two examples. The first example was a single polynomial function with one decision variable. This function was optimized using two search methods, which both converged upon the minimum cost and optimal decision variable. The result could also be compared to a first-order analysis and a previously published optimization. Finally, this example was used to fine-tune the risk model parameters so that the risk measure is determined to adequate precision as efficiently as possible.

The second example was an analytical formulation of a single-storey FFTT frame. This analysis considered material costs, cost of damage, and limit state requirements of the design. It implemented many nested functions, which were optimized using the two search methods implemented in Rts to determine the optimal values of the two decision variables. These results were then compared to those of a first-order approximation, which differed significantly and inconsistently from the Rts risk formulation. In addition, this analysis helped show the behaviour of the frame with varied parameters. Notably, the connection strength of the CLT and the enforcement of the weak-beam, strong-column mechanism largely governed this analysis.

This analysis provided a stepping-stone to a future, larger FFTT frame example, for which the framework has been implemented and the geometry provided for future work. That model implements the basic geometry of the structure as a function of decision variables and employs an Rts in-house structural analysis that provides responses for the cost functions.

6.2 Future Work

While the risk and optimization models are fully implemented in Rts, future work could be performed on them. Some particular directions are discussed below.

The risk model in Rts currently calculates only the mean value of the cost of a design and event, which limits the optimization to reducing this characteristic of the cost. Adding further risk models could change the metric that is calculated and thus what is minimized. In addition, improving the precision of the model is essential for proper functioning of the optimization model. This is particularly important with BFGS search directions, which are sensitive to error, as the update formula below makes explicit.
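For context, the standard BFGS update (a textbook form, not specific to Rts) builds the approximate inverse Hessian H from differences of successive gradients, so any noise in the gradient of the mean cost \mu propagates into all later search directions:

s_k = d_{k+1} - d_k, \qquad y_k = \nabla\mu_{k+1} - \nabla\mu_k, \qquad \rho_k = \frac{1}{y_k^{T} s_k}

H_{k+1} = \left( I - \rho_k s_k y_k^{T} \right) H_k \left( I - \rho_k y_k s_k^{T} \right) + \rho_k s_k s_k^{T}

where d_k is the vector of decision variables at iteration k.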
In order to reduce computation times, implementing DDM sensitivities in the risk model would allow for easy and computationally efficient gradient calculations in the optimization algorithm. This may also include ensuring the FORM model's robustness in determining the probabilities at many locations.

The optimization model is formatted to be modular, taking input methods for the search direction, step size, and convergence criteria. This allows for easy addition of future models. In particular, a new variable step size model could be beneficial for optimization using the BFGS algorithm. The optimization model would also benefit significantly from the inclusion of DDM sensitivities in all other models, as this would reduce the need to perturb each decision variable individually.

As discussed in Chapter 5, the framework for a future 6-storey FFTT frame model is included. While the basics of the model are discussed, suggestions and goals for future study are also included. These include enabling an Rts in-house structural analysis in the example for the frame response. Some elements of the frame may require more detailed modeling, in particular the beam-panel connections and the base connections. In addition, the implementation of dynamic analyses and non-linear finite elements will expand the depth of analysis. Finally, implementing more elaborate cost functions will allow for improved values and understanding of the structure's optimized design. Ideally, these will allow specific design recommendations to be made for the FFTT concept.
Appendices

Appendix A: Reliability Analysis in Rt Implementing an OpenSees Structural Model

This Appendix is included as an operator's guide for implementing an OpenSees analysis in Rt. This functionality is included in Rt and allows the reliability analyses to provide probabilistic inputs to the OpenSees structural model and to receive responses in return. The use of an external model in Rt for structural responses is attractive because OpenSees is an advanced analysis program used widely in other analyses. A simple structural example is included to illustrate this use of both programs, with the code for the input files included in Section A.4.

Figure A.1 Interaction between Rt and OpenSees

The procedure of the information transfer between the two programs is displayed in Figure A.1.
Rt is used to define the random variables and the limit state function, and it performs the reliability analysis. The reliability algorithm uses the random variables and, based on the design point search algorithm, determines realizations of the random variables. These realizations are then sent as input parameters to the OpenSees structural model, which performs a deterministic analysis of this structural realization. The desired responses from this analysis are then sent back to Rt, and the limit state function is evaluated. Based upon this result, the reliability analysis continues to iterate as necessary.

The transfer of responses from OpenSees to Rt may be performed using two methods: by creating response data files or by sending Rt single response commands. The method chosen depends upon the use of the responses in the analysis. Developing a database of responses favours creating and saving the files in OpenSees, which can then be opened by Rt as well as any other text or data program. In comparison, if only a single response is desired for the reliability analysis, sending it to Rt as a command may be more efficient. Both methods are described below, with sample code included, intended to aid future implementation by an unfamiliar operator. While text input files may be used to develop analyses in Rt, this guide focuses on using the program's user interface. For simplicity with directories, the Rt input file, the OpenSees input file, and the OpenSees executable should all be within the same folder.

A.1 Probabilistic Input Parameters

Rt allows the user to define probabilistic distributions for random variables. This is done by creating new RVs under Models>Parameter>Random Variable>Continuous and then adjusting the properties of each RV.

These RVs need to be shared with OpenSees. To do this, the external OpenSees model must be defined in Rt under Models>Model>External Software>OpenSees. A new model may be defined there; its Properties menu includes a "Parameter List," which defines the parameters in Rt that are sent to OpenSees when the model is called. The OpenSees input file is also defined in this menu. This is shown in the example code on lines #9-37 (the random variables and the OpenSees model definition).

In OpenSees, no additional parameter definition is required. To use these values, they simply need to be called by the placeholder "$randomvariable". An example of this use is on code line #171. It is important not to define these values in OpenSees, as doing so would overwrite the reliability analysis values and give erroneous results.

A.2 File Response

One method of obtaining results from OpenSees in Rt is to create data files with the desired responses. This is performed by creating an output file in OpenSees, for example by defining a "recorder" for node displacement. From this recorder, a data file is created in the desired directory. This provides a separate record of the response, but it is overwritten by default in each subsequent iteration of the Rt analysis. This is done in the OpenSees example code lines #190-200.

To read these files in Rt, the response may be defined in Models>Parameter>Response>File. The output file location is specified in the Properties menu, where a specific row and column of the file may also be specified. This allows Rt to read the response from the output file, which may then be used for the reliability analysis. The Rt example code illustrates this on lines #40-45. A sketch of such an output file is given below.
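To make the row and column indexing concrete: the recorder in the example writes one row per load step, with the pseudo-time in column 1 followed by the three response components of node 2 (x-displacement, y-displacement, and rotation). With ten load-control steps, the converged displacement therefore sits in row 10, column 2. The following sketch of the final row is hypothetical; the magnitudes correspond to the mean parameter values, the signs depend on the sign convention, and the annotation lines are added here (the actual file contains numbers only):

    # NodeDisplacement.out, row 10 (illustrative)
    # time      dispX     dispY     rotZ
      1.0000    5.6250    0.0000    0.0028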
A.3 Command Response

The other option for passing responses from OpenSees to Rt is the command response. Unlike the file response, this takes a command output directly from OpenSees without the additional creation of a file.

To perform this, define a command response in Rt under Models>Parameters>Response>Command. In the properties of this response, the OpenSees model is specified, as well as the command used in the model. This command is then run in OpenSees during the analysis to obtain the response. An example of this is shown in the Rt example code lines #48-51. No command is required in OpenSees, but an example of displaying the value in the command pane is shown in the example code on line #225.

A.4 Example Code

To illustrate the use of OpenSees as a structural response model in Rt, an example is given: a simple structure consisting of a cantilever beam, as shown in Figure A.2. The cantilever has length L, moment of inertia Iz, modulus of elasticity Es, and a lateral load Wx applied at the beam end. The measured response is the flexural deflection D of the beam at the location of the load. The structure is modeled in OpenSees using an elasticBeamColumn element, with node 1 at the fixed end and node 2 at the free end. This example file was modified from the "Time History Analysis of a 2D Elastic Cantilever Column" example, available freely online (opensees.berkeley.edu/wiki).

Figure A.2 Schematic of the example structure

A reliability analysis implementing FORM is performed in Rt. The purpose of this analysis is to predict the probability that the deflection of the beam, Disp (the notation in Rt; given as D in OpenSees), exceeds 6 mm (code line #55). All of the beam parameters have been given nominal values for the mean and coefficient of variation and an assumed distribution, as shown in Table A.1. These random variables must all be defined in consistent units (here kN and mm; note that 200 GPa equals 200 kN/mm^2). The COVs of the parameters are arbitrary, based roughly upon the variability expected.

Table A.1 Example parameters

Variable   Distribution   Mean        COV    Units
L          Normal         3000        0.05   mm
Es         Normal         200         0.01   GPa
Iz         Normal         40 x 10^6   0.05   mm^4
Wx         Normal         5           0.2    kN

A.5 Rt Code

The code below is to be copied to a .txt file to be used as an input file in Rt.
1    // EXAMPLE BEAM RELIABILITY ANALYSIS IN RT
2    // BY ALFRED LARSEN, JANUARY 2014
3    // An example to test Rt and OpenSees functionality by testing
4    //   a cantilever beam with random variables for load, length,
5    //   and stiffness (modulus of elasticity and moment of
6    //   inertia)
7
8
9    // RANDOM VARIABLES
10   RContinuousRandomVariable |ObjectName: L |CurrentValue: 3000
11     |DistributionType: Normal (mean, stdv) |Mean: 3000
12     |StandardDeviation: 150 |CoefficientOfVariation: 0.05
13     |Parameter1: 3000 |Parameter2: 150 |Parameter3: 0
14     |Parameter4: 0 |UncertaintyType: Aleatory
15   RContinuousRandomVariable |ObjectName: Wx |CurrentValue: 5
16     |DistributionType: Normal (mean, stdv) |Mean: 5
17     |StandardDeviation: 1 |CoefficientOfVariation: 0.2
18     |Parameter1: 5 |Parameter2: 1 |Parameter3: 0 |Parameter4:
19     0 |UncertaintyType: Aleatory
20   RContinuousRandomVariable |ObjectName: Es |CurrentValue: 200
21     |DistributionType: Normal (mean, stdv) |Mean: 200
22     |StandardDeviation: 2 |CoefficientOfVariation: 0.01
23     |Parameter1: 200 |Parameter2: 2 |Parameter3: 0
24     |Parameter4: 0 |UncertaintyType: Aleatory
25   RContinuousRandomVariable |ObjectName: Iz |CurrentValue:
26     40000000 |DistributionType: Normal (mean, stdv) |Mean:
27     40000000 |StandardDeviation: 2000000
28     |CoefficientOfVariation: 0.05 |Parameter1: 40000000
29     |Parameter2: 2000000 |Parameter3: 0 |Parameter4: 0
30     |UncertaintyType: Aleatory
31
32
33   // DEFINE OPENSEES MODEL (PARAMETERS USED IN OPENSEES ARE
34   //   DEFINED HERE USING THE RANDOM VARIABLES FROM ABOVE)
35   ROpenSeesModel |ObjectName: BeamModel |DisplayOutput: true
36     |ParameterList: L; Wx; Iz; Es; |ExecutableFile: OpenSees
37     |InputFile: ExampleBeamReliabilityOpenSees.txt
38
39
40   // DEFINE LOCATION OF OPENSEES RESPONSE
41   // THIS USES A "FILE RESPONSE," WHERE THE RESPONSE VALUE IS
42   //   TAKEN FROM AN OUTPUT FILE GENERATED IN OPENSEES
43   RFileResponse |ObjectName: Disp |CurrentValue: 5.625 |Model:
44     BeamModel |ResponseFile: NodeDisplacement.out |Maximum:
45     true |Absolute: true |Row: 10 |Column: 2
46
47
48   // ANOTHER OPTION IS "COMMAND RESPONSE," WHERE OPENSEES GIVES
49   //   A RESPONSE TO RT DIRECTLY VIA THE OUTPUT SCREEN
50   // RCommandResponse |ObjectName: Disp |CurrentValue: 0 |Model:
51   //   BeamModel |Command: puts NodeDisp |Absolute: true
52
53
54   // LIMIT STATE FUNCTION
55   RFunction |ObjectName: DispLSF |Expression: 6 - Disp
56     |GradientAnalysisType: FiniteDifference
57     |PerturbationFactor: 1000 |EfficientPerturbation: true
58
59
60
61   // RT ANALYZERS
62   RStepperNonlinSingleConstrSolver |ObjectName: mySolver
63     |OutputDisplayLevel: Minimum |StartPoint: Mean
64     |StepSizeSearcher: myStepSizeSearcher
65     |StepDirectionSearcher: myStepDirectionSearcher
66     |Transformer: myTransformer |ConvergenceChecker:
67     myConvergenceChecker |MaximumIterations: 100
68
69   RArmijoStepSizeSearcher |ObjectName: myStepSizeSearcher
70     |OutputDisplayLevel: Minimum |Transformer: myTransformer
71     |MeritChecker: myMeritChecker |MaximumReductions: 10
72     |Base: 0.5 |InitialStepSize: 1 |InitialStepsCount: 2
73     |SphereRadius: 50 |SphereDistance: 0.1 |SphereEvolution:
74     0.5
75
76   RAdkZhangMeritChecker |ObjectName: myMeritChecker
77     |OutputDisplayLevel: None |Multiplier: 2 |Adder: 10
78     |Factor: 0.5
79
80   RHLRFStepDirectionSearcher |ObjectName:
81     myStepDirectionSearcher
82
83   RNatafTransformer |ObjectName: myTransformer
84     |OutputDisplayLevel: None
85
86   RStandardConvergenceChecker |ObjectName: myConvergenceChecker
87     |OutputDisplayLevel: Minimum |E1: 0.001 |E2: 0.001
88
89   RIndependentNormalRandomNumberGenerator |ObjectName:
90     myRandomNumberGenerator |StartPoint: CurrentValue
91     |StandardDeviation: 1 |Seed: 0 |Transformer:
92     myTransformer
93
94   RHistogramAccumulator |ObjectName: myHistogramAccumulator
95     |OutputDisplayLevel: None |MaximumIterations: 100000
96     |PlottingInterval: 100 |NumberOfBins: 100
97
98   RFailureProbabilityAccumulator |ObjectName:
99     myFailureAccumulator |OutputDisplayLevel: None
100    |MaximumIterations: 1000000 |PlottingInterval: 100
101    |TargetCoefficientOfVariation: 0.05
102    |RandomNumberGenerator: myRandomNumberGenerator
103
104  RFORMAnalyzer |ObjectName: myFORMAnalysis |LimitStateFunction:
105    DispLSF |NonlinearSingleConstraintSolver: mySolver
106    |ComputeRandomVariableSensitivities: true
107    |ComputeDecisionVariableSensitivities: true
108    |ComputeModelResponseStandardDeviationSensitivities: true
109    |PrintSensitivities: true
110    |CorrectProbabilityWithFirstPrincipalCurvature: false
111
112  RSamplingAnalyzer |ObjectName: mySamplingAnalysis
113    |RandomNumberGenerator: myRandomNumberGenerator
114    |Transformer: myTransformer |Accumulator:
115    myFailureAccumulator
116
117  RFunctionEvaluationAnalyzer |ObjectName:
118    myFunctionEvaluationAnalysis |Function: DispLSF
119    |EvaluateGradient: false |SetRandomVariablesToMean: false
120    |PrintRandomVariableList: false |PlotModelFlowchart: true
121
122  RFOSMAnalyzer |ObjectName: myFOSMAnalysis |Function: DispLSF
123    |PrintCorrelationMatrix: true |PrintCovarianceMatrix:
124    false

A.6 OpenSees Code

The example code that follows is for use in OpenSees and is to be pasted into a text input file. It is not necessary for the user to run OpenSees directly, as it is called through the execution of the Rt analysis. Of note, the model may also be run independently as a deterministic structural analysis if lines #160-166 have the comments removed.
125  ###########
126  # Beam reliability example
127  # By Alfred Larsen, Jan 2014
128  # Modified model of OpenSees Example Manual "Time History
129  #   Analysis of 2D Elastic Cantilever Column"
130  # Modified to create a cantilever beam with a lateral load
131  #   applied at top to measure displacement that may be used
132  #   for reliability analysis in Rt
133  ###########
134  #
135  #          <--D-->
136  #  Wx----->2
137  #  |
138  #  |
139  #  | Nodes as indicated
140  #  | Length L = 3000mm
141  #  | Modulus of elasticity E = 200 GPa
142  #  | Moment of inertia I = 40e6
143  #  |
144  #  |
145  #  ________1________
146  #
147  #
148  ##########
149  # All units in kN, mm
150
151
152  # Remove existing model
153  #wipe
154
155
156  # Create the model builders
157  model BasicBuilder -ndm 2 -ndf 3
158
159
160  # Set parameters (commented out for reliability analysis)
161  # Parameters varied for reliability analysis are Es, L, Wx, Iz
162  #   as seen above
163  #set Es 200
164  #set L 3000
165  #set Wx 5
166  #set Iz 40e6
167
168
169  # Define nodes
170  node 1 0 0
171  node 2 0 $L
172
173
174  # Fix nodes/define boundary conditions
175  # fix $nodeTag (ndf $constrValues)
176  fix 1 1 1 1
177
178
179  # Define transformation
180  geomTransf Linear 1
181
182
183  # Define beam
184  # element elasticBeamColumn $eleTag $iNode $jNode $A $E $Iz
185  #   $transfTag <-mass $massDens>
186  element elasticBeamColumn 1 1 2 3630 $Es $Iz 1
187  # no mass of beam is considered
188
189
190  # Define recorder
191  # Currently the Rt analysis is set to use a "file response"
192  #   that references this file
193  # recorder Node <-file $fileName> <-xml $fileName> <-binary
194  #   $fileName> <-tcp $inetAddress $port> <-precision $nSD>
195  #   <-timeSeries $tsTag> <-time> <-dT $deltaT> <-closeOnWrite>
196  #   <-node $node1 $node2 ...> <-nodeRange $startNode
197  #   $endNode> <-region $regionTag> -dof ($dof1 $dof2 ...)
198  #   $respType
199  recorder Node -file NodeDisplacement.out -time -node 2 -dof 1 \
200    2 3 disp
201
202
203  # Apply load
204  timeSeries Linear 1
205
206
207  # Load pattern 1 is force in kN on node 2 in +x direction
208  pattern Plain 1 1 {
209    load 2 $Wx 0. 0.
210  }
211
212
213  constraints Plain
214  numberer Plain
215  system BandGeneral
216  algorithm Linear
217  integrator LoadControl 0.1
218  analysis Static
219  analyze 10
220
221
222  # Print to screen
223  # This can be used for a "command response" from OpenSees;
224  #   it is not used by the Rt file here
225  puts "Node 2 displacement: [nodeDisp 2]"
226  puts "Force: $Wx kN"
227  puts "Length: $L mm"
228  puts "Moment of inertia: $Iz mm^4"
229  puts "Modulus of elasticity: $Es GPa"
230  print node 2
231  print element
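As a closing hand check (added here; not part of the original appendix), the mean-value deflection from elementary beam theory matches the CurrentValue of 5.625 mm assigned to Disp in the Rt input file:

D = \frac{W_x L^3}{3 E_s I_z} = \frac{5 \times 3000^3}{3 \times 200 \times 40 \times 10^6} = 5.625 \text{ mm}

with E_s = 200 kN/mm^2 (200 GPa). With a mean deflection this close to the 6 mm threshold of the limit state function, a substantial exceedance probability is expected, which makes the example a meaningful test of the FORM and sampling analyses.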
