Applied Science, Faculty of
Civil Engineering, Department of
DSpace
UBCV
Bohl, Alejandro
2009-08-17T15:24:06Z
2009
Master of Applied Science - MASc
University of British Columbia
The structural engineering community is currently exploring the concept of performance-based earthquake engineering (PBEE). In an effort to amend the code-oriented practice, in which life safety is the primary concern, predictions are made regarding the cost and downtime associated with damage. A typical result is the “loss curve,” which represents the annual probability of exceeding various cost thresholds. Such predictions are useful to improve decision making related to structural design by enabling stakeholders to consider the cost of possible future damage, in addition to the construction costs.
Substantial progress has been made in the field of PBEE in the last few years. Most of these developments use structural response parameters, such as inter-storey drifts, as performance measures. This first generation PBEE is now being used by some engineers in the practicing community. However, most practicing engineers are unfamiliar with second generation PBEE, which focuses on economic loss. In this paper, PBEE is first contrasted with code-oriented design, with emphasis on how it helps engineers communicate with different stakeholders. Next, a comparison between two different PBEE methods, namely the ATC-58 approach and the unified reliability approach, is made.
An example with a three-storey office building is presented, with detailed description of the hazard, structure, damage, and loss modeling. The different approaches to PBEE are contrasted along several axes, including accuracy, computational cost and convergence. It is found that each approach has unique merits, and that the synergy from combining certain aspects from different approaches can be significant.
https://circle.library.ubc.ca/rest/handle/2429/12261?expand=metadata
1037366 bytes
application/pdf
COMPARISON OF PERFORMANCE BASED ENGINEERING APPROACHES

by

ALEJANDRO BOHL
B.S., Universidad Peruana de Ciencias Aplicadas, Peru, 2005

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF APPLIED SCIENCE in THE FACULTY OF GRADUATE STUDIES (Civil Engineering)

THE UNIVERSITY OF BRITISH COLUMBIA (Vancouver)

August 2009

© Alejandro Bohl, 2009

TABLE OF CONTENTS

ABSTRACT
TABLE OF CONTENTS
LIST OF TABLES
LIST OF FIGURES
ACKNOWLEDGEMENTS
CHAPTER 1: INTRODUCTION
  1.1 Background
  1.2 Objectives
  1.3 Organization of the Thesis
CHAPTER 2: LITERATURE REVIEW
  2.1 Evolution of Earthquake Engineering
  2.2 Performance-Based Earthquake Engineering Methodologies
  2.3 Comparison between Code-Based and Performance-Based Design
CHAPTER 3: UNIFIED RELIABILITY APPROACH
  3.1 Unified Reliability Analysis
  3.2 First Order Reliability Method (FORM)
  3.3 Sampling
CHAPTER 4: ATC-58 APPROACH
  4.1 Hazard Model
  4.2 Response Model
  4.3 Damage Model
  4.4 Cost Model
  4.5 Sampling Analysis
  4.6 Relationship between the ATC-58 and the Unified Reliability Approaches
CHAPTER 5: STEP BY STEP PBEE
  5.1 Step 1: Select Performance Measures
  5.2 Step 2: Select Structural Analysis Method
  5.3 Step 3: Select Hazard Model
  5.4 Step 4: Estimate Damage
  5.5 Step 5: Estimate Loss
  5.6 Step 6: Uncertainties
  5.7 Step 7: Select Analysis Programs
CHAPTER 6: CASE STUDY
  6.1 Building Description
  6.2 ATC-58 Methodology
    6.2.1 Hazard Model
    6.2.2 Response Model
    6.2.3 Damage Model
    6.2.4 Loss Model
  6.3 Smoothing Fragility Curves
  6.4 Mehanny-Deierlein Damage Model
CHAPTER 7: ANALYSIS RESULTS
  7.1 Computation of the Total Loss Curve
  7.2 Comparison of Reliability Methods
    7.2.1 Accuracy
    7.2.2 Computational Cost
    7.2.3 Convergence
CHAPTER 8: CONCLUSIONS AND RECOMMENDATIONS
  8.1 Conclusions
  8.2 Recommendations for Future Work
REFERENCES
APPENDIX A: Procedure to Generate Additional EDPs
APPENDIX B: Repair Quantities

LIST OF TABLES

Table 6.1 Building weight and modes
Table 6.2 Ground motions representing 50% in 50 years hazard level
Table 6.3 Ground motions representing 10% in 50 years hazard level
Table 6.4 Ground motions representing 5% in 50 years hazard level
Table 6.5 EDP matrix at 50% in 50 years hazard
Table 6.6 EDP matrix at 10% in 50 years hazard
Table 6.7 EDP matrix at 5% in 50 years hazard
Table 6.8 Performance groups
Table 6.9 Definition of fragility curves
Table 6.10 PG1 repair quantities
Table 6.11 Unit costs
Table 6.12 Repair cost for each performance group
Table 7.1 Computational cost

LIST OF FIGURES

Figure 2.1 Performance objectives (Vision 2000)
Figure 2.2 PEER methodology framework
Figure 3.1 Models
Figure 3.2 Unified reliability analysis
Figure 3.3 Probabilistic analysis model
Figure 3.4 FORM approximation
Figure 3.5 Monte Carlo sampling analysis
Figure 3.6 Importance sampling analysis
Figure 4.1 Fragility curve
Figure 4.2 Cost CDF for a given return period
Figure 4.3 ATC-58 approach
Figure 5.1 Step by step PBEE
Figure 6.1 Plan view (Applied Technology Council 2009)
Figure 6.2 Uniform hazard spectra
Figure 6.3 Fragility curves for structural system
Figure 6.4 Fragility curves for exterior enclosure
Figure 6.5 Fragility curves for drift sensitive non-structural elements
Figure 6.6 Fragility curves for acceleration sensitive non-structural elements
Figure 6.7 Fragility curves for office content
Figure 6.8 Fragility curves for roof equipment
Figure 6.9 Unit cost function
Figure 6.10 Fragility curve for performance group 1
Figure 6.11 Fitted profile
Figure 6.12 Fitted fragility curve for performance group 1
Figure 7.1 Cost CDF
Figure 7.2 Loss curve
Figure 7.3 Cost CDF for a 5% in 50 years hazard level
Figure 7.4 Cost CDF for a 10% in 50 years hazard level
Figure 7.5 Cost CDF for a 50% in 50 years hazard level
Figure 7.6 Loss curves
Figure 7.7 Model "waviness"
Figure 7.8 Convergence problems

ACKNOWLEDGEMENTS

I would like to thank my supervisor, Dr. Terje Haukaas, for his support and guidance throughout this research. I would also like to thank my co-supervisor, Dr. Kenneth Elwood, for his valuable input. I would like to thank the members of the infrastructure risk strategic research project: Majid Baradaran, Dr. Stephanie Chang, Dr. Ricardo Foschi, Edwin Guerra, Dr. Smitha Koduru, Mojtaba Mahsuli, Karthick Pathman, Dr. Robert Sexsmith and Shahrzad Talachian. This work would not have been possible without their input and contributions of knowledge. I would also like to thank the Natural Sciences and Engineering Research Council of Canada for funding this project. I am also grateful to the Department of Civil Engineering and the Faculty of Graduate Studies for granting me the University of British Columbia Graduate Fellowship and the UMA Scholarship in Civil Engineering. I would especially like to thank my friends for making my time at UBC very enjoyable.

CHAPTER 1: INTRODUCTION

1.1 Background

Traditional engineering practice is based on prescriptive codes. Codes provide engineers with the instructions they should follow in order to design safe and economical structures. The objective of the code is to protect human life and, therefore, it focuses on attaining the performance level of life safety by preventing collapse. Past earthquakes, such as Northridge and Loma Prieta, have shown that modern codes attain this objective, since the numbers of casualties were relatively low. However, they also demonstrated that buildings that complied with code provisions suffered great economic losses that were not acceptable to users and owners. This motivated engineers to take the next step in earthquake engineering and protect not only human life, but also the functionality of a structure and the investment made in a building.
To achieve these new objectives, the structural engineering community, particularly in regions with high seismic hazard, is exploring the concept of performance-based earthquake engineering (PBEE). PBEE implies the design or assessment of structures to withstand, as economically as possible, the uncertain future demands that users and nature will put upon them. It is based on the premise that performance objectives can be defined in a quantitative manner and that performance can be predicted. Its objective is to provide stakeholders with the necessary information to make rational and informed decisions based on life-cycle considerations (Krawinkler et al. 2006). There has been substantial progress in PBEE in the last few years. However, most of the developments have been in methodologies that define discrete performance levels, like "collapse prevention", in terms of the structural response, like inelastic rotations. This thesis focuses on a novel approach to PBEE in which performance is defined by the economic loss that a structure suffers during a seismic event. In this work the sources of loss are the structural elements, non-structural elements and contents of the building; nevertheless, they could also include downtime and number of casualties. This new approach allows engineers to communicate more efficiently with other stakeholders. Performance-based engineering is interpreted in different ways by the engineering community. In this thesis, a distinction is made between first and second generation PBE. First generation PBE refers to assessing the performance level of the structure based on structural response parameters for discrete hazard levels. Second generation PBE assesses the performance of the structure based on the loss it suffers during a seismic event. Additionally, this assessment is made over a continuous range of hazard levels.
1.2 Objectives

The main objective of this thesis is to compare two performance-based engineering approaches: the ATC-58 approach and the unified reliability analysis approach. The comparison is made along several axes: computational cost, explicit accounting for uncertainty, accuracy, flexibility and convergence. The study is performed on a three-storey steel moment-frame building. The ground motion records, the building model and the information about the repair cost of the structural elements, non-structural elements and contents of the building were obtained from the ATC-58 project (Applied Technology Council 2009). This information is vital to carry out a performance-based assessment. To help carry out the assessment, a script was written in Tcl (ActiveState 2009) to provide this information to the OpenSees structural analysis and reliability modules. OpenSees (the Open System for Earthquake Engineering Simulation) is open-source software for the simulation of the seismic response of structural systems. It has been developed at the Pacific Earthquake Engineering Research Center (PEER) (Pacific Earthquake Engineering Research Center 2009). The secondary objective is to encourage the practicing engineering community to start using performance-based assessment. To achieve this, a step-by-step guide to carry out a performance-based assessment is presented. It is hoped that these guidelines will encourage the practicing community to take advantage of these new developments.

1.3 Organization of the Thesis

The thesis is divided into eight chapters. Chapter 1 is the introduction. It presents the background and objectives of the thesis. Chapter 2 is the literature review. First, it presents the evolution of earthquake engineering into PBEE. Then, the different methods to carry out a performance-based assessment are introduced. Later, the vision for the future of PBEE is presented, as well as topics of future research.
Finally, it describes the differences between code-based engineering and PBEE, focusing on how PBEE helps engineers communicate with different stakeholders. Chapter 3 presents the unified reliability approach. First, the framework of the unified reliability approach is described, highlighting its flexibility. Later, the reliability methods that can be used with this methodology are compared. Chapter 4 presents the ATC-58 approach. A comparison is made between the ATC-58 and the unified reliability approaches. Then a description of the models used by the ATC-58 approach is given. Chapter 5 presents a step-by-step guide to carry out a performance-based assessment. Chapter 6 presents the performance-based assessment of a steel frame building. The assessment is done using the ATC-58 approach and the unified reliability approach. Chapter 7 presents the theoretical basis to compute a loss curve, which synthesises the results of a performance-based assessment. Then, it compares the results of both assessments making use of the loss curves. Chapter 8 presents the conclusions of the thesis and suggestions for future research.

CHAPTER 2: LITERATURE REVIEW

2.1 Evolution of Earthquake Engineering

At the beginning of the 20th century, the earthquakes in San Francisco, USA (1906), Messina, Italy (1908) and Kanto, Japan (1923) produced devastating losses and casualties, motivating engineers to develop solutions to the earthquake hazard. This led to the development of the equivalent static method, which consisted of applying a percentage of the weight of the building as a lateral load. At the same time, the importance of directly measuring the ground acceleration was noted, and the first accelerograph networks were installed in Japan and the United States. In 1925, the earthquake at Santa Barbara, USA caused considerable damage but a small number of deaths.
This made engineers aware of the importance of designing buildings to withstand earthquake forces, leading to an increase in research efforts to measure the natural periods of buildings and in shake table testing. Also, the first records obtained from the accelerograph networks installed in previous years began to be used for the improvement of earthquake engineering knowledge. The lack of adequate performance encouraged engineers, architects, owners, underwriters and bankers to create better building codes, leading to the adoption of the Uniform Building Code. This may have been one of the most important advances made at that time. At first (1927), a constant coefficient, C, was used to determine the percentage of the weight that should be applied as a lateral load. Later, the factor was refined so that it would be based on some dynamic properties of the structure: first the number of stories (1943) and later the natural period of the building (1952). More recent earthquakes, such as Alaska, USA (1964); Mexico City, Mexico (1985); Loma Prieta, USA (1989); Northridge, USA (1994); Kobe, Japan (1995) and Kocaeli, Turkey (1999) have continued to push engineering knowledge further, leading to more refined and better designs. Finally, in the last few decades, the advance in computer technology has facilitated the use of more sophisticated techniques in building design and a sizable reduction in the amount of time required to analyze a structure. This has led to increased productivity and more efficient designs (Bozorgnia and Bertero 2004). The consequences of the most recent seismic events have shown that the implementation of the accumulated earthquake engineering knowledge into codes has successfully reduced the risk to human life. However, they have also demonstrated that the economic and social losses are not being addressed adequately. The economic losses due to recent earthquakes have not been reduced; in fact, these losses have increased.
Many engineers and researchers have now focused their efforts on significantly reducing the economic losses caused by an earthquake. As a result, the concept of performance-based earthquake engineering (PBEE) was developed, and its practical implementation is now under development (Krawinkler et al. 2006; Moehle 2005; Ellingwood 2008). This concept is based on the idea that performance can be predicted with confidence, enabling rational decisions based not only on construction costs, but also on potential replacement costs from earthquake damage. Under PBEE, engineers will be able to design for any performance objective that the stakeholders desire. Its scope reaches far beyond life safety, including the continuing operation of critical infrastructure after a devastating event, or maximizing the return on a client's investment. To achieve this objective, seismic design will have to change its practice from an empirical, experience-based approach to a more scientifically oriented prediction of performance. Today's practice is code oriented. It is expected that an adequate design will result from following code provisions. These codes try to achieve the performance level of life safety by using prescriptive measures. A classic example of how code provisions have become "more performance-based" is the base shear equation used in codes all around the world (see Equation 2.1). This equation has several factors that relate to the seismicity of the region, the natural period of the structure, S, the characteristics of the structural system, R, and the soil characteristics, F. However, originally each one of these factors was mostly based on good engineering judgement, disregarding the great uncertainty in the ground motion's frequency content and intensity. Later, with the ATC-3-06 document, many of these factors became based on first principles, like seismic risk maps that had been developed from a seismic hazard analysis.
Other factors still remained empirical, such as the response modification factor, R.

V = (F · Sa · Mv · IE · W) / (Rd · Ro)    (2.1)

It is implied that by conforming to the code requirements on strength and detailing, a design that will ensure life safety and collapse prevention under strong ground motions will be obtained. At the same time, conforming to strength and drift limit requirements will control damage under a less severe earthquake (Krawinkler 2004). However, it is not possible to assess whether or not these equations will provide the "performance" that is intended by the code.

2.2 Performance-Based Earthquake Engineering Methodologies

In an effort to explicitly describe the desired performance of a structure, various guidelines have been published on first generation PBEE methodologies. The most important ones are Vision 2000 (SEAOC 1995), FEMA 273/356 (FEMA 273 1996; FEMA 356 2000), ASCE-41 (ASCE/SEI 41-06 2007) and ATC-40 (ATC-40). Although they do have some minor differences, the substance is similar. However, a difference worth noting is that all of them focus on existing buildings, except for the Vision 2000 document. These performance-based methodologies consist of three steps. The first step is to establish the performance level that is expected for different hazard levels. Hazard levels are typically defined by the probability of being exceeded in a given amount of time (e.g. 2% in 50 years). For each of the chosen hazard levels, the performance objective should be described in terms of quantifiable decision variables such as drifts, plastic rotations, damage or loss. The limits on these decision variables are not decided by the engineer; they are prescribed by the guideline that is being employed.
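The base shear of Equation 2.1 is simply a product of factors divided by the force modification terms. The sketch below illustrates the arithmetic; the factor names follow an NBCC-2005-style convention and every numeric value is an invented placeholder, not data from the thesis:

```python
# Illustrative base shear calculation (Equation 2.1).
# NBCC-2005-style factor names; all values are assumed for illustration.

def base_shear(F, Sa, Mv, IE, W, Rd, Ro):
    """Equivalent static base shear V = F*Sa*Mv*IE*W / (Rd*Ro)."""
    return F * Sa * Mv * IE * W / (Rd * Ro)

V = base_shear(
    F=1.0,       # site (soil) coefficient
    Sa=0.33,     # design spectral acceleration at the fundamental period (g)
    Mv=1.0,      # higher-mode factor
    IE=1.0,      # importance factor (normal occupancy)
    W=25_000.0,  # seismic weight (kN)
    Rd=4.0,      # ductility-related force modification factor
    Ro=1.5,      # overstrength-related force modification factor
)
print(round(V, 1))  # 1375.0 kN
```

Note how the two reduction factors in the denominator, which carry the empirical judgement discussed above, scale the elastic demand down by a factor of six in this example.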
An example of a performance objective is the basic objective shown in Figure 2.1: "fully operational" for a frequent event (43-year return period), "operational" for an occasional event (72-year return period), "life safe" for a rare event (475-year return period) and "near collapse" for a very rare event (970-year return period).

Figure 2.1 Performance objectives (Vision 2000): earthquake design levels (frequent, 43 years; occasional, 72 years; rare, 475 years; very rare, 970 years) mapped to performance levels (fully operational, operational, life safe, near collapse), with the remaining combinations labelled unacceptable performance.

The next step is to determine the response of the structure for each hazard level. It is clear that the predicted response will heavily depend on the characteristics of the seismic input, the modeling of the structural elements and materials, and the analysis method that is employed. Most performance-based methodologies recommend the use of nonlinear analysis methods, since the motivation for developing these methods is to move away from traditional linear design. The final step is to assess the performance by translating the response of the structure into the decision variables that were established in step one. This is the step that makes these methodologies "performance-based" and is also the most challenging step. It is very difficult to make this transformation in a convenient and yet rational manner. Clearly, the use of first generation PBEE methodologies is already a significant improvement over current code procedures, but it should not prevent engineers from striving for a complete reliability-based assessment (Chen and Lui 2006). A reliability-based performance assessment should provide decision makers with a quantitative basis to compare the benefits of each design. This means that the results of the assessment should be expressed in a way that all stakeholders can understand, such as a dollar loss.
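The return periods and exceedance probabilities used to define hazard levels above are related, under the usual Poisson occurrence assumption, by T = -t / ln(1 - p), where p is the probability of exceedance in a window of t years. A minimal check:

```python
import math

def return_period(p_exceed, window_years):
    """Return period (years) implied by a probability of exceedance
    over a time window, assuming Poisson earthquake occurrences:
    T = -t / ln(1 - p)."""
    return -window_years / math.log(1.0 - p_exceed)

print(round(return_period(0.50, 50)))  # 50% in 50 years -> 72 years
print(round(return_period(0.10, 50)))  # 10% in 50 years -> 475 years
print(round(return_period(0.02, 50)))  # 2% in 50 years  -> 2475 years
```

The same formula reproduces the familiar 72-year "occasional" and 475-year "rare" levels directly from their stated exceedance probabilities.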
It is also important that stakeholders can compare seismic risk with other risk factors that they may face. Accounting for all the uncertainties in a seismic event, as well as in the response of the structure and the consequent damage, is a considerable task. Several sources of uncertainty exist, including the earthquake source, its path, the site characteristics, material properties, and member and system capacities. Ideally, a performance-based approach should account for all sources of uncertainty, both aleatory (irreducible and inherent to the characteristics of the phenomenon) and epistemic (reducible; typically modeling error or lack of data). There are models that take into account many of these sources of uncertainty to predict the seismic hazard and the structural response. However, most of them are complex and time consuming. A major step toward an integrated probabilistic approach to PBEE has been taken by the Pacific Earthquake Engineering Research (PEER) Center. PEER has invested several years in the development of the ATC-58 methodology, which has distinct advantages over other existing methodologies: it recognizes the importance of incorporating uncertainties in every step of the assessment process, which allows an estimation of the sensitivity of the decision variables to the uncertain input, and it acknowledges the importance of representing performance in terms of economic loss, which is more comprehensible to other stakeholders. The framework to estimate the decision variables, DVs, is explained in the following. First, the hazard is quantified by intensity measures, IMs, which should comprehensively define the seismic input to the structure. Given the IM, the engineering demand parameters, EDPs, are computed, typically using nonlinear analyses. To relate the EDPs to the DVs, an intermediate step is taken.
The EDPs are related to damage measures, DMs, which represent the repair efforts needed to restore the component to its original state. Finally, the DVs are evaluated given the DMs. These steps are shown in Figure 2.2 and can be expressed by the following equation, in accordance with the total probability theorem:

F(dv) = ∫∫∫ F(dv|dm) · f(dm|edp) · f(edp|im) · f(im) d(dm) d(edp) d(im)   (2.2)

where F(·|·) denotes a conditional cumulative distribution function (CDF), f(·) denotes a probability density function (PDF), and all three integrals run from 0 to ∞.

[Figure 2.2 PEER methodology framework: intensity measure (IM, e.g. spectral acceleration) → analysis model → engineering demand parameter (EDP, e.g. interstory drift) → fragility curves → damage measure (DM, e.g. damage state) → loss function → decision variables (DV, e.g. repair cost)]

The PEER methodology focuses on estimating the performance of buildings quantitatively, by continuous variables such as the economic loss, instead of the discrete qualitative performance levels that first generation PBEE methodologies employ. This quantitative amount can be used by stakeholders to make more rational decisions in their risk management strategies. Ways to report economic loss include the expected annual loss, the mean annual frequency of exceeding a given dollar loss, and the maximum probable loss (EERI 2000). The most common way to express economic loss is in terms of the repair cost. In the PEER methodology the total repair cost is obtained by adding the repair cost of each individual component in the structure. These costs are not continuous because they are associated with repair actions that are triggered by discrete damage states.
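The chain of conditional models in Equation 2.2 can be made concrete with a small, fully discretized sketch. Every distribution below (the IM levels, the EDP and DM categories, and all probabilities) is a made-up illustration, not a calibrated model; the point is only to show how the hazard, response, damage and loss models are chained by the total probability theorem.

```python
# Toy discretized version of the PEER framing equation (2.2).
# All names and numbers below are hypothetical illustrations.
p_im = {"low": 0.70, "mid": 0.25, "high": 0.05}          # hazard: P(IM = im)
p_edp_im = {"low":  {"small": 0.9, "large": 0.1},        # response: P(EDP | IM)
            "mid":  {"small": 0.5, "large": 0.5},
            "high": {"small": 0.1, "large": 0.9}}
p_dm_edp = {"small": {"none": 0.8, "severe": 0.2},       # damage: P(DM | EDP)
            "large": {"none": 0.2, "severe": 0.8}}
F_dv_dm = {"none": 1.0, "severe": 0.3}                   # loss: P(cost <= dv | DM)

# Chain the conditional models by total probability, as in Equation 2.2.
F_dv = sum(F_dv_dm[dm] * p_dm_edp[edp][dm] * p_edp_im[im][edp] * p_im[im]
           for im in p_im
           for edp in p_edp_im[im]
           for dm in p_dm_edp[edp])
print(round(F_dv, 4))  # -> 0.7592
```

In a real assessment the sums become the integrals of Equation 2.2 and each table is replaced by a continuous model, but the structure of the computation is the same.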
For example, a window either needs repair or it is completely undamaged, since even if it is only partially broken it needs complete replacement. Another way to express economic loss is downtime, the amount of time that a structure will be out of service after a seismic event. Downtime is harder to estimate, but can severely outweigh the losses due to repairs because it causes loss of revenue for a period of time (Comerio 2006). In that reference, an effort was made to quantify downtime by relating it to the repair time. However, this has high uncertainty since it depends on the availability of materials and labour in the post-earthquake environment. Even if downtime could be estimated with confidence, it is still difficult to estimate the associated losses due to loss of business. In the future the field is expected to move from performance-based assessment to performance-based design. The PEER methodology described above, as well as the other PBEE methodologies, focuses on assessment. This means that all the structural and non-structural elements are given. Performance-based design is different because all the elements need to be created first. Ideally, all performance objectives should be accounted for in the preliminary design phase, so that the performance assessment becomes a simple verification of a proper design, instead of an unattractive iterative process that may only lead to fine tuning a conceptually poor design. The challenge is to provide engineers with a set of EDPs that will control the decision variables, so that they can produce adequate preliminary designs. This objective is still many years away. Future research includes the improvement of all aspects of performance-based engineering. Consensus on how to represent the hazard at a site needs to be developed, as well as improved models to predict the structural response given this hazard.
On the loss side, there is a need to develop ways to estimate damage, loss, downtime and collapse. All of these aspects should be coupled with the identification and quantification of the uncertainties that affect the final decision variables. Finally, the principal challenge is to incorporate all these aspects without making the design process overly complex and time consuming for practicing engineers.

2.3 Comparison between Code-Based and Performance-Based Design

Traditional design practice focuses on limiting displacements and forces to the levels indicated in codes. This is easy to understand for engineers and leads to straightforward design methods. However, other stakeholders, like architects, owners and occupants, are not familiar with these engineering measurements, and this makes it hard to communicate to them the quality of the structure. On the other hand, if a performance-based design is carried out, the engineer will be able to communicate with the stakeholders in terms they understand clearly, such as the expected annual cost of repair or the annual saving that would result from using a given retrofit strategy. This kind of information can be used to identify and satisfy specific client needs. The use of codes confines engineers to a life safety performance objective, which is very important but may not be sufficient to satisfy certain needs. One client may need a structure to house very important equipment, whose shutdown would produce heavy losses. A performance-based design can yield the probability of the equipment shutting down during the period that it will be operational. The client can be presented with different designs that provide different probabilities of shutdown and different construction costs. This kind of information enables the client to make a better decision about the investment and facilitates the communication between client and engineer.
Additionally, the extra information that is provided to the client from a performance-based assessment has added value. In other words, clients may be willing to pay more for this kind of design. Moreover, in cases where this information becomes critical for a project, clients will seek engineers that are capable of providing it. Certainly, engineers that are able to perform this kind of analysis will have a competitive edge. To take full advantage of these benefits it is not enough to have engineering knowledge and experience. Engineers will have to become familiar with methods that incorporate uncertainties in the performance predictions. The unified reliability approach adopted in this thesis is intended for this purpose.

CHAPTER 3: UNIFIED RELIABILITY APPROACH

The use of a performance-based approach instead of a code-based approach gives engineers the ability to target specific performance objectives rather than blindly follow code equations. It is true that code provisions are intended to achieve the performance objective of life safety. However, this is only achieved in an implicit manner, since there is no real way to tell whether this objective has been attained. To assess the performance of a building, engineers need the necessary analysis tools. The development of such tools, including models for hazard, response, and consequences, is the objective of several ongoing research efforts, as described in the Introduction of this thesis. In particular, in an attempt to provide a better alternative to the use of conditional probabilities and total probability integration, which is implied by Equation 2.2, the unified reliability approach has been suggested (Haukaas 2008). This approach allows engineers to define loss limit-states that are evaluated by a series of probabilistic analysis models.
Essentially, the probabilistic analysis is carried out by state-of-the-art reliability methods in conjunction with this framework of probabilistic analysis models. This makes the analysis more flexible, and the modeling more consistent and transparent. This chapter presents the unified reliability analysis framework. The characteristics of the models it employs and its outputs are highlighted, as well as its advantages and disadvantages.

3.1 Unified Reliability Analysis

The primary objective of the unified reliability analysis is to estimate the probability that a selected performance measure exceeds a tolerable limit. Additionally, sensitivity and importance vectors provide physical insight by identifying the parameters that are most important. In this thesis the performance measure will be the repair cost of the structure but, as explained before, other performance measures could be used (e.g. interstory drifts, plastic rotations, and repair time). In order to estimate the probability, a limit-state function needs to be formulated. The limit-state function, g, is defined so that the event of interest (traditionally denoted as failure) is attained if the function has a negative value. Therefore, since “failure” occurs when the repair cost exceeds a given cost threshold, the limit-state function can be of the form

g = C_T − C_R   (3.1)

where C_T represents the cost threshold and C_R represents the repair cost. It is clear that if the repair cost exceeds the cost threshold the value of the limit-state function will be negative and, therefore, failure will have occurred. The value of the repair cost is dependent on the uncertain ground motion that the structure will be subjected to. It is also dependent on the uncertain response of the structure given this ground motion. Additionally, it depends on the uncertain damage that will occur given the structural response.
Consequently, to be able to evaluate the limit-state function, each one of these phenomena needs to be modeled taking into account its uncertainties. This process is shown in Figure 3.1.

[Figure 3.1 Models: random variables feed a hazard model (spectral acceleration, synthetic or recorded ground motions), a response model (simplified or finite element nonlinear analysis), a damage model (cumulative damage model or fragility curves) and a loss model (repair cost, downtime, environmental and social impact)]

The unified reliability approach employs the limit-state function in Equation 3.1 and thus estimates the loss probability of a building. This is done by solving the fundamental reliability problem, which is of the form

p_f = ∫ … ∫_{g≤0} f(x) dx   (3.2)

where p_f is the sought probability and f(x) denotes the joint PDF of the random variables. This means that the probability of failure is the integral of the joint PDF of the random variables over the failure domain (g ≤ 0). When the first-order reliability method (FORM) is applied to solve the reliability problem in Equation (3.2), the reliability index, β, appears. This quantity has a geometrical interpretation in the transformed space of standard normal random variables: it is the distance from the origin to the nearest point on the “limit-state surface,” characterized by G = 0 (Ditlevsen 1996). The reliability index, β, relates directly to the sought probability by the relationship

p_f = Φ(−β)   (3.3)

where Φ is the standard normal CDF.

[Figure 3.2 Unified reliability analysis: the reliability analysis module (analysis toolbox with sampling, FORM and SORM; limit-state function evaluator; random variables) interacts with the hazard, response, damage and loss models to produce outputs such as the expected cost and the annual rate of exceedance]

Figure 3.2 shows the framework of this analysis. The models evaluate the value of the repair cost for a given realization of the random variables. The reliability analysis module produces these realizations and utilizes the results from the models to evaluate the limit-state function defined by the analyst. There is repeated interaction to solve the reliability problem.
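The conversion from the reliability index to the failure probability in Equation 3.3 is a one-line computation. A minimal sketch, using only the Python standard library (the standard normal CDF is expressed through the complementary error function):

```python
import math

def pf_from_beta(beta: float) -> float:
    """First-order estimate of the failure probability, pf = Phi(-beta)
    (Equation 3.3), where Phi is the standard normal CDF.
    Phi(-beta) = 0.5 * erfc(beta / sqrt(2))."""
    return 0.5 * math.erfc(beta / math.sqrt(2.0))

# A reliability index of about 3 corresponds to pf ~ 1.35e-3, a typical
# order of magnitude for structural components.
print(pf_from_beta(3.0))
```

The exponential decay of Φ(−β) with β is also why small changes in the reliability index translate into order-of-magnitude changes in the failure probability.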
The solution is attained by using any one of the available reliability methods, which include Monte Carlo sampling, FORM and the second-order reliability method (SORM). FORM and Monte Carlo sampling are explained in the following sections of this chapter. The essential idea of unified reliability analysis is that any probabilistic model can be plugged into the boxes shown in Figure 3.2. This versatility allows the engineer to try different models and compare the results. Another benefit is the flexibility that the user has with respect to the different reliability methods (e.g. FORM, sampling) that can be used to solve the problem. The unified reliability analysis is based on the formulation of probabilistic analysis models that can accurately represent the hazard, response, damage and loss of a structure. These models need to satisfy certain requirements: they must account for the uncertainties in the phenomena that they represent in an explicit manner, using random variables. Also, they must yield a deterministic result for a given realization of the input parameters and the random variables; they should not yield a probability, which is the output from the commonly used fragility curves. Additionally, the results of the model must cover the entire outcome space, namely, the model must be able to represent all realizations of the phenomenon contributing to the probability of failure. This type of model has several attractive features, like the explicit accounting of both epistemic and aleatory uncertainties and the physical insight that is obtained from developing detailed models. A schematic example of such a model is shown in Figure 3.3.
If this were the structural analysis model, the input from a previous analysis model could be the ground motion from the hazard model, while the random variables could be the structural properties of the building. Subsequently, the output would be the structural response. In turn, this response serves as input for the damage model.

[Figure 3.3 Probabilistic analysis model: random variables (from the reliability module) and inputs (outputs from the previous analysis model) enter the analysis model, which produces outputs (inputs to the next analysis model)]

The models work in sequence to calculate the loss, as seen in Figure 3.1. From looking at this figure and Figure 3.3 it is observed that the results reported by each model depend on the realization of the random variables. Figure 3.1 also indicates that several options are available for each of the models. For example, some hazard models will produce a scalar intensity measure, while others yield an entire ground motion time history. This provides versatility and allows the engineer to select the level of refinement that is desired for the analysis. This is especially useful for a performance-based analysis of a portfolio of structures, where different levels of refinement are practical for each structure. In this type of analysis, critical infrastructure will demand detailed analysis, while less important structures are analyzed with simplified models.

3.2 First Order Reliability Method (FORM)

Often it is not possible to find a closed form solution for a reliability problem. This is the case when the limit-state function is defined by the response of other models, such as in the framework of the unified reliability analysis. FORM is a method that can solve this type of reliability problem by searching for the design point and then estimating the probability of failure using the first-order approximation shown in Equation 3.3 (Ditlevsen 1996).
This method is efficient because it searches for the design point; therefore no time is wasted in calculations that do not contribute significantly to obtaining the probability of failure. The limit-state function may be dependent on any type of random variable (normal, lognormal, Poisson, etc.), with or without correlation between them. However, in FORM it is necessary to transform them into uncorrelated standard normal random variables and search for the design point in the standard normal space. The advantage of doing this is that in the original space each random variable has its own dimension (e.g. kN or mm), while the standard normal space is dimensionless. This allows distances to be considered to estimate β. Furthermore, the probability distribution of this space is the well known multi-normal probability distribution, which greatly facilitates the estimation of the probability of failure. The design point, y*, is the smallest vector of random variables (most probable realization) for which the limit-state function is zero (failure). This is expressed by Equation 3.4. Note that the limit-state function in the standard normal space is denoted by upper case G.

y* = arg min { ‖y‖ : G(y) = 0 }   (3.4)

The search for the design point is performed similarly to an ordinary Newton algorithm. First, the limit-state function and its slope are evaluated at the initial point. Then, the function is assumed to be linear and the location where it is zero is estimated. Finally, the limit-state function is evaluated at this new point. The process is repeated until convergence is achieved. The search algorithm proposed by Hasofer and Lind to find a new trial point is

y_{i+1} = [ y_i^T α + G(y_i) / ‖∇G(y_i)‖ ] α   (3.5)

where G(y_i) and ∇G(y_i) represent the value of the limit-state function and its gradient at the previous trial point, and α is the unit gradient vector defined by α = −∇G(y_i) / ‖∇G(y_i)‖.
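The search in Equation 3.5 can be sketched in a few lines. The limit-state function below is a hypothetical, mildly nonlinear expression in the standard normal space, chosen only so that the iteration is visible; it is not a structural model.

```python
import math

def hlrf(G, grad_G, n, tol=1e-6, max_iter=50):
    """Hasofer-Lind search for the design point in the standard normal
    space (Equation 3.5).  Returns the design point y* and beta = ||y*||."""
    y = [0.0] * n  # start the search at the origin (the mean point)
    for _ in range(max_iter):
        g = G(y)
        grad = grad_G(y)
        norm = math.sqrt(sum(c * c for c in grad))
        alpha = [-c / norm for c in grad]               # negative unit gradient
        step = sum(yi * ai for yi, ai in zip(y, alpha)) + g / norm
        y_new = [step * ai for ai in alpha]             # Equation 3.5
        if max(abs(a - b) for a, b in zip(y_new, y)) < tol:
            return y_new, math.sqrt(sum(v * v for v in y_new))
        y = y_new
    return y, math.sqrt(sum(v * v for v in y))

# Hypothetical limit-state function, for illustration only.
G = lambda y: 3.0 - y[0] - y[1] + 0.1 * y[0] * y[1]
grad_G = lambda y: [-1.0 + 0.1 * y[1], -1.0 + 0.1 * y[0]]
y_star, beta = hlrf(G, grad_G, 2)
print(round(beta, 3))  # beta ~ 2.31 for this example
```

In a unified reliability analysis the calls to G would be evaluations of the full model chain, which is why keeping the number of iterations small matters so much.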
It must be noted that the search is not possible if the limit-state function is discontinuous. Discontinuities make it impossible to calculate the gradient, and without the gradient it is not possible to find a new trial point. This is an important drawback of FORM, because it means that FORM cannot solve all limit-state functions. The algorithm stops when convergence is achieved. FORM requires two convergence criteria: one verifies that the limit-state function is on the boundary of the failure domain (G = 0); the second verifies that the design point is as close as possible to the origin. The first convergence criterion is expressed by Equation 3.6, where G is the value of the limit-state function at the design point in the standard normal space and G_0 is the value of the limit-state function at the origin:

| G / G_0 | ≤ e_1   (3.6)

The second criterion is expressed by Equation 3.7, where y is the coordinate vector of the design point and α is the unit gradient vector as defined above:

‖ y − (α^T y) α ‖ ≤ e_2   (3.7)

Both constants e_1 and e_2 are acceptance tolerances defined by the analyst.

[Figure 3.4 FORM approximation: the real limit-state surface, the linear FORM approximation at the design point at distance β from the origin, and the resulting approximation error]

Once the design point is found, the probability of failure is estimated using Equation 3.3. Figure 3.4 shows the linear approximation that is used to estimate the probability of failure. It is worth noting that the probability density decays exponentially away from the origin. Therefore, the highest contribution to the probability of failure is located close to the design point. This means that the linear approximation produces an accurate result (Ditlevsen 1996). The side benefits from using FORM are the importance measures that result as a by-product of the analysis. The unit gradient vector, α, at the design point reveals the sensitivity of the limit-state function to each of the random variables in the standard normal space.
The higher the value of a component of α, the more sensitive the limit-state function is to that random variable. Additionally, a positive value of a component of α indicates a load variable, while a negative value indicates a resistance variable. A load variable is a variable that increases the probability of failure if its value increases; a resistance variable has the opposite effect. This allows ranking of the importance of each random variable in the analysis. However, if the random variables are correlated, the ranking in the standard normal space will not be the same as the ranking in the original space. For this reason it is useful to obtain an importance vector in the original space. This vector is calculated from the α vector using Equation 3.8 (Ditlevsen 1996):

γ = ( J_{y,x} D̂ α ) / ‖ J_{y,x} D̂ α ‖   (3.8)

In this equation J_{y,x} is the Jacobian of the transformation into the standard normal space and D̂ is the diagonal matrix of the standard deviations of the original random variables. It is important to note that if the random variables are not correlated then both vectors are equal. These importance measures are useful to reduce the problem by neglecting random variables that are of little importance. Also, they provide physical insight into the characteristics of the problem.

3.3 Sampling

Sampling is an alternative method to solve the same reliability problem. This section describes two variations of sampling, namely Monte Carlo sampling and importance sampling. Monte Carlo sampling differs from FORM in that it does not search for the design point nor use a linear approximation to estimate the probability of failure. Instead, it generates random outcomes of the random variables and evaluates the limit-state function for each realization. Then, it counts the number of times failure occurred and estimates the probability of failure using the notion of relative frequency shown in Equation 3.9.
p_f = (number of failures) / (total number of samples)   (3.9)

The algorithm keeps producing random outcomes and estimating the probability of failure until convergence is achieved. The convergence criterion for Monte Carlo sampling is to have a stable probability of failure. This is evaluated by calculating its coefficient of variation, as shown in Equation 3.10. The analyst must define the target coefficient of variation that is required for convergence.

δ_pf = √Var[p_f] / p_f   (3.10)

This method is extremely robust. It can evaluate any type of limit-state function (even those with discontinuities) and will always achieve convergence. This presents an advantage over FORM. However, a major disadvantage is that many thousands of evaluations of the limit-state function may be required to obtain only one failure realization. This is especially true when the probability of failure is very low, which is the case for engineered infrastructure.

[Figure 3.5 Monte Carlo sampling analysis: outcomes of the random variables plotted with the limit-state surface; only one simulation falls in the failure domain]

Figure 3.5 shows an example of a Monte Carlo sampling analysis. The outcomes of the random variables are shown along with the limit-state surface. It is seen that only one of the simulations produced a failure. This exemplifies how computationally expensive this type of analysis is. The source of this lack of efficiency is that the samples are taken around the mean. If the probability of failure is low, the limit-state surface will be far away from the mean. This is why very few of the realizations will produce a failure event. This problem can be solved by using importance sampling. This method differs from Monte Carlo sampling in that the samples are taken around any point, not only the mean. To increase the efficiency of the method it is best to sample around a point that is close to the limit-state surface. A good choice is to use the design point, which is a result from a FORM analysis.
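The relative-frequency estimator of Equation 3.9 and the coefficient-of-variation stopping rule of Equation 3.10 can be sketched as follows. The limit-state function used here is a hypothetical one-variable example whose exact answer is known, so the cost of crude Monte Carlo is visible; for a binomial estimator, Var[p_f] = p_f(1 − p_f)/N.

```python
import math
import random

def monte_carlo_pf(G, n_vars, cov_target=0.1, max_samples=200_000, seed=1):
    """Crude Monte Carlo estimate of pf = P(G <= 0) by relative frequency
    (Equation 3.9), stopping once the coefficient of variation of the
    estimate (Equation 3.10) drops below cov_target."""
    rng = random.Random(seed)
    failures = 0
    for n in range(1, max_samples + 1):
        y = [rng.gauss(0.0, 1.0) for _ in range(n_vars)]
        if G(y) <= 0.0:
            failures += 1
        if failures >= 10:  # wait for a few failures before checking convergence
            pf = failures / n
            cov = math.sqrt((1.0 - pf) / (n * pf))  # c.o.v. of the estimator
            if cov <= cov_target:
                return pf, n
    return failures / max_samples, max_samples

# Hypothetical limit-state function G(y) = 2 - y1; the exact pf is Phi(-2) ~ 2.3%.
pf, n_used = monte_carlo_pf(lambda y: 2.0 - y[0], n_vars=1)
print(pf, n_used)
```

Even for this modest probability, thousands of limit-state evaluations are needed to reach a 10% coefficient of variation, which illustrates why crude sampling becomes infeasible when each evaluation is a nonlinear dynamic analysis.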
[Figure 3.6 Importance sampling analysis: samples centered near the design point, with several realizations falling in the failure domain]

Figure 3.6 shows the simulations of an importance sampling analysis. It is seen that several realizations produce a failure event. This significantly reduces the number of simulations that are required to achieve convergence. Additionally, importance sampling can be used to refine the results of FORM because it does not make a linear approximation. Also, if FORM is dealing with a limit-state function that has convergence problems, the analysis can be stopped before convergence is achieved and the last trial point can be recorded. Even though this point is not the design point, it will be closer to the limit-state surface than the mean; therefore it can be used as the centre of the importance sampling simulations to estimate the probability of failure. In this thesis the use of FORM is considered to have advantages over the other methods. Firstly, FORM usually requires around five to ten iterations to achieve convergence, even for low probabilities of failure, which makes it greatly superior when investigating the response at the tail of the distribution. This significantly reduces the computational time needed to carry out the analysis. Secondly, FORM yields useful by-products from the analysis, such as the importance measures of the random variables. These measures can then be used to considerably reduce the number of random variables needed in the analysis and, therefore, further reduce the amount of time required. Then again, to gain these advantages the models that are used need to satisfy an additional criterion: they must be smooth, meaning that their output must be continuously differentiable with respect to all the input parameters, including the random variables. This poses a challenge, since not all the available models are smooth. However, as explained later, some smoothing techniques can be applied to non-continuous models so that they can be used with FORM.
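The importance sampling refinement described above can be sketched as shifting the sampling density to a point near the limit-state surface and reweighting each failure by the density ratio. The limit-state function and the center are the same hypothetical one-variable example, with the known design point used as the sampling center.

```python
import math
import random

def importance_sampling_pf(G, center, n_samples=5000, seed=1):
    """Importance sampling estimate of pf = P(G <= 0), with the sampling
    density centered at a point near the limit-state surface (e.g. the
    design point from FORM) instead of at the origin."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        # sample y from a standard normal density shifted to 'center'
        y = [c + rng.gauss(0.0, 1.0) for c in center]
        if G(y) <= 0.0:
            # weight = f(y) / h(y): original standard normal density over
            # the shifted sampling density
            log_w = sum(-0.5 * yi ** 2 + 0.5 * (yi - c) ** 2
                        for yi, c in zip(y, center))
            total += math.exp(log_w)
    return total / n_samples

# Hypothetical limit-state G(y) = 2 - y1, sampled around its design point y = 2;
# the exact answer is Phi(-2) ~ 0.0228.
pf = importance_sampling_pf(lambda y: 2.0 - y[0], center=[2.0])
print(pf)
```

Because roughly half the shifted samples now land in the failure domain, a few thousand samples give an estimate that crude Monte Carlo would need far more evaluations to match.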
CHAPTER 4: ATC-58 APPROACH

One of the early methodologies to tackle the challenges of performance-based engineering is the methodology proposed by ATC-58 (Applied Technology Council 2009). This method was developed by the Pacific Earthquake Engineering Research Center (PEER) at the University of California, Berkeley (Yang T.Y. et al. 2009). It is a pioneering attempt to produce a framework that can be used for practical purposes. Like the unified reliability approach, this method uses models to represent the hazard, response, damage and loss of the structure. This chapter presents this methodology and contrasts it with the unified reliability approach.

4.1 Hazard Model

First, consider the hazard model. In the ATC-58 approach a number of recorded ground motions that represent the hazard at a specific site must be selected. Clearly, the selection of representative ground motions is in itself a challenging task (Iervolino and Cornell 2005). Specifically, it is non-trivial to decide which ground motions correctly describe the characteristics of all the possible fault failure mechanisms in the area, as well as their frequency content. How many ground motion records to include is also an open question. After selecting the suite of ground motions, the records need to be scaled. This scaling is carried out to achieve a uniform intensity measure for each return period. The scaling process is also challenging, since there is no agreement on which method is best (Baker and Cornell 2006). One engineer may decide to match the spectral acceleration at the natural period of the structure, while others may match the peak ground acceleration. This is an issue that requires judgment and should not be taken lightly. Once the suite of ground motions is selected and properly scaled for each return period, the hazard model is complete.
It should be noted that although a limited number of ground motions represent the hazard, the hazard model effectively represents more ground motions, since the response to these ground motions will be fitted to a distribution, as explained next.

4.2 Response Model

Using the selected scaled ground motions, nonlinear dynamic analyses are used to determine the response of the structure, and key responses that can be used to measure damage are selected. Normally these would be peak story drifts and peak floor accelerations; however, other parameters could be used if necessary. These key responses are called engineering demand parameters (EDPs). After performing the analyses, a matrix like the one shown in Table 6.5 can be generated for each return period. Each column of this matrix represents the values of a particular EDP due to each ground motion. These columns are correlated, since it is likely that if one EDP has a higher value the others will also have a higher value in response to a larger intensity of the ground motion. The computational time required to perform each nonlinear analysis can be considerable, depending on the complexity of the model. For a reliability analysis such as this one, the highest possible accuracy in the response model is needed. This means that the model is likely to be highly detailed and complex and, therefore, the computational time needed to analyse it will be substantial. It is critical to reduce the computational time as much as possible to make this kind of analysis feasible. To achieve this reduction and facilitate a reliability analysis, the EDP matrix for each return period that is generated from the nonlinear dynamic analyses is fitted to a joint lognormal distribution, and the correlation between EDPs is maintained. The statistical procedure to perform this fitting is described in Appendix A.
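The fitting-and-simulation idea can be sketched for two EDPs: take logarithms of the EDP matrix, estimate the mean vector and covariance matrix of the logs, factor the covariance, and then generate new correlated realizations from standard normals. The EDP matrix below is hypothetical (it stands in for the nonlinear analysis results of Appendix A), and the two-by-two Cholesky factor is written out by hand to keep the sketch dependency-free.

```python
import math
import random

# Hypothetical EDP matrix (rows: ground motions, columns: two EDPs, e.g.
# peak drifts of two storeys); illustrative numbers only.
edp = [[0.010, 0.008], [0.014, 0.011], [0.009, 0.007],
       [0.020, 0.015], [0.012, 0.010], [0.016, 0.013]]

# Step 1: work with logarithms, since the EDPs are modeled as jointly lognormal.
logs = [[math.log(v) for v in row] for row in edp]
n, m = len(logs), len(logs[0])
mu = [sum(row[j] for row in logs) / n for j in range(m)]
cov = [[sum((row[i] - mu[i]) * (row[j] - mu[j]) for row in logs) / (n - 1)
        for j in range(m)] for i in range(m)]

# Step 2: Cholesky factor of the 2x2 log-covariance matrix, by hand.
l11 = math.sqrt(cov[0][0])
l21 = cov[1][0] / l11
l22 = math.sqrt(cov[1][1] - l21 ** 2)

# Step 3: a new correlated EDP realization costs only two standard normals,
# instead of another nonlinear dynamic analysis.
rng = random.Random(0)
z1, z2 = rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0)
new_edp = [math.exp(mu[0] + l11 * z1),
           math.exp(mu[1] + l21 * z1 + l22 * z2)]
print(new_edp)
```

For the full EDP vector of a real building the same three steps apply, with a general Cholesky (or eigen) decomposition replacing the hand-written two-by-two factor.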
The great advantage of fitting the response to a joint lognormal distribution, instead of performing a nonlinear dynamic analysis for each realization, is that the time needed to generate new sets of EDPs is significantly reduced. To produce a new realization, only a set of standard normal random numbers is needed to simulate the response.

4.3 Damage Model

For each realization of a set of EDPs, the damage the structure suffers needs to be estimated. The first step to assess the damage is to divide the structural elements, non-structural elements and contents that make up the building into “performance groups.” Depending on the structural system and its use, different performance groups must be selected. Each group should consist of components that are affected by the same EDP in a similar way, and these components need to have the necessary number of damage states to describe the repair efforts required to restore them to their original state. In other words, each performance group should incorporate elements that are sensitive to the same response parameter (e.g. drift or acceleration) and that will suffer approximately the same damage for a given magnitude of the response parameter. Once the performance groups are defined, the damage that they suffer needs to be estimated. This is done using fragility curves like the one shown in Figure 4.1. These curves provide the conditional probability of being in each damage state given an EDP. For example, looking at the fragility curve shown in Figure 4.1, it is seen that if the EDP is 1.5, then the probabilities of the performance group being in damage state one, two, three or four are 0.3%, 49.7%, 42.7% and 7.3%, respectively.

[Figure 4.1 Fragility curve: P(DS > DS_i) versus EDP for each damage state]

The fragility curves work together with lookup tables. The lookup tables provide the repair quantities for each item in a performance group, using the damage state obtained from the fragility curves.
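The way a fragility curve is "read" with a uniform random number can be sketched as follows. The curves are modeled here as lognormal CDFs, a common choice, and the medians and dispersions are hypothetical values for a drift-sensitive group, not calibrated fragility data.

```python
import math

def normal_cdf(x):
    """Standard normal CDF via the complementary error function."""
    return 0.5 * math.erfc(-x / math.sqrt(2.0))

def sample_damage_state(edp, medians, betas, u):
    """Return the damage state of a performance group given an EDP value
    and a uniform random number u that 'reads' the vertical axis of the
    fragility plot.  Curve i gives P(DS >= i | edp) as a lognormal CDF
    with median medians[i-1] and dispersion betas[i-1]; the medians must
    be increasing so the curves do not cross."""
    ds = 0
    for median, beta in zip(medians, betas):
        if u <= normal_cdf(math.log(edp / median) / beta):
            ds += 1   # the group is at least in the next damage state
        else:
            break
    return ds

# Hypothetical fragility parameters: three damage states beyond "undamaged".
medians, betas = [0.5, 1.5, 3.0], [0.4, 0.4, 0.4]
print(sample_damage_state(1.5, medians, betas, 0.3))  # -> 2
```

Drawing a fresh u for every performance group in every simulation reproduces, in the long run, the conditional damage-state probabilities that the fragility curves encode.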
These repair quantities could be square meters of partition walls to repair, the number of doors to replace, or any other source of damage in the performance group. Here it must be noted that while the fragility curve for a performance group could be used for several structures, the lookup table must be constructed for each individual structure, since the repair quantities clearly depend on the size of the structure and its layout. Both the lookup table and the fragility curves must be constructed by people who have good knowledge of how the elements are affected by the response of the structure.

4.4 Cost Model

After the damage suffered by the structure and its components has been calculated, the damage needs to be translated into a repair cost. Depending on the damage state that a performance group is in, the amount of repair required will vary. These repair quantities depend on the characteristics of each structure; they must be estimated for every performance group and damage state. Once the repair quantity is obtained for each performance group, the repair cost is calculated by multiplying the repair quantity by the unitary repair cost. The function describing the unitary repair cost needs to be estimated by people with good judgement of the repair cost of each item. Although cost values will change for each item, the unitary repair cost should decrease as the repair quantity increases. This takes into account that some fixed costs will be less significant when there is considerable repair work needed. Finally, when the repair costs for all performance groups are known, they are added to calculate the total repair cost of the building. Additionally, downtime could be estimated from the repair time; it would be an additional loss quantity of importance to the stakeholders.

4.5 Sampling Analysis

The calculation of the total repair cost must be done several times to obtain the probability of occurrence of each cost value.
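A quantity-dependent unitary cost of the kind described above can be sketched with a simple declining function. The linear form, the dollar figures and the quantity breakpoints below are illustrative assumptions, not values taken from ATC-58.

```python
def unit_cost(q, c_max, c_min, q_lo, q_hi):
    """Hypothetical unitary repair cost: constant at c_max for small jobs,
    decreasing linearly to c_min as the repair quantity grows, reflecting
    fixed costs being spread over a larger job."""
    if q <= q_lo:
        return c_max
    if q >= q_hi:
        return c_min
    return c_max + (c_min - c_max) * (q - q_lo) / (q_hi - q_lo)

def group_repair_cost(q, c_max, c_min, q_lo, q_hi):
    """Repair cost of one performance group: quantity times unit cost."""
    return q * unit_cost(q, c_max, c_min, q_lo, q_hi)

# e.g. partition walls at $50/m2 for jobs under 20 m2, falling to $30/m2
# beyond 100 m2 (made-up numbers).
costs = [group_repair_cost(q, 50.0, 30.0, 20.0, 100.0) for q in (10, 60, 200)]
print(costs)  # -> [500.0, 2400.0, 6000.0]
```

The total building repair cost is then the sum of `group_repair_cost` over all performance groups for the sampled damage states.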
To achieve this, the Monte Carlo sampling approach is utilized. The first step is to generate the random variables needed for the analysis. One standard normal random variable is needed for each EDP and one uniform random variable is needed for each performance group. The standard normal random variables are used to generate the EDP realization using the methodology shown in Appendix A. The uniform random variables are used to read the vertical axis of the fragility curves; they represent the probability of being in a damage state given the EDP value.

Figure 4.2 Cost CDF for a given return period

By counting the results that fall under a certain cost value, the probability of not exceeding that cost value can be calculated. By doing this for several cost thresholds, a CDF, as shown in Figure 4.2, can be constructed for each return period.

4.6 Relationship between the ATC-58 and the Unified Reliability Approaches

The ATC-58 approach has four models, similar to the ones needed for the unified reliability approach. However, these four models collapse into only two. The first model unites the hazard and response models into a joint lognormal distribution that represents the response of the structure. In this form it can be seen as an input-output model from the unified reliability approach, where the inputs are the random variables representing the uncertainty in the ground motion and the structural response, and the outputs are the peak structural responses. The second model joins the damage and loss models into a cost model. This is also an input-output model, where the inputs are the peak structural responses and the random variables representing the uncertainty in damage, and the output is the repair cost. Additionally, the ATC-58 approach also needs to generate random variables and evaluate the limit-state function.
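The sampling analysis of Section 4.5 can be condensed into a small simulation sketch. Everything numerical here is hypothetical: two correlated lognormal EDPs stand in for the fitted joint distribution of Appendix A, each driving one performance group with made-up fragility parameters and damage-state costs.

```python
import math
import random

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Hypothetical joint lognormal EDP model: log-means, log-stds and correlation
MU = [math.log(1.0), math.log(0.5)]
SIG = [0.4, 0.5]
RHO = 0.6

def sample_edps(rng):
    """One correlated lognormal EDP realization from two standard normals
    (a 2-D Cholesky factorization of the log-space correlation matrix)."""
    z1, z2 = rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0)
    y1 = MU[0] + SIG[0] * z1
    y2 = MU[1] + SIG[1] * (RHO * z1 + math.sqrt(1.0 - RHO ** 2) * z2)
    return math.exp(y1), math.exp(y2)

# Hypothetical fragility curves (median, dispersion) and damage-state costs
FRAGILITY = [[(1.5, 0.30), (2.5, 0.30)],
             [(0.8, 0.25), (1.2, 0.25)]]
DS_COST = [0.0, 50_000.0, 200_000.0]  # damage state 1 means no repair

def realization_cost(rng):
    """Total repair cost for one Monte Carlo realization."""
    total = 0.0
    for edp, curves in zip(sample_edps(rng), FRAGILITY):
        u = rng.random()  # one uniform random variable per performance group
        # The damage state is the number of fragility curves exceeded, i.e.
        # curves whose exceedance probability at this EDP is larger than u
        ds = sum(u < phi(math.log(edp / m) / b) for m, b in curves)
        total += DS_COST[ds]
    return total

rng = random.Random(1)
costs = sorted(realization_cost(rng) for _ in range(10_000))
threshold = 100_000.0
p = sum(c <= threshold for c in costs) / len(costs)
print(f"P(repair cost <= {threshold:,.0f}) = {p:.3f}")
```

Repeating the last step for several thresholds traces out the cost CDF for one return period, as in Figure 4.2.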
Contrary to the more general unified reliability approach, only sampling can be used. The reason for this is that the loss model is discontinuous and therefore gradient-based methods are not possible.

Figure 4.3 ATC-58 approach (flowchart: random variables, a response model and a damage model feed a limit-state function evaluator, which is solved by sampling to produce outputs such as the expected cost and annual rate of exceedance)

This means that the ATC-58 approach can be interpreted as a subset of the unified reliability approach in which the number of models is reduced to two and the possible reliability methods are limited to only Monte Carlo sampling. Figure 4.3 shows a flowchart for the ATC-58 approach; it is observed that it is very similar to the one for the unified reliability approach (see Figure 3.2), but some of the possibilities have been removed. Therefore, it can be said that it constitutes a subset.

CHAPTER 5: STEP BY STEP PBEE

Performance-based earthquake engineering has gained popularity in the academic world. This has led to the development of the methodologies described throughout this thesis. However, most practicing engineers are not familiar with PBEE and, consequently, they are not taking advantage of this new progress. This chapter presents a step by step guide that will help engineers carry out a performance-based assessment of a structure. It is clear from the different methodologies that exist that a performance-based assessment can be done in several ways. Therefore, emphasis is placed on the different options that are available to engineers in each of the steps of the process. Figure 5.1 is presented at the end of the chapter as a summary of the different options that are available in each of the steps.

Performance-based assessment is typically used for existing structures. However, it can also be used to assess the performance of a structural design. This would allow engineers to compare competing design alternatives and help select the better one.
5.1 Step 1: Select Performance Measures

In order to carry out a performance-based assessment of a structure, the first and most important step is to decide how performance will be measured. It is important to choose a quantifiable measure that can be understood by all stakeholders. This facilitates communication and the definition of the tolerable performance limits.

There are several ways to measure performance. Today, the typical approach is to use inter-storey drifts or inelastic rotations as the performance measure. These measures are used to verify whether the structure achieves the acceptance criteria of discrete performance levels, such as "immediate occupancy" or "collapse prevention." This is the approach proposed by the FEMA 273/356 and Vision 2000 documents.

Another way to measure performance is economic loss. Using economic loss has the distinct advantages of allowing comparison with other hazards that could also produce loss (such as fire) and with the cost of reducing the economic impact of the hazard. Additionally, it greatly facilitates the communication between stakeholders, since it is a variable that everyone can understand. However, the main disadvantage of using economic loss as a performance measure is that it is difficult to estimate with confidence. It would be preferable to account for all sources of loss, including repair cost, injuries, casualties, downtime and social impact. The accurate estimation of these values is very difficult, but the repair cost is the least complex to estimate. This is why the repair cost has been the main focus of methodologies that use loss as their decision variable, such as the ATC-58 and unified reliability approaches.

Once the performance measure has been selected, an accurate description of the structure is needed in order to define its performance.
If structural response parameters are selected as the performance measures, the description of the building is the same as for any other engineering design project. This means describing the member sizes, material properties, mass and other parameters that engineers are used to working with. On the other hand, if economic loss is selected, the description of the structure needs to be more detailed. In addition to all the structural parameters, information about the type, quantity and distribution of non-structural components and contents in the building is needed. This information is required for the assessment of repair times and costs. If the number of injuries and casualties needs to be estimated, information about the number of occupants will be needed as well.

It can be seen that this step is very important because it determines the type of information that the assessment will provide. It is also the step that determines the amount of work needed to carry out the assessment. Using structural response parameters facilitates the engineering work but hampers the ability to communicate with other stakeholders in order to make optimum decisions. On the other hand, using economic loss presents the engineer with several challenges (e.g. estimating damage and repair actions), but also the reward of more valuable information at the end of the process.

5.2 Step 2: Select Structural Analysis Method

The next step is to decide what kind of analysis method will be used to predict the response of the structure. It is conventional engineering practice to employ the modal superposition method to calculate the response of a structure under seismic excitation. However, for performance-based engineering this is not a suitable method, because it relies on a linear combination of different modes to approximate what is actually a nonlinear response during an earthquake.
Since a correct performance assessment requires the accurate estimation of the response, the nonlinear characteristics of the structure need to be described explicitly. There are two possible methods: nonlinear static analysis (commonly known as pushover analysis) and nonlinear dynamic analysis. First generation PBEE methodologies employ both analyses, while the ATC-58 and unified reliability approaches only employ nonlinear dynamic analyses to assess loss. However, pushover analyses could be used to determine loss as well.

Pushover analysis has the main advantage of being simple to understand and to use. Additionally, pushover analyses provide great insight into the behaviour of the structure and make it easy to spot modeling errors. For performance assessment, the structure is pushed to a specified displacement and the state of the structure is assessed at that level of displacement. The main disadvantage is that the only parameter of the ground motion that is used for this analysis is the spectral acceleration at the fundamental period of the structure. Therefore the record-to-record variability is not taken into account. Also, the use of this spectral acceleration is only adequate if the first mode governs the structural response.

On the other hand, a nonlinear dynamic analysis does take into account the record-to-record variability of the records that are used. However, any errors in the model can affect the results greatly, and they will not be easy to locate and correct. For this reason it is good practice to perform a pushover analysis first to understand the structural behaviour and purge any modeling errors. Another drawback of nonlinear dynamic analysis is that more time is needed to run the analysis and to interpret the results. Also, more qualified and experienced engineers are needed to fully exploit the information that can be obtained from the analysis results.
5.3 Step 3: Select Hazard Model

The next step is to characterize the seismic hazard that the structure will be subjected to during its life cycle. How this is done depends greatly on the analysis method that has been chosen. For this reason, this step is taken after the structural analysis method has been selected. For pushover analyses, the hazard is characterized only by the spectral acceleration at the natural period of the structure, while for nonlinear dynamic analyses the hazard is characterized by complete ground motion records.

If the selected analysis method is pushover analysis, the spectral acceleration needs to be modeled. One option is to use the response spectra provided by the Geological Survey of Canada (GSC). The GSC Open File 4459 provides the median and the 84th percentile response spectra; the 84th percentile spectrum lies one standard deviation above the median spectrum. With this information it is possible to obtain a probability distribution for the spectral acceleration given the natural period of the structure. Alternatively, the spectral acceleration can be modeled by using attenuation relationships, like the one proposed by Atkinson (Atkinson 1997). These relationships predict the spectral acceleration from the magnitude of the earthquake, fault type, focal depth and soil characteristics. This approach is more refined, because the magnitude and epicentre of the earthquake also have to be modeled; however, the results are more accurate.

On the other hand, if a nonlinear dynamic analysis is to be employed, ground motion records are required. The most common approach is to select records that match the characteristics of the soil, earthquake magnitude and fault type. The task of selecting the records that will be used for the analysis is not an easy one and may require the assistance of a seismologist. The selected records also need to be scaled so that all of them represent the same intensity.
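For the pushover branch described above, a lognormal model of the spectral acceleration can be built directly from the two published spectra, since the 84th percentile of a lognormal variable lies one log-standard deviation above the median. A minimal sketch, with hypothetical spectral ordinates in place of actual GSC values:

```python
import math
import random

# Hypothetical spectral ordinates (g) at the structure's natural period,
# read from a median and an 84th percentile uniform hazard spectrum
sa_median, sa_84 = 0.40, 0.62

# For a lognormal variable the 84th percentile sits one log-standard
# deviation above the median, so the dispersion is the log of the ratio
beta = math.log(sa_84 / sa_median)

def sample_sa(rng):
    """One realization of the spectral acceleration (g)."""
    return sa_median * math.exp(beta * rng.gauss(0.0, 1.0))

rng = random.Random(0)
samples = sorted(sample_sa(rng) for _ in range(50_000))
print(f"beta = {beta:.3f}, simulated median = {samples[len(samples) // 2]:.3f} g")
```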
There are a variety of scaling methods, such as matching the spectral acceleration at the natural period of the structure or matching the energy of the earthquake. However, there is no consensus on which scaling technique is the best (Goulet et al. 2008; Kurama and Farrow 2003).

For a reliability analysis this approach has one very important limitation. Since the number of records is limited, they cannot cover the entire outcome space of possible ground motions, even if many ground motions are used. Therefore, some types of ground motions may be left out, and this will produce bias in the reliability analysis. An alternative that overcomes this limitation is the use of artificially generated ground motions. This requires the use of ground motion models like the one proposed by Rezaeian and Der Kiureghian (Rezaeian and Der Kiureghian 2008). These models pass artificially generated white noise through a number of filters to produce a ground motion. Some of these models need to be calibrated to target ground motions in order to be accurate; therefore, the complicated task of ground motion selection may not be averted. However, once the calibration is done, these models provide an infinite number of ground motions that cover the entire outcome space.

As mentioned above, the selection of the hazard model is highly dependent on the choice of structural analysis method. This should be taken into account when selecting the analysis method. Pushover analyses are easier to perform and employ simpler hazard models. However, nonlinear dynamic analyses are capable of providing more detailed and accurate results.

5.4 Step 4: Estimate Damage

Using the hazard model and the structural model, the response of the structure can be calculated. If structural response parameters were selected as the performance measures (e.g. maximum inter-storey drift less than 2%), there is no work that needs to be done in this step.
However, if the performance measure is economic loss, the structural response needs to be related to the amount of damage the structure suffers during a seismic event. This is done by using damage models. The most commonly known damage models are fragility curves (Applied Technology Council 2009). They provide the conditional probability of observing a damage state in a component given the response of the structure. They have gained popularity, and more fragility curves are therefore constantly being developed for structural and non-structural elements, as well as contents.

For the structural elements, there are alternative damage models. The model presented by Mehanny and Deierlein (Mehanny and Deierlein 2001) estimates the damage of structural elements based on the nonlinear deformations they suffer over the duration of the ground motion. The output is a continuous damage index from zero to one, where zero represents no damage and one represents failure of the structural element. Several other damage indices exist; they have been reviewed by Williams and Sexsmith (Williams and Sexsmith 1995).

5.5 Step 5: Estimate Loss

After the damage to the components of the structure has been estimated, it needs to be converted to a measure of economic loss. As mentioned before, there are several ways to measure economic loss; they include repair costs, downtime, injuries, casualties and socioeconomic effects. All of them are important and very difficult to estimate; however, some are easier to estimate than others.

The repair efforts needed to restore all the components of the structure to their original state can be estimated with precision if the damage is known. However, the cost and time this will take depend on highly uncertain future factors. The availability of materials, workforce and financing is extremely difficult to predict in a post-earthquake environment where a large portion of the population and infrastructure has been affected.
The large number of repairs needed in the region will increase the demand for resources, while the effects on people will reduce the availability of the workforce. This makes it difficult to predict the repair costs and downtime. Nevertheless, it is possible to obtain present-value repair costs and times from contractors, and from this information an estimate can be obtained for the future.

It may also be important to estimate the number of injuries and casualties. This is even harder to estimate for an individual building, since it will depend on the time the earthquake strikes. For example, if it strikes at 3 am there will be a small number of casualties in office buildings and a high number in residential buildings; the opposite will happen if the earthquake strikes at 3 pm.

Finally, the most difficult to estimate, and perhaps most important, consequences of an earthquake are the socioeconomic effects caused by structural damage. For example, road, bridge and port closures due to damage may isolate the region, preventing aid from being effective. They will also complicate trading activities, which may substantially affect the economy of the region.

5.6 Step 6: Uncertainties

Each of the predictions made using the models described in the previous steps will differ from reality due to the uncertainties involved in them. Each prediction is affected by both aleatory uncertainty (irreducible and inherent to the characteristics of the phenomenon) and epistemic uncertainty (reducible, typically modeling error or lack of data). In a performance-based assessment it must be decided whether or not to take these uncertainties into account.

It is possible not to consider uncertainties and work with mean values throughout the analysis. The most important advantage of this approach is that only one analysis is needed and therefore the computational cost is greatly reduced.
However, the valuable information that can be produced by a reliability analysis, like the loss curves, will not be available. On the other hand, if uncertainties are taken into account, this information will be available to facilitate communication with other stakeholders. The drawback is that some reliability knowledge is needed in addition to engineering expertise. Another disadvantage is that several analyses are needed to achieve a solution, increasing the computational cost. However, to mitigate this problem, the efficient FORM can be used instead of Monte Carlo sampling, as shown in Section 7.2.

It is important to note that there are several sources of uncertainty in a risk analysis. It has been argued that for a seismic risk analysis the single most important source of uncertainty is the seismic hazard (Ellingwood 2001). However, when damage and loss are taken into account, there is also great uncertainty in the prediction of how much damage is going to occur. Additionally, the uncertainty related to the repair costs, especially in the post-earthquake environment, is not to be overlooked.

5.7 Step 7: Select Analysis Programs

The last step is to select the analysis programs that will be used. There is no program that can carry out all the aspects of a performance-based assessment; therefore, a different program needs to be used for each step. The choice of which program to use to model the hazard and the response will depend on the comfort level of each engineer with each program. However, most engineers are not familiar with the available software to model the loss and to carry out the reliability analysis. This choice will depend on which reliability approach is used. If the ATC-58 methodology is employed, there are two available programs: one is Performance-Based Earthquake Engineering 1.0 and the other is ATC-58 PACT (Applied Technology Council 2009). Both are available for download.
They do not have any structural analysis capabilities; therefore the EDP matrices need to be calculated by the user and provided as an input. The loss estimation is done using fragility curves. ATC-58 PACT has a library with some fragility curves available to the user, but additional curves can be added as needed. Performance-Based Earthquake Engineering 1.0 does not have a library of fragility curves, and all of them need to be provided by the user. Finally, both use Monte Carlo sampling to solve the reliability problem and produce the loss curves.

Alternatively, if the unified reliability approach is used, Rt is the available software to orchestrate all the models (Inrisk 2009). Rt provides the ability to communicate with different software to model each step and to solve the reliability problem using different solution strategies. Rt is still under development but will soon have a library of models, including hazard, damage and loss models.

Figure 5.1 Step by step PBEE (summary of the options at each step: performance measure, loss or structural response; structural analysis method, nonlinear dynamic or pushover analysis; hazard model, recorded or synthetic ground motions, or a spectral acceleration from a response spectrum or attenuation model; damage estimation, fragility curves or cumulative models; loss estimation, repair costs or downtime; uncertainties, considered for more information at higher computational cost, or ignored for a faster analysis with limited information)

CHAPTER 6: CASE STUDY

This chapter discusses the performance-based assessment that was carried out for a steel moment frame building. The analysis involves the evaluation of the building's performance in seismic events. The analysis was carried out using the data and the methodology from ATC-58 (Yang T.Y. et al. 2009), and compared with the unified reliability approach for the same building. The benefits and weaknesses of both approaches were contrasted.
6.1 Building Description

The studied building is located in Berkeley, California. It is three stories high, with each story 14 feet high. It has an H-shaped plan layout; each floor has an area of 22,736 square feet, as shown in Figure 6.1. The bold lines indicate the location of the lateral force resisting system. The weight of the building and the periods of its modes of vibration are shown in Table 6.1.

Floor    Weight (kips)    Mode    Period (s)
1st      2,538            1st     1.13
2nd      3,110            2nd     0.33
3rd      2,935            3rd     0.15
TOTAL    8,583

Table 6.1 Building weight and modes

Figure 6.1 Plan view (Applied Technology Council 2009)

Due to the double symmetry of the building, only one half of the building was modeled in two dimensions, placing one frame next to the other and connecting them with a rigid diaphragm. The elements were modeled using fibre sections and all the supports are considered pinned at the base.

6.2 ATC-58 Methodology

The following section presents a summary of the work in (Yang 2006). This methodology is a pioneering work in PBEE that has been disseminated in the engineering community.

6.2.1 Hazard Model

Three discrete hazard levels were selected to represent the seismic events that could affect this building. These hazard levels are events with probabilities of exceedance of 50% in 50 years, 10% in 50 years and 5% in 50 years. Their respective return periods are 72, 475 and 975 years. Ten records were selected to represent the ground motions at each hazard level. Table 6.2, Table 6.3 and Table 6.4 show the selected ground motions.
Earthquake                    Mw     Station                      Scale factor
Coyote Lake, 8/6/1979         5.7    Coyote Lake Dam Abutment     2.68
                                     Gilroy #6                    0.60
Parkfield, 27/6/1966          6.0    Temblor                      1.43
                                     Cholame Array #5             1.57
                                     Cholame Array #8             2.25
Livermore, 27/1/1980          5.5    Fagundes Ranch               7.49
                                     Morgan Territory Park        2.96
Morgan Hill, 24/4/1984        6.2    Coyote Lake Dam Abutment     0.50
                                     Anderson Dam, Downstream     2.01
                                     Hall Valley                  0.73

Table 6.2 Ground motions representing 50% in 50 years hazard level

Earthquake                    Mw     Station                      Scale factor
Loma Prieta, USA, 17/10/1989  7.0    Los Gatos Present Center     0.79
                                     Saratoga Aloha Ave           1.28
                                     Corralitos                   1.67
                                     Gavilan College              3.79
                                     Gilroy Historic Building     2.35
                                     Lexington Dam Abutment       0.47
Kobe, Japan, 17/1/1995        6.9    Kobe JMA                     0.49
Tottori, Japan, 6/10/2000     6.6    Kofu                         6.00
                                     Hino                         0.56
Erzincan, Turkey, 13/3/1992   6.7    Erzincan                     1.23

Table 6.3 Ground motions representing 10% in 50 years hazard level

Earthquake                    Mw     Station                      Scale factor
Loma Prieta, USA, 17/10/1989  7.0    Los Gatos Present Center     1.02
                                     Saratoga Aloha Ave           1.65
                                     Corralitos                   2.16
                                     Gavilan College              4.88
                                     Gilroy Historic Building     3.03
                                     Lexington Dam Abutment       0.61
Kobe, Japan, 17/1/1995        6.9    Kobe JMA                     0.64
Tottori, Japan, 6/10/2000     6.6    Kofu                         7.73
                                     Hino                         0.72
Erzincan, Turkey, 13/3/1992   6.7    Erzincan                     1.59

Table 6.4 Ground motions representing 5% in 50 years hazard level

The ground motions were scaled to match the spectral acceleration from the uniform hazard spectrum at the natural period of the building. This is not necessarily the best way to scale ground motions, especially for multi-degree-of-freedom systems (Baker and Cornell 2006). Figure 6.2 shows the equal hazard spectra. It is seen that for the hazard levels with return periods of 72, 475 and 975 years the spectral accelerations are 0.26g, 0.64g and 0.83g respectively. The scale factors used to achieve these spectral accelerations are shown in Table 6.2, Table 6.3 and Table 6.4.
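The scaling operation just described (matching each record's spectral acceleration at the building's natural period to the uniform hazard value) can be sketched generically: integrate a damped single-degree-of-freedom oscillator over the record to obtain its spectral acceleration, then take the ratio to the target ordinate. This is an illustrative helper, not the procedure used in the thesis; the 5% damping and unit conventions are assumptions.

```python
import math

def spectral_acceleration(accel, dt, period, damping=0.05):
    """Peak pseudo-spectral acceleration (same units as `accel`) of a ground
    acceleration record, using average-acceleration Newmark integration of a
    damped SDOF oscillator of unit mass."""
    wn = 2.0 * math.pi / period
    k = wn * wn                       # stiffness per unit mass
    c = 2.0 * damping * wn            # damping per unit mass
    u, v = 0.0, 0.0                   # relative displacement and velocity
    a = -accel[0] - c * v - k * u     # initial relative acceleration
    keff = k + 2.0 * c / dt + 4.0 / dt ** 2
    peak = 0.0
    for ag in accel[1:]:
        rhs = (-ag + (4.0 / dt ** 2) * u + (4.0 / dt) * v + a
               + c * ((2.0 / dt) * u + v))
        u_new = rhs / keff
        v_new = 2.0 * (u_new - u) / dt - v
        a_new = 4.0 * (u_new - u) / dt ** 2 - 4.0 * v / dt - a
        u, v, a = u_new, v_new, a_new
        peak = max(peak, abs(u))
    return peak * wn * wn             # pseudo-acceleration = wn^2 * |u|max

def scale_factor(record, dt, period, sa_target):
    """Factor that scales the record so its Sa at `period` matches the target."""
    return sa_target / spectral_acceleration(record, dt, period)
```

Because the oscillator is linear, doubling the record exactly doubles its spectral acceleration, which is what makes a single scale factor per record sufficient.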
Figure 6.2 Uniform hazard spectra (spectral acceleration versus period for the 72, 475 and 975 year return periods)

6.2.2 Response Model

Using the scaled ground motions presented in the previous section, nonlinear dynamic analyses were conducted to determine the response of the structure. The EDPs that were used to describe this response were the peak inter-story drifts and the peak acceleration at each floor level. Table 6.5, Table 6.6 and Table 6.7 show the EDPs for each hazard level; the inter-story drifts are represented by ∆i and the floor accelerations are represented by ai.

∆1 (%)   ∆2 (%)   ∆3 (%)   ag (g)   a2 (g)   a3 (g)   aR (g)
0.66     1.07     2.02     0.75     1.05     0.85     0.75
0.68     0.95     0.98     0.27     0.35     0.34     0.40
0.80     0.90     1.69     0.53     0.87     0.76     0.74
0.76     1.10     1.51     0.52     1.32     1.04     0.75
0.65     0.99     1.54     0.55     0.90     0.61     0.57
1.27     1.02     3.08     1.66     1.81     1.67     1.09
0.61     0.94     1.09     0.88     1.13     0.89     0.60
0.56     0.95     1.32     0.44     0.54     0.59     0.56
0.76     0.84     1.44     0.88     1.15     0.97     0.67
0.63     0.95     1.04     0.22     0.28     0.31     0.42

Table 6.5 EDP matrix at 50% in 50 years hazard

∆1 (%)   ∆2 (%)   ∆3 (%)   ag (g)   a2 (g)   a3 (g)   aR (g)
1.40     1.83     1.79     0.51     1.02     0.65     0.64
1.31     1.47     1.63     0.46     0.94     0.99     0.64
1.53     2.56     3.10     0.81     0.97     1.01     0.85
1.84     1.89     2.79     1.11     1.64     1.45     1.04
2.14     2.63     2.94     0.66     0.77     0.74     0.72
1.26     1.90     1.89     0.21     0.36     0.40     0.48
0.77     1.69     2.29     0.42     0.76     0.72     0.64
3.15     3.40     4.70     4.65     4.76     3.00     3.07
1.38     1.76     2.07     0.59     0.69     0.58     0.61
1.66     2.23     2.35     0.59     0.77     0.77     0.61

Table 6.6 EDP matrix at 10% in 50 years hazard

∆1 (%)   ∆2 (%)   ∆3 (%)   ag (g)   a2 (g)   a3 (g)   aR (g)
2.17     2.87     3.10     0.66     1.20     0.82     0.72
1.33     1.59     1.93     0.60     1.36     1.25     0.71
1.54     2.53     3.78     1.04     1.21     1.14     0.94
2.68     2.79     2.95     1.44     1.92     1.63     1.11
3.00     3.65     4.30     0.85     0.99     1.01     0.74
1.66     2.42     2.44     0.28     0.42     0.46     0.51
0.93     1.77     2.55     0.55     0.93     0.86     0.74
4.91     5.05     5.87     5.99     5.46     3.48     3.60
1.64     2.13     2.56     0.77     0.94     0.63     0.73
1.81     2.44     2.63     0.76     0.93     0.92     0.75

Table 6.7 EDP matrix at 5% in 50 years hazard

Using nonlinear dynamic
analyses to produce additional EDPs is held back by their computational cost and the small number of strong recorded ground motions. To solve this problem, a joint lognormal distribution is fitted to each EDP matrix. This allows the generation of new EDPs without the need for an additional nonlinear dynamic analysis, by using the procedure shown in Appendix A.

6.2.3 Damage Model

To estimate the damage that is suffered by the building and its contents, all the main components need to be separated into performance groups. Each performance group must collect components that are affected by the same EDP in a similar way. For this building, 16 performance groups were used. These groups are the structural system, exterior enclosure, drift sensitive non-structural elements, acceleration sensitive non-structural elements and office content on each floor, as well as the equipment on the roof. Table 6.8 shows a summary of the performance groups as well as their corresponding EDPs.

Performance group    Components                                                        EDP
PG1, PG2, PG3        Structural system (lateral load resisting system)                 ∆1, ∆2, ∆3
PG4, PG5, PG6        Exterior enclosure (glass)                                        ∆1, ∆2, ∆3
PG7, PG8, PG9        Drift sensitive non-structural elements (doors)                   ∆1, ∆2, ∆3
PG10, PG11, PG12     Acceleration sensitive non-structural elements (ceiling tiles)    a2, a3, aR
PG13, PG14, PG15     Office content (computers)                                        ag, a2, a3
PG16                 Equipment on roof                                                 aR

Table 6.8 Performance groups

Each performance group is assigned a fragility curve. The fragility curves model the probability of being in a damage state given a value of the EDP. Each damage state represents the repair effort needed to restore the components back to their original state. The boundary that separates one damage state from the next is defined by its median and its coefficient of variation. Table 6.9 shows the parameters that define each boundary between damage states. Figure 6.3 to Figure 6.8 show the fragility curves used to define the damage states of each performance group (Yang T. 2008).
Components                                        Parameter    >= DS2    >= DS3    >= DS4
Structural system                                 Median       1.5       2.5       3.5
                                                  C.o.v.       0.25      0.30      0.30
Exterior enclosure                                Median       2.8       3.1       -
                                                  C.o.v.       0.097     0.12      -
Drift sensitive non-structural elements           Median       0.39      0.85      -
                                                  C.o.v.       0.17      0.23      -
Acceleration sensitive non-structural elements    Median       1.0       1.5       2.0
                                                  C.o.v.       0.15      0.2       0.2
Office content                                    Median       0.3       0.7       3.5
                                                  C.o.v.       0.20      0.22      0.25
Equipment on roof                                 Median       1.0       2.0       -
                                                  C.o.v.       0.15      0.2       -

Table 6.9 Definition of fragility curves

Figure 6.3 Fragility curves for structural system
Figure 6.4 Fragility curves for exterior enclosure
Figure 6.5 Fragility curves for drift sensitive non-structural elements
Figure 6.6 Fragility curves for acceleration sensitive non-structural elements
Figure 6.7 Fragility curves for office content
Figure 6.8 Fragility curves for roof equipment

6.2.4 Loss Model

Each item in a performance group has a repair quantity associated with every damage state. The repair quantity represents the amount of repair needed. Table 6.10 shows the repair quantities for performance group 1; all the repair quantities are shown in Appendix B. These repair quantities are used in the tri-linear function shown in Figure 6.9 to calculate the unit repair cost for each item.
The parameters of this function (minimum quantity, maximum quantity, minimum cost and maximum cost) are shown in Table 6.11. Finally, the unit repair costs are multiplied by the repair quantities for each repair item and added up through the structure. This provides the total repair cost for the building.

                                                           Repair quantity
Repair item                                Unit    DS1    DS2      DS3      DS4
Demolition
  Finish protection                        sf      0      6,000    6,000    6,000
  Ceiling system removal                   sf      0      2,000    3,000    5,000
  Drywall assembly removal                 sf      0      800      800      6,000
  Miscellaneous MEP                        loc     0      2        4        6
  Remove exterior skin (salvage)           sf      0      0        0        5,600
Repair
  Welding protection                       sf      0      1,500    1,500    1,500
  Shore beams below and remove             loc     0      0        0        12
  Cut floor slab at damaged connection     sf      0      70       150      1,600
  Carbon arc out weld                      lf      0      40       50       50
  Remove portion of damaged beam/column    sf      0      0        100      100
  Replace weld from above                  lf      0      40       40       40
  Remove/replace connection                lb      0      0        0        3,000
  Replace slab                             sf      0      70       70       1,600
Put-back
  Miscellaneous MEP and cleanup            loc     0      2        4        6
  Wall framing (studs drywall tape paint)  sf      0      800      800      6,000
  Replace exterior skin (from salvage)     sf      0      0        0        5,600
  Ceiling system                           sf      0      2,000    3,000    5,000

Table 6.10 PG1 repair quantities

Repair item                                Unit    Min qty    Max qty    Min cost    Max cost
GENERAL CLEAN UP
  Office papers books                      sf      1,000      10,000     0.06        0.1
  Office equipment                         sf      1,000      10,000     0.04        0.06
  Loose furniture/file drawers             sf      1,000      10,000     0.03        0.05
  Water damage                             sf      1,000      20,000     0.1         0.15
CONTENTS
  Conventional office                      sf      10,000     50,000     21          25
ROOF-TOP MEP
  Repair in place                          ls      1          2          10,000      10,000
  Remove and replace                       ls      1          2          200,000     200,000
STRUCTURAL DEMOLITION
  Finish protection                        sf      1,000      40,000     0.15        0.3
  Ceiling system removal                   sf      1,000      10,000     1.25        2
  Drywall assembly removal                 sf      1,000      20,000     1.5         2.5
  Miscellaneous MEP                        loc     6          24         150         200
  Remove exterior skin (salvage)           sf      3,000      10,000     25          30
STRUCTURAL REPAIR
  Welding protection                       sf      1,000      10,000     1           1.5
  Shore beams below and remove             loc     6          24         1,600       2,100
  Cut floor slab at damaged connection     sf      10         100        150         200
  Carbon arc out weld                      lf      100        1,000      10          15
  Remove portion of damaged beam/column    sf      100        2,000      50          80
  Replace weld from above                  lf      100        1,000      40          50
  Remove/replace connection                lb      2,000      20,000     5           6
  Replace slab                             sf      100        1,000      16          20
STRUCTURAL PUT-BACK
  Miscellaneous MEP and cleanup            loc     6          24         200         300
  Wall framing (studs drywall tape paint)  sf      100        1,000      8           12
  Replace exterior skin (from salvage)     sf      1,000      10,000     30          35
  Ceiling system                           sf      100        60,000     5           8
NON-STRUCTURAL INTERNAL DEMOLITION
  Remove furniture                         sf      100        1,000      1.25        2
  Carpet and rubber base removal           sf      1,000      20,000     1           1.5
  Drywall construction removal             sf      200        20,000     1.5         2.5
  Door and frame removal                   ea      12         48         25          40
  Interior glazing removal                 sf      500        5,000      2           2.5
  Ceiling system removal                   sf      1,000      20,000     1.25        2
  MEP removal                              sf      100        10,000     15          40
  Remove casework                          lf      100        1,000      15          20
NON-STRUCTURAL INTERNAL CONSTRUCTION
  Drywall construction/paint               sf      500        25,000     8           12
  Doors and frames                         ea      12         48         400         600
  Interior glazing                         sf      100        15,000     30          45
  Carpet and rubber base                   sf      500        30,000     4           6
  Patch and paint interior partitions      sf      1,000      10,000     2           2.5
  Replace ceiling tiles                    sf      1,000      20,000     1.5         2
  Replace ceiling system                   sf      1,000      20,000     2.5         3
  MEP replacement                          sf      100        1,000      60          80
  Replace casework                         lf      100        1,000      50          70
NON-STRUCTURAL EXTERNAL DEMOLITION
  Erect scaffolding                        sf      1,000      10,000     2           2.5
  Remove damaged windows                   sf      100        1,000      15          20
  Remove damaged precast panels            sf      3,000      10,000     8           12
  Miscellaneous access                     sf      100        1,000      15          20
NON-STRUCTURAL EXTERNAL PUT-BACK
  Install new windows                      sf      100        1,000      70          80
  Provide new precast concrete panels      sf      1,000      10,000     65          80
  Patch and paint exterior panels          sf      500        5,000      3.5         4.5
  Miscellaneous put back                   ea      100        1,000      7           10
  Site clean up                            sf      1,000      10,000     0.75        1.5

Table 6.11 Unit costs

Figure 6.9 Unit cost function (the unit cost equals the maximum cost up to the minimum quantity, decreases linearly with quantity, and equals the minimum cost beyond the maximum quantity)

6.3 Smoothing Fragility Curves

The damage-cost model given by fragility curves is discontinuous.
This is because each damage state of a performance group is related to a particular repair cost. Therefore, as the performance group moves from one damage state to the next, there is a sudden jump in the repair cost. It was mentioned in Section 3.2 that FORM reliability analysis (and several other reliability methods) is not possible if the limit-state function is discontinuous. In this section, smoothing is utilized to remedy this problem and thus facilitate FORM. To understand how the smoothing is carried out, recall from the previous section that each damage state is related to a repair quantity. This repair quantity is in turn related to a unit repair cost by the cost function shown in Figure 6.9. As a result, each damage state is associated with a repair cost. Table 6.12 shows the repair cost associated with each damage state for all performance groups, and Figure 6.10 shows a graphical representation of the cost jumps for performance group 1. As mentioned above, this discontinuity prohibits the use of FORM, which would reduce the appeal of fragility curves within the unified reliability approach because the computationally efficient gradient-based methods would not be available.
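As an illustration (not part of the thesis analysis), the step damage-cost model just described, together with the unit cost function of Figure 6.9, can be sketched in a few lines. The EDP thresholds and function names below are hypothetical; the per-damage-state costs are the PG1 values from Table 6.12, and the unit-cost shape assumes the linear decrease of Figure 6.9.

```python
import bisect

def unit_cost(q, min_qty, max_qty, min_cost, max_cost):
    """Unit cost function of Figure 6.9 (assumed shape): max_cost below
    min_qty, decreasing linearly to min_cost at max_qty, constant beyond."""
    if q <= min_qty:
        return max_cost
    if q >= max_qty:
        return min_cost
    frac = (q - min_qty) / (max_qty - min_qty)
    return max_cost - frac * (max_cost - min_cost)

def repair_cost(edp, ds_thresholds, ds_costs):
    """Step damage-cost model: the cost jumps as the EDP crosses each
    damage-state threshold, producing the discontinuity of Figure 6.10."""
    ds = bisect.bisect_right(ds_thresholds, edp)  # number of thresholds crossed
    return ds_costs[ds]

# Hypothetical drift thresholds; costs per damage state from Table 6.12 (PG1).
thresholds = [0.01, 0.02, 0.05]
costs = [0.0, 49313.74, 78718.33, 772092.51]
```

A drift of 0.015 falls between the first two hypothetical thresholds, so the model returns the DS2 cost; the jump between adjacent damage states is exactly the discontinuity that smoothing must remove.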
Figure 6.10 Fragility curve for performance group 1

Total repair cost ($):
Performance group | DS1 | DS2 | DS3 | DS4
PG1 | 0.00 | 49,313.74 | 78,718.33 | 772,092.51
PG2 | 0.00 | 58,735.00 | 78,718.33 | 731,635.37
PG3 | 0.00 | 58,735.00 | 78,718.33 | 698,659.18
PG4 | 0.00 | 511,133.33 | 1,154,413.33 | -
PG5 | 0.00 | 511,133.33 | 1,154,413.33 | -
PG6 | 0.00 | 511,133.33 | 1,154,413.33 | -
PG7 | 0.00 | 28,931.97 | 387,773.50 | -
PG8 | 0.00 | 28,931.97 | 387,773.50 | -
PG9 | 0.00 | 28,931.97 | 387,773.50 | -
PG10 | 0.00 | 11,055.16 | 85,993.82 | 311,468.82
PG11 | 0.00 | 11,055.16 | 85,993.82 | 311,468.82
PG12 | 0.00 | 11,055.16 | 85,993.82 | 311,468.82
PG13 | 0.00 | 555.56 | 1,600.00 | 481,600.00
PG14 | 0.00 | 555.56 | 1,600.00 | 481,600.00
PG15 | 0.00 | 555.56 | 1,600.00 | 481,600.00
PG16 | 0.00 | 10,000.00 | 211,500.00 | -
Table 6.12 Repair cost for each performance group

However, ruling out fragility curves and focusing solely on other, better suited methods is not desirable, because the engineering community has great interest in developing fragility curves for structural elements, non-structural elements and building contents. If fragility curves were ignored, this vast and continuously growing source of information would be outside the reach of the unified reliability approach. It is clear that fragility curves cannot be used directly; something must therefore be done to make them compatible with gradient-based reliability methods. The solution pursued here is to smooth the fragility curves so that they become continuous. To achieve a smooth relationship between the input EDP and the output cost, the following strategy is employed. First, consider Figure 6.10, which relates cost to EDP once a probability value is given. Next, consider a cut along a particular probability value; Figure 6.10 shows this cut as a black line. Figure 6.11 shows the 2D profile obtained from the cut. The solid line is the original fragility curve profile, which can be smoothed by fitting any appropriate function.
In this thesis, a lognormal distribution function is selected for this purpose. This choice is not made because the fitted quantity is a probability; rather, the lognormal distribution leads to a convenient algorithm, as shown in the following. In the course of this study it was also found that the lognormal distribution function provides a good approximation to the discontinuous function. To fit a lognormal CDF to the solid line in Figure 6.11, the ordinate axis must first be scaled so that it converges to a maximum value of 1, as all CDFs do. This scaling is done by dividing the cost values by the maximum repair cost for the performance group. The scaled axis is shown on the right side of Figure 6.11.

Figure 6.11 Fitted profile (cut along P(DS >= DS_i) = 0.5; cost versus EDP, with the scaled cost ratio on the right axis)

Upon scaling the ordinate axis to the domain 0 to 1, the height of each step between plateaus in Figure 6.11 is taken to represent the probability of being in the next damage state, here denoted P_i. The EDP value at the jump is denoted EDP_i. Under these assumptions the statistical first and second moments, which are needed to draw the lognormal distribution, are calculated as

\mu_{EDP} = \sum_{i=1}^{numDS} P_i \, EDP_i \qquad (6.1)

\sigma_{EDP} = \sqrt{ \sum_{i=1}^{numDS} P_i \, \left( EDP_i - \mu_{EDP} \right)^2 } \qquad (6.2)

Next, the lognormal parameters \xi and \lambda are calculated using Equations 6.3 and 6.4:

\xi = \sqrt{ \ln\left( 1 + \left( \sigma_{EDP} / \mu_{EDP} \right)^2 \right) } \qquad (6.3)

\lambda = \ln\left( \mu_{EDP} \right) - 0.5 \, \xi^2 \qquad (6.4)

Using these lognormal parameters, the lognormal CDF shown as a dashed line in Figure 6.11 is calculated as

F(x) = \Phi\left( \frac{\ln x - \lambda}{\xi} \right) \qquad (6.5)

Finally, this CDF is scaled by multiplying it by the maximum repair cost for the performance group. In the performance-based analysis, this smoothing is carried out for any "cut," i.e., for any probability value. Effectively, this yields a complete smoothing of the fragility curve.
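The smoothing of Equations 6.1-6.5 can be sketched as follows; this is an illustration only, and the step heights P_i, jump locations EDP_i, and maximum cost below are placeholder values rather than data from the analysis.

```python
import math

def smooth_profile(P, EDP, max_cost):
    """Fit a lognormal CDF to a scaled step cost profile (Eqs. 6.1-6.5).
    P[i]   : step height (probability of the next damage state) at the cut
    EDP[i] : EDP value at which the i-th jump occurs
    Returns a function mapping EDP to smoothed repair cost."""
    mu = sum(p * e for p, e in zip(P, EDP))                              # Eq. 6.1
    sigma = math.sqrt(sum(p * (e - mu) ** 2 for p, e in zip(P, EDP)))    # Eq. 6.2
    xi = math.sqrt(math.log(1.0 + (sigma / mu) ** 2))                    # Eq. 6.3
    lam = math.log(mu) - 0.5 * xi ** 2                                   # Eq. 6.4
    def cost(edp):
        # Standard normal CDF via erf, applied to (ln x - lambda)/xi (Eq. 6.5)
        phi = 0.5 * (1.0 + math.erf((math.log(edp) - lam) / (xi * math.sqrt(2.0))))
        return max_cost * phi  # scale the fitted CDF back to dollars
    return cost

# Hypothetical cut: three jumps whose step heights sum to 1.
cost = smooth_profile(P=[0.3, 0.4, 0.3], EDP=[1.0, 2.0, 4.0], max_cost=772092.51)
```

The returned function is continuous and strictly increasing in the EDP, which is precisely what gradient-based methods such as FORM require.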
By carrying out the smoothing for many probability values, Figure 6.12 is obtained, demonstrating that the fragility curve is now a smooth surface. This means that gradient-based methods, e.g. FORM, can now be used with fragility curves, so the information and knowledge contained in fragility curves is not out of the reach of the unified reliability analysis.

Figure 6.12 Fitted fragility curve for performance group 1

6.4 Mehanny-Deierlein Damage Model

It is also possible to estimate the damage to the structural elements by using cumulative damage measures, such as the Mehanny-Deierlein damage model (Mehanny and Deierlein 2001), instead of fragility curves. This model differs from fragility curves in that it estimates damage from the inelastic deformations of the elements (beam and column elements only) during cyclic loading, rather than from the peak inter-storey drifts. The damage model proposed by Mehanny and Deierlein reads

D_\theta^+ = \frac{ \left( \theta_{p,\mathrm{current\,PHC}}^+ \right)^\alpha + \left( \sum_{i=1}^{n} \theta_{p,i|\mathrm{FHC}}^+ \right)^\beta }{ \left( \theta_{pu}^+ \right)^\alpha + \left( \sum_{i=1}^{n} \theta_{p,i|\mathrm{FHC}}^+ \right)^\beta } \qquad (6.6)

where \theta_p^+ is the inelastic deformation in the positive direction of loading and \theta_{pu}^+ is the maximum inelastic deformation that the section can resist under monotonic loading. These deformations are separated into those induced by primary half cycles (PHC) and follower half cycles (FHC). A PHC is a half cycle whose amplitude exceeds all previous cycles, and an FHC is any other half cycle. The coefficients \alpha and \beta are calibrated by comparing the results to test data, which includes concrete, steel and composite elements (Azizinamini et al. 1992; Ozcebe and Saatcioglu 1987; Watson and Park 1994). For this building, the steel calibration is used, where \alpha and \beta are 1.0 and 1.5 respectively (Mehanny and Deierlein 2001). For the negative direction a similar damage index, D_\theta^-, is calculated. These two indices are combined by using Equation 6.7 to produce the final damage index.
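Before that combination, the per-direction index of Equation 6.6 can be sketched as follows. This is a minimal illustration only: it assumes the half-cycle amplitudes have already been extracted from the response history, and the numerical values are hypothetical.

```python
def damage_index_positive(theta_phc, theta_fhc, theta_pu, alpha=1.0, beta=1.5):
    """Mehanny-Deierlein index for one loading direction (Eq. 6.6).
    theta_phc : plastic deformation of the current primary half cycle
    theta_fhc : list of plastic deformations of the follower half cycles
    theta_pu  : deformation capacity under monotonic loading
    alpha, beta default to the steel calibration (1.0, 1.5) used here."""
    fhc_sum = sum(theta_fhc)
    numerator = theta_phc ** alpha + fhc_sum ** beta
    denominator = theta_pu ** alpha + fhc_sum ** beta
    return numerator / denominator

# Hypothetical half-cycle data for one element, positive direction.
d_plus = damage_index_positive(theta_phc=0.02, theta_fhc=[0.01, 0.008], theta_pu=0.06)
```

Note that when the current primary half cycle reaches the monotonic capacity, the index equals 1 regardless of the follower half cycles, consistent with the form of Equation 6.6.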
The parameter \gamma is also calibrated to test data; for this building the value is 6.0.

D_\theta = \left[ \left( D_\theta^+ \right)^\gamma + \left( D_\theta^- \right)^\gamma \right]^{1/\gamma} \qquad (6.7)

It is expected that this model will produce results that are closer to the real behaviour of the structural elements than those obtained from fragility curves, because it considers the cyclic deformations that the elements go through during a seismic event, as well as the dimensions of the columns and their reinforcement. However, in order to use this model, the complete response time-history is needed, and therefore a nonlinear dynamic analysis must be carried out to evaluate the structural damage for each realization of a seismic event. This increases the computation time needed to perform the analysis, since the fitted joint lognormal response model shown in Section 6.2.2 cannot be used. Instead, a hazard model that produces a complete ground motion record is needed. The damage index is used to obtain the repair cost of the structural elements by relating it to the loss index:

L_m = \begin{cases} 0.5 \left[ 1 + \sin\left( \pi \left( D_\theta - 0.5 \right) \right) \right] C_m & \text{for } D_\theta < 1 \\ C_m & \text{for } D_\theta \ge 1 \end{cases} \qquad (6.8)

where L_m and C_m represent the repair cost and total replacement cost of structural element m, respectively (Koduru 2008).

CHAPTER 7: ANALYSIS RESULTS

This chapter compares the results obtained from the different methods described in the previous chapters. A performance-based assessment is carried out using the different approaches, and the benefits and drawbacks of each are outlined.

7.1 Computation of the Total Loss Curve

The methods shown in the previous chapters are used to determine the total repair cost, C_T, of the building after a seismic event. Using this information, a reliability problem of the form P(C_T < C_i) is formulated and solved using any reliability method. If the cost threshold, C_i, is varied, a CDF like the one shown in Figure 7.1 is produced.
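Such a CDF can be estimated directly from sampled total-cost realizations; a minimal sketch follows, in which the sample values are placeholders rather than results from the analysis.

```python
def cost_cdf(samples, thresholds):
    """Estimate P(C_T < C_i) for each threshold C_i by counting the
    fraction of sampled total-cost realizations below it."""
    n = len(samples)
    return [sum(1 for c in samples if c < ci) / n for ci in thresholds]

# Hypothetical C_T realizations ($) at one hazard level.
total_costs = [0.4e6, 1.2e6, 0.9e6, 2.5e6, 0.1e6]
probs = cost_cdf(total_costs, thresholds=[0.5e6, 1.0e6, 3.0e6])  # [0.4, 0.6, 1.0]
```

Sweeping the threshold over a fine grid traces out the full cost CDF for that hazard level.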
This is a valuable source of information for stakeholders, since it shows the probability of exceeding different cost thresholds for a given return period. Although this information is useful, it is even more helpful to have one graph that shows the annual rate of exceeding a cost threshold, incorporating all return periods. This is done by computing a "loss curve." The loss curve is calculated with the rule of total probability, by multiplying the slope of the hazard curve by the complement of the CDFs at different hazard levels and then integrating over all hazard levels (Yang 2006). This procedure is represented by Equation 7.1:

\lambda\left( C_T \ge C_i \right) = \int_0^\infty P\left( C_T \ge C_i \mid Sa = sa \right) \left| \frac{d\lambda(Sa)}{dSa} \right| dSa \qquad (7.1)

Figure 7.1 Cost CDF (curves for the 5%, 10% and 50% in 50 years hazard levels)

Repeating this process for all cost thresholds produces a loss curve like the one shown in Figure 7.2. This graph provides additional information to the stakeholders in a concise manner, facilitating communication and the decision making process. Additionally, the mean annual repair cost over all hazard levels is the area under the loss curve, which is easily calculated by integration. This is also helpful information that can be compared against the insurance premium the owner would have to pay to insure the building.

Figure 7.2 Loss curve

7.2 Comparison of Reliability Methods

The performance-based assessment of the building was carried out using both the original fragility curves and the smoothed fragility curves. As a result, a cost CDF was obtained for each hazard level.
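The combination of per-hazard-level exceedance probabilities into a loss curve ordinate (Equation 7.1) can be sketched numerically; the hazard curve and conditional probabilities below are hypothetical values for illustration only.

```python
def loss_rate(sa, lambda_sa, p_exceed):
    """Annual rate of exceeding one cost threshold (Eq. 7.1, discretized).
    sa        : intensity levels (increasing)
    lambda_sa : annual exceedance rate of each intensity (decreasing)
    p_exceed  : P(C_T >= C_i | Sa) at each intensity
    Weights the exceedance probability by the hazard-curve slope
    (here, the rate drop over each interval) and sums over intervals."""
    rate = 0.0
    for k in range(len(sa) - 1):
        d_lambda = lambda_sa[k] - lambda_sa[k + 1]          # |dlambda/dSa| * dSa
        p_mid = 0.5 * (p_exceed[k] + p_exceed[k + 1])       # trapezoidal average
        rate += p_mid * d_lambda
    return rate

# Hypothetical Sa grid (g), hazard curve, and conditional exceedance probs.
sa = [0.1, 0.3, 0.6, 1.0]
lam = [0.05, 0.01, 0.002, 0.0004]
p = [0.0, 0.2, 0.7, 0.95]
annual_rate = loss_rate(sa, lam, p)
```

Repeating this for every cost threshold yields the full loss curve, and summing the rates over thresholds (i.e., the area under the curve) gives the mean annual repair cost mentioned above.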
The analysis was carried out using both Monte Carlo sampling and FORM for the smoothed fragility curves, and only Monte Carlo sampling for the original fragility curves, because of the discontinuity problems. For each hazard level, when a Monte Carlo sampling analysis was performed, one million samples were generated and employed to estimate the probability of exceeding all cost thresholds. When FORM was used, on the other hand, a separate analysis had to be executed for each cost threshold. The convergence criteria were set at 0.001 both for the closeness of the design point to the limit-state surface and for the closeness of the direction of the gradient vector towards the origin in the standard normal space (Haukaas 2007).

7.2.1 Accuracy

The results of these analyses are shown in Figure 7.3, Figure 7.4 and Figure 7.5. The blue line shows the results obtained with the original fragility curves; the discontinuity is clearly observed in the sudden jumps in the CDF values. The red line shows the results obtained from analysing the smoothed fragility curves with Monte Carlo sampling. These results follow an average of the CDF obtained from the original fragility curves. The brown line shows the results obtained from analysing the smoothed fragility curves with FORM. This solution tends to overestimate the probabilities of failure, mostly because of the linear limit-state surface that FORM assumes. The error caused by this assumption decreases with the hazard level, because lower intensity earthquakes produce a more linear response. These CDFs were combined to produce loss curves with the procedure explained in the previous section. The loss curves are shown in Figure 7.6, where the discontinuity from the original fragility curves is still clearly visible. It can also be seen that the smoothed fragility curves follow an average of the original fragility curves.
However, the most important result is that the error produced by the use of FORM is not significant, since it closely follows the results obtained with Monte Carlo sampling.

Figure 7.3 Cost CDF for a 5% in 50 years hazard level
Figure 7.4 Cost CDF for a 10% in 50 years hazard level
Figure 7.5 Cost CDF for a 50% in 50 years hazard level
Figure 7.6 Loss curves
(Each figure compares sampling with smooth fragilities, sampling with original fragilities, and FORM with smooth fragilities.)

7.2.2 Computational Cost

The previous section showed that neither reliability method has an accuracy advantage over the other. However, there is a significant difference in computational cost. To compare the computational cost, the number of evaluations of the limit-state function is used; this is a good measure because it does not depend on the speed of each individual computer. Table 7.1 shows the number of evaluations of the limit-state function required to obtain the CDF value at the mean and at the tail of the probability distribution for each hazard level (e.g. probability of failure less than 0.1%).
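The sampling counts in Table 7.1 can be understood from the standard relation between a probability and the coefficient of variation of its Monte Carlo estimate; a sketch under that assumption (the relation N = (1 - p)/(p delta^2) is the textbook result for a binomial estimator, not a formula quoted from this thesis):

```python
import math

def samples_required(p, cov=0.03):
    """Samples needed so the Monte Carlo estimate of probability p has
    coefficient of variation cov: delta = sqrt((1-p)/(N*p))."""
    return math.ceil((1.0 - p) / (p * cov ** 2))

n_mean = samples_required(0.5)    # on the order of 1,100 near the CDF median
n_tail = samples_required(1e-5)   # over 100 million in the far tail
```

This reproduces the order of magnitude of the Table 7.1 counts and shows why the required sample count grows roughly as 1/p toward the tail.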
For FORM this is the number of evaluations needed to achieve the convergence criteria detailed earlier; for Monte Carlo sampling it is the number needed to achieve a coefficient of variation of 0.03. Table 7.1 also shows the ratio of the evaluations required by Monte Carlo sampling to those required by FORM, which represents how much faster FORM is when solving the reliability problem at a given location of the probability distribution. It is observed that FORM is considerably faster, particularly at the tails of the distribution. This is to be expected, because FORM searches for the solution instead of randomly evaluating the limit-state function until convergence is attained.

Number of limit-state function evaluations:
Hazard level | Location | Sampling | FORM | Sampling/FORM
5% in 50 years | Mean | 1,111 | 48 | 23
5% in 50 years | Tail | 92,591,481 | 341 | 271,529
10% in 50 years | Mean | 1,111 | 48 | 23
10% in 50 years | Tail | 61,727,283 | 364 | 169,580
50% in 50 years | Mean | 1,111 | 96 | 12
50% in 50 years | Tail | 158,302 | 586 | 270
Table 7.1 Computational cost

Since one of the major drawbacks of carrying out a performance-based analysis is the amount of time needed, the use of FORM presents a significant advantage. However, FORM cannot be used with discontinuous models, which is why smoothing the fragility curves was important.

7.2.3 Convergence

An important advantage of Monte Carlo sampling is that it always achieves convergence, even if an enormous number of simulations is required. FORM analysis, on the other hand, may not converge in some cases, or may converge slowly, which takes away its main advantage. Several issues can lead to convergence problems. The most common is discontinuity in the limit-state function; this has been avoided in this work by smoothing the fragility curves. Another cause of convergence problems is the presence of "waves" in the limit-state function.
For this example, with 23 random variables, the limit-state function lies in a 23-dimensional space and is difficult to visualize. To explain the problem in a simpler fashion, Figure 7.7 shows how the total repair cost varies with one of the random variables. The limit-state function is continuous, but it presents "waves," which occur because damage is triggered at different rates for each damage state. These "waves" make the FORM search algorithm converge at a slower rate, which happens when the design point is found close to their location.

Figure 7.7 Model "waviness" (total cost versus one random variable)

This convergence problem manifests itself when the cost CDFs are being constructed. Figure 7.8 shows the same CDF as Figure 7.3 for the FORM analysis. The blue dots show the cost thresholds for which convergence was achieved within 20 iterations, and the red circles show the cases for which convergence was not achieved after 20 iterations. It is seen in Figure 7.8 that the "waves" also appear in the CDF and that the locations where convergence is a problem lie around these waves. However, convergence fails only at a few locations, and the overall shape of the CDF is described properly by linear interpolation through the converged points.

Figure 7.8 Convergence problems (cost CDF for a 5% in 50 years hazard level; converged and non-converged points)

CHAPTER 8: CONCLUSIONS AND RECOMMENDATIONS

8.1 Conclusions

Performance-based earthquake engineering can be carried out in several ways. Past research has focused primarily on discrete performance levels based on structural response parameters. However, new research is focusing on loss as the main measure of performance.
Using loss as a performance measure makes the results of the analysis comprehensible to stakeholders who are not engineers. This facilitates the communication between engineers and other stakeholders, making it easier to identify and satisfy the client's needs. The loss CDF obtained from the performance-based assessment at a given hazard level is a useful tool for communication with stakeholders. Moreover, merging different hazard levels into one loss curve provides a concise and simple illustration of the seismic risk to the structure. This information can be very helpful for comparing seismic risk with other sources of risk to the structure, such as fire or flood risk.

The ATC-58 methodology is easy to use if all the necessary information is provided: adequate ground motion records, all the necessary fragility curves, and repair cost data. However, it does not allow engineers to decide what type of model or which reliability method to use. The unified reliability analysis is extremely versatile, permitting engineers to test different models in a plug-and-play fashion. It also allows the use of different reliability methods, which can greatly reduce the time required to carry out the analysis.

The incorporation of fragility curves into the unified reliability analysis framework enhances its versatility. It provides an additional damage-loss model that the engineer can use in a performance-based assessment, and it brings the continuous research and development on fragility curves within reach of this methodology.

The use of the first-order reliability method (FORM) with adequate models significantly reduces the computational cost of the reliability analysis. This is beneficial to PBEE, since the amount of time required to carry out the analysis is one of its main drawbacks. Most practicing engineers are not taking advantage of the new developments in PBEE.
The step-by-step guidelines provided in this thesis should encourage practicing engineers to start using performance-based methods in their assessments.

8.2 Recommendations for Future Work

This study recommends the development of more refined damage models for non-structural elements. While several damage models exist for structural elements (mostly line elements) that take the characteristics of the element into account, non-structural damage is still predicted by simplified models.

While adequate models exist to predict structural damage, more research is needed to predict structural collapse. Collapse is harder to predict than damage to individual elements because it is dependent on the entire structural system.

It is also recommended that research efforts be placed on the development of more accurate models to predict the loss due to repair cost and downtime, as well as models to predict the effects that occur away from the site where the structure is located (e.g. social and environmental effects).

Finally, it is recommended that hazard models that can produce synthetic ground motion records be made readily available for use in Vancouver. Several hazard models exist, but they have not been developed for this region.

REFERENCES

ActiveState. (2009). "Tcl Developer." http://www.tcl.tk/ (February 2nd, 2009).
Applied Technology Council. (2009). "ATC-58 Project." http://www.atcouncil.org/atc-58.shtml (April, 2008).
ASCE/SEI 41-06. (2007). "Seismic Rehabilitation of Existing Buildings."
ATC-40. "Seismic evaluation and retrofit of existing concrete buildings."
Atkinson, G. M. (1997). "Empirical ground motion relations for earthquakes in the Cascadia region." Canadian Journal of Civil Engineering, 24(1), 64-77.
Azizinamini, A., Corley, W. G., and Johal, L. S. P. (1992). "Effects of transverse reinforcement on seismic performance of columns." ACI Struct. J., 89(4), 442-450.
Baker, J. W., and Cornell, C. A. (2006).
"Spectral shape, epsilon and record selection." Earthquake Engineering and Structural Dynamics, 35(9), 1077-1095.
Bozorgnia, Y., and Bertero, V. V. (2004). Earthquake Engineering: From Engineering Seismology to Performance-Based Engineering. CRC Press, Boca Raton, FL.
Chen, W., and Lui, E. M. (2006). Earthquake Engineering for Structural Design. CRC/Taylor & Francis, Boca Raton.
Comerio, M. C. (2006). "Estimating downtime in loss modeling." Earthquake Spectra, 22(2), 349-365.
Ditlevsen, O., and Madsen, H. O. (1996). Structural Reliability Methods. J. Wiley and Sons, New York.
EERI. (2000). Financial Management of Earthquake Risk. Earthquake Engineering Research Institute, Oakland, California.
Ellingwood, B. R. (2008). "Structural reliability and performance-based engineering." Proceedings of the Institution of Civil Engineers: Structures and Buildings, 161(4), 199-207.
Ellingwood, B. R. (2001). "Earthquake risk assessment of building structures." Reliability Engineering and System Safety, 74(3), 251-262.
FEMA 273. (1996). "NEHRP guidelines for the seismic rehabilitation of buildings — ballot version."
FEMA 356. (2000). "Prestandard and commentary for the seismic rehabilitation of buildings."
Goulet, C. A., Watson-Lamprey, J., Baker, J., Haselton, C., and Luco, N. (2008). "Assessment of ground motion selection and modification (GMSM) methods for non-linear dynamic analyses of structures." Geotechnical Earthquake Engineering and Soil Dynamics IV Congress 2008, American Society of Civil Engineers, Sacramento, CA.
Haukaas, T. (2008). "Unified reliability and design optimization for earthquake engineering." Prob. Eng. Mech., 23(4), 471-481.
Haukaas, T. (2007). Engineering Decision Making with Numerical Simulation Models.
Iervolino, I., and Cornell, C. A. (2005). "Record selection for nonlinear seismic analysis of structures." Earthquake Spectra, 21(3), 685-713.
Inrisk.
(2009). "Infrastructure Risk." http://www.inrisk.ubc.ca/software.htm (May, 2009).
Koduru, S. (2008). "Performance-Based Earthquake Engineering with the First-Order Reliability Method." PhD thesis, The University of British Columbia, Vancouver.
Krawinkler, H., and Miranda, E. (2004). "Performance-based earthquake engineering." Earthquake Engineering: From Engineering Seismology to Performance-Based Engineering, Y. Bozorgnia and V. V. Bertero, eds., CRC Press, Boca Raton, 9-1 to 9-59.
Krawinkler, H., Zareian, F., Medina, R. A., and Ibarra, L. F. (2006). "Decision support for conceptual performance-based design." Earthquake Engineering and Structural Dynamics, 35(1), 115-133.
Kurama, Y. C., and Farrow, K. T. (2003). "Ground motion scaling methods for different site conditions and structure characteristics." Earthquake Engineering and Structural Dynamics, 32(15), 2425-2450.
Mehanny, S. S. F., and Deierlein, G. G. (2001). "Seismic damage and collapse assessment of composite moment frames." J. Struct. Eng., 127(9), 1045-1053.
Moehle, J. P. (2005). "Nonlinear analysis for performance-based earthquake engineering." Structural Design of Tall and Special Buildings, 14(5), 385-400.
Ozcebe, G., and Saatcioglu, M. (1987). "Confinement of concrete columns for seismic loading." ACI Struct. J., 84(4), 308-315.
Pacific Earthquake Engineering Research Center. (2009). "OpenSees Website." http://opensees.berkeley.edu (May 14th, 2009).
Rezaeian, S., and Der Kiureghian, A. (2008). "A stochastic ground motion model with separable temporal and spectral nonstationarities." Earthquake Engineering and Structural Dynamics, 37(13), 1565-1584.
SEAOC. (1995). "Vision 2000 — Performance based seismic engineering of buildings."
Watson, S., and Park, R. (1994). "Simulated seismic loads tests on reinforced concrete columns." Journal of Structural Engineering, 120(6), 1825-1849.
Williams, M. S., and Sexsmith, R. G. (1995). "Seismic damage indices for concrete structures: A state-of-the-art review."
Earthquake Spectra, 11(2), 319-349.
Yang, T. (2008). "PEER's Performance-Based Earthquake Engineering Methodology." http://peer.berkeley.edu/~yang/ATC58website/ (January, 2009).
Yang, T. Y., Moehle, J., Stojadinovic, B., and Der Kiureghian, A. (2009). "Performance evaluation of structural systems: theory and implementation." Journal of Structural Engineering, ASCE, (accepted for publication).
Yang, T. (2006). "Performance Evaluation of Innovative Steel Braced Frames." PhD thesis, University of California, Berkeley.

APPENDIX A: Procedure to Generate Additional EDPs

The process of generating new EDPs starts by assuming that the EDP matrix, X, has a joint lognormal distribution. This matrix is transformed into a normal distribution by taking the natural logarithm, producing a new matrix, Y. The sample statistics of this matrix are then calculated: the mean vector, M_Y, which contains the mean of each EDP (Equation A.1); the standard deviation matrix, D_Y, which contains the standard deviation of each EDP (Equation A.2); the correlation matrix, R_Y, which contains the correlation coefficients between EDPs (Equation A.3); and the Cholesky decomposition matrix, L_Y, a lower triangular matrix that satisfies Equation A.4.

With these statistics, new EDPs can be generated using Equation A.5, in which u is a vector of standard normal uncorrelated random variables. The new EDPs have the same distribution as the original EDP matrix and maintain the correlation between the EDPs. Figure A.1 shows a flowchart of the process (Yang 2006).

M_Y = \begin{bmatrix} \mu_{Y1} \\ \mu_{Y2} \\ \vdots \\ \mu_{Ym} \end{bmatrix} \qquad (A.1)

D_Y = \begin{bmatrix} \sigma_{Y1} & 0 & \cdots & 0 \\ 0 & \sigma_{Y2} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \sigma_{Ym} \end{bmatrix} \qquad (A.2)

R_Y = \begin{bmatrix} 1 & \rho_{Y1Y2} & \cdots & \rho_{Y1Ym} \\ \rho_{Y2Y1} & 1 & \cdots & \rho_{Y2Ym} \\ \vdots & \vdots & \ddots & \vdots \\ \rho_{YmY1} & \rho_{YmY2} & \cdots & 1 \end{bmatrix} \qquad (A.3)

R_Y = L_Y L_Y^T \qquad (A.4)

x = \exp\left( D_Y L_Y u + M_Y \right) \qquad (A.5)

Figure A.1 Generation of EDPs (flowchart: original EDP matrix X; Y = ln X; sample statistics M_Y, D_Y, L_Y; standard normal random variables u; y = D_Y L_Y u + M_Y; generated EDP data x = exp(y))

APPENDIX B: Repair Quantities

Repair item | Unit | DS1 | DS2 | DS3 | DS4
Demolition:
Finish protection | sf | 0 | 6,000 | 6,000 | 6,000
Ceiling system removal | sf | 0 | 2,000 | 3,000 | 5,000
Drywall assembly removal | sf | 0 | 800 | 800 | 6,000
Miscellaneous MEP | loc | 0 | 2 | 4 | 6
Remove exterior skin (salvage) | sf | 0 | 0 | 0 | 5,600
Repair:
Welding protection | sf | 0 | 1,500 | 1,500 | 1,500
Shore beams below and remove | loc | 0 | 0 | 0 | 12
Cut floor slab at damaged connection | sf | 0 | 70 | 150 | 1,600
Carbon arc out weld | lf | 0 | 40 | 50 | 50
Remove portion of damaged beam/column | sf | 0 | 0 | 100 | 100
Replace weld from above | lf | 0 | 40 | 40 | 40
Remove/replace connection | lb | 0 | 0 | 0 | 3,000
Replace slab | sf | 0 | 70 | 70 | 1,600
Put-back:
Miscellaneous MEP and cleanup | loc | 0 | 2 | 4 | 6
Wall framing (studs drywall tape paint) | sf | 0 | 800 | 800 | 6,000
Replace exterior skin (from salvage) | sf | 0 | 0 | 0 | 5,600
Ceiling system | sf | 0 | 2,000 | 3,000 | 5,000
Table B.1 Repair quantities for performance group 1

Repair item | Unit | DS1 | DS2 | DS3 | DS4
Demolition:
Finish protection | sf | 0 | 6,000 | 6,000 | 6,000
Ceiling system removal | sf | 0 | 3,000 | 3,000 | 5,000
Drywall assembly removal | sf | 0 | 800 | 800 | 6,000
Miscellaneous MEP | loc | 0 | 2 | 4 | 6
Remove exterior skin (salvage) | sf | 0 | 0 | 0 | 4,000
Repair:
Welding protection | sf | 0 | 1,500 | 1,500 | 1,500
Shore beams below and remove | loc | 0 | 0 | 0 | 12
Cut floor slab at damaged connection | sf | 0 | 70 | 150 | 1,600
Carbon arc out weld | lf | 0 | 40 | 50 | 50
Remove portion of damaged beam/column | sf | 0 | 0 | 100 | 100
Replace weld from above | lf | 0 | 40 | 40 | 40
Remove/replace connection | lb | 0 | 0 | 0 | 3,000
Replace slab | sf | 0 | 70 | 70 | 1,600
Put-back:
Miscellaneous MEP and cleanup | loc | 0 | 2 | 4 | 6
Wall framing (studs drywall tape paint) | sf | 0 | 800 | 800 | 6,000
Replace exterior skin (from salvage) | sf | 0 | 0 | 0 | 5,600
Ceiling system | sf | 0 | 3,000 | 3,000 | 5,000
Table B.2 Repair quantities for performance group 2

Repair item | Unit | DS1 | DS2 | DS3 | DS4
Demolition:
Finish protection | sf | 0 | 6,000 | 6,000 | 6,000
Ceiling system removal | sf | 0 | 3,000 | 3,000 | 5,000
Drywall assembly removal | sf | 0 | 800 | 800 | 6,000
Miscellaneous MEP | loc | 0 | 2 | 4 | 6
Remove exterior skin (salvage) | sf | 0 | 0 | 0 | 3,000
Repair:
Welding protection | sf | 0 | 1,500 | 1,500 | 1,500
Shore beams below and remove | loc | 0 | 0 | 0 | 12
Cut floor slab at damaged connection | sf | 0 | 70 | 150 | 1,600
Carbon arc out weld | lf | 0 | 40 | 50 | 50
Remove portion of damaged beam/column | sf | 0 | 0 | 100 | 100
Replace weld from above | lf | 0 | 40 | 40 | 40
Remove/replace connection | lb | 0 | 0 | 0 | 2,000
Replace slab | sf | 0 | 70 | 70 | 1,600
Put-back:
Miscellaneous MEP and cleanup | loc | 0 | 2 | 4 | 6
Wall framing (studs drywall tape paint) | sf | 0 | 800 | 800 | 6,000
Replace exterior skin (from salvage) | sf | 0 | 0 | 0 | 5,600
Ceiling system | sf | 0 | 3,000 | 3,000 | 5,000
Table B.3 Repair quantities for performance group 3

Repair item | Unit | DS1 | DS2 | DS3
Demolition:
Erect scaffolding | sf | 0 | 6,000 | 6,000
Remove damaged windows | sf | 0 | 3,400 | 3,400
Remove damage precast panels | sf | 0 | 0 | 8,400
Miscellaneous access | sf | 0 | 8,400 | 8,400
Put-back:
Install new windows | sf | 0 | 3,400 | 3,400
Provide new precast concrete panels | sf | 0 | 0 | 8,400
Patch and paint exterior panels | sf | 0 | 5,000 | 5,000
Miscellaneous put back | ea | 0 | 8,400 | 8,400
Site clean up | sf | 0 | 6,000 | 6,000
Table B.4 Repair quantities for performance group 4

Repair item | Unit | DS1 | DS2 | DS3
Demolition:
Erect scaffolding | sf | 0 | 6,000 | 6,000
Remove damaged windows | sf | 0 | 3,400 | 3,400
Remove damage precast panels | sf | 0 | 0 | 8,400
Miscellaneous access | sf | 0 | 8,400 | 8,400
Put-back:
Install new windows | sf | 0 | 3,400 | 3,400
Provide new precast concrete panels | sf | 0 | 0 | 8,400
Patch and paint exterior panels | sf | 0 | 5,000 | 5,000
Miscellaneous put back | ea | 0 | 8,400 | 8,400
Site clean up | sf | 0 | 6,000 | 6,000
Table B.5 Repair quantities for performance group 5

Repair item | Unit | DS1 | DS2 | DS3
Demolition:
Erect scaffolding | sf | 0 | 6,000 | 6,000
Remove damaged windows | sf | 0 | 3,400 | 3,400
Remove damage precast panels | sf | 0 | 8,400 | 8,400
Miscellaneous access | sf | 0 | 8,400 | 8,400
Put-back:
Install new windows | sf | 0 | 3,400 | 3,400
Provide new precast concrete panels | sf | 0 | 0 | 8,400
Patch and paint exterior panels | sf | 0 | 5,000 | 5,000
Miscellaneous put back | ea | 0 | 8,400 | 8,400
Site clean up | sf | 0 | 6,000 | 6,000
Table B.6 Repair quantities for performance group 6

Repair item | Unit | DS1 | DS2 | DS3
Demolition:
Finish protection | sf | 0 | 5,000 | 10,000
Remove furniture | sf | 0 | 5,000 | 10,000
Carpet and rubber base removal | sf | 0 | 0 | 10,000
Drywall construction removal | sf | 0 | 0 | 10,000
Door and frame removal | ea | 0 | 8 | 8
Interior glazing removal | sf | 0 | 100 | 100
Ceiling system removal | sf | 0 | 0 | 5,000
MEP removal | sf | 0 | 0 | 1,000
Remove casework | lf | 0 | 0 | 200
Construction:
Drywall construction/paint | sf | 0 | 0 | 10,000
Doors and frames | ea | 0 | 8 | 25
Interior glazing | sf | 0 | 100 | 400
Carpet and rubber base | sf | 0 | 0 | 10,000
Patch and paint interior partitions | sf | 0 | 5,000 | 5,000
Replace ceiling system | sf | 0 | 0 | 5,000
MEP replacement | sf | 0 | 0 | 1,000
Replace casework | lf | 0 | 0 | 200
Table B.7 Repair quantities for performance group 7

Repair item | Unit | DS1 | DS2 | DS3
Demolition:
Finish protection | sf | 0 | 5,000 | 10,000
Remove furniture | sf | 0 | 5,000 | 10,000
Carpet and rubber base removal | sf | 0 | 0 | 10,000
Drywall construction removal | sf | 0 | 0 | 10,000
Door and frame removal | ea | 0 | 8 | 8
Interior glazing removal | sf | 0 | 100 | 100
Ceiling system removal | sf | 0 | 0 | 5,000
MEP removal | sf | 0 | 0 | 1,000
Remove casework | lf | 0 | 0 | 200
Construction:
Drywall construction/paint | sf | 0 | 0 | 10,000
Doors and frames | ea | 0 | 8 | 25
Interior glazing | sf | 0 | 100 | 400
Carpet and rubber base | sf | 0 | 0 | 10,000
Patch and paint interior partitions | sf | 0 | 5,000 | 5,000
Replace ceiling system | sf | 0 | 0 | 5,000
MEP replacement | sf | 0 | 0 | 1,000
Replace casework | lf | 0 | 0 | 200
Table B.8 Repair quantities for performance group 8

Repair
quantity Repair item Unit DS1 DS2 DS3 Demolition Finish protection sf 0 5,000 10,000 Remove furniture sf 0 5,000 10,000 Carpet and rubber base removal sf 0 0 10,000 Drywall construction removal sf 0 0 10,000 Door and frame removal ea 0 8 8 Interior glazing removal sf 0 100 100 Ceiling system removal sf 0 0 5,000 MEP removal sf 0 0 1,000 Remove casework lf 0 0 200 Construction Drywall construction/paint sf 0 0 10,000 Doors and frames ea 0 8 25 Interior glazing sf 0 100 400 Carpet and rubber base sf 0 0 10,000 Patch and paint interior partitions sf 0 5,000 5,000 Replace ceiling system sf 0 0 5,000 MEP replacement sf 0 0 1,000 Replace casework lf 0 0 200 Table B.9 Repair quantities for performance group 9 93 Repair quantity Repair item Unit DS1 DS2 DS3 DS4 Clean-up Water damage sf 0 0 10,000 20,000 Demolition Finish protection sf 0 4,000 10,000 20,000 Remove furniture sf 0 4,000 10,000 20,000 Ceiling system removal sf 0 0 0 20,000 MEP removal sf 0 0 500 2,000 Construction Replace ceiling tiles sf 0 2,500 8,000 8,000 Replace ceiling system sf 0 0 0 20,000 MEP replacement sf 0 0 500 2,000 Table B.10 Repair quantities for performance group 10 Repair quantity Repair item Unit DS1 DS2 DS3 DS4 Clean-up Water damage sf 0 0 10,000 20,000 Demolition Finish protection sf 0 4,000 10,000 20,000 Remove furniture sf 0 4,000 10,000 20,000 Ceiling system removal sf 0 0 0 20,000 MEP removal sf 0 0 500 2,000 Construction Replace ceiling tiles sf 0 2,500 8,000 8,000 Replace ceiling system sf 0 0 0 20,000 MEP replacement sf 0 0 500 2,000 Table B.11 Repair quantities for performance group 11 Repair quantity Repair item Unit DS1 DS2 DS3 DS4 Clean-up Water damage sf 0 0 10,000 20,000 Demolition Finish protection sf 0 4,000 10,000 20,000 Remove furniture sf 0 4,000 10,000 20,000 Ceiling system removal sf 0 0 0 20,000 MEP removal sf 0 0 500 2,000 Construction Replace ceiling tiles sf 0 2,500 8,000 8,000 Replace ceiling system sf 0 0 0 20,000 MEP replacement sf 0 0 500 2,000 Table B.12 Repair 
quantities for performance group 12 94 Repair quantity Repair item Unit DS1 DS2 DS3 DS4 Clean-up Office papers books sf 0 0 10,000 10,000 Office equipment sf 0 5,000 10,000 10,000 Loose furniture/file drawers sf 0 10,000 20,000 20,000 Contents Conventional office sf 0 0 0 20,000 Table B.13 Repair quantities for performance group 13 Repair quantity Repair item Unit DS1 DS2 DS3 DS4 Clean-up Office papers books sf 0 0 10,000 10,000 Office equipment sf 0 5,000 10,000 10,000 Loose furniture/file drawers sf 0 10,000 20,000 20,000 Contents Conventional office sf 0 0 0 20,000 Table B.14 Repair quantities for performance group 14 Repair quantity Repair item Unit DS1 DS2 DS3 DS4 Clean-up Office papers books sf 0 0 10,000 10,000 Office equipment sf 0 5,000 10,000 10,000 Loose furniture/file drawers sf 0 10,000 20,000 20,000 Contents Conventional office sf 0 0 0 20,000 Table B.15 Repair quantities for performance group 15 Repair quantity Repair item Unit DS1 DS2 DS3 Clean-up Loose furniture/file drawers sf 0 0 50,000 Roof-top MEP Repair in place ls 0 1 1 Remove and replace ls 0 0 1 Table B.16 Repair quantities for performance group 16 95
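The EDP-generation procedure described in Appendix A (logarithmic transform, sample statistics, Cholesky decomposition of the correlation matrix, and back-transformation) can be sketched in a few lines of NumPy. This is a minimal illustration, not the thesis implementation; the function name generate_edps is hypothetical, and the sketch assumes the sample correlation matrix is positive definite so that the Cholesky factorization exists.

```python
import numpy as np

def generate_edps(X, n_new, seed=0):
    """Generate n_new EDP realizations that preserve the joint lognormal
    distribution of the original EDP matrix X (rows = simulations,
    columns = EDPs), following Equations A.1-A.5."""
    rng = np.random.default_rng(seed)
    Y = np.log(X)                          # Y is jointly normal under the lognormal assumption
    M_Y = Y.mean(axis=0)                   # mean vector (Eq. A.1)
    D_Y = np.diag(Y.std(axis=0, ddof=1))   # diagonal standard deviation matrix (Eq. A.2)
    R_Y = np.corrcoef(Y, rowvar=False)     # correlation matrix (Eq. A.3)
    L_Y = np.linalg.cholesky(R_Y)          # lower triangular, R_Y = L_Y L_Y^T (Eq. A.4)
    # Draw uncorrelated standard normal vectors u, one per new realization,
    # and apply x = exp(D_Y L_Y u + M_Y) row-wise (Eq. A.5).
    u = rng.standard_normal((n_new, Y.shape[1]))
    return np.exp(u @ (D_Y @ L_Y).T + M_Y)
```

Because the transformation is linear in the normal space, the generated samples reproduce the sample means, standard deviations, and pairwise correlations of ln(X), which is exactly the property the procedure is designed to preserve.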
Item metadata (cIRcle, UBC's digital repository):
Type: Thesis/Dissertation (Text)
Title: Comparison of performance based engineering approaches
Degree grantor: University of British Columbia
Program: Civil Engineering (Graduate)
Publisher: Vancouver : University of British Columbia Library
Issue date: 2009-11
Language: eng
DOI: 10.14288/1.0063147
Handle: http://hdl.handle.net/2429/12261
License: Attribution-NonCommercial-NoDerivatives 4.0 International, http://creativecommons.org/licenses/by-nc-nd/4.0/