Performance-based seismic design using designed experiments and neural networks. Zhang, Jiansen (2003)

PERFORMANCE-BASED SEISMIC DESIGN USING DESIGNED EXPERIMENTS AND NEURAL NETWORKS by JIANSEN ZHANG, B.Sc., Wuhan University of Technology, 1990; M.Sc., Wuhan University of Technology, 1993. A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY in THE FACULTY OF GRADUATE STUDIES, Department of Civil Engineering. We accept this thesis as conforming to the required standard. THE UNIVERSITY OF BRITISH COLUMBIA, April 2003. © Jiansen Zhang, 2003

In presenting this thesis in partial fulfilment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the head of my department or by his or her representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission. Department of Civil Engineering, The University of British Columbia, Vancouver, Canada.

ABSTRACT

There are many uncertainties involved in the seismic design process. Such factors as earthquake ground motions, variability of structural geometries and material properties, and approximations in the analytical model all contribute to the possibility of non-performance of the structure. Therefore, reliability methods are applied in structural engineering to assess structural performance. However, seismic reliability assessment may necessitate a large number of performance function evaluations, each requiring a nonlinear dynamic structural analysis, which is a formidable, if not impossible, task. In performance-based seismic design, a set of design parameters must be found to meet the associated target reliability levels for different performance objectives. This is conventionally achieved by trial and error using repeated forward reliability analysis, which is inefficient. Hence, it is desirable to develop an efficient and effective procedure that can reduce the colossal computational effort, making seismic reliability assessment and performance-based design tractable. This study has explored for the first time applications of Design of Computer Experiments and Artificial Neural Networks for seismic reliability analysis, as well as performance-based seismic design, taking into account structural nonlinear dynamic behavior and all the major uncertainties involved. Experimental design is utilized to construct response databases for neural network learning. Neural networks act as a surrogate of the computer program, improving computational efficiency by approximating structural responses. Case studies have been carried out to demonstrate the applicability and efficiency of the proposed methods in seismic reliability assessment and performance-based seismic design.
TABLE OF CONTENTS

ABSTRACT
TABLE OF CONTENTS
LIST OF TABLES
LIST OF FIGURES
NOTATIONS AND ABBREVIATIONS
ACKNOWLEDGMENTS

CHAPTER 1 INTRODUCTION
1.1 General
1.2 Review of Previous Work
1.2.1 Synthesis of artificial ground motions
1.2.2 Design of computer experiments
1.2.3 Approximation models
1.2.4 Performance-based design
1.3 Objectives of The Research
1.4 Thesis Outline

CHAPTER 2 GENERATION OF ARTIFICIAL GROUND MOTIONS
2.1 Introduction
2.2 Review of Ground Motion Simulation Methods
2.2.1 Empirical Green's function method
2.2.2 Spectral representation method
2.2.3 Frequency-wave number power spectra
2.2.4 Autoregressive moving average (ARMA) model
2.2.5 Wavelet transform method
2.2.6 Neural network model
2.3 Generation of Non-stationary Ground Motion
2.3.1 Determination of ground motion spectral characteristics
2.3.2 Generation of a stationary process
2.3.3 Selection of modulation function
2.3.4 Generation of a non-stationary artificial ground motion
2.3.5 Baseline correction
2.4 Simulation of Ground Motion Compatible With Response Spectrum
2.5 Summary and Discussion

CHAPTER 3 DESIGN OF COMPUTER EXPERIMENTS METHODOLOGY
3.1 Introduction
3.2 Review of Methods for Design of Computer Experiments
3.2.1 Central composite design
3.2.2 Latin hypercube design
3.2.3 Uniform design
3.2.4 Low discrepancy sequence design
3.2.4.1 Hammersley sequence design
3.2.4.2 Halton sequence design
3.3 Experimental Design Implementation in This Study
3.3.1 Grid design
3.3.2 Grid-based optimal design
3.3.3 Optimized Latin hypercube design
3.4 Summary and Discussion

CHAPTER 4 ARTIFICIAL NEURAL NETWORKS THEORY AND IMPLEMENTATION
4.1 Introduction
4.2 Multilayer Backpropagation Neural Networks
4.2.1 General
4.2.2 Artificial neural model
4.2.3 Network architecture
4.2.4 Training strategies
4.2.4.1 Backpropagation algorithm
4.2.4.2 Other training algorithms
4.2.5 Performance evaluation
4.2.6 Neural networks implementation in this study
4.2.6.1 Data preparation
4.2.6.2 Topology of the network
4.2.6.3 Training
4.3 Radial Basis Function Networks
4.3.1 General
4.3.2 Radial basis function network training
4.3.3 Radial basis function networks implementation in this study
4.4 Summary and Discussion

CHAPTER 5 PERFORMANCE-BASED SEISMIC DESIGN METHODOLOGY
5.1 Introduction
5.2 Performance-based Seismic Design
5.2.1 Multiple performance objectives in SEAOC Vision 2000
5.2.2 Performance-based seismic design criteria in this study
5.2.2.1 Multiple seismic hazard levels
5.2.2.2 Multiple performance objectives
5.2.2.3 Structural analysis approach
5.2.2.4 Seismic design criteria
5.3 Implementation of Performance-based Seismic Design
5.3.1 Reliability and performance-based seismic design
5.3.2 Performance-based seismic design using neural networks
5.4 Summary and Discussion

CHAPTER 6 SEISMIC RELIABILITY ANALYSES: CASE STUDIES
6.1 Introduction
6.2 Description of The Nonlinear Dynamic Analysis Program
6.3 Case study 1: A Two-story Reinforced Concrete Plane Frame
6.3.1 Description of the structure and ground motion
6.3.2 Construction of the response databases
6.3.3 Reliability assessment
6.3.4 Sensitivity analysis
6.4 Case study 2: A Tall Reinforced Concrete Frame
6.4.1 Description of the structure
6.4.2 Construction of the response databases
6.4.3 Reliability assessment
6.4.3.1 Neural networks training
6.4.3.2 Two levels of design earthquakes
6.4.3.3 Reliability assessment for serviceability limit state
6.4.3.4 Reliability assessment for ultimate limit state
6.5 Case study 3: A Bridge Bent Without or With Seismic Isolation
6.5.1 Description of the structure
6.5.2 Construction of the response databases
6.5.3 Reliability assessment
6.5.3.1 Neural networks training
6.5.3.2 Two levels of earthquakes
6.5.3.3 Reliability assessment for serviceability limit state
6.5.3.4 Reliability assessment for ultimate limit state
6.6 Case study 4: A Wood Shear Wall
6.6.1 Description of the structure
6.6.2 Random variables
6.6.3 Performance evaluation
6.7 Case study 5: An Instrumented Structure for Earthquake Response Measurement
6.7.1 Description of the structure
6.7.2 Ground motions
6.7.3 Random variables
6.7.4 Performance evaluation
6.7.4.1 The structure before seismic retrofit
6.7.4.2 The structure after seismic retrofit
6.8 Reliability Assessment: Summary and Conclusions

CHAPTER 7 PERFORMANCE-BASED SEISMIC DESIGN APPLICATIONS
7.1 Introduction
7.2 A Two-story Reinforced Concrete Frame
7.2.1 Description of the structure
7.2.2 Random variables
7.2.3 Performance-based design formulation
7.2.4 Results
7.3 A Tall Reinforced Concrete Building
7.3.1 Description of the structure
7.3.2 Random variables
7.3.3 Performance-based design formulation
7.3.4 Results
7.4 A Bridge Bent Without or With Seismic Isolation
7.4.1 Description of the structure
7.4.2 Bridge bent without seismic isolation
7.4.2.1 Random variables
7.4.2.2 Performance-based design for serviceability limit state
7.4.2.3 Performance-based design for ultimate limit state
7.4.3 Bridge bent with seismic isolation
7.4.3.1 Random variables
7.4.3.2 Performance-based design for serviceability limit state
7.4.3.3 Performance-based design for ultimate limit state
7.5 A Wood Shear Wall
7.5.1 Description of the structure
7.5.2 Random variables
7.5.3 Performance-based design
7.6 Summary

CHAPTER 8 SUMMARY AND FUTURE WORK
8.1 Summary
8.2 Future Work

REFERENCES

Appendix A Database for the two-story reinforced concrete frame
Appendix B Database for the tall reinforced concrete building
Appendix C Database of the bridge without or with seismic isolation
Appendix D Response database of the wood shear wall
Appendix E Response database of the Holiday Inn
Appendix F Reliability index database for the two-story reinforced concrete building
Appendix G Reliability index database for the tall building
Appendix H Reliability index database for the bridge bent with isolation

LIST OF TABLES

Table 2.1 Kanai-Tajimi spectrum parameters
Table 2.2 Clough-Penzien filter parameters
Table 2.3 Seismic coefficients Ca and Cv
Table 3.1 Central Composite Design for three variables
Table 3.2 Random Latin Hypercube Design for two variables
Table 3.3 A Uniform Design for two variables with 21 levels
Table 3.4 A Hammersley Sequence Design for two variables
Table 3.5 A Halton Sequence Design for two variables
Table 3.6 A Grid Design for three variables
Table 3.7 An Optimized Latin Hypercube Design for two variables
Table 5.1 Performance level definitions
Table 5.2 Performance objectives
Table 6.1 Case study 1: Ground motion parameters distribution and statistics
Table 6.2 Case study 1: Ground motion parameter combinations
Table 6.3 Case study 1: Input variable bounds
Table 6.4 Case study 1: Neuron numbers and neural network RMSREs
Table 6.5 Case study 1: Neural networks training relative error statistics
Table 6.6 Case study 1: Input variable probability distributions and statistics
Table 6.7 Case study 1: Reliability indices for collapse prevention limit state
Table 6.8 Case study 1: Reliability indices for life safety limit state
Table 6.9 Case study 1: Reliability indices for functionality limit state
Table 6.10 Case study 1: Variation of reliability index with statistical parameters
Table 6.11 Case study 2: Input variable bounds
Table 6.12 Case study 2: Neuron numbers and neural network RMSREs
Table 6.13 Case study 2: Neural networks training relative error statistics
Table 6.14 Case study 2: Input variable probability distributions and statistics (Serviceability limit state)
Table 6.15 Case study 2: Reliability index for serviceability limit state
Table 6.16 Case study 2: Input variable probability distributions and statistics (Ultimate limit state)
Table 6.17 Case study 2: Reliability index for ultimate limit state
Table 6.18 Case study 3: Input variable bounds
Table 6.19 Case study 3: Neuron numbers and neural network RMSREs (without isolation)
Table 6.20 Case study 3: Neuron numbers and neural network RMSREs (with isolation)
Table 6.21 Case study 3: Neural networks training relative error statistics
Table 6.22 Case study 3: Input variable probability distributions and statistics (Serviceability limit state)
Table 6.23 Case study 3: Reliability index for serviceability limit state without isolation
Table 6.24 Case study 3: Reliability index for serviceability limit state with isolation
Table 6.25 Case study 3: Input variable probability distributions and statistics (Ultimate limit state)
Table 6.26 Case study 3: Random variable probability distributions and statistics
Table 6.27 Case study 3: Reliability index for ultimate limit state without isolation
Table 6.28 Case study 3: Reliability index for ultimate limit state with isolation
Table 6.29 Case study 3: Random variable probability distributions and statistics
Table 6.30 Case study 3: Reliability index for ultimate limit state with isolation
Table 6.31 Case study 4: Input variable probability distributions and statistics
Table 6.32 Case study 4: Neuron numbers and neural network RMSREs
Table 6.33 Case study 4: Neural networks training relative error statistics
Table 6.34 Case study 4: Reliability index for wood shear wall
Table 6.35 Case study 5: Input variable bounds (before retrofit)
Table 6.36 Case study 5: Input variable bounds (after retrofit)
Table 6.37 Case study 5: Input variable probability distribution and statistics
Table 6.38 Case study 5: Neuron numbers and neural network RMSREs (before retrofit)
Table 6.39 Case study 5: Neural networks training relative error statistics (before retrofit)
Table 6.40 Case study 5: Serviceability reliability indices (before retrofit)
Table 6.41 Case study 5: Life safety reliability indices (before retrofit)
Table 6.42 Case study 5: Neuron numbers and neural network RMSREs (after retrofit)
Table 6.43 Case study 5: Neural networks training relative error statistics (after retrofit)
Table 7.1 Reinforced concrete plane frame: Input variable probability distribution and statistics
Table 7.2 Tall building: Input variable probability distributions and statistics
Table 7.3 Bridge bent: Input variable probability distributions and statistics (without seismic isolation)
Table 7.4 Bridge bent: Input variable probability distributions and statistics (with seismic isolation)
Table 7.5 Wood shear wall: Input variable probability distributions and statistics

LIST OF FIGURES

Figure 2.1 A schematic diagram of the Empirical Green's Function method
Figure 2.2 Comparison of Kanai-Tajimi, Clough-Penzien and sine-square spectrum
Figure 2.3 Jennings modulation function
Figure 2.4 Hsu & Bernard modulation function
Figure 2.5 Artificial ground motion generated using Amin & Ang modulation function
Figure 2.6 Artificial ground motion generated using Hsu modulation function
Figure 2.7 Artificial ground motion with two strong components
Figure 2.8(a) Acceleration time history before baseline correction
Figure 2.8(b) Velocity time history before baseline correction
Figure 2.8(c) Displacement time history before baseline correction
Figure 2.8(d) Acceleration time history after baseline correction
Figure 2.8(e) Velocity time history after baseline correction
Figure 2.8(f) Displacement time history after baseline correction
Figure 2.9 Flowchart to generate response spectrum compatible ground motion time history
Figure 2.10 UBC design response spectrum
Figure 2.11 UBC design spectrum compatible artificial ground motion accelerogram
Figure 3.1 A Random Latin Hypercube Design for two variables
Figure 3.2 A Uniform Design for two variables with 21 levels
Figure 3.3 A Hammersley Sequence Design for two variables
Figure 3.4 A Halton Sequence Design for two variables
Figure 3.5 A Grid-based Optimal Design for two variables
Figure 3.6 An Optimized Latin Hypercube Design for two variables
Figure 4.1 A schematic diagram of an artificial neuron
Figure 4.2 Transfer function
Figure 4.3 A typical Multilayer Backpropagation Neural Network
Figure 4.4 A schematic Radial Basis Function Network
Figure 5.1 SEAOC Vision 2000 performance levels
Figure 6.1 Reinforced concrete plane frame geometry
Figure 6.2 Cross peak trilinear hysteresis model CP3
Figure 6.3 Geometry of tall building
Figure 6.4 CANNY trilinear hysteresis model
Figure 6.5 Bridge bent with isolation
Figure 6.6 Modulation function
Figure 6.7 Degrading bilinear model
Figure 6.8 Variation of reliability index with Br (mm)
Figure 6.9 Variation of reliability index with isolator width mean Br
Figure 6.10 Wood shear wall construction
Figure 6.11 Variation of reliability index with respect to e1
Figure 6.12 Variation of reliability index with respect to e2
Figure 6.13 A typical floor plan of Holiday Inn
Figure 6.14 Holiday Inn earthquake ground motion, Northridge 1994: (a) Longitudinal accelerogram; (b) Transverse accelerogram; (c) Vertical accelerogram; (d) Rotational accelerogram
Figure 6.15 Variation of reliability index with respect to ground motion components: (a) with respect to Agx; (b) with respect to Agy; (c) with respect to Agz; (d) with respect to Agr
Figure 6.16 Variation of reliability index with respect to damper mean area Ad (Serviceability limit state)
Figure 6.17 Variation of reliability index with respect to damper mean area Ad (Life safety limit state)
Figure 6.18 Variation of reliability index with respect to ground motion components: (a) with respect to Agx; (b) with respect to Agy; (c) with respect to Agz; (d) with respect to Agr

NOTATIONS AND ABBREVIATIONS

NOTATIONS

‖·‖ vector Euclidean norm
A0 story acceleration limit
aa design earthquake peak ground acceleration
Ad hysteretic damper cross sectional area
Ag peak ground acceleration
Agx peak ground acceleration in the longitudinal direction
Agy peak ground acceleration in the transverse direction
Agz peak ground acceleration in the vertical direction
Agr peak ground acceleration in the rotational direction on a horizontal plane
Amax maximum story acceleration
ai autoregression coefficient
a(t) ground acceleration time history
A(ω) Fourier transform of ground acceleration time history
bi moving average coefficient
Br isolator width
c center of radial basis function
Ca UBC (Uniform Building Code) design spectrum parameter
Cv UBC design spectrum parameter
C(Xd) cost function in terms of design parameter vector Xd
d(t) ground displacement time history
D diameter of a column
D(ω) Fourier transform of ground displacement time history
ei nail spacing of wood shear wall
E(.) expectation
Ec concrete modulus of elasticity
EC error criterion
f(.) transfer function
f(t) modulation function
f'c concrete compressive strength
fy steel yield strength
G performance function
hg ground damping ratio
hf damping ratio of Clough-Penzien filter
Hi neural network hidden neuron i output
Hl minimum number of hidden layer neurons
Hu maximum number of hidden layer neurons
h(ω) high-pass Butterworth filter
kx wave number in x-direction
ky wave number in y-direction
M mass; earthquake magnitude; member bending moment
Mb beam moment capacity to prevent lateral buckling
Mp probable moment capacity
My member flexural yield moment
Mu ultimate moment capacity
N axial load on a column
Nb buckling capacity of a column
Ntrain number of training samples
Ntest number of testing samples
Nw number of network weights
P random variable with uniform distribution over the interval [0,1]
P number of examples
q uniformly distributed load
Q concentrated load
r radius of radial basis function
R seismic reduction factor
Rn standard normal variable
Ruu(ξ,η,τ) autocorrelation function
s dimensionality
S0 power spectrum density of the white noise
Sa pseudo-acceleration response spectrum
SKT(ω) Kanai-Tajimi power spectrum density function
SSEtrain sum square error over training dataset
SSEtest sum square error over testing dataset
Suu(kx,ky,ω) frequency-wave number spectrum
Td earthquake duration
Ti neural network target output i
U normalized input
V shear force
V0 story velocity limit
Vmax maximum story velocity
Vp probable shear capacity
Vu ultimate shear capacity
Vi,max maximum i-th story shear force
Vi,y i-th story shear capacity
v(t) ground velocity time history
V(ω) Fourier transform of ground velocity time history
Wj j-th weight of RBF network
w(t) generated stationary process
Wji weight connecting neuron j with preceding neuron i
Wk noise value at time kδt
Xd design parameter vector
Xi neural network input i
Xl lower bound of variable X
Xu upper bound of variable X
Yi neural network output i
Zk variable value at time kδt
z seismic zone factor
α momentum factor
β reliability index
βk target reliability index corresponding to the k-th performance objective
βk(Xd) calculated reliability index corresponding to the k-th performance objective
δt time increment
Δ displacement
Δω frequency increment
ΔW weight change
Δu roof displacement when a kinematical mechanism forms
Δy roof displacement when the first beam plastic hinge forms
εk neural network relative error for the k-th sample
φk random phase angle
φ(x) radial basis function
Φ design matrix
γ Euler constant
η learning rate
μ(.) mean value of a certain random variable
μΔ global displacement ductility
μθ local rotational ductility
μΔ structural displacement ductility capacity
μθ member rotational ductility capacity
ν earthquake arrival rate
ω frequency
ωc corner frequency
ωg predominant ground frequency
ωf fundamental frequency of Clough-Penzien filter
ωl lower bound of circular frequency
ωu upper bound of circular frequency
θ story drift ratio
θ0 story drift ratio limit
θu ultimate plastic rotation
θy rotation at the yield moment
σ(.) standard deviation of a certain random variable
τ time increment
objective function

ABBREVIATIONS

ANN Artificial neural networks
BSSC Building Seismic Safety Council
CCD Central composite design
COV Coefficient of variation
DOE Design of experiments
FEMA Federal Emergency Management Agency
FFT Fast Fourier transform
FORM First order reliability method
HSD Hammersley sequence design
IFFT Inverse fast Fourier transform
IS Importance sampling
ISO International Standard Organisation
LHD Latin hypercube design
LI Local interpolation
LRB Lead rubber bearing
MCS Monte Carlo simulation
MLP Multilayer perceptron
NBCC National Building Code of Canada
NEHRP National Earthquake Hazard Reduction Program
OA Orthogonal array
OLHD Optimal Latin hypercube design
OSB Oriented strand board
PGA Peak ground acceleration
RBFN Radial basis function network
RMSRE Root mean square relative error
SEAOC Structural Engineers Association of California
UBC Uniform building code
UD Uniform design

ACKNOWLEDGEMENTS

I am deeply indebted to my supervisor, Professor Ricardo O. Foschi, who suggested this project. I am extremely grateful to him for the endless help and support he has given me, and for the time and countless efforts in advising me during different stages of this thesis. His enthusiasm and encouragement, his profound knowledge and spirit of scientific exploration, have contributed significantly to the fulfillment of my academic goals, and will benefit me in my future pursuits. I would like to thank the whole department, with whom I had the pleasure of carrying out my research in an excellent environment. I am particularly grateful to the supervisory committee members, whose help has been invaluable; discussions with them were always very inspiring. I would like to thank all the professors during my graduate studies at UBC, for their help and the valuable knowledge they have shared with me. I would like to express my gratitude to Dr. Hong Li for his encouragement and help throughout my studies, and for the numerous discussions we had regarding various academic issues. I also would like to thank my wife, Lei Zhou, for her understanding and support during all the hard times I went through; and my son, William M. Zhang, who was born during my studies, and whose birth is really a blessing to me and an inexhaustible source of motivation for me.
Last, but far from least, I would like to thank my parents to whom this thesis is dedicated, for the many sacrifices they have made to help me learn. Their unconditional love, emotional support and encouragement have always been the driving force in pursuing my life goals. I shall be always grateful. XV CHAPTER 1 INTRODUCTION CHAPTER 1 INTRODUCTION 1.1 General Structural design has been traditionally based on deterministic analysis. However, uncertainties and randomness associated with loads, environment, materials, analysis models, structural details, construction workmanship and quality control, service inspection and maintenance, all contribute to a small probability that the structure will not perform as intended. Therefore, all these uncertainties and randomness should be taken into consideration, in the framework of probability or reliability-based design, to assure a sufficient safety level in design. In earthquake engineering, in particular, due to a multitude of random variables relating to ground motions, material and geometric non-linearity as well as analytical models, the behavior responses are extremely difficult to predict accurately during a strong earthquake motion. Structural reliability must be studied in the context of probabilistic seismic risk assessment, incorporating probabilistic hazard analysis, reliability analyses of components and system, and system risk evaluation. Among them, the crucial step is component and system reliability analyses, which are generally based on simulation techniques such as Monte Carlo simulation and its variants, since the structural responses are implicit functions of the intervening random variables. However, because the probability of failure is small, computer simulation entails a large number of performance function evaluations, each requiring execution of a nonlinear dynamic analysis program. Hence, the resulting simulation process may be time-consuming or even computationally prohibitive, which makes it non-feasible in most practical situations. 1 CHAPTER 1 INTRODUCTION In order to circumvent the computational difficulty, researchers have been studying various procedures to deal with implicit performance functions and attempting to expedite numerical simulation in reliability analysis of structures. The methods available can be categorized into three types: (1) Monte Carlo simulation with variance reduction techniques, (2) response surface methodology, (3) sensitivity-based probabilistic finite element analysis. Monte Carlo simulation variants, such as importance sampling, adaptive sampling and directional simulation, improve simulation efficiency by variance reduction techniques. They need estimation of the most probable failure point, which is unknown in advance. Response surface methods consist of approximating the actual performance function with analytical expressions, usually second order polynomials, fitted to selected values of the performance function in the neighborhood of the most likely failure point. The response surface is then used for failure probability estimation by means of routine reliability analysis approaches. Although it is conceptually simple and easy to implement, the response surface method could result in many iterations, for it can only approximate the performance function accurately in vicinity of the most likely failure point. 
Moreover, as the number of variables increases, it suffers so-called "curse of dimensionality", ie, the number of coefficients grows exponentially with the number of variables. Sensitivity analysis can be used to identify important variables, those having a greater influence on structural reliability, thus saving a certain amount of computational effort. Nevertheless, it is based on perturbation and is only accurate when the input variables have small variability, and requires many repetitions of deterministic analyses. Artificial intelligence and machine learning techniques have been developing very rapidly in recent years, and provide robust and effective tools for structural reliability analysis. The 2 CHAPTER 1 INTRODUCTION learning machines have the ability to adapt to the environment and learn from their experience. Although they have been studied extensively and applied successfully in computer science, electrical engineering, econometrics and other fields, their potential capabilities have hardly been exploited for structural reliability analysis. In this thesis, artificial intelligence and machine learning will be explored for seismic reliability assessment and performance-based seismic design. Case studies will be carried out to demonstrate their applicability and efficiency. 1.2 Review of Previous Work Earthquake engineering has witnessed great progress in the past fifty years. Structures designed in conformance with seismic codes of practice have generally exhibited satisfactory performance during recent major earthquakes. However, much still need to be done to further improve structural performance and mitigate seismic damage in future earthquakes. The following gives a brief overview of the research work relevant to this study. 1.2.1 Synthesis of artificial ground motions In earthquake resistant design or research, appropriate historic earthquake acceleration recordings may not be available, and artificial ground accelerograms may be required when a structural dynamic time history analysis is performed. A number of methods have been proposed to characterize earthquake ground acceleration. Spectral representation-based algorithms were widely used by engineers for generating artificial earthquake ground motions (Kanai, 1957; Tajimi, 1960; Shinozuka and Deodatis, 1988; Hwang and Huo, 1994; and Deodatis, 1996). Seismic ground motion had also been simulated by stochastic wave 3 CHAPTER 1 INTRODUCTION representation method, which took the spatial variation of ground motion into account (Deodatis et al, 1990). Another approach was Autoregressive Moving Average (ARMA) models (Polhermus and Cakmak, 1981; Chang et al, 1982; Olafsson and Sigbjornsson, 1995; Spanos and Zeldin, 1996). Geophysicists and seismologists usually use the Empirical Green's Function method for predicting target strong earthquake motions, superposing the records of small events and using them as Green's functions by considering the differences in stress drop, wave attenuation in the media, and radiation patterns between large and small events (Hartzell, 1978; Hadley and Helmberger, 1980; Hutchings, 1991, 1994; Haddon, 1996). In addition, some novel approaches have been tried successfully. Wavelet transform was applied for analysis and simulation of earthquake ground motions (Iyama and Kawamura, 1999). Neural networks were successfully employed for generation of spectrum-compatible accelerograms (Ghaboussi and Lin, 1998). 
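The surrogate-plus-simulation idea reviewed above can be made concrete with a small sketch. The following Python fragment is illustrative only: the limit state function, distributions and sample sizes are invented for the example and are not taken from this thesis. It fits a second order polynomial response surface to a handful of evaluations of an expensive performance function and then estimates the failure probability by Monte Carlo simulation on the cheap surrogate.

```python
# Illustrative sketch (not the thesis implementation): quadratic response surface fitted to a
# few expensive performance-function evaluations, then Monte Carlo on the surrogate to
# estimate the failure probability P[G(X) < 0].
import numpy as np

def expensive_G(x):
    # Stand-in for a nonlinear dynamic analysis; here a simple analytic limit state.
    capacity, demand = x[0], x[1]
    return capacity - demand**2 / 10.0

def quadratic_features(X):
    # Full second-order polynomial basis: 1, x_i, and x_i*x_j (including squares).
    n, d = X.shape
    cols = [np.ones(n)]
    cols += [X[:, i] for i in range(d)]
    cols += [X[:, i] * X[:, j] for i in range(d) for j in range(i, d)]
    return np.column_stack(cols)

rng = np.random.default_rng(0)
X_design = rng.normal([5.0, 3.0], [0.5, 0.6], size=(30, 2))   # experimental design points
y_design = np.array([expensive_G(x) for x in X_design])        # expensive evaluations
beta, *_ = np.linalg.lstsq(quadratic_features(X_design), y_design, rcond=None)

X_mc = rng.normal([5.0, 3.0], [0.5, 0.6], size=(200_000, 2))   # cheap simulation on surrogate
G_hat = quadratic_features(X_mc) @ beta
print("estimated failure probability:", np.mean(G_hat < 0.0))
```

As the text notes, the number of polynomial coefficients in quadratic_features grows quadratically with the number of variables, which is one form of the dimensionality problem that motivates the neural network surrogates used in this thesis.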
1.2.2 Design of computer experiments In engineering, models are used for problem formulation and solution. Mechanistic models are based on well-established engineering knowledge. However, when such knowledge is not available, empirical models have to be built relating the input variables (predictor variables) to the output variables (response variables) based on observed data. They are referred to as response surfaces in statistics, usually in the form of second order polynomials. In order to construct a response surface, a certain number of representative input vectors have to be selected, which are the subject of experimental design. Central composite design (Box and Wilson, 1951) is the widely used, almost standard, classical experimental design method for building second order polynomials response surface. Latin hypercube sampling (McKay et 4 CHAPTER 1 INTRODUCTION al, 1979) was the first method introduced for computer experimental design. Later, a number of improved methods were proposed, such as optimal Latin hypercube design (Park, 1994; Morris and Mitchell, 1995), Orthogonal array-based Latin hypercube design (Tang, 1993). Uniform Design (Fang, 1980) was based on the concept of good lattice point, which is very efficient and provides good design uniformity. Some low discrepancy sequences such as Hammersley sequence (Hammersley, 1960), Halton sequence (Halton, 1960), Niederreiter sequence (Niederreiter, 1987) were also used for experimental design, since the data points generated are well spread and have good uniformity. 1.2.3 Approximation models A variety of approximation models has been developed over the years. Response surface methodology (Box and Wilson, 1951) is originally developed for physical experiments, and has been applied widely in manufacturing industries for product improvement or process optimization. A second order polynomial is generally constructed by linear regression, using the least squares technique. Kriging is an interpolation method developed in geostatistics (Cressie, 1991). It is extremely flexible, and can provide accurate predictions for highly nonlinear problems. Multivariate adaptive regression splines (MARS) approximate the responses by adaptively selecting a set of spline basis functions and the coefficients through forward or backward regression (Friedman, 1991). Artificial intelligence and machine learning have undergone great progress in the last 20 years. Computational intelligence tools such as neural networks, radial basis function networks, Gaussian processes and support vector machines have been proved versatile, robust and universal approximators, and have found applications in a wide range of fields. 5 CHAPTER 1 INTRODUCTION 1.2.4 Performance-based design Performance-based design is gaining acceptance in earthquake engineering, and has the potential to make significant improvement over current practice. Performance-based seismic design was formulated as a structural optimization problem and solved by minimizing a cost function subjected to performance constraints (Ghaboussi and Lin, 2000). Since earthquake resistant design involves randomness and many uncertainties, reliability is always one of the main concerns. Reliability-based framework for performance-based design was put forward, with lifecycle cost minimization carried out to determine the optimal design for structures subjected to multiple natural hazards (Wen, 2001). 
A computational approach for efficient implementation of performance-based design was proposed, in which reliability was calculated by Importance Sampling, with performance functions evaluated by local interpolation of a response database (Foschi et al., 2002). Performance-based seismic engineering was also discussed as to main requirements for a reliable design, suitable probabilistic design approach and conceptual preliminary design procedure (Bertero and Bertero, 2002). 1.3 Objectives of The Research As seen from the foregoing review, the structural response during a strong earthquake motion depends on a multitude of random variables, and the behavior is very intricate and the response is very difficult to be predicted in a reliable manner. This is due to the uncertainty and randomness pertinent to the ground motion, the material, geometric and boundary non-linearity associated with the structure, as well as the uncertainty in the computational models with built-in simplifying assumptions and approximations. At present, the most accurate CHAPTER 1 INTRODUCTION structural analysis procedure is nonlinear dynamic time history analysis. The structural responses during a strong earthquake are random processes, implicit functions of the intervening random variables. In order to perform structural reliability assessment, simulation approaches are generally indispensable. However, seismic reliability assessment may necessitate a large number of performance function evaluations, each requiring execution of a nonlinear dynamic analysis program, which is a formidable task in terms of computational time and resources. Similarly, performance-based seismic design also requires repetitive running of a nonlinear dynamic structural analysis program. To improve efficiency, some researchers have proposed using empirical models to replace computer code for prediction and estimation. However, most of the works are limited to deterministic problems: linear or nonlinear static problems with a few variables. Realistic earthquake engineering problems involve many random variables, with their interactions resulting in complex structural behavior. Therefore, a model is sought that can approximate accurately the input - output variable functional relationship and, as such, improve computational efficiency and effectiveness. To accomplish this goal, this thesis will focus on, (1) development of neural network-based model and corresponding software; (2) exploration of performance-based seismic design using neural network modeling. In order to demonstrate the applicability and efficiency of the proposed methods, some applications in seismic reliability analysis will be provided. Furthermore, case studies on performance-based seismic design will be presented. 7 CHAPTER 1 INTRODUCTION 1.4 Thesis Outline The thesis is organized as follows: Chapter 1 Introduction: The background and incentive for this study is described, the objectives of the research are outlined, and previous works pertinent to this study are briefly addressed. Chapter 2 Generation of artificial ground motions: Review of previous works on artificial ground motion synthesis is presented. Generation of non-stationary accelerograms, as well as spectrum-compatible accelerograms is discussed. The synthesized ground motions will be used as earthquake ground inputs for nonlinear dynamic time history analysis. 
Chapter 3 Design of computer experiments methodology: Experimental design methods are reviewed, including classical methods, random design methods and quasi-random design methods. The approach proposed in this research is described. Design of experiments techniques will be used to construct response databases for neural network training. Chapter 4 Artificial neural networks theory and implementation: The fundamentals of artificial neural networks theory are described, with multilayer backpropagation neural networks particularly discussed. The implementation of artificial neural networks in the research is detailed. Chapter 5 Performance-based seismic design methodology: The state of art and practice of performance-based seismic design is described, including the philosophy, design criteria, and design methods. In this study, performance-based seismic design is formulated as a structural 8 CHAPTER 1 INTRODUCTION optimization problem subject to reliability constraints, with the optimum solution computed by gradient-free algorithms. The optimization relies on neural network modeling of the structural responses, which is proved efficient and effective. Chapter 6 Seismic reliability analyses: case studies: A number of case studies of seismic reliability analysis, based on response databases and neural network models or local interpolations, are presented. (1) A one-bay two-story reinforced concrete frame is subjected to earthquake excitation, and the reliability indices associated with three performance levels are determined, with sensitivity of reliability with respect to the random parameters studied. (2) A two-bay twenty-story reinforced concrete frame is adopted as an example of tall building, and its performances under two different levels of ground shakings are assessed. (3) The behavior of a bridge bent without or with seismic isolation is investigated, with lead rubber bearing used as the seismic deck isolator. The effect of the isolator on structural performances is studied. (4) A wood shear wall under strong ground motions is analyzed, with material non-linearity depicted by a nail hysteresis model, and the effects of nail spacing on structural reliability are studied. (5) An actual building structure that has been instrumented and experienced several earthquakes is investigated for its seismic performance without or with seismic retrofit. Brace type hysteretic dampers are used as seismic upgrading strategy. Chapter 7 Performance-based seismic design applications: The databases generated as well as the neural network models constructed in the preceding chapter, are employed for performance-based seismic designs. (1) For the one-bay two-story reinforced concrete frame, the optimal distribution of masses is determined, if the distributions of the other random CHAPTER 1 INTRODUCTION variables are known; (2) Under strong ground shaking, the column dimensions of the twenty-story building are determined to meet pre-specified target reliability indices. (3) The diameter of the non-isolated bridge pier columns is calculated when the distributions of other random variables are prescribed. In the case that the bridge deck is seismically isolated, the dimension of the isolator is determined given the statistical information of other random variables. (4) The nail spacing of the wood shear wall under earthquake shaking is calculated. 
Chapter 8 Summary and future work: A discussion of the significance of this study for seismic reliability assessment and performance-based seismic design is presented. Some recommendations for future further study are briefly outlined. 10 CHAPTER 2 GENERATION OF ARTIFICIAL GROUND MOTIONS CHAPTER 2 GENERATION OF ARTIFICIAL GROUND MOTIONS 2.1 Introduction With the development of computer technology and advances in numerical modeling of modern complex structures, it becomes feasible for practicing engineers to perform nonlinear dynamic time history analysis of structures subjected to ground motions. As long as the computational model for the structure and the adopted ground motion time histories are appropriate, such an approach has shown its superiority both in accuracy and efficiency as compared to other methods (Atkinson, 1998). It is then necessary to have accelerograms that represent the type of seismic excitation expected at a site. Structural engineers tend to take advantage of any historically recorded accelerograms for the given site, or borrow some strong ground motion recordings from other regions and scale the magnitudes if there are not such records for the site under consideration. This approach seems plausible, but some cautions must be taken. For a region with high seismicity, there are some recordings of ground motions, but those records are representative of the actual past earthquakes, and they will never repeat in the future, in other words, they may not represent future earthquake ground motions. On the other hand, historic records for a given site with low seismicity are usually scarce. To use strong ground motions from other regions blindly, without any assessment of the similarities and differences between the two sites as regard to seismic source mechanism, wave propagation path and local site characteristics could lead to severe errors in predicting structural responses. Nobody knows to what extent the adopted records approximate the expected future ground motions of the specific site of interest. Moreover, 11 CHAPTER 2 GENERATION OF ARTIFICIAL GROUND MOTIONS the complexity and uncertainty involved in the structural behavior require that a number of ground motions be used in assessing the responses to ensure a safe and economical design. Hence, structural analysts must resort to artificial ground motion synthesis. Earthquake ground motion is influenced by such factors as source mechanism, magnitude, epicentral distance, travelling path geology and topography, and local soil conditions, to name a few. Since historical strong ground motion records may be few, it is difficult to generate accelerograms that can serve as realizations of future earthquake records. Visually, there are obvious differences between the actual ground motion records and artificially simulated ones. However, numerous studies on structural responses have showed that the simulated ground motions are equivalent to the recorded ground motions, as long as the simulated ones include approximately the same amplitude and frequency content and are of nearly the same duration as the real ground motions (Atkinson, 1998). There are basically two methods for simulating ground motions: the engineering approach and the seismological approach. Seismologists and geophysicists are interested in understanding the earthquake mechanism and reproduction of the faulting process and wave propagation in heterogeneous media. 
They generate ground motion based on a physical model that takes into account seismic moment, stress drop, fault rupture process, fault dimension and orientation, travel path geology and local site amplification as well as topographical effects. A slip function is postulated to model the rupture process and the Elastoaynamic Representation Theorem is employed to compute the ground motion. As this approach incorporates all the major factors which affect earthquake ground motion, it can accurately reflect the source effect, wave propagation effect and local site condition, and it is very useful for site-specific simulations. 12 CHAPTER 2 GENERATION OF ARTIFICIAL GROUND MOTIONS However, engineers who are only interested in the prediction of the structural responses during future earthquakes, require that the synthetic accelerograms roughly result in the same structural responses as the real event or at least, when generated in a large ensembles, the results can be used to estimate an accurate probability distribution of the effects. Based on this distribution, a reliable seismic safety assessment can then be made. Whether or not the ground motions are due to the same faulting process or geological travel path as the anticipated real event is secondary. As earthquake spectra are often used by engineers in seismic analysis and design, it is desirable to employ artificial ground motion accelerograms compatible with the given design spectra. From the perspective of earthquake engineering, it would be better to combine these two approaches to generate site-specific accelerograms, ie., to model the ground motion as a non-stationary random process while taking into account the seismic source mechanism, wave propagation in heterogeneous media and local soil condition. 2.2 Review of Ground Motion Simulation Methods A number of approaches have been proposed for the generation of synthetic ground motions. The most general methods are: (1) analytical or geophysical models, such as using a semi-empirical Green's function; (2) spectral representation method, based on random process theory; (3) frequency-wave number spectra representation of spatial variability of seismic ground motions; (4) auto-regressive moving average (ARMA) model; (5) wavelet model; (6) neural networks model. It is well known that earthquake ground motions are very complicated in nature, and for engineering applications, it is not necessary to reproduce the expected ground motions in order to depict their characteristics sufficiently. What need to be 13 CHAPTER 2 GENERATION OF ARTIFICIAL GROUND MOTIONS done is to identify and determine the ground motion parameters that are of engineering importance, and describe the characteristics of the ground motions in terms of these key parameters. To this end, three major factors are considered of primary significance and should be taken into consideration to obtain a proper ground motion time history, namely, the peak amplitude, the frequency content and the strong motion duration (Kramer, 1996). The construction of the model should be based on probabilistic seismic hazard evaluation of the site under consideration, especially for some large and important structures, taking into account uncertainties in seismic source, wave travel path geology and local soil conditions. Both non-stationarity in amplitude and frequency is preferred to be encompassed, as earthquake ground motion is in essence a non-stationary process. 
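As a concrete illustration of the three factors just listed, the short sketch below extracts a peak amplitude, a frequency-content measure and a strong-motion duration from an accelerogram. The 5-95% Arias-intensity window used for duration and the Fourier-amplitude peak used for the predominant frequency are common engineering definitions adopted here purely for illustration; they are not prescriptions from this thesis.

```python
# Illustrative only: summarize an accelerogram a(t), sampled at dt seconds, by its
# peak amplitude, predominant frequency and 5-95% Arias-intensity duration.
import numpy as np

def ground_motion_summary(a, dt):
    pga = np.max(np.abs(a))                            # peak ground acceleration

    freqs = np.fft.rfftfreq(len(a), dt)                # Fourier amplitude spectrum
    amp = np.abs(np.fft.rfft(a))
    predominant_freq = freqs[np.argmax(amp[1:]) + 1]   # skip the zero-frequency term

    arias = np.cumsum(a**2) * dt                       # proportional to Arias intensity
    arias /= arias[-1]
    t = np.arange(len(a)) * dt
    duration_5_95 = t[np.searchsorted(arias, 0.95)] - t[np.searchsorted(arias, 0.05)]

    return pga, predominant_freq, duration_5_95

# Example with a toy signal (not a real record):
dt = 0.01
t = np.arange(0.0, 20.0, dt)
a = np.exp(-0.2 * t) * np.sin(2 * np.pi * 2.5 * t)
print(ground_motion_summary(a, dt))
```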
The objective here is to provide a brief description of ground motion synthetic methods. For details, please refer to the literatures mentioned. 2.2.1 Empirical Green's function method One of the most reliable methods for predicting strong ground motions from a large earthquake is the empirical Green's function method. Theoretical Green's function, applied to seismology, is a mathematical expression that depicts the effect of the Earth geological structure on seismic waves generated by a micro-earthquake. It is of minor practical value as it can only be calculated for a simplified subsurface geological structure, which does not reflect the real profile. The idea of the empirical Green's function was originally introduced by Hartzell (1978), as it was found that the Green's function resembles the actual recordings from micro-earthquakes. Thus, these records of micro-earthquakes, so-called empirical Green's functions, can be used to simulate strong ground motion anticipated at a given site. Usually, 14 CHAPTER 2 GENERATION OF ARTIFICIAL GROUND MOTIONS actual recordings of micro-earthquakes (Richter magnitude 2 to 3) are employed to compute the ground motions of a moderate or large earthquake with magnitude 6 to 8. The advantage of this method is to exploit not only the common propagation path and local site effects, shared by small events and the target event, but also the source effects possessed by the small events within the fault area of the target event. The empirical Green's function method exploits the records of small events instead of the theoretical Green's functions. It is desirable that the small events should be as small as possible to be assumed as a point source in the fault. However, the smaller the events are, the more difficult it is to obtain accurate seismic records. As such, most of simulations by the empirical Green's function method have been made using aftershocks, which are not so small events as compared to the target event. Figure 2.1 gives a schematic illustration of the empirical Green's function method. For simulating earthquake strong ground motions, a finite fault model has to be employed. A causative fault plane is usually assumed based on seismological information. The fault plane is then discretized into many small patches, and each patch is treated as a sub-source, an impulsive point source. For each sub-source, its rupture model, slip function and the Green's function have to be defined. The acceleration at a site of interest is then calculated by superposing the arrival of the earthquake waves in a proper temporal and spatial sequence from all the sub-sources, which may have different rupture parameters and Green's functions. If no empirical Green's functions are available, the alternative method used is to stochastically simulate small event motions based on a seismological spectral model and make summation of small events in the same manner as with the empirical Green's functions. For further details, 15 CHAPTER 2 GENERATION OF ARTIFICIAL GROUND MOTIONS see Hartzell (1978), Boore (1983), Papageorgiou and Aki (1983), Hutchings and Wu (1990), Hutchings (1991, 1994), Zeng (1994), Haddon (1996) and Kamae et al (1998). The empirical Green's function method has the following limitations: (1) Small magnitude aftershock records are needed from the same seismic source as the strong target motion. (2) The small events and the large event should share the same fault mechanism, travel path geology and local site conditions. 
(3) It can only be applied for soil with linear behavior, as superposition is implied. To allow for nonlinear soil effects, ground motions at bedrock are generated first; then a nonlinear soil dynamic analysis has to be employed to calculate the surface ground motions. The empirical Green's function method is frequently applied by seismologists or geophysicists for prediction of site-specific strong ground motion, and its focus is on truly modeling the geophysical features of the earthquake process (i.e., the faulting process and the travel path geology effect). This method is often too restrictive to be used for predicting structural responses, because the detailed information concerning the fault rupture and geology is highly uncertain in real applications, if available at all.

Figure 2.1 A schematic diagram of the empirical Green's function method (Kramer, 1996, Fig. 8.24, p. 344)

2.2.2 Spectral representation method

Structural engineers usually apply stochastic process theory to simulate ground motions. White noise was the simplest ground motion model used in the earlier stage of earthquake research. It has been observed, however, that the ground motion generally has a strong segment that can be modeled as a stationary process. A filter applied to white noise was proposed (Kanai, 1957; Tajimi, 1960), which resulted in the ground motion power spectral density function S(ω) as follows:

S(\omega) = S_0 \frac{\omega_g^4 + 4 h_g^2 \omega_g^2 \omega^2}{(\omega_g^2 - \omega^2)^2 + 4 h_g^2 \omega_g^2 \omega^2}    (2.1)

where ωg denotes the predominant frequency of the ground; hg represents the critical damping ratio of the ground; and S0 is a constant giving the power spectral density of the white noise. For firm ground, the following values were suggested: ωg = 15.6 rad/sec and hg = 0.6. Since, in this model, the power spectral density is proportional to ω^(-2) in the high frequency range, and it has a stationary peak in the neighborhood of ωg, both of which are characteristics of an actual earthquake, it is widely used for generation of artificial ground motions. However, earthquake ground motion is non-stationary in time and heterogeneous in space. As noted earlier, the intensity, duration and spectral characteristics are the three major properties of ground motion. The simulated ground motion can be considered as the product
Frequency-wave number spectra method Spatial variation of earthquake ground motion is one of the important issues to be considered in the seismic design of spatially extended structures, some examples including long-span bridges, pipelines, underground structures, etc. The frequency-wave number spectra (FK spectra) can fully describe seismic wave that propagates coherently through a site. The FK method was introduced by Capon (1969), with the power spectra and cross spectra calculated by Fourier technique. The application of Fourier Transform to a sequence in space leads to the wave number spectrum. The application to a series of time histories recorded along a straight line in space results in the frequency-wave number spectrum, which comes from two successive application of the ID Fourier Transform. For a 2D stationary homogeneous stochastic wave u(x,y,i) with zero mean value, its autocorrelation function is defined as, Ku (g, ij, T) = E\u{x, y, t)u(x + Z,y + rj,t + T)] (2.4) where x and y are the space variables; t is the time variable; £ and TJ are the space increments in the x-direction and y-direction respectively; r is the time increment. If the triple Fourier transform of Ruu (£,/7,r) exists, the frequency-wave number spectrum of w(x,_y,/)is calculated as, S..(*:,,*,,®} = 7^£££^(£7.r)^ (2-5) \2.7V) 00 CO where kx is the wave number in x-direction; 19 CHAPTER 2 GENERATION OF ARTIFICIAL GROUND MOTIONS ky is the wave number in y-direction; CD is the frequency; and the inverse transform is given by, Ruu(4,rj,T) = £XX*Suu(kx>ky,<o)exp(ikxZMkyri + i<OT)dkxdkyd<o (2.6) A closed-form analytical FK spectrum of the displacement field at the free surface of an elastic half space was derived based on a point source subject to a double couple, and then it was employed for simulation of ground motion displacements (Deodatis et al, 1990). The FK spectrum was utilized to evaluate spatial correlation characteristics in terms of the cross-spectral density function and the spatial coherence function. The synthesis was carried out numerically using the Fast Fourier Transform (FFT) technique to perform the inversion from the frequency wave number domain to the time space domain. The simulation is an extension of spectral representation, and the followings are from Deodatis et al (1990), u(x,y,t) = V2£ZZ X T^AIA^Jy^co^AkyAcor2cos(Ixkxlx + Iykyb/+oj,t + ) /,=l/y=l /=! 7,=±17,=±1 (2.7) where kxlx = lxAkx ,lx = l,2,...,yVx; co, =IAG>;1 = 1,2,. ..,N; x=kxuINx,Ak. :y =kyuINy,Au) = couIN; is a random phase angle uniformly distributed over (0,2 ;r). 20 CHAPTER 2 GENERATION OF ARTIFICIAL GROUND MOTIONS It was assumed in the above that the frequency wave number spectrum is significant only in the following region defined by, — k < k < k —fc < k < k -co < 0) <6J xu — x xu' yu — y — yu' w — — u 2.2.4 Autoregressive moving average (ARMA) model The ARMA method is a time series analysis approach that synthesizes ground motion by a multi-step discrete equation, the coefficients of which may be time varying to introduce non-stationary behavior. 
The ARMA model describes a linear relationship between the present and past values of a time series Z_k and a white noise shock W_k as

    Z_k − a₁ Z_{k−1} − ... − a_p Z_{k−p} = W_k − b₁ W_{k−1} − ... − b_q W_{k−q}    (2.8)

where Z_k, Z_{k−1}, ..., Z_{k−p} are the variable values at times kδt, (k−1)δt, ..., (k−p)δt; a₁, ..., a_p are the autoregressive coefficients; W_k, W_{k−1}, ..., W_{k−q} are the noise values at times kδt, (k−1)δt, ..., (k−q)δt; b₁, ..., b_q are the moving average coefficients; and δt is the time increment.

In Chang et al (1982), the Box-Jenkins approach (Box and Jenkins, 1976) was applied for identification of suitable ARMA models and optimal estimation of the model parameter values. As the Box-Jenkins procedures are strictly valid only for stationary time sequences, a moving window approach was employed by dividing the non-stationary target accelerograms into short equal segments and analyzing each segment individually. Further, goodness of fit was evaluated by examining the statistics of the residual sequences and comparing them with those of discrete white noise. A second-order autoregressive, first-order moving average model ARMA(2,1) and a fourth-order autoregressive, first-order moving average model ARMA(4,1) were found to best fit the target accelerograms.

Ellis et al (1990) applied time series analysis for the generation of site-dependent accelerograms. The target accelerograms were analyzed to estimate the ARMA model parameters, and a set of regression relations was derived relating the model parameters to the physical variables of the site, such as earthquake magnitude, epicentral distance and site geology. For simulation of site-specific ground motions, a stationary time series was first generated; it was then re-digitized to add non-stationary frequency content, and subsequently multiplied by the standard deviation envelope to yield an artificial accelerogram non-stationary in amplitude and frequency content as well as consistent with the site physical conditions. The advantage of their procedure is that the time-invariant parameters are related to physical variables at the site. Moreover, confidence intervals for the model parameters can be used to generate an ensemble of earthquake ground motions corresponding to the mean, or the mean plus one standard deviation, for design purposes. The deficiency of such a model is that the physical interpretation of the coefficients in the equation is not obvious, and these coefficients must be determined by fitting to some target earthquake accelerograms. A further shortcoming is that the coefficients should be time varying for a truly non-stationary process, which can render the model rather complicated and even unmanageable. For applications of ARMA models in earthquake motion simulation, reference is made to Polhermus and Cakmak (1981); Chang et al (1982); Nau et al (1982); Gersch and Kitagawa (1985); Cakmak et al (1985); Olafsson and Sigbjornsson (1995); Spanos and Zeldin (1996).

2.2.5 Wavelet transform method

The wavelet transform is a mathematical tool widely used in electrical and electronic engineering for signal processing; it transforms sequential data on the time axis, such as earthquake accelerograms, into spectral data in both the time and frequency domains. A wavelet transform therefore provides information on the non-stationary, time-dependent intensity of motion at a particular frequency of interest.
One of the attractions of the wavelet transform, unavailable in the Fourier transform, is that the wavelet coefficients derived from time-sequential acceleration data represent the components of energy input in the time and frequency domains. Iyama and Kawamura (1999) applied the wavelet transform to earthquake ground motion analysis and developed the relationship between wavelet coefficients and energy input, i.e., energy principles in wavelet analysis were derived. Using these principles, the time-frequency characteristics of the 1995 Kobe earthquake ground motions were analyzed, and time histories of energy input for various ranges of frequencies and epicentral distances were identified. Furthermore, a technique to simulate earthquake ground accelerations by the inverse wavelet transform was developed, on the condition that target time-frequency characteristics were specified. Structural response to the synthesized accelerations was compared with the target values, which showed satisfactory correlation between wavelet coefficients and the energy responses in both the time and frequency domains. The wavelet transform is a powerful analytical tool for identifying ground motion characteristics in both the time and frequency domains, and further studies in this area are expected to explore the potential of the wavelet transform in earthquake engineering.

2.2.6 Neural network model

Ghaboussi and Lin (1998) proposed a method to generate artificial earthquake accelerograms from pseudo-velocity response spectra using neural networks. A two-stage approach was employed. First, the Fast Fourier Transform was utilized to calculate the Fourier spectrum of a given accelerogram. Then a replicator neural network was applied as a data compression tool to reduce the dimensionality of the discrete Fourier spectrum. The compressed discrete Fourier spectrum can conversely be decompressed, and the inverse Fourier transform carried out to obtain the associated ground motion. A multi-layer feedforward neural network was employed to map from the pseudo-velocity response spectrum to the compressed Fourier spectrum. Finally, the target accelerogram was obtained by combining the multi-layer feedforward neural network with the retrieval part of the replicator neural network. Ghaboussi and Lin's proposal was applied to a sample of 30 recorded earthquake accelerograms and exhibited potential for future applications.

2.3 Generation of Non-stationary Ground Motion

In this thesis work, a program was written to generate non-stationary ground motion accelerograms based on the spectral representation method. These accelerograms were then used in structural reliability analysis. The procedure is described in what follows.

2.3.1 Determination of ground motion spectral characteristics

The Kanai-Tajimi acceleration spectrum is employed to model the stationary power spectral density function of the acceleration time history (for clarity, it is repeated here),

    S_KT(ω) = S₀ [ω_g⁴ + 4 h_g² ω_g² ω²] / [(ω_g² − ω²)² + 4 h_g² ω_g² ω²]    (2.1)

where S₀ is a constant determining the intensity of acceleration, and ω_g and h_g are the predominant frequency and damping ratio of the ground. The values suggested in Deodatis (1996) for three different soil conditions were used in this study, as shown in Table 2.1.
Table 2.1 Kanai-Tajimi spectrum parameters

    Soil Type                          Frequency ω_g (rad/sec)   Damping ratio h_g
    Rock or stiff soil                 8π                        0.60
    Deep cohesionless soil             5π                        0.60
    Soft to medium clays and sands     2.4π                      0.85

At zero frequency, the Kanai-Tajimi acceleration power spectrum is not equal to zero, which is inconsistent with actual earthquake records. By applying the Kanai-Tajimi spectrum, a significant low frequency component is imparted to the ground motion. In order to model the earthquake motion realistically, the low frequency components must be removed from the Kanai-Tajimi spectrum. This is achieved by applying a high-pass filter to the spectrum. Several filters have been proposed to modify the Kanai-Tajimi spectrum, two of them being the Clough-Penzien filter (Clough and Penzien, 1993) and the sine-square filter (Shinozuka et al, 1994).

Clough and Penzien proposed the following filter, which substantially attenuates the low frequency components,

    |H_h(ω)|² = ω⁴ / [(ω_h² − ω²)² + 4 h_h² ω_h² ω²]    (2.9)

where ω_h and h_h are the fundamental frequency and damping ratio of the filter, which are selected to ensure the desired frequency content of the earthquake motion. The values recommended in Deodatis (1996) are reproduced in Table 2.2 below.

Table 2.2 Clough-Penzien filter parameters

    Soil Type                          Frequency ω_h (rad/sec)   Damping ratio h_h
    Rock or stiff soil                 0.8π                      0.60
    Deep cohesionless soil             0.5π                      0.60
    Soft to medium clays and sands     0.24π                     0.85

The sine-square spectrum was also introduced to modify the Kanai-Tajimi spectrum. It is a high-pass filter claimed to take into account the shear dislocation type seismic source effect (described by a ramp-type slip function), and is formulated as

    |H_s(ω)|² = sin²(ωT/2),   0 < ω ≤ π/T
              = 1.0,          ω > π/T    (2.10)

where T is the dominant rise time of the ramp function. A comparison of the Kanai-Tajimi spectrum, the Clough-Penzien spectrum and the sine-square spectrum is shown in Figure 2.2 (ω_g = 12 rad/sec, h_g = 0.6).

The power spectral density function for the stationary process is then given by

    S(ω) = S_KT(ω) |H_h(ω)|²    (2.11a)
    S(ω) = S_KT(ω) |H_s(ω)|²    (2.11b)

Figure 2.2 Comparison of the Kanai-Tajimi, Clough-Penzien and sine-square spectra

2.3.2 Generation of a stationary process

A stationary earthquake accelerogram, which incorporates the subsoil frequency content, can be generated by superposition of simple harmonic waves with power spectrum S(ω) and random phases using Equation (2.3).

2.3.3 Selection of modulation function

Seismic ground motion is non-stationary; it generally has three stages, a build-up stage, a strong motion stage and a decay stage. The stationary process generated above has to be amplitude-modulated to mimic the time evolution of a real motion. A number of modulation functions have been proposed, and the three used in this study are described as follows.

(1) Jennings modulation function

Jennings et al (1968) proposed a modulation function of the form

    f(t) = (t/t₁)²,                               0 ≤ t ≤ t₁
         = 1.0,                                   t₁ < t ≤ t₂
         = exp[ln(0.1) (t − t₂)/(t_d − t₂)],      t₂ < t ≤ t_d    (2.12)

The selection of proper constants t₁ and t₂ has been discussed by Jennings, who pointed out that the modulation function depends on the magnitude of the earthquake, the distance from the causative fault and the focal depth, and proposed the following expressions for t₁ and t₂ in terms of the earthquake magnitude M and the ground motion duration t_d.
    t₁ = [0.16 − 0.04(M − 6)] t_d    (2.13a)
    t₂ = [0.54 − 0.04(M − 6)] t_d    (2.13b)
    t_d = 10^{(M − 2.5)/3.23}        (2.13c)

Figure 2.3 displays the modulation function curve for an earthquake with magnitude M = 7.0.

Figure 2.3 Jennings modulation function

(2) Amin and Ang modulation function

Amin and Ang (1968) proposed the following modulation function,

    f(t) = (t/t₁)²,           0 ≤ t ≤ t₁
         = 1.0,               t₁ < t ≤ t₂
         = exp[−c(t − t₂)],   t > t₂    (2.14)

The selection of proper constants t₁ and t₂ has been described by Jennings: t₁ is estimated at around 2-4 sec, and t₂ may be taken as 4 sec, 15 sec and 35 sec, respectively, for earthquakes with magnitudes 6, 7 and 8. The selection of c is based on the focal distance (Solnes, 1997).

(3) Hsu and Bernard modulation function

Hsu and Bernard (1978) suggested the following modulation function,

    f(t) = (t/t₀) exp(1 − t/t₀)    (2.15)

where t₀ is the time instant at which the earthquake motion attains its peak. Figure 2.4 shows the shape of this modulation function for t₀ = 5.0 s.

Figure 2.4 Hsu & Bernard modulation function

2.3.4 Generation of a non-stationary artificial ground motion

The final non-stationary earthquake motion is obtained by applying the modulation function to the stationary earthquake acceleration process, i.e.,

    a(t) = f(t) w(t)    (2.2)

A ground motion accelerogram generated using the Amin & Ang modulation function with PGA = 0.2g is shown in Figure 2.5, and one generated using the Hsu modulation function with PGA = 0.2g is shown in Figure 2.6.

Figure 2.5 Artificial ground motion generated using the Amin & Ang modulation function
Figure 2.6 Artificial ground motion generated using the Hsu modulation function

Sometimes it is desired that the artificial ground motion have two strong components; this is accomplished by the following modulation function,

    f(t) = (t/t₀) exp(1 − t/t₀) + c [(t − t₁)/(t₂ − t₁)] exp[1 − (t − t₁)/(t₂ − t₁)]    (2.16)

where c is the ratio of the second peak amplitude to the first peak amplitude; t₀ is the time instant when the first peak occurs; t₁ is the start time of the second strong component (the second term is applied for t ≥ t₁); and t₂ is the time instant when the second peak occurs. A ground motion accelerogram composed of a main shock and an aftershock with PGA = 0.2g is shown in Figure 2.7.

Figure 2.7 Artificial ground motion with two strong components

The above simulation approach employs a simple power spectrum to account for the spectral characteristics of earthquake ground motion, and a modulation function to reflect the non-stationarity of the earthquake process. Implicitly, the seismogenic source is assumed to be a point source, so the approach can only be used to generate far-field earthquakes. The advantage of this method is that it is easy to implement and can approximately allow for the local site effect. However, the seismic faulting mechanism is not considered in the model, and the phase angles are assumed uniformly distributed over [0, 2π], which is not consistent with real earthquake motions, whose frequency content may vary with time and whose phase angles are usually not uniformly distributed.
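A minimal sketch of the modulation step is given below (Python with NumPy; the values of t1, t2, td and t0 are illustrative, and the stationary record is replaced here by a white-noise stand-in rather than the output of Equation (2.3)). The envelopes follow Equations (2.12) and (2.15) as written above.

    import numpy as np

    def jennings(t, t1, t2, td):
        """Jennings-type envelope, Eq. (2.12): quadratic build-up, unit
        strong-motion segment, exponential decay reaching 0.1 at t = td."""
        f = np.ones_like(t)
        f[t < t1] = (t[t < t1] / t1) ** 2
        tail = t > t2
        f[tail] = np.exp(np.log(0.1) * (t[tail] - t2) / (td - t2))
        return f

    def hsu_bernard(t, t0):
        """Hsu & Bernard envelope, Eq. (2.15), peaking at t0."""
        return (t / t0) * np.exp(1.0 - t / t0)

    # Eq. (2.2): a(t) = f(t) w(t); in the actual procedure w(t) comes from Eq. (2.3)
    t = np.arange(0.0, 20.0, 0.01)
    w = np.random.default_rng(3).standard_normal(t.size)
    a = jennings(t, t1=2.0, t2=10.0, td=20.0) * w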
2.3.5 Baseline correction

Numerical integration of digital accelerograms (recorded or artificial) in the time domain often results in non-physical drifts in the velocity and displacement time histories. This is natural for artificially generated accelerograms, as they are synthesized from a power spectrum with no physical constraints imposed. Generally, this phenomenon has hardly any influence on the seismic response of structures subjected to those artificial ground motions, as the inertial forces are calculated from the acceleration time history. However, the issue has to be addressed for spatially extended structures, since the discrepancy in absolute displacements at different points of the structure can lead to damage or even failure of the structure. Besides, it is necessary to correct artificial accelerograms used for shake table tests, because the hydraulic actuators have a limited displacement range.

Many methods are available to correct digital accelerograms and eliminate the unrealistic velocity or displacement drift. Trifunac (1971) applied a time domain filter, the Ormsby filter, to the acceleration time history. A number of frequency-domain filters have been used for processing earthquake records, such as elliptical filters (Sunder and Conner, 1982; Sunder and Schumacker, 1982) and Butterworth filters (Converse, 1992). A method based on Lagrange multipliers was proposed by Trujillo and Carter (1982). In this thesis, the following procedure is adopted:
(1) Integrate the accelerogram a(t) to obtain the velocity time series v(t);
(2) Fit a quadratic polynomial to the velocity time history v(t) by the least squares technique, v(t) = c₀ + c₁ t + c₂ t²;
(3) Remove the derivative c₁ + 2 c₂ t from the accelerogram a(t);
(4) Calculate the Fourier spectrum A(ω) of the accelerogram a(t) by the Fast Fourier Transform (FFT);
(5) Apply a causal high-pass Butterworth filter, h(ω) = 1/√(1 + (ω_c/ω)⁴), to A(ω);
(6) Perform the inverse Fast Fourier Transform (IFFT) of A(ω) to obtain a(t) as the corrected accelerogram;
(7) Calculate A(ω) by FFT of a(t), and compute V(ω), the velocity Fourier spectrum;
(8) Apply the causal high-pass Butterworth filter, h(ω) = 1/√(1 + (ω_c/ω)⁴), to V(ω);
(9) Perform the IFFT of V(ω) to obtain v(t) as the final velocity time series;
(10) Calculate V(ω) by FFT of v(t), and compute D(ω), the displacement Fourier spectrum;
(11) Apply the causal high-pass Butterworth filter, h(ω) = 1/√(1 + (ω_c/ω)⁴), to D(ω);
(12) Perform the IFFT of D(ω) to obtain d(t) as the final displacement time series.

In the above, the corner circular frequency is taken as ω_c = 2π f_c = 2π × 0.05 = 0.1π rad/sec, to remove the long period components whose period is in excess of 20 sec. A ground accelerogram and the associated velocity and displacement time histories prior to baseline correction are presented in Figures 2.8(a)(b)(c), while the corresponding time histories after processing are displayed in Figures 2.8(d)(e)(f). The units of acceleration, velocity and displacement are, respectively, m/sec², m/sec and m.
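A simplified sketch of this correction procedure is given below (Python with NumPy; function names are illustrative). Note one simplifying assumption: the thesis applies causal Butterworth filters, whereas this sketch applies only the filter magnitude |h(ω)| in the frequency domain (a zero-phase approximation).

    import numpy as np

    def baseline_correct(a, dt, fc=0.05):
        """Quadratic detrend of the velocity, then a high-pass filter
        h = 1/sqrt(1 + (wc/w)^4) applied in the frequency domain."""
        t = np.arange(a.size) * dt
        v = np.cumsum(a) * dt                      # step (1): integrate to velocity
        c = np.polyfit(t, v, 2)                    # step (2): v ~ c2 t^2 + c1 t + c0
        a = a - (c[1] + 2.0 * c[0] * t)            # step (3): remove trend derivative

        def highpass(x):
            X = np.fft.rfft(x)
            w = 2.0 * np.pi * np.fft.rfftfreq(x.size, d=dt)
            wc = 2.0 * np.pi * fc
            h = np.zeros_like(w)
            h[1:] = 1.0 / np.sqrt(1.0 + (wc / w[1:]) ** 4)   # DC component removed
            return np.fft.irfft(h * X, n=x.size)

        a = highpass(a)                            # steps (4)-(6)
        v = highpass(np.cumsum(a) * dt)            # steps (7)-(9)
        d = highpass(np.cumsum(v) * dt)            # steps (10)-(12)
        return a, v, d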
Figure 2.8(a) Acceleration time history before baseline correction
Figure 2.8(b) Velocity time history before baseline correction
Figure 2.8(c) Displacement time history before baseline correction
Figure 2.8(d) Acceleration time history after baseline correction
Figure 2.8(e) Velocity time history after baseline correction
Figure 2.8(f) Displacement time history after baseline correction

2.4 Simulation of Ground Motion Compatible With Response Spectrum

In practical engineering applications, engineers are generally required to perform seismic resistant design in accordance with a mandatory code. The seismic design ground motion for a given site is specified in the form of a design response spectrum, which is an idealization of the response of a linear single degree of freedom oscillator subjected to a set of ground motions. To carry out nonlinear dynamic analysis, it is often preferable to use ground motion time histories compatible with the code design spectrum or with a response spectrum provided by the client.

A program was developed in this study to generate earthquake motion time histories compatible with the code-prescribed design spectrum or a user-specified response spectrum. The procedure is shown in Fig. 2.9. The idea of adjusting the power spectral density function was suggested by Gasparini and Vanmarcke (1976). The response spectrum is calculated by direct integration (Paz, 1997). Baseline correction can be done in the same way as outlined in subsection 2.3.5. A stationary acceleration time history is first generated based on an assumed spectral density function, with Equation (2.3) applied to produce a stationary ground motion time history. Subsequently, a modulation function f(t) is applied to the stationary time history w(t), as in Equation (2.2), to obtain a non-stationary time history a(t). Two modulation functions were implemented, namely the Amin & Ang function, Equation (2.14), and the Hsu & Bernard function, Equation (2.15).

The 1997 U.S. Uniform Building Code design spectrum has been implemented as the default design spectrum. The design ground motion is specified as corresponding to a 5% damped elastic response spectrum, as shown in Figure 2.10. The seismic coefficients Ca and Cv are given in Table 2.3. For soil profile type SF, site-specific geotechnical investigation and dynamic soil response analysis should be carried out to determine the proper seismic coefficients.

Figure 2.10 UBC design response spectrum

Table 2.3 Seismic coefficients Ca and Cv

    Soil    Z = 0.075       Z = 0.15        Z = 0.20        Z = 0.30        Z = 0.40
    type    Ca     Cv       Ca     Cv       Ca     Cv       Ca     Cv       Ca       Cv
    SA      0.06   0.06     0.12   0.12     0.16   0.16     0.24   0.24     0.32Na   0.32Nv
    SB      0.08   0.08     0.15   0.15     0.20   0.20     0.30   0.30     0.40Na   0.40Nv
    SC      0.09   0.13     0.18   0.25     0.24   0.32     0.33   0.45     0.40Na   0.56Nv
    SD      0.12   0.18     0.22   0.32     0.28   0.40     0.36   0.54     0.44Na   0.64Nv
    SE      0.19   0.26     0.30   0.50     0.34   0.64     0.36   0.84     0.36Na   0.96Nv

In the above, Na and Nv are two near-source factors. The Na factor applies to the acceleration-controlled portion of the design spectrum, and the Nv factor applies to the velocity-controlled portion of the design spectrum.
Na has a value from 1 to 1.5, and Nv has a value from 1 to 2, depending on the seismicity of the faults and the relative location of the active faults.

The iterative procedure of Fig. 2.9 can be summarized as follows:
(1) Read the target acceleration response spectrum S_a^T(T, ξ);
(2) Initialize the power spectral density function S(ω);
(3) Generate a stationary acceleration time history w(t);
(4) Generate a non-stationary acceleration time history a(t);
(5) Calculate the response spectrum S_a(T, ξ);
(6) Compare the calculated response spectrum S_a(T, ξ) with the target response spectrum S_a^T(T, ξ); if the match is acceptable, stop; otherwise, update the power spectral density function S(ω) by scaling it with the ratio of the target to the computed spectral ordinates, and return to step (3).

Fig. 2.9 Flowchart to generate a response spectrum compatible ground motion time history

A ground motion acceleration time history compatible with the UBC design spectrum for Z = 0.20 and soil type SC is shown in Figure 2.11.

Figure 2.11 UBC design spectrum compatible artificial ground motion accelerogram

2.5 Summary and Discussion

Earthquake ground motion is one of the most important uncertainties that significantly affect structural behavior; hence, the success of seismic resistant design using time history analysis depends largely on the appropriate selection of strong ground motion acceleration time histories. A sufficient number of accelerograms is needed to assess the variability in structural responses resulting from the uncertainty in earthquake motions. A number of methods proposed in the past for artificially synthesizing earthquake ground motion accelerations were briefly reviewed. Seismologists and geophysicists are usually interested in simulating the physical faulting process, and they simulate earthquake motions based on
It is expected that in the future, more seismological information will be incorporated into artificial ground motion simulation, and realistic earthquake ground motion can be predicted based on probabilistic seismic hazard assessment of the region of interest. As the earthquake resistant design code is changing its 41 CHAPTER 2 GENERATION OF ARTIFICIAL GROUND MOTIONS philosophy toward performance-based design, in order to preserve the operational integrity of critical structures after a major earthquake in addition to life safety, structural design is subject to requirements that are more stringent. In order to produce a safe and economical design, structural engineers must model the ground motion and structural behavior realistically. Thus, it is compulsory to synthesize as realistically as possible reliable ground motions that the structure may experience during its intended lifetime. It is envisioned that realistic prediction of earthquake ground motion at a given site will be possible with further advances in seismology and earthquake engineering. 42 CHAPTER 3 DESIGN OF COMPUTER EXPERIMENTS METHODOLOGY CHAPTER 3 DESIGN OF COMPUTER EXPERIMENTS METHODLOGY 3.1 Introduction The analysis and design of modern engineering projects often involves computer-based simulations. In the past, new product design was usually achieved, to some extent, on basis of physical experiments on prototypes or models. However, this type of experiments may be, in general, costly and time-consuming. With the great advancements in computer technology, computer simulations are instead used extensively in a variety of areas, such as engineering design, industry manufacturing, etc. However, even with today's most advanced computers, it is still expensive and time-consuming to do simulations of large and complex engineering systems for design optimization and reliability analysis. Hence, approximate approaches, based on computer experimental design and response modeling, are being employed to reduce the computational expense and running time to an acceptable level, without sacrificing prediction accuracy. Prior to an approximate response model being constructed, the design variables (input variables) and the response variables of interest (output variables) must be selected judiciously. First, a sample of combinations of the design variables is generated during an experimental design phase. An appropriate computer program is then run for each combination in the sample, obtaining the corresponding responses. Next, a response model is developed to map the input - output functional relationship. Finally, the response model is used as a surrogate model that is sufficiently accurate to substitute the actual response during 43 CHAPTER 3 DESIGN OF COMPUTER EXPERIMENTS METHODOLOGY design optimization or reliability analysis. Thus, building approximations for computer simulations involves four steps: (1) Problem specification; (2) Experimental design; (3) Response modeling; (4) Applications of response models; In the following, computer experimental design methods currently available are briefly reviewed, followed by the design approaches proposed in this study and their implementations. A summary concludes with comparisons regarding the advantages and disadvantages of various experimental design methods. 3.2 Review of Methods for Design of Computer Experiments Prior to building a response model, a database of representative input - output pairs must be created. 
The data points should be carefully selected so that they cover the design space as uniformly as possible. The problem of choosing a suitable sample of design variables is the subject of Design of Experiments (DOE), a branch of Statistics. Classical DOE is developed for physical experiments that are subject to noise, so replications at some points may be necessary for estimation of the error due to noise. Central Composite Design is a typical classical experimental design. As physical experiments are costly and time-consuming, the designs are made parsimonious to reduce the experimental overhead. A linear or quadratic polynomials response surface is usually built. As the number of data points grows exponentially with dimension, it is impossible to apply it to high-dimensional problems. It is 44 CHAPTER 3 DESIGN OF COMPUTER EXPERIMENTS METHODOLOGY neither practical nor accurate to model the intricate behavior of a large and complex system using this modeling technique. Space-filling Designs are proposed for computer experiments to overcome the above-mentioned drawbacks. The data points are chosen to scatter uniformly throughout the design space so that as much information as possible can be obtained from the computer simulation. Sacks et al. (1989) thoroughly discussed computer experimental design. They outlined the differences between physical experiments and computer experiments. Furthermore, they treated the deterministic output from computer experiment as the realization of a random process, and used a Kriging model for prediction. Koehler and Owen (1996) systematically presented two main statistical approaches to computer experiments, one based on Bayesian statistics, while the other based on sampling techniques. Latin Hypercube Design (McKay et al, 1979) was the first approach introduced for computer experiments, which will be covered in the next section. Orthogonal Array Design was proposed to improve upon a Latin Hypercube Design (Owen, 1992). Orthogonal-Array (Hedayat, et al, 1999) based Latin Hypercube Design was developed by Tang (1993). Park (1994) applied the integrated mean square error criterion to Latin Hypercube Design to generate Optimal Latin Hypercube Design. Morris and Mitchell (1995) employed the max-min distance criterion given in Johnson et al (1990) to construct & Max-min Latin Hypercube Design. Simulated annealing was employed to maximize the minimal inter-point distance so that the data points were spread as far as possible from each other. Fang (1980) applied number-theoretic methods for experimental design to generate a Uniform Design. A generating vector is needed for constructing a uniform design, and in a high dimensional case, this design needs considerably fewer samples than other methods. Ye (1998) created an Orthogonal Latin 45 CHAPTER 3 DESIGN OF COMPUTER EXPERIMENTS METHODOLOGY Hypercube Design in which two columns of the Latin Hypercube are orthogonal. Kalagnanam and Diwekar (1997) used Hammersley Sequence Sampling for experimental design. Hammersley Sequence is a kind of Low Discrepancy Sequences, which places data points evenly in the unit hypercube. It provides a design with better uniformity than Latin Hypercube Design. Simpson et al (2001) compared different experimental design methods, namely, Latin Hypercube Design, Orthogonal Array Design, Uniform Design and Hammersley Sequence Design. 
Based on two engineering problems, it was concluded that Uniform Design performs well when the sample size is small whereas Hammersley Sequence Design exhibits better behavior in the case of large sample sizes. 3.2.1 Central composite design Central Composite Design (CCD) is a fractional factorial design that is composed of a central point, corner points of a hypercube, and additional "star points" which are situated on the axes and have a distance of a from the origin, a may take values on the interval [1.0,Vs], where s is the number of variables (in statistics, it is termed factors). When a=1.0, the design is called center-faced Central Composite Design. Central Composite Design is a three-level design that enables a quadratic polynomial response surface to be built. Altogether 2s+2s+l data points are needed while a full second-order polynomial response surface has (s+l)(s+2)/2 coefficients to be determined. A Central Composite Design for 3 variables is shown in Table 3.1 46 CHAPTER 3 DESIGN OF COMPUTER EXPERIMENTS METHODOLOGY 3.2.2 Latin hypercube design Latin Hypercube Sampling (LHS) is a stratified Monte Carlo simulation method. The probability range [0.0,1.0] for each random variable is divided into n equal intervals, within which a random number P, (i = l,...,n) is generated. Then the corresponding random variable values are obtained by the inverse of the cumulative distribution function F(x), ie., Xj = F''(Pi), where Pi denotes the probability value for the i-th interval and Xi represents the corresponding random variable value. Latin Hypercube Design (LHD) is the application of Latin Hypercube Sampling in s dimensions with random combination of the n random variable levels. A Latin Hypercube can be written as a matrix of n rows and s columns (n is the number of samples and s is the number of variables). Each column is a random permutation of the n levels of the associated variable. A Latin Hypercube Design of 10 combinations for two variables is shown in Table 3.2 and plotted in Figure 3.1 Table 3.1 Central Composite Design for three variables Sample No. X, X2 X3 1 0 0 0 2 a 0 0 3 -a 0 0 4 0 a 0 5 0 -a 0 6 0 0 a 7 0 0 -a 8 1 1 1 9 1 1 -1 10 1 -1 1 11 1 -1 -1 12 -1 1 1 13 -1 1 -1 14 -1 -1 1 15 -1 -1 -1 47 CHAPTER 3 DESIGN OF COMPUTER EXPERIMENTS METHODOLOGY Where in the table, -1,0 and 1 indicate, respectively, the lower bound, the mean and the upper bound of the variable. Table 3.2 Latin Hypercube Design for two variables Sample No. x, x2 1 0.02 0.66 2 0.15 0.23 3 0.23 0.52 4 0.31 0.73 5 0.47 0.35 6 0.56 0.01 7 0.62 0.83 8 0.71 0.46 9 0.83 0.12 10 0.94 0.93 1 0.9 0.8 0.7 0.6 0.5 0.4 0.3 0.2 0.1 0 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 Figure 3.1 A Latin Hypercube Design for two variables In the above, each variable is scaled to the interval [0,1]. 48 CHAPTER 3 DESIGN OF COMPUTER EXPERIMENTS METHODOLOGY Latin Hypercube Design is easy to construct, and each variable is sampled at n levels. When the data points are projected into any single dimension, there are exactly n different points. This is a desirable attribute for deterministic computer experiments, as the data points do not overlap, which minimizes any information loss. Nonetheless, since the data points are randomly spread in the design space, some points may cluster at certain regions, leaving voids at other regions. This situation should be avoided in all practical situations. A few approaches have been proposed to address this issue of non-uniformity. 
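The stratified sampling just described can be sketched as follows (Python with NumPy and SciPy; the two input distributions at the end are purely illustrative choices).

    import numpy as np
    from scipy import stats

    def latin_hypercube(n, dists, seed=None):
        """Latin Hypercube Design: each variable's probability range is split
        into n equal intervals, one value is drawn per interval via the inverse
        CDF, and the columns are randomly permuted to pair the levels."""
        rng = np.random.default_rng(seed)
        X = np.empty((n, len(dists)))
        for i, dist in enumerate(dists):
            p = (np.arange(n) + rng.random(n)) / n      # one probability per interval
            X[:, i] = dist.ppf(rng.permutation(p))      # Xi = F^{-1}(Pi), random pairing
        return X

    # illustrative variables: a normal material strength and a uniform load factor
    X = latin_hypercube(10, [stats.norm(300.0, 30.0), stats.uniform(0.0, 1.0)], seed=42)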
3.2.3 Uniform design

Uniform Design resulted from the application of number-theoretic methods to the design of experiments. Readers are referred to Fang et al (2000) for the mathematical theory and details. The essence of the number-theoretic method is to choose data points in such a way that they scatter uniformly in the s-dimensional unit hypercube. The generation of a Uniform Design is outlined as follows (Fang and Wang, 1994).

Suppose the dimensionality is s, and n data points are to be generated. Let (n; h₁, ..., h_s) be a vector with integer components satisfying 1 ≤ h_i < n, h_i ≠ h_j (i ≠ j), s < n, and the greatest common divisor (n, h_i) = 1, i = 1, ..., s. Let

    q_ki ≡ k h_i (mod n),  with q_ki taken in {1, 2, ..., n}    (3.1)

The k-th element of the i-th variable, x_ki, can then be calculated by

    x_ki = (2 q_ki − 1)/(2n),  k = 1, 2, ..., n;  i = 1, 2, ..., s    (3.2)

where (n; h₁, ..., h_s) is termed the generating vector.

A Uniform Design for two variables, each of which has 21 levels, is shown in Table 3.3 and plotted in Figure 3.2. The generating vector adopted is (21; 1, 13). Once the generating vector is known, it is easy to generate a Uniform Design. Compared to other methods, Uniform Design is more economical as it needs far fewer points, especially in the high-dimensional case.

Table 3.3 A Uniform Design for two variables with 21 levels

    Sample No.   X1       X2
    1            1/42     25/42
    2            3/42     9/42
    3            5/42     35/42
    4            7/42     19/42
    5            9/42     3/42
    6            11/42    29/42
    7            13/42    13/42
    8            15/42    39/42
    9            17/42    23/42
    10           19/42    7/42
    11           21/42    33/42
    12           23/42    17/42
    13           25/42    1/42
    14           27/42    27/42
    15           29/42    11/42
    16           31/42    37/42
    17           33/42    21/42
    18           35/42    5/42
    19           37/42    31/42
    20           39/42    15/42
    21           41/42    41/42

Figure 3.2 A Uniform Design for two variables with 21 levels

Note that the generated data points are in the unit hypercube I^s = (0,1)^s, and they must be transformed in the following way to the design space for practical application,

    X_ki = X_i^l + x_ki (X_i^u − X_i^l)    (3.3)

where X_ki is the k-th sample of the i-th variable in the design space; x_ki is the k-th sample of the i-th variable in the unit cube space; X_i^l is the lower bound of the i-th variable; and X_i^u is the upper bound of the i-th variable.

3.2.4 Low discrepancy sequence design

Low discrepancy sequences such as the Halton sequence, the Hammersley sequence, the Sobol sequence, the Faure sequence and the Niederreiter sequence are used for numerical integration, optimization and computer simulation. They form the family of quasi-Monte Carlo methods. The data points generated by low discrepancy sequences have an asymptotically uniform distribution.

3.2.4.1 Hammersley sequence design

The principle of the Hammersley sequence (Hammersley, 1960) is briefly outlined below; for mathematical details, readers are referred to Niederreiter (1992). Each nonnegative integer k can be expanded using a prime base p:

    k = a₀ + a₁ p + a₂ p² + ... + a_r p^r    (3.4)

where a_i ∈ [0, p − 1]. Define the following function of k,

    Φ_p(k) = a₀/p + a₁/p² + ... + a_r/p^(r+1)    (3.5)

If p = 2, the corresponding sequence Φ₂(k) is termed the Van der Corput sequence. The Hammersley sequence is generated as follows. Let s denote the dimension of the design space, and select s − 1 distinct prime number bases, denoted p₁, p₂, ..., p_(s−1); the k-th point of the s-dimensional Hammersley sequence is given by the following vector,
A Hammersley Sequence Design for two variables is shown in Table 3.4 and plotted in Figure 3.3. The number of data points is 20 with a base of 2. Table 3.4 A Hammersley Sequence Design for two variables Sample No. Xi X2 1 1/40 0.50000 2 3/40 0.25000 3 5/40 0.75000 4 7/40 0.12500 5 9/40 0.62500 6 11/40 0.37500 7 13/40 0.87500 8 15/40 0.06250 9 17/40 0.56250 10 19/40 0.31250 11 21/40 0.81250 12 23/40 0.18750 13 25/40 0.68750 14 27/40 0.43750 15 29/40 0.93750 16 31/40 0.03125 17 33/40 0.53125 18 35/40 0.28125 19 37/40 0.78125 20 39/40 0.15625 53 CHAPTER 3 DESIGN OF COMPUTER EXPERIMENTS METHODOLOGY 0.2 0.4 0.6 0.8 Figure 3.3 A Hammersley Sequence Design for two variables A Hammersley sequence is easy to generate, and it provides a low discrepancy sequence with good uniformity. As before, the generated points are in the unit hypercube Is = (0,1)*, and they must be transformed to the design space by Equation (3.3) for practical application. 3.2.4.2 Halton sequence design Halton sequence (Halton, 1960) is similar to Hammersley sequence, and the procedure to generate Halton sequence is as follows, a) Choose s distinct prime numbers px,p2,--,Ps for each dimension, with/7, <p2 <...<ps; b) Express the integer fusing a prime base /?, (/' = 1,2,...,5)as Equation (3.4), and calculate the function Opi(k)&s Equation (3.5); c) The k - th s - dimensional Halton Sequence is given by the following vector, 54 CHAPTER 3 DESIGN OF COMPUTER EXPERIMENTS METHODOLOGY (O AW *(k),...,Q ,.(!)) (3.7) where pi,p2,...,psare the bases for dimension 1,2,3,..., s respectively, with Pl<Pl<-<Ps-A Halton Sequence Design for two variables with /?, =2,p2 =3 and n = 20 is shown in Table 3.5 and plotted in Figure 3.4 Table 3.5 A Halton Sequence Design for two variables Sample No. X2 1 0.50000 0.333333 2 0.25000 0.666667 3 0.75000 0.111111 4 0.12500 0.444444 5 0.62500 0.777778 6 0.37500 0.222222 7 0.87500 0.555556 8 0.06250 0.888889 9 0.56250 0.037037 10 0.31250 0.370370 11 0.81250 0.703704 12 0.18750 0.148148 13 0.68750 0.481481 14 0.43750 0.814815 15 0.93750 0.259259 16 0.03125 0.592592 17 0.53125 0.925926 18 0.28125 0.074074 19 0.78125 0.407407 20 0.15625 0.740741 55 CHAPTER 3 DESIGN OF COMPUTER EXPERIMENTS METHODOLOGY Figure 3.4 A Halton Sequence Design for two variables As before, the generated points are in the unit hypercube Is = (0,1)*, and they must be transformed to the design space by Equation (3.3) for practical application. 3.3 Experimental Design Implementation in This Study The classical experimental design methods were developed for physical experiments. Since at most three levels are considered for each variable, they are customized for constructing second order polynomials response surfaces. As the number of coefficients grows exponentially with the number of variables, they are useful only when the dimensionality is relatively small. Latin Hypercube Design may result in clustering of data points in some regions leaving voids elsewhere, especially when the sample size is small. Although Uniform 56 CHAPTER 3 DESIGN OF COMPUTER EXPERIMENTS METHODOLOGY design is very efficient and can produce a design with good uniformity, the design lacks flexibility as it involves looking up a deterministic design table. Low discrepancy sequences have asymptotic uniformity, whereas the degree of uniformity of finite sequences is not clear. Hence, a number of approaches for computer experimental design have been proposed in this thesis and are discussed hereby. 
3.3.1 Grid design When the number of variables is small, it is advantageous to employ the Grid Design. It is a full factorial design with all data points uniformly scattered in the design space. The total s number of data points is given by n = JJl,, where s is the number of variables, and lj is the i=l number of levels for variable Xi. The user can control the number of levels for each variable. For an important variable, more levels may be specified, whereas for a less important variable, fewer levels are needed. Although it is easy to implement, the total combination is too large when the number of levels for the variables is large. A Grid Design for three variables each of which has three levels is shown in Table 3.6. In the table, -1, 0 and 1 indicate respectively, the lower bound, the mean and the upper bound of a variable. All the data points need to be transformed into the original space for practical application. 57 CHAPTER 3 DESIGN OF COMPUTER EXPERIMENTS METHODOLOGY Table 3.6 A Grid Design for three variables Sample x2 x3 1 -1 -1 -1 2 -1 -1 0 3 -1 -1 1 4 -1 0 -1 5 -1 0 6 -1 0 1 7 -1 1 -1 8 -1 1 9 -1 1 1 10 0 -1 -1 11 0 -1 12 0 -1 1 13 0 0 -1 14 0 0 15 0 0 1 16 0 1 -1 17 0 1 18 0 1 1 19 1 -1 -1 20 1 -1 21 ,1 -1 1 22 1 0 -1 23 1 0 24 1 0 1 25 1 1 -1 26 0 1 0 27 0 1 1 3.3.2 Grid-based optimal design An optimal design based on grid is proposed as follows. For each variable, the number of levels 1; (i = l,...,s) is specified, so the unit hypercube is divided into rectangular blocks. In addition, the number of samples in each block is prescribed. An algorithm is devised to maximize the minimum distance between any two points in every block. This approach ensures that the entire design space is covered and there is the pre-specified number of data 58 CHAPTER 3 DESIGN OF COMPUTER EXPERIMENTS METHODOLOGY points in every block, which is the desired property of an experimental design. Such a design for two variables, each of which has five levels, is displayed in Figure 3.5. 1.00 0.80 0.60 0.40 0.20 0.00 0.00 0.20 0.40 0.60 0.80 1.00 Figure 3.5 A Grid-based Optimal Design for two variables As before, the generated points are in the unit hypercube Is = (0,1)*, and they have to be transformed to the design space by Equation (3.3) for practical application. 3.3.3 Optimized Latin hypercube design Since the data points in a Latin Hypercube Design may scatter non-uniformly in the design space, some points may be clustered in a certain region. To overcome this shortcoming, it is desirable to optimize the Latin Hypercube Design so that the neighboring data points are kept at a minimal distance apart. To this end, an Optimized Latin Hypercube Design has been proposed in this study. For a unit hypercube of dimension s that contains n data points, there 59 CHAPTER 3 DESIGN OF COMPUTER EXPERIMENTS METHODOLOGY are n sub-cubes each of which has a volume of 1/n. Thus the side length of each sub-cube is Vl/n . This is the distance criterion adopted in this study for two adjacent points. As it is found that most good designs are symmetric or nearly symmetric, more data points than needed are generated through Latin Hypercube Design. Then the data points that have a distance less than a certain limit are merged. Finally, sorting is carried out to find the specified number of data points that have the largest inter-point distances. The pseudo code for generating Optimized Latin Hypercube Design is outlined as follows. 
a) Generate a Latin Hypercube with n₀ data points, more than the number of points needed n, with a = n₀/n;
b) Do while (d_m < d_min)
      Do i = 1, n₀
         Calculate the inter-point distance d;
         If d < d_m, then d_m = d;
         If d < 0.5 d_min, merge these two points;
      End Do
      Set d_min = 1.05 d_min;
      If d_min > 0.75 (1/n)^(1/s) (i.e., 0.75 times the sub-cube side length), then exit;
   End Do
c) Sort the design points according to the inter-point distance, and eliminate one of any two points that are too close. Repeat this process until the specified number of data points is obtained.

An Optimized Latin Hypercube Design for two variables with 25 samples is given in Table 3.7 and plotted in Figure 3.6.

Table 3.7 An Optimized Latin Hypercube Design for two variables

    Sample No.   X1         X2
    1            0.491425   0.413191
    2            0.034870   0.920369
    3            0.598705   0.870449
    4            0.356883   0.908647
    5            0.440257   0.231409
    6            0.896336   0.511703
    7            0.082090   0.309849
    8            0.206978   0.792904
    9            0.885799   0.304578
    10           0.135979   0.069002
    11           0.882969   0.757601
    12           0.335790   0.625134
    13           0.474810   0.072808
    14           0.708869   0.618938
    15           0.661865   0.393954
    16           0.917636   0.118527
    17           0.278858   0.381604
    18           0.284570   0.173540
    19           0.489135   0.647998
    20           0.754425   0.057946
    21           0.668308   0.211304
    22           0.785505   0.884717
    23           0.115413   0.541039
    24           0.965401   0.936214
    25           0.025384   0.703390

As before, the generated points are in the unit hypercube I^s = (0,1)^s, and they have to be transformed to the design space by Equation (3.3) for practical application.

Figure 3.6 An Optimized Latin Hypercube Design for two variables

3.4 Summary and Discussion

Computer simulations using response representation are used extensively in science, engineering and industry. The design of experiments constitutes an indispensable prerequisite, and the success of the computer simulation depends to a great extent on the appropriateness of the experimental design.

Classical experimental design methods are aimed at physical experiments, to build simple response surfaces in the form of linear or quadratic polynomials. Since physical experiments are costly and time-consuming, the designs are usually parsimonious to reduce the experimental effort. Central Composite Design is the most popular and almost standardized
Because the samples generated by Latin Hypercube Design are not uniformly distributed and may show congregations in some areas and voids elsewhere, other methods are proposed to overcome the problem. Uniform Design is the application of number-theoretic methods in statistics. It provides an experimental design with good uniformity and equidistance. It is a very efficient design in which the number of samples is far fewer than those needed for other methods if the number of levels is large. Nevertheless, it is usually generated by looking up a design table. Low discrepancy sequences are used for numerical integration, optimization and simulation. The sequence is a set of points that are uniformly scattered over a unit hypercube asymptotically. They 63 CHAPTER 3 DESIGN OF COMPUTER EXPERIMENTS METHODOLOGY seem to be promising tools in experiment design as regards the good uniformity of the data points they generate, especially when the sample size is large. A Grid-based Optimal Design and an Optimized Latin Hypercube Design have been proposed in this thesis to improve design uniformity by optimization based on the max-min criterion (to maximize the minimum inter-point distance) or controlling minimum inter-point distance. In the former approach, the user can control the number of levels for each variable, and every block has the same number of data points. The latter method is based on progressively merging the data points whose distance is below a certain limit and then sorting the database for the required number of data points. The generated designs cover the entire design space and exhibit good uniformity. 64 CHAPTER 4 ARTIFICIAL NEURAL NETWORKS THEORY AND IMPLEMENTATION CHAPTER 4 ARTIFICIAL NEURAL NETWORKS THEORY AND IMPLEMENTATION 4.1 Introduction Artificial Neural Networks (ANN) are computational devices composed of many highly interconnected processing units. Each processing unit keeps some information locally and is able to perform some simple computations. The networks as a whole have the capability to respond to input stimuli and produce the corresponding response, and to adapt to the changing environment by learning from experience. There are a number of artificial neural network paradigms. Among them, the most widely used are the Multilayer Backpropagation Neural Networks (Multilayer Perceptrons, MLP) and the Radial Basis Function Networks (RBFN). Generally speaking, the Multilayer Backpropagation Neural Networks encompass the following basic elements: (1) an input layer whose neurons receive inputs from external sources, and send the signals to the neurons of the subsequent layer; (2) one or several hidden layers whose neurons receive inputs from neurons of the preceding layer, perform some calculations, and broadcast their outputs to the neurons of the next layer; (3) an output layer whose neurons process the inputs and produce the final responses; (4) the connecting weights between the neurons of the adjacent layers which embody the strengths of connection; (5) a transfer function (activation function) for processing the inputs to a neuron; (6) a learning rule employed to train the networks; (7) training data, the set of examples from which the networks learn the functional relationship between inputs and outputs. 65 CHAPTER 4 ARTIFICIAL NEURAL NETWORKS THEORY AND IMPLEMENTATION An artificial neural network must be trained prior to practical application. 
The neural network is presented a set of examples, and from these examples it discovers the underlying mapping from the input space to the output space. A learning rule must be employed, and the weights are iteratively adjusted to reconstruct the presented examples. After the neural network has been well trained and tested, it has learned the functional dependencies and is able to respond to a unseen input pattern and predict the corresponding output. A well-trained neural network can perform either causal mapping (from causes to effects) or inverse mapping (form effects to possible causes). Artificial neural networks possess some distinctive properties not found in conventional computational models. Traditional computing models are based on predefined rules (equations, formulas, etc.) that clearly specify the problem. The program follows an explicit step-by-step procedure to compute the desired outputs. This is feasible when the rules that define the problem are known in advance. In most cases, there are only observational data of the problem, while the underlying rules relating the input variables (independent variables, predictor variables) to the output variables (dependent variables, response variables) are either unknown or extremely difficult to discover. Under these circumstances, artificial neural networks exhibit their superiorities, and they have the following favorable attributes, (1) Inherently parallel structure which can tackle complex problem by many massively connected simple processing units; (2) Ability to learn and generalize from experience and examples; (3) Robustness when dealing with noisy or incomplete input data; (4) Adaptivity to new information. 66 CHAPTER 4 ARTIFICIAL NEURAL NETWORKS THEORY AND IMPLEMENTATION Artificial neural networks have been proven to be effective computational tools for a great variety of tasks such as pattern recognition, classification, signal processing, system identification, estimation and prediction, analysis and design, data compression, adaptive control and optimization. They are continuously finding new applications in a spectrum of diverse fields such as science, engineering, medicine, business, and industry (Kumar and Topping, 1999). In the next two sections, the fundamentals of MLP and RBFN, as well as their implementations in this study will be discussed. 4.2 Multilayer Backpropagation Neural Networks 4.2.1 General Multilayer Backpropagation Neural Network is one of the well known and the most widely used artificial neural networks paradigms. The network is composed of an input layer, one or several hidden layers and one output layer of neurons. The neurons of adjacent layers are interconnected by weights that indicate the strength of connectivity. The input layer neurons do not perform any calculations, and they just receive signals from the outside environment. The presence of a series of hidden layers and the adoption of nonlinear transfer function enable the network to learn complex nonlinear functional mapping between the input quantities and output quantities. The network must be trained by presenting a set of training input-output pairs. This is achieved by carrying out optimization in an attempt to minimize the training error through weight updates. During operation of the network, the data flow from input layer forwards to output layer. Each neuron computes the weighted sum of its inputs and subtracts a threshold. The result passes a nonlinear transfer function and the output from the neuron is produced. 
Then the neuron output is sent to the neurons of the 67 CHAPTER 4 ARTIFICIAL NEURAL NETWORKS THEORY AND IMPLEMENTATION subsequent layer. This process is repeated for every following layer of neurons, and the outputs from neurons of the last layer serve as the network predictions. 4.2.2 Artificial neuron model Artificial neuron is the basic building block of the complex neural network system. Its operation determines the function of the entire network. A schematic diagram of artificial neuron is illustrated in Figure 4.1. The neuron receives inputs from the neurons of the preceding layer Xi, X2, ... X„, calculates the weighted sum of the inputs and subtracts a threshold 0j, ie. ^JVijXi -0, . Then the neuron passes this outcome through a nonlinear transfer function f(.) and produces the n 1=1 neuron output as, X x„ Figure 4.1 A schematic diagram of an artificial neuron n (4.1) i=i 68 CHAPTER 4 ARTIFICIAL NEURAL NETWORKS THEORY AND IMPLEMENTATION where Xi, i = 1,2,... ,n are the inputs to neuron j; Wy denotes the weights connecting neuron j and preceding neuron i; 0j denotes the threshold of neuron j; Yj denotes the output from neuron j; f(.) is the nonlinear transfer function, usually logistic function fix) = 1.0/(l+e"x) (Figure 4.2) or hyperbolic tangent function f(x) = tanh(x). 1.5 -1 -logistic function (J^ —" • o 5 "3 -1-0.5 --1 --1.5 -1 3 ! Figure 4.2 Transfer function 4.2.3 Network architecture Generally, a Multilayer Backpropagation Neural Network is made of an input layer of neurons, one or several hidden layer of neurons and an output layer of neurons. The neighboring layers are fully interconnected by weights. A typical layout of a three-layer neural network is illustrated in Figure 4.3. The network shown consists of an input layer with five neurons, a hidden layer with three neurons and an output layer with two neurons. The input layer neurons receives information from the outside environment and transmits them to 69 CHAPTER 4 ARTIFICIAL NEURAL NETWORKS THEORY AND IMPLEMENTATION the neurons of the hidden layer; the hidden layer neurons process the incoming information and extract useful features to reconstruct the mapping from input space to output space; and the output layer neurons produce the network predictions to the outside world. Prior to being applied for prediction, the neural network architecture (the number of hidden layers, and the number of neurons in each layer) must be set up, then a set of training samples are used to train the network so that it learns the functional relationship between the input variables and the output variables. At the start of training, the weights are randomly set Input layer Hidden layer Output layer Figure 4.3 A typical Multilayer Backpropagation Neural Network to some small real numbers. Then the examples are presented to the network and a forward pass operation is performed. Each neuron calculates the weighted sum of its inputs and transmits the result through a transfer function from which the neuron output is obtained. The 70 CHAPTER 4 ARTIFICIAL NEURAL NETWORKS THEORY AND IMPLEMENTATION typical transfer function is the sigmoid function. The data flow forwards layer by layer. The outcomes from the output neurons serve as the network estimates. The discrepancies between the target outputs and the predicted outputs measure the training error. 
In order to achieve satisfactory estimation, the weights of the network must be adapted to minimize the training error, and this is done by a backward pass of the error, which is called error backpropagation. The network error is passed backwards from the output layer to the input layer, and the weights are adjusted based on some learning strategy to reduce the network error to an acceptable level. After the network is well trained, all the weights are frozen, and the network can be applied for prediction.

4.2.4 Training strategies

The training of artificial neural networks is an unconstrained optimization process: to find the optimal neural network parameters, the connecting weights, so that the network errors on the training examples are minimized. Any unconstrained optimization method can be used toward this end. The optimization methods can be categorized into two classes: deterministic methods and stochastic methods. The deterministic class comprises first-order methods such as gradient descent and second-order methods such as Newton's method. The stochastic methods are based on random search, such as simulated annealing and evolutionary algorithms. The backpropagation algorithm will be discussed in the following section, while other training methods will be briefly mentioned.

4.2.4.1 Backpropagation algorithm

The backpropagation learning method is an approximate gradient descent method. The amount of learning is proportional to the difference (delta) between the target output and the computed output, so it is also called the delta rule. Rumelhart et al. (1986) extended it to multilayer feedforward neural networks and named it error backpropagation, or the generalized delta rule. The following discusses the backpropagation algorithm for training multilayer neural networks.

Consider a three-layer neural network as shown in Figure 4.3. Assume that the numbers of neurons in the input layer, hidden layer and output layer are I, J and K respectively. Let X_i^p be the p-th input to the i-th neuron of the input layer, I_j^p be the p-th network input to the j-th neuron of the hidden layer, H_j^p be the p-th output from the j-th neuron of the hidden layer, I_k^p be the p-th input to the k-th neuron of the output layer, and Y_k^p be the p-th output from the k-th neuron of the output layer; hence, we have the following expressions,

I_j^p = \sum_{i=1}^{I} W_{ji} X_i^p    (4.2)

H_j^p = f(I_j^p)    (4.3)

I_k^p = \sum_{j=1}^{J} W_{kj} H_j^p    (4.4)

Y_k^p = f(I_k^p)    (4.5)

where W_{ji} denotes the weight connecting the i-th input neuron to the j-th hidden neuron; W_{kj} denotes the weight connecting the j-th hidden neuron to the k-th output neuron; and f(.) denotes the transfer function.

Suppose the p-th desired output for the k-th neuron of the output layer is T_k^p; then the sum of squared error over all neurons of the output layer is defined as follows,

E = (1/2) \sum_{p} \sum_{k} (T_k^p - Y_k^p)^2    (4.6)

The backpropagation algorithm minimizes the above error function by incrementally updating the weights in proportion to the instantaneous gradient of the error with respect to the corresponding weight. For the weights connecting the hidden layer to the output layer,

\Delta W_{kj} = -\eta \, \partial E / \partial W_{kj}    (4.7)

where \eta represents the learning rate, which indicates the rate of change of the weight. Using the chain rule of derivatives, we can rewrite the above equation as,

\Delta W_{kj} = \eta \sum_{p} (T_k^p - Y_k^p) f'(I_k^p) H_j^p    (4.8)

Let \delta_k^p = (T_k^p - Y_k^p) f'(I_k^p); then Equation (4.8) can be written as

\Delta W_{kj} = \eta \sum_{p} \delta_k^p H_j^p    (4.9)

For hidden neurons, there are no target outputs. In order to apply the same principle to the neurons in the hidden layers, the error must be backtracked to the hidden layer neurons. The weight update rule for the weights W_{ji} is again formulated as,

\Delta W_{ji} = -\eta \, \partial E / \partial W_{ji}    (4.10)

Using the chain rule of derivatives, the above equation can be rewritten as,

\Delta W_{ji} = \eta \sum_{p} \delta_j^p X_i^p    (4.11)

where \delta_j^p = f'(I_j^p) \sum_{k} \delta_k^p W_{kj}. If there is more than one hidden layer, the same procedure can be applied to each hidden layer by backtracking the error.

The selection of the learning rate is problem-dependent and requires experience and experimentation. If the learning rate is too large, we might overshoot the minimum; on the other hand, if it is too small, the convergence will be slow. Initially all weights should be set to small random values to prevent neuron saturation in the early training stage, which results in slow learning. In the classical backpropagation method, the training is fast at the beginning, but in a flat region of the error surface the progress is very slow. In order to circumvent this drawback, a momentum term can be added,

\Delta W(g) = -\eta \nabla E(g) + \alpha \Delta W(g-1)    (4.12)

in which g indicates the iteration number. To be more effective, it is also reasonable to adapt the learning rate and momentum rate during the training process (Hagan, 1996),

\eta(g) = \zeta \, \eta(g-1)   if E(g) > \varepsilon \, E(g-1)
\eta(g) = \sigma \, \eta(g-1)   if E(g) < \varepsilon \, E(g-1)    (4.13)

where \zeta = 0.7, \sigma = 1.05 and \varepsilon = 1.04.
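The weight-update rules of Equations (4.7) through (4.13) can be sketched in Python as follows. This is a simplified illustration rather than the implementation used in this study: the thresholds of Equation (4.1) are omitted, the function name and default parameter values are assumptions, and the logistic transfer function is used so that f'(x) = f(x)(1 - f(x)).

import numpy as np

def logistic(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_backprop(X, T, n_hidden, epochs=2000, eta=0.1, alpha=0.9,
                   dec=0.7, inc=1.05, ratio=1.04, seed=0):
    # Batch backpropagation with momentum (Equation 4.12) and the adaptive
    # learning-rate rule (Equation 4.13).  X: (P, I) inputs, T: (P, K) targets
    # scaled into (0, 1); thresholds are omitted for brevity.
    rng = np.random.default_rng(seed)
    P, I = X.shape
    K = T.shape[1]
    W1 = rng.normal(scale=0.1, size=(n_hidden, I))   # input -> hidden weights
    W2 = rng.normal(scale=0.1, size=(K, n_hidden))   # hidden -> output weights
    dW1, dW2 = np.zeros_like(W1), np.zeros_like(W2)
    prev_err = np.inf
    for _ in range(epochs):
        H = logistic(X @ W1.T)                       # forward pass, Eqs 4.2-4.5
        Y = logistic(H @ W2.T)
        err = 0.5 * np.sum((T - Y) ** 2)             # Equation (4.6)
        d_out = (T - Y) * Y * (1.0 - Y)              # delta_k, Equation (4.9)
        d_hid = (d_out @ W2) * H * (1.0 - H)         # delta_j, Equation (4.11)
        dW2 = eta * d_out.T @ H + alpha * dW2        # momentum, Equation (4.12)
        dW1 = eta * d_hid.T @ X + alpha * dW1
        W2 += dW2
        W1 += dW1
        if err > ratio * prev_err:                   # Equation (4.13)
            eta *= dec                               # error grew: reduce rate
        elif err < ratio * prev_err:
            eta *= inc                               # error improved: increase rate
        prev_err = err
    return W1, W2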
4.2.4.2 Other training algorithms

A number of fast training methods can be used to speed up the learning process (Shepherd, 1997). Among them, the conjugate gradient method and second-order methods such as the quasi-Newton method, the Levenberg-Marquardt method and model-trust region strategies are worth mentioning. Compared to the first-order approaches, these methods need more calculations in each iteration. The fast training methods are not general, as their efficiency depends highly on the problem under consideration.

4.2.5 Performance evaluation

After the network has been trained, the network error is minimized to a certain low level. However, a low training error does not imply a low generalization error. To make sure that the network is well trained and has the capability to generalize, a subset of examples has to be presented to the network, with the network outputs compared to the target outputs. The testing errors indicate the extent of the generalization error when the network is put to use. If the testing errors are considered acceptable by the user, then the training stage is complete. The network topology is kept fixed, and the network weights are frozen. The network is ready for application.

4.2.6 Neural networks implementation in this study

4.2.6.1 Data preparation

Since a neural network generalizes by learning from the examples presented to it, its ability to generalize is strongly affected by the training data. Hence, generation of a sufficient number of training examples is extremely important. The training examples must cover the range from the lower bounds to the upper bounds of all input variables and be distributed uniformly over the whole design space. The data should be comprehensive, representing particular features of the entire variable population.
If the input variables have a large dimensionality, it may be advantageous to apply statistical methods such as Principal Component Analysis or Factor Analysis to select a smaller set of important input variables. That will reduce the number of instances required for network training and, accordingly, the network complexity.

During training of the network using the backpropagation algorithm, the network weight change is proportional to the derivative of the mean square error with respect to the weight under consideration. Since the derivative tends to have a smaller value as the absolute value of the weight goes up, it is customary to scale the input variables into a small range, for instance [-1.0, 1.0], in order to speed up training. A simple linear normalization function is used,

U = -1.0 + 2.0 (X - X_l)/(X_u - X_l)    (4.14)

where U denotes the normalized value of input variable X; X_l denotes the lower bound of input variable X; and X_u denotes the upper bound of input variable X.

The normalization of the output variables depends on the range of the transfer function: the sigmoid transfer function has lower and upper output limits, which are 0.0 and 1.0 for the logistic function, or -1.0 and 1.0 for the hyperbolic tangent function. Usually a linear transformation works well, although a nonlinear transformation may be beneficial if the data are clustered. Thus, in this work, the output variables are first transformed as follows,

S = \ln(Y)    (4.15)

where Y denotes the target value of the output variable. Then S is normalized to lie between 0.1 and 0.9 (for the logistic function), or between -0.9 and 0.9 (for the hyperbolic tangent function),

V = 0.1 + 0.8 (S - \ln Y_l)/(\ln Y_u - \ln Y_l)    (4.16a)

V = -0.9 + 1.8 (S - \ln Y_l)/(\ln Y_u - \ln Y_l)    (4.16b)

where V is the normalized target value of output variable Y; Y_l denotes the lower bound of output variable Y; and Y_u denotes the upper bound of output variable Y.
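A short sketch of the scaling operations of Equations (4.14) to (4.16), together with the inverse transformation needed to recover a physical response prediction from a network output, is given below; the function names are illustrative.

import numpy as np

def normalize_input(x, x_lo, x_hi):
    # Scale an input variable to [-1, 1], Equation (4.14)
    return -1.0 + 2.0 * (x - x_lo) / (x_hi - x_lo)

def normalize_output(y, y_lo, y_hi, logistic=True):
    # Log-transform the target (Equation 4.15) and map it to [0.1, 0.9] for the
    # logistic transfer function or [-0.9, 0.9] for tanh (Equations 4.16a/b)
    frac = (np.log(y) - np.log(y_lo)) / (np.log(y_hi) - np.log(y_lo))
    return 0.1 + 0.8 * frac if logistic else -0.9 + 1.8 * frac

def denormalize_output(v, y_lo, y_hi, logistic=True):
    # Invert the output normalization to recover a response prediction
    frac = (v - 0.1) / 0.8 if logistic else (v + 0.9) / 1.8
    return np.exp(np.log(y_lo) + frac * (np.log(y_hi) - np.log(y_lo)))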
4.2.6.2 Topology of the network

The architecture of the neural network must be determined before training. The number of input variables and the number of output variables are determined by the problem specifications. It is recommended to reduce the number of input variables based on experience and engineering judgment, as too many inputs will make the model complex as well as slow down the learning process. Though there are theorems that guarantee that multilayer feedforward neural networks with (at least) two hidden layers are capable of approximating any nonlinear function within a desired accuracy (Hornik, 1991), no general guidelines are available to select the appropriate topology of the network. Generally, a trial-and-error approach is followed to find the best network structure. Several networks with different architectures are trained and tested, and the one with the least test error is used for application.

The hidden layer plays a crucial role in the neural network performance. It enables the network to model complicated nonlinear relationships and to capture the features underlying the inputs and the outputs. An optimal number of neurons in the hidden layer is required. A network with too few neurons may not be able to capture the complex underlying relationship between inputs and outputs, and thus cannot generalize well to unseen data. On the other hand, too many neurons tend to result in over-fitting of the training data, i.e., the model is too complicated to be reliably inferred from a limited amount of training data, and hence the network predictions will be poor in spite of a very low training error. There is no general rule for choosing the optimal number of neurons in the hidden layer. It is problem dependent and, to some extent, hinges on the amount and the quality of the training data. In a word, it must be large enough to model the complicated nonlinear mapping while small enough to ensure good generalization. In addition, the number of neurons should be small enough that the number of weights is fewer than the number of training instances.

Some heuristic approaches can be applied to improve on the initial architecture. One hidden layer is usually adopted. There are two algorithms available: the cascade algorithm and the pruning algorithm. In the cascade algorithm, we start with a simple architecture with only a few hidden neurons and evaluate the performance by parallel training and testing. If the training error is high, more hidden neurons are added. The process is repeated until at some step the testing error begins to increase. In the pruning algorithm, we start from a complex architecture with many hidden neurons and evaluate the network performance by parallel training and testing. If over-fitting occurs, the number of neurons in the hidden layer is reduced until the error on the test data is reduced to an acceptable level.

In this work, one hidden layer is adopted and the number of neurons in the hidden layer is determined by cross validation. There are two types of cross validation, leave-one-out cross validation and multi-fold cross validation (Cherkassky, 1998). The model selection procedure is outlined in the following pseudo code.

(1) Set the initial number of neurons to half the number of input variables, H_l = I/2;
(2) Set the maximum number of neurons to H_u = (N_train - 1)/(I + 2);
(3) Do H = H_l, H_u
      If H = H_l, initialize the weights to some small random values;
      If H > H_l, initialize the weights connecting the newly added neuron to the input neurons and output neurons;
      Divide the training dataset into five subsets;
      Do n = 1, 5
        Train the network using four of the five subsets and use the remaining one for testing;
        Calculate the training sum of squared errors SSE_train^n and the testing sum of squared errors SSE_test^n;
        Calculate the error criterion EC_n = \ln( SSE_train^n / (N_train - N_w) ) + \ln( SSE_test^n / N_test )
      End Do
      Calculate the average EC^H = (1/5) \sum_{n=1}^{5} EC_n
    End Do
(4) Select the number of neurons as the H which has the minimal EC^H

where I denotes the number of input variables; SSE_train^n denotes the training sum of squared errors; SSE_test^n denotes the testing sum of squared errors; N_train denotes the number of training examples; N_test denotes the number of testing examples; and N_w denotes the number of network weights. The network with the minimum EC value is selected as the best network structure.
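A sketch of this five-fold cross-validation loop is given below. The routine train_and_sse is an assumed callable (it could, for instance, wrap the backpropagation sketch shown earlier) that trains a network with H hidden neurons and returns the training and testing sums of squared errors; the weight-count formula is likewise an assumption for a single-hidden-layer network with thresholds.

import numpy as np

def select_hidden_neurons(X, T, train_and_sse):
    # Five-fold cross-validation selection of the hidden-layer size, following
    # the pseudo code above.  train_and_sse(H, Xtr, Ttr, Xte, Tte) is an
    # assumed callable returning (SSE_train, SSE_test).
    P, I = X.shape
    K = T.shape[1]
    H_lo = max(1, I // 2)                        # step (1)
    H_hi = max(H_lo, (P - 1) // (I + 2))         # step (2)
    folds = np.array_split(np.random.default_rng(0).permutation(P), 5)
    best_H, best_EC = H_lo, np.inf
    for H in range(H_lo, H_hi + 1):              # step (3)
        n_w = H * (I + 1) + K * (H + 1)          # assumed weight/threshold count
        ec = []
        for n in range(5):
            te = folds[n]
            tr = np.concatenate([folds[m] for m in range(5) if m != n])
            sse_tr, sse_te = train_and_sse(H, X[tr], T[tr], X[te], T[te])
            # error criterion EC_n; step (2) keeps n_w below the training size
            ec.append(np.log(sse_tr / (len(tr) - n_w)) + np.log(sse_te / len(te)))
        EC_H = float(np.mean(ec))
        if EC_H < best_EC:                       # step (4): minimal average EC
            best_EC, best_H = EC_H, H
    return best_H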
4.2.6.3 Training

There are two training modes, batch mode and pattern mode. In batch mode, the entire set of training examples is presented to the network, and the network output for every input vector is computed. The mean square error of the network is then calculated and the network weights are adjusted backwards using error backpropagation. In pattern mode, every example is presented to the network and the corresponding network error is calculated; the network weights are then adjusted backwards based on the error from that example alone. Usually batch mode is preferred for the following reasons: (1) Pattern mode training needs more weight updates and thus is slower, since the weights must be changed for every example; (2) In pattern mode, the ordering of the examples has an impact on the training. Examples presented at the end of training have more influence than those presented at the beginning, and the network tends to "forget the past". In batch mode, the order of presentation does not make a difference; (3) Batch mode provides a more accurate measurement of the average weight changes.

The network has to be trained for many epochs (the presentation of the entire training dataset to the network is termed an epoch) before the network error decreases gradually to a stable value. The training time depends on such factors as the network topology, the number of hidden layers and the number of neurons in each layer, the training data, as well as the nature of the input-output relationship. The training can be stopped when the iteration limit is reached, or when the training error has reached a predefined error limit. In this study, batch mode training was adopted and the weights obtained from previous training were kept. The following procedure was employed for the final training, with the optimal number of neurons determined by the above model selection.

(1) Divide the training dataset into two subsets, namely, a training dataset (80% of the total data) and a testing dataset (20% of the total data);
(2) Train the network using the training dataset and evaluate its performance with the testing dataset;
(3) If the error for a certain testing case is larger than the predefined threshold, put it into the training dataset; meanwhile, the case with the smallest error in the training dataset is put into the testing dataset;
(4) Repeat the training process until both the training root mean square relative error (RMSRE) and the testing RMSRE are reduced to the acceptable limit, or the number of iterations is exhausted.

4.3 Radial Basis Function Networks

4.3.1 General

The Radial Basis Function Network (RBFN) is composed of a linear combination of radial basis functions, whose output is symmetric about its center and decays monotonically with the distance from the center. It is another neural network paradigm for function approximation and classification. A typical radial basis function is the Gaussian function,

\phi(x) = \exp( -\|x - c\|^2 / r^2 )    (4.17)

where x denotes the input variable vector; c denotes the center of the function (a vector); r denotes the radius of the function (a scalar); and \|\cdot\| is a vector norm. A radial basis function network comprises a linear combination of a set of radial basis functions, and it can be expressed in the following form,

y(x) = \sum_{j=0}^{M} w_j \phi_j(x)    (4.18)

where \phi_0(x) = 1.0; M is the number of radial basis functions; w_j denotes the j-th weight; and \phi_j(x) denotes the j-th radial basis function. If a Gaussian function is selected as the radial basis function, then Equation (4.18) becomes,

y(x) = w_0 + \sum_{j=1}^{M} w_j \exp( -\|x - c_j\|^2 / r_j^2 )    (4.19)

Figure 4.4 illustrates a radial basis function network with three layers, i.e., an input layer, a hidden layer and an output layer.
The neurons in the input layer receive information from the outside world and transmit it to the hidden layer neurons, which perform a nonlinear transformation of the input vector by means of the radial basis functions. The outcomes from the hidden neurons are linearly combined with the coefficients (weights) and exported as the network output.

Figure 4.4 A schematic Radial Basis Function Network

4.3.2 Radial basis function network training

The design of a Radial Basis Function Network involves selection of a proper radial basis function, determination of the number of hidden neurons, and network training. Usually the Gaussian function is chosen as the radial basis function, but others such as the multiquadric function, the Cauchy function, and thin-plate splines are used in some applications. Once the type of radial basis function and the number of neurons are established, training is performed to determine the values of the network parameters. Recall that in Equation (4.19) there are three parameters for each hidden neuron, namely the center vector c_j, the radius r_j and the weight w_j (j = 0, 1, 2, ..., M), where M is the number of hidden neurons. There are two ways of training a RBFN, viz. supervised training or two-stage training.

Supervised training of a RBFN is similar to the training of a Multilayer Perceptron. The values of the parameters are adjusted to minimize the sum-of-squares error,

E = (1/2) \sum_{n=1}^{P} \sum_{k=1}^{K} ( t_k^n - y_k(x^n) )^2    (4.20)

where P is the number of samples; K is the number of outputs; t_k^n denotes the target value of the k-th output corresponding to the input vector x^n; and y_k(x^n) denotes the calculated value of the k-th output corresponding to the input vector x^n. If gradient descent is employed as the training algorithm, the following update rules can be used for adjusting the values of the model parameters (Ghosh et al, 1992),

\Delta w_{kj} = \eta_1 \sum_{n} ( t_k^n - y_k(x^n) ) \phi_j(x^n)    (4.21)

\Delta c_j = \eta_2 \sum_{n} \phi_j(x^n) \frac{x^n - c_j}{r_j^2} \sum_{k} ( t_k^n - y_k(x^n) ) w_{kj}    (4.22)

\Delta r_j = \eta_3 \sum_{n} \phi_j(x^n) \frac{\|x^n - c_j\|^2}{r_j^3} \sum_{k} ( t_k^n - y_k(x^n) ) w_{kj}    (4.23)

where \eta_1, \eta_2, \eta_3 are the learning rates.

Two-stage training involves unsupervised training of the radial basis function centers and radii, followed by training of the weights. The training of the centers is accomplished by K-means learning, a type of competitive learning in which the Euclidean distances between a certain input vector and all the centers are calculated, and the center with the minimal distance gets the privilege to update. The pseudo code for this algorithm is listed as follows,

(1) Initialize the centers by randomly assigning input vectors to them;
(2) Do n = 1, number of samples
      Do j = 1, M
        Calculate d_j = \|x^n - c_j\|
      End Do
      Find the neuron j which has the minimal d_j;
      Update the center c_j^{new} = (n_j c_j^{old} + x^n)/(n_j + 1), and set n_j = n_j + 1;
    End Do

The radii of the neurons play a very important role, as they determine the quality and smoothness of the mapping function. All neurons may share the same value, or each neuron may have its own value. When the same value is used for all neurons, it can be set as a multiple of the average distance among the centers of all neurons. When each neuron has its own radius, the value is usually taken as 1.5 to 2 times the average distance between the neuron center and the centers of some nearest neighbors.
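The two-stage scheme can be sketched as follows: centers from the sequential K-means rule above, radii from a nearest-neighbour heuristic, and output weights by linear least squares, anticipating Equation (4.25) below. The function names and the specific radius multiplier are illustrative assumptions rather than the settings used in this study.

import numpy as np

def kmeans_centers(X, M, seed=0):
    # Sequential K-means center learning following the pseudo code above.
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=M, replace=False)].copy()  # step (1)
    counts = np.zeros(M)
    for x in X:                                                    # step (2)
        d = np.linalg.norm(centers - x, axis=1)      # distances to all centers
        j = int(np.argmin(d))                        # winning neuron
        centers[j] = (counts[j] * centers[j] + x) / (counts[j] + 1.0)
        counts[j] += 1.0
    return centers

def train_rbfn(X, T, M):
    # Stage 1: centers by K-means; radii from a nearest-neighbour heuristic.
    centers = kmeans_centers(X, M)
    d = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)
    radii = 1.5 * d.min(axis=1)                      # illustrative radius choice
    # Stage 2: Gaussian basis outputs plus a bias column, then linear weights.
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    Phi = np.hstack([np.ones((len(X), 1)), np.exp(-d2 / radii ** 2)])
    W, *_ = np.linalg.lstsq(Phi, T, rcond=None)      # least-squares solution of Phi W = T
    return centers, radii, W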
After the basis parameters (centers and radii) have been determined, the weight values can be obtained by solving a system of linear equations. Based on the available training data, Equation (4.18) yields,

\Phi W = T    (4.24)

where \Phi denotes the design matrix, whose element corresponding to the n-th sample and the j-th neuron is given by \phi_j(x^n); W denotes the weight vector; and T denotes the target output matrix, whose element corresponding to the n-th sample and the k-th output variable is given by t_k^n. The above equation can be solved, for example by singular value decomposition, as

W = (\Phi^T \Phi)^{-1} \Phi^T T    (4.25)

4.3.3 Radial basis function networks implementation in this study

In this study, a RBFN was implemented in order to compare its performance with that of a multilayer perceptron (MLP). K-means learning was employed to train the neuron centers, and the radius of a certain neuron was set to 1/\sqrt{2} of the largest distance between it and some of its nearest neighbors. Gradient descent was adopted for training of the network weights. RBFNs have found wide applications in pattern recognition, signal processing, nonlinear system identification, medical diagnosis, etc., owing to their universal and smooth functional approximation capabilities. Compared to a MLP, a RBFN needs more memory for storing the centers, radii and weights, so it is more susceptible to the curse of dimensionality.

4.4 Summary and Discussion

Artificial intelligence and machine learning have witnessed great advancements and found applications in a wide range of fields. Multilayer feedforward neural networks and radial basis function networks have been discussed. The common feature of these computational learning models is that they are able to learn an underlying complex input-output functional relationship given a collection of training data, and they can adapt to a changing environment. Multilayer perceptrons and radial basis function networks have been implemented in this work for seismic reliability analysis and design optimization. They will be employed as surrogate models in lieu of the more expensive and time-demanding computer code, since the cost of a precise solution is much higher compared to an approximate one whose impreciseness is within the range of acceptability. In this way, the computational efficiency is greatly improved, which will be verified by subsequent case studies of seismic reliability analyses and applications in performance-based seismic design.

CHAPTER 5 PERFORMANCE-BASED SEISMIC DESIGN METHODOLOGY

5.1 Introduction

Earthquakes constitute one of the major natural hazards to society. Past seismic data show that strong earthquakes have resulted in great human casualties and large economic losses around the world. The failures of infrastructure, such as buildings, bridges, highways, dams, etc., during severe ground motions were responsible for these fatalities and losses. Generally, major casualties were concentrated in densely populated regions with poorly built facilities vulnerable to earthquakes, while major economic losses were located in areas with modern industrial and commercial developments. Thanks to advancements of earthquake engineering in the past decades, human casualties have been reduced significantly during severe earthquakes. This demonstrates, in part, the success of modern seismic resistant design philosophy and engineering practice.
On the other hand, earthquake resistant design still faces many challenges and difficulties. The 1989 Loma Prieta earthquake, the 1994 Northridge earthquake and the 1995 Kobe earthquake caused enormous economic losses and exposed further deficiencies in seismic resistant design and construction. The huge economic losses can be ascribed to the following reasons:

(1) Due to industrialization and urbanization, cities are expanding constantly as more people work and live in large cities. Many of these densely populated cities face high seismic hazards, as they are usually situated close to the boundaries of tectonic plates or in regions with soft subsoil.

(2) Most of the existing buildings and other constructions in seismic zones were designed and constructed in conformance with past codes of practice, and are deemed incapable of withstanding the expected ground motions by modern seismic codes. All those substandard buildings and facilities need to be retrofitted to the current standard. Unfortunately, owing to the lack of adequate funding, seismic rehabilitation of these buildings and structures may not be accomplished in time, leaving them as the target of the next earthquakes.

(3) The basic philosophy of the modern seismic design code aims to accomplish the following goals: a) to resist a minor earthquake without damage; b) to resist a moderate earthquake without structural damage, though nonstructural components may suffer some damage; c) to resist a strong earthquake without collapse to ensure life safety, though the structure and non-structural components may experience severe damage. However, the emphasis of the code is to provide life safety for the public by preventing collapse of structures under severe earthquakes, whereas economic losses due to property damage and business interruption are secondary. Though three objectives are stated in the code, only the life safety goal has been explicitly executed, and no specific procedures have been provided in the code for explicit evaluation of other performances, such as the vulnerability of non-structural elements, contents, equipment, etc., which can cause more economic losses than the structural damage, even for a moderate earthquake.

(4) The complexity of the structural behavior during a strong ground motion is not fully accounted for by the code approaches. To implement the seismic design, the code specifies a ground motion criterion, usually in the form of a seismic zone factor and a design spectrum. For simple and regular structures, an equivalent static force method is provided in which only the first vibration mode is allowed for. The elastic seismic forces are established based on the seismic zone, the structural importance, and the site condition. The seismic design forces are determined empirically by taking advantage of the inelastic structural behavior and ductility to reduce the elastic seismic forces to a design level. For large and complex structures, the modal decomposition response spectrum method or nonlinear dynamic time history analysis is generally recommended, but only some guidelines are given in the code for reference (NBCC, 1995). Consequently, most buildings have been designed based on an oversimplified analysis approach, without elaborate modeling of the structural behavior and the effects of non-structural components.
During a strong earthquake ground motion, the response of the structure is strongly nonlinear owing to stiffness degradation as well as strength deterioration. Hence, a nonlinear dynamic analysis should be carried out to realistically capture the actual behavior of the structure. Though there are some commercial software packages available for this purpose, the analysis process is highly dependent on the analyst's ability to accurately model the structural system. In addition, nonlinear dynamic time history analysis requires representative ground accelerograms for the site. Engineers routinely use historically recorded accelerograms without paying much attention to the seismic background. For some regions, there are historic recordings available for ready use; nevertheless, they represent past earthquakes and may never be recorded again in the future. For a site without historical recordings, the adoption of accelerograms recorded in other regions can lead to unfathomable errors in structural response predictions.

(5) Even though seismic resistant design is carried out based on state-of-the-art methods, the design goal can only be achieved through proper detailing and good quality assurance during construction. During the 1994 Northridge earthquake and the 1995 Kobe earthquake, some steel moment-resisting frames that are generally considered ductile systems underwent brittle damage, especially in the field-welded beam-column joints. Those damages were caused by poor detailing of the connections, where the moments at the web of the beams could not be completely transmitted to the column, leading to stress concentration in the beam flanges. Some other failures were attributed to desultory inspection and poor workmanship during construction (Mazzolani and Gioncu, 2000).

(6) Structural maintenance throughout the service life plays an important role for structures that may be subjected to future seismic ground shaking. Some buildings are renovated during their service life for other uses with their seismic resistance sacrificed rather than strengthened, and could become the easy target of an upcoming earthquake.

(7) Seismic ground motion is one of the most important and least understood factors, due to the randomness and uncertainty involved. Obviously, the present code design spectrum cannot fully describe the expected seismic loading for a structure. In general, the code provisions give a macro-zonation at a country level, with the single design spectrum roughly corrected to account for the local site soil conditions. This approach is deficient in that the actual site seismic conditions, such as magnitude, distance to potential seismic sources, attenuation law, site soil stratification, etc., are not clearly accounted for. Probabilistic seismic hazard assessment of the site is crucial for a successful seismic resistant analysis and design. Moreover, the recent earthquakes drew engineers' attention to an important aspect of ground motions that was ignored in the past, i.e., the differences in ground motions from far-source and near-source earthquakes. The tremendous damages indicated that the earthquake action model employed in the present code, based on ground motions recorded in far-source regions, cannot be used to depict the earthquake effects in near-source regions.
Based on the lessons learned from the last major earthquakes, the structural engineering community has begun to reexamine the seismic design philosophy and engineering practice, to find out the deficiencies in the current code of practice, and to propose procedures to remedy the drawbacks inherent in the present code. Performance-based seismic design has been put forward as the cornerstone of the next-generation code. SEAOC's Vision 2000: Performance-based Engineering of Buildings and BSSC's NEHRP FEMA 273: Guidelines for the Seismic Rehabilitation of Buildings have laid the foundation of performance-based seismic engineering by introducing multiple performance goals, design criteria associated with each performance level, and refined analytical procedures for performance evaluation. It is generally agreed that: (1) The traditional way of design focuses mainly on life safety, which is basically a single-level design. The protection of the integrity of building contents and the prevention of business interruption are equally important for some critical buildings. Hence, multiple design performance levels should be employed based on the function and contents of the building after the strike of a severe earthquake, in order to reduce damage and maintain functional continuity. (2) Multiple levels of earthquake ground motions need to be adopted, accounting for the diverse hazards imposed on the structure throughout its service life. (3) Refined and sophisticated numerical procedures need to be developed to realistically evaluate the intricate responses of the structure. (4) Seismic design should be carried out in the framework of reliability-based design in order to reliably satisfy the multiple performance goals by taking into account the various uncertainties and randomness involved in the seismic design process. (5) Extensive experiments have to be undertaken to validate effective detailing, and rigorous supervision is to be dictated to guarantee the construction quality. (6) From an economic perspective, earthquake resistant design should be based on whole-life cost-benefit analysis, allowing for all major factors involved in the design, construction, and maintenance of the building as well as the probable maximum losses due to failures of the building and its contents during an earthquake.

5.2 Performance-based Seismic Design

5.2.1 Multiple performance objectives in SEAOC Vision 2000

Performance-based seismic design implies that multiple target performance objectives are expected to be satisfied when the structure is subjected to earthquake ground motion of a certain intensity associated with each performance level. In SEAOC's Vision 2000: Performance-based Engineering of Buildings and BSSC's NEHRP FEMA 273: Guidelines for the Seismic Rehabilitation of Buildings, four performance levels have been proposed, as shown in Table 5.1. The two systems of performance levels are quite similar to each other, albeit different terminology is utilized. In NEHRP, acceptance criteria for the Life Safety and Collapse Prevention performance levels are defined at the component level. The performance levels recommended by SEAOC Vision 2000 for different types of buildings under distinct ground motion intensities are shown in Figure 5.1. Buildings are categorized according to their occupancy and use, namely, basic facilities, essential/hazardous facilities, and safety critical facilities.
It is expected that, after a severe ground shaking, buildings for emergency response and essential public service should have a low probability of being damaged beyond the limit which affects their normal function, and those facilities which house hazardous materials such as poisonous chemicals or radioactive materials should have a lower damage level to prevent any disastrous releases. For a moderate earthquake, all ordinary buildings should undergo limited, user-acceptable damage to reduce economic losses and business interruptions. Four distinct ground shaking intensities are specified in SEAOC Vision 2000, namely, a frequent earthquake with a return period of 43 years, an occasional earthquake with a return period of 72 years, a rare earthquake with a return period of 475 years, and a very rare earthquake with a return period of 970 years. The earthquake scenario should reflect the probable seismic hazards of the site under consideration.

Though both SEAOC and NEHRP have made the first step toward the development of performance-based design procedures, there is still a lot to do for its full growth. Ground motion characteristics such as near-field velocity pulse effects and duration are not accounted for in the provisions. An explicit serviceability evaluation procedure must be developed to estimate structural damage. Also, an approach needs to be elaborated to evaluate the possible damage to non-structural members and building contents, for instance costly equipment, motion-sensitive instruments, hazardous substance containers, etc. More refined and sophisticated analytical models need to be established in order to realistically simulate the structural behavior and its surroundings throughout strong ground shaking. Apart from performance evaluation at the component level, performance acceptance criteria are also required at the system level, considering the systematic behavior of the structure as a whole and the integrity of the structure during strong ground motion.

Figure 5.1 SEAOC Vision 2000 performance levels (SEAOC, 1995)

5.2.2 Performance-based seismic design criteria in this study

At present, most seismic codes merely consider one design earthquake: a rare, severe earthquake with a return period of 475 years. Though three performance levels are specified, only one performance level, life safety, is explicitly executed. Even this performance level is poorly implemented. A standard design spectrum is defined for the whole country, and the elastic base shear is calculated by adjusting the spectrum on the basis of local peak ground acceleration, structural importance and site soil conditions. The design base shear is calculated as a fraction, 1/R, of the expected elastic base shear. The R factor is determined based on engineering judgment and observations of structural performance during past earthquakes. In this way, the code tries to achieve the life safety level by qualitatively limiting damage to structural components.
Table 5.1 Performance level definitions in NEHRP and SEAOC (Hamburger, 1996)

Operational (NEHRP) / Fully Operational (SEAOC): No significant damage has occurred to structural and non-structural components. The building is suitable for normal intended occupancy and use.

Immediate Occupancy (NEHRP) / Functional (SEAOC): Only very minor damage has occurred. The building retains its original stiffness and strength. Nonstructural components operate, and the building is available for normal use. Repairs, if required, may be instituted at the convenience of the building users. The risk of life-threatening injury during the earthquake is negligible.

Life Safety (NEHRP) / Life Safety (SEAOC): Only minor structural damage has occurred. The structure retains nearly all of its original stiffness and strength. Nonstructural components are secured and, if utilities are available, most would function. Life-safety systems are operable. Repairs may be instituted at the convenience of the building users. The risk of life-threatening injury during the earthquake is very low.

Collapse Prevention (NEHRP) / Near Collapse (SEAOC): Significant structural and nonstructural damage has occurred. The building has lost a significant amount of its original stiffness, but retains some lateral strength and margin against collapse. Nonstructural components are secure, but may not operate. The building may not be safe to occupy until repaired. The risk of life-threatening injury during the earthquake is low.

Hence, no attempts have been made to rigorously assess the safety margin against failure by this approach. After the last earthquakes, it is generally recognized that damage control should be included as an integral part of seismic design. Since it is economically unjustifiable and technically infeasible to design all structures to withstand a severe earthquake without any damage, it is agreed that an ordinary structure should be able to resist a moderate earthquake with user-acceptable damage to non-structural elements and building contents, and with reparable structural damage. In the event of a severe earthquake, the structure should be able to dissipate the input seismic energy through inelastic deformations; though the structural damage may be irreparable, its integrity must be maintained to ensure the life safety of the occupants through collapse prevention. If the owner would like to pay extra expenses for enhanced performance beyond the minimum code requirements, for continued operation of the building even after a strong earthquake, the engineer should provide an optimal design with higher reliability in performance but lower overall cost. In this study, different performance objectives will be defined corresponding to different levels of ground motion, and a framework for performance-based design is proposed.

5.2.2.1 Multiple seismic hazard levels

Earthquake ground motion is the most important factor affecting seismic design, since it involves many uncertainties and much randomness. A successful seismic design hinges largely on the appropriate characterization of the earthquake motions. After the last earthquakes, it is commonly agreed that multiple seismic hazards should be considered for performance-based design. How many levels of earthquakes need to be allowed for, and the characteristics of the earthquake motions, depend on the location of the site relative to all potential sources of earthquakes, the seismic features of each source, and the travel path geology from each source to the site, as well as the site soil stratification and properties.
The characterization of the possible seismic hazards at the site can be rationally achieved in the framework of probabilistic seismic risk assessment. One phenomenon worthy of attention is the recently observed difference between near-source ground motions and far-source ground motions. The design methods in most seismic codes nowadays are based on recordings from far-field earthquakes, which cannot be used to properly delineate near-source motions; this is one of the reasons for the tremendous economic losses in recent near-field earthquakes. The major differences between them are as follows (Mazzolani and Gioncu, 2000): (1) Near-source ground motions are of short duration and pulse-like in acceleration, velocity and displacement, while far-source ground motions are cyclic with longer duration; (2) Near-source ground motions have a significant vertical component, while horizontal components dominate far-source ground motions; (3) Near-source ground motions have very high velocities; (4) The effect of directionality of wave propagation is substantial for near-source ground motions, while local soil stratification has a great influence on far-source ground motions.

A seismic region around the site can be subjected to ground shaking of different intensities: low, moderate or severe. A minor earthquake occurs frequently, but it will cause no structural damage or only slight non-structural damage. A moderate earthquake happens occasionally, and it may give rise to moderate or even heavy non-structural damage as well as reparable structural damage. A severe earthquake rarely takes place; nevertheless, its occurrence may result in heavy structural damage or even collapse. The seismic hazard levels are disputable, as they depend on the site seismicity as well as other socio-economic factors. In contrast to the SEAOC definitions, the earthquake hazard levels mentioned in Mazzolani and Gioncu (2000) are: (1) frequent, with a return period of 8-10 years; (2) occasional, with a return period of 20-30 years; (3) rare, with a return period of 450 years; (4) very rare, with a return period of over 970 years. It seems that the frequent earthquake and the occasional earthquake defined in SEAOC are not so distinct. When probabilistic seismic risk assessment of the site is not carried out, four levels of earthquakes are suggested in this study: (1) frequent minor earthquake, with probability of exceedance of 90% in 50 years (return period 22 years); (2) occasional moderate earthquake, with probability of exceedance of 50% in 50 years (return period 73 years); (3) rare major earthquake, with probability of exceedance of 10% in 50 years (return period 475 years); (4) very rare severe earthquake, with probability of exceedance of 5% in 50 years (return period 970 years). For a very important structure, for example a nuclear power plant, a maximum probable earthquake is defined as an earthquake with probability of exceedance of 2% in 50 years, with a corresponding return period of 2475 years. If probabilistic seismic hazard assessment is performed, ground motions corresponding to earthquake scenarios specific and appropriate to the site will be considered and used for design. This level of earthquake will normally only be used for the design of essential or critical structures.
5.2.2.2 Multiple performance objectives

Performance-based seismic design should be carried out by transparently satisfying multiple performance levels corresponding to multiple hazards with the corresponding target reliabilities, based on accurately modeling the structural responses using sophisticated numerical procedures. The target reliability indices depend on the occupancy, importance and consequence of non-performance of the structure after an earthquake. They can be obtained by back-calculating the reliability of existing structures. ISO "General Principles on Reliability for Structures" should be referred to in determining the proper values, taking into account the owner's requirements and the economic impacts. The target reliability indices mentioned later on are for illustration purposes only.

In SEAOC Vision 2000, the inter-story drift ratio (the ratio of the difference in lateral displacements of adjacent floors to the story height) is adopted as the performance criterion. The recommended limits for the four performance objectives are, respectively, 0.2% (Fully Operational), 0.5% (Operational), 1.5% (Life Safety) and 2.5% (Near Collapse). There are no universally accepted limit values, and they must be determined according to the building function and the owner's demands. Aside from the story drift ratio, other criteria should be adopted for performance evaluation. Four performance objectives are suggested in this study. (1) Serviceability: during a frequent minor earthquake, the structure remains in the elastic range, so the non-structural components are checked for possible damage and the building contents are examined for normal functioning. An inter-story drift ratio limit of 1/500 may be adopted, with a target reliability index between 1.5 and 2.5. The maximum floor acceleration may need to be checked for some motion-sensitive instruments. (2) Capability: during an occasional moderate earthquake, the structure may work in the elasto-plastic state. The structure is assumed to suffer reparable damage, with non-structural elements moderately damaged. The yield strengths of major structural components are to be examined for evaluation of local structural damage. An inter-story drift ratio limit of 1/200 may be used. The target reliability index for this limit state can be set as 2.0-3.0. (3) Stability: during a rare strong earthquake, the structure is presumed to work in the ultimate state. The structure may suffer moderate damage but still maintains its integrity. The ultimate strengths of major structural components are to be investigated for structural stability. An inter-story drift ratio of 1/100 may be set as the limit. The target reliability index for this limit state can be set as 2.5-3.5. (4) Survivability: during a very rare, severe earthquake, the structure is at the edge of collapse, with a kinematical mechanism formed. The structure is heavily damaged and needs demolition afterwards. The drift ratio limit of the entire structure could be 1/50. In order to guarantee the safety of the occupants, the ductility of the whole structure is checked for collapse prevention. The target reliability index for this limit state could be specified as 3.0-4.0. The performance levels, earthquake hazard levels, story drift limits and the suggested target reliability indices are summarized in Table 5.2.
Table 5.2 Performance objectives

Performance level    Probability of exceedance in 50 years    Story drift ratio limit    Target reliability index
Serviceability       90%                                      1/500                      1.5-2.5
Capability           50%                                      1/200                      2.0-3.0
Stability            10%                                      1/100                      2.5-3.5
Survivability        5%                                       1/50                       3.0-4.0

5.2.2.3 Structural analysis approach

Time history analysis is the only method that is able to reveal the actual behavior of the structure during an earthquake. A mechanical model of the structure is built by realistic modeling of the material nonlinear hysteresis, member connections, non-structural component effects and soil-structure interaction. Some historic acceleration recordings or simulated ground motions that are deemed representative of future earthquake motions are selected, based on the site seismic risk assessment. The structural responses are obtained by numerical integration of the equations of motion. The difficulty of the method lies in appropriately modeling the structure and its environment, as well as in the choice of proper earthquake accelerograms. As earthquake motion involves many uncertainties, a spectrum of accelerograms should be selected considering a variety of possible earthquake scenarios. Although time history analysis is not widely used in current engineering practice, it is believed that it will be indispensable in the upcoming years, as performance-based seismic design becomes the backbone of the code of the next generation.

5.2.2.4 Seismic design criteria

Four seismic design criteria are employed corresponding to the foregoing four performance objectives.

(1) Stiffness design criterion

To control damage to non-structural components and building contents, and to ascertain that the structure works in the elastic range under a minor earthquake, the rigidity of the structure must be checked. The inter-story drift ratio is usually adopted for this purpose, with the limit state function in the following form,

G = \theta_0 - \theta    (5.1)

where \theta_0 denotes the inter-story drift ratio limit and \theta denotes the computed maximum inter-story drift ratio. There is no fixed value of the inter-story drift ratio limit, as it depends on the nature of the non-structural elements and the requirements of the user. A commonly accepted value varies between 0.1% and 0.3%. For some delicate or precise instruments housed in the building, it may be necessary to check the maximum floor acceleration or velocity to assure their normal functioning. The corresponding limit state functions can be expressed in the following forms,

G = A_0 - A_{max}    (5.2)

G = V_0 - V_{max}    (5.3)

where A_0 denotes the acceptable acceleration at the floor level; A_{max} denotes the computed maximum floor acceleration; V_0 denotes the acceptable velocity at the floor level; and V_{max} denotes the computed maximum floor velocity. The acceleration limit or the velocity limit depends on the requirements of the building contents, which can be obtained from the manufacturers. For this level of serviceability limit state, the target reliability index could be set at about 1.5-2.5, as the consequence of failure is not disastrous.
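As a small illustration, the serviceability checks of Equations (5.1) to (5.3) can be evaluated as in the following sketch; the response quantities would in practice come from nonlinear dynamic analysis or from a trained neural network surrogate, and the limits shown are only placeholders.

def serviceability_limit_states(theta_max, a_max=None, v_max=None,
                                theta_0=1.0 / 500.0, a_0=None, v_0=None):
    # Limit-state values of Equations (5.1)-(5.3): G > 0 means the computed
    # response lies within the corresponding serviceability limit.  The default
    # drift limit (1/500) follows Table 5.2; the acceleration and velocity
    # limits are content-dependent placeholders.
    g = {"drift": theta_0 - theta_max}                 # Equation (5.1)
    if a_0 is not None and a_max is not None:
        g["acceleration"] = a_0 - a_max                # Equation (5.2)
    if v_0 is not None and v_max is not None:
        g["velocity"] = v_0 - v_max                    # Equation (5.3)
    return g

# Example: a computed peak drift ratio of 0.15% satisfies the 1/500 limit
print(serviceability_limit_states(theta_max=0.0015))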
(2) Yield strength design criterion

Strength design plays a pivotal role in traditional structural design, and it will continue to be a major part of performance-based seismic design. Under the action of an occasional moderate earthquake, a structure may enter the inelastic range; all potential plastic hinges need to be examined for their probable yield strengths, and non-hinge zones checked to preclude unanticipated hinges from forming. The limit state function can be formulated as,

G = M_y - M    (5.4)

where M_y is the yield moment capacity of the member and M is the computed moment. The possible over-strength of the members should be considered for a realistic representation of the yield strengths. The stability of columns and beams with thin-walled sections must be checked for possible local buckling. In addition, the story yield ratio should be checked to prevent a weak-story mechanism, the limit state for which can be expressed as,

G = 1.0 - V_{max}^i / V_y^i    (5.5)

in which V_{max}^i is the computed maximum i-th story shear force and V_y^i is the i-th story shear capacity.

(3) Ultimate strength design criterion

The ultimate strengths of major structural components need to be evaluated as in conventional design. The principle of capacity design should be employed to establish a viable plastic structural failure mechanism. On one hand, the demand on flexural strengths can be realistically evaluated by nonlinear dynamic time history analysis of the structure subjected to possible earthquake ground excitations. On the other hand, the bending strengths at the ends of members are estimated by assuming plastic behavior at those sections, with over-strength considered due to the variability of material properties and the hardening effect. The shear strengths must also be checked to prevent any premature brittle failure. The stability of columns and beams must be checked to preclude global buckling. In this case, the limit state functions can be written as,

G = M_p - M    (5.6)

G = V_p - V    (5.7)

G = M_b - M    (5.8)

G = N_b - N    (5.9)

where M_p denotes the probable moment capacity; M denotes the computed moment demand; V_p denotes the shear strength capacity; V denotes the computed shear force; M_b denotes the moment capacity of a beam to prevent buckling; N_b denotes the buckling capacity of a column; and N denotes the applied axial load on a column. For this level of stability limit state, the target reliability index could be set between 2.5 and 3.5. Beams should be assigned a smaller target reliability compared to columns and joints. In any circumstance, the limit states regarding shear strength and buckling should have a higher reliability relative to bending strength, as these failures tend to be sudden and without warning.

(4) Ductility design criterion

Ductility is the ability of the structure to dissipate the input earthquake energy by undergoing large inelastic deformations, without significant strength degradation, at some predefined locations. During a strong earthquake, life safety is assured in a well-designed structure, as a kinematical mechanism will be formed to prevent collapse. Both the global displacement ductility and the local member rotational ductility need to be assessed for a transparent ductile design. The local ductility is checked to prevent plastic deformation from being concentrated in a few members. The structural global ductility is the manifestation and collective behavior of the members' local ductility. The global ductility demand and the local rotational ductility demands can be computed by inelastic dynamic time history analysis.
The global displacement ductility capacity can be evaluated by push-over analysis, based on the assumption that a global kinematical mechanism is formed with plastic hinges developed at the ends of beams and at the bottom of columns, and is defined as,

\mu_\Delta = \Delta_u / \Delta_y    (5.10)

where \mu_\Delta is the global displacement ductility; \Delta_u is the roof displacement when a kinematical mechanism forms; and \Delta_y is the roof displacement when the first beam plastic hinge forms. The member local rotational ductility can be evaluated based on the ultimate rotational capacity of the plastic hinge. The rotational ductility is calculated from the moment-rotation curve assuming an elastic-perfectly plastic behavior, and is defined as follows,

\mu_\theta = \theta_u / \theta_y    (5.11)

where \mu_\theta is the local rotational ductility; \theta_u is the ultimate plastic rotation; and \theta_y is the rotation at the yield moment. Both the local rotational ductility at the component level and the global displacement ductility at the system level should be examined in order to provide the structure with sufficient ductility. The limit states are of the following forms,

G = \mu_\Delta^0 - \mu_\Delta    (5.12)

G = \mu_\theta^0 - \mu_\theta    (5.13)

where \mu_\Delta^0 denotes the structural displacement ductility capacity; \mu_\Delta denotes the required structural displacement ductility; \mu_\theta^0 denotes the member rotational ductility capacity; and \mu_\theta denotes the required member rotational ductility. There are no widely accepted global and local ductility limits, since they depend on the structural type and configuration, material, member connections, foundation type, site condition, etc. For this level of survivability limit state, the target reliability index should be set around 3.0-4.0 or even higher, as the consequence of failure due to overall collapse would be catastrophic.

5.3 Implementation of Performance-based Seismic Design

5.3.1 Reliability and performance-based seismic design

In Bertero and Bertero (2002), performance-based seismic design is defined as "consisting of selection of design criteria and structural systems such that at the specified levels of ground motion and with defined levels of reliability, the structure will not be damaged beyond certain limiting states or other useful limits". In other words, the essence of performance-based design is to control the damage due to different levels of hazard by controlling the structural responses, with defined levels of reliability. As a result, performance-based design encompasses the proper determination of multi-level earthquake ground motions corresponding to the site seismic risks, and the definition of multiple performance criteria associated with each level of seismic hazard according to the minimum code requirements as well as the enhanced requirements specified by the owner. As such, the designer must employ multi-level design criteria and execute elaborate structural analyses to realistically evaluate the structural performance, so that all the performance objectives are satisfied with the specified confidence and, accordingly, the whole-life cost is minimized. All this work can only be accomplished in the framework of reliability-based optimum design, in view of the great amount of uncertainty involved in the entire design process.

Li and Foschi (1998) introduced a general Inverse Reliability Method for the estimation of design parameters corresponding to given target reliabilities with multiple constraints.
The approach had been applied successfully to several inverse reliability problems in earthquake engineering and offshore engineering. Foschi et al (2002) proposed a computational approach for the efficient implementation of performance-based design, and several case studies were presented to illustrate its applicability. Bertero and Bertero (2002) emphasized that performance-based seismic design should be carried out in a probabilistic design format. A reliability-based framework for performance-based design was put forward in Wen (2001), where minimum lifecycle cost criteria were adopted to determine the target reliability for structures under multiple natural hazards.

The successful implementation of performance-based design hinges on satisfying the code and the owner's requirements by fulfilling multiple performance objectives under multiple levels of hazard at minimum cost. Owing to the large number of uncertainties involved in the design process, reliability assessment of the structural design is indispensable; performance-based design should therefore be carried out in a reliability-based design format, with the solution obtained by optimization.

5.3.2 Performance-based seismic design using neural networks

In this study, performance-based seismic design is formulated in the context of reliability-based design, and the design parameters are calculated by structural optimization. Neural networks are employed to expedite the optimization process by improving computational efficiency.

Besides the basic requirements of the seismic code, the designer must also satisfy the requirements specified by the owner with predefined target reliability levels. Higher reliability implies higher initial cost, lower maintenance cost and lower expected damage cost for the same earthquake motion. This can be achieved by maximizing the expected benefit and minimizing the expected whole-life cost; in the absence of benefit, the criterion of minimum lifecycle cost should be adopted. After the owner and the designer have reached an agreement on the multi-level performance objectives and the corresponding target reliabilities, the design can be formulated as a structural optimization problem of the following form:

Find the design parameter vector Xd to minimize the objective function

V = Σk [βkT - βk(Xd)]² + C(Xd)    (5.14a)

subject to

Xl ≤ Xd ≤ Xu    (5.14b)

in which βkT is the target reliability index corresponding to the k-th performance objective; βk(Xd) is the calculated reliability index corresponding to the k-th performance objective for the design parameter vector Xd; C(Xd) is a cost function defined in terms of the design parameter vector Xd; Xl is the vector of lower bounds of the design parameter vector Xd; and Xu is the vector of upper bounds of the design parameter vector Xd.

Structural optimization will be applied to calculate the optimal design parameter vector by minimizing the objective function. The optimization can be effected by executing an optimization program linked to a reliability analysis sub-program and a structural analysis sub-program. Software of this type is not available on the market. Even if such a program were ready to use, a large and complex structure under strong seismic excitation would require, during the optimization process, a large number of structural analyses and reliability assessments.
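The following is a minimal sketch, in Python, of how the optimization problem of Eq. (5.14) might be set up with a gradient-free global optimizer. The functions compute_beta() and cost() are hypothetical placeholders for the neural-network-based reliability analysis and for an owner-defined cost model; the bounds and target values are illustrative, not the values used in this study.

```python
# Sketch of the optimization problem in Eq. (5.14), assuming a gradient-free
# optimizer. compute_beta() and cost() are hypothetical placeholders: in the
# thesis they would call a neural-network surrogate of the structural analysis
# followed by a reliability analysis for each performance objective.
import numpy as np
from scipy.optimize import differential_evolution

beta_targets = np.array([1.5, 2.5, 3.5])   # target reliability indices (example values)
lower = np.array([300.0, 400.0])           # lower bounds Xl of the design parameters
upper = np.array([600.0, 800.0])           # upper bounds Xu of the design parameters

def compute_beta(x_d):
    """Placeholder: reliability indices beta_k(Xd) for each performance objective."""
    return np.array([1.4, 2.4, 3.4]) + 0.001 * x_d.sum()

def cost(x_d):
    """Placeholder: cost function C(Xd), e.g. proportional to member sizes."""
    return 1.0e-4 * x_d.sum()

def objective(x_d):
    # V = sum_k (beta_k^T - beta_k(Xd))^2 + C(Xd), Eq. (5.14a)
    return np.sum((beta_targets - compute_beta(x_d)) ** 2) + cost(x_d)

# The bounds implement the side constraints Xl <= Xd <= Xu of Eq. (5.14b);
# differential evolution is one example of the gradient-free algorithms
# recommended below for this kind of noisy, non-smooth objective.
result = differential_evolution(objective, bounds=list(zip(lower, upper)), seed=0)
print(result.x, result.fun)
```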
To reduce the computing effort and to improve computational efficiency, use will be made of approximation models that can provide acceptable accuracy and in the same time save computational demand. Neural networks will be employed as a surrogate for the expensive and time-demanding structural analysis needed in the reliability assessment and the optimization for performance-based seismic design. The stochastic structural response is a major concern in performance-based design. In this study, the probabilistic response is estimated by fitting a series of its deterministic values to a proper probability distribution. A database of input variables is generated first. For a given input variable combination, the nonlinear dynamic structural response is calculated using program CANNY (Li, 1996) based on a set of ground motion accelerations that are characterized by the common ground parameters. Then the probability distribution of the response is found by fitting the response data to an appropriate distribution (Lognormal distribution or Extreme Value I distribution) with the mean and standard deviation obtained. This process is repeated for all the combinations in the input variable database. Finally, there exist a response database for the mean value and one for the standard deviation. Two neural networks will be trained, one for the mean value and the other for the standard deviation, and they will be used for seismic reliability analysis. Performance-based design is formulated above as a constrained optimization problem, and can be solved in general by any constrained optimization approach. Since the structural responses are very complicated for a strong ground excitation, with peaks and troughs due to resonance, gradient-based methods may encounter convergence difficulties or even diverge. 112 CHAPTER 5 PERFORMANCE-BASED SEISMIC DESIGN METHODLOGY Gradient-free algorithms such as simulated annealing (Kirkpatrick et al, 1983), genetic algorithm (Goldberg, 1989), trust region method (Byrd, et al, 1987, 1988; Conn et al, 2000), Tabu search (Glover, 1989, 1990; Glover and Laguna, 1993, 1997; Corne et al, 1999; Karaboga and Pham, 1999), particle swarm algorithm (Eberhart and Kennedy, 1995; Kennedy and Eberhart, 1995; Kennedy et al, 2001), or other random search tools may be more suitable in this circumstance. 5.4 Summary and Discussion Performance-based design has been established as the mainstream for structural design in seismic regions after reexamination of the philosophy and engineering practice of current seismic design by the structural engineering community, because of the colossal economic losses in the last earthquakes. SEAOC Vision 2000 and FEMA 273 have laid the groundwork of performance-based design by specifying multiple levels of hazard and multiple performance objectives as well as presenting refined numerical analytical procedures. However, there are still a lot to do for implementation of performance-based in routine seismic design. The realistic determination of the characteristics of future earthquake ground motion on the basis of seismic hazard at the site is a pivotal first step for a successful seismic design, with the participations of geotechnical engineers and seismologists essential and conducive. The multiple performance objectives and the corresponding target reliability levels subject to different hazards have to be decided collectively by the owner, structural engineer and municipal authority. 
The structural design should be carried out based on realistic modeling of the structure and its environment. Sophisticated structural model has to be elaborated by reflecting the nonlinear behavior of the members, connections and soil-113 CHAPTER 5 PERFORMANCE-BASED SEISMIC DESIGN METHODLOGY structure interaction, as well as the effects of non-structural components and building contents. Nonlinear dynamic time history analysis should be resorted to so that the real responses of the structure are calculated. The structural detailing must rely on well-proven engineering practice or extensive experimental verification. Strict construction quality control and rigorous inspection throughout the whole process are necessary for realization of the design. Effective maintenance and timely rehabilitation of the structure during its service life will be required to keep its performance up to the standard and reduce time-dependent risks. Many uncertainties are involved throughout the design process as regard to the ground motion, material property, structural configuration and detailing, analytical model, construction and maintenance. It is critical to consider all the major uncertainties to guarantee that the design objectives are met with a certain confidence. Hence, performance-based design should be implemented in the context of reliability-based design, with the design parameters computed by optimization. A performance-based design framework has been proposed in this study. Four performance objectives are described corresponding to four levels of seismic hazard. Four design criteria are discussed as to structural stiffness, yield strength, ultimate strength and ductility. Because of the complicated responses of the structure when it is subjected to earthquake motions, reliability assessment and optimal design generally are computational intensive and time demanding. In order to improve computational efficiency and reduce the work burden, neural networks are applied to find a mapping of the input-output functional relationship, and are 114 CHAPTER 5 PERFORMANCE-BASED SEISMIC DESIGN METHODLOGY employed as the surrogate for the computer code in the design process, making a computationally prohibitive task tractable and executable. Performance objectives are accomplished by optimization of an objective function that may include cost, making a design technically dependable and economically beneficial. It is expected that greater structural reliability will be achieved in performance-based design for structures with various performance requirements, and economic values of the buildings and their contents will be protected. With progressive developments of this "controlled design" procedure, it is envisioned that performance-based design will be applied widely in the near future for rehabilitation of existing structures and creation of new buildings, with design implemented in a cost-effective manner and risks well controlled. 115 CHAPTER 6 SEISMIC RELAIBILITY ANALYSES: CASE STUDIES CHAPTER 6 SEISMIC RELIABILITY ANALYSES: CASE STUDIES 6.1 Introduction Five case studies are now presented for seismic reliability assessment of structures. (1) A low two-story reinforced concrete frame was used as a first example of an existing building. The responses of interest were the maximum floor drift and the maximum roof drift. (2) A twenty-story reinforced concrete structure was used as an example of a high-rise building, with its seismic performances evaluated for two levels of earthquakes. 
The responses chosen were the maximum values of roof displacement, roof acceleration, inter-story drift ratio of the 15th floor, the inter-story drift ratio of the 5th floor, base shear force and base overturning moment. (3) A bridge bent without or with seismic isolation was assessed for its seismic performance subjected to two levels of ground shakings. The maximum values of displacement at the cap beam, column base moment, column base shear and column ductility, and beam moment and beam ductility, were selected as the responses. (4) A wood shear wall, for which the influence of the nail spacing on structural performance was investigated. 116 CHAPTER 6 SEISMIC RELAIBILITY ANALYSES: CASE STUDIES (5) An actual instrumented building that has experienced three earthquakes and suffered damage was evaluated for its seismic performance, if it were subjected to a ground shaking similar to the Northridge, California earthquake of 1994. 6.2 Description of The Nonlinear Dynamic Analysis Program Any nonlinear dynamic analysis program can be used for calculation of structural response. In this thesis, a general-purpose 3D nonlinear static and dynamic structural analysis program, CANNY (Li, 1996), was used. The material nonlinearity is embodied by a lumped plasticity model. For geometric nonlinearity, P-A effects can be included. The structural system is discretized into an assembly of massless elements. Altogether, seven types of element are available, ie., beam element, column element, shear panel element, link element, support element, cable element and isolation element. The mass can be lumped at structural joints or concentrated at the center of gravity on each floor when a rigid diaphragm is assumed. The program can be used for analyzing structural responses due to dead, live, wind and seismic load. Nonlinear static pushover analysis for structure subject to monotonic or cyclic loading can be undertaken with a limit on roof displacement or base shear specified. Nonlinear dynamic analysis is conducted step by step in the time domain using either Newmark's P method or Wilson's 0 method. A number of hysteresis models are built in the program for description of member nonlinear force-displacement (moment-curvature) behavior when subjected to cyclic loading. Uniaxial hysteresis models are devised to simulate the inelastic behaviors of uniaxial bending, shear and axial tension or compression. Multiple axial spring models can be used to simulate the 117 CHAPTER 6 SEISMIC RELAIBILrTY ANALYSES: CASE STUDIES flexural behavior of reinforced concrete column under the action of varying axial load and biaxial bending. Biaxial shear models are developed to approximate the column biaxial shear deformation or the lateral stiffness of a layered rubber bearing under bi-directional lateral loads. This program was selected as it uses the hard disk as virtual memory to store the stiffness matrix and to conduct the analysis, with no limitation as to the size of the problem. It can perform nonlinear dynamic analysis of a structure quickly, and there is a library of hysteresis models available to choose. 6.3 Case Study 1: A Two-story Reinforced Concrete Plane Frame 6.3.1. 
Description of the structure and ground motion

Figure 6.1 Reinforced concrete plane frame geometry

A one-bay, two-story reinforced concrete plane frame is selected as a first example of an existing structure for seismic reliability assessment. The dimensions of the frame are shown in Figure 6.1. It has a span of 9.0 m and a story height of 4.0 m. All the columns have a cross section of 400 x 500 mm, while the beams have a cross section of 300 x 750 mm. The columns are symmetrically reinforced using 5#25 steel rebars (As = 2550 mm²), and the beams have 4#25 rebars (As = 2040 mm²) at the top and the bottom. The weights on the roof and the floor are denoted by W1 and W2 respectively.

For seismic retrofit of an existing structure, reliability assessment needs to be carried out to evaluate its performance under seismic excitation, in an effort to identify the weaknesses and propose strengthening measures. It was assumed that the earthquake occurrence could be described as a Poisson process with arrival rate ν = 0.01/year, and that the PGA had a Lognormal distribution with coefficient of variation (COV) 0.6, with design PGA (return period 475 years) ad = 400 cm/sec². Then

Pa(a > ad) = 1.0 - exp(-ν Pe(a > ad)) = 1/475    (6.1)

or

Pe(a > ad) = 2.10748e-3 / 0.01 = 0.2107    (6.2)

The corresponding Normal variate for the event is βe = 0.803. Since

σlna = sqrt(ln(1 + Va²)) = 0.6, or Va = 0.658    (6.3)

and

ad = [ā / sqrt(1 + Va²)] exp(βe sqrt(ln(1 + Va²))) = 400 cm/sec²    (6.4)

it follows that

ā = 400 sqrt(1 + Va²) / exp(0.803 x 0.6) = 478.89 / 1.619 ≈ 300 cm/sec²    (6.5)

and

σa = ā Va = 300 x 0.658 ≈ 200 cm/sec²    (6.6)

Hence, the earthquake peak ground acceleration is assumed to have a Lognormal distribution with a mean of 300 cm/sec² and a standard deviation of 200 cm/sec².

Due to the high uncertainties associated with the expected earthquake ground motion, the seismic ground shaking was presumed to be characterized mainly by three parameters (PGA Ag, predominant ground frequency ωg and duration Td) with the probability distributions given in Table 6.1. Thirty random combinations of Ag, ωg and Td were generated via Latin Hypercube Sampling, as given in Table 6.2. Based on these combinations, thirty ground motion acceleration time histories were synthesized using the Hsu & Bernard modulation function (where t0 was taken as 0.2 Td).

Table 6.1 Case study 1: Ground motion parameter distributions and statistics
Parameter        Distribution    Mean    Standard deviation
Ag (cm/sec²)     Lognormal       300     200
ωg (rad/sec)     Normal          7.50    2.00
Td (sec)         Normal          40.0    10.0

6.3.2. Construction of the response databases

To build response databases for neural network training, five variables were chosen as input variables, namely, steel yield strength fy, concrete compression strength f'c, modulus of
The bounds are given in Table 6.2 Case study 1: Ground motion parameter combinations Ag (cm/sec2) co g (rad/sec) Td (sec) 214.635 "7.177 2.987 106.379 7.389 29.572 286.670 5.858 54.089 326.559 8.806 37.919 430.634 5.320 39.631 326.075 , 8.299 52.725 246.511 9.708 47.239 421.700 10.769 38.375 102.975 7.575 38.375 55.258 4.649 37.577 362.672 8.849 54.033 388.662 6.607 24.269 214.214 9.857 44.706 137.108 9.478 41.971 588.179 6.579 43.697 740.161 11.666 32.187 239.109 9.796 11.623 130.679 9.465 46.306 512.112 8.369 25.356 219.398 9.566 36.272 243.753 8.640 55.202 140.709 8.068 28.331 156.997 6.076 38.223 225.254 8.736 39.924 212.679 10.082 50.279 811.417 5.902 40.164 126.201 5.976 30.247 879.899 7.015 42.378 607.912 5.997 67.593 666.123 8.602 46.157 Table 6.3. Two responses were selected as the output variables, viz., the displacement at the floor D2, and the displacement at the roof D,. Latin Hypercube Sampling was applied to generate a design of 150 combinations of the five input variables (fy,fc,Ec,W1,W2). Then, 121 CHAPTER 6 SEISMIC RELAIBILrTY ANALYSES: CASE STUDIES for every combination of the input variables, the program CANNY was run to compute the corresponding responses D2, D, for the 30 synthesized ground acceleration time histories. Subsequently, for each response, its mean and standard deviation were calculated based on the 30 values for the 30 artificial ground accelerograms. Finally, for every response, two response databases were created, one for its mean and the other for its standard deviation. Altogether, four response databases were constructed. Appendix A shows just the first 10 combinations and the corresponding responses. The cross-peak tri-linear model CP3 was adopted to simulate the hysteresis behavior of the reinforced concrete members, with its hysteresis skeleton curve shown in Figure 6.2. This model can be used to simulate the post-yield unloading stiffness degradation and strength deterioration. Figure 6.2 Cross peak tri-linear hysteresis model CP3 (Li, 1996) 122 CHAPTER 6 SEISMIC RELAIBILrTY ANALYSES: CASE STUDIES Table 6.3 Case study 1: Input variable bounds Variable Lower bound Upper bound /v(MPa) 300 500 /;(MPa) 15 45 £c(MPa) 19500 25500 W1 (KN) 280 370 w2(m 360 540 6.3.3. Reliability assessment Based on the aforementioned four response databases, four neural networks were trained, for the mean and standard deviation of the two responses, as the earthquakes were changed according to Table 6.2. The neural network-training program was run to learn the unknown functional dependencies between the five input variables (fy,fc,Ec,W1,W2) and the four output variables (D2, SD2 ,D,,SD1). Hereafter, neural network relative error is defined as, Ok-Yk with root mean square relative error (RMSRE) given by, RMSRE = k=l where Ok denotes the target output for the k-th example; Yk denotes the neural network output for the k-th example; P denotes the number of examples. (6.7) (6.8) 123 CHAPTER 6 SEISMIC RELAIBILITY ANALYSES: CASE STUDIES In this case, of the 150 examples, 120 were used for training and 30 were used for testing. The number of hidden neurons and network RMSREs for the four responses are given in Table 6.4, with the relative error statistics for every response shown in Table 6.5. 
Table 6.4 Case study 1: Neuron numbers and neural network RMSREs Response Neuron number Training Testing 5 0.017 0.024 6 0.021 0.020 D, 4 0.015 0.015 $D1 3 0.019 0.015 in the table, Z)7 denotes the mean value of roof displacement Di; SDl denotes the standard deviation of roof displacement Di; D2 denotes the mean value of floor displacement D2; SD2 denotes the standard deviation of floor displacement D2; Table 6.5 Case study 1: Neural networks training relative error statistics Relative error Mean Standard deviation e(D2) -0.0008 0.0185 e(SD2) 0.0002 0.0152 efD,) 0.0004 0.0197 e(SDl) -0.0008 0.0224 During reliability analysis, the input variables were postulated to have the following probability distributions and statistics as shown in Table 6.6. 124 CHAPTER 6 SEISMIC RELAIBILITY ANALYSES: CASE STUDIES Table 6.6 Case studyl: Input variable probability distributions and statistics Input variable Distribution Mean Standard deviation /y(MPa) Lognormal 400.0 30.0 /;(MPa) Normal 30.0 4.5 £c(MPa) Normal 22500.0 1000.0 W,(KN) Normal 360.0 24.0 W2(KN) Normal 450.0 30.0 Three limit states were examined in this study corresponding to three performance levels, ie., collapse prevention, life safety and normal function. For each combination, the displacements over the 30 records were fitted to a Lognormal distribution. Thus, the roof displacement D, and the floor displacement D2 were expressed as, D, 1 + exp\ 1 In r (S >A 7 + 1 D2 = 1 + expl R ln\ °D2 1 + (6.9a) (6.9b) where Rn is a random variable with Standard Normal distribution. If the responses were fitted to an Extreme Value-I distribution, then the responses could be calculated by inverse transform as, D,=D,-^—SLfr + bi(-bip)J D2=D2 — y/6SD2 [y + ln(-lnp)J (6.10a) (6.10b) 125 CHAPTER 6 SEISMIC RELAIBILrTY ANALYSES: CASE STUDIES where Euler constant y = 0.5772; pis a random variable with uniform distribution over the interval [0,1]. (1) Collapse prevention limit state For the limit state of collapse prevention, one failure mode was considered in respect to the roof displacement as indicated by the following performance function, with the displacement limit set to 3% of the building height. The associated reliability indices by mean of Importance Sampling (IS) and Monte Carlo Simulation (MCS) are shown in Table 6.7, with responses calculated by Neural Networks (NN) and Local Interpolation (LI) (Foschi et al, 2002). G = 0.240-D1(fy,f'e,Eo,W„Wa) (6.11) Table 6.7 Case study 1: Reliability indices for collapse prevention limit state Performance function IS MCS NN LI NN LI G = 0.240 -D, 1.828(1.826) 1.767(1.780) 1.825(1.821) 1.769(1.782) Note: The values in parentheses are based on Extreme Type-I distribution. (2) Life safety limit state For the limit state of life safety, three failure modes were considered in regard to the roof displacement, the floor displacement as well as the inter-story drift between the roof and the floor, as indicated by the following three performance functions. The drift limit was set to 1.5% of the height of the story or building. The associated reliability indices by IS and MCS 126 CHAPTER 6 SEISMIC RELAIBILrTY ANALYSES: CASE STUDIES for the three failure modes, as well as the system reliability are given in Table 6.8, and the responses were calculated by NN and LI. 
G, =0.120-D1(fy,fc,Ec,W1,W2) (6.12a) G2 =0.060-D2(fy,f;,Ec,W1,W2) (6.12bG3 =0.060-[D1(fy,f'c,Ec,W1,W2)-D2(fy,fc,Ec,W1,W2)] (6.12c) Table 6.8 Case study 1: Reliability indices for life safety limit state Performance function IS MCS NN LI NN LI Gj = 0.120 -Dj 0.937 (0.741) 0.887 (0.671) 0.935 (0.740) 0.887 (0.675) G2 = 0.060 -D2 0.719 (0.418) 0.690 (0.420) 0.722 (0.423) 0.694 (0.423) G3 = 0.060-(Dj-D2) 0.625 (0.399) 0.614 (0.398) 0.623 (0.398) 0.614 (0.399) System reliability .N/A N/A . 0.052 (-0.356) 0.030 (-0.339) Note: The values in parentheses are based on Extreme Type-I distribution. N/A = Not available (3) Functionality limit state For this limit state, three failure modes were considered regarding the roof displacement, the floor displacement and the inter-story drift between the roof and the floor. The three performance functions are listed below, with the displacement limit set to 0.5% of the height of the story or building. The associated reliability indices by IS and MCS for every failure mode, as well as the system reliability are presented in Table 6.9. G, =0.040-D,(fy,f;,Ec,W„W2) (6.13a) G2 =0.020-D2(fy,fc,Ec,W1,W2) (6.13b127 CHAPTER 6 SEISMIC RELAIBILrTY ANALYSES: CASE STUDIES G3 =0.020-[Dl(fy,fc,Ec,W1,W2)-D2(fy,fc,Ec,W1,W2)] (6.13c) Table 6.9 Case study 1: Reliability indices for functionality limit state Performance function IS MCS NN LI NN LI G} = 0.040-D, -.511 (-.475) -.511 (-.464) -.510 (-.472) -.507 (-.461) G2 = 0.020-D2 -.480 (-.414) -.587 (-.476) ..479 (-.406) -.582 (-.471) G3 = 0.020-(D,-D2) -.091 (-.102) -.061 (-.093) -.084 (-.100) -.056 (-.089) System reliability N/A N/A -1.499 (-.555) -1.529 (-.513) It can be observed from the above results that, subject to the probabilistic earthquake ground motion and the assumed variable statistics, the performances of the structure can be considered as below standard. Though collapse is less likely to happen (with probability of failure about 2%), life safety of the occupants cannot be guaranteed (with probability of failure about 50%), to say nothing of normal operation (with probability of failure more than 90%). Hence, it needs to be retrofitted up to standard based on the assumed seismic hazard. For comparison with Neural Networks, another approximation scheme, Local Interpolation was also employed. It was found that Local Interpolation took more time than Neural Networks, as for each query point (the point whose response is sought), it involves searching the response database for some nearest neighbors and estimating the response by interpolation, which is time consuming especially for a large database. It can also be seen that, in general, the reliability prediction based on Lognormal distribution is at about the same level as that of Extreme Value-I distribution. 6.3.4. Sensitivity analysis Sensitivity analysis was conducted to evaluate the influence of each variable on reliability index, based on which the important variables can be identified. Only the collapse prevention 128 CHAPTER 6 SEISMIC RELAIBILITY ANALYSES: CASE STUDIES limit state was considered for this purpose, with responses fitted to Lognormal distribution. The results were given in Table 6.10, in which each mean was varied up and down by 5% while the others kept unchanged. Table 6.1,0 Case study 1: Variation of reliability index with statistical parameters Variable Parameter Parameter value Reliability index 380 1.755 400 1.808 /,(MPa) 420 1.860 4 1.807 °(fy) 20 1.807 40 1.804 28.5 1.782 tff'c) 30.0 1.825 /c' (MPa) 31.5 1.870 0.3 1.934 <*(f.) 
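The sensitivity study can be pictured as a simple loop: for each perturbed value of a distribution parameter, the reliability index is re-estimated with the neural-network surrogate. The sketch below does this by crude Monte Carlo for the collapse-prevention limit state, with the input statistics of Table 6.6; predict_d1() is a hypothetical placeholder for the trained mean and standard-deviation networks, and the numbers it returns are not the thesis results.

```python
# Sketch of the sensitivity procedure: the reliability index for the
# collapse-prevention limit state G = 0.240 - D1 is re-estimated by crude Monte
# Carlo while one parameter (here the mean of fy) is varied by +/- 5 percent.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

def predict_d1(x):
    """Placeholder for nn_mean / nn_std: (mean, std) of the roof displacement D1 (m)."""
    mean = 2.0e-4 * (x[:, 3] + x[:, 4]) * 400.0 / x[:, 0]
    return mean, 0.4 * mean

def beta_mc(mu_fy, n=200_000):
    zeta = np.sqrt(np.log(1.0 + (30.0 / mu_fy) ** 2))          # lognormal fy
    fy = rng.lognormal(np.log(mu_fy) - 0.5 * zeta**2, zeta, n)
    x = np.column_stack([fy,
                         rng.normal(30.0, 4.5, n),              # f'c
                         rng.normal(22500.0, 1000.0, n),        # Ec
                         rng.normal(360.0, 24.0, n),            # W1
                         rng.normal(450.0, 30.0, n)])           # W2
    m, s = predict_d1(x)
    d2 = (s / m) ** 2                                           # lognormal response, Eq. (6.9)
    d1 = m / np.sqrt(1.0 + d2) * np.exp(rng.standard_normal(n) * np.sqrt(np.log(1.0 + d2)))
    pf = np.mean(0.240 - d1 < 0.0)                              # failure probability
    return norm.ppf(1.0 - pf)                                   # beta = Phi^-1(1 - pf)

for mu in (380.0, 400.0, 420.0):                                # mean of fy varied by +/- 5%
    print(mu, beta_mc(mu))
```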
3.0 1.820 6.0 1.786 21375 1.940 rl(Ec) 22500 1.807 £c(MPa) 23625 1.671 225 1.820 a(Ec) 450 1.815 1000 1.807 342 1.854 rfWi) 360 1.807 Wj (KN) 378 1.757 3.6 1.811 a(Wx) 7.2 1.811 18.0 1.809 427.5 1.841 »(W2) 450.0 1.807 472.5 1.772 4.5 1.808 <j(W2) 9.0 1.808 22.5 1.808 Based on the above results, it can be concluded that the mean values of the five input variables are important for the reliability evaluation, while their standard deviations are not so important as the reliability index is not sensitive to the variation of the standard deviations. CHAPTER 6 SEISMIC RELAIBILITY ANALYSES: CASE STUDIES ju( fy ) and ju( f'c) have a positive influence on reliability, whereas n(Ec), p(M,) and ju(M2) have a negative impact on reliability. 6.4 Case Study 2: A Tall Reinforced Concrete Frame 6.4.1 Description of the structure The structure under investigation is a two-bay, twenty-story reinforced concrete frame (Figure 6.3), taken as an example of a tall building. The story height is 4 m, and each bay is 8 m. The beams have a constant cross section 350mm x 700 mm. The columns have varied cross sections along the height of the building: from stories 1 to 7, BjxH,; from stories 8 to 14, B2xH2; from stories 15 to 20, B3xH}. The reinforcement ratio for the beams and columns is assumed about 1%. 6.4.2 Construction of the response databases Fifteen random variables were selected as the input variables, and they were, • peak ground acceleration, Ag; • predominant ground frequency, cog; • earthquake strong motion duration, Td; • distributed vertical load on beam, q; • steel yield strength, fy; • concrete compression strength for columns from story 1 to 7, fcl; • concrete compression strength for columns from story 8 to 14, fc2; 130 CHAPTER 6 SEISMIC RELAIBILITY ANALYSES: CASE STUDIES concrete compression strength for columns from story 15 to 20, fc3; concrete compression strength for beams, fb; cross section width of columns from story 1 to 7, B, ; cross section depth of columns from story 1 to 7, H,; cross section width of columns from story 8 to 14, B2 ; cross section depth of columns from story 8 to 14, H2; cross section width of columns from story 15 to 20, B3 ; cross section depth of columns from story 15 to 20, H3; |<8mH«8m »| Figure 6.3 Geometry of tall building The lower bounds and upper bounds of these variables for constructing an experimental design, are given in Table 6.11. 131 CHAPTER 6 SEISMIC RELAIBILrTY ANALYSES: CASE STUDIES Table 6.11 Case study 2: Input variable bounds Input variable Lower bound Upper bound A (cm/sec2) 10 980 cog (rad/sec) 7t 1271 Td (sec) 1 60 q (KN/m) 15 60 /„(MPa) 400 450 /cl(MPa) 35 45 /c2(MPa) 25 35 /c5(MPa) 15 25 /6(MPa) 15 25 2?, (mm) 700 1000 /f, (mm) 900 1200 52(mm) 500 700 H2 (mm) 700 900 ^(ram) 400 500 //^ (mm) 500 700 Six response variables were selected; namely, the maxima of the roof displacement D20> the roof acceleration A20, the 15th story drift ratio 615, the 5th story drift ratio 0S, the base overturning moment M and the base shear force V. Hammersley sequence sampling was adopted to generate 300 combinations of the fifteen input variables. For every combination of the input variables, the program CANNY was run to compute the desired responses for twenty synthesized ground acceleration time histories (characterized by the three ground motion parameters, ie., Ag, cog and Td). Next, for each response, its mean and standard deviation were calculated based on the values for the twenty artificial ground accelerograms. 
Appendix B shows just the first 10 combinations and the corresponding responses. The CANNY tri-linear model CA7 was employed to simulate the hysteresis behavior of the 132 CHAPTER 6 SEISMIC RELAIBJLITY ANALYSES: CASE STUDIES reinforced concrete members, as this model can delineate stiffness degradation, strength deterioration and pinching behavior of reinforced concrete. Its hysteresis skeleton curves are shown in Figure 6.4. It was assumed that shear strengths were sufficient for both the columns and beams, so the elastic model ELI was used for shear calculations. .fy 4'm XJ" ~~l ^u*^**^ • • j —*« ::::::::: ::::::::::::::::::::::::: •D Y* tiFy (a) Unloading Stiffness Degradation 00 Strength Deterioration (<0 Pinching Behavior Figure 6.4 CANNY tri-linear hysteresis model (Li, 1996) 133 CHAPTER 6 SEISMIC RELAIBILrTY ANALYSES: CASE STUDIES 6.4.3. Reliability assessment 6.4.3.1 Neural networks training Twelve neural networks were trained for the mean and standard deviation of the six responses. Of the 300 combinations, 240 were used for training and 60 were used for testing. The number of hidden neurons and network RMSREs for the twelve responses are presented in Table 12, with the relative error statistics for every response given in Table 6.13. Table 6.12 Case study 2: Neuron numbers and network RMSREs Response Neuron number Training Testing 9 0.011 0.030 e 9 0.015 0.039 •^20 7 0.009 - 0.017 $A20 4 0.032 0.040 o15 9 0.015 0.024 $915 6 0.020 0.032 G5 7 0.011 0.019 7 0.017 0.034 M 8 0.010 0.026 sM 9 0.017 0.040 V 9 0.008 0.022 Sy 8 0.020 0.037 In the table, D20,SD20 denote the mean and standard deviation of roof displacement D20; A20,SA20 denote the mean and standard deviation of roof acceleration A20; 015, S915 denote the mean and standard deviation of the 15th story drift ratio 6l5; 95, S65 denote the mean and standard deviation of the 5th story drift ratio 0S; M,SM denote the mean and standard deviation of base overturning moment M; 134 CHAPTER 6 SEISMIC RELAIBILITY ANALYSES: CASE STUDIES V,SV denote the mean and standard deviation of base shear V; Table 6.13 Case study 2: Neural network training relative error statistics Relative error Mean Standard deviation e(D20) -0.0060 0.0178 £(^D2o) 0.0158 0.0304 s(A20) 0.0001 0.0120 s(SA20) 0.0099 0.0295 e(0„) -0.0056 0.0202 e(S6l5) 0.0016 0.0209 e(G5) -0.0079 0.0193 0.0016 0.0197 e(M) 0.0006 0.0158 e(SM) -0.0003 0.0246 e(V) 0.0028 0.0123 e(Sv) -0.0004 0.0247 Two limit states were considered in this study corresponding to two performance levels, serviceability limit state and ultimate limit state. The responses were fitted to a Lognormal distribution as follows, D D 20 20 1 + 'D20 exp\ f f °D20 2\ R In 1 + [1 < ^20 j J (6.14a) ^20 -K D20j x20 1 + 'A20 exp\ \ A2o J f f °A20 2\ In 1 + vi L A20 -J J J (6.14b) 135 CHAPTER 6 SEISMIC RELAIBILrTY ANALYSES: CASE STUDIES ol5 = 9 15 1 + rs v \ 915 J exp\ f f 2\ In 1 + ^015 [i < 915 J ) ) 05 = o5 1+ rs v expl f f fs } 2\ bi In 1 + \ < 65 j J ) (6.14c) (6.14d) M M rexpl J + M J R • In 1 + V = exp\ J + f ( fs } 2^ In 7+ 11 { UJ J J < v J (6. He) (6.141) in the above, Rn is a random variable with standard Normal distribution. 
If the responses were fitted to Extreme Value-I distributions, then the responses could be calculated by inverse transform as, ^o-D20-^^[Y+ln(-lnp)] D„ = D, A20 = *2o-^^fr + lnf-lnp)] 915 ~ 915 ~ 7T ^S, 015 fr + ln(-lnp)J 0s=es-^-^fr + ln(-lnp)J (6.15a) (6.15b) (6.15c) (6.15d) 136 CHAPTER 6 SEISMIC RELAffilLITY ANALYSES: CASE STUDIES _ |7y M =M -——^-[y + lnf-lnpJJ (6.15e) n V = V-±A}L[r + ln(-lnp)] (6.15f) n where Euler constant y = 0.5772; with p is a random variable with uniform distribution over the interval [0,1]. 6.4.3.2 Two levels of design earthquakes Two levels of earthquake, a frequent minor earthquake for serviceability limit state evaluation and a rare strong earthquake for ultimate limit state evaluation, were considered. (1) The earthquake for serviceability limit state Assume that occurrence of a minor earthquake can be characterized by a Poisson process with arrival rate of v = 0.10/year, and the probability of exceedance of the design earthquake ad in 50 years is 50% (annual probability of exceedance 0.013767), then Pa(a>ad) = 1.0-exp(-vPe (a>ad)) = 0.013 767 (6.16) or P.(a >ad) = 0012862 = 0.13863 (6.17) ' d 0.10 V J The corresponding Normal variate for the event is Be = 1.086 For this earthquake, assume its peak acceleration has a Lognormal distribution with coefficient of variation 0.6, and that the design earthquake is set at 0.15g, or 147.15 cm/sec2, °ina = Jl»0 + Va3) = 0.6, or Va = 0.658 (6.3) 137 CHAPTER 6 SEISMIC RELAIBILrTY ANALYSES: CASE STUDIES Since ad = . ° exp(0e Jln(l + V2)) = 147.15 cm/sec2 (6.18) o _ I47.15Jl + V* 176.170535 _ . 2 So a = * = P2 cm/sec2 (6.19) eapf & + Vl)) exP( J- 086 *0.6) And oa = aVa =92 x 0.658 = 61 cnVsec2 (6.20) Hence, the earthquakes peak ground acceleration is assumed to have a Lognormal distribution with a mean 92 cm/sec2 and a standard deviation 61 cm/sec2. (2) The earthquake for ultimate limit state For this level of earthquake, assume that occurrence of the earthquake can be modeled by a Poisson process with arrival rate of v = 0.01/year, and the design earthquake ad with a return period of 475 years is 0.4g, or, 392.4 cm/sec2. As the annual risk is given by, Pa(a >ad) = 1.0-exp(-vPe(a > ad)) = 1/475 (6.1) or . 2.10748234e-3 „„fl7,0 ,, P(a>ad) = = 0.210748 (6.2) 0.01 The corresponding Normal variate for the event is Be = 0.803 For this earthquake, assume that its peak acceleration also has a lognormal distribution with coefficient of variation 0.6, then, olna = Jln(l + V2a) = 0.6, or Va = 0.658 (6.3) Since ad = , ° exp(Be Jln(l + Va2)) = 392.4 cm/sec2 (6.21) 138 CHAPTER 6 SEISMIC RELAIBILITY ANALYSES: CASE STUDIES So a 392.4^1+ V2 _ 469.788 = 290 cm/sec2 (6.22) exp(/3eylln(J + Va2)) exp(0.803* 0.6) And aa = aVa = 290x 0.658 = 191 cm/sec2 (6.23) Thus, the postulated earthquake peak ground acceleration has a Lognormal distribution with a mean 290 cm/sec2 and a standard deviation 191 cm/sec2. 6.4.3.3 Reliability assessment for serviceability limit state The roof displacement, the roof acceleration, the 15th story drift ratio and the 5th story drift ratio were evaluated for this limit state. The roof displacement limit was set to 1/400 of the total building height, with the acceleration limit set to 2.0 m/sec2. The story drift ratio limit was set to 0.0025 (1/400). The probability distributions and statistics of the input variables were given in Table 6.14. 
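As a numerical check of how the PGA statistics entering Table 6.14 and Table 6.16 follow from Eqs. (6.16)-(6.23), the two design-earthquake levels can be converted to lognormal mean and standard deviation with a few lines of code. This assumes only numpy and scipy and reproduces, to rounding, the values 92/61 cm/sec² and 290/191 cm/sec² quoted above.

```python
# Numerical check of Eqs. (6.16)-(6.23): converting the two design earthquake
# levels into the lognormal PGA statistics used in Tables 6.14 and 6.16.
import numpy as np
from scipy.stats import norm

def lognormal_pga(nu, annual_p_exceed, a_design, sigma_ln=0.6):
    """Return (mean, std) of the PGA given the Poisson arrival rate, the annual
    probability of exceeding the design acceleration, and the std of ln(A)."""
    p_event = -np.log(1.0 - annual_p_exceed) / nu       # P(a > a_d | one event)
    beta_e = norm.ppf(1.0 - p_event)                     # corresponding normal variate
    v_a = np.sqrt(np.exp(sigma_ln**2) - 1.0)             # COV of A (about 0.658)
    mean = a_design * np.sqrt(1.0 + v_a**2) / np.exp(beta_e * sigma_ln)
    return mean, mean * v_a

# Serviceability level: 50% exceedance in 50 years -> annual p = 1 - 0.5**(1/50), nu = 0.10/yr
print(lognormal_pga(0.10, 1.0 - 0.5 ** (1.0 / 50.0), 147.15))   # about (92, 61) cm/s^2
# Ultimate level: 475-year return period, nu = 0.01/yr, a_d = 0.4 g = 392.4 cm/s^2
print(lognormal_pga(0.01, 1.0 / 475.0, 392.4))                  # about (290, 191) cm/s^2
```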
The statistics of steel yield strength and concrete compressive strengths were calculated so that the lower bound and upper bound of each variable (Table 6.11) cover the range from mean - 3*standard deviation to mean + 3*standard deviation. It was assumed that the dimensions were well controlled, so a COV of 1% was used. The performance functions are expressed as the followings, G, = 0.200 -D 20 (6.24a) G2 =2.000-A l20 (6.24b) G3 =0.0025-0, 15 (6.24c) G4 =0.0025-65 (6.24d) 139 CHAPTER 6 SEISMIC RELAIBILrTY ANALYSES: CASE STUDIES Reliability analysis was carried out using IS and MCS, with the responses estimated by neural networks trained beforehand. The responses were fitted to two distributions, ie, Lognormal distribution and Extreme Value-I distribution (reliability index in parenthesis). The results are presented in Table 6.15. It can be seen that for the specified performance criteria, the structure may be considered to maintain normal operation under the considered earthquakes if a minimum target reliability index was set to be 1.5. Compared to the 5th story, the 15th story has a lower reliability, which implies that more deformation occurs at the higher stories of the structure. Whether the responses have a Lognormal or an Extreme value-I distribution, the reliability estimates are quite similar. Table 6.14 Case study 2: Input variable probability distributions and statistics (Serviceability limit state) Input variable Distribution Mean Standard deviation Ag (cm/sec2) Lognormal 92.0 61.0 a)g (rad/sec) Normal 571 7t Td (sec) Normal 20.0 5.0 q (KN/m) Normal 45.0 4.5 /„ (MPa) Lognormal 400.0 10.0 L, (MPa) Lognormal 40.0 1.5 /„, (MPa) Lognormal 30.0 1.5 L, (MPa) Lognormal 20.0 1.5 A (MPa) Lognormal 20.0 1.5 B, (mm) Normal 900.0 9.0 H, (mm) Normal 1100.0 11.0 B7 (mm) Normal 600.0 6.0 H2 (mm) Normal 800.0 8.0 B, (mm) Normal 450.0 4.5 H3 (mm) Normal 600.0 6.0 140 1 CHAPTER 6 SEISMIC RELAffilUTY ANALYSES: CASE STUDIES Table 6.15 Case study 2: Reliability index for serviceability limit state Performance function Neural networks IS MCS G, = 0.200 -D20 2.173 (2.168) 2.168 (2.170) G2 = 2.000 -A20 2.083 (2.079) 2.071 (2.076) G3 = 0.0025-01S 1.576(1.576) 1.551 (1.552) G4 = 0.0025-0, 1.854(1.840) 1.815 (1.815) 6.4.3.4 Reliability assessment for ultimate limit state The roof displacement, the 15th story drift ratio, the 5th story drift ratio, base overturning moment and base shear force were evaluated for this limit state. The roof displacement limit was set to 1.0% of the building height, with the story drift ratio limit set to 1.0%. The base shear capacity mean was assumed to equal to 10% of the total floor weight of the structure as: 45x16x20x0.1 = 1440KN, and the COV of base shear capacity was assumed 10%. The base overturning moment resistance depends on the column axial forces that are varying during the earthquake, so it is very difficult to estimate. For illustration purpose, it was assumed that the base moment capacity had a Normal distribution with mean 54000 KNm and standard deviation 5400 KNm. The probability distributions and statistics of the input variables were given in Table 6.16, with the performance functions expressed as follows, G,= 0.800 -D20 (6.25a) G2= 0.010-015 (6.25bG3 = 0.010 - 05 (6.25c) G4=M0-M (6.25d141 CHAPTER 6 SEISMIC RELAIBILrTY ANALYSES: CASE STUDIES G5=V0-V (6.25e) Again, reliability analysis was carried out using IS and MCS, with the responses estimated by the neural networks trained in advance. 
As before, the responses were fitted to two distributions: Lognormal and Extreme Value -1. The results are presented in Table 6.17, with values in parentheses corresponding to Extreme Value -1 distribution. From the above calculations, it can be observed that the reliability predictions are close to each other, no matter whether the responses have a Lognormal or an Extreme Value-I distribution. In compared with the 5th story, the 15th story has a lower reliability, the implication of which is that more deformation occurs at the upper stories of the structure. Table 6.16 Case study 2: Input variable probability distributions and statistics (Ultimate limit state) Input variable Distribution Mean Standard deviation Ag (cm/sec2) Lognormal 290.0 191.0 cog (rad/sec) Normal 5n 7t Td (sec) Normal 30.0 5.0 q (KN/m) Normal 45.0 4.5 /„ (MPa) Lognormal 400.0 10.0 L, (MPa) Lognormal 40.0 1.5 (MPa) Lognormal 30.0 1.5 (MPa) Lognormal 20.0 1.5 fh (MPa) Lognormal 20.0 1.5 B, (mm) Normal 900.0 9.0 H, (mm) Normal 1100.0 11.0 By (mm) Normal 600.0 6.0 H2 (mm) Normal 800.0 8.0 B, (mm) Normal 450.0 4.5 H3 (mm) Normal 600.0 6.0 142 CHAPTER 6 SEISMIC RELAffilLITY ANALYSES: CASE STUDIES Table 6.17 Case study 2: Reliability index for ultimate limit state Performance function Neural networks IS MCS G, = 0.800-D20 2.727 (2.768) 2.884 (2.874) G2 = 0.010 -e15 2.057 (2.086) 2.160 (2.162) G3 =0.010-e5 2.298 (2.333) 2.395 (2.394) G4=M0-M 2.543(2.571) 2.648 (2.623) G5=V0-V 2.322 (2.345) 2.442 (2.443) 6.5 Case Study 3: A Bridge Bent Without or With Seismic Isolation 6.5.1 Description of the structure A bridge bent without or with seismic isolation was studied for its seismic performance. The geometry of the bridge with four Lead Rubber Bearing (LRB) isolators is shown in Figure 6.5. The two round columns have a diameter D (mm). The height of the cap beam from the ground is 8 m, with a rectangular section BxH (B is fixed to D + 500, H = 1500 mm). The bearings have a square section (width Br) with a round lead plug of diameter Br I 4, and their height is assumed to be 0.4 Br The reinforcement ratios of the column and beam are assumed 1.25% and 1% respectively. 6.5.2. Construction of the response databases In the case without isolation, five variables were selected as the input variables, namely, peak ground acceleration A, predominant ground frequency cog, strong motion duration Td (Figure 6.6), column diameter D, and vertical load on the bearing Q. In the case with isolation, a sixth variable, the width of the isolators Br was added. The lower bounds and the upper bounds of the variables are given in Table 6.18. 143 CHAPTER 6 SEISMIC RELAIBILrTY ANALYSES: CASE STUDIES Six variables were chosen as the output variables, ie., the maxima of displacement at the cap beam A, column base moment Mc, column base shear Vc, column ductility beam moment Mh, and beam ductility jub. Isolator, width Br Figure 6.5 Bridge bent with isolation Figure 6.6 Modulation function Table 6.18 Case study 3: Input variable jounds Input variable Lower bound Upper bound Ag (cm/sec2) 20 1960 cog (rad/sec) 71 1071 Td (sec) 1 60 D (mm) 1500 2100 Q (KN) 1200 3600 Br (mm) 500 1000 Optimized Latin Hypercube Design was adopted to generate 200 combinations of the input variables, including all the data points on the boundary. 
For every combination of the input variables, twenty artificial earthquake accelerograms (characterized by the three ground motion parameters, ie., Ag, cog and Td but with different phases) were generated and the program CANNY was run to compute the corresponding responses. Then, for each response, 144 CHAPTER 6 SEISMIC RELAJBILITY ANALYSES: CASE STUDIES its mean and standard deviation were calculated based on the values for the twenty artificial ground accelerograms. Finally, for every response, two response databases were constructed, one for its mean and the other for its standard deviation. Appendix C shows just the first 10 combinations and the corresponding responses. The CANNY tri-linear model CA7 was also employed here to simulate the hysteresis behavior of the reinforced concrete members. The degrading bilinear model BL2 was adopted to describe the nonlinear behavior of the isolators, with the hysteresis skeleton curve shown in Figure 6.7. The shear modulus of rubber is taken as 1.0 MPa, with the yield strength of the lead plug set to 10.0 MPa (Priestley and Calvi, 1996). The yield displacement was assumed 10% of the isolator height, and the post-yield stiffness was taken as 1/3 of the initial stiffness. 6.5.3. Reliability assessment 6.5.3.1 Neural networks training Twelve neural networks were trained for the mean and standard deviation of the six responses. Of the 200 combinations, 160 were used for training and 40 were used for testing. The number of hidden neurons and the network RMSREs for the twelve responses are given in Table 6.19 for the bridge without isolation and Table 6.20 for the bridge with isolation. The relative error statistics for every response are shown in Table 6.21. 145 CHAPTER 6 SEISMIC RELAIBILITY ANALYSES: CASE STUDIES Figure 6.7 Degrading bilinear model (Li, 1996) Table 6.19 Case si tudy 3: Neuron numbers and network RM SREs (without isc Response Neuron number Training Testing A 12 0.018 0.024 5 0.026 0.039 8 0.012 0.024 12 0.027 0.033 K 9 0.014 0.028 6 0.045 0.060 9 0.020 0.040 3 0.024 0.030 10 0.019 0.025 8 0.040 0.046 Mb 4 0.024 0.025 10 0.033 0.030 Where A,SA denote the mean and standard deviation of displacement A; Mc,SMc denote the mean and standard deviation of column moment Mc; 146 CHAPTER 6 SEISMIC RELAIBILrTY ANALYSES: CASE STUDIES Vc,SVc denote the mean and standard deviation of column shear force Ve; Jic^^ denote the mean and standard deviation of column rotational ductility pc ; Mb,Sm denote the mean and standard deviation of beam moment Mb; Mb>Sfi> denote the mean and standard deviation of beam rotational ductility jub; Table 6.20 Case study 3: Neuron number and network RMSREs (with isolation) Response Neuron number Training Testing A 10 0.017 0.020 11 0.021 0.034 10 0.019 0.030 7 0.046 0.044 K 9 0.015 0.021 8 0.041 0.040 ~Pc 12 0.016 0.022 7 0.026 0.029 Mb 7 0.019 0.031 10 0.025 0.041 Mb • 10 0.015 0.034 Spb 7 0.032 0.040 Two limit states were investigated that correspond to two performance levels, serviceability limit state and ultimate limit state. With the responses fitted to a Lognormal distribution, they can be calculated as follows, A = expl 7+ ^ i R„ i 11 In 7 + \U AJ J (6.26a) 147 CHAPTER 6 SEISMIC RELAIBILITY ANALYSES: CASE STUDIES M = 1 + exp\ R. - In ( (S ^ J + \ ^Mc (6.26b) Vc = 7 + exp\ R, 7 + (6.26c) 7 + ex/? s 2^ In 1 + fJC [ i I J J (6.26d) 7 + ex/? 7? 1 2\ (6.26e) Mb 1 + Mb J exp\ R 1 In 7+ ^ (6.26f) In the above, Rn is a random variable with standard normal distribution. 
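Both response models used in this chapter amount to a simple inverse-transform sampling of each peak response from the surrogate-predicted mean and standard deviation. A minimal sketch follows; the displacement values are illustrative placeholders for the network predictions, and the Extreme Value Type-I form is the same transform used for the earlier case studies.

```python
# Sketch of the response representations above: given the mean and standard
# deviation predicted by the surrogate networks, a peak response is expressed
# either as a lognormal variable, Eq. (6.26), or as an Extreme Value Type-I
# (Gumbel, largest value) variable via its inverse transform.
import numpy as np

EULER_GAMMA = 0.5772

def lognormal_response(mean, std, r_n):
    """Eq. (6.26): r_n is a standard normal random variable."""
    d2 = (std / mean) ** 2
    return mean / np.sqrt(1.0 + d2) * np.exp(r_n * np.sqrt(np.log(1.0 + d2)))

def ev1_response(mean, std, p):
    """Extreme Value Type-I inverse transform: p is uniform on [0, 1]."""
    return mean - np.sqrt(6.0) * std / np.pi * (EULER_GAMMA + np.log(-np.log(p)))

rng = np.random.default_rng(4)
mean_delta, std_delta = 0.030, 0.012   # e.g. cap-beam displacement (m), from the networks
samples_ln = lognormal_response(mean_delta, std_delta, rng.standard_normal(100_000))
samples_ev1 = ev1_response(mean_delta, std_delta, rng.random(100_000))
# Both sample sets reproduce the prescribed mean and standard deviation.
print(samples_ln.mean(), samples_ln.std(), samples_ev1.mean(), samples_ev1.std())
```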
148 CHAPTER 6 SEISMIC RELAIBILrTY ANALYSES: CASE STUDIES Table 6.21 Case study 3: Neural network training relative error statistics Relative error Without isolation With isolation Mean Std dev Mean Std dev e(A) 0.0000 0.0220 0.0017 0.0176 e(SJ -0.0004 0.0305 -0.0017 0.0265 s(Mc) -0.0017 0.0164 -0.0011 0.0213 *(SMc) -0.0036 0.0289 -0.0021 0.0460 e(Vc) -0.0021 0.0179 0.0011 0.0162 B(SVC) -0.0018 0.0632 -0.0015 0.0406 e(Hc) -0.0003 0.0254 0.0019 0.0176 0.0042 0.0269 0.0014 0.0270 s(Mb) -0.0027 0.0235 0.0044 0.0235 e(Sm) -0.0020 0.0433 -0.0021 0.0460 -0.0019 0.0244 0.0036 0.0199 e(Spb) 0.0010 0.0321 -0.0019 0.0336 Type-I distribution, then (6.27a) (6.27b) (6.27c) (6.27d) (6.27e) (6.27f) As in previous examples, if the responses were fitted to an Extreme they could be calculated as, A = A -^-^-fr + ln(-lnp)J n — JEs Mc =MC-2L-ML[r + in(-inp)j 7t V<=Vc-^-*-[y + ln(-lnp)] n Mc=Mc~ ^Sf* f / + In(-lnp)] 7t — J6S Mb =Mb-?-^L[y + ln(-lnp)] n Mb =Mb —f/ + ln(-lnp)J n 149 CHAPTER 6 SEISMIC RELAIBILITY ANALYSES: CASE STUDIES where Euler constant y = 0.5772; and p is a random variable with uniform distribution over the interval [0,1]. 6.5.3.2 Two levels of earthquakes Two levels of earthquake were considered in this case. One was a minor earthquake for serviceability limit state, and the other was a major earthquake for ultimate limit state. (1) The earthquake for serviceability limit state Assume that occurrence of a minor earthquake can be characterized by a Poisson process with arrival rate of v = 0.05/year, and the probability of exceedance of the design earthquake ad in 50 years is 50% (annual probability of exceedance 0.013767), then Pa(a>ad) = 1.0-exp(-vPe (a >ad)) = 0.013 767 (6.16) or Pe(a>ad)=°°13862 = 0.277253 (6.28) ' " 0.05 The corresponding Normal variate for the event is Be = 0.591 For this earthquake, assume its peak acceleration has a Lognormal distribution and coefficient of variation 0.6, and that the design earthquake is set to ad = 180 cm/sec2, then °lna = yll"(l + Va2) = 0.6, or Va = 0.658 (6.3) Since ad = ° exp(Be^ln(l + V2)) = 180 cm/sec2 (6.29) 180jl + V2 215 499 , So a = — ; = = 151 cm/sec2 (6.30) exp(Bejln(l + V2 )) exp( 0.591* 0.6 ) 150 CHAPTER 6 SEISMIC RELAIBILrTY ANALYSES: CASE STUDIES And aa = aVa = 151 x0.658 = 99 cm/sec2 (6.31) Hereby, the earthquake peak ground acceleration has a Lognormal distribution with mean 151 cm/sec2 and standard deviation 99 cm/sec2. (2) The earthquake for ultimate limit state Assume that occurrence of earthquake can be modeled by a Poisson process with arrival rate of v = 0.01/year, and the design earthquake with a return period of 475 years is ad = 440 cm/sec2, then the annual risk is given by, PJa>ad) = 1.0-exp(-vPe(a > ad)) =1/475 (6.1) or . 2.10748234e-3 n^innAO ,£ n. P„(a>ad) = = 0.210748 (6.2) 0.01 The corresponding Normal variate for the event is 6e = 0.803. &,na = ylln(l + Va2) = 0.6, or Va = 0.658 (6.3) Since ad = ° exp(0eJln(l + V2)) = 440 cm/sec2 (6.32) So a = 44°f^ = 526'77 = 325cm/sec2 (6.33) exp(Bjln(l + Va2)) exp(0.803* 0.6) And aa = aVa =325x0.658 = 214 cm/sec2 (6.34) Hence, the earthquake peak ground acceleration has a Lognormal distribution with mean 325 cm/sec2 and standard deviation 214 cm/sec2. 151 CHAPTER 6 SEISMIC RELAIBILITY ANALYSES: CASE STUDIES 6.5.3.3 Reliability assessment for serviceability limit state The displacement at the cap beam was checked against the limit that was set as 1/200 of the height. 
The distributions and statistics of the input variables were given in Table 6.22, with the performance function in the following form, G = 8.0/200 -A(Ag,a)g,Ts,D,Q) (6.35a) or G = 8.0/800-A(Ag,(og,Ts,D,Q) (6.35bwhere A denotes the cap beam lateral displacement. Table 6.22 Case study 3: Input variable probability distributions and statistics (Serviceability limit si .ate) Input variable Distribution Mean Standard deviation Ag (cm/sec2) Lognormal 151.0 99.0 cog (rad/sec) Normal 571 71 Ts (sec) Normal 20 5 /J(mm) Normal 1800 90 Q(KN) Normal 2400 240 Br (mm) Normal 750 7.5 (1) Bridge bent without isolation In this case, reliability analysis was conducted by IS and MCS, with responses calculated by neural networks and a Local interpolation scheme. The results are given in Table 6.23. Table 6.23 Case study 3: Reliability index for serviceability limit state without isolation Performance function IS MCS NN LI NN LI G = 8.0/200-A 2.101(2.096) 1.526(1.517) 2.104(2.101) 1.536(1.510) Note: The values in parentheses are based on Extreme Value-I distribution 152 CHAPTER 6 SEISMIC RELAIBILrTY ANALYSES: CASE STUDIES (2) Bridge bent with isolation In this case, reliability analysis was also carried out with IS and MCS, and the results were presented in Table 6.24, where the mean of Br was set to 750 mm with a COV of 0.01. Table 6.24 Case study 3 Reliability index for serviceability limit state with isolation Performance function IS MCS NN LI NN LI G = 8.0/200-A 3.760(3.719) 3.471(N/A) 3.860(3.812) 3.800(3.823) G = 8.0/800 -A 2.063(2.067) 1.896(2.002) 2.064(2.068) 1.882(1.872) The sensitivity of reliability index (G = 8.0/800 - A) with respect to the mean of Br was investigated and plotted in Figure 6.8, in which the COV of Br was assumed 0.01. It can be observed from Table 6.23 and Table 6.24 that, for serviceability limit state, the reliability level of the isolated bridge where the lateral displacement limit is set to 1/800 of its height, is about the same as that of the non-isolated bridge where the lateral displacement limit is set to 1/200 of its height; hence, seismic isolation can greatly improve the bridge performance. The analysis using neural networks takes less time compared to Local Interpolation, as Local Interpolation involves searching the whole database and ranking the closest neighbors to the query point. It can be seen from Table 6.23 and 6.24 and Figure 6.8 that the reliability indices are approximately the same, whether the response is fitted to Lognormal or Extreme Value-I distributions. Figure 6.8 shows that the reliability index decreases as the isolator mean width increases. This is explained by the fact that, as the isolator mean width increases, so does its 153 CHAPTER 6 SEISMIC RELAJBEJTY ANALYSES: CASE STUDIES stiffness; therefore, more inertial force is transmitted to the bridge bent from the deck, which results in larger displacement. 3 A 1 1 1 1 1 500 600 700 800 900 1000 Isolator mean width B (mm) Figure 6.8 Variation of reliability index with Br (mm) 6.5.3.4 Reliability assessment for ultimate limit state Strong earthquakes were applied for reliability analysis of the ultimate limit states. The probability distributions and statistics of the input variables are given in Table 6.25. (1) Bridge bent without isolation In this case, five performance functions were evaluated regarding cap beam displacement, column moment, column shear, column ductility, and beam moment. 
They are given below, Table 6.25 Case study 3: Input variable probability distributions and statistics (Ultimate limit state) Input variable Distribution Mean Standard deviation Ag (cm/sec2) Lognormal 325.0 214.0 o)g (rad/sec) Normal 571 Td (sec) Normal 30 5 /J(mm) Normal 1800 90 g(KN) Normal 2400 240 Br(mm) Normal 750 7.5 154 CHAPTER 6 SEISMIC RELAIBILrTY ANALYSES: CASE STUDIES G, = 0.2-A(Ag,cog,Td,D,Q) (6.36a) G2 =MU -Mc(Ag,cog,Td,D,Q) (6.36b) G3 =VU -Vc(Ag,a>g,Td,D,Q) (6.36c) G4=p0-pc(Ag,cog,Td,D,Q) (6.36dGs =My -Mb(Ag,cog,Td,D,Q) (6.36e) where A denotes the cap beam lateral displacement; Mu and Mc denote the column ultimate moment capacity and seismic demand; V„ and Vc denote the column ultimate shear capacity and seismic demand; u,0 and u.c denote the column hinge rotational ductility capacity and seismic demand; My and Mb denote the beam yield moment capacity and seismic demand; In the above equations, the cap beam lateral displacement limit was set to 2.5% of the height, and the assumed statistics of other variable are presented in Table 6.26. Table 6.26 Case study 3: Random variable probability distributions and statistics Random variable Distribution Mean Standard deviation Mu (KNm) Normal 8900.0 445.0 Vu (KN) Normal 2250.0 112.25 Ho Normal 12.0 1.2 Mv (KNm) Normal 16000.0 800.0 Importance Sampling and Monte Carlo simulation were conducted for reliability calculation, with the responses fitted to Lognormal distribution and estimated by neural networks. The results are presented in Table 6.27. 155 CHAPTER 6 SEISMIC RELAIBELrTY ANALYSES: CASE STUDIES Table 6.27 Case study 3: Reliability index for ultimate limit state without isolation Performance function IS MCS Gj = 0.2-A 2.368 2.438 G2=MU-MC 2.255 2.589 G,=K-Ve 2.242 2.506 2.265 2.283 G5=My-Mb 5.218 Not done (2) Bridge bent with isolation Reliability analysis was also carried out with Importance Sampling and Monte Carlo simulation, with the responses estimated by neural networks. The results are presented in Table 6.28, where the mean of Br was set to 750 mm with a coefficient of variation 0.01. Table 6.28 Case study 3: Reliability index for ultimate limit state with isolation Performance function Reliability index G, = 0.2-A 3.606 (MCS) G2=MU-MC 3.588 (MCS) G3=VU-VC 3.716 (MCS) G4=M0-Mc 6.000* (IS) G5=My-Mb 5.123" (IS) Note: * number of samples = 5000000, coefficient of variation of probability of failure = 36.55% ** number of samples = 5000000, coefficient of variation of probability of failure = 18.62% For performance functions G4 and G5, as the reliability indices are very high, even Importance Sampling simulation with sample size 5,000,000 yielded poor reliability estimates. 156 CHAPTER 6 SEISMIC RELAIBILrTY ANALYSES: CASE STUDIES The sensitivity of reliability index to the mean of Br was investigated (G, =0.2-A) and plotted in Figure 6.9, in which the coefficient of variation of Br was assumed as 0.01 of its mean. 3 .j , , , , 1 500 600 700 800 900 1000 Isolator mean width (mm) Figure 6.9 Variation of reliability index with isolator width mean Br To achieve the same level of reliability as the non-isolated case, the displacement limit was set to 1/250 of the height, and the assumed statistics of other random variables were modified as given in Table 6.29. The results of reliability analysis are presented in Table 6.30. 
Table 6.29  Case study 3: Random variable probability distributions and statistics
Random variable   Distribution   Mean      Standard deviation
Mu (KNm)          Normal         6230.0    311.5
Vu (KN)           Normal         1668.75   83.4375
μ0                Normal         1.5       0.15
My (KNm)          Normal         12000.0   600.0

Table 6.30  Case study 3: Reliability index for ultimate limit state with isolation
Performance function   IS      MCS
G1 = 0.032 − Δ         2.505   2.569
G2 = Mu − Mc           2.184   2.410
G3 = Vu − Vc           1.734   2.530
G4 = μ0 − μc           2.091   2.124
G5 = My − Mb           4.754   Not done

The above calculations show that, for the ultimate limit state, seismic isolation can significantly improve the seismic performance of the bridge bent by reducing the inertial force transmitted to the pier from the deck. In both cases, the bending capacity of the cap beam is far greater than the seismic demand. Comparing Table 6.27 with Table 6.30, it can be seen that, to achieve the same level of reliability, the capacities in the isolated bridge can be reduced to different extents for the different performance criteria. As discussed earlier, the reliability index decreases as the isolator mean width increases, for the reason explained in connection with Figure 6.8. This example is similar to the bridge design in Dicleli (2002), where hybrid seismic isolation was used.

6.6 Case Study 4: A Wood Shear Wall

6.6.1 Description of the structure

Wood shear walls are typically used for residential construction in North America (Figure 6.10). A wood shear wall is composed of framing members and sheathing panels, connected to the framing members by means of nails or screws. In this case study, the wall under investigation has a height of 2.4 m and a width of 2.4 m, with 12 mm thick Oriented Strand Board (OSB) sheathing panels on one side and vertical elements (studs) spaced at 400 mm. The sheathing panels are connected to the frame using common 50 mm long nails.

Figure 6.10  Wood shear wall construction (framing members, sheathing panels and fasteners)

6.6.2 Random variables

The response of a wood shear wall during an earthquake depends on several factors: (1) the characteristics of the ground shaking, such as the peak ground acceleration, duration and frequency content; (2) the mass carried by the wall; (3) the nail and its interaction with the wood medium; and (4) the nail spacing around the periphery of the wall and in the interior of the wall. In this case study, four variables were selected as the input variables, namely, the peak ground acceleration Ag, the mass on the wall M, the nail spacing along the perimeter e1 and the nail spacing in the interior e2, with the probability distributions and statistics given in Table 6.31.

Table 6.31  Case study 4: Random variable probability distributions and statistics
Random variable   Distribution   Mean    Standard deviation
e1 (m)            Normal         0.050   0.0050
e2 (m)            Normal         0.120   0.0120
M (KN·sec²/m)     Normal         6.0     0.6
Ag (m/sec²)       Lognormal      0.927   0.556

The earthquake considered was that of Landers, Joshua Tree Station, 1992, with its peak acceleration adjusted according to the statistics of Table 6.31. The distribution of Ag is consistent with a design acceleration of 0.25g at a return period of 475 years, with the earthquake assumed to occur, on average, once every 10 years, and the coefficient of variation of Ag assumed to be 0.6.
6.6.3 Performance evaluation

Two responses were selected to evaluate the structural performance: the drift at the top of the wall, Δ, and the nail tearing force, V. A total of 131 combinations of the input variables was generated, and the structural responses were calculated with DAP3D, a software package developed at the University of British Columbia for three-dimensional analysis of wood frame structures. This software performs nonlinear dynamic analysis of an arbitrary wood frame structure, taking into account the hysteretic behavior of the nails in the wood medium. Appendix D shows the first 10 combinations and the corresponding responses. Based on the response databases, two neural networks were developed, one for each response. Of the 131 examples, 104 were used for training and 27 for testing. The hidden neuron numbers and network RMSREs for the two responses are given in Table 6.32, with the relative error statistics for each response shown in Table 6.33.

Table 6.32  Case study 4: Neuron number and network RMSREs
Response   Neuron number   Training   Testing
Δ          13              0.011      0.029
V          6               0.045      0.045

Table 6.33  Case study 4: Neural network training relative error statistics
Response   Mean     Standard deviation
e(Δ)       0.0034   0.0163
e(V)       0.0005   0.0453

The performance criteria were embodied in the following performance functions,

G1 = H/200 − Δ(e1, e2, M, Ag)    (6.37a)
G2 = V0 − V(e1, e2, M, Ag)       (6.37b)

where V0 is the nail force capacity in terms of sheathing edge tearing which, from tests, was assumed Normal with a mean of 1.05 KN and a standard deviation of 0.105 KN. Reliability analysis was conducted with Importance Sampling and Monte Carlo simulation, with the responses estimated by neural networks. The results are presented in Table 6.34.

Table 6.34  Case study 4: Reliability indices for the wood shear wall
Performance function              IS      MCS
G1 = H/200 − Δ(e1, e2, M, Ag)     2.663   2.778
G2 = V0 − V(e1, e2, M, Ag)        2.524   2.843

The effects of e1 and e2 on the reliability index associated with performance function G1 = H/200 − Δ(e1, e2, M, Ag) were investigated by varying e1 or e2 independently while keeping the distributions of the other variables unchanged, as shown in Figure 6.11 and Figure 6.12.

Figure 6.11  Variation of reliability index with respect to the mean of e1
Figure 6.12  Variation of reliability index with respect to the mean of e2

It can be observed from these figures that as the mean of e1 changed from 0.070 m to 0.010 m (a reduction of 85.7%), the reliability index increased from 2.578 to 2.814 (a growth of 9%); whereas the reliability index increased from 2.647 to 3.511 (a growth of 32.6%) as the mean of e2 changed from 0.125 m to 0.025 m (a reduction of 80%). It therefore appears more effective to improve reliability by decreasing the nail spacing in the interior of the wood shear wall.

6.7 Case Study 5: An Instrumented Structure for Earthquake Response Measurement

6.7.1 Description of the structure

This example structure is an actual building, a Holiday Inn located in the city of Van Nuys, California. It is a seven-story reinforced concrete frame-slab structure with a height of about 20 m, with 8 bays in the longitudinal direction and 3 bays in the transverse direction. The typical floor plan is about 19 m by 46 m, as shown in Figure 6.13.
It was designed and built in the 1960s, and has since experienced three earthquakes, i.e., the 1971 San Fernando, 1987 Whittier and 1994 Northridge events. Instruments operated by the California Strong Motion Instrumentation Program (CSMIP) recorded the structural responses during these earthquakes; for details, readers are referred to Rahmatian (1997). The aim of this study is to investigate the influence of the different components of ground motion on structural performance, and to compare the seismic performance of the structure before and after a seismic retrofit. The retrofit strategy suggested here consists of steel cross-brace dampers along longitudinal axes A and D, between transverse axes 4 and 6, and along transverse axes 1 and 9, between longitudinal axes B and C.

Figure 6.13  A typical floor plan of the Holiday Inn (Ventura et al., 2002), showing the 450 mm square interior concrete columns (typ.), the 216 mm concrete slab (typ.), and the spandrel beams and exterior concrete columns at the periphery

6.7.2 Ground motions

It was assumed that the building was subjected to the same ground motions recorded at the ground floor level during the Northridge earthquake (CSMIP record channels 1, 13, 15 and 16). The ground motions had four components: in the longitudinal direction, the transverse direction, the vertical direction, and a rotational component in the horizontal plane. The peak values for the longitudinal, transverse, vertical and rotational components were, respectively, 444.5 cm/sec², 408.9 cm/sec², 295.2 cm/sec² and 0.0954 rad/sec², with a sampling interval of 0.02 sec. For application purposes, the peak values were scaled to 1.0 or −1.0, and the records are plotted in Figure 6.14.

Figure 6.14  Holiday Inn earthquake ground motions, Northridge 1994: (a) longitudinal, (b) transverse, (c) vertical and (d) rotational accelerograms

(1) The longitudinal and transverse components

The longitudinal and transverse components are assumed to have the same peak design value. Assuming that the occurrence of earthquakes can be modeled by a Poisson process with an arrival rate of ν = 0.05/year, and that the design earthquake acceleration ad with a return period of 475 years is 4.400 m/sec²,

Pa(a > ad) = 1.0 − exp(−ν·Pe(a > ad)) = 1/475    (6.1)

from which the probability of exceeding the design acceleration during an event is

Pe(a > ad) = −ln(1 − 1/475)/ν = 2.10748×10⁻³ / 0.05 = 0.042149646    (6.2)

The corresponding standard Normal variate for the event is Be = 1.726. Assuming that the peak acceleration has a Lognormal distribution with a logarithmic standard deviation σln a = 0.6, the corresponding coefficient of variation is

σln a = √(ln(1 + Va²)) = 0.6,  or  Va = 0.658    (6.3)

ad = [ ā / √(1 + Va²) ] · exp(Be·√(ln(1 + Va²))) = 4.400 m/sec²    (6.38)

ā = ad·√(1 + Va²) / exp(Be·√(ln(1 + Va²))) = 5.268 / exp(1.726 × 0.6) = 1.870 m/sec²    (6.39)

σa = ā·Va = 1.870 × 0.658 = 1.230 m/sec²    (6.40)

Thus, the postulated peak horizontal ground acceleration during events has a Lognormal distribution with a mean of 1.870 m/sec² and a standard deviation of 1.230 m/sec².
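The chain of Equations (6.1)–(6.3) and (6.38)–(6.40) can be verified numerically. The short script below, written under the same assumptions (Poisson arrival rate ν = 0.05/year, a 475-year design acceleration of 4.400 m/sec² and σln a = 0.6), reproduces the values quoted above:

import math
from scipy.stats import norm

nu = 0.05              # earthquake arrival rate (events/year)
a_d = 4.400            # design peak acceleration at a 475-year return period (m/s^2)
sigma_ln = 0.6         # standard deviation of ln(a), Eq. (6.3)

# Eqs. (6.1)-(6.2): probability of exceeding a_d in a single event
P_e = -math.log(1.0 - 1.0 / 475.0) / nu            # = 0.042149...
B_e = norm.ppf(1.0 - P_e)                          # = 1.726, standard Normal variate

# Eq. (6.3): coefficient of variation consistent with sigma_ln = 0.6
V_a = math.sqrt(math.exp(sigma_ln ** 2) - 1.0)     # = 0.658

# Eqs. (6.38)-(6.40): mean and standard deviation of the per-event peak acceleration
a_mean = a_d * math.sqrt(1.0 + V_a ** 2) / math.exp(B_e * sigma_ln)   # = 1.870 m/s^2
a_std = a_mean * V_a                                                  # = 1.230 m/s^2
print(f"P_e = {P_e:.6f}, B_e = {B_e:.3f}, V_a = {V_a:.3f}, "
      f"mean = {a_mean:.3f} m/s^2, std = {a_std:.3f} m/s^2")

Replacing a_d by 2.930 m/sec² or 0.100 rad/sec² reproduces the vertical and rotational statistics derived next.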
(2) The vertical component

The vertical component is assumed to have a peak design value equal to 2/3 of the horizontal peak value. Again, assuming that the occurrence of earthquakes can be modeled by a Poisson process with an arrival rate of ν = 0.05/year, and that the design earthquake acceleration ad with a return period of 475 years is 2.930 m/sec², Equations (6.1) and (6.2) give, as before, Pe(a > ad) = 0.042149646, and the corresponding standard Normal variate for the event is Be = 1.726. Assuming again a Lognormal distribution of the peak acceleration with σln a = 0.6 (Va = 0.658, Equation (6.3)),

ad = [ ā / √(1 + Va²) ] · exp(Be·√(ln(1 + Va²))) = 2.930 m/sec²    (6.41)

ā = ad·√(1 + Va²) / exp(Be·√(ln(1 + Va²))) = 3.508 / exp(1.726 × 0.6) = 1.245 m/sec²    (6.42)

σa = ā·Va = 1.245 × 0.658 = 0.819 m/sec²    (6.43)

Thus, the postulated peak vertical ground acceleration during an event has a Lognormal distribution with a mean of 1.245 m/sec² and a standard deviation of 0.819 m/sec².

(3) The rotational component

Following the same procedure and assuming a design peak rotational acceleration of 0.1 rad/sec², the postulated peak rotational ground acceleration during an event has a Lognormal distribution with a mean of 0.0425 rad/sec² and a standard deviation of 0.0280 rad/sec².

6.7.3 Random variables

A structural analysis model had been calibrated by Ventura et al. (2002), so only the uncertainties associated with the ground motions are considered here. The peak values of the four ground motion components, Agx, Agy, Agz and Agr, were selected as the random variables for the structure before seismic retrofit; an additional random variable, the hysteretic damper sectional area Ad, was added for the structure after seismic retrofit. The variable bounds before and after retrofit are given in Tables 6.35 and 6.36, and the distributions and statistics of the random variables are listed in Table 6.37.

Table 6.35  Case study 5: Input variable bounds (before retrofit)
Random variable    Lower bound   Upper bound
Agx (m/sec²)       0.100         4.900
Agy (m/sec²)       0.100         4.900
Agz (m/sec²)       0.100         4.900
Agr (rad/sec²)     0.005         0.120

Table 6.36  Case study 5: Input variable bounds (after retrofit)
Random variable    Lower bound   Upper bound
Agx (m/sec²)       0.100         9.000
Agy (m/sec²)       0.100         9.000
Agz (m/sec²)       0.100         9.000
Agr (rad/sec²)     0.002         0.200
Ad (mm²)           1500          15000

Table 6.37  Case study 5: Input variable probability distributions and statistics
Random variable    Distribution   Mean     Standard deviation
Agx (m/sec²)       Lognormal      1.870    1.230
Agy (m/sec²)       Lognormal      1.870    1.230
Agz (m/sec²)       Lognormal      1.245    0.819
Agr (rad/sec²)     Lognormal      0.0425   0.0280
Ad (mm²)           Normal         9000     90

6.7.4 Performance evaluation

6.7.4.1 The structure before seismic retrofit

(1) Reliability analysis
Finally, for each response, a neural network was trained and used for reliability assessment. Of the 80 examples, 64 were used for training and 16 were used for testing. The hidden neuron number and network RMSREs for the two responses are given in Table 38, with the relative error statistics for every response shown in Table 6.39. Table 6.38 Case study 5: Neural network training RMSREs (before retrofit) Response Neuron number Training Testing A, 4 0.022 0.023 D, 6 0.030 0.035 Table 6.39 Case study 5: Neural network training error statistics (before retrofit) Response Mean Standard deviation Dx -0.0001 0.0222 0.0020 0.0315 where Dx denotes roof longitudinal displacement; Dy denotes roof transverse displacement; The roof displacements were evaluated against the serviceability limit state and the life safety limit state. For the serviceability limit state with drift limit of 1/200 of its height, the following performance functions were used, 169 CHAPTER 6 SEISMIC RELAffilUTY ANALYSES: CASE STUDIES G, = 0.100-Dx(Agx,Agy,Agz,Agr) G2 = 0.100-D/A^.Agy.A^.A^) (6.44a) (6.44b) and for the limit state of life safety with drift limit of 1/100 of building height, the performance functions were, G^O.IOO-DJA^A^A^) G2 =0.200-Dy(Agx,Agy,Ag2,Agr) (6.45a) (6.45b) The reliability analysis was conducted by Importance Sampling and Monte Carlo Simulation, with structural responses estimated using neural networks and Local Interpolation. The results were given in Table 6.40 for serviceability limit state and Table 6.41 for life safety limit state. Table 6.40 Case study 5: Serviceability reliability indices (before retrofit) Performance function IS (104) MCS (105) NN LI NN LI G^O.IOO-DJA^A^A^AJ 0.385 0.282 0.414 0.365 G2=0.100-D/Agx,Agy,Agz,Agr) -0.038 0.098 -0.066 0.122 CPU time (sec) 10 15 25 70 Table 6.41 Case study 5: Life safety re iability indices (before retrofit) Performance function IS (105) MCS (106) NN LI NN LI G^OJOO-DJA^A^A^) 1.644 1.577 1.821 1.815 G2 =0.200-Dy(Agx,Agy,Agz,Agr) 0.914 1.079 1.363 1.313 CPU time (sec) 50 135 195 690 170 CHAPTER 6 SEISMIC RELAIBILITY ANALYSES: CASE STUDIES From the tables, it seems that for the serviceability limit state, the reliability in the longitudinal direction is higher than that of the transverse direction. The low reliability indices show that the structure is relatively flexible, as it is a column flat slab system with low lateral stiffness. The reliability prediction based on Neural Networks is a somewhat greater than that from Local Interpolation, with the latter taking more time. As it takes 2525 seconds to run CANNY once, reliability assessment using Monte Carlo simulation by integrating RELAN and CANNY would take 2525x105 seconds (2922 days) on a Pentium III 500 MHz PC. For the life safety limit state, again, the reliability index in the longitudinal direction is higher than that of the transverse direction. The reason might be that the transverse direction has a larger radius of rotation, which is susceptible to strong rotational ground motion. Neural Networks takes far less time than Local Interpolation in reliability calculation. A direct Monte Carlo simulation using CANNY dynamic analysis would take 2525x106 seconds (29224 days) on a Pentium III 500 MHz PC. Neural Networks exhibit robustness compared to Local Interpolation. Both are utilized with Importance Sampling, after the estimation of a "design point", as described. 
When the reliability is high, however, the design point can be far from the mean value point, which renders Local Interpolation less effective, since there may be few data points around the design point.

(2) Influences of the different components of ground motion

The influence of each variable on the structural performance was studied by varying its mean while keeping its coefficient of variation constant at 0.66, with the statistics of the other variables unchanged. The results are shown in Figure 6.15, where the solid line corresponds to performance function G1 = 0.200 − Dx and the dashed line to G2 = 0.200 − Dy.

Figure 6.15  Variation of reliability index with respect to the ground motion components: (a) Agx, (b) Agy, (c) Agz, (d) Agr

It can be seen that the peak longitudinal acceleration has a great influence on the longitudinal response, while the peak transverse acceleration has a significant effect on the transverse response. The peak vertical and rotational accelerations have no obvious impact on the longitudinal response, though they have a slight effect on the transverse response.

6.7.4.2 The structure after seismic retrofit

(1) Reliability analysis

After the Northridge earthquake, the Holiday Inn suffered different levels of damage and was subsequently repaired. Several options were available for seismic retrofit, such as adding shear walls, installing steel frames, upgrading with base isolation, or providing energy dissipation devices. In this study, steel cross-brace dampers with hysteretic damping (Huang et al., 2002) were used as the retrofit scheme, as they add little weight to the structure and are simple to erect. The yield strength was assumed to be 350 MPa.

A total of 96 combinations of the five input random variables was generated by Optimized Latin Hypercube Design (Appendix E), and CANNY was run for each case. Finally, a neural network model was built for each response and used for the reliability analysis. Of the 96 examples, 76 were used for training and 20 for testing. The hidden neuron numbers and network training RMSREs for the two responses are given in Table 6.42, with the relative error statistics for each output variable shown in Table 6.43.

Table 6.42  Case study 5: Neuron number and network RMSREs (after retrofit)
Response   Neuron number   Training   Testing
Dx         3               0.025      0.026
Dy         4               0.036      0.036

Table 6.43  Case study 5: Neural network training error statistics (after retrofit)
Response   Mean     Standard deviation
Dx         0.0002   0.0256
Dy         0.0001   0.0378

Reliability analysis was carried out using Monte Carlo simulation, and the variation of the reliability index with respect to the mean damper area was plotted in Figure 6.16 and Figure 6.17, where the standard deviation of the area was assumed to be 1% of its mean; a sketch of this type of parametric study is given below.
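A parametric study of this kind simply repeats the surrogate-based simulation for a sequence of candidate mean damper areas. The sketch below illustrates the loop under stated assumptions (the statistics of Table 6.37, a standard deviation of Ad equal to 1% of its mean, and the 0.100 m serviceability drift limit of the performance functions listed next); nn_roof_drift is a monotone placeholder, so it does not reproduce the non-monotonic trends of Figures 6.16 and 6.17.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)

def lognormal(mean, std, size):
    s = np.sqrt(np.log(1.0 + (std / mean) ** 2))
    return rng.lognormal(np.log(mean) - 0.5 * s ** 2, s, size)

def nn_roof_drift(Agx, Agy, Agz, Agr, Ad):
    # Placeholder surrogate for the roof drift (m): decreases with damper area.
    return 0.09 * (Agx / 1.87) / np.sqrt(Ad / 9000.0)

def beta_for_damper_area(mean_Ad, n=50_000, drift_limit=0.100):
    Agx = lognormal(1.870, 1.230, n)                 # Table 6.37 statistics
    Agy = lognormal(1.870, 1.230, n)
    Agz = lognormal(1.245, 0.819, n)
    Agr = lognormal(0.0425, 0.0280, n)
    Ad = rng.normal(mean_Ad, 0.01 * mean_Ad, n)      # sigma = 1% of the mean
    g = drift_limit - nn_roof_drift(Agx, Agy, Agz, Agr, Ad)
    return -norm.ppf(np.mean(g <= 0.0))

for mean_Ad in (1000, 3000, 5000, 7000, 9000, 11000, 13000, 15000):
    print(mean_Ad, round(beta_for_damper_area(mean_Ad), 3))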
The performance functions for the serviceability limit state were

G1 = 0.100 − Dx(Agx, Agy, Agz, Agr, Ad)    (6.46a)
G2 = 0.100 − Dy(Agx, Agy, Agz, Agr, Ad)    (6.46b)

and for the life safety limit state

G1 = 0.200 − Dx(Agx, Agy, Agz, Agr, Ad)    (6.47a)
G2 = 0.200 − Dy(Agx, Agy, Agz, Agr, Ad)    (6.47b)

Figure 6.16  Variation of reliability index with respect to the mean damper area Ad (serviceability limit state); the solid line corresponds to G1 = 0.100 − Dx and the dashed line to G2 = 0.100 − Dy

Figure 6.17  Variation of reliability index with respect to the mean damper area Ad (life safety limit state); the solid line corresponds to G1 = 0.200 − Dx and the dashed line to G2 = 0.200 − Dy

It can be seen that, in the longitudinal direction, the reliability grows steadily once the mean damper sectional area Ad exceeds about 7000 mm², although it decreases slightly for smaller areas; in the transverse direction, by contrast, the reliability increases with Ad up to about 9000 mm² and decreases thereafter. Overall, the reliability of the retrofitted structure is improved compared to that of the original structure. Since the dampers in the two directions have the same cross-sectional area throughout the height of the building, the reliability in the transverse direction begins to decrease after some point. This is due to the increasing rigidity, which results in more inertial force and compromises the benefit of increasing the damper size. To accomplish a better performance, the damper size should be optimized along the two horizontal directions as well as over the height of the building.

(2) Influences of the different components of ground motion

The influence of each variable on the structural performance was again investigated by varying its mean from −10% to +10% while keeping its coefficient of variation constant at 0.66, with the statistics of the other variables unchanged. Only the life safety limit state was considered. The results are shown in Figure 6.18, where the solid line corresponds to performance function G1 = 0.200 − Dx and the dashed line to G2 = 0.200 − Dy.

Figure 6.18  Variation of reliability index with respect to the ground motion components: (a) Agx, (b) Agy, (c) Agz, (d) Agr

It is observed that the peak longitudinal acceleration has a significant influence on the longitudinal performance, while the peak transverse acceleration affects the transverse performance substantially. The peak vertical acceleration and the peak rotational acceleration have minor effects on both the longitudinal and transverse performances.
6.8 Reliability Assessment: Summary and Conclusions In view of the many sources of uncertainties inherent in earthquake resistant design, reliability analyses need to be undertaken in the design process by properly taking into account those uncertainties. Five case studies of structural seismic reliability analyses were presented to demonstrate the applicability and effectiveness of the proposed approach. Two levels of performances, serviceability and ultimate limit state, are generally assessed for the corresponding two levels of earthquakes. The examples proved the near impossibility of performing seismic reliability assessment by means of standard Monte Carlo simulation, using nonlinear dynamic analysis directly. The case studies illustrate, instead, that the reliability assessment can be carried out, quickly and accurately, by using Designed Experiments and Neural Networks trained with databases of responses of the structural system, under probabilistic seismic ground excitation. The results also demonstrate that Neural Networks are more robust and efficient than local regression methods of interpolating responses. Powered by this tool, structural engineers can accomplish the seismic design objectives with explicit reliability, which will assure life safety and mitigate seismic risks by reducing possible damage. 178 CHAPTER 7 PERFORMANCE-BASED SEISMIC DESIGN APPLICATIONS CHAPTER 7 PERFORMANCE-BASED SEISMIC DESIGN APPLICATIONS 7.1 Introduction Performance-based seismic design requires that for multiple seismic hazards, multiple performance criteria be satisfied explicitly with specified reliability levels. Because many uncertainties are involved, performance-based design should be carried out in the framework of reliability analysis, by taking into consideration the effect of all major uncertainties, in order to achieve the pre-defined design objectives. Among all the uncertainties, the earthquake ground motion is the most important and it is not well understood. As the input to structural analysis, its appropriate characterization ultimately determines the success of a seismic design. The intricate structural response under excitation of ground shaking depends on the inelastic behavior of the structure and its connections, the influence of non-structural elements and building content, soil-structure interaction, as well as the analytical model based on assumptions and simplifications. The real response of the structure can only be estimated using stochastic nonlinear dynamic analysis. In this manner, both the uncertainties in structural capacities and seismic demands can be considered, so that seismic design can be undertaken within a transparent and realistic methodology. In the present state of practice of seismic design, the uncertainties in structural capacity and seismic demand are generally unaccounted for. The seismic capacity is usually estimated by a nonlinear static analysis (pushover analysis) in which only the fundamental mode is allowed for, with the response obtained by monotonic incremental lateral loading. The elastic 179 CHAPTER 7 PERFORMANCE-BASED SEISMIC DESIGN APPLICATIONS analysis conducted according to the codified procedure does not consider the intrinsic uncertainties and the computations do not reflect the actual structural behavior. The resulting designs may have an uncertain reliability level, certainly quite variable across design conditions. 
Performance-based design aims at improving the procedure by attempting to meet performance requirements with specified reliabilities. Thus, the design is formulated as an optimization problem in the context of reliability-based design. Stochastic nonlinear dynamic structural responses are implemented within reliability analysis when the structure is subjected to a probabilistic earthquake. Design parameters are sought by minimizing the objective function defined in Equation (5.14). There are two ways of solving this problem: a) the reliability analysis can be conducted using FORM or simulations; or b) a reliability index database can be generated in terms of the design parameters, with a neural networks model subsequently built, and used for optimization. Gradient-free algorithms will be relied on for the optimization due to the possible highly nonlinear behavior of the structure. Four applications are presented to illustrate the applicability of the proposed method. These correspond to the same examples discussed in the previous chapter, ie., a two-story reinforced concrete frame, a tall reinforced concrete building, a bridge bent without or with seismic isolation, and a wood shear wall. Advantage is taken of the response databases and trained neural networks developed beforehand. Multiple seismic hazards are considered, with multiple performance objectives. 180 CHAPTER 7 PERFORMANCE-BASED SEISMIC DESIGN APPLICATIONS 7.2 A Two-story Reinforced Concrete Frame 7.2.1 Description of the structure The structure has been discussed in Chapter 6, a two-story reinforced concrete frame. It is subjected to the same probabilistic earthquake ground shaking, and the mean weight on the roof Wj and the mean weight on the floor W2 are the design parameters to be calculated, allowing for each one a coefficient of variation of 0.05. 7.2.2 Random variables The steel yield strength fy, the concrete compressive strength f'c, the concrete elastic modulus Ec, the weight on roof W,, and the weight on floor W2, were selected as the five input variables. The response variables were the drift at the roof D, and the drift at the floor D2. The probability distributions and statistics of the input variables were as shown in Table 7.1. Table 7.1 Reinforced concrete plane frame: Input variable probability distributions and statistics Input variable Distribution Mean Standard deviation /v(MPa) Lognormal 400.0 30.0 /JOVlPa) Normal 30.0 4.5 £c(MPa) Normal 22500.0 1000.0 fF,(KN) Normal ? 0.05^ W2(KN) Normal ? 0.05W, 181 CHAPTER 7 PERFORMANCE-BASED SEISMIC DESIGN APPLICATIONS 7.2.3 Performance-based design formulation For this probabilistic strong ground shaking, the limit state of collapse prevention was evaluated, as indicated by the following two performance functions, In the above, the drift limit was set to 3% of the story height from ground. Dt and D2 were assumed Lognormally distributed, and estimated by means of neural networks that were developed in the previous chapter. The target reliability indices for the two modes were set to P| =1.800 and p2 = 1.500, respectively. 7.2.4 Results Two approaches were used for the optimization problem, depending on how 6/Xd ) was obtained. (1) Conventional approach By this approach, the reliability index is calculated each time by using standard method like FORM and Importance Sampling. The design parameters and the achieved reliability indices were found to be, Gl=0.240-Dl(fy,fc,Ec,W1,W2) (7.1a) G2=0.120-D2(fy,f'c,Ec,W1,W2) (7.1b) W,=370.5KN, B, =1.799; W2 = 426.7KN, B2 =1.500. 
182 CHAPTER 7 PERFORMANCE-BASED SEISMIC DESIGN APPLICATIONS (2) Neural network approach Two reliability index databases were constructed by running RELAN (Foschi et al, 2002) for combinations of W, and W2, while keeping the coefficients of variation constant as 0.05. Appendix F shows just the first 10 combinations of the databases. Two neural networks were then developed for the two reliability indices 6, and B2. The design parameters and the achieved reliability indices were obtained by optimization, W,=367.0KN, Bj =1.800; W2 =455.8KN, B2 =1.500. It can be seen that the two approaches produced roughly the same answer. Though building reliability index databases involves some more time, it is compensated by savings during the actual optimization. This approach using B neural networks will show its superiority as the number of design parameters increases. 7.3 A Tall Reinforced Concrete Building 7.3.1 Description of the structure This structure has been discussed in Chapter 6. It is a two-bay, twenty-story reinforced concrete frame, with story height of 4 m and span of 8 m. The beams have a constant cross section 350 mm x 700 mm. The column cross sections vary along the height of the building: from stories 1 to 7, BjxH,; from stories 8 to 14, B2 xH2; from stories 15 to 20, B3xH3. 183 CHAPTER 7 PERFORMANCE-BASED SEISMIC DESIGN APPLICATIONS 7.3.2 Random variables As before, fifteen input random variables were considered and, for clarity, they were listed below, (1) peak ground acceleration, Ag; (2) predominant ground frequency, cog; (3) earthquake strong motion duration, Td; (4) beam distributed vertical load, q; (5) steel yield strength, fy; (6) concrete compression strength for columns from story 1 to 7, fcl; (7) concrete compression strength for columns from story 8 to 14, fc2; (8) concrete compression strength for columns from story 15 to 20, fc3; (9) concrete compression strength for beams, fb; (10) cross section width of columns from story 1 to 7, B, ; (11) cross section depth of columns from story 1 to 7, H,; (12) cross section width of columns from story 8 to 14, B2; (13) cross section depth of columns from story 8 to 14, H2; (14) cross section width of columns from story 15 to 20, B3; (15) cross section depth of columns from story 15 to 20, H3; 184 CHAPTER 7 PERFORMANCE-BASED SEISMIC DESIGN APPLICATIONS The probability distributions and statistics of the fifteen input random variables were given in Table 7.2, where B,,H ltB2,H 2,B3,H 3 were selected as the design parameters, and the column dimensions were assumed well-controlled with COV 0.01. 7.3.3 Performance-based design formulation Six structural responses were studied, namely, the roof displacement D20< the roof acceleration A20, the 15th story drift ratio 01}, the 5th story drift ratio 05, the base overturning moment M and the base shear V. They were assumed to have Lognormal distribution and calculated using the neural networks developed in Chapter 6. For the six ultimate limit states, the following performance functions were considered, G, = 0.800-D20 (7.2a) G2 = 4.900 -A20 (7.2bG3= 0.0 JO -915 (7.2c) G4= 0.010 -65 (7-2dG5=M0-M (72e) G6=V0-V (7-2fFor illustration purpose, Mg was assumed Normally distributed with mean 54000 KNm and standard deviation 5400 KNm; and V0 was assumed Normally distributed with mean 1440 KN and standard deviation 144 KN. 
The target reliability indices for the six performance functions were specified as, p| = 3.000 ;{32 = 2.500; ft =2.500 ;ft = 2.500 ;p; = 2.500 ;ft = 2.500 185 CHAPTER 7 PERFORMANCE-BASED SEISMIC DESIGN APPLICATIONS Table 7.2 Tall building: Input variable probability distributions and statistics Input variable Distribution Mean Standard deviation 4, (gal) Lognormal 290.0 191.0 eog (rad/sec) Normal 5TC 7C Td (sec) Normal 30.0 5.0 q (KN/m) Normal 45.0 4.5 /„ (MPa) Lognormal 400.0 10.0 /., (MPa) Lognormal 40.0 1.5 /,, (MPa) Lognormal 30.0 1.5 L, (MPa) Lognormal 20.0 1.5 /„ (MPa) Lognormal 20.0 1.5 B, (mm) Normal ? 0.0 IB, i/, (mm) Normal ? 0.01H, i?2 (mm) Normal ? 0.0IB\ H2 (mm) Normal ? 0.01H2 5, (mm) Normal ? 0.0 IB3 H3 (mm) Normal ? 0.01H3 7.3.4 Results Sixty-four combinations of the six design parameters BltH,,B2,H2,B3,H'3 were created. Six reliability index databases were constructed for the six performance functions using RELAN, maintaining the coefficients of variation of B,, Ht, B2, H2, B3, H3 as 0.01. Appendix G shows the first 10 combinations of the reliability index databases. Next, six neural networks were developed for the six reliability indices. Finally, the design parameters and the achieved reliability indices were found by optimization as, B, = 830mm, H, = 1129mm; B2 = 536mm, H2 = 747mm; B3 = 497mm,H3 = 608mm; 186 CHAPTER 7 PERFORMANCE-BASED SEISMIC DESIGN APPLICATIONS 8, = 2.952, 82 = 2.525, B3 = 2.505, B4 = 2.543, B5 = 2.923, B6 = 2.574 7.4 A Bridge Bent Without or With Seismic Isolation 7.4.1 Description of the structure This is the same bridge outlined in Chapter 6. It has two round columns connected on top by a cap beam, which is 8 m above the ground. In the case of seismic isolation for the deck, four identical Lead Rubber Bearing isolators are put on the cap beam. The performance of this bridge bent without or with seismic isolation, for two levels of earthquake, is studied. 7.4.2 Bridge bent without seismic isolation 7.4.2.1 Random variables In this case, five variables were selected as the input variables, namely, peak ground acceleration Ag, predominant ground frequency cog, strong motion duration Td, column diameter D, and vertical load on the bearing Q. The probability distributions and statistics of the variables were given Table 7.3, where the values in parenthesis were for ultimate limit state. Table 7.3 Bridge bent: Input variable probability distributions and statistics (without seismic isolation) Input variable Distribution Mean Standard deviation Ag (cm/sec2) Lognormal 151.0(325.0) 99.0 (214.0) cog (rad/sec) Normal 5TC K 7; (sec) Normal 20 (30) 5 D(mm) Normal 1800 18 0(KN) Normal 2400 240 187 CHAPTER 7 PERFORMANCE-BASED SEISMIC DESIGN APPLICATIONS 7.4.2.2 Performance-based design for serviceability limit state The displacement at the cap beam was evaluated against a limit of 1/200 of its height from ground, based on the following limit state function, where the displacement A was assumed to have a Lognormal distribution and estimated by the neural networks developed in Chapter 6. The design parameter was the mean of the column diameter, D, assuming the standard deviation as 1% of the mean. The target reliability index was specified as B' = 2.000. From optimization, the value of the design parameter and the achieved reliability index were found to be, D= 1754mm, 0 = 2.000 7.4.2.3 Performance-based design for ultimate limit state This limit state was supposed to be associated with the state of collapse prevention. 
The displacement at the cap beam was evaluated against a limit of 1/40 of its height from ground, as indicated by the following limit state function, in which the displacement A was assumed to have a Lognormal distribution and estimated by neural networks. G = 0.04-A(Ag,cog,Ts,D,Q) (7.3) G = 0.200-A(Ag,co g,Ts,D,Q) (7.4) 188 CHAPTER 7 PERFORMANCE-BASED SEISMIC DESIGN APPLICATIONS The design parameter was again the mean of the column diameter, D, with a standard deviation fixed at 1% of its mean. The target reliability index was specified as P' =2.500. From optimization, the value of the design parameter and the achieved reliability index were found to be, D = 1953mm, B = 2.500 7.4.3 Bridge bent with seismic isolation 7.4.3.1 Random variables In this case, six variables were selected as the input variables, namely, peak ground acceleration A , predominant ground frequency cog, strong motion duration Td, column diameter D, and vertical load on isolators Q, and width of the isolators Br. The probability distributions and statistics of the variables are given in Table 7.4, in which the values in parenthesis correspond to ultimate limit state. Table 7.4 Bridge bent: Input variable probability distributions and statistics (with seismic isolation] ) Input variable Distribution Mean Standard deviation Ag (cm/sec2) Lognormal 151.0(325.0) 99.0 (214.0) cog (rad/sec) Normal 5TC 71 Td (sec) Normal 20 (30) 5 £>(mm) Normal 1800.0(?) 18.0(0.01 D) G(KN) Normal 2400 240 5r(mm) Normal ? 0.0IB; 189 CHAPTER 7 PERFORMANCE-BASED SEISMIC DESIGN APPLICATIONS 7.4.3.2 Performance-based design for serviceability limit state The displacement at the cap beam was evaluated against a limit of 1/800 of its height from ground, as indicated by the following limit state functions, where the displacement A was assumed to have a Lognormal distribution and estimated by neural networks. The design parameter was the mean of the isolator width, Br, assuming the standard deviation to be 1% of the mean. The target reliability index was specified as B' = 2.000. By optimization, the value of the design parameter and the achieved reliability index were found to be, Br= 795mm, B = 2.000 7.4.3.3 Performance-based design for ultimate limit state The displacement at the cap beam was evaluated against a limit of 1/250 of its height from ground, and the column base moment was assessed against the yield capacity My, in terms of the following limit state functions, G = 0.010- A(Ag ,o)g,Ts,D, Q, Br) (7.5) G, = 0.032-A(Ag,a)g,Ts,D,Q,Br) (7.6a) G2=My-Mc(AG,G>g,Ts,D,Q,Br) (7.6b) 190 CHAPTER 7 PERFORMANCE-BASED SEISMIC DESIGN APPLICATIONS where the displacement A and Mc were assumed to have a Lognormal distribution and estimated by neural networks, while My was the yield moment of column which was assumed to have a Normal distribution with mean 5800 KNm and standard deviation 290 KNm. The design parameters were the means of the column diameter D and the isolator width Br, assuming the standard deviations to be 1% of the means. The target reliability indices were specified as B\ = B'2 = 2.500. Two approaches were used to solve the problem, ie, one was the conventional approach and the other was based on reliability index database and neural networks. (1) The conventional approach In this approach, the reliability index is calculated by standard methods such as FORM or Importance Sampling. 
From optimization, the values of the design parameters and the achieved reliability indices were found to be, D = 1765mm, Bt =2.507; Br= 748mm, B2 =2.502 (2) Neural network approach Two reliability index databases were constructed by running RELAN for combinations of the design parameters, while keeping the coefficients of variation constant; and Appendix H shows the first 10 combinations of the databases. 191 CHAPTER 7 PERFORMANCE-BASED SEISMIC DESIGN APPLICATIONS Two neural networks were respectively built for the two reliability indices: B, and B2. By optimization of the same problem, the design parameters and the corresponding achieved reliability indices were found to be, D= 1758mm, /?, =2.500; Br = 779mm, B2 =2.500 The results are very close to the results obtained with the conventional approach. Though construction of reliability index databases takes somewhat more time, the calculation of design parameters is very fast. 7.5 A Wood Shear Wall 7.5.1 Description of the structure This wood shear wall structure has been introduced in Chapter 6. It is a 2.4 m high and 2.4 m wide wall, with 12 mm thick OSB sheathing panels attached to the frame using common 50 mm long nails. The spacing of the vertical members is 400 mm. The structure is subjected to the same earthquake as before (Joshua Tree Station earthquake with amplitude adjusted). 7.5.2 Random variables The neural networks developed in the preceding chapter were used. The four input variables were the nail spacing along the perimeter e,, the nail spacing in the interior e2, the mass carried by the wall M, and the peak ground acceleration Ag. Two responses were the wall 192 CHAPTER 7 PERFORMANCE-BASED SEISMIC DESIGN APPLICATIONS drift A and the nail edge tearing force V. The probability distributions and statistics of the input variables were presented in Table 7.5. Table 7.5 Wood shear wall: Input variable probability distributions and statistics Random variable Distribution Mean Standard deviation e, (m) Normal ? O.le, e2 (m) Normal ? 0.1e2 M (KN.secVm) Normal 6.0 0.6 Ag (m/sec2) Lognormal 0.927 0.556 7.5.3 Performance-based design The design parameters were the mean of e,, e,, and the mean of e2, e2, so that the target reliability indices were met as B\ = B2 = 2.5 for the following performance criteria, G1=A0-A(e1,e2M,Ag) (7.7a) G2=V0-V(e„e2M,Ag) (7.7bwhere Ag denotes the drift limit which was set to 1/200 of the wall height, 12.0 mm; V0 denotes the nail edge tearing capacity that was assumed Normal with a mean of 1.05 KN and a standard deviation of 0.105 KN; The optimization was conducted using the conventional approach, in which FORM and Importance Sampling were utilized for reliability calculation. The design parameters and the corresponding achieved reliability indices were found to be, el = 0.058m, Bt =2.570; e2 = 0.128m, B2= 2.570 193 CHAPTER 7 PERFORMANCE-BASED SEISMIC DESIGN APPLICATIONS 7.6 Summary Four performance-based seismic design applications have been presented illustrating the applicability and efficiency of the proposed method for performance-based seismic design. The proposed method allows for uncertainties in earthquake demands, and seismic design can be carried out in the framework of reliability-based design, so that multiple performance requirements can be satisfied with specified reliability levels. 
194 CHAPTER 8 SUMMARY AND FUTURE WORK CHAPTER 8 SUMMARY AND FUTURE WORK 8.1 Summary In earthquake resistant design, multiple performance criteria must be satisfied with pre-specified reliabilities, and structural responses are evaluated based on probable earthquakes that may occur during the service life of the structure. The many uncertainties inherent in the design process are not explicitly coped with in current codified recommendations. Therefore, the achieved reliability level is not known and likely non-uniform across different design situations. This thesis has presented a methodology for 1) reliability assessment and 2) performance-based design. It is based on computer simulations utilizing neural networks for the evaluation of structural responses. As the primary uncertainty in seismic design, earthquake ground motion needs to be properly characterized. The ground motion model should take account of all the major factors that are of engineering significance. To this end, three parameters are identified to characterize the ground accelerogram, namely, the peak ground acceleration, the frequency content and the duration. Non-stationary ground acceleration time history is synthesized by multiplying a modulation function with a stationary stochastic process that is generated on the basis of a power spectrum. To prevent spurious displacement and velocity shift, a baseline correction method has been devised to process the synthesized accelerogram. The goal of the artificial ground motion generation is not to suggest a new approach, but to produce an ensemble of 195 CHAPTER 8 SUMMARY AND FUTURE WORK artificial ground accelerograms that share the same seismic parameters, in an attempt to consider the uncertainty in ground shaking for seismic analysis and design. The generated artificial accelerograms are used as inputs to a nonlinear dynamic structural analysis program to compute the responses of a structure. In order to build the neural network models for seismic reliability assessment, response databases have to be constructed in advance as the training data for neural network. This can be accomplished by virtue of design of experiment techniques. A new experimental design method is proposed in this study. First, a design is generated with more data points than needed using Latin Hypercube Sampling, then a minimum inter-point distance is set and the data points whose distance is less than the threshold are merged. The process is repeated until the inter-point distance threshold grows to a certain limit. Finally, the data points are sorted according to distance with one of the pair of close points eliminated, until the required number of data points is left. This method proves effective and efficient as the minimal inter-point distance is controlled, thus enforcing the uniformity of the design. Neural networks are proposed for seismic reliability analysis and performance-based design. Multiple layer neural networks are employed that has one hidden layer of neurons. The optimal number of neurons is determined by cross validation. The data are divided into five equal groups. The neural network is trained five times, and each time four of the subsets are used for training and the remaining one for verification. Both training errors and verification errors are calculated. A criterion based on the training error and verification error is used for measurement of network goodness. The network with the minimal criterion value is judged as the optimal network. 
Finally, the optimal network is trained by shuffling the data in the 196 CHAPTER 8 SUMMARY AND FUTURE WORK training dataset and the testing dataset. Any sample in the testing dataset with an error larger than a threshold is put into the training dataset, while the same number of samples in the training dataset that have the least error are put into the testing dataset. The training process is repeated until the training error is reduced to a certain limit or the limit on iteration is reached. By doing this way, all the critical data points are included in the training dataset, ensuring that the underlying functional relationship is well represented by the neural network. Performance-based seismic design is explored by mean of neural networks in the framework of reliability-based design. The purpose of this approach is to take into consideration all the major uncertainties in the design, to improve the computational efficiency, and to satisfy multiple performance criteria with preset reliability levels when the structure is subjected to multiple seismic hazards. Four performance levels are proposed, namely, serviceability limit state, capability limit state, stability limit state and survivability limit state corresponding respectively to four levels of earthquakes, ie, a frequent minor event, an occasional moderate event, a rare major event and a very rare event. Design criteria are outlined for each performance level. A framework for performance-based seismic design is outlined, in which performance-based seismic design is formulated as an optimization problem, with an objective function minimized to obtain the design parameters. Five case studies on seismic reliability of structures are presented to illustrate the proposed method. A two-story reinforced concrete frame is used as the first example of existing building, and its seismic reliability is assessed in relevance to a probabilistic earthquake. A twenty-storied reinforced concrete frame is utilized as a representative of a tall building, and its seismic performances corresponding to two levels of earthquakes are evaluated. A bridge 197 CHAPTER 8 SUMMARY AND FUTURE WORK bent without or with seismic isolation is studied as the third example, in which two levels of earthquakes are considered and seismic reliabilities of the bridge without or with seismic isolation are calculated. The results show that the reliability can be greatly enhanced by seismic isolation. As the fourth example, a wood shear wall is investigated regarding its seismic performance. The final example is concerned with an actual instrumented structure that has experienced three earthquakes and suffered damages. The seismic performance of this building during possible future earthquake is studied. The case studies prove the applicability and efficiency of the proposed approach, which can reduce the computational burden in seismic analysis and thus providing a promising tool for seismic reliability assessment. Performance-based seismic design applications are provided to illustrate the proposed approach, where design of experiments and neural networks are used. Two methods can be used to find the optimum design parameters, one is the conventional approach that utilizes FORM and Importance Sampling for reliability analysis, and the other is the P database approach where databases of the reliability indices are created first in terms of the design parameters and then neural networks of the reliability indices are built. 
The pre-constructed response databases and neural network models are taken advantage of for this purpose. In the case of the two-storied reinforced concrete frame, the maximum masses that can be carried on the roof and the floor are calculated. For the tall building in the ultimate limit state, the mean values of the column dimensions are calculated to meet the predefined target reliability levels. The mean of the column diameter of the bridge without isolation is calculated, whereas the means of column diameter and isolator width are optimized for the bridge with isolation. The optimal spacing of nails is found in the case of wood shear wall. These case 198 CHAPTER 8 SUMMARY AND FUTURE WORK studies prove the applicability of the proposed method for performance-based design by means of optimization to achieve the multiple performance objectives. It is concluded that designed experiments and neural networks provide a robust and efficient tool in seismic reliability analysis and performance-based design, improving the simulation of real behavior of a complex structure and reducing the computational burden. 8.2 Future Work This thesis work has explored seismic reliability assessment and performance-based seismic design using designed experiments and neural networks, however, the fields covered are so extensive that some issues are left for future developments in both theories and applications. • Earthquake ground motions, as the excitations to structure, need to be properly characterized in order to realistically predict the structural responses, provided the structural model is able to simulate the actual behavior of the structure. At present, there is not such a ground motion model that can accurately predict the future earthquake at a given site with high degree of reliability. Emphasis should be placed on the high uncertainty involved. Further works needs to be done in this direction, as the success of earthquake resistant design, to a large degree, hinges on appropriate characterization of the future ground motions at a specific site. • Though uncertainties in the seismic demand are taken into account by means of making assumptions of the distributions of the input variables and estimating the responses using neural networks, the uncertainties in the structural capacity are hard to deal with. Numerical procedures must be developed to evaluate the uncertainties in relation to 199 CHAPTER 8 SUMMARY AND FUTURE WORK moment, shear and axial strengths of member subjected to complex loadings, and global displacement ductility of structure, as well as member plastic hinge characteristics and its rotational ductility, etc. • Design of experiments is indispensable for response database construction. Albeit some efforts have been made in this regard, there is more room for exploration in order to achieve an efficient and uniform experimental design. • Multilayer neural networks and radial basis function networks are explored for seismic reliability assessment and performance-based design. Other machine learning paradigms such as Gaussian process (Williams, 1995; Rasmussen, 1996; Neal, 1997), and support vector machines (Vapnik, 1998, 2000) need to be investigated, or new learning methods need to be innovated for further improvement and development of this approach. • The optimization for performance-based design is slow when the number of design parameters is relatively large. Faster algorithms need to be developed to expedite the optimization process. 
• Performance-based seismic design should be undertaken in the format of reliability-based design, with the design parameters determined through lifecycle cost benefit analysis. The decision making process involves political, economic, societal and technical factors. Both theory and application need further development to facilitate its successful implementation in practical engineering projects, where structural safety is guaranteed with potential hazards well assessed and risks well managed. 200 REFERENCES Amin, M. and Ang, A. H. - S. (1968). Non-stationary stochastic models of earthquake motions, Proceedings ASCE, Journal of the Engineering Mechanics Division, 94, EM2 Arora, J., (eds.), (1999). Guide to Structural Optimization, ASCE publications Atkinson, G. and I. Beresnev (1998). Compatible ground motion time histories for new national seismic hazard maps, Canadian Journal of Civil Engineering, Vol.25, pp.305-318 Bertero, R. D. and Bertero, V. V. (2002). Performance-based seismic engineering: the need for a reliable conceptual comprehensive approach, Earthquake Engineering and Structural Dynamics, Vol.31, pp.627-652 Beskos, D. E. and Anagnostopoulos, S. A.(eds.), (1997). Computer Analysis and Design of Earthquake Resistant Structures: A Handbook, Computational Mechanics Publications Boore, D. M. (1983). Stochastic simulation of high-frequency ground motions based on seismological models of the radiated spectra, Bulletin of the Seismological Society of America, Vol. 73, No.6, pp.1865-1894 Boore, D. M., Stephens, C. D. and Joyner, W. B. (2001). Comments on baseline correction of digital strong motion data: examples from the 1999 Hector Mine, California Earthquake, Bulletin of the Seismological Society of America Box, G. E. P. and Wilson, K. B. (1951). On the experimental attainment of optimum conditions, Journal of the Royal Statistical Society, Series B, 13, pp. 1-45 Box, G. E. P. and Behnken, D. W. (1960). Some new three-level designs for the quantitative study of variables, Technometrics, 2, pp.455-475 Box, G. E. P. and Jenkins, G. M. (1976). Time Series Analysis: Forecasting and Control, Holden-Day Bratley, P. and Fox, B. L. (1988). Algorithm 659: implementing Sobol's quasi-random sequence generator, ACM Transactions on Mathematical Software, Vol.14, No.l, pp.88-100 Bratley, P. and Fox, B. L. and Niederreiter, H. (1992). Implementation and tests of low-discrepancy sequences, ACM Transactions on Modeling and Computer Simulation, Vol.2, No.3, pp. 195-213 Bratley, P. and Fox, B. L. and Neiderreiter, H. (1994). Algorithm 738: programs to generate Niederreiter's low-discrepancy sequences, ACM Transactions on Mathematical Software, Vol.20, No.4, pp.494-495 201 Breitung, K. and Faravelli, L. (1996). Response surface methods and asymptotic approximations, Mathematical Models for Structural Reliability Analysis, (Fabio Casciati and Brian Roberts eds.), CRC Press Bucher, C.G., Bourgund, U. (1990). A fast and efficient response surface approach for structural reliability problems, Structural Safety, Vol.7, pp.57-66 Byrd, R. H., Schnable, R. B. and Shultz, G. A. (1987). A Trust Region algorithm for nonlinearly constrained optimization, SIAM Journal of Numerical Analysis, Vol.24, pp. 1152-1170 Byrd, R. H., Schnable, R. B. and Shultz, G. A. (1988). Approximate solution of the Trust Region problem by minimization over two-dimensional subspace, Mathematical Programming, Vol.40, pp.247-263 Cai, G. Q. and Lin, Y. K. (1998). 
Cai, G. Q. and Lin, Y. K. (1998). Reliability of nonlinear structural frame under seismic excitation, Journal of Engineering Mechanics, Vol.124, No.8, pp.852-856
Cakmak, A. S., Sherif, I. and Ellis, G. W. (1985). Modeling earthquake ground motions in California using parametric time series methods, Soil Dynamics and Earthquake Engineering, Vol.4, No.3, pp.124-131
Capon, J. (1969). High resolution frequency-wavenumber analysis, Proceedings of the IEEE, Vol.57, pp.1408-1418
Chang, M. K. et al. (1982). ARMA models for earthquake ground motions, Earthquake Engineering and Structural Dynamics, Vol.10, No.5, pp.651-662
Cherkassky, V. and Mulier, F. (1998). Learning from Data: Concepts, Theory, and Methods, John Wiley & Sons
Clough, R. W. and Penzien, J. (1993). Dynamics of Structures, McGraw-Hill
Collins, K. R., Wen, Y. K. and Foutch, D. A. (1996). Dual-level seismic design: a reliability-based methodology, Earthquake Engineering and Structural Dynamics, Vol.25, No.12, pp.1433-1468
Conn, A. R., Gould, N. I. M. and Toint, P. L. (2000). Trust-Region Methods, SIAM
Converse, A. (1992). BAP: Basic Strong Motion Accelerogram Processing Software, Version 1.0, USGS Open-File Report 92-296A
Corne, D., Dorigo, M. and Glover, F. (eds.), (1999). New Ideas in Optimization, McGraw-Hill
Cressie, N. A. C. (1991). Statistics for Spatial Data, John Wiley & Sons
Cunha, A. (1994). The role of the stochastic equivalent linearization method in the analysis of the nonlinear seismic response of building structures, Earthquake Engineering and Structural Dynamics, Vol.23, No.8, pp.837-858
Das, P. K. and Zheng, Y. (2000). Cumulative formation of response surface and its use in reliability analysis, Probabilistic Engineering Mechanics, Vol.15, pp.309-315
Deodatis, G., Shinozuka, M. and Papageorgiou, A. (1990a). Stochastic wave representation of seismic ground motion I: F-K spectra, Journal of Engineering Mechanics, Vol.116, No.11, pp.2363-2379
Deodatis, G., Shinozuka, M. and Papageorgiou, A. (1990b). Stochastic wave representation of seismic ground motion II: simulation, Journal of Engineering Mechanics, Vol.116, No.11, pp.2381-2399
Deodatis, G. (1996). Non-stationary stochastic vector processes: seismic ground motion applications, Probabilistic Engineering Mechanics, Vol.11, No.3, pp.149-168
Dicleli, M. (2002). Seismic design of lifeline bridge using hybrid seismic isolation, Journal of Bridge Engineering, Vol.7, No.2, pp.94-103
Drucker, H., Burges, C., Kaufman, L., Smola, A. and Vapnik, V. (1997). Support vector regression machines, in Advances in Neural Information Processing Systems 9, Mozer, M., Jordan, M. and Petsche, T. (eds.), MIT Press, pp.155-161
Duane, S., Kennedy, A. D., Pendleton, B. J. and Roweth, D. (1987). Hybrid Monte Carlo, Physics Letters B, Vol.195, pp.216-222
Eberhart, R. C. and Kennedy, J. (1995). A new optimizer using particle swarm theory, Proceedings of the Sixth International Symposium on Micromachine and Human Science, Nagoya, Japan, pp.39-43
Ellis, G. W., Srinivasan, M. and Cakmak, A. S. (1990). A Program to Generate Site Dependent Time Histories: EQGEN, Technical Report NCEER-90-0009, National Center for Earthquake Engineering Research
Enevoldsen, I., Faber, M. H. and Sørensen, J. D. (1993). Adaptive response surface techniques in reliability estimation, Structural Safety and Reliability, Schuëller, Shinozuka & Yao (eds.), pp.1257-1264
Etter, D. M. (1987). Structured FORTRAN 77 for Engineers and Scientists, The Benjamin/Cummings Publishing Company
Fajfar, P. and Krawinkler, H. (eds.), (1997). Seismic Design Methodologies for the Next Generation of Codes, A. A. Balkema
Fang, K.-T. (1980). Experimental design by uniform distribution, Acta Mathematicae Applicatae Sinica, Vol.3, pp.363-372
Fang, K.-T. and Wang, Y. (1994). Number-theoretic Methods in Statistics, Chapman & Hall
Fang, K.-T., Lin, D. K. J., Winker, P. and Zhang, Y. (2000). Uniform design: theory and applications, Technometrics, Vol.42, pp.237-248
FEMA (1997). NEHRP Guidelines for the Seismic Rehabilitation of Buildings, FEMA Report 273
FEMA (1996). Performance-based Seismic Design of Buildings, FEMA Report 283
Foschi, R. O. and Li, H. (1997). Inverse reliability method and application in offshore engineering, Structural Safety and Reliability, Shiraishi, Shinozuka & Wen (eds.)
Foschi, R. O., Li, H., Folz, B., Yao, F., Zhang, J. and Baldwin, J. (2000). RELAN: A General Software Package for Reliability Analysis, Department of Civil Engineering, University of British Columbia, Vancouver, Canada
Foschi, R. O. and Li, H. (2001). Reliability and performance-based design in earthquake engineering, Structural Safety and Reliability, Corotis, Schuëller & Shinozuka (eds.)
Foschi, R. O., Li, H. and Zhang, J. (2002). Reliability and performance-based design: a computational approach and applications, Structural Safety, Vol.24, pp.205-218
Foschi, R. O. and Zhang, J. (2003). Performance-based design and seismic reliability using designed experiments and neural networks, Proceedings of the 5th International Conference on Stochastic Structural Mechanics, August 11-13, Hangzhou, China
Foschi, R. O. and Zhang, J. (2003). Neural networks application in seismic reliability and performance-based design, Proceedings of the 9th International Conference on Statistics and Probability in Civil Engineering, July 6-9, San Francisco, California, USA
Fox, B. L. (1986). Algorithm 647: implementation and relative efficiency of quasirandom sequence generation, ACM Transactions on Mathematical Software, Vol.12, No.3, pp.362-376
Friedman, J. H. (1991). Multivariate adaptive regression splines, Annals of Statistics, Vol.19, No.1
Gasparini, D. and Vanmarcke, E. H. (1976). Simulated earthquake motions compatible with prescribed response spectra, Technical Report, Department of Civil Engineering, Massachusetts Institute of Technology, Publication No. R76-4
Gersch, W. and Kitagawa, G. (1985). A time varying AR coefficient model for modeling and simulating earthquake ground motion, Earthquake Engineering and Structural Dynamics, Vol.13, No.2, pp.243-254
Ghaboussi, J. and Lin, C.-C. (1998). New method of generating spectrum compatible accelerograms using neural networks, Earthquake Engineering and Structural Dynamics, Vol.27, No.12, pp.377-396
Ghaboussi, J. and Lin, C.-C. J. (2000). Performance-based design using structural optimization, Earthquake Engineering and Structural Dynamics, Vol.29, pp.1677-1690
Ghosh, J., Deuser, L. and Beck, S. (1992). A neural network based hybrid system for detection, characterization and classification of short-duration oceanic signals, IEEE Journal of Oceanic Engineering, Vol.17, No.4, pp.351-363
Ghosh, S. and Rao, C. R. (eds.), (1996). Handbook of Statistics 13: Design and Analysis of Experiments, Elsevier
Gibbs, M. N. (1997). Bayesian Gaussian Processes for Regression and Classification, PhD Thesis, University of Cambridge
Gioncu, V. and Mazzolani, F. (eds.), (2002). Ductility of Seismic Resistant Structures, Spon Press
Glover, F. (1989). Tabu Search, Part I, ORSA Journal on Computing, Vol.1, pp.190-206
Glover, F. (1990). Tabu Search, Part II, ORSA Journal on Computing, Vol.2, pp.4-32
Glover, F. and Laguna, M. (1993). Tabu Search, in Modern Heuristic Techniques for Combinatorial Problems (Reeves, C. R., ed.), Blackwell, Oxford, pp.10-150
Glover, F. and Laguna, M. (1997). Tabu Search, Kluwer Academic Publishers
Goldberg, D. E. (1989). Genetic Algorithms in Search, Optimization and Machine Learning, Addison-Wesley, Reading, MA
Haddon, R. A. W. (1996). Use of empirical Green's functions, spectral ratios, and kinematic source models for simulating strong ground motion, Bulletin of the Seismological Society of America, Vol.86, No.3, pp.597-615
Hadley, D. M. and Helmberger, D. V. (1980). Simulation of strong ground motions, Bulletin of the Seismological Society of America, Vol.70, No.2, pp.617-630
Hagan, M. T., Demuth, H. B. and Beale, M. (1996). Neural Network Design, PWS Publishing Company
Halton, J. H. (1960). On the efficiency of certain quasi-random sequences of points in evaluating multi-dimensional integrals, Numerische Mathematik, Vol.2, pp.84-90
Hamburger, R. (1996). Performance-based seismic engineering: the next generation of structural engineering practice, EQE Review
Hammersley, J. M. (1960). Monte Carlo methods for solving multivariable problems, Annals of the New York Academy of Sciences, Vol.86, pp.844-874
Hartzell, S. H. (1978). Earthquake aftershocks as Green's functions, Geophysical Research Letters, 53, pp.1425-1436
Hedayat, A. S., Sloane, N. J. and Stufken, J. (1999). Orthogonal Arrays: Theory and Applications, Springer-Verlag, New York
Holland, J. H. (1975). Adaptation in Natural and Artificial Systems, University of Michigan Press, Michigan
Hornik, K. (1991). Approximation capabilities of multilayer feedforward networks, Neural Networks, 4, pp.251-257
Howlett, R. J. and Jain, L. (eds.), (2001). Radial Basis Function Networks, Physica-Verlag
Hsu, H.-I. and Bernard, M. C. (1978). A random process for earthquake simulation, Earthquake Engineering and Structural Dynamics, Vol.6, No.4
Huang, Y., Wada, A., Iwata, M., Mahin, S. A. and Connor, J. J. (2002). Design of damage-controlled structures, Innovative Approaches to Earthquake Engineering, WIT Press
Hutchings, L. and Wu, F. (1990). Empirical Green's functions from small earthquakes: a waveform study of locally recorded aftershocks of the 1971 San Fernando earthquake, Journal of Geophysical Research, Vol.95, No.B2, pp.1187-1214
Hutchings, L. (1991). Prediction of strong ground motion for the 1989 Loma Prieta earthquake using empirical Green's functions, Bulletin of the Seismological Society of America, Vol.81, No.5, pp.1813-1837
Hutchings, L. (1994). Kinematic earthquake models and synthesized ground motion using empirical Green's functions, Bulletin of the Seismological Society of America, Vol.81, No.5, pp.1813-1837
Hwang, H. H. M. and Huo, J. R. (1994). Generation of hazard-consistent ground motion, Soil Dynamics and Earthquake Engineering, Vol.13, No.6, pp.377-386
International Organization for Standardization (ISO) (1998). General Principles on Reliability for Structures, ISO/FDIS 2394
Iyama, J. and Kawamura, H. (1999). Application of wavelets to analysis and simulation of earthquake motions, Earthquake Engineering and Structural Dynamics, Vol.28, No.3, pp.255-272
Jennings, P. C., Housner, G. W. and Tsai, N. C. (1968). Simulated earthquake motions, Report, Earthquake Engineering Research Laboratory, California Institute of Technology, April
Johnson, M. E., Moore, L. M. and Ylvisaker, D. (1990). Minimax and maximin distance designs, Journal of Statistical Planning and Inference, Vol.26, No.2, pp.131-148
Kalagnanam, J. R. and Diwekar, U. M. (1997). An efficient sampling technique for offline quality control, Technometrics, Vol.39, No.3, pp.308-319
Kamae, K., Irikura, K. and Pitarka, A. (1998). A technique for simulating strong ground motion using hybrid Green's function, Bulletin of the Seismological Society of America, Vol.88, pp.357-367
Kanai, K. (1957). Semi-empirical formula for the seismic characteristics of the ground, Bulletin of the Earthquake Research Institute, Vol.35, pp.309-325
Karaboga, D. and Pham, D. T. (1999). Intelligent Optimization Techniques: Genetic Algorithms, Tabu Search, Simulated Annealing and Neural Networks, Springer-Verlag
Kartam, N., Flood, I. and Garrett, J. (eds.), (1997). Artificial Neural Networks for Civil Engineers: Fundamentals and Applications, ASCE Publications
Kecman, V. (2001). Learning and Soft Computing, MIT Press
Kennedy, J. and Eberhart, R. C. (1995). Particle swarm optimization, Proceedings of the IEEE International Conference on Neural Networks, Vol.IV, Piscataway, NJ, pp.1942-1948
Kennedy, J., Eberhart, R. C. and Shi, Y. (2001). Swarm Intelligence, Morgan Kaufmann Publishers
Kirkpatrick, S., Gelatt, C. D. and Vecchi, M. P. (1983). Optimization by simulated annealing, Science, May
Koehler, J. R. and Owen, A. B. (1996). Computer experiments, Handbook of Statistics (Ghosh, S. and Rao, C. R., eds.), Elsevier Science, New York, pp.261-308
Kramer, S. L. (1996). Geotechnical Earthquake Engineering, Prentice-Hall
Kumar, B. (ed.), (1996). Information Processing in Civil and Structural Engineering Design, Civil-Comp Press
Kumar, B. and Topping, B. H. V. (eds.), (1999). Artificial Intelligence Applications in Civil and Structural Engineering, Civil-Comp Press
Li, H. and Foschi, R. O. (1997). An inverse reliability method and its application, Structural Safety, Vol.20, pp.257-270
Li, H. (1999). An inverse reliability method and its applications in engineering design, Ph.D. Thesis, Department of Civil Engineering, University of British Columbia
Li, K. N. (1996). Three-dimensional Nonlinear Dynamic Structural Analysis Computer Program Package CANNY-E User's Manual, Canny Consultants PTE LTD, Singapore
MacKay, D. J. C. (1992). A practical Bayesian framework for backpropagation networks, Neural Computation, Vol.4, No.3, pp.448-472
MacKay, D. J. C. (1993). Bayesian methods for backpropagation networks, in van Hemmen, J. L., Domany, E. and Schulten, K. (editors), Models of Neural Networks II, Springer
MacKay, D. J. C. (1997). Gaussian processes: a replacement for supervised neural networks? Lecture notes for a tutorial at NIPS 1997
Mazzolani, F. and Gioncu, V. (eds.), (2000). Seismic Resistant Steel Structures, Springer-Verlag
McKay, M. D., Beckman, R. J. and Conover, W. J. (1979). A comparison of three methods for selecting values of input variables in the analysis of output from a computer code, Technometrics, Vol.21, No.2, pp.239-245
Morris, M. D. and Mitchell, T. J. (1995). Exploratory designs for computer experiments, Journal of Statistical Planning and Inference, Vol.39, No.1, pp.95-111
Myers, R. H. and Montgomery, D. C. (1995). Response Surface Methodology: Process and Product Optimization Using Designed Experiments, John Wiley & Sons, New York
National Building Code of Canada (1995). Canadian Commission on Building and Fire Codes, National Research Council, Ottawa
Nau, R. F., Oliver, R. M. and Pister, K. S. (1982). Simulating and analyzing artificial nonstationary earthquake ground motions, Bulletin of the Seismological Society of America, Vol.72, No.2, pp.615-636
Neal, R. M. (1993a). Bayesian learning via stochastic dynamics, in Hanson, S. J., Cowan, J. D. and Giles, C. L. (editors), Advances in Neural Information Processing Systems 5, Morgan Kaufmann, San Mateo, CA, pp.475-482
Neal, R. M. (1993b). Probabilistic inference using Markov chain Monte Carlo methods, Technical Report CRG-TR-93-1, Department of Computer Science, University of Toronto
Neal, R. M. (1995). Bayesian learning for neural networks, Ph.D. Thesis, Department of Computer Science, University of Toronto
Neal, R. M. (1997). Monte Carlo implementation of Gaussian process models for Bayesian regression and classification, Technical Report No. 9702, Department of Statistics, University of Toronto
Niederreiter, H. (1987). Point sets and sequences with small discrepancy, Monatsh. Math., Vol.104, pp.273-337
Niederreiter, H. (1988). Low-discrepancy and low-dispersion sequences, Journal of Number Theory, Vol.30, pp.51-70
Niederreiter, H. (1992). Random Number Generation and Quasi-Monte Carlo Methods, SIAM, Philadelphia
Olafsson, S. and Sigbjornsson, R. (1995). Application of ARMA models to estimate earthquake ground motion and structural response, Earthquake Engineering and Structural Dynamics, Vol.24, No.7, pp.951-966
Owen, A. B. (1992). Orthogonal arrays for computer experiments, integration and visualization, Statistica Sinica, Vol.2, pp.439-452
Owen, A. B. (1994). Randomly permuted (t,m,s)-nets and (t,s)-sequences, in Niederreiter, H. and Shiue, P. J.-S. (editors), Monte Carlo and Quasi-Monte Carlo Methods in Scientific Computing, Springer, New York, pp.299-317
Papadrakakis, M., Papadopoulos, V. and Lagaros, N. (1996). Structural reliability analysis of elastic-plastic structures using neural networks and Monte Carlo simulation, Computer Methods in Applied Mechanics and Engineering, Vol.136, pp.145-163
Papageorgiou, A. S. and Aki, K. (1983a). A specific barrier model for the quantitative description of inhomogeneous faulting and the prediction of strong ground motion. I. Description of the model, Bulletin of the Seismological Society of America, Vol.73, pp.693-722
Papageorgiou, A. S. and Aki, K. (1983b). A specific barrier model for the quantitative description of inhomogeneous faulting and the prediction of strong ground motion. II. Applications of the model, Bulletin of the Seismological Society of America, Vol.73, pp.953-978
Park, J.-S. (1994). Optimal Latin-hypercube designs for computer experiments, Journal of Statistical Planning and Inference, Vol.39, No.1, pp.95-111
Patterson, D. W. (1996). Artificial Neural Networks: Theory and Application, Prentice Hall
Paz, M. (1997). Structural Dynamics: Theory and Computation, Chapman & Hall
Polhemus, N. W. and Cakmak, A. S. (1981). Simulation of earthquake ground motions using autoregressive moving average models, Earthquake Engineering and Structural Dynamics, Vol.9, No.4, pp.343-354
Priestley, M. J. N. and Calvi, G. M. (1996). Seismic Design and Retrofit of Bridges, John Wiley & Sons
Rahmatian, P. (1997). Three-dimensional nonlinear dynamic seismic behavior of a seven story reinforced concrete building, M.A.Sc. Thesis, Department of Civil Engineering, University of British Columbia
Rasmussen, C. E. (1996a). Evaluation of Gaussian processes and other methods for nonlinear regression, PhD Thesis, Department of Computer Science, University of Toronto
Rasmussen, C. E. (1996b). A practical Monte Carlo implementation of Bayesian learning, in Touretzky, D. S., Mozer, M. C. and Hasselmo, M. E. (editors), Advances in Neural Information Processing Systems 8, MIT Press
Rumelhart, D. E., Hinton, G. E. and Williams, R. J. (1986). Learning internal representations by error propagation, in Parallel Distributed Processing, Volume I: Foundations (Rumelhart, D. E., McClelland, J. L. and the PDP Research Group, eds.), MIT Press, pp.318-362
Sacks, J., Welch, W. J., Mitchell, T. J. and Wynn, H. P. (1989). Design and analysis of computer experiments, Statistical Science, Vol.4, No.4, pp.409-435
SEAOC (1995). Vision 2000: Performance-based Seismic Engineering of Buildings
Shepherd, A. J. (1997). Second-Order Methods for Neural Networks: Fast and Reliable Training Methods for Multi-Layer Perceptrons, Springer
Shinozuka, M. and Deodatis, G. (1988). Stochastic process models for earthquake ground motion, Probabilistic Engineering Mechanics, Vol.3, No.3, pp.114-123
Shinozuka, M., Zhang, R. and Deodatis, G. (1994). Sine-square modification to Kanai-Tajimi earthquake ground motion spectrum, Structural Safety & Reliability, Schuëller, Shinozuka & Yao (eds.), pp.2217-2223
Simpson, T., Lin, D. K. J. and Chen, W. (2001). Sampling strategies for computer experiments: design and analysis, International Journal of Reliability and Applications
Smith, M. (1993). Neural Networks for Statistical Modeling, Van Nostrand Reinhold
Solnes, J. (1997). Stochastic Processes and Random Vibrations: Theory and Practice, John Wiley & Sons
Spanos, P. D. and Zeldin, B. A. (1996). Efficient iterative ARMA approximation of multivariate random processes for structural dynamics applications, Earthquake Engineering and Structural Dynamics, Vol.25, No.5, pp.497-508
Srinivasan, M., Corotis, R. and Ellingwood, B. (1992). Generation of critical stochastic earthquakes, Earthquake Engineering and Structural Dynamics, Vol.21, No.4, pp.275-288
Sunder, S. and Connor, J. (1982). A new procedure for processing strong-motion earthquake signals, Bulletin of the Seismological Society of America, Vol.72, No.2, pp.643-661
Sunder, S. and Schumacker, B. (1982). Earthquake motions using a new data processing scheme, Journal of Engineering Mechanics, ASCE, Vol.108, No.6, pp.1313-1329
Suzuki, S., Hada, K. and Asano, K. (1998). Simulation of strong ground motions based on recorded accelerograms and the stochastic method, Soil Dynamics and Earthquake Engineering, Vol.17, No.7-8, pp.551-556
Tajimi, H. (1960). A statistical method of determining the maximum response of a building structure during an earthquake, Proceedings of the 2nd World Conference on Earthquake Engineering, Vol.II, pp.781-798
Tang, B. (1993). Orthogonal array-based Latin hypercubes, Journal of the American Statistical Association, Vol.88, No.424, pp.1392-1397
Topping, B. H. V. and Kumar, B. (eds.), (1999). Optimization and Control in Civil and Structural Engineering, Civil-Comp Press
Topping, B. H. V. (ed.), (2000). Computational Engineering Using Metaphors from Nature, Civil-Comp, Edinburgh
Trifunac, M. D. (1971). Zero baseline correction of strong motion accelerograms, Bulletin of the Seismological Society of America, Vol.61, No.5, pp.1201-1211
Trujillo, D. M. and Carter, A. L. (1982). A new approach to the integration of accelerometer data, Earthquake Engineering and Structural Dynamics, Vol.10, pp.529-535
Vapnik, V. (1998). Statistical Learning Theory, John Wiley & Sons
Vapnik, V. (2000). The Nature of Statistical Learning Theory, Springer-Verlag
Vapnik, V., Golowich, S. E. and Smola, A. (1997). Support vector method for function approximation, regression estimation, and signal processing, in Advances in Neural Information Processing Systems 9, Mozer, M., Jordan, M. and Petsche, T. (eds.), MIT Press, pp.282-287
Ventura, C. E., Rahmatian, P., Li, K. and Kubo, T. (2002). Reliability of 3-D nonlinear dynamic analysis of a seven story reinforced concrete building, Proceedings of the 12th European Conference on Earthquake Engineering, London, September
Waarts, P. H. (2000). Structural Reliability Using Finite Element Methods: An Appraisal of DARS, Directional Adaptive Response Surface Sampling, Delft University Press
Waszczyszyn, Z. (ed.), (1999). Neural Networks in the Analysis and Design of Structures, Springer, Wien
Wen, Y. K. (2001). Reliability and performance-based design, Structural Safety, Vol.23, pp.407-428
Williams, C. K. I. (1995). Regression with Gaussian processes, Mathematics of Neural Networks: Models, Algorithms and Applications (Ellacott, S. W., Mason, J. C. and Anderson, I. J., eds.), Kluwer, 1997
Williams, C. K. I. and Rasmussen, C. E. (1996). Gaussian processes for regression, Advances in Neural Information Processing Systems 8, MIT Press
Ye, K. Q. (1998). Orthogonal column Latin hypercubes and their application in computer experiments, Journal of the American Statistical Association, Vol.93, No.444, pp.1430-1439
Zeng, Y. (1994). A composite source model for computing realistic synthetic strong ground motions, Geophysical Research Letters, Vol.21, pp.725-728

Appendix A Database for the two-story reinforced concrete frame

1. Database of the input variable combinations

fy (MPa)  f'c (MPa)  Ec (MPa)  W1 (KN)  W2 (KN)
394.384  25.212  22173  307.79  435.15
415.878  28.848  22870  375.62  493.47
381.933  27.984  22751  354.41  471.60
378.513  39.612  24779  363.44  452.97
390.501  27.984  22751  364.07  498.06
388.632  30.180  23148  381.50  414.63
421.400  36.336  24225  375.62  513.18
421.400  25.356  22194  320.18  406.53
392.496  42.420  25170  401.45  468.90
383.194  26.796  22515  291.41  455.40

2. Database of the responses

D3 (m)  SD3 (m)  D2 (m)  SD2 (m)
0.083  0.071  0.051  0.048
0.089  0.082  0.052  0.054
0.09   0.087  0.052  0.056
0.079  0.078  0.047  0.052
0.087  0.084  0.052  0.056
0.082  0.076  0.047  0.048
0.083  0.077  0.050  0.053
0.072  0.061  0.042  0.041
0.084  0.083  0.049  0.054
0.074  0.068  0.046  0.049

Appendix B Database for the tall reinforced concrete building

1. Database of the input variable combinations

ωs  Ts  q  fy  C  4  4  fb  B1  H1  B2  H2  B3  H3
10  22  20.7  24  408  35  25  15  15  713  910  506  705  402  504
13  12  40.3  33  417  36  26  16  16  726  920  512  710  404  509
16  31  7.56  42  425  37  27  16  16  739  931  519  716  407  513
19  7.6  27.2  51  434  38  28  17  17  752  941  525  721  409  518
22  26  46.9  16.8  442  39  28  17  17  765  951  532  727  412  523
26  17  14.1  25.8  451  40  29  18  18  778  962  538  732  414  527
29  35  33.8  34.8  401  41  30  19  18  791  972  545  737  417  532
32  5.3  53.4  43.8  409  42  31  19  19  804  982  551  743  419  537
35  24  3.19  52.8  418  43  31  20  19  817  993  558  748  421  541
39  15  22.9  18.6  426  44  32  20  20  830  1003  564  754  424  546

2.
Database of the corresponding responses (mean value) D20 •^20 e„ e5 M V 0.005001 0.159223 0.0121 0.009 8.49E+02 21.68 0.009044 0.184871 0.0223 0.0167 1.52E+03 40.49 0.006888 0.203147 0.0193 0.0128 1.13E+03 36.41 0.025976 0.250284 0.0612 0.0452 4.17E+03 96.77 0.006775 0.36076 0.0171 0.0131 1.22E+03 33.34 0.018344 0.473205 0.0399 0.0338 3.31E+03 81.45 0.00671 0.32982 0.0219 0.0133 1.14E+03 43.47 0.036875 0.388779 0.0885 0.0676 5.71E+03 141.62 0.018662 0.428509 0.0478 0.0375 3.34E+03 102.68 0.017346 0.652247 0.0384 0.0324 3.11E+03 79.97 214 3. Databases of the responses (standard deviation) JD20 ^A20 sei5 0.001526 0.022897 0.0032 0.002704 0.034733 0.0063 0.001176 0.029231 0.0034 0.007111 0.03689 0.0178 0.002096 0.068635 0.0045 0.00584 0.071256 0.0109 0.002717 0.060925 0.0051 0.014607 0.051566 0.0275 0.005389 0.061194 0.0115 0.004441 0.110517 0.0066 $95 SM sv 0.0024 2.47E+02 5.38 0.005 5.09E+02 10.32 0.0023 2.56E+02 5.57 0.0107 1.08E+03 19.53 0.0035 4.05E+02 8.72 0.0101 1.00E+03 20.19 0.0039 4.47E+02 8.87 0.0251 2.13E+03 36.07 0.0096 1.01E+03 23.69 0.009 7.66E+02 12.91 215 Appendix C Databases of the bridge without or with seismic isolation 1. Database of the input variable combinations A, (gal) og (rad/sec) Ts(sec) D(rnm) Br(mm) Q (KN) 47.3 8.3 46.45 2067 873 3142 67.9 22.4 33.61 1608 555 1658 228.7 3.7 7.42 1767 845 1829 627.8 18 27.05 2033 680 1854 702.5 27.1 40.83 1740 663 2168 197.2 30.3 2.87 1821 967 2849 271.5 19.8 7.66 1721 597 2967 369.3 18.1 51.35 1704 680 1767 376.3 12.2 32.72 1692 543 2616 34 15.2 22.17 1668 727 1932 2. Responses (mean) database for bridge bent without seismic isolation A(m) Mc (KNm) VC(KN) He Mb (KNm) Hb 1.5478 6567.751 1685.356 88.7234 5923.284 0 3542 2.8191 10481.97 2686.742 61.6259 9401.672 0 6043 0.6442 12910.6 3321.84 51.532 11813.51 0 5659 1.2087 18468.42 4734.917 96.6833 16472.42 0 8465 1.439 5563.964 1427.193 82.5027 5010.298 0 2885 2.2794 8811.976 2259.199 130.6768 7917.119 0 4988 0.2215 10449.36 2642.931 17.7033 9270.353 0 4361 0.813 13464.31 3451.355 65.0327 12104.14 0 5846 0.1733 3624.891 928.498 9.9191 3330.208 0 1705 0.2636 3995.262 1019.045 15.1239 3681.271 0 1961 216 3. Responses (standard deviation) database for bridge bent without seismic isolation SA(m) SMc(KNm) SVc(KN) ^ Mb (KNm) 0.0004 243.91 57.767 0.0298 183.478 0.0121 0.0006 139.467 33.711 0.0389 102.272 0.0028 0.002 518.08 131.827 0.1268 439.568 0.0288 0.0069 105.144 84.351 0.5428 1185.951 0.0262 0.0284 54.383 30.166 1.8965 244.438 0.0151 0.0017 585.322 143.612 0.1109 453.483 0.0288 0.008 129.36 52.568 0.5282 350.509 0.0228 0.0051 410.695 116.739 0.3351 535.487 0.0347 0.0108 41.454 29.104 0.6957 243.132 0.0153 0.0003 74.579 18.397 0.0188 54.603 0.0014 4. 
Responses (mean) database for bridge bent with seismic isolation A(m) Mc (KNm) VC(KN) Mb(KNm) Ub 0.0126 1098.438 249.9 0.0347 926.658 0.0186 0.0121 699.679 171.1 0.0517 719.342 0.0194 0.0781 4220.083 1032.32 0.6905 3489.402 0.1534 0.1147 5354.663 1270.148 0.5442 4164.677 0.1679 0.0938 4357.546 1075.5 0.7858 3788.365 0.1612 0.0239 2232.411 546.755 0.2663 1951.901 0.054 0.0831 2665.8 659.856 0.4351 2291.845 0.0788 0.0647 2691.394 664.914 0.4609 2287.903 0.0781 0.115 2737.568 673.086 0.4807 2320.041 0.0809 0.0074 471.07 112.983 0.029 540.318 0.0142 217 Responses (standard deviation) database for bridge bent with seismic isolation SA(m) SMc(KNm) 0.0036 271.644 0.0038 99.738 0.0136 440.268 0.0281 632.524 0.0212 579.226 0.0056 393.488 0.021 321.78 0.0136 314.446 0.0399 324.316 0.0016 90.662 SvoCKN) 62.498 0.0086 26.661 0.0118 107.2 0.0978 156.214 0.0874 152.06 0.1822 92.972 0.0727 80.52 0.0722 78.103 0.0726 81.263 0.0733 21.678 0.0056 Sivib (KNm) 197.867 0.004 92.098 0.0025 331.833 0.0217 488.48 0.0284 615.21 0.034 325.443 0.0152 321.954 0.0183 231.165 0.0154 317.919 0.0206 62.415 0.0016 218 Appendix D Response database of the wood shear wall el(m) e2 (m) M (KNs2/m) A* 2 (m/sec ) A(mm) V(KN) 0.01 0.025 2 0.5 0.343 0.3006 0.01 0.025 2 1 0.695 0.2983 0.01 0.025 2 1.5 1.04 0.3026 0.01 0.025 2 2 1.388 0.3521 0.01 0.025 2 2.5 1.717 0.3853 0.01 0.025 4 0.5 0.933 0.4988 0.01 0.025 4 1 1.652 0.674 0.01 0.025 4 1.5 2.532 0.5028 0.01 0.025 4 2 3.367 0.5753 0.01 0.025 4 2.5 4.065 0.4914 219 Appendix E Response database of the Holiday Inn 1. Response database of the Holiday Inn before seismic retrofit Ax (m/sec2) Ay (m/sec2) Az (m/sec2) AR (rad/sec2) Dx(m) Dy(m) 0.1 0.1 0.1 0.005 0.005694 0.007909 0.1 0.1 0.1 0.12 0.006339 0.007562 0.1 0.1 4.9 0.005 0.005459 0.008592 0.1 0.1 4.9 0.12 0.005999 0.008365 0.1 4.9 0.1 0.005 0.011515 0.329426 0.1 4.9 0.1 0.12 0.010811 0.344291 0.1 4.9 4.9 0.005 0.012482 0.27927 0.1 4.9 4.9 0.12 0.009658 0.29652 4.9 0.1 0.1 0.005 0.22921 0.090296 4.9 0.1 0.1 0.12 0.215678 0.090498 2. Response database of the Holiday Inn after seismic retrofit Ax (m/sec2) AY (m/sec2) Az (m/sec2) AR (m/sec2) Ad (mm2) Dx(m) DY (m) 0.1 0.1 0.1 0.002 1500 0.00403 0.003481 0.1 0.1 0.1 0.002 10500 0.002024 0.002666 0.1 0.1 0.1 0.2 1500 0.004296 0.00395 0.1 0.1 0.1 0.2 15000 0.002181 0.005977 0.1 0.1 9 0.002 1500 0.004151 0.003832 0.1 0.1 9 0.002 15000 0.002267 0.003439 0.1 0.1 9 0.2 1500 0.004493 0.004541 0.1 0.1 9 0.2 15000 0.002393 0.00748 0.1 9 0.1 0.002 1500 0.014179 0.450605 0.1 9 0.1 0.002 15000 0.002889 0.350399 220 Appendix F Reliability index database for the two-story reinforced concrete building w, W2 P. P2 310.000 390.000 o.ooo 2.020 310.000 403.300 1.675 2.003 310.000 416.700 1.648 1.987 310.000 430.000 1.621 1.972 310.000 443.300 1.595 1.955 310.000 456.700 1.570 1.939 310.000 470.000 1.546 1.922 310.000 483.300 1.523 1.905 310.000 496.700 1.500 1.889 310.000 510.000 1.480 1.873 221 Appendix G Reliability index database for the tall building 1. Input variable combinations B, (mm) H, (mm) B2(mm) H2(mm) B3 (mm) H3 (mm) 836 1008 590 778 407 591 812 984 574 787 410 626 787 1008 556 777 455 585 784 1027 579 779 460 648 850 979 586 839 409 600 787 971 597 837 404 635 805 954 562 803 466 567 763 1029 578 814 477 640 844 992 603 775 425 577 829 1038 609 757 409 642 2. 
Reliability index database

β1  β2  β3  β4  β5  β6
3.032  2.874  2.047  2.256  2.857  2.669
2.981  2.758  2.209  2.189  2.908  2.659
2.914  2.811  2.265  2.237  2.782  2.685
2.897  2.507  2.581  2.262  2.749  2.690
2.974  2.786  2.084  2.189  2.843  2.601
3.054  2.767  2.339  2.254  3.063  2.774
2.923  2.851  2.288  2.194  2.844  2.719
2.962  2.463  2.672  2.313  2.772  2.692
2.946  2.881  2.059  2.226  2.809  2.690
3.011  2.641  2.228  2.281  2.798  2.661

Appendix H Reliability index database for the bridge bent with isolation

D (mm)  Br (mm)  β1  β2
1550.000  550.000  2.210  3.801
1550.000  616.700  2.082  3.641
1550.000  683.300  1.906  3.449
1550.000  750.000  1.644  3.161
1550.000  816.700  1.360  2.546
1550.000  883.300  1.229  1.735
1550.000  950.000  1.190  1.102
1633.000  550.000  2.491  3.763
1633.000  616.700  2.422  3.650
1633.000  683.300  2.277  3.522
