PERFORMANCE-BASED SEISMIC DESIGN USING DESIGNED EXPERIMENTS AND NEURAL NETWORKS

by

JIANSEN ZHANG

B.Sc., Wuhan University of Technology, 1990
M.Sc., Wuhan University of Technology, 1993

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY

in

THE FACULTY OF GRADUATE STUDIES
Department of Civil Engineering

We accept this thesis as conforming to the required standard

THE UNIVERSITY OF BRITISH COLUMBIA
April 2003
© Jiansen Zhang, 2003

In presenting this thesis in partial fulfilment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the head of my department or by his or her representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission.

Department of Civil Engineering
The University of British Columbia
Vancouver, Canada

ABSTRACT

There are many uncertainties involved in the seismic design process. Factors such as earthquake ground motions, variability of structural geometries and material properties, and approximations in the analytical model all contribute to the possibility that the structure will not perform as intended. Therefore, reliability methods are applied in structural engineering to assess structural performance. However, seismic reliability assessment may necessitate a large number of performance function evaluations, each requiring a nonlinear dynamic structural analysis, which is a formidable, if not impossible, task. In performance-based seismic design, a set of design parameters must be found to meet the associated target reliability levels for different performance objectives. This is conventionally achieved by trial and error using repeated forward reliability analysis, which is inefficient. Hence, it is desirable to develop an efficient and effective procedure that can reduce this colossal computational effort, making seismic reliability assessment and performance-based design tractable.

This study has explored, for the first time, applications of design of computer experiments and artificial neural networks to seismic reliability analysis and performance-based seismic design, taking into account structural nonlinear dynamic behavior and all the major uncertainties involved. Experimental design is used to construct response databases for neural network learning. The neural networks act as a surrogate for the computer program, improving computational efficiency by approximating the structural responses.

Case studies have been carried out to demonstrate the applicability and efficiency of the proposed methods in seismic reliability assessment and performance-based seismic design.
TABLE OF CONTENTS

ABSTRACT
TABLE OF CONTENTS
LIST OF TABLES
LIST OF FIGURES
NOTATIONS AND ABBREVIATIONS
ACKNOWLEDGEMENTS

CHAPTER 1 INTRODUCTION
1.1 General
1.2 Review of Previous Work
  1.2.1 Synthesis of artificial ground motions
  1.2.2 Design of computer experiments
  1.2.3 Approximation models
  1.2.4 Performance-based design
1.3 Objectives of the Research
1.4 Thesis Outline

CHAPTER 2 GENERATION OF ARTIFICIAL GROUND MOTIONS
2.1 Introduction
2.2 Review of Ground Motion Simulation Methods
  2.2.1 Empirical Green's function method
  2.2.2 Spectral representation method
  2.2.3 Frequency-wave number power spectra
  2.2.4 Autoregressive moving average (ARMA) model
  2.2.5 Wavelet transform method
  2.2.6 Neural network model
2.3 Generation of Non-stationary Ground Motion
  2.3.1 Determination of ground motion spectral characteristics
  2.3.2 Generation of a stationary process
  2.3.3 Selection of modulation function
  2.3.4 Generation of a non-stationary artificial ground motion
  2.3.5 Baseline correction
2.4 Simulation of Ground Motion Compatible with Response Spectrum
2.5 Summary and Discussion

CHAPTER 3 DESIGN OF COMPUTER EXPERIMENTS METHODOLOGY
3.1 Introduction
3.2 Review of Methods for Design of Computer Experiments
  3.2.1 Central composite design
  3.2.2 Latin hypercube design
  3.2.3 Uniform design
  3.2.4 Low discrepancy sequence design
    3.2.4.1 Hammersley sequence design
    3.2.4.2 Halton sequence design
3.3 Experimental Design Implementation in This Study
  3.3.1 Grid design
  3.3.2 Grid-based optimal design
  3.3.3 Optimized Latin hypercube design
3.4 Summary and Discussion

CHAPTER 4 ARTIFICIAL NEURAL NETWORKS THEORY AND IMPLEMENTATION
4.1 Introduction
4.2 Multilayer Backpropagation Neural Networks
  4.2.1 General
  4.2.2 Artificial neuron model
  4.2.3 Network architecture
  4.2.4 Training strategies
    4.2.4.1 Backpropagation algorithm
    4.2.4.2 Other training algorithms
  4.2.5 Performance evaluation
  4.2.6 Neural networks implementation in this study
    4.2.6.1 Data preparation
    4.2.6.2 Topology of the network
    4.2.6.3 Training
4.3 Radial Basis Function Networks
  4.3.1 General
  4.3.2 Radial basis function network training
  4.3.3 Radial basis function networks implementation in this study
4.4 Summary and Discussion

CHAPTER 5 PERFORMANCE-BASED SEISMIC DESIGN METHODOLOGY
5.1 Introduction
5.2 Performance-based Seismic Design
  5.2.1 Multiple performance objectives in SEAOC Vision 2000
  5.2.2 Performance-based seismic design criteria in this study
    5.2.2.1 Multiple seismic hazard levels
    5.2.2.2 Multiple performance objectives
    5.2.2.3 Structural analysis approach
    5.2.2.4 Seismic design criteria
5.3 Implementation of Performance-based Seismic Design
  5.3.1 Reliability and performance-based seismic design
  5.3.2 Performance-based seismic design using neural networks
5.4 Summary and Discussion

CHAPTER 6 SEISMIC RELIABILITY ANALYSES: CASE STUDIES
6.1 Introduction
6.2 Description of the Nonlinear Dynamic Analysis Program
6.3 Case Study 1: A Two-story Reinforced Concrete Plane Frame
  6.3.1 Description of the structure and ground motion
  6.3.2 Construction of the response databases
  6.3.3 Reliability assessment
  6.3.4 Sensitivity analysis
6.4 Case Study 2: A Tall Reinforced Concrete Frame
  6.4.1 Description of the structure
  6.4.2 Construction of the response databases
  6.4.3 Reliability assessment
    6.4.3.1 Neural networks training
    6.4.3.2 Two levels of design earthquakes
    6.4.3.3 Reliability assessment for serviceability limit state
    6.4.3.4 Reliability assessment for ultimate limit state
6.5 Case Study 3: A Bridge Bent Without or With Seismic Isolation
  6.5.1 Description of the structure
  6.5.2 Construction of the response databases
  6.5.3 Reliability assessment
    6.5.3.1 Neural networks training
    6.5.3.2 Two levels of earthquakes
    6.5.3.3 Reliability assessment for serviceability limit state
    6.5.3.4 Reliability assessment for ultimate limit state
6.6 Case Study 4: A Wood Shear Wall
  6.6.1 Description of the structure
  6.6.2 Random variables
  6.6.3 Performance evaluation
6.7 Case Study 5: An Instrumented Structure for Earthquake Response Measurement
  6.7.1 Description of the structure
  6.7.2 Ground motions
  6.7.3 Random variables
  6.7.4 Performance evaluation
    6.7.4.1 The structure before seismic retrofit
    6.7.4.2 The structure after seismic retrofit
6.8 Reliability Assessment: Summary and Conclusions

CHAPTER 7 PERFORMANCE-BASED SEISMIC DESIGN APPLICATIONS
7.1 Introduction
7.2 A Two-story Reinforced Concrete Frame
  7.2.1 Description of the structure
  7.2.2 Random variables
  7.2.3 Performance-based design formulation
  7.2.4 Results
7.3 A Tall Reinforced Concrete Building
  7.3.1 Description of the structure
  7.3.2 Random variables
  7.3.3 Performance-based design formulation
  7.3.4 Results
7.4 A Bridge Bent Without or With Seismic Isolation
  7.4.1 Description of the structure
  7.4.2 Bridge bent without seismic isolation
    7.4.2.1 Random variables
    7.4.2.2 Performance-based design for serviceability limit state
    7.4.2.3 Performance-based design for ultimate limit state
  7.4.3 Bridge bent with seismic isolation
    7.4.3.1 Random variables
    7.4.3.2 Performance-based design for serviceability limit state
    7.4.3.3 Performance-based design for ultimate limit state
7.5 A Wood Shear Wall
  7.5.1 Description of the structure
  7.5.2 Random variables
  7.5.3 Performance-based design
7.6 Summary

CHAPTER 8 SUMMARY AND FUTURE WORK
8.1 Summary
8.2 Future Work

REFERENCES

Appendix A Database for the two-story reinforced concrete frame
Appendix B Database for the tall reinforced concrete building
Appendix C Database of the bridge without or with seismic isolation
Appendix D Response database of the wood shear wall
Appendix E Response database of the Holiday Inn
Appendix F Reliability index database for the two-story reinforced concrete building
Appendix G Reliability index database for the tall building
Appendix H Reliability index database for the bridge bent with isolation

LIST OF TABLES

Table 2.1 Kanai-Tajimi spectrum parameters
Table 2.2 Clough-Penzien filter parameters
Table 2.3 Seismic coefficients Ca and Cv
Table 3.1 Central Composite Design for three variables
Table 3.2 Random Latin Hypercube Design for two variables
Table 3.3 A Uniform Design for two variables with 21 levels
Table 3.4 A Hammersley Sequence Design for two variables
Table 3.5 A Halton Sequence Design for two variables
Table 3.6 A Grid Design for three variables
Table 3.7 An Optimized Latin Hypercube Design for two variables
Table 5.1 Performance level definitions
Table 5.2 Performance objectives
Table 6.1 Case study 1: Ground motion parameters distribution and statistics
Table 6.2 Case study 1: Ground motion parameter combinations
Table 6.3 Case study 1: Input variable bounds
Table 6.4 Case study 1: Neuron numbers and neural network RMSREs
Table 6.5 Case study 1: Neural networks training relative error statistics
Table 6.6 Case study 1: Input variable probability distributions and statistics
Table 6.7 Case study 1: Reliability indices for collapse prevention limit state
Table 6.8 Case study 1: Reliability indices for life safety limit state
Table 6.9 Case study 1: Reliability indices for functionality limit state
Table 6.10 Case study 1: Variation of reliability index with statistical parameters
Table 6.11 Case study 2: Input variable bounds
Table 6.12 Case study 2: Neuron numbers and neural network RMSREs
Table 6.13 Case study 2: Neural networks training relative error statistics
Table 6.14 Case study 2: Input variable probability distributions and statistics (Serviceability limit state)
Table 6.15 Case study 2: Reliability index for serviceability limit state
Table 6.16 Case study 2: Input variable probability distributions and statistics (Ultimate limit state)
Table 6.17 Case study 2: Reliability index for ultimate limit state
Table 6.18 Case study 3: Input variable bounds
Table 6.19 Case study 3: Neuron numbers and neural network RMSREs (without isolation)
Table 6.20 Case study 3: Neuron numbers and neural network RMSREs (with isolation)
Table 6.21 Case study 3: Neural networks training relative error statistics
Table 6.22 Case study 3: Input variable probability distributions and statistics (Serviceability limit state)
Table 6.23 Case study 3: Reliability index for serviceability limit state without isolation
Table 6.24 Case study 3: Reliability index for serviceability limit state with isolation
Table 6.25 Case study 3: Input variable probability distributions and statistics (Ultimate limit state)
Table 6.26 Case study 3: Random variable probability distributions and statistics
Table 6.27 Case study 3: Reliability index for ultimate limit state without isolation
Table 6.28 Case study 3: Reliability index for ultimate limit state with isolation
Table 6.29 Case study 3: Random variable probability distributions and statistics
Table 6.30 Case study 3: Reliability index for ultimate limit state with isolation
Table 6.31 Case study 4: Input variable probability distributions and statistics
Table 6.32 Case study 4: Neuron numbers and neural network RMSREs
Table 6.33 Case study 4: Neural networks training relative error statistics
Table 6.34 Case study 4: Reliability index for wood shear wall
Table 6.35 Case study 5: Input variable bounds (before retrofit)
Table 6.36 Case study 5: Input variable bounds (after retrofit)
Table 6.37 Case study 5: Input variable probability distribution and statistics
Table 6.38 Case study 5: Neuron numbers and neural network RMSREs (before retrofit)
Table 6.39 Case study 5: Neural networks training relative error statistics (before retrofit)
Table 6.40 Case study 5: Serviceability reliability indices (before retrofit)
Table 6.41 Case study 5: Life safety reliability indices (before retrofit)
Table 6.42 Case study 5: Neuron numbers and neural network RMSREs (after retrofit)
Table 6.43 Case study 5: Neural networks training relative error statistics (after retrofit)
Table 7.1 Reinforced concrete plane frame: Input variable probability distributions and statistics
Table 7.2 Tall building: Input variable probability distributions and statistics
Table 7.3 Bridge bent: Input variable probability distributions and statistics (without seismic isolation)
Table 7.4 Bridge bent: Input variable probability distributions and statistics (with seismic isolation)
Table 7.5 Wood shear wall: Input variable probability distributions and statistics

LIST OF FIGURES

Figure 2.1 A schematic diagram of the Empirical Green's Function method
Figure 2.2 Comparison of Kanai-Tajimi, Clough-Penzien and sine-square spectrum
Figure 2.3 Jennings modulation function
Figure 2.4 Hsu & Bernard modulation function
Figure 2.5 Artificial ground motion generated using Amin & Ang modulation function
Figure 2.6 Artificial ground motion generated using Hsu modulation function
Figure 2.7 Artificial ground motion with two strong components
Figure 2.8(a) Acceleration time history before baseline correction
Figure 2.8(b) Velocity time history before baseline correction
Figure 2.8(c) Displacement time history before baseline correction
Figure 2.8(d) Acceleration time history after baseline correction
Figure 2.8(e) Velocity time history after baseline correction
Figure 2.8(f) Displacement time history after baseline correction
Figure 2.9 Flowchart to generate response spectrum compatible ground motion time history
Figure 2.10 UBC design response spectrum
Figure 2.11 UBC design spectrum compatible artificial ground motion accelerogram
Figure 3.1 A Random Latin Hypercube Design for two variables
Figure 3.2 A Uniform Design for two variables with 21 levels
Figure 3.3 A Hammersley Sequence Design for two variables
Figure 3.4 A Halton Sequence Design for two variables
Figure 3.5 A Grid-based Optimal Design for two variables
Figure 3.6 An Optimized Latin Hypercube Design for two variables
Figure 4.1 A schematic diagram of an artificial neuron
Figure 4.2 Transfer function
Figure 4.3 A typical Multilayer Backpropagation Neural Network
Figure 4.4 A schematic Radial Basis Function Network
Figure 5.1 SEAOC Vision 2000 performance levels
Figure 6.1 Reinforced concrete plane frame geometry
Figure 6.2 Cross peak trilinear hysteresis model CP3
Figure 6.3 Geometry of tall building
Figure 6.4 CANNY trilinear hysteresis model
Figure 6.5 Bridge bent with isolation
Figure 6.6 Modulation function
Figure 6.7 Degrading bilinear model
Figure 6.8 Variation of reliability index with B_r (mm)
Figure 6.9 Variation of reliability index with isolator width mean B_r
Figure 6.10 Wood shear wall construction
Figure 6.11 Variation of reliability index with respect to e_1
Figure 6.12 Variation of reliability index with respect to e_2
Figure 6.13 A typical floor plan of Holiday Inn
Figure 6.14 Holiday Inn earthquake ground motion, Northridge 1994: (a) Longitudinal accelerogram; (b) Transverse accelerogram; (c) Vertical accelerogram; (d) Rotational accelerogram
Figure 6.15 Variation of reliability index with respect to ground motion components: (a) with respect to A_gx; (b) with respect to A_gy; (c) with respect to A_gz; (d) with respect to A_gr
Figure 6.16 Variation of reliability index with respect to damper mean area A_d (Serviceability limit state)
Figure 6.17 Variation of reliability index with respect to damper mean area A_d (Life safety limit state)
Figure 6.18 Variation of reliability index with respect to ground motion components: (a) with respect to A_gx; (b) with respect to A_gy; (c) with respect to A_gz; (d) with respect to A_gr

NOTATIONS AND ABBREVIATIONS

NOTATIONS

||.||   vector Euclidean norm
a_0   story acceleration limit
a_a   design earthquake peak ground acceleration
A_d   hysteretic damper cross-sectional area
A_g   peak ground acceleration
A_gx   peak ground acceleration in the longitudinal direction
A_gy   peak ground acceleration in the transverse direction
A_gz   peak ground acceleration in the vertical direction
A_gr   peak ground acceleration in the rotational direction on a horizontal plane
A_max   maximum story acceleration
a_i   autoregression coefficient
a(t)   ground acceleration time history
A(w)   Fourier transform of the ground acceleration time history
b_i   moving average coefficient
B_r   isolator width
c   center of radial basis function
C_a   UBC (Uniform Building Code) design spectrum parameter
C_v   UBC design spectrum parameter
C(X_d)   cost function in terms of the design parameter vector X_d
d(t)   ground displacement time history
D   diameter of a column
D(w)   Fourier transform of the ground displacement time history
e_i   nail spacing of the wood shear wall
E(.)   expectation
E_c   concrete modulus of elasticity
E   error criterion
f(.)   transfer function
f(t)   modulation function
f'_c   concrete compressive strength
f_y   steel yield strength
G   performance function
h_g   ground damping ratio
h_f   damping ratio of the Clough-Penzien filter
H_i   neural network hidden neuron i output
H_l   minimum number of hidden layer neurons
H_u   maximum number of hidden layer neurons
h(w)   high-pass Butterworth filter
k_x   wave number in the x-direction
k_y   wave number in the y-direction
M   mass; earthquake magnitude; member bending moment
M_b   beam moment capacity to prevent lateral buckling
M_p   probable moment capacity
M_y   member flexural yield moment
M_u   ultimate moment capacity
N   axial load on a column
N_b   buckling capacity of a column
N_train   number of training samples
N_test   number of testing samples
N_w   number of network weights
p   random variable with uniform distribution over the interval [0, 1]
P   number of examples
q   uniformly distributed load
Q   concentrated load
r   radius of radial basis function
R   seismic reduction factor
R_n   standard normal variable
R_uu(xi, eta, tau)   autocorrelation function
s   dimensionality
S_0   power spectral density of the white noise
S_a   pseudo-acceleration response spectrum
S_KT(w)   Kanai-Tajimi power spectral density function
SSE_train   sum of squared errors over the training dataset
SSE_test   sum of squared errors over the testing dataset
S_uu(k_x, k_y, w)   frequency-wave number spectrum
T   earthquake duration
T_i   neural network target output i
U   normalized input
V   shear force
v_0   story velocity limit
v_max   maximum story velocity
V_p   probable shear capacity
V_u   ultimate shear capacity
V^i_max   maximum i-th story shear force
V^i   i-th story shear capacity
v(t)   ground velocity time history
V(w)   Fourier transform of the ground velocity time history
W_j   j-th weight of the RBF network
w(t)   generated stationary process
W_ji   weight connecting neuron j with preceding neuron i
W_k   noise value at time k*dt
X_d   design parameter vector
X_i   neural network input i
X_l   lower bound of variable X
X_u   upper bound of variable X
Y_i   neural network output i
Z_k   variable value at time k*dt
Z   seismic zone factor
alpha   momentum factor
beta   reliability index
beta_k^T   target reliability index corresponding to the k-th performance objective
beta_k(X_d)   calculated reliability index corresponding to the k-th performance objective
dt   time increment
Delta   displacement
Delta_w   frequency increment
Delta_W   weight change
Delta_u   roof displacement when a kinematic mechanism forms
Delta_1   roof displacement when the first beam plastic hinge forms
eps_k   neural network relative error for the k-th sample
phi_k   random phase angle
phi(X)   radial basis function
Phi   design matrix
gamma   Euler constant
eta   learning rate
mu   mean value of a certain random variable
mu_Delta   global displacement ductility
mu_theta   local rotational ductility
mu_Delta,c   structural displacement ductility capacity
mu_theta,c   member rotational ductility capacity
nu   earthquake arrival rate
w   frequency
w_c   corner frequency
w_g   predominant ground frequency
w_f   fundamental frequency of the Clough-Penzien filter
w_l   lower bound of circular frequency
w_u   upper bound of circular frequency
theta   story drift ratio
theta_0   story drift ratio limit
theta_u   ultimate plastic rotation
theta_y   rotation at the yield moment
sigma(.)   standard deviation of a certain random variable
tau   time increment
objective function

ABBREVIATIONS

ANN   Artificial neural networks
BSSC   Building Seismic Safety Council
CCD   Central composite design
COV   Coefficient of variation
DOE   Design of experiments
FEMA   Federal Emergency Management Agency
FFT   Fast Fourier transform
FORM   First order reliability method
HSD   Hammersley sequence design
IFFT   Inverse fast Fourier transform
IS   Importance sampling
ISO   International Standards Organisation
LHD   Latin hypercube design
LI   Local interpolation
LRB   Lead rubber bearing
MCS   Monte Carlo simulation
MLP   Multilayer perceptron
NBCC   National Building Code of Canada
NEHRP   National Earthquake Hazard Reduction Program
OA   Orthogonal array
OLHD   Optimal Latin hypercube design
OSB   Oriented strand board
PGA   Peak ground acceleration
RBFN   Radial basis function network
RMSRE   Root mean square relative error
SEAOC   Structural Engineers Association of California
UBC   Uniform Building Code
UD   Uniform design

ACKNOWLEDGEMENTS

I am deeply indebted to my supervisor, Professor Ricardo O. Foschi, who suggested this project. I am extremely grateful to him for the endless help and support he has given me, and for the time and countless effort spent advising me during the different stages of this thesis. His enthusiasm and encouragement, his profound knowledge and his spirit of scientific exploration have contributed significantly to the fulfillment of my academic goals and will benefit me in my future pursuits.

I would like to thank the whole department, with whom I had the pleasure of carrying out my research in an excellent environment. I am particularly grateful to the members of my supervisory committee, whose help was invaluable; discussions with them were always inspiring. I would also like to thank all the professors I studied with during my graduate studies at UBC, for their help and the valuable knowledge they have shared with me.

I would like to express my gratitude to Dr. Hong Li for his encouragement and help throughout my studies, and for the numerous discussions we had on various academic issues.

I also would like to thank my wife, Lei Zhou, for her understanding and support during all the hard times I went through, and my son, William M. Zhang, who was born during my studies and whose birth is truly a blessing to me and an inexhaustible source of motivation.

Last, but far from least, I would like to thank my parents, to whom this thesis is dedicated, for the many sacrifices they have made to help me learn. Their unconditional love, emotional support and encouragement have always been the driving force in pursuing my life goals. I shall always be grateful.

CHAPTER 1

INTRODUCTION

1.1 General

Structural design has traditionally been based on deterministic analysis. However, uncertainties and randomness associated with loads, environment, materials, analysis models, structural details, construction workmanship and quality control, and in-service inspection and maintenance all contribute to a small probability that the structure will not perform as intended. Therefore, all these uncertainties and randomness should be taken into consideration, within the framework of probability- or reliability-based design, to assure a sufficient safety level in design.
In earthquake engineering in particular, due to the multitude of random variables relating to ground motions, material and geometric non-linearity, and analytical models, structural responses are extremely difficult to predict accurately during strong earthquake shaking. Structural reliability must be studied in the context of probabilistic seismic risk assessment, incorporating probabilistic hazard analysis, reliability analyses of components and systems, and system risk evaluation. Among these, the crucial step is component and system reliability analysis, which is generally based on simulation techniques such as Monte Carlo simulation and its variants, since the structural responses are implicit functions of the intervening random variables. However, because the probability of failure is small, computer simulation entails a large number of performance function evaluations, each requiring execution of a nonlinear dynamic analysis program. Hence, the resulting simulation process may be time-consuming or even computationally prohibitive, which makes it infeasible in most practical situations.

In order to circumvent this computational difficulty, researchers have studied various procedures for dealing with implicit performance functions and for expediting numerical simulation in the reliability analysis of structures. The available methods can be categorized into three types: (1) Monte Carlo simulation with variance reduction techniques; (2) response surface methodology; (3) sensitivity-based probabilistic finite element analysis. Monte Carlo simulation variants, such as importance sampling, adaptive sampling and directional simulation, improve simulation efficiency through variance reduction; they require an estimate of the most probable failure point, which is unknown in advance. Response surface methods approximate the actual performance function with analytical expressions, usually second-order polynomials, fitted to selected values of the performance function in the neighborhood of the most likely failure point. The response surface is then used for failure probability estimation by means of routine reliability analysis approaches. Although conceptually simple and easy to implement, the response surface method can require many iterations, because it only approximates the performance function accurately in the vicinity of the most likely failure point. Moreover, as the number of variables increases, it suffers from the so-called "curse of dimensionality": the number of polynomial coefficients, and hence the number of required response evaluations, grows rapidly with the number of variables. Sensitivity analysis can be used to identify the important variables, those having the greatest influence on structural reliability, thus saving a certain amount of computational effort. Nevertheless, it is based on perturbation, is only accurate when the input variables have small variability, and requires many repetitions of deterministic analyses.
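To make the computational burden concrete, the following is a minimal sketch (not part of the thesis software) of crude Monte Carlo estimation of a failure probability for an explicit, two-variable performance function g(R, S) = R - S; the distributions and their parameters are illustrative assumptions only. For a small failure probability, a stable estimate requires on the order of 100/Pf samples, which is exactly what becomes prohibitive when each sample is one nonlinear dynamic analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

def g(r, s):
    # Illustrative performance function: failure occurs when capacity R < demand S.
    return r - s

n = 1_000_000                           # crude MC needs roughly 100 / Pf samples
r = rng.normal(4.0, 0.3, n)             # assumed capacity distribution
s = rng.normal(2.0, 0.4, n)             # assumed demand distribution
fail = g(r, s) < 0.0
pf = fail.mean()                        # estimated failure probability
cov = np.sqrt((1.0 - pf) / (pf * n))    # coefficient of variation of the estimator
print(f"Pf ~ {pf:.2e}, estimator c.o.v. ~ {cov:.2f}")
```

With a failure probability near 10^-5, even a million cheap evaluations give only a few dozen failures; replacing each evaluation with an hours-long nonlinear analysis makes the approach intractable, which motivates the variance reduction and surrogate strategies discussed here.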
In this thesis,  artificial intelligence and machine learning will be explored for seismic reliability assessment and performance-based seismic design. Case studies will be carried out to demonstrate their applicability and efficiency.  1.2 Review of Previous Work Earthquake engineering has witnessed great progress in the past fifty years. Structures designed in conformance with seismic codes of practice have generally exhibited satisfactory performance during recent major earthquakes. However, much still need to be done to further improve structural performance and mitigate seismic damage in future earthquakes. The following gives a brief overview of the research work relevant to this study.  1.2.1  Synthesis of artificial ground motions  In earthquake resistant design or research, appropriate historic earthquake acceleration recordings may not be available, and artificial ground accelerograms may be required when a structural dynamic time history analysis is performed. A number of methods have been proposed to characterize earthquake ground acceleration. Spectral representation-based algorithms were widely used by engineers for generating artificial earthquake ground motions (Kanai, 1957; Tajimi, 1960; Shinozuka and Deodatis, 1988; Hwang and Huo, 1994; and Deodatis, 1996). Seismic ground motion had also been simulated by stochastic wave  3  CHAPTER 1 INTRODUCTION representation method, which took the spatial variation of ground motion into account (Deodatis et al, 1990). Another approach was Autoregressive Moving Average (ARMA) models (Polhermus and Cakmak, 1981; Chang et al, 1982; Olafsson and Sigbjornsson, 1995; Spanos and Zeldin, 1996). Geophysicists and seismologists usually use the Empirical Green's Function method for predicting target strong earthquake motions, superposing the records of small events and using them as Green's functions by considering the differences in stress drop, wave attenuation in the media, and radiation patterns between large and small events (Hartzell, 1978; Hadley and Helmberger, 1980; Hutchings, 1991, 1994; Haddon, 1996). In addition, some novel approaches have been tried successfully. Wavelet transform was applied for analysis and simulation of earthquake ground motions (Iyama and Kawamura, 1999). Neural networks were successfully employed for generation of spectrumcompatible accelerograms (Ghaboussi and Lin, 1998).  1.2.2  Design of computer experiments  In engineering, models are used for problem formulation and solution. Mechanistic models are based on well-established engineering knowledge. However, when such knowledge is not available, empirical models have to be built relating the input variables (predictor variables) to the output variables (response variables) based on observed data. They are referred to as response surfaces in statistics, usually in the form of second order polynomials. In order to construct a response surface, a certain number of representative input vectors have to be selected, which are the subject of experimental design. Central composite design (Box and Wilson, 1951) is the widely used, almost standard, classical experimental design method for building second order polynomials response surface. Latin hypercube sampling (McKay et  4  CHAPTER 1 INTRODUCTION  al, 1979) was thefirstmethod introduced for computer experimental design. Later, a number of improved methods were proposed, such as optimal Latin hypercube design (Park, 1994; Morris and Mitchell, 1995), Orthogonal array-based Latin hypercube design (Tang, 1993). 
1.2.3 Approximation models

A variety of approximation models has been developed over the years. Response surface methodology (Box and Wilson, 1951) was originally developed for physical experiments and has been applied widely in the manufacturing industries for product improvement and process optimization; a second-order polynomial is generally constructed by linear regression using the least squares technique. Kriging is an interpolation method developed in geostatistics (Cressie, 1991); it is extremely flexible and can provide accurate predictions for highly nonlinear problems. Multivariate adaptive regression splines (MARS) approximate the responses by adaptively selecting a set of spline basis functions and their coefficients through forward and backward regression (Friedman, 1991). Artificial intelligence and machine learning have undergone great progress in the last 20 years. Computational intelligence tools such as neural networks, radial basis function networks, Gaussian processes and support vector machines have proved to be versatile, robust and universal approximators, and have found applications in a wide range of fields.
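To illustrate the classical response surface idea mentioned above, here is a minimal sketch that fits a full second-order polynomial to sampled responses by ordinary least squares; the "true" function and the sample points are fabricated for the example, and the thesis itself replaces such polynomials with neural network models.

```python
import numpy as np
from itertools import combinations

def quadratic_basis(X: np.ndarray) -> np.ndarray:
    """Design matrix [1, x_i, x_i^2, x_i*x_j] for a full second-order polynomial."""
    n, m = X.shape
    cols = [np.ones(n)]
    cols += [X[:, i] for i in range(m)]
    cols += [X[:, i] ** 2 for i in range(m)]
    cols += [X[:, i] * X[:, j] for i, j in combinations(range(m), 2)]
    return np.column_stack(cols)

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(30, 3))                            # 30 experiments, 3 variables
y = 1.0 + 2.0 * X[:, 0] - X[:, 1] ** 2 + 0.5 * X[:, 0] * X[:, 2]    # toy response values

A = quadratic_basis(X)
coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)    # least-squares polynomial fit
y_hat = quadratic_basis(X) @ coeffs               # surrogate predictions
```

The number of columns in the design matrix, 1 + 2m + m(m-1)/2, is what drives the sample-size requirement as the number of variables m grows.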
The structural responses during a strong earthquake are random processes, implicit functions of the intervening random variables.  In order to perform structural reliability assessment,  simulation approaches are generally indispensable. However, seismic reliability assessment may necessitate a large number of performance function evaluations, each requiring execution of a nonlinear dynamic analysis program, which is a formidable task in terms of computational time and resources. Similarly, performance-based seismic design also requires repetitive running of a nonlinear dynamic structural analysis program.  To improve efficiency, some researchers have proposed using empirical models to replace computer code for prediction and estimation. However, most of the works are limited to deterministic problems: linear or nonlinear static problems with a few variables. Realistic earthquake engineering problems involve many random variables, with their interactions resulting in complex structural behavior. Therefore, a model is sought that can approximate accurately the input - output variable functional relationship and, as such, improve computational efficiency and effectiveness. To accomplish this goal, this thesis will focus on,  (1) development of neural network-based model and corresponding software; (2) exploration of performance-based seismic design using neural network modeling.  In order to demonstrate the applicability and efficiency of the proposed methods, some applications in seismic reliability analysis will be provided. Furthermore, case studies on performance-based seismic design will be presented.  7  CHAPTER 1 INTRODUCTION  1.4 Thesis Outline The thesis is organized as follows:  Chapter 1 Introduction:  The background and incentive for this study is described, the  objectives of the research are outlined, and previous works pertinent to this study are briefly addressed.  Chapter 2 Generation of artificial ground motions: Review of previous works on artificial ground motion synthesis is presented. Generation of non-stationary accelerograms, as well as spectrum-compatible accelerograms is discussed. The synthesized ground motions will be used as earthquake ground inputs for nonlinear dynamic time history analysis.  Chapter 3 Design of computer experiments methodology: Experimental design methods are reviewed, including classical methods, random design methods and quasi-random design methods. The approach proposed in this research is described. Design of experiments techniques will be used to construct response databases for neural network training.  Chapter 4 Artificial neural networks theory and implementation:  The fundamentals of  artificial neural networks theory are described, with multilayer backpropagation neural networks particularly discussed.  The implementation of artificial neural networks in the  research is detailed.  Chapter 5 Performance-based seismic design methodology: The state of art and practice of performance-based seismic design is described, including the philosophy, design criteria, and design methods. In this study, performance-based seismic design is formulated as a structural  8  CHAPTER 1 INTRODUCTION optimization problem subject to reliability constraints, with the optimum solution computed by gradient-free algorithms. The optimization relies on neural network modeling of the structural responses, which is proved efficient and effective.  
1.4 Thesis Outline

The thesis is organized as follows:

Chapter 1 Introduction: The background and motivation for this study are described, the objectives of the research are outlined, and previous work pertinent to this study is briefly addressed.

Chapter 2 Generation of artificial ground motions: Previous work on artificial ground motion synthesis is reviewed, and the generation of non-stationary accelerograms, as well as spectrum-compatible accelerograms, is discussed. The synthesized ground motions are used as earthquake inputs for nonlinear dynamic time history analysis.

Chapter 3 Design of computer experiments methodology: Experimental design methods are reviewed, including classical, random and quasi-random design methods, and the approach proposed in this research is described. Design of experiments techniques are used to construct response databases for neural network training.

Chapter 4 Artificial neural networks theory and implementation: The fundamentals of artificial neural network theory are described, with particular attention to multilayer backpropagation neural networks. The implementation of artificial neural networks in this research is detailed.

Chapter 5 Performance-based seismic design methodology: The state of the art and practice of performance-based seismic design is described, including the philosophy, design criteria and design methods. In this study, performance-based seismic design is formulated as a structural optimization problem subject to reliability constraints, with the optimum solution computed by gradient-free algorithms. The optimization relies on neural network modeling of the structural responses, which proves efficient and effective.

Chapter 6 Seismic reliability analyses, case studies: A number of case studies of seismic reliability analysis, based on response databases and neural network models or local interpolation, are presented. (1) A one-bay, two-story reinforced concrete frame is subjected to earthquake excitation; the reliability indices associated with three performance levels are determined, and the sensitivity of the reliability with respect to the random parameters is studied. (2) A two-bay, twenty-story reinforced concrete frame is adopted as an example of a tall building, and its performance under two different levels of ground shaking is assessed. (3) The behavior of a bridge bent without and with seismic isolation is investigated, with a lead rubber bearing used as the deck isolator; the effect of the isolator on structural performance is studied. (4) A wood shear wall under strong ground motion is analyzed, with material non-linearity described by a nail hysteresis model, and the effect of nail spacing on structural reliability is studied. (5) An actual building structure that has been instrumented and has experienced several earthquakes is investigated for its seismic performance without and with seismic retrofit; brace-type hysteretic dampers are used as the seismic upgrading strategy.

Chapter 7 Performance-based seismic design applications: The databases generated and the neural network models constructed in the preceding chapter are employed for performance-based seismic design. (1) For the one-bay, two-story reinforced concrete frame, the optimal distribution of masses is determined when the distributions of the other random variables are known. (2) Under strong ground shaking, the column dimensions of the twenty-story building are determined to meet pre-specified target reliability indices. (3) The diameter of the non-isolated bridge pier columns is calculated when the distributions of the other random variables are prescribed; when the bridge deck is seismically isolated, the dimension of the isolator is determined given the statistical information on the other random variables. (4) The nail spacing of the wood shear wall under earthquake shaking is calculated.

Chapter 8 Summary and future work: The significance of this study for seismic reliability assessment and performance-based seismic design is discussed, and some recommendations for further study are briefly outlined.

CHAPTER 2

GENERATION OF ARTIFICIAL GROUND MOTIONS

2.1 Introduction

With the development of computer technology and advances in the numerical modeling of modern complex structures, it has become feasible for practicing engineers to perform nonlinear dynamic time history analysis of structures subjected to ground motions. As long as the computational model for the structure and the adopted ground motion time histories are appropriate, such an approach has shown its superiority in both accuracy and efficiency compared to other methods (Atkinson, 1998). It is then necessary to have accelerograms that represent the type of seismic excitation expected at a site. Structural engineers tend to take advantage of any historically recorded accelerograms for the given site, or to borrow strong ground motion recordings from other regions and scale their magnitudes when no such records exist for the site under consideration. This approach seems plausible, but some caution is required.
For a region of high seismicity there are some recordings of ground motions, but those records represent actual past earthquakes that will never repeat in the future; in other words, they may not represent future earthquake ground motions. On the other hand, historic records for a given site of low seismicity are usually scarce. To use strong ground motions from other regions blindly, without any assessment of the similarities and differences between the two sites with regard to seismic source mechanism, wave propagation path and local site characteristics, could lead to severe errors in predicting structural responses: nobody knows to what extent the adopted records approximate the expected future ground motions at the specific site of interest. Moreover, the complexity and uncertainty involved in the structural behavior require that a number of ground motions be used in assessing the responses to ensure a safe and economical design. Hence, structural analysts must resort to artificial ground motion synthesis.

Earthquake ground motion is influenced by such factors as source mechanism, magnitude, epicentral distance, travel path geology and topography, and local soil conditions, to name a few. Since historical strong ground motion records may be few, it is difficult to generate accelerograms that can serve as realizations of future earthquake records. Visually, there are obvious differences between actual ground motion records and artificially simulated ones. However, numerous studies of structural response have shown that simulated ground motions are equivalent to recorded ground motions, as long as the simulated ones have approximately the same amplitude and frequency content and nearly the same duration as the real ground motions (Atkinson, 1998). There are basically two approaches to simulating ground motions: the engineering approach and the seismological approach.

Seismologists and geophysicists are interested in understanding the earthquake mechanism and in reproducing the faulting process and the wave propagation in heterogeneous media. They generate ground motion from a physical model that takes into account seismic moment, stress drop, fault rupture process, fault dimension and orientation, travel path geology, local site amplification and topographical effects. A slip function is postulated to model the rupture process, and the elastodynamic representation theorem is employed to compute the ground motion. As this approach incorporates all the major factors that affect earthquake ground motion, it can accurately reflect the source effect, the wave propagation effect and the local site conditions, and it is very useful for site-specific simulations.

However, engineers, who are interested only in predicting the structural responses during future earthquakes, require that the synthetic accelerograms roughly result in the same structural responses as the real event or, at least, when generated in large ensembles, that the results can be used to estimate an accurate probability distribution of the effects. Based on this distribution, a reliable seismic safety assessment can then be made. Whether or not the ground motions arise from the same faulting process or geological travel path as the anticipated real event is secondary.
As earthquake spectra are often used by engineers in seismic analysis and design, it is desirable to employ artificial ground motion accelerograms compatible with the given design spectra. From the perspective of earthquake engineering, it would be better to combine the two approaches to generate site-specific accelerograms, i.e., to model the ground motion as a non-stationary random process while taking into account the seismic source mechanism, the wave propagation in heterogeneous media and the local soil conditions.

2.2 Review of Ground Motion Simulation Methods

A number of approaches have been proposed for the generation of synthetic ground motions. The most general methods are: (1) analytical or geophysical models, such as those using a semi-empirical Green's function; (2) the spectral representation method, based on random process theory; (3) frequency-wave number spectra, representing the spatial variability of seismic ground motions; (4) the autoregressive moving average (ARMA) model; (5) the wavelet model; (6) the neural network model. Earthquake ground motions are very complicated in nature, and for engineering applications it is not necessary to reproduce the expected ground motions exactly in order to depict their characteristics sufficiently. What needs to be done is to identify and determine the ground motion parameters that are of engineering importance and to describe the characteristics of the ground motions in terms of these key parameters. To this end, three major factors are considered of primary significance and should be taken into consideration to obtain a proper ground motion time history, namely the peak amplitude, the frequency content and the strong motion duration (Kramer, 1996). The construction of the model should be based on a probabilistic seismic hazard evaluation of the site under consideration, especially for large and important structures, taking into account uncertainties in the seismic source, the travel path geology and the local soil conditions. Non-stationarity in both amplitude and frequency should preferably be encompassed, as earthquake ground motion is in essence a non-stationary process. The objective here is to provide a brief description of ground motion synthesis methods; for details, please refer to the references cited.

2.2.1 Empirical Green's function method

One of the most reliable methods for predicting strong ground motions from a large earthquake is the empirical Green's function method. The theoretical Green's function, applied to seismology, is a mathematical expression that depicts the effect of the Earth's geological structure on seismic waves generated by a micro-earthquake. It is of minor practical value, as it can only be calculated for a simplified subsurface geological structure that does not reflect the real profile. The idea of the empirical Green's function was originally introduced by Hartzell (1978), when it was found that the Green's function resembles actual recordings from micro-earthquakes. Thus these records of micro-earthquakes, the so-called empirical Green's functions, can be used to simulate the strong ground motion anticipated at a given site.
The advantage of this method is to exploit not only the common propagation path and local site effects, shared by small events and the target event, but also the source effects possessed by the small events within the fault area of the target event. The empirical Green's function method exploits the records of small events instead of the theoretical Green's functions. It is desirable that the small events should be as small as possible to be assumed as a point source in the fault. However, the smaller the events are, the more difficult it is to obtain accurate seismic records. As such, most of simulations by the empirical Green's function method have been made using aftershocks, which are not so small events as compared to the target event.  Figure 2.1 gives a schematic illustration of the empirical Green's function method. For simulating earthquake strong ground motions, a finite fault model has to be employed. A causative fault plane is usually assumed based on seismological information. The fault plane is then discretized into many small patches, and each patch is treated as a sub-source, an impulsive point source. For each sub-source, its rupture model, slip function and the Green's function have to be defined. The acceleration at a site of interest is then calculated by superposing the arrival of the earthquake waves in a proper temporal and spatial sequence from all the sub-sources, which may have different rupture parameters and Green's functions. If no empirical Green's functions are available, the alternative method used is to stochastically simulate small event motions based on a seismological spectral model and make summation of small events in the same manner as with the empirical Green's functions. For further details,  15  C H A P T E R 2 G E N E R A T I O N O F ARTIFICIAL GROUND MOTIONS  see Hartzell (1978), Boore (1983), Papageorgiou and Aki (1983), Hutchings and Wu (1990), Hutchings (1991, 1994), Zeng (1994), Haddon (1996) and Kamae et al (1998).  The empirical Green's function method has the following limitations: (1) Small magnitude aftershock records are needed from the same seismic source as the strong target motion. (2) The small events and the large event should share the same fault mechanism, travel path geology and local site conditions. (3) It can only be applied for soil with linear behavior as superposition is implied. To allow for nonlinear soil effect, ground motions at bedrock are generated first; then a nonlinear soil dynamic analysis has to be employed to calculate the surface ground motions.  The empirical Green's function method is frequently applied by seismologists or geophysicists for prediction of site-specific strong ground motion, and its focus is on truly modeling the geophysical features of the earthquake process (i.e. faulting process and travel path geology effect). This method is often too restrictive to be used for predicting structural responses, because the detailed information concerning the fault rupture and geology is highly uncertain in real applications, if available.  Figure 2.1 A schematic diagram of empirical Green's function method (Kramer, 1996, Fig.8.24, pp. 344)  16  CHAPTER 2 GENERATION OF ARTIFICIAL GROUND MOTIONS  2.2.2  Spectral representation method  Structural engineers usually apply stochastic process theory to simulate ground motions. White noise was the simplest ground motion model used in the earlier stage of earthquake research. 
2.2.2 Spectral representation method

Structural engineers usually apply stochastic process theory to simulate ground motions. White noise was the simplest ground motion model used in the early stages of earthquake research. It has been observed, however, that the ground motion generally has a strong segment that can be modeled as a stationary process. A filter applied to white noise was proposed (Kanai, 1957; Tajimi, 1960), which results in the ground motion power spectral density function S_{KT}(\omega):

S_{KT}(\omega) = S_0 \, \frac{\omega_g^4 + 4 h_g^2 \omega_g^2 \omega^2}{(\omega_g^2 - \omega^2)^2 + 4 h_g^2 \omega_g^2 \omega^2}    (2.1)

where \omega_g denotes the predominant frequency of the ground, h_g represents the critical damping ratio of the ground, and S_0 is a constant giving the power spectral density of the white noise. For firm ground, the values \omega_g = 15.6 rad/sec and h_g = 0.6 were suggested. Since, in this model, the power spectral density is proportional to \omega^{-2} in the high-frequency range and has a peak in the neighborhood of \omega_g, both of which are characteristics of an actual earthquake, it is widely used for the generation of artificial ground motions. However, earthquake ground motion is non-stationary in time and heterogeneous in space. As noted earlier, the intensity, duration and spectral characteristics are the three major properties of ground motion. The simulated ground motion can therefore be considered as the product of a stationary random process and a modulation function that reflects the non-stationarity,

a(t) = f(t) \cdot w(t)    (2.2)

where a(t) denotes the non-stationary ground acceleration, f(t) denotes the modulation function, and w(t) denotes a stationary process with a specified power spectral density S(\omega). Given the power spectral density S(\omega), the stationary random process w(t) can be synthesized as

w(t) = \sum_{k=1}^{N} \sqrt{2 S(\omega_k) \Delta\omega} \, \cos(\omega_k t + \phi_k)    (2.3)

in which \Delta\omega = (\omega_u - \omega_l)/N; \omega_u and \omega_l are the upper and lower bounds of the circular frequency \omega; N is the number of frequencies considered; \omega_k = \omega_l + (k - 0.5)\Delta\omega, k = 1, 2, ..., N; and \phi_k (k = 1, 2, ..., N) is a set of random phase angles uniformly distributed over the interval (0, 2\pi).

It is observed that earthquake ground motions generally have three distinct segments: the build-up segment, the strong motion segment and the decay segment. The modulation function is used to embody this non-stationarity; many such functions have been proposed in the literature, and some of them are discussed later.
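Putting Eqs. (2.1) to (2.3) together, the following minimal sketch generates one artificial accelerogram: a stationary sample is synthesized from the Kanai-Tajimi spectrum by the spectral representation of Eq. (2.3) and then multiplied by a simple amplitude envelope. The build-up/constant/decay envelope used here is only a stand-in for the modulation functions discussed later in the chapter, and the intensity, duration and frequency-band parameters are illustrative assumptions.

```python
import numpy as np

def kanai_tajimi(omega, omega_g=15.6, h_g=0.6, s0=1.0):
    """Kanai-Tajimi power spectral density, Eq. (2.1)."""
    num = omega_g**4 + 4.0 * h_g**2 * omega_g**2 * omega**2
    den = (omega_g**2 - omega**2) ** 2 + 4.0 * h_g**2 * omega_g**2 * omega**2
    return s0 * num / den

def stationary_sample(t, omega_l=0.5, omega_u=60.0, n_freq=512, rng=None):
    """Spectral representation of a stationary process, Eq. (2.3)."""
    rng = np.random.default_rng(rng)
    d_omega = (omega_u - omega_l) / n_freq
    omega_k = omega_l + (np.arange(1, n_freq + 1) - 0.5) * d_omega
    phi_k = rng.uniform(0.0, 2.0 * np.pi, n_freq)
    amp = np.sqrt(2.0 * kanai_tajimi(omega_k) * d_omega)
    return np.sum(amp[:, None] * np.cos(omega_k[:, None] * t + phi_k[:, None]), axis=0)

t = np.arange(0.0, 20.0, 0.01)
w = stationary_sample(t, rng=0)

# Illustrative envelope f(t): parabolic build-up to 2 s, constant to 10 s, exponential decay.
f = (t / 2.0) ** 2
f[t > 2.0] = 1.0
f[t > 10.0] = np.exp(-0.3 * (t[t > 10.0] - 10.0))
a = f * w      # non-stationary artificial ground acceleration, Eq. (2.2)
```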
If the triple Fourier transform of R_{uu}(\xi, \eta, \tau) exists, the frequency-wave number spectrum of u(x, y, t) is calculated as

    S_{uu}(k_x, k_y, \omega) = \frac{1}{(2\pi)^3} \int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty} R_{uu}(\xi, \eta, \tau) \exp(-i k_x \xi - i k_y \eta - i \omega \tau) \, d\xi \, d\eta \, d\tau        (2.5)

where k_x is the wave number in the x-direction; k_y is the wave number in the y-direction; and \omega is the frequency. The inverse transform is given by

    R_{uu}(\xi, \eta, \tau) = \int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty} S_{uu}(k_x, k_y, \omega) \exp(i k_x \xi + i k_y \eta + i \omega \tau) \, dk_x \, dk_y \, d\omega        (2.6)

A closed-form analytical FK spectrum of the displacement field at the free surface of an elastic half space was derived for a point source subjected to a double couple, and it was then employed for simulation of ground motion displacements (Deodatis et al, 1990). The FK spectrum was utilized to evaluate spatial correlation characteristics in terms of the cross-spectral density function and the spatial coherence function. The synthesis was carried out numerically using the Fast Fourier Transform (FFT) technique to perform the inversion from the frequency-wave number domain to the time-space domain. The simulation is an extension of the spectral representation method, and the following expressions are from Deodatis et al (1990),

    u(x, y, t) = \sqrt{2} \sum_{l_x=1}^{N_x} \sum_{l_y=1}^{N_y} \sum_{l=1}^{N} \sum_{I_x=\pm 1} \sum_{I_y=\pm 1} \sqrt{S_{uu}(I_x k_{x l_x}, I_y k_{y l_y}, \omega_l) \, \Delta k_x \, \Delta k_y \, \Delta\omega} \; \cos\!\left(I_x k_{x l_x} x + I_y k_{y l_y} y + \omega_l t + \phi^{(I_x I_y)}_{l_x l_y l}\right)        (2.7)

where

    k_{x l_x} = l_x \Delta k_x, \quad l_x = 1, 2, ..., N_x;
    k_{y l_y} = l_y \Delta k_y, \quad l_y = 1, 2, ..., N_y;
    \omega_l = l \, \Delta\omega, \quad l = 1, 2, ..., N;
    \Delta k_x = k_{xu}/N_x; \quad \Delta k_y = k_{yu}/N_y; \quad \Delta\omega = \omega_u/N;

and \phi^{(I_x I_y)}_{l_x l_y l} is a random phase angle uniformly distributed over (0, 2\pi).

It was assumed in the above that the frequency-wave number spectrum is significant only in the region defined by

    -k_{xu} \le k_x \le k_{xu}, \quad -k_{yu} \le k_y \le k_{yu}, \quad -\omega_u \le \omega \le \omega_u

2.2.4 Autoregressive moving average (ARMA) model

The ARMA method is a time series analysis approach that synthesizes ground motion by a multi-step discrete equation, the coefficients of which may be time-varying to introduce non-stationary behavior. The ARMA model describes a linear relationship between the present and past values of a time series Z_k and a white noise shock W_k as

    Z_k - a_1 Z_{k-1} - ... - a_p Z_{k-p} = W_k - b_1 W_{k-1} - ... - b_q W_{k-q}        (2.8)

where Z_k, Z_{k-1}, ..., Z_{k-p} are the variable values at times k\delta t, (k-1)\delta t, ..., (k-p)\delta t; a_1, ..., a_p are the autoregressive coefficients; W_k, W_{k-1}, ..., W_{k-q} are the noise values at times k\delta t, (k-1)\delta t, ..., (k-q)\delta t; b_1, ..., b_q are the moving average coefficients; and \delta t is the time increment.

In Chang et al (1982), the Box-Jenkins approach (Box and Jenkins, 1976) was applied for identification of suitable ARMA models and optimal estimation of the modeling parameter values. As the Box-Jenkins procedures are strictly valid only for stationary time sequences, a moving window approach was employed by dividing the non-stationary target accelerograms into short equal segments and analyzing each segment individually. Further, goodness of fit was evaluated by examining the statistics of the residual sequences and comparing them with those of discrete white noise. A second-order autoregressive, first-order moving average model ARMA(2,1) and a fourth-order autoregressive, first-order moving average model ARMA(4,1) were found to best fit the target accelerograms.

Ellis et al (1990) applied time series analysis for generation of site-dependent accelerograms.
The target accelerograms were analyzed to estimate the ARMA model parameters, and a set of regression relations was derived relating the model parameters to the physical variables of the site, such as earthquake magnitude, epicentral distance and site geology. For simulation of site-specific ground motions, a stationary time series was first generated; it was then re-digitized to add non-stationary frequency content, and subsequently multiplied by the standard deviation envelope to yield an artificial accelerogram that is non-stationary in amplitude and frequency content as well as consistent with the physical conditions of the site. The advantage of their procedure is that the time-invariant parameters are related to physical variables at the site. Moreover, confidence intervals for the model parameters can be used to generate an ensemble of earthquake ground motions corresponding to the mean, or the mean plus one standard deviation, for design purposes.

The deficiency of such a model is that the physical interpretation of the coefficients in the equation is not obvious, and these coefficients must be determined by fitting to target earthquake accelerograms. Another shortcoming is that the coefficients should be time-varying for a truly non-stationary process, but this can render the model complicated and even unmanageable. For applications of ARMA models in earthquake motion simulation, reference is made to Polhemus and Cakmak (1981); Chang et al (1982); Nau et al (1982); Gersch and Kitagawa (1985); Cakmak et al (1985); Olafsson and Sigbjornsson (1995); and Spanos and Zeldin (1996).

2.2.5 Wavelet transform method

The wavelet transform is a mathematical tool widely used in electrical and electronic engineering for signal processing; it transforms sequential data on the time axis, such as earthquake accelerograms, into spectral data in both the time and frequency domains. A wavelet transform therefore provides information on the non-stationary, time-dependent intensity of motion at a particular frequency of interest. One attraction of the wavelet transform that is unavailable in the Fourier transform is that the wavelet coefficients derived from time-sequential acceleration data represent the components of energy input in the time and frequency domains. Iyama and Kawamura (1999) applied the wavelet transform to earthquake ground motion analysis and developed the relationship between wavelet coefficients and energy input, i.e., energy principles in wavelet analysis were derived. Using these principles, the time-frequency characteristics of the 1995 Kobe earthquake ground motions were analyzed, and time histories of energy input for various ranges of frequencies and epicentral distances were identified. Furthermore, a technique to simulate earthquake ground accelerations by the inverse wavelet transform was developed for the case where target time-frequency characteristics are specified. Structural response to the synthesized accelerations was compared with the target values, which showed satisfactory correlation between the wavelet coefficients and the energy responses in both the time and frequency domains. The wavelet transform is a powerful analytical tool for identifying ground motion characteristics in both the time and frequency domains, and further studies are expected to explore its potential in earthquake engineering.
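To make the energy-decomposition idea concrete, the following minimal sketch (not from the thesis) computes the energy carried by each frequency band of a discrete wavelet decomposition of an accelerogram. It assumes the PyWavelets package (pywt) is available and uses a synthetic placeholder signal; the band limits quoted in the comments are the usual dyadic approximations for a record sampled at interval dt.

    import numpy as np
    import pywt

    # Synthetic placeholder "accelerogram"; in practice this would be a recorded
    # or simulated ground motion sampled at interval dt.
    dt = 0.02
    t = np.arange(0.0, 20.0, dt)
    acc = np.sin(2.0 * np.pi * 1.5 * t) * np.exp(-0.2 * t)

    # Multi-level discrete wavelet transform: coeffs = [cA_L, cD_L, ..., cD_1]
    level = 5
    coeffs = pywt.wavedec(acc, 'db4', level=level)

    # Energy carried by each detail band (level j spans roughly fs/2**(j+1) to fs/2**j)
    fs = 1.0 / dt
    for j, detail in zip(range(level, 0, -1), coeffs[1:]):
        energy = float(np.sum(detail ** 2))
        f_hi = fs / 2 ** j
        print(f"detail level {j}: ~{f_hi / 2:.2f}-{f_hi:.2f} Hz, energy = {energy:.3f}")

Summing the band energies over sliding time windows of the coefficients would give the kind of time history of energy input described above.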
2.2.6 Neural network model

Ghaboussi and Lin (1998) proposed a method to generate artificial earthquake accelerograms from pseudo-velocity response spectra using neural networks. A two-stage approach was employed. First, the Fast Fourier Transform was used to calculate the Fourier spectrum of a given accelerogram. Then, a replicator neural network was applied as a data compression tool to reduce the dimensionality of the discrete Fourier spectrum. The compressed discrete Fourier spectrum can be decompressed, and the inverse Fourier transform carried out to obtain the associated ground motion. A multi-layer feedforward neural network was employed to map from the pseudo-velocity response spectrum to the compressed Fourier spectrum. Finally, the target accelerogram was obtained by combining the multi-layer feedforward neural network with the retrieval part of the replicator neural network. Ghaboussi and Lin's approach was applied to a sample of 30 recorded earthquake accelerograms and exhibited potential for future applications.

2.3 Generation of Non-stationary Ground Motion

In this thesis work, a program was written to generate non-stationary ground motion accelerograms based on the spectral representation method. These accelerograms were then used in structural reliability analysis. The procedure is described in what follows.

2.3.1 Determination of ground motion spectral characteristics

The Kanai-Tajimi acceleration spectrum is employed to model the stationary power spectral density function of the acceleration time history (for clarity, it is repeated here),

    S_{KT}(\omega) = S_0 \, \frac{\omega_g^4 + 4 h_g^2 \omega_g^2 \omega^2}{(\omega_g^2 - \omega^2)^2 + 4 h_g^2 \omega_g^2 \omega^2}        (2.1)

where S_0 is a constant determining the intensity of acceleration, and \omega_g and h_g are the predominant frequency and damping ratio of the ground. The values suggested in Deodatis (1996) for three different soil conditions were used in this study, as shown in Table 2.1.

Table 2.1 Kanai-Tajimi spectrum parameters
    Soil Type                          Frequency \omega_g (rad/sec)    Damping ratio h_g
    Rock or stiff soil                 8\pi                            0.60
    Deep cohesionless soil             5\pi                            0.60
    Soft to medium clays and sands     2.4\pi                          0.85

At zero frequency, the Kanai-Tajimi acceleration power spectrum is not equal to zero, which is inconsistent with actual earthquake records. By applying the Kanai-Tajimi spectrum, a significant low-frequency component is imparted to the ground motion. In order to model the earthquake motion realistically, the low-frequency components must be removed from the Kanai-Tajimi spectrum. This is achieved by applying a high-pass filter to the spectrum. Several filters have been proposed to modify the Kanai-Tajimi spectrum, two of them being the Clough-Penzien filter (Clough and Penzien, 1993) and the sine-square filter (Shinozuka et al, 1994).

Clough and Penzien proposed the following filter, which substantially attenuates the low-frequency components,

    |H_h(\omega)|^2 = \frac{\omega^4}{(\omega_h^2 - \omega^2)^2 + 4 h_h^2 \omega_h^2 \omega^2}        (2.9)

where \omega_h and h_h are the fundamental frequency and damping ratio of the filter, which are selected to ensure the desired frequency content of the earthquake motion.
The recommended values in Deodatis (1996) are reproduced in Table 2.2 below.

Table 2.2 Clough-Penzien filter parameters
    Soil Type                          Frequency \omega_h (rad/sec)    Damping ratio h_h
    Rock or stiff soil                 0.8\pi                          0.60
    Deep cohesionless soil             0.5\pi                          0.60
    Soft to medium clays and sands     0.24\pi                         0.85

The sine-square spectrum was introduced to modify the Kanai-Tajimi spectrum. It is a high-pass filter, which was claimed to take into account the shear dislocation type seismic source effect (described by a ramp-type slip function), and is formulated as follows,

    |H_s(\omega)|^2 = \begin{cases} \sin^2(\omega T/2), & \omega \le \pi/T \\ 1.0, & \omega > \pi/T \end{cases}        (2.10)

where T is the dominant rise time of the ramp function.

A comparison of the Kanai-Tajimi spectrum, the Clough-Penzien spectrum and the sine-square spectrum is shown in Figure 2.2 (\omega_g = 12 rad/sec, h_g = 0.6).

The power spectral density function for the stationary process is then given by

    S(\omega) = S_{KT}(\omega) \, |H_h(\omega)|^2        (2.11a)
    S(\omega) = S_{KT}(\omega) \, |H_s(\omega)|^2        (2.11b)

Figure 2.2 Comparison of Kanai-Tajimi, Clough-Penzien and sine-square spectra

2.3.2 Generation of a stationary process

A stationary earthquake accelerogram, which incorporates the subsoil frequency content, can be generated by superposition of simple harmonic waves with power spectrum S(\omega) and random phases using Equation (2.3).

2.3.3 Selection of modulation function

Seismic ground motion is non-stationary, and it generally has three stages: a build-up stage, a strong-motion stage and a decay stage. The stationary process generated above has to be amplitude-modulated to mimic the time evolution of a real motion. A number of modulation functions have been proposed; the three used in this study are described below.

(1) Jennings modulation function

Jennings et al (1968) proposed a modulation function of the form

    f(t) = \begin{cases} (t/t_1)^2, & 0 \le t < t_1 \\ 1.0, & t_1 \le t < t_2 \\ \exp[\ln(0.1)\,(t - t_2)/(t_d - t_2)], & t_2 \le t \le t_d \end{cases}        (2.12)

The selection of proper constants t_1 and t_2 has been discussed by Jennings, who pointed out that the modulation function depends on the magnitude of the earthquake, the distance from the causative fault and the focal depth, and proposed the following expressions for t_1 and t_2 in terms of the earthquake magnitude M and the ground motion duration t_d:

    t_1 = [0.16 - 0.04(M - 6)] \, t_d        (2.13a)
    t_2 = [0.54 - 0.04(M - 6)] \, t_d        (2.13b)
    t_d = 10^{(M - 2.5)/3.23}                (2.13c)

Figure 2.3 displays the modulation function curve for an earthquake with magnitude M = 7.0.

Figure 2.3 Jennings modulation function

(2) Amin and Ang modulation function

Amin and Ang (1968) proposed the following modulation function,

    f(t) = \begin{cases} (t/t_1)^2, & 0 \le t < t_1 \\ 1.0, & t_1 \le t < t_2 \\ \exp[-c(t - t_2)], & t \ge t_2 \end{cases}        (2.14)

The selection of proper constants t_1 and t_2 has been described by Jennings: t_1 is estimated at around 2-4 sec, and t_2 may be taken as 4 sec, 15 sec and 35 sec, respectively, for earthquakes with magnitudes 6, 7 and 8. The selection of c is based on the focal distance (Solnes, 1997).

(3) Hsu and Bernard modulation function

Hsu and Bernard (1978) suggested the following modulation function,

    f(t) = (t/t_0) \exp(1 - t/t_0)        (2.15)

where t_0 is the time instant at which the earthquake motion attains its peak.
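Subsections 2.3.1-2.3.3 together contain everything needed to assemble an artificial record. The following minimal sketch (Python/NumPy; not the thesis program) chains Equations (2.1), (2.9) and (2.11a) for the filtered spectrum, Equation (2.3) for the stationary process, and Equations (2.15) and (2.2) for the envelope and the final motion. The intensity constant S_0, the duration and the frequency discretization are arbitrary illustrative values rather than values used in the thesis.

    import numpy as np

    def kanai_tajimi(w, S0, wg, hg):
        """Kanai-Tajimi acceleration power spectral density, Eq. (2.1)."""
        num = wg**4 + 4.0 * hg**2 * wg**2 * w**2
        den = (w**2 - wg**2)**2 + 4.0 * hg**2 * wg**2 * w**2
        return S0 * num / den

    def clough_penzien(w, wh, hh):
        """High-pass filter |H_h(w)|^2 of Eq. (2.9)."""
        return w**4 / ((wh**2 - w**2)**2 + 4.0 * hh**2 * wh**2 * w**2)

    def hsu_bernard(t, t0):
        """Modulation function of Eq. (2.15), peaking at t = t0."""
        return (t / t0) * np.exp(1.0 - t / t0)

    rng = np.random.default_rng(1)
    S0, wg, hg = 0.01, 5.0 * np.pi, 0.60          # Table 2.1, deep cohesionless soil
    wh, hh = 0.5 * np.pi, 0.60                    # Table 2.2
    wl, wu, N = 0.1, 50.0 * np.pi, 2000           # frequency range (rad/s) and count

    dw = (wu - wl) / N
    wk = wl + (np.arange(1, N + 1) - 0.5) * dw    # frequency points of Eq. (2.3)
    Sk = kanai_tajimi(wk, S0, wg, hg) * clough_penzien(wk, wh, hh)   # Eq. (2.11a)
    phik = rng.uniform(0.0, 2.0 * np.pi, N)       # random phase angles

    t = np.arange(0.0, 30.0, 0.02)
    # Stationary process, Eq. (2.3): w(t) = sum sqrt(2 S(wk) dw) cos(wk t + phik)
    wt = np.sqrt(2.0 * Sk * dw) @ np.cos(np.outer(wk, t) + phik[:, None])
    at = hsu_bernard(t, t0=5.0) * wt              # non-stationary motion, Eq. (2.2)
    print("peak ground acceleration:", np.max(np.abs(at)))

In practice the result would still be scaled to a target PGA and baseline-corrected as described in the following subsections.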
Figure 2.4 shows the shape of this modulation function for t_0 = 5.0 s.

Figure 2.4 Hsu & Bernard modulation function

2.3.4 Generation of a non-stationary artificial ground motion

The final non-stationary earthquake motion is obtained by applying the modulation function to the stationary earthquake acceleration process, i.e.,

    a(t) = f(t) \cdot w(t)        (2.2)

A ground motion accelerogram generated using the Amin & Ang modulation function with PGA = 0.2g is shown in Figure 2.5, and a ground motion accelerogram generated using the Hsu modulation function with PGA = 0.2g is shown in Figure 2.6.

Figure 2.5 Artificial ground motion generated using the Amin & Ang modulation function

Figure 2.6 Artificial ground motion generated using the Hsu modulation function

Sometimes it is desired that the artificial ground motion have two strong components, and this is accomplished by the following modulation function,

    f(t) = \frac{t}{t_0} \exp\!\left(1 - \frac{t}{t_0}\right) + c \, \frac{t - t_1}{t_2 - t_1} \exp\!\left(1 - \frac{t - t_1}{t_2 - t_1}\right)        (2.16)

where c is the ratio of the second peak amplitude to the first peak amplitude; t_0 is the time instant at which the first peak occurs; t_1 is the start time of the second strong component (the second term is taken as zero for t < t_1); and t_2 is the time instant at which the second peak occurs.

A ground motion accelerogram composed of a main shock and an aftershock with PGA = 0.2g is shown in Figure 2.7.

Figure 2.7 Artificial ground motion with two strong components

The above simulation approach employs a simple power spectrum to account for the spectral characteristics of earthquake ground motion, and a modulation function to reflect the non-stationarity of the earthquake process. The seismogenic source is implicitly assumed to be a point source, so the approach can only be used to generate far-field earthquakes. The advantage of this method is that it is easy to implement and can approximately allow for the local site effect. However, the seismic faulting mechanism is not considered in the model, and the phase angles are assumed to be uniformly distributed over [0, 2\pi], which is not consistent with real earthquake motions, whose frequency contents may vary with time and whose phase angles are usually not uniformly distributed.

2.3.5 Baseline correction

Numerical integration of digital accelerograms (recorded or artificial) in the time domain often results in non-physical shifts in the velocity and displacement time histories. This is natural for artificially generated accelerograms, as they are synthesized from a power spectrum with no physical constraints imposed. Generally, this phenomenon has hardly any influence on the seismic response of structures subjected to those artificial ground motions, since the inertial forces are calculated from the acceleration time history. However, the issue has to be addressed for spatially extended structures, since discrepancies in the absolute displacements at different points of the structure can lead to damage or even failure. In addition, it is necessary to correct artificial accelerograms used for shake table tests, since the hydraulic actuators have a limited displacement range.
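Before detailing the specific correction scheme adopted in this thesis, the following minimal sketch (not from the thesis; it assumes NumPy and SciPy are available) illustrates the generic idea of detrending followed by high-pass filtering with a 0.05 Hz Butterworth corner. Note that scipy.signal.filtfilt is zero-phase, whereas the procedure adopted below applies a causal filter, so this is only an illustration of the concept.

    import numpy as np
    from scipy.signal import butter, filtfilt
    from scipy.integrate import cumulative_trapezoid

    def baseline_correct(acc, dt, fc=0.05):
        """Quadratic detrend of the velocity trace, then high-pass filtering."""
        t = np.arange(len(acc)) * dt
        # Fit v(t) = c0 + c1 t + c2 t^2 and remove its derivative from the acceleration
        vel = cumulative_trapezoid(acc, dx=dt, initial=0.0)
        c2, c1, c0 = np.polyfit(t, vel, 2)
        acc = acc - (c1 + 2.0 * c2 * t)
        # Second-order Butterworth high-pass applied to acceleration, velocity, displacement
        b, a = butter(2, fc, btype='highpass', fs=1.0 / dt)
        acc = filtfilt(b, a, acc)
        vel = filtfilt(b, a, cumulative_trapezoid(acc, dx=dt, initial=0.0))
        disp = filtfilt(b, a, cumulative_trapezoid(vel, dx=dt, initial=0.0))
        return acc, vel, disp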
There are many methods available for correcting digital accelerograms to eliminate unrealistic velocity or displacement drift. Trifunac (1971) applied a time-domain filter, the Ormsby filter, to the acceleration time history. A number of frequency-domain filters have been used for processing earthquake records, such as elliptical filters (Sunder and Conner, 1982; Sunder and Schumacker, 1982) and Butterworth filters (Converse, 1992). A method based on Lagrange multipliers was proposed by Trujillo and Carter (1982). In this thesis, the following procedure is adopted:
(1) Integrate the accelerogram a(t) to obtain the velocity time series v(t);
(2) Fit a quadratic polynomial to the velocity time history v(t) by the least squares technique, v(t) = c_0 + c_1 t + c_2 t^2;
(3) Remove the derivative c_1 + 2 c_2 t from the accelerogram a(t);
(4) Calculate the Fourier spectrum of the accelerogram a(t) by the Fast Fourier Transform (FFT), A(\omega);
(5) Apply a causal high-pass Butterworth filter, h(\omega) = 1/\sqrt{1 + (\omega_c/\omega)^4}, to A(\omega);
(6) Perform the Inverse Fast Fourier Transform (IFFT) of A(\omega) to obtain a(t) as the corrected accelerogram;
(7) Calculate A(\omega) by FFT of a(t), and compute V(\omega), the velocity Fourier spectrum;
(8) Apply a causal high-pass Butterworth filter, h(\omega) = 1/\sqrt{1 + (\omega_c/\omega)^4}, to V(\omega);
(9) Perform the IFFT of V(\omega) to obtain v(t) as the final velocity time series;
(10) Calculate V(\omega) by FFT of v(t), and compute D(\omega), the displacement Fourier spectrum;
(11) Apply a causal high-pass Butterworth filter, h(\omega) = 1/\sqrt{1 + (\omega_c/\omega)^4}, to D(\omega);
(12) Perform the IFFT of D(\omega) to obtain d(t) as the final displacement time series.

In the above, the corner circular frequency is taken as \omega_c = 2\pi f_c = 2\pi \times 0.05 = 0.1\pi, to remove the long-period components whose period exceeds 20 sec.

A ground accelerogram and the associated velocity and displacement time histories prior to baseline correction are presented in Figures 2.8(a), (b) and (c), while the corresponding time histories after processing are displayed in Figures 2.8(d), (e) and (f). The units of acceleration, velocity and displacement are, respectively, m/sec^2, m/sec and m.

Figure 2.8(a) Acceleration time history before baseline correction
Figure 2.8(b) Velocity time history before baseline correction
Figure 2.8(c) Displacement time history before baseline correction
Figure 2.8(d) Acceleration time history after baseline correction
Figure 2.8(e) Velocity time history after baseline correction
Figure 2.8(f) Displacement time history after baseline correction

2.4 Simulation of Ground Motion Compatible with a Response Spectrum

In practical engineering applications, engineers are generally required to perform seismic resistant design in accordance with a mandatory code. The seismic design ground motion for a given site is specified in the form of a design response spectrum, which is an idealization of the response of a linear single-degree-of-freedom oscillator subjected to a set of ground motions.
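Because computed response spectra play a central role in what follows, it is worth recalling how a response spectrum is obtained from a given accelerogram: for each oscillator period, the single-degree-of-freedom equation of motion is integrated step by step and the peak response is recorded. The sketch below (Python/NumPy, not from the thesis) uses the Newmark average-acceleration scheme as one common direct-integration choice; `acc` and `dt` denote a given acceleration record and its time step, and the 5% damping ratio matches the code spectra discussed next.

    import numpy as np

    def response_spectrum(acc, dt, periods, xi=0.05):
        """Pseudo-acceleration response spectrum Sa(T) of a ground acceleration record."""
        Sa = np.zeros(len(periods))
        beta, gamma = 0.25, 0.5                      # Newmark average acceleration
        for i, T in enumerate(periods):
            wn = 2.0 * np.pi / T
            c, k = 2.0 * xi * wn, wn ** 2            # unit mass
            khat = k + 1.0 / (beta * dt**2) + gamma * c / (beta * dt)
            u, v, a = 0.0, 0.0, -acc[0]
            umax = 0.0
            for ag in acc[1:]:
                p = -ag                              # effective load per unit mass
                phat = (p + (u / (beta * dt**2) + v / (beta * dt) + (1/(2*beta) - 1) * a)
                          + c * (gamma * u / (beta * dt) + (gamma/beta - 1) * v
                                 + dt * (gamma/(2*beta) - 1) * a))
                un = phat / khat
                an = (un - u) / (beta * dt**2) - v / (beta * dt) - (1/(2*beta) - 1) * a
                v = v + dt * ((1 - gamma) * a + gamma * an)
                u, a = un, an
                umax = max(umax, abs(u))
            Sa[i] = wn**2 * umax                     # pseudo-acceleration
        return Sa

    periods = np.linspace(0.05, 4.0, 80)
    # Sa = response_spectrum(acc, dt, periods)       # acc, dt from a generated record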
To carry out nonlinear dynamic analysis, it is often preferable to use ground motion time histories compatible with the code design spectrum or with a response spectrum provided by the client. A program was developed in this study to generate earthquake motion time histories compatible with the code-prescribed design spectrum or a user-specified response spectrum. The procedure is described in Fig. 2.9. The idea of adjusting the power spectral density function was suggested by Gasparini and Vanmarcke (1976). The response spectrum is calculated by direct integration (Paz, 1997). Baseline correction can be done in the same way as outlined in subsection 2.3.5.

A stationary acceleration time history is first generated based on an assumed spectral density function, with Equation (2.3) applied to produce the stationary ground motion time history. Subsequently, a modulation function f(t) is applied to the stationary time history w(t) as in Equation (2.2) to obtain a non-stationary time history a(t). Two modulation functions were implemented, namely the Amin & Ang function, Equation (2.14), and the Hsu & Bernard function, Equation (2.15).

The 1997 U.S. Uniform Building Code design spectrum has been implemented as the default design spectrum. The design ground motion is specified in terms of a 5% damped elastic response spectrum, as shown in Figure 2.10. The seismic coefficients C_a and C_v are given in Table 2.3. For soil profile type S_F, a site-specific geotechnical investigation and dynamic soil response analysis should be carried out to determine the proper seismic coefficients.

Figure 2.10 UBC design response spectrum

Table 2.3 Seismic coefficients C_a and C_v
    Soil                              Seismic Zone Factor Z
    type     Z = 0.075        Z = 0.15         Z = 0.20         Z = 0.30         Z = 0.40
             C_a     C_v      C_a     C_v      C_a     C_v      C_a     C_v      C_a       C_v
    S_A      0.06    0.06     0.12    0.12     0.16    0.16     0.24    0.24     0.32N_a   0.32N_v
    S_B      0.08    0.08     0.15    0.15     0.20    0.20     0.30    0.30     0.40N_a   0.40N_v
    S_C      0.09    0.13     0.18    0.25     0.24    0.32     0.33    0.45     0.40N_a   0.56N_v
    S_D      0.12    0.18     0.22    0.32     0.28    0.40     0.36    0.54     0.44N_a   0.64N_v
    S_E      0.19    0.26     0.30    0.50     0.34    0.64     0.36    0.84     0.36N_a   0.96N_v

In the above, N_a and N_v are two near-source factors. The N_a factor applies to the acceleration-controlled portion of the design spectrum, and the N_v factor applies to the velocity-controlled portion. N_a takes values from 1 to 1.5 and N_v from 1 to 2, depending on the seismicity of the faults and the relative location of the active faults.

Figure 2.9 Flowchart for generating a response spectrum compatible ground motion time history. The steps are: read the target acceleration response spectrum S_a^T(T, \xi); initialize the power spectral density function S(\omega); generate a stationary acceleration time history w(t); generate a non-stationary acceleration time history a(t); calculate the response spectrum S_a(T, \xi); compare the target spectrum S_a^T(T, \xi) with the calculated spectrum S_a(T, \xi); if they do not match, update the power spectral density function, S(\omega) \leftarrow S(\omega) \left[ S_a^T(T, \xi)/S_a(T, \xi) \right]^2, and repeat from the generation of the stationary time history; otherwise, stop.
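The iteration of Fig. 2.9 can be sketched compactly as follows. The block below assumes the `response_spectrum` helper from the sketch earlier in this section is in scope, uses a flat placeholder target spectrum purely for illustration, and applies the squared-ratio update of the power spectral density commonly associated with Gasparini and Vanmarcke's approach; the tolerance, duration and discretization are arbitrary illustrative choices, not the thesis settings.

    import numpy as np

    rng = np.random.default_rng(0)
    dt, dur = 0.02, 20.0
    t = np.arange(0.0, dur, dt)
    periods = np.linspace(0.1, 4.0, 40)
    sa_target = np.full(len(periods), 2.0)                # placeholder target (m/s^2)
    envelope = (t / 5.0) * np.exp(1.0 - t / 5.0)          # Hsu & Bernard, Eq. (2.15)

    N = 1000
    wk = np.linspace(0.5, 60.0, N)                        # rad/s
    dw = wk[1] - wk[0]
    S = np.full(N, 1e-3)                                  # initial flat PSD guess
    phik = rng.uniform(0.0, 2.0 * np.pi, N)               # phases kept fixed over iterations

    for iteration in range(10):
        wt = np.sqrt(2.0 * S * dw) @ np.cos(np.outer(wk, t) + phik[:, None])   # Eq. (2.3)
        acc = envelope * wt                                                     # Eq. (2.2)
        Sa = response_spectrum(acc, dt, periods)          # computed spectrum (helper above)
        ratio = sa_target / np.maximum(Sa, 1e-12)
        if np.max(np.abs(ratio - 1.0)) < 0.05:            # within 5% at every period
            break
        # Map period-wise ratios onto the PSD frequencies and apply the squared-ratio update
        r_interp = np.interp(2.0 * np.pi / wk, periods, ratio)
        S *= r_interp ** 2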
A ground motion acceleration time history compatible with the UBC design spectrum for Z = 0.20 and soil type S_C is shown in Figure 2.11.

Figure 2.11 UBC design spectrum compatible artificial ground motion accelerogram

2.5 Summary and Discussion

Earthquake ground motion is one of the most important uncertainties that significantly affect structural behavior; hence, the success of seismic resistant design using time history analysis depends largely on the appropriate selection of strong ground motion acceleration time histories. A sufficient number of accelerograms are needed to assess the variability in structural responses resulting from the uncertainty in earthquake motions. A number of methods proposed in the past for artificially synthesizing earthquake ground motion accelerations were briefly reviewed. Seismologists and geophysicists are usually interested in the physical faulting process, and they simulate earthquake motions based on physical models of the earthquake process. Structural engineers, on the other hand, are more concerned with the effects of ground motions on structural responses. Response spectrum compatible ground motion accelerograms are generally the preferred choice in seismic resistant analysis and design; whether the generated ground motion can represent the anticipated future earthquake is secondary. Whatever approach is used, the three major factors of ground motion should be allowed for, namely the intensity, the frequency content and the duration. It is recommended that the simulation be based on a probabilistic seismic hazard analysis of the site, especially for a large or important structure, as too many uncertainties are involved in the earthquake process.

A program was developed in this study to generate non-stationary ground motions using spectral representation. A baseline correction algorithm was devised to process the synthesized accelerograms. The generated ground motions will be used as ground excitations in structural analysis to calculate structural responses. Three ground motion parameters (PGA, predominant frequency and duration) have been identified so that they can be manipulated later in response database construction. A response spectrum compatible ground motion can also be generated using the code design spectrum or a response spectrum provided by the client.

With the development of engineering seismology and earthquake engineering, more knowledge about earthquakes will be gained. It is expected that in the future more seismological information will be incorporated into artificial ground motion simulation, and that realistic earthquake ground motions will be predicted based on probabilistic seismic hazard assessment of the region of interest. As earthquake resistant design codes change their philosophy toward performance-based design, in order to preserve the operational integrity of critical structures after a major earthquake in addition to life safety, structural design is subject to more stringent requirements. To produce a safe and economical design, structural engineers must model the ground motion and structural behavior realistically. Thus, it is essential to synthesize, as realistically as possible, the ground motions that the structure may experience during its intended lifetime. It is envisioned that realistic prediction of earthquake ground motion at a given site will become possible with further advances in seismology and earthquake engineering.
CHAPTER 3 DESIGN OF COMPUTER EXPERIMENTS METHODOLOGY

3.1 Introduction

The analysis and design of modern engineering projects often involve computer-based simulations. In the past, new product design was usually achieved, to some extent, on the basis of physical experiments on prototypes or models. However, such experiments are, in general, costly and time-consuming. With the great advancements in computer technology, computer simulations are instead used extensively in a variety of areas, such as engineering design and industrial manufacturing. Even with today's most advanced computers, however, it is still expensive and time-consuming to simulate large and complex engineering systems for design optimization and reliability analysis. Hence, approximate approaches, based on computer experimental design and response modeling, are employed to reduce the computational expense and running time to an acceptable level without sacrificing prediction accuracy.

Before an approximate response model is constructed, the design variables (input variables) and the response variables of interest (output variables) must be selected judiciously. First, a sample of combinations of the design variables is generated during an experimental design phase. An appropriate computer program is then run for each combination in the sample, yielding the corresponding responses. Next, a response model is developed to map the input-output functional relationship. Finally, the response model is used as a surrogate that is sufficiently accurate to substitute for the actual response during design optimization or reliability analysis. Thus, building approximations for computer simulations involves four steps:
(1) Problem specification;
(2) Experimental design;
(3) Response modeling;
(4) Application of the response models.

In the following, currently available computer experimental design methods are briefly reviewed, followed by the design approaches proposed in this study and their implementations. A summary concludes the chapter with comparisons of the advantages and disadvantages of the various experimental design methods.

3.2 Review of Methods for Design of Computer Experiments

Before building a response model, a database of representative input-output pairs must be created. The data points should be carefully selected so that they cover the design space as uniformly as possible. The problem of choosing a suitable sample of design variables is the subject of Design of Experiments (DOE), a branch of statistics. Classical DOE was developed for physical experiments that are subject to noise, so replications at some points may be necessary for estimation of the error due to noise. Central Composite Design is a typical classical experimental design. As physical experiments are costly and time-consuming, the designs are made parsimonious to reduce the experimental overhead, and a linear or quadratic polynomial response surface is usually built. Since the number of data points grows exponentially with dimension, it is impossible to apply this approach to high-dimensional problems, and it is neither practical nor accurate to model the intricate behavior of a large and complex system with such a modeling technique.

Space-filling designs have been proposed for computer experiments to overcome the above-mentioned drawbacks.
The data points are chosen to scatter uniformly throughout the design space so that as much information as possible can be obtained from the computer simulation. Sacks et al (1989) thoroughly discussed computer experimental design. They outlined the differences between physical experiments and computer experiments; furthermore, they treated the deterministic output from a computer experiment as the realization of a random process and used a Kriging model for prediction. Koehler and Owen (1996) systematically presented two main statistical approaches to computer experiments, one based on Bayesian statistics and the other on sampling techniques. Latin Hypercube Design (McKay et al, 1979) was the first approach introduced for computer experiments, and it is covered in the next section. Orthogonal Array Design was proposed to improve upon Latin Hypercube Design (Owen, 1992), and orthogonal-array-based Latin Hypercube Design (Hedayat et al, 1999) was developed by Tang (1993). Park (1994) applied the integrated mean square error criterion to Latin Hypercube Design to generate an Optimal Latin Hypercube Design. Morris and Mitchell (1995) employed the max-min distance criterion given in Johnson et al (1990) to construct a Max-min Latin Hypercube Design; simulated annealing was employed to maximize the minimal inter-point distance so that the data points are spread as far as possible from each other. Fang (1980) applied number-theoretic methods for experimental design to generate a Uniform Design; a generating vector is needed for constructing a uniform design, and in high-dimensional cases this design needs considerably fewer samples than other methods. Ye (1998) created an Orthogonal Latin Hypercube Design in which any two columns of the Latin hypercube are orthogonal. Kalagnanam and Diwekar (1997) used Hammersley Sequence Sampling for experimental design. The Hammersley sequence is a kind of low discrepancy sequence, which places data points evenly in the unit hypercube; it provides a design with better uniformity than Latin Hypercube Design. Simpson et al (2001) compared different experimental design methods, namely Latin Hypercube Design, Orthogonal Array Design, Uniform Design and Hammersley Sequence Design. Based on two engineering problems, it was concluded that Uniform Design performs well when the sample size is small, whereas Hammersley Sequence Design exhibits better behavior for large sample sizes.

3.2.1 Central composite design

Central Composite Design (CCD) is a fractional factorial design that is composed of a central point, the corner points of a hypercube, and additional "star points" which are situated on the axes at a distance \alpha from the origin. \alpha may take values on the interval [1.0, \sqrt{s}], where s is the number of variables (in statistics, they are termed factors). When \alpha = 1.0, the design is called a face-centered Central Composite Design. Central Composite Design is a three-level design that enables a quadratic polynomial response surface to be built. Altogether 2^s + 2s + 1 data points are needed, while a full second-order polynomial response surface has (s+1)(s+2)/2 coefficients to be determined. A Central Composite Design for three variables is shown in Table 3.1, where -1, 0 and 1 indicate, respectively, the lower bound, the mean and the upper bound of each variable, and \alpha denotes the star-point distance.

Table 3.1 Central Composite Design for three variables
    Sample No.     X1        X2        X3
    1               0         0         0
    2               \alpha    0         0
    3              -\alpha    0         0
    4               0         \alpha    0
    5               0        -\alpha    0
    6               0         0         \alpha
    7               0         0        -\alpha
    8               1         1         1
    9               1         1        -1
    10              1        -1         1
    11              1        -1        -1
    12             -1         1         1
    13             -1         1        -1
    14             -1        -1         1
    15             -1        -1        -1
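The coded points of a central composite design are easy to enumerate directly. The sketch below (Python/NumPy, not part of the thesis) lists the centre, star and corner points; called with three variables and alpha = 1 it reproduces the structure of the 15-run design in Table 3.1, in its face-centered variant.

    import itertools
    import numpy as np

    def central_composite(s, alpha=None):
        """Coded CCD points: 1 centre + 2s star points + 2**s factorial corners."""
        alpha = np.sqrt(s) if alpha is None else alpha
        points = [np.zeros(s)]                                   # centre point
        for i in range(s):                                       # 2s star points
            for sign in (+1.0, -1.0):
                p = np.zeros(s)
                p[i] = sign * alpha
                points.append(p)
        for corner in itertools.product((-1.0, 1.0), repeat=s):  # 2**s corners
            points.append(np.array(corner))
        return np.vstack(points)

    print(central_composite(3, alpha=1.0))   # 15 coded points, face-centred CCD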
3.2.2 Latin hypercube design

Latin Hypercube Sampling (LHS) is a stratified Monte Carlo simulation method. The probability range [0.0, 1.0] of each random variable is divided into n equal intervals, within each of which a random number P_i (i = 1, ..., n) is generated. The corresponding random variable values are then obtained through the inverse of the cumulative distribution function F(x), i.e., X_i = F^{-1}(P_i), where P_i denotes the probability value for the i-th interval and X_i represents the corresponding random variable value. Latin Hypercube Design (LHD) is the application of Latin Hypercube Sampling in s dimensions with random combination of the n random variable levels. A Latin hypercube can be written as a matrix of n rows and s columns (n is the number of samples and s is the number of variables); each column is a random permutation of the n levels of the associated variable. A Latin Hypercube Design of 10 combinations for two variables is shown in Table 3.2 and plotted in Figure 3.1. In the table, each variable is scaled to the interval [0, 1].

Table 3.2 Latin Hypercube Design for two variables
    Sample No.     X1       X2
    1              0.02     0.66
    2              0.15     0.23
    3              0.23     0.52
    4              0.31     0.73
    5              0.47     0.35
    6              0.56     0.01
    7              0.62     0.83
    8              0.71     0.46
    9              0.83     0.12
    10             0.94     0.93

Figure 3.1 A Latin Hypercube Design for two variables

A Latin Hypercube Design is easy to construct, and each variable is sampled at n levels. When the data points are projected onto any single dimension, there are exactly n distinct points. This is a desirable attribute for deterministic computer experiments, as the data points do not overlap, which minimizes any information loss. Nonetheless, since the data points are randomly spread in the design space, some points may cluster in certain regions, leaving voids elsewhere. This situation should be avoided in all practical applications, and a few approaches have been proposed to address this issue of non-uniformity.

3.2.3 Uniform design

Uniform Design resulted from the application of number-theoretic methods to the design of experiments; readers are referred to Fang et al (2000) for the mathematical theory and details. The essence of the number-theoretic method is to choose data points in such a way that they scatter uniformly in the s-dimensional unit hypercube. The generation of a Uniform Design is outlined as follows (Fang and Wang, 1994).

Suppose the dimensionality is s, and n data points are to be generated. Let (n; h_1, ..., h_s) be a vector with integer components satisfying 1 \le h_i < n, h_i \ne h_j (i \ne j), s < n, and greatest common divisors (n, h_i) = 1, i = 1, ..., s. Let

    q_{ki} = k \, h_i \ (\mathrm{mod}\ n), \quad 1 \le q_{ki} \le n        (3.1)

The k-th sample of the i-th variable is then calculated by

    x_{ki} = \frac{2 q_{ki} - 1}{2n}        (3.2)

where (n; h_1, ..., h_s) is termed the generating vector.

A Uniform Design for two variables, each of which has 21 levels, is shown in Table 3.3 and plotted in Figure 3.2. The generating vector adopted is (21; 1, 13). Once the generating vector is known, it is easy to generate a Uniform Design.
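The good-lattice-point construction of Equations (3.1)-(3.2) amounts to only a few lines of code. The sketch below (Python/NumPy, not from the thesis) reproduces the 21-run design of Table 3.3 for the generating vector (21; 1, 13).

    import numpy as np

    def uniform_design(n, h):
        """Uniform design from generating vector (n; h_1, ..., h_s), Eqs. (3.1)-(3.2)."""
        h = np.asarray(h)
        k = np.arange(1, n + 1)[:, None]          # run index k = 1..n
        q = (k * h - 1) % n + 1                   # q_ki = k*h_i (mod n), mapped to 1..n
        return (2.0 * q - 1.0) / (2.0 * n)        # x_ki = (2 q_ki - 1) / (2n)

    U = uniform_design(21, [1, 13])
    print(U[:3])     # first three runs: (1/42, 25/42), (3/42, 9/42), (5/42, 35/42)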
Compared to other methods, Uniform Design is more economical, as it needs far fewer points, especially in high-dimensional cases.

Table 3.3 A Uniform Design for two variables with 21 levels
    Sample No.     X1        X2
    1              1/42      25/42
    2              3/42      9/42
    3              5/42      35/42
    4              7/42      19/42
    5              9/42      3/42
    6              11/42     29/42
    7              13/42     13/42
    8              15/42     39/42
    9              17/42     23/42
    10             19/42     7/42
    11             21/42     33/42
    12             23/42     17/42
    13             25/42     1/42
    14             27/42     27/42
    15             29/42     11/42
    16             31/42     37/42
    17             33/42     21/42
    18             35/42     5/42
    19             37/42     31/42
    20             39/42     15/42
    21             41/42     41/42

Figure 3.2 A Uniform Design for two variables with 21 levels

Note that the generated data points lie in the unit hypercube I^s = (0, 1)^s, and they must be transformed in the following way to the design space for practical application,

    X_{ki} = X_i^l + x_{ki} (X_i^u - X_i^l)        (3.3)

where X_{ki} is the k-th sample of the i-th variable in the design space; x_{ki} is the k-th sample of the i-th variable in the unit cube space; X_i^l is the lower bound of the i-th variable; and X_i^u is the upper bound of the i-th variable.

3.2.4 Low discrepancy sequence design

Low discrepancy sequences such as the Halton sequence, Hammersley sequence, Sobol sequence, Faure sequence and Niederreiter sequence are used for numerical integration, optimization and computer simulation. They form the family of quasi-Monte Carlo methods. The data points generated by low discrepancy sequences have an asymptotically uniform distribution.

3.2.4.1 Hammersley sequence design

The principle of the Hammersley sequence (Hammersley, 1960) is briefly outlined below. For mathematical details, readers are referred to Niederreiter (1992).

Each non-negative integer k can be expanded using a prime base p:

    k = a_0 + a_1 p + a_2 p^2 + ... + a_r p^r        (3.4)

where a_i \in [0, p - 1]. Define the following function of k,

    \Phi_p(k) = \frac{a_0}{p} + \frac{a_1}{p^2} + ... + \frac{a_r}{p^{r+1}}        (3.5)

If p = 2, the corresponding sequence \Phi_2(k) is termed the Van der Corput sequence.

The Hammersley sequence is generated as follows. Let s denote the dimension of the design space, and select s - 1 distinct prime number bases, denoted by p_1, p_2, ..., p_{s-1}; the k-th s-dimensional Hammersley point is given by the vector

    \left( \frac{2k - 1}{2n}, \; \Phi_{p_1}(k), \; \Phi_{p_2}(k), \; ..., \; \Phi_{p_{s-1}}(k) \right)        (3.6)

where n indicates the number of sample points, and p_1, p_2, ..., p_{s-1} are the bases for dimensions 2, 3, ..., s respectively, with p_1 < p_2 < ... < p_{s-1}.

A Hammersley Sequence Design for two variables is shown in Table 3.4 and plotted in Figure 3.3. The number of data points is 20, with a base of 2.

Table 3.4 A Hammersley Sequence Design for two variables
    Sample No.     X1        X2
    1              1/40      0.50000
    2              3/40      0.25000
    3              5/40      0.75000
    4              7/40      0.12500
    5              9/40      0.62500
    6              11/40     0.37500
    7              13/40     0.87500
    8              15/40     0.06250
    9              17/40     0.56250
    10             19/40     0.31250
    11             21/40     0.81250
    12             23/40     0.18750
    13             25/40     0.68750
    14             27/40     0.43750
    15             29/40     0.93750
    16             31/40     0.03125
    17             33/40     0.53125
    18             35/40     0.28125
    19             37/40     0.78125
    20             39/40     0.15625

Figure 3.3 A Hammersley Sequence Design for two variables

A Hammersley sequence is easy to generate, and it provides a low discrepancy sequence with good uniformity. As before, the generated points lie in the unit hypercube I^s = (0, 1)^s, and they must be transformed to the design space by Equation (3.3) for practical application.
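A sketch of the radical-inverse function of Equations (3.4)-(3.5) and the Hammersley point set of Equation (3.6) follows (Python/NumPy, not from the thesis); its first rows agree with Table 3.4 for n = 20 and base 2. The tuple of primes is simply a convenient default, enough for up to seven dimensions.

    import numpy as np

    def radical_inverse(k, p):
        """Phi_p(k): reflect the base-p digits of k about the radix point, Eqs. (3.4)-(3.5)."""
        phi, f = 0.0, 1.0 / p
        while k > 0:
            k, digit = divmod(k, p)
            phi += digit * f
            f /= p
        return phi

    def hammersley(n, s, primes=(2, 3, 5, 7, 11, 13)):
        """n points of an s-dimensional Hammersley design in the unit hypercube."""
        pts = np.empty((n, s))
        for k in range(1, n + 1):
            pts[k - 1, 0] = (2 * k - 1) / (2.0 * n)      # first coordinate, Eq. (3.6)
            for j in range(1, s):
                pts[k - 1, j] = radical_inverse(k, primes[j - 1])
        return pts

    print(hammersley(20, 2)[:4])    # compare with the first rows of Table 3.4

Replacing the first coordinate with a further radical inverse (base p_1, p_2, ...) turns the same routine into the Halton construction described next.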
3.2.4.2 Halton sequence design

The Halton sequence (Halton, 1960) is similar to the Hammersley sequence, and the procedure to generate it is as follows:
a) Choose s distinct prime numbers p_1, p_2, ..., p_s, one for each dimension, with p_1 < p_2 < ... < p_s;
b) Express the integer k using a prime base p_i (i = 1, 2, ..., s) as in Equation (3.4), and calculate the function \Phi_{p_i}(k) as in Equation (3.5);
c) The k-th s-dimensional Halton point is given by the vector

    \left( \Phi_{p_1}(k), \; \Phi_{p_2}(k), \; ..., \; \Phi_{p_s}(k) \right)        (3.7)

where p_1, p_2, ..., p_s are the bases for dimensions 1, 2, 3, ..., s respectively, with p_1 < p_2 < ... < p_s.

A Halton Sequence Design for two variables with p_1 = 2, p_2 = 3 and n = 20 is shown in Table 3.5 and plotted in Figure 3.4.

Table 3.5 A Halton Sequence Design for two variables
    Sample No.     X1         X2
    1              0.50000    0.333333
    2              0.25000    0.666667
    3              0.75000    0.111111
    4              0.12500    0.444444
    5              0.62500    0.777778
    6              0.37500    0.222222
    7              0.87500    0.555556
    8              0.06250    0.888889
    9              0.56250    0.037037
    10             0.31250    0.370370
    11             0.81250    0.703704
    12             0.18750    0.148148
    13             0.68750    0.481481
    14             0.43750    0.814815
    15             0.93750    0.259259
    16             0.03125    0.592592
    17             0.53125    0.925926
    18             0.28125    0.074074
    19             0.78125    0.407407
    20             0.15625    0.740741

Figure 3.4 A Halton Sequence Design for two variables

As before, the generated points lie in the unit hypercube I^s = (0, 1)^s, and they must be transformed to the design space by Equation (3.3) for practical application.

3.3 Experimental Design Implementation in This Study

The classical experimental design methods were developed for physical experiments. Since at most three levels are considered for each variable, they are customized for constructing second-order polynomial response surfaces. As the number of design points grows exponentially with the number of variables, they are useful only when the dimensionality is relatively small. Latin Hypercube Design may result in clustering of data points in some regions, leaving voids elsewhere, especially when the sample size is small. Although Uniform Design is very efficient and can produce a design with good uniformity, it lacks flexibility, as it involves looking up a deterministic design table. Low discrepancy sequences have asymptotic uniformity, whereas the degree of uniformity of finite sequences is not clear. Hence, a number of approaches for computer experimental design have been proposed in this thesis and are discussed below.

3.3.1 Grid design

When the number of variables is small, it is advantageous to employ a Grid Design. It is a full factorial design with all data points uniformly scattered in the design space. The total number of data points is given by n = \prod_{i=1}^{s} l_i, where s is the number of variables and l_i is the number of levels for variable X_i. The user can control the number of levels for each variable: for an important variable, more levels may be specified, whereas for a less important variable, fewer levels are needed. Although it is easy to implement, the total number of combinations becomes too large when the numbers of levels of the variables are large. A Grid Design for three variables, each of which has three levels, is shown in Table 3.6. In the table, -1, 0 and 1 indicate, respectively, the lower bound, the mean and the upper bound of each variable. All the data points need to be transformed into the original space for practical application.
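A full factorial grid is straightforward to enumerate. The following sketch (Python, NumPy and the standard library only; not the thesis program) generates every combination of the prescribed coded levels, reproducing the 27-point layout of Table 3.6 for three variables at three levels each.

    import itertools
    import numpy as np

    def grid_design(levels_per_variable):
        """Full factorial grid: levels_per_variable is a list of 1-D arrays of coded levels."""
        return np.array(list(itertools.product(*levels_per_variable)))

    levels = [np.array([-1.0, 0.0, 1.0])] * 3
    design = grid_design(levels)
    print(design.shape)     # (27, 3) coded combinations, cf. Table 3.6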
Table 3.6 A Grid Design for three variables
    Sample     X1     X2     X3
    1          -1     -1     -1
    2          -1      0     -1
    3          -1      1     -1
    4          -1     -1      0
    5          -1      0      0
    6          -1      1      0
    7          -1     -1      1
    8          -1      0      1
    9          -1      1      1
    10          0     -1     -1
    11          0      0     -1
    12          0      1     -1
    13          0     -1      0
    14          0      0      0
    15          0      1      0
    16          0     -1      1
    17          0      0      1
    18          0      1      1
    19          1     -1     -1
    20          1      0     -1
    21          1      1     -1
    22          1     -1      0
    23          1      0      0
    24          1      1      0
    25          1     -1      1
    26          1      0      1
    27          1      1      1

3.3.2 Grid-based optimal design

An optimal design based on a grid is proposed as follows. For each variable, the number of levels l_i (i = 1, ..., s) is specified, so the unit hypercube is divided into rectangular blocks. In addition, the number of samples in each block is prescribed. An algorithm is devised to maximize the minimum distance between any two points in every block. This approach ensures that the entire design space is covered and that every block contains the pre-specified number of data points, which is the desired property of an experimental design. Such a design for two variables, each of which has five levels, is displayed in Figure 3.5.

Figure 3.5 A Grid-based Optimal Design for two variables

As before, the generated points lie in the unit hypercube I^s = (0, 1)^s, and they have to be transformed to the design space by Equation (3.3) for practical application.

3.3.3 Optimized Latin hypercube design

Since the data points in a Latin Hypercube Design may scatter non-uniformly in the design space, some points may cluster in certain regions. To overcome this shortcoming, it is desirable to optimize the Latin Hypercube Design so that neighboring data points are kept at least a minimum distance apart. To this end, an Optimized Latin Hypercube Design has been proposed in this study. For a unit hypercube of dimension s that contains n data points, there are n sub-cubes, each of which has a volume of 1/n; thus the side length of each sub-cube is (1/n)^{1/s}. This is the distance criterion adopted in this study for two adjacent points. As it is found that most good designs are symmetric or nearly symmetric, more data points than needed are generated through Latin Hypercube Design. Data points that are closer together than a certain limit are then merged. Finally, sorting is carried out to find the specified number of data points that have the largest inter-point distances. The pseudo-code for generating an Optimized Latin Hypercube Design is outlined as follows.

a) Generate a Latin Hypercube with n_c data points, more than the number of points needed n, with \alpha = n_c / n;
b) Do while (d_m < d_min)
       Do i = 1, n_c
           Calculate the inter-point distance d;
           If d < d_m, then d_m = d;
           If d < 0.5 d_min, merge these two points;
       End Do
       d_min = 1.05 d_min
       If (d_min > 0.75 (1/n)^{1/s}), then exit
   End Do
c) Sort the design points according to the inter-point distance and eliminate one of any two points that are too close. Repeat this process until the specified number of data points is obtained.
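The following sketch gives a simplified, runnable interpretation of the idea behind the pseudo-code above: generate an oversized Latin hypercube and then repeatedly discard one point of the closest pair until n points remain. It is a stand-in for the merge-and-sort procedure rather than a faithful implementation (in particular, the thinned point set is no longer a strict Latin hypercube), with NumPy as the only dependency.

    import numpy as np

    def latin_hypercube(n, s, rng):
        """Standard LHS: one random point in each of n strata per dimension."""
        u = (rng.random((n, s)) + np.arange(n)[:, None]) / n
        for j in range(s):
            u[:, j] = rng.permutation(u[:, j])
        return u

    def optimized_lhd(n, s, oversample=2.0, seed=0):
        rng = np.random.default_rng(seed)
        pts = latin_hypercube(int(oversample * n), s, rng)
        while len(pts) > n:
            d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
            np.fill_diagonal(d, np.inf)
            i, j = np.unravel_index(np.argmin(d), d.shape)      # closest pair
            # Keep the point of the pair that is otherwise more isolated, drop the other
            keep = i if np.partition(d[i], 1)[1] > np.partition(d[j], 1)[1] else j
            pts = np.delete(pts, j if keep == i else i, axis=0)
        return pts

    design = optimized_lhd(25, 2)
    print(design.shape)        # (25, 2), comparable in size to Table 3.7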
An Optimized Latin Hypercube Design for two variables with 25 samples is given in Table 3.7 and plotted in Figure 3.6.

Table 3.7 An Optimized Latin Hypercube Design for two variables
    Sample No.     X1           X2
    1              0.491425     0.413191
    2              0.034870     0.920369
    3              0.598705     0.870449
    4              0.356883     0.908647
    5              0.440257     0.231409
    6              0.896336     0.511703
    7              0.082090     0.309849
    8              0.206978     0.792904
    9              0.885799     0.304578
    10             0.135979     0.069002
    11             0.882969     0.757601
    12             0.335790     0.625134
    13             0.474810     0.072808
    14             0.708869     0.618938
    15             0.661865     0.393954
    16             0.917636     0.118527
    17             0.278858     0.381604
    18             0.284570     0.173540
    19             0.489135     0.647998
    20             0.754425     0.057946
    21             0.668308     0.211304
    22             0.785505     0.884717
    23             0.115413     0.541039
    24             0.965401     0.936214
    25             0.025384     0.703390

As before, the generated points lie in the unit hypercube I^s = (0, 1)^s, and they have to be transformed to the design space by Equation (3.3) for practical application.

Figure 3.6 An Optimized Latin Hypercube Design for two variables

3.4 Summary and Discussion

Computer simulations using response representation are used extensively in science, engineering and industry. The design of experiments constitutes an indispensable prerequisite, and the success of the computer simulation depends to a great extent on the appropriateness of the experimental design.

Classical experimental design methods are aimed at physical experiments, to build simple response surfaces in the form of linear or quadratic polynomials. Since physical experiments are costly and time-consuming, the designs are usually parsimonious to reduce the experimental effort. Central Composite Design is the most popular and almost standardized classical experimental design method; it is widely used for constructing second-order polynomial response surfaces.

In classical experimental design, the approximate function is parametric, namely linear or quadratic polynomials: a model-driven approach. In real life, such a simple model is incapable of modeling complex systems, so classical experimental design is not suitable for computer experiments. In recent years, space-filling designs have been proposed to answer the needs of computer simulation, where the mathematical model is not known in advance: a data-driven approach. Computer experiments allow the user to try different models. The model can be very flexible, linear or nonlinear; parametric, semi-parametric or non-parametric. The model should be adaptive, allowing a good fit to the available data and ensuring good generalization.

Latin Hypercube Design was the first approach introduced to address computer experimental design. It is a stratified Monte Carlo method, such that the levels of each variable are sampled with equal chance. Because the samples generated by Latin Hypercube Design are not uniformly distributed and may show congregations in some areas and voids elsewhere, other methods have been proposed to overcome this problem. Uniform Design is the application of number-theoretic methods in statistics; it provides an experimental design with good uniformity and equidistance. It is a very efficient design in which the number of samples is far smaller than that needed by other methods when the number of levels is large; nevertheless, it is usually generated by looking up a design table. Low discrepancy sequences are used for numerical integration, optimization and simulation; such a sequence is a set of points that asymptotically scatter uniformly over the unit hypercube.
They  63  CHAPTER 3 DESIGN OF COMPUTER EXPERIMENTS METHODOLOGY  seem to be promising tools in experiment design as regards the good uniformity of the data points they generate, especially when the sample size is large.  A Grid-based Optimal Design and an Optimized Latin Hypercube Design have been proposed in this thesis to improve design uniformity by optimization based on the max-min criterion (to maximize the minimum inter-point distance) or controlling minimum inter-point distance. In the former approach, the user can control the number of levels for each variable, and every block has the same number of data points. The latter method is based on progressively merging the data points whose distance is below a certain limit and then sorting the database for the required number of data points. The generated designs cover the entire design space and exhibit good uniformity.  64  CHAPTER 4 ARTIFICIAL NEURAL NETWORKS THEORY AND IMPLEMENTATION  CHAPTER 4 ARTIFICIAL NEURAL NETWORKS THEORY AND IMPLEMENTATION 4.1  Introduction  Artificial Neural Networks (ANN) are computational devices composed of many highly interconnected processing units. Each processing unit keeps some information locally and is able to perform some simple computations. The networks as a whole have the capability to respond to input stimuli and produce the corresponding response, and to adapt to the changing environment by learningfromexperience.  There are a number of artificial neural network paradigms. Among them, the most widely used are the Multilayer Backpropagation Neural Networks (Multilayer Perceptrons, MLP) and the Radial Basis Function Networks (RBFN). Generally speaking, the Multilayer Backpropagation Neural Networks encompass the following basic elements: (1) an input layer whose neurons receive inputsfromexternal sources, and send the signals to the neurons of the subsequent layer; (2) one or several hidden layers whose neurons receive inputs from neurons of the preceding layer, perform some calculations, and broadcast their outputs to the neurons of the next layer; (3) an output layer whose neurons process the inputs and produce the final responses; (4) the connecting weights between the neurons of the adjacent layers which embody the strengths of connection; (5) a transfer function (activation function) for processing the inputs to a neuron; (6) a learning rule employed to train the networks; (7) training data, the set of examples from which the networks learn the functional relationship between inputs and outputs. 65  CHAPTER 4 ARTIFICIAL NEURAL NETWORKS THEORY AND IMPLEMENTATION  An artificial neural network must be trained prior to practical application. The neural network is presented a set of examples, and from these examples it discovers the underlying mapping from the input space to the output space. A learning rule must be employed, and the weights are iteratively adjusted to reconstruct the presented examples.  After the neural  network has been well trained and tested, it has learned the functional dependencies and is able to respond to a unseen input pattern and predict the corresponding output. A welltrained neural network can perform either causal mapping (from causes to effects) or inverse mapping (form effects to possible causes).  Artificial neural networks possess some distinctive properties not found in conventional computational models. Traditional computing models are based on predefined rules (equations, formulas, etc.) that clearly specify the problem. 
The program follows an explicit step-by-step procedure to compute the desired outputs. This is feasible when the rules that define the problem are known in advance. In most cases, however, only observational data on the problem are available, while the underlying rules relating the input variables (independent variables, predictor variables) to the output variables (dependent variables, response variables) are either unknown or extremely difficult to discover. Under these circumstances, artificial neural networks exhibit their superiority, and they have the following favorable attributes:
(1) An inherently parallel structure, which can tackle complex problems through many massively connected simple processing units;
(2) The ability to learn and generalize from experience and examples;
(3) Robustness when dealing with noisy or incomplete input data;
(4) Adaptivity to new information.

Artificial neural networks have proven to be effective computational tools for a great variety of tasks such as pattern recognition, classification, signal processing, system identification, estimation and prediction, analysis and design, data compression, adaptive control and optimization. They are continuously finding new applications in a spectrum of diverse fields such as science, engineering, medicine, business and industry (Kumar and Topping, 1999). In the next two sections, the fundamentals of MLP and RBFN, as well as their implementations in this study, are discussed.

4.2 Multilayer Backpropagation Neural Networks

4.2.1 General

The Multilayer Backpropagation Neural Network is one of the best known and most widely used artificial neural network paradigms. The network is composed of an input layer, one or several hidden layers and one output layer of neurons. The neurons of adjacent layers are interconnected by weights that indicate the strength of connectivity. The input layer neurons do not perform any calculations; they simply receive signals from the outside environment. The presence of a series of hidden layers and the adoption of nonlinear transfer functions enable the network to learn complex nonlinear functional mappings between the input quantities and the output quantities. The network must be trained by presenting a set of training input-output pairs. This is achieved by carrying out an optimization that attempts to minimize the training error through weight updates. During operation of the network, the data flow from the input layer forward to the output layer. Each neuron computes the weighted sum of its inputs and subtracts a threshold; the result passes through a nonlinear transfer function, and the output of the neuron is produced. The neuron output is then sent to the neurons of the subsequent layer. This process is repeated for every following layer of neurons, and the outputs from the neurons of the last layer serve as the network predictions.

4.2.2 Artificial neuron model

The artificial neuron is the basic building block of the complex neural network system; its operation determines the function of the entire network. A schematic diagram of an artificial neuron is illustrated in Figure 4.1.

The neuron receives inputs X_1, X_2, ..., X_n from the neurons of the preceding layer, calculates the weighted sum of the inputs and subtracts a threshold \theta_j, i.e., \sum_{i=1}^{n} W_{ij} X_i - \theta_j. The neuron then passes this outcome through a nonlinear transfer function f(.)
and produces the neuron output as,

    Y_j = f( Σ_{i=1}^{n} W_ij X_i − θ_j )                                        (4.1)

where X_i, i = 1, 2, ..., n, are the inputs to neuron j; W_ij denotes the weight connecting neuron j and preceding neuron i; θ_j denotes the threshold of neuron j; Y_j denotes the output from neuron j; and f(·) is the nonlinear transfer function, usually the logistic function f(x) = 1.0/(1 + e^(−x)) or the hyperbolic tangent function f(x) = tanh(x) (Figure 4.2).

Figure 4.1 A schematic diagram of an artificial neuron

Figure 4.2 Transfer function

4.2.3 Network architecture

Generally, a Multilayer Backpropagation Neural Network is made of an input layer of neurons, one or several hidden layers of neurons and an output layer of neurons. The neighboring layers are fully interconnected by weights. A typical layout of a three-layer neural network is illustrated in Figure 4.3. The network shown consists of an input layer with five neurons, a hidden layer with three neurons and an output layer with two neurons. The input layer neurons receive information from the outside environment and transmit it to the neurons of the hidden layer; the hidden layer neurons process the incoming information and extract useful features to reconstruct the mapping from input space to output space; and the output layer neurons produce the network predictions to the outside world.

Figure 4.3 A typical Multilayer Backpropagation Neural Network

Prior to being applied for prediction, the neural network architecture (the number of hidden layers and the number of neurons in each layer) must be set up; then a set of training samples is used to train the network so that it learns the functional relationship between the input variables and the output variables. At the start of training, the weights are randomly set to some small real numbers. Then the examples are presented to the network and a forward pass operation is performed. Each neuron calculates the weighted sum of its inputs and transmits the result through a transfer function from which the neuron output is obtained. The
typical transfer function is the sigmoid function. The data flow forward layer by layer, and the outcomes from the output neurons serve as the network estimates. The discrepancies between the target outputs and the predicted outputs measure the training error. In order to achieve satisfactory estimation, the weights of the network must be adapted to minimize the training error, and this is done by a backward pass of the error, which is called error backpropagation. The network error is passed backwards from the output layer to the input layer, and the weights are adjusted based on some learning strategy to reduce the network error to an acceptable level. After the network is well trained, all the weights are frozen, and the network can be applied for prediction.

4.2.4 Training strategies

The training of artificial neural networks is an unconstrained optimization process: to find the optimal neural network parameters, the connecting weights, so that the network errors on the training examples are minimized. Any unconstrained optimization method can be used toward this end. The optimization methods can be categorized into two classes: deterministic methods and stochastic methods. The deterministic class comprises first-order methods such as gradient descent and second-order methods such as Newton's method. The stochastic methods are based on random search, such as simulated annealing and evolutionary algorithms. The backpropagation algorithm is discussed in the following section, while other training methods are briefly mentioned afterwards.

4.2.4.1 Backpropagation algorithm

The backpropagation learning method is an approximate gradient descent method. The amount of learning is proportional to the difference (delta) between the target output and the computed output, so it is also called the delta rule. Rumelhart et al (1986) extended it to multilayer feedforward neural networks and named it error backpropagation, or the generalized delta rule. The following discusses the backpropagation algorithm for training multilayer neural networks.

Consider a three-layer neural network as shown in Figure 4.3. Assume that the numbers of neurons in the input layer, hidden layer and output layer are I, J and K respectively. Let X_i^p be the p-th input to the i-th neuron of the input layer, I_j^p be the p-th net input to the j-th neuron of the hidden layer, H_j^p be the p-th output from the j-th neuron of the hidden layer, I_k^p be the p-th net input to the k-th neuron of the output layer, and Y_k^p be the p-th output from the k-th neuron of the output layer; hence, we have the following expressions,

    I_j^p = Σ_{i=1}^{I} W_ji X_i^p − θ_j                                         (4.2)

    H_j^p = f(I_j^p)                                                             (4.3)

    I_k^p = Σ_{j=1}^{J} W_kj H_j^p − θ_k                                         (4.4)

    Y_k^p = f(I_k^p)                                                             (4.5)

where W_ji denotes the weight connecting the i-th input neuron to the j-th hidden neuron; W_kj denotes the weight connecting the j-th hidden neuron to the k-th output neuron; θ_j and θ_k denote the neuron thresholds; and f(·) denotes the transfer function.

Suppose the p-th desired output for the k-th neuron of the output layer is T_k^p; then the sum of squared errors over all neurons of the output layer is defined as follows,

    E = (1/2) Σ_p Σ_k ( T_k^p − Y_k^p )²                                         (4.6)

The backpropagation algorithm minimizes the above error function by incrementally updating each weight in proportion to the instantaneous gradient of the error with respect to the corresponding weight.

For the weights connecting the hidden layer to the output layer,

    ΔW_kj = −η ∂E/∂W_kj                                                          (4.7)

where η represents the learning rate, which indicates the rate of change of the weights.

Using the chain rule of derivatives, we can rewrite the above equation as,

    ΔW_kj = η Σ_p ( T_k^p − Y_k^p ) f′(I_k^p) H_j^p                              (4.8)

Let δ_k^p = ( T_k^p − Y_k^p ) f′(I_k^p); then Equation (4.8) can be written as

    ΔW_kj = η Σ_p δ_k^p H_j^p                                                    (4.9)

For hidden neurons, there are no target outputs. In order to apply the same principle to the neurons in the hidden layers, the error must be backtracked to the hidden layer neurons. The weight update rule for the weights W_ji is again formulated as,

    ΔW_ji = −η ∂E/∂W_ji                                                          (4.10)

Using the chain rule of derivatives, the above equation can be rewritten as,

    ΔW_ji = η Σ_p δ_j^p X_i^p                                                    (4.11)

where δ_j^p = f′(I_j^p) Σ_k δ_k^p W_kj.

If there is more than one hidden layer, the same procedure can be applied to each hidden layer by backtracking the error. The selection of the learning rate is problem-dependent and requires experience and experimentation. If the learning rate is too large, we might overshoot the minimum; on the other hand, if it is too small, the convergence will be slow.
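As a concrete illustration of Equations (4.2)-(4.11), the following is a minimal NumPy sketch of one batch-mode weight update for a single-hidden-layer network with a logistic transfer function. It is not the code used in this study; the variable names are illustrative, and the biases b1 and b2 stand in for the negated thresholds (b = −θ).

    import numpy as np

    def logistic(x):
        return 1.0 / (1.0 + np.exp(-x))

    def batch_backprop_step(X, T, W1, b1, W2, b2, eta=0.1):
        """One batch-mode gradient-descent update following Eqs (4.2)-(4.11).
        X is a P x I matrix of inputs, T a P x K matrix of targets."""
        # Forward pass (Eqs 4.2-4.5)
        I_h = X @ W1 + b1            # net input to hidden neurons
        H = logistic(I_h)            # hidden outputs
        I_o = H @ W2 + b2            # net input to output neurons
        Y = logistic(I_o)            # network outputs
        E = 0.5 * np.sum((T - Y) ** 2)                 # Eq (4.6)
        # Backward pass; for the logistic function f'(x) = f(x)(1 - f(x))
        delta_o = (T - Y) * Y * (1.0 - Y)              # delta_k^p
        delta_h = (delta_o @ W2.T) * H * (1.0 - H)     # delta_j^p, Eq (4.11)
        # Batch weight updates (Eqs 4.9 and 4.11)
        W2 += eta * H.T @ delta_o
        b2 += eta * delta_o.sum(axis=0)
        W1 += eta * X.T @ delta_h
        b1 += eta * delta_h.sum(axis=0)
        return E

Calling this function repeatedly over many epochs, with the weights initialized to small random values, reproduces the classical batch training loop described above.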
Initially all weights should be set to small random values to prevent neuron saturation in the early training stage, which results in slow learning.

In the classical backpropagation method, the training is fast at the beginning, but in a flat region of the error surface the progress is very slow. In order to circumvent this drawback, a momentum term can be added,

    ΔW(g) = −η ∇E(g) + α ΔW(g − 1)                                               (4.12)

in which g indicates the iteration number and α is the momentum rate.

To be more effective, it is reasonable to adapt the learning rate and momentum rate during the training process (Hagan, 1996),

    η(g) = ξ η(g − 1)   if E(g) > ε E(g − 1)
    η(g) = σ η(g − 1)   if E(g) ≤ ε E(g − 1)                                     (4.13)

where ξ = 0.7, σ = 1.05 and ε = 1.04.

4.2.4.2 Other training algorithms

A number of fast training methods can be used to speed up the learning process (Shepherd, 1997). Among them, the conjugate gradient method and second-order methods such as the Quasi-Newton method, the Levenberg-Marquardt method and model-trust region strategies are worth mentioning. Compared to the first-order approaches, these methods need more calculations in each iteration. The fast training methods are not general, as their efficiency depends highly on the problem under consideration.

4.2.5 Performance evaluation

After the network has been trained, the network error is minimized to a certain low level. However, a low training error does not imply a low generalization error. To make sure that the network is well trained and has the capability to generalize, a subset of examples has to be presented to the network, with the network outputs compared to the target outputs. The testing errors indicate the extent of the generalization error when the network is put to use. If the testing errors are considered acceptable by the user, then the training stage is complete. The network topology is kept fixed, the network weights are frozen, and the network is ready for application.

4.2.6 Neural networks implementation in this study

4.2.6.1 Data preparation

Since a neural network generalizes by learning from the examples presented to it, its ability to generalize is strongly affected by the training data. Hence, generation of a sufficient number of training examples is extremely important. The training examples must cover the range from the lower bounds to the upper bounds of all input variables and be distributed uniformly over the whole design space. The data should be comprehensive, representing the particular features of the entire variable population.

If the input variables have a large dimensionality, it may be advantageous to apply statistical methods such as Principal Component Analysis or Factor Analysis to select a smaller set of important input variables. This will reduce the number of instances required for network training and accordingly the network complexity.

During training of a network using the backpropagation algorithm, the network weight change is proportional to the derivative of the mean square error with respect to the weight under consideration. Since the derivative tends to have a smaller value as the absolute value of the weight goes up, it is customary to scale the input variables into a small range, for instance [−1.0, 1.0], in order to speed up training.
A simple linear normalization function is used,

    U = −1.0 + 2.0 (X − X_l)/(X_u − X_l)                                         (4.14)

where U denotes the normalized value of input variable X; X_l denotes the lower bound of input variable X; and X_u denotes the upper bound of input variable X.

The normalization of the output variables depends on the range of the transfer function. The sigmoid transfer function has lower and upper output limits, which are 0.0 and 1.0 for the logistic function, or −1.0 and 1.0 for the hyperbolic tangent function. Usually a linear transformation works well, although a nonlinear transformation may be beneficial if the data are clustered. Thus, in this work the output variables are first transformed as follows,

    S = ln(Y)                                                                    (4.15)

where Y denotes the target value of the output variable.

Then S is normalized within the values of 0.1 to 0.9 (for the logistic function), or −0.9 to 0.9 (for the hyperbolic tangent function),

    V = 0.1 + 0.8 (S − ln Y_l)/(ln Y_u − ln Y_l)                                 (4.16a)

    V = −0.9 + 1.8 (S − ln Y_l)/(ln Y_u − ln Y_l)                                (4.16b)

where V is the normalized target value of output variable Y; Y_l denotes the lower bound of output variable Y; and Y_u denotes the upper bound of output variable Y.

4.2.6.2 Topology of the network

The architecture of the neural network must be determined before training. The number of input variables and the number of output variables are determined by the problem specifications. It is recommended to reduce the number of input variables based on experience and engineering judgment, as too many inputs will make the model complex as well as slow down the learning process. Though there are theorems that guarantee that multilayer feedforward neural networks with (at least) two hidden layers are capable of approximating any nonlinear function within a desired accuracy (Hornik, 1991), no general guidelines are available to select the appropriate topology of the networks. Generally, a trial-and-error approach is followed to find the best network structure: several networks with different architectures are trained and tested, and the best one, with the least test error, is used for application.

The hidden layer plays a crucial role in the neural network performance. It enables the network to model complicated nonlinear relationships and to capture the features underlying the inputs and the outputs. An appropriate number of neurons in the hidden layer is required. A network with too few neurons may not be able to capture the complex underlying relationship between inputs and outputs, and thus cannot generalize well to unseen data. On the other hand, too many neurons tend to result in over-fitting of the training data, i.e., the model is too complicated to be reliably inferred from a limited amount of training data; hence the network prediction will be poor in spite of a very low training error. There is no general rule for choosing the optimal number of neurons in the hidden layer. It is problem dependent and, to some extent, hinges on the amount and the quality of the training data. In short, the hidden layer must be large enough to model the complicated nonlinear mapping while small enough to ensure good generalization. In addition, the number of neurons should be small enough that the number of weights is fewer than the number of training instances. Some heuristic approaches can be applied to improve on the initial architecture. One hidden layer is usually adopted.
There are two such algorithms: the cascade algorithm and the pruning algorithm. In the cascade algorithm, we start with a simple architecture with only a few hidden neurons and evaluate the performance by parallel training and testing. If the training error is high, more hidden neurons are added, and the process is repeated until at some step the testing error begins to increase. In the pruning algorithm, we start from a complex architecture with many hidden neurons and evaluate the network performance by parallel training and testing. If over-fitting occurs, the number of neurons in the hidden layer is reduced until the training error is reduced to an acceptable level.

In this work, one hidden layer is adopted and the number of neurons in the hidden layer is determined by cross validation. There are two types of cross validation: leave-one-out cross validation and multi-fold cross validation (Cherkassky, 1998). The model selection procedure is outlined in the following pseudo code.

(1) Set the initial number of neurons to half the number of input variables, H_l = I/2;
(2) Set the maximum number of neurons to H_u = (N_train − 1)/(I + 2);
(3) Do H = H_l, H_u
        If H = H_l, initialize the weights to some small random values;
        If H ≠ H_l, initialize the weights connecting the newly added neuron to the input neurons and the output neurons;
        Divide the training dataset into five subsets;
        Do n = 1, 5
            Train the network using four of the five subsets and use the remaining one for testing;
            Calculate the training sum of squared errors SSE_train^n and the testing sum of squared errors SSE_test^n;
            Calculate the error criterion EC_n = w_1 SSE_train^n/(N_train − N_w) + w_2 SSE_test^n/N_test
        End Do
        Calculate the average EC^H = (1/5) Σ_{n=1}^{5} EC_n
    End Do
(4) Select the number of neurons as the H which has the minimal EC^H

where I denotes the number of input variables; SSE_train^n denotes the training sum of squared errors; SSE_test^n denotes the testing sum of squared errors; N_train denotes the number of training examples; N_test denotes the number of testing examples; N_w denotes the number of network weights; and w_1 and w_2 are weighting factors. The network with the minimum EC value is selected as the best network structure.
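The model selection loop above can be sketched in a few lines of Python. This is a minimal illustration only: train_net and sse are hypothetical helper routines (train a network with H hidden neurons; return a sum of squared errors), and the weighting factors w_a and w_b are placeholders, not values taken from this study.

    import numpy as np

    def select_hidden_neurons(X, T, train_net, sse, w_a=0.5, w_b=0.5):
        """Five-fold cross-validation over the hidden-layer size H."""
        P, I = X.shape
        K = T.shape[1]
        folds = np.array_split(np.random.permutation(P), 5)
        H_low = max(1, I // 2)
        H_up = max(H_low, (P - 1) // (I + 2))
        best_H, best_ec = H_low, np.inf
        for H in range(H_low, H_up + 1):
            N_w = (I + 1) * H + (H + 1) * K          # weights plus thresholds
            ec = []
            for k in range(5):
                test = folds[k]
                train = np.hstack([folds[i] for i in range(5) if i != k])
                net = train_net(X[train], T[train], H)
                ec.append(w_a * sse(net, X[train], T[train]) / max(len(train) - N_w, 1)
                          + w_b * sse(net, X[test], T[test]) / len(test))
            if np.mean(ec) < best_ec:
                best_H, best_ec = H, np.mean(ec)
        return best_H

The returned value of H is then used for the final training described in the next subsection.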
4.2.6.3 Training

There are two training modes, batch mode and pattern mode. In batch mode, the entire set of training examples is presented to the network and the network output for every input vector is computed; then the mean square error of the network is calculated and the network weights are adjusted backwards using error backpropagation. In pattern mode, every example is presented to the network and the corresponding network error is calculated; then the network weights are adjusted backwards based on the error from that example alone. Usually batch mode is preferred for the following reasons: (1) pattern mode training needs more weight updates and thus is slower, since the weights must be changed for every example; (2) in pattern mode, the ordering of the examples has an impact on the training, since examples presented at the end of training have more influence than those presented at the beginning and the network tends to "forget the past", whereas in batch mode the order of presentation does not make a difference; (3) batch mode provides a more accurate measurement of the weight changes on average.

The network has to be trained for many epochs (the presentation of the entire training dataset to the network is termed an epoch) before the network error decreases gradually to a stable value. The training time depends on such factors as the network topology (the number of hidden layers and the number of neurons in each layer), the training data, and the nature of the input-output relationship. The training can be stopped when the iteration limit is reached or when the training error has reached a predefined error limit.

In this study, batch mode training was adopted and the weights obtained from previous training were kept. The following procedure was employed for the final training, with the optimal number of neurons determined by the above model selection.

(1) Divide the training dataset into two subsets, namely, a training dataset (80% of the total data) and a testing dataset (20% of the total data);
(2) Train the network using the training dataset and evaluate its performance with the testing dataset;
(3) If the error for a certain testing case is larger than the predefined threshold, put it into the training dataset; meanwhile, the case with the smallest error in the training dataset is put into the testing dataset;
(4) Repeat the training process until both the training root mean square relative error (RMSRE) and the testing RMSRE are reduced to the acceptable limit, or the number of iterations is exhausted.

4.3 Radial Basis Function Networks

4.3.1. General

A Radial Basis Function Network (RBFN) is composed of a linear combination of radial basis functions, whose output is symmetric about its center and decays monotonically with the distance from the center. It is another neural network paradigm for function approximation and classification. A typical radial basis function is the Gaussian function,

    φ(x) = exp( −‖x − c‖² / r² )                                                 (4.17)

where x denotes the input variable vector; c denotes the center of the function (a vector); r denotes the radius of the function (a scalar); and ‖·‖ is a vector norm.

A radial basis function network comprises a linear combination of a set of radial basis functions, and it can be expressed in the following form,

    y(x) = Σ_{j=0}^{M} w_j φ_j(x)                                                (4.18)

where φ_0(x) = 1.0; M is the number of radial basis functions; w_j denotes the j-th weight; and φ_j(x) denotes the j-th radial basis function.

If a Gaussian function is selected as the radial basis function, then Equation (4.18) becomes,

    y(x) = w_0 + Σ_{j=1}^{M} w_j exp( −‖x − c_j‖² / r_j² )                       (4.19)

Figure 4.4 illustrates a radial basis function network with three layers, i.e., an input layer, a hidden layer and an output layer. The neurons in the input layer receive information from the outside world and transmit it to the hidden layer neurons, which perform a nonlinear transformation of the input vector by means of the radial basis functions. The outcomes from the hidden neurons are linearly combined with the coefficients (weights) and exported as the network output.

4.3.2. Radial basis function network training

The design of a Radial Basis Function Network involves selection of a proper radial basis function, determination of the number of hidden neurons, and network training. Usually the Gaussian function is chosen as the radial basis function, but others such as the multiquadric function, the Cauchy function, and thin-plate splines are used in some applications.
Once the type of radial basis function and the number of neurons are established, training is performed to determine the values of the network parameters. Recalling Equation (4.19), there are three parameters for each hidden neuron, namely, the center vector c_j, the radius r_j and the weight w_j (j = 0, 1, 2, ..., M), where M is the number of hidden neurons. There are two ways of training an RBFN, namely, supervised training and two-stage training.

Figure 4.4 A schematic Radial Basis Function Network

Supervised training of an RBFN is similar to the training of a Multilayer Perceptron. The values of the parameters are adjusted to minimize the sum-of-squares error,

    E = (1/2) Σ_{n=1}^{P} Σ_{k=1}^{K} ( t_k^n − y_k(x^n) )²                      (4.20)

where P is the number of samples; K is the number of outputs; t_k^n denotes the target value of the k-th output corresponding to the input vector x^n; and y_k(x^n) denotes the calculated value of the k-th output corresponding to the input vector x^n.

If gradient descent is employed as the training algorithm, the following update rules can be used for adjusting the values of the model parameters (Ghosh et al, 1992),

    Δw_kj = η_1 Σ_n ( t_k^n − y_k(x^n) ) φ_j(x^n)                                (4.21)

    Δc_j = η_2 Σ_n φ_j(x^n) (x^n − c_j)/r_j² Σ_k ( t_k^n − y_k(x^n) ) w_kj       (4.22)

    Δr_j = η_3 Σ_n φ_j(x^n) ‖x^n − c_j‖²/r_j³ Σ_k ( t_k^n − y_k(x^n) ) w_kj      (4.23)

where η_1, η_2 and η_3 are the learning rates.

Two-stage training involves unsupervised training of the radial basis function centers and radii, followed by training of the weights. The training of the centers is accomplished by K-means learning, a type of competitive learning in which the Euclidean distances between a certain input vector and all the centers are calculated, and the center with the minimal distance gets the privilege to update. The pseudo code for this algorithm is listed as follows,

(1) Initialize the centers by randomly assigning input vectors to them;
(2) Do n = 1, number of samples
        Do j = 1, M
            Calculate d_j = ‖x^n − c_j‖
        End Do
        Find the neuron j* which has the minimal d_j, and set n_j* = n_j* + 1;
        Update the center c_j*^new = (n_j* c_j*^old + x^n)/(n_j* + 1);
    End Do

The radii of the neurons play a very important role, as they determine the quality and smoothness of the mapping function. All neurons may share the same radius, or each neuron may have its own value. When the same value is used for all neurons, it can be set as a multiple of the average distance among the centers of all neurons. When each neuron has its own radius, the value is usually taken as 1.5 to 2 times the average distance between the neuron center and the centers of some of its nearest neighbors.

After the basis parameters (centers and radii) have been determined, the weight values can be determined by solving a system of linear equations. Based on the available training data, Equation (4.18) yields,

    Φ W = T                                                                      (4.24)

where Φ denotes the design matrix, with the element corresponding to the n-th sample and the j-th neuron given by φ_j(x^n); W denotes the weight matrix; and T denotes the target output matrix, with the element corresponding to the n-th sample and the k-th output variable given by t_k^n.

The above equation can be solved, using singular value decomposition to evaluate the pseudo-inverse, as

    W = (Φ^T Φ)^(−1) Φ^T T                                                       (4.25)
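As an illustration of the two-stage procedure, the following sketch fits a Gaussian RBFN with K-means centers, nearest-neighbour radii and least-squares output weights (Equations 4.24-4.25). It is a minimal example rather than the implementation used in this study; the function names and the factor of 2.0 used for the radii are assumptions made only for illustration.

    import numpy as np

    def train_rbfn(X, T, M, n_iter=20, seed=0):
        """Two-stage Gaussian RBFN training: K-means centers, nearest-neighbour
        radii, and least-squares output weights (solves Phi W = T, Eq. 4.24)."""
        rng = np.random.default_rng(seed)
        P = X.shape[0]
        centers = X[rng.choice(P, M, replace=False)].copy()
        for _ in range(n_iter):                                   # simple K-means refinement
            labels = np.argmin(((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1), axis=1)
            for j in range(M):
                if np.any(labels == j):
                    centers[j] = X[labels == j].mean(axis=0)
        d = np.sqrt(((centers[:, None, :] - centers[None, :, :]) ** 2).sum(-1))
        np.fill_diagonal(d, np.inf)
        radii = 2.0 * d.min(axis=1)                               # ~2x distance to nearest centre (assumed factor)
        Phi = np.hstack([np.ones((P, 1)),                         # constant basis phi_0 = 1
                         np.exp(-((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1) / radii ** 2)])
        W, *_ = np.linalg.lstsq(Phi, T, rcond=None)               # least squares via SVD (Eq. 4.25)
        return centers, radii, W

    def rbfn_predict(x, centers, radii, W):
        g = np.exp(-((x - centers) ** 2).sum(-1) / radii ** 2)
        return np.concatenate(([1.0], g)) @ W

Once trained, rbfn_predict evaluates Equation (4.19) for a new input vector x.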
4.3.3. Radial basis function networks implementation in this study

In this study, an RBFN was implemented in order to compare its performance with that of a multilayer perceptron (MLP). K-means learning was employed to train the neuron centers, and the radius of a given neuron was set to 1/√2 of the largest distance between it and some of its nearest neighbors. Gradient descent was adopted for training of the network weights.

RBFNs have found wide application in pattern recognition, signal processing, nonlinear system identification, medical diagnosis, etc., owing to their universal and smooth functional approximation capabilities. Compared to an MLP, an RBFN needs more memory for storing the centers, radii and weights, so it is more susceptible to the curse of dimensionality.

4.4 Summary and Discussion

Artificial intelligence and machine learning have witnessed great advancements and found applications in a wide range of fields. Multilayer feedforward neural networks and radial basis function networks have been discussed. The common feature of these computational learning models is that they are able to learn an underlying complex input-output functional relationship given a collection of training data, and they can adapt to a changing environment.

Multilayer perceptrons and radial basis function networks have been implemented in this work for seismic reliability analysis and design optimization. They will be employed as surrogate models in lieu of the more expensive and time-demanding computer code, since the cost of a precise solution is much higher compared to that of an approximate one whose imprecision is within the range of acceptability. In this way, the computational efficiency is greatly improved, which will be verified by subsequent case studies of seismic reliability analyses and applications in performance-based seismic design.

CHAPTER 5 PERFORMANCE-BASED SEISMIC DESIGN METHODOLOGY

5.1 Introduction

Earthquakes constitute one of the major natural hazards to society. The past seismic data show that strong earthquakes have resulted in great human casualties and large economic losses around the world. The failures of infrastructure, such as buildings, bridges, highways and dams, during severe ground motions were responsible for these fatalities and losses. Generally, major casualties were concentrated in densely populated regions with poorly built facilities vulnerable to earthquakes, while major economic losses were located in areas with modern industrial and commercial developments. Thanks to advancements in earthquake engineering over the past decades, human casualties have been reduced significantly during severe earthquakes. This demonstrates, in part, the success of modern seismic resistant design philosophy and engineering practice. On the other hand, earthquake resistant design still faces many challenges and difficulties. The 1989 Loma Prieta earthquake, the 1994 Northridge earthquake and the 1995 Kobe earthquake caused enormous economic losses and exposed further deficiencies in seismic resistant design and construction. The huge economic losses can be ascribed to the following reasons:

(1) Due to industrialization and urbanization, cities are expanding constantly as more people work and live in large cities.
Many of these densely populated cities face high seismic hazards, as they are usually situated close to the boundaries of tectonic plates or in regions with soft subsoil.

(2) Most of the existing buildings and other constructions in seismic zones were designed and constructed in conformance with past codes of practice, and are deemed incapable of withstanding the ground motions expected by modern seismic codes. All these substandard buildings and facilities need to be retrofitted up to the current standard. Unfortunately, owing to the lack of adequate funding, seismic rehabilitation of these buildings and structures may not be accomplished in time, leaving them as targets of the next earthquake.

(3) The basic philosophy of the modern seismic design code aims to accomplish the following goals: a) to resist a minor earthquake without damage; b) to resist a moderate earthquake without structural damage, though non-structural components may suffer some damage; c) to resist a strong earthquake without collapse to ensure life safety, though the structure and non-structural components may experience severe damage. However, the emphasis of the code is to provide life safety for the public by preventing collapse of structures under severe earthquakes, whereas economic losses due to property damage and business interruption are secondary. Though three objectives are stated in the code, only the life safety goal has been explicitly executed, and no specific procedures have been provided in the code for explicit evaluation of other performance aspects, such as the vulnerability of non-structural elements, contents, equipment, etc., which can cause greater economic losses than the structural damage, even for a moderate earthquake.

(4) The complexity of the structural behavior during a strong ground motion is not fully accounted for by the code approaches. To implement the seismic design, the code specifies a ground motion criterion, usually in the form of a seismic zone factor and a design spectrum. For simple and regular structures, an equivalent static force method is provided in which only the first vibration mode is allowed for. The elastic seismic forces are established based on the seismic zone, the structural importance and the site condition. The seismic design forces are determined empirically by taking advantage of the inelastic structural behavior and ductility to reduce the elastic seismic forces to a design level. For large and complex structures, the modal decomposition response spectrum method or nonlinear dynamic time history analysis is generally recommended, but only some guidelines are given in the code for reference (NBCC, 1995). Consequently, most buildings have been designed based on an oversimplified analysis approach, without elaborate modeling of the structural behavior and the effects of non-structural components. During a strong earthquake ground motion, the responses of the structure are strongly nonlinear owing to stiffness degradation as well as strength deterioration. Hence, a nonlinear dynamic analysis should be carried out to realistically capture the actual behavior of the structure. Though there are some commercial software packages available for this purpose, the analysis process is highly dependent on the analyst's capability to accurately model the structural system.
In addition, nonlinear dynamic time history analysis requires representative ground accelerograms for the site. Engineers routinely use historically recorded accelerograms without paying much attention to the seismic background. For some regions, historic recordings are available for ready use; nevertheless, they represent past earthquakes, and similar motions may never be recorded in the future. For a site without historical recordings, the adoption of accelerograms recorded in other regions can lead to large errors in structural response predictions.

(5) Even when the seismic resistant design is carried out based on state-of-the-art methods, the design goal can only be achieved through proper detailing and good quality assurance during construction. During the 1994 Northridge earthquake and the 1995 Kobe earthquake, some steel moment-resisting frames that are generally considered ductile systems underwent brittle damage, especially in the field-welded beam-column joints. These damages were caused by poor detailing of the connections, where the moments at the webs of beams could not be completely transmitted to the column, leading to stress concentration in the beam flanges. Some other failures were attributed to desultory inspection and poor workmanship during construction (Mazzolani and Gioncu, 2000).

(6) Structural maintenance throughout the service life plays an important role for structures that may be subjected to future seismic ground shaking. Some buildings are renovated during their service life for other uses with their seismic resistance sacrificed rather than strengthened, and could become easy targets of an upcoming earthquake.

(7) Seismic ground motion is one of the most important and least understood factors, due to the randomness and uncertainty involved. Obviously, the present code design spectrum cannot fully describe the expected seismic loading for a structure. In general, the code provisions give a macro-zonation at the country level, with a single design spectrum roughly corrected by considering the local site soil conditions. This approach is deficient in that the actual site seismic conditions, such as magnitude, distance to potential seismic sources, attenuation law and site soil stratification, are not clearly accounted for. Probabilistic seismic hazard assessment of the site is crucial for successful seismic resistant analysis and design. Moreover, the recent earthquakes drew engineers' attention to an important aspect of ground motions that was ignored in the past, i.e., the differences between ground motions from far-source and near-source earthquakes. The tremendous damage indicated that the earthquake action model employed in the present code, based on ground motions recorded in far-source regions, cannot be used to depict the earthquake effects in near-source regions.

Based on the lessons learned from the last major earthquakes, the structural engineering community has begun to reexamine seismic design philosophy and engineering practice, to identify the deficiencies in the current code of practice, and to propose procedures to remedy the drawbacks inherent in the present code. Performance-based seismic design has been put forward as the cornerstone of the next generation of codes.
SEAOC's Vision 2000: Performance-based Engineering of Buildings and BSSC's NEHRP FEMA 273: Guidelines for the Seismic Rehabilitation of Buildings have laid the foundation of performance-based seismic engineering by introducing multiple performance goals, design criteria associated with each performance level, and refined analytical procedures for performance evaluation. It is generally agreed that: (1) The traditional way of design focuses mainly on life safety, which is basically a single-level design. The protection of the integrity of building contents and the prevention of business interruption are equally important for some critical buildings. Hence, multiple design performance levels should be employed, based on the function and contents of the building after the strike of a severe earthquake, in order to reduce damage and maintain functional continuity. (2) Multiple levels of earthquake ground motion need to be adopted, accounting for the diverse hazards imposed on the structure throughout its service life. (3) Refined and sophisticated numerical procedures need to be developed to realistically evaluate the intricate responses of the structure. (4) Seismic design should be carried out in the framework of reliability-based design in order to reliably satisfy the multiple performance goals, taking into account the various uncertainties and randomness involved in the seismic design process. (5) Extensive experiments have to be undertaken to validate effective detailing, and rigorous supervision is to be dictated to guarantee the construction quality. (6) From an economic perspective, earthquake resistant design should be based on whole-life cost-benefit analysis, allowing for all major factors involved in the design, construction and maintenance of the building as well as the probable maximum losses due to failures of the building and its contents during an earthquake.

5.2 Performance-based Seismic Design

5.2.1. Multiple performance objectives in SEAOC Vision 2000

Performance-based seismic design implies that multiple target performance objectives are expected to be satisfied when the structure is subjected to earthquake ground motion of a certain intensity associated with each performance level. In SEAOC's Vision 2000: Performance-based Engineering of Buildings and BSSC's NEHRP FEMA 273: Guidelines for the Seismic Rehabilitation of Buildings, four performance levels have been proposed, as shown in Table 5.1. The two systems of performance levels are quite similar to each other, albeit different terminology is utilized. In NEHRP, acceptance criteria for the Life Safety and Collapse Prevention performance levels are defined at the component level. The performance levels recommended by SEAOC Vision 2000 for different types of buildings under distinct ground motion intensities are shown in Figure 5.1. Buildings are categorized according to their occupancy and use, namely, basic facilities, essential/hazardous facilities, and safety critical facilities. It is expected that, after a severe ground shaking, buildings for emergency response and essential public service should have a low probability of being damaged beyond the limit which affects their normal function, and that facilities which house hazardous materials such as poisonous chemicals or radioactive materials should have a lower damage level to prevent any disastrous releases.
For a moderate earthquake, all ordinary buildings should undergo only limited, user-acceptable damage to reduce economic losses and business interruptions.

Four distinct ground shaking intensities are specified in SEAOC Vision 2000, namely, a frequent earthquake with a return period of 43 years, an occasional earthquake with a return period of 72 years, a rare earthquake with a return period of 475 years, and a very rare earthquake with a return period of 970 years. The earthquake scenarios should reflect the probable seismic hazards of the site under consideration.

Though both SEAOC and NEHRP have made the first step toward the development of performance-based design procedures, there is still much to do before the approach is fully developed. Ground motion characteristics such as near-field velocity pulse effects and duration are not accounted for in the provisions. An explicit serviceability evaluation procedure must be developed to estimate structural damage. Also, an approach needs to be elaborated to evaluate the possible damage to non-structural members and building contents, for instance costly equipment, motion-sensitive instruments, hazardous substance containers, etc. More refined and sophisticated analytical models need to be established in order to realistically simulate the structural behavior and its surroundings throughout strong ground shaking. Apart from performance evaluation at the component level, performance acceptance criteria are also required at the system level, considering the behavior of the structure as a whole and the integrity of the structure during strong ground motion.

Figure 5.1 SEAOC Vision 2000 performance levels (SEAOC, 1995)

Table 5.1 Performance level definitions in NEHRP and SEAOC (Hamburger, 1996)

NEHRP: Operational; SEAOC: Fully Operational
    No significant damage has occurred to structural and non-structural components. The building is suitable for normal intended occupancy and use.

NEHRP: Immediate Occupancy; SEAOC: Functional
    Only very minor damage has occurred. The building retains its original stiffness and strength. Nonstructural components operate, and the building is available for normal use. Repairs, if required, may be instituted at the convenience of the building users. The risk of life-threatening injury during the earthquake is negligible.

NEHRP: Life Safety; SEAOC: Life Safety
    Only minor structural damage has occurred. The structure retains nearly all of its original stiffness and strength. Nonstructural components are secured and, if utilities are available, most would function. Life-safety systems are operable. Repairs may be instituted at the convenience of the building users. The risk of life-threatening injury during the earthquake is very low.

NEHRP: Collapse Prevention; SEAOC: Near Collapse
    Significant structural and nonstructural damage has occurred. The building has lost a significant amount of its original stiffness, but retains some lateral strength and margin against collapse. Nonstructural components are secure, but may not operate. The building may not be safe to occupy until repaired. The risk of life-threatening injury during the earthquake is low.

5.2.2. Performance-based seismic design criteria in this study

At present, most seismic codes merely consider one design earthquake: a rare, severe earthquake with a return period of 475 years. Though three performance levels are specified, only one performance level, life safety, is explicitly executed, and even this performance level is poorly implemented. A standard design spectrum is defined for the whole country, and the elastic base shear is calculated by adjusting the spectrum on the basis of the local peak ground acceleration, structural importance and site soil conditions. The design base shear is calculated as a fraction, 1/R, of the expected elastic base shear. The R factor is determined based on engineering judgment and observations of structural performance during past earthquakes. In this way, the code tries to achieve the life safety level by qualitatively
limiting damage to structural components. Hence, no attempt has been made to rigorously assess the safety margin against failure by this approach. After the last earthquakes, it was generally recognized that damage control should be included as an integral part of seismic design. Since it is economically unjustifiable and technically infeasible to design all structures to withstand a severe earthquake without any damage, it is agreed that an ordinary structure should be able to resist a moderate earthquake with user-acceptable damage to non-structural elements as well as building contents, and with reparable structural damage. In the event of a severe earthquake, the structure should be able to dissipate the input seismic energy by inelastic deformations; though the structural damage may be irreparable, its integrity must be maintained to ensure the life safety of the occupants through collapse prevention. In case the owner would like to pay extra for enhanced performance beyond the minimum code requirements, for continued operation of the building even after a strong earthquake, the engineer should provide an optimal design with higher reliability in performance but lower overall cost. In this study, different performance objectives will be defined corresponding to different levels of ground motion, and a framework for performance-based design is proposed.

5.2.2.1 Multiple seismic hazard levels

Earthquake ground motion is the most important factor affecting seismic design, since it involves many uncertainties and much randomness. A successful seismic design hinges largely on the appropriate characterization of the earthquake motions. After the last earthquakes, it is commonly agreed that multiple seismic hazards should be considered for performance-based design. How many levels of earthquake need to be allowed for, and the characteristics of the earthquake motions, depend on the location of the site relative to all potential sources of earthquakes, the seismic features of each source, and the travel path geology from each source to the site, as well as the site soil stratification and properties. The characterization of the possible seismic hazards at the site can be rationally achieved in the framework of probabilistic seismic risk assessment.

One phenomenon worthy of attention is the recently observed difference between near-source ground motions and far-source ground motions.
The design methods in most current seismic codes are based on recordings from far-field earthquakes that cannot be used to properly delineate near-source motions, which is one of the reasons for the tremendous economic losses in recent near-field earthquakes. The major differences between them are as follows (Mazzolani and Gioncu, 2000): (1) near-source ground motions are of short duration and pulse-like in acceleration, velocity and displacement, while far-source ground motions are cyclic with longer duration; (2) near-source ground motions have a significant vertical component, while horizontal components dominate far-source ground motions; (3) near-source ground motions have very high velocities; (4) the effect of directionality of wave propagation is substantial for near-source ground motions, while local soil stratification has a great influence on far-source ground motions.

A seismic region around the site can be subjected to ground shakings of different intensities: low, moderate or severe. A low earthquake occurs frequently, but it will cause no structural damage or only slight non-structural damage. A moderate earthquake happens occasionally, and it may give rise to moderate or even heavy non-structural damage as well as reparable structural damage. A severe earthquake rarely takes place; nevertheless, its occurrence may result in heavy structural damage or even collapse. The seismic hazard levels are disputable, as they depend on the site seismicity as well as other socio-economic factors. In contrast to the SEAOC definitions, the earthquake hazard levels mentioned in Mazzolani and Gioncu (2000) are: (1) frequent, with a return period of 8-10 years; (2) occasional, with a return period of 20-30 years; (3) rare, with a return period of 450 years; (4) very rare, with a return period of over 970 years. It seems that the frequent earthquake and the occasional earthquake defined in SEAOC are not very distinct. When a probabilistic seismic risk assessment of the site is not carried out, four levels of earthquakes are suggested in this study: (1) a frequent minor earthquake, with probability of exceedance of 90% in 50 years (return period 22 years); (2) an occasional moderate earthquake, with probability of exceedance of 50% in 50 years (return period 73 years); (3) a rare major earthquake, with probability of exceedance of 10% in 50 years (return period 475 years); (4) a very rare severe earthquake, with probability of exceedance of 5% in 50 years (return period 970 years). For a very important structure, for example a nuclear power plant, a maximum probable earthquake is defined as an earthquake with probability of exceedance of 2% in 50 years, with a corresponding return period of 2475 years; this level of earthquake will normally only be used for the design of essential or critical structures. If a probabilistic seismic hazard assessment is performed, ground motions corresponding to earthquake scenarios specific and appropriate to the site will be considered and used for design.
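For reference, the probabilities of exceedance and return periods quoted above are linked through the common assumption that earthquake occurrences follow a Poisson process (an assumption stated here only to make the arithmetic explicit). Under that model, an event with return period T has probability of exceedance p in an exposure time t of

    p = 1 − exp(−t/T),   or equivalently   T = −t / ln(1 − p)

For t = 50 years this gives T = −50/ln(0.90) ≈ 475 years for p = 10%, T = −50/ln(0.98) ≈ 2475 years for p = 2%, T = −50/ln(0.50) ≈ 72 years for p = 50%, and T = −50/ln(0.10) ≈ 22 years for p = 90%, consistent with the rounded values listed above.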
5.2.2.2 Multiple performance objectives

Performance-based seismic design should be carried out by transparently satisfying multiple performance levels corresponding to multiple hazards, with the corresponding target reliabilities, based on accurately modeling the structural responses using sophisticated numerical procedures. The target reliability indices depend on the occupancy, the importance and the consequence of non-performance of the structure after an earthquake. They can be obtained by back-calculating the reliability of existing structures. The ISO "General Principles on Reliability for Structures" should be referred to in determining the proper values, taking into account the owner's requirements and the economic impacts. The target reliability indices mentioned later on are for illustration purposes only. In SEAOC Vision 2000, the inter-story drift ratio (the ratio of the difference of lateral displacements of adjacent floors to the story height) is adopted as the performance criterion. The recommended limits for the four performance objectives are, respectively, 0.2% (Fully Operational), 0.5% (Operational), 1.5% (Life Safety) and 2.5% (Near Collapse). There are no universally accepted limit values, and they must be determined according to the building function and the owner's demands. Aside from the story drift ratio, other criteria should be adopted for performance evaluation. Four performance objectives are suggested in this study.

(1) Serviceability: for a frequent minor earthquake, the structure remains in the elastic range, so the non-structural components are checked for possible damage and the building contents are examined for normal functioning. An inter-story drift ratio limit of 1/500 may be adopted, with a target reliability index between 1.5 and 2.5. The maximum floor acceleration may need to be checked for some motion-sensitive instruments.

(2) Capability: for an occasional moderate earthquake, the structure may work in the elasto-plastic state. The structure is assumed to suffer reparable damage, with non-structural elements moderately damaged. The yield strengths of major structural components are to be examined for evaluation of local structural damage. An inter-story drift ratio limit of 1/200 may be used. The target reliability index for this limit state can be set as 2.0-3.0.

(3) Stability: for a rare strong earthquake, the structure is presumed to work at the ultimate state. The structure may suffer moderate damage but still maintains its integrity. The ultimate strengths of major structural components are to be investigated for structural stability. An inter-story drift ratio of 1/100 may be set as the limit. The target reliability index for this limit state can be set as 2.5-3.5.

(4) Survivability: for a very rare, severe earthquake, the structure is at the edge of collapse, with a kinematical mechanism formed. The structure is heavily damaged and needs demolition afterwards. The drift ratio limit of the entire structure could be 1/50. In order to guarantee the safety of the occupants, the ductility of the whole structure is checked for collapse prevention. The target reliability index for this limit state could be specified as 3.0-4.0.

The performance levels, earthquake hazard levels, story drift limits and the suggested target reliability indices are summarized in Table 5.2.

Table 5.2 Performance objectives

    Performance level    Probability of exceedance in 50 years    Story drift ratio limit    Target reliability index
    Serviceability       90%                                      1/500                      1.5-2.5
    Capability           50%                                      1/200                      2.0-3.0
    Stability            10%                                      1/100                      2.5-3.5
    Survivability        5%                                       1/50                       3.0-4.0
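For interpretation of these targets, note that a reliability index β corresponds to a notional failure probability P_f = Φ(−β), where Φ is the standard normal cumulative distribution function; this relation is the usual convention in structural reliability and is recalled here only for reference. The indices in Table 5.2 therefore translate approximately to

    β = 1.5 → P_f ≈ 6.7 × 10⁻²,   β = 2.5 → P_f ≈ 6.2 × 10⁻³,
    β = 3.5 → P_f ≈ 2.3 × 10⁻⁴,   β = 4.0 → P_f ≈ 3.2 × 10⁻⁵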
5.2.2.3 Structural analysis approach

Time history analysis is the only method that is able to reveal the actual behavior of the structure during an earthquake. A mechanical model of the structure is built with realistic modeling of material nonlinear hysteresis, member connections, non-structural component effects and soil-structure interaction. Historic acceleration recordings or simulated ground motions that are deemed representative of future earthquake motions are selected, based on the site seismic risk assessment. The structural responses are obtained by numerical integration of the equations of motion. The difficulty of the method lies in appropriately modeling the structure and its environment, as well as in the choice of proper earthquake accelerograms. As earthquake motion involves many uncertainties, a spectrum of accelerograms should be selected, considering a variety of possible earthquake scenarios. Although time history analysis is not used widely in current engineering practice, it is believed that it will be indispensable in the upcoming years, as performance-based seismic design becomes the backbone of the next generation of codes.

5.2.2.4 Seismic design criteria

Four seismic design criteria are employed, corresponding to the foregoing four performance objectives.

(1) Stiffness design criterion

To control damage to non-structural components and building contents, and to ascertain that the structure works in the elastic range under a minor earthquake, the rigidity of the structure must be checked. The inter-story drift ratio is usually adopted for this purpose, with the limit state function in the following form,

    G = θ_0 − θ                                                                  (5.1)

where θ_0 denotes the inter-story drift ratio limit and θ denotes the computed maximum inter-story drift ratio. There is no fixed value of the inter-story drift ratio limit, as it depends on the nature of the non-structural elements and the requirements of the user. A commonly accepted value varies between 0.1% and 0.3%.

For some delicate or precise instruments housed in the building, it may be necessary to check the maximum floor acceleration or velocity to assure their normal functioning. The corresponding limit state functions can be expressed in the following forms,

    G = A_0 − A_max                                                              (5.2)

    G = V_0 − V_max                                                              (5.3)

where A_0 denotes the acceptable acceleration at the floor level; A_max denotes the computed maximum floor acceleration; V_0 denotes the acceptable velocity at the floor level; and V_max denotes the computed maximum floor velocity. The acceleration limit or the velocity limit depends on the requirements of the building contents, which can be obtained from the manufacturers. For this level of serviceability limit state, the target reliability index could be set at about 1.5-2.5, as the consequence of failure is not disastrous.
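To make the use of such limit state functions concrete, the following sketch estimates the failure probability and reliability index for the drift limit state of Equation (5.1) by crude Monte Carlo simulation. It is illustrative only: sample_drift is a hypothetical routine returning realizations of the maximum inter-story drift ratio (for example from a neural network surrogate of the nonlinear dynamic analysis, as developed in this thesis), and the lognormal stand-in used in the example is an arbitrary assumption, not a calibrated response model.

    import numpy as np
    from scipy.stats import norm

    def drift_reliability(theta_limit, sample_drift, n_sim=100_000):
        """Monte Carlo estimate of the serviceability limit state G = theta_0 - theta
        (Eq. 5.1). sample_drift(n) returns n realizations of the maximum drift ratio."""
        theta = sample_drift(n_sim)
        pf = np.mean(theta > theta_limit)          # P[G < 0]
        beta = -norm.ppf(pf) if 0.0 < pf < 1.0 else np.inf
        return pf, beta

    # Example with an assumed lognormal stand-in for the drift response
    rng = np.random.default_rng(1)
    pf, beta = drift_reliability(1/500, lambda n: rng.lognormal(np.log(0.0012), 0.5, n))
    print(pf, beta)

The same pattern applies to the acceleration and velocity limit states of Equations (5.2) and (5.3), with the drift samples replaced by the corresponding floor response quantities.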
(2) Yield strength design criterion

Strength design plays a pivotal role in traditional structural design, and it will continue to be a major part of performance-based seismic design. Under the action of an occasional moderate earthquake, a structure may enter the inelastic range; all potential plastic hinges need to be examined for their probable yield strengths, and non-hinge zones checked to preclude the formation of unanticipated hinges. The limit state function can be formulated as,

    G = M_y − M                                                                  (5.4)

where M_y is the yield moment capacity of the member and M is the computed moment. The possible over-strength of the members should be considered for a realistic representation of the yield strengths.

The stability of columns and of beams with thin-walled sections must be checked for possible local buckling.

In addition, the story yield ratio should be checked to prevent a weak story mechanism, the limit state for which can be expressed as,

    G = 1.0 − V_max^i / V_y^i                                                    (5.5)

in which V_max^i is the computed maximum i-th story shear force and V_y^i is the i-th story shear capacity.

(3) Ultimate strength design criterion

The ultimate strengths of major structural components need to be evaluated as in conventional design. The principle of capacity design should be employed to establish a viable plastic structural failure mechanism. On one hand, the demand on flexural strengths can be realistically evaluated by nonlinear dynamic time history analysis of the structure subjected to possible earthquake ground excitations. On the other hand, the bending strengths at the ends of members are estimated by assuming plastic behavior at those sections, with over-strength considered due to variability of material properties and hardening effects. The shear strengths must also be checked to prevent any premature brittle failure, and the stability of columns and beams must be checked to preclude global buckling. In this case, the limit state functions can be written as,

    G = M_p − M                                                                  (5.6)

    G = V_p − V                                                                  (5.7)

    G = M_b − M                                                                  (5.8)

    G = N_b − N                                                                  (5.9)

where M_p denotes the probable moment capacity; M denotes the computed moment demand; V_p denotes the shear strength capacity; V denotes the computed shear force; M_b denotes the moment capacity of a beam to prevent buckling; N_b denotes the buckling capacity of a column; and N denotes the applied axial load on the column.

For this level of capability limit state, the target reliability index could be set between 2.5 and 3.5. Beams should be allocated a smaller target reliability compared to columns and joints. In any circumstance, the limit states regarding shear strength and buckling should have a higher reliability relative to bending strength, as these failures tend to be sudden and without warning.

(4) Ductility design criterion

Ductility is the ability of the structure to dissipate the input earthquake energy by undergoing large inelastic deformations, without significant strength degradation, at some predefined locations. During a strong earthquake, life safety is assured in a well-designed structure, as a kinematical mechanism will be formed to prevent collapse. Both the global displacement ductility and the local member rotational ductility need to be assessed for a transparent ductile design. The local ductility is checked to prevent plastic deformation from being concentrated in a few members; the structural global ductility is the manifestation and collective behavior of the members' local ductility. The global ductility demand and the local rotational ductility demands can be computed by inelastic dynamic time history analysis. The global displacement ductility capacity can be evaluated by push-over analysis based on the assumption that a global kinematical mechanism is formed, with plastic hinges developed at the ends of beams and the bottoms of columns, and is defined as,

    μ_Δ = Δ_u / Δ_y                                                              (5.10)

where
μΔ = the global displacement ductility; Δu = the roof displacement when the kinematic mechanism forms; and Δy = the roof displacement when the first beam plastic hinge forms.

Member local rotational ductility can be evaluated based on the ultimate rotational capacity of the plastic hinge. The rotational ductility is calculated from the moment-rotation curve assuming elastic-perfectly-plastic behavior, and is defined as follows,

μθ = θu / θy    (5.11)

where μθ = the local rotational ductility; θu = the ultimate plastic rotation; and θy = the rotation at the yield moment.

Both the local rotational ductility at the component level and the global displacement ductility at the system level should be examined in order to provide the structure with sufficient ductility. The limit states take the following forms,

G = μΔ^c - μΔ    (5.12)
G = μθ^c - μθ    (5.13)

where μΔ^c denotes the structural displacement ductility capacity; μΔ denotes the required structural displacement ductility; μθ^c denotes the member rotational ductility capacity; and μθ denotes the required member rotational ductility.

There is no widely accepted global ductility limit or local ductility limit, since they depend on the structural type and configuration, material, member connections, foundation type, site conditions, etc. For this level of survivability limit state, the target reliability index should be set at around 3.0-4.0 or even higher, as the consequence of failure due to overall collapse would be catastrophic.

5.3 Implementation of Performance-based Seismic Design

5.3.1 Reliability and performance-based seismic design

Bertero and Bertero (2002) define performance-based seismic design as "consisting of selection of design criteria and structural systems such that at the specified levels of ground motion and with defined levels of reliability, the structure will not be damaged beyond certain limiting states or other useful limits". In other words, the essence of performance-based design is to control the damage caused by different levels of hazard by controlling the structural responses, with defined levels of reliability. As a result, performance-based design encompasses the proper determination of multi-level earthquake ground motions corresponding to the site seismic risks, and the definition of multiple performance criteria associated with each level of seismic hazard according to the minimum code requirements as well as the enhanced requirements specified by the owner. As such, the designer must employ multi-level design criteria and execute elaborate structural analyses to realistically evaluate the structural performance, so that all the performance objectives are satisfied with the specified confidence and, accordingly, the whole-life cost is minimized. All of this can only be accomplished in the framework of reliability-based optimum design, in view of the large number of uncertainties involved in the entire design process.

Li and Foschi (1998) introduced a general Inverse Reliability Method for the estimation of design parameters corresponding to given target reliabilities with multiple constraints. The approach has been applied successfully to solving inverse reliability problems in earthquake engineering and offshore engineering.
Foschi et al. (2002) proposed a computational approach for the efficient implementation of performance-based design, and several case studies were presented to illustrate its applicability. Bertero and Bertero (2002) emphasized that performance-based seismic design should be carried out in a probabilistic design format. A reliability-based framework for performance-based design was put forward in Wen (2001), where minimum lifecycle cost criteria were adopted to determine the target reliability for structures under multiple natural hazards.

The successful implementation of performance-based design hinges on satisfying the code and the owner's requirements through the fulfillment of multiple performance objectives under multiple levels of hazard with minimum cost. Owing to the large number of uncertainties involved in the design process, reliability assessment of the structural design is deemed indispensable; therefore, performance-based design should be carried out in the format of reliability-based design, with the solution obtained by optimization.

5.3.2 Performance-based seismic design using neural networks

In this study, performance-based seismic design will be formulated in the context of reliability-based design, and the design parameters are to be calculated by structural optimization. Neural networks will be employed to expedite the optimization process by improving computational efficiency.

Besides the basic requirements of the seismic code, the designer must also satisfy the requirements specified by the owner with predefined target reliability levels. Higher reliability implies a higher initial cost, a lower maintenance cost and a lower expected damage cost for the same earthquake motion. This can be achieved by maximizing the expected benefit and minimizing the expected whole-life cost. In the absence of benefit, the criterion of minimum lifecycle cost should be adopted. After the owner and the designer have reached an agreement on the multi-level performance objectives and the corresponding target reliabilities, the design can be formulated as a structural optimization problem in the following form:

Find the design parameter vector Xd to minimize the objective function

V = Σk ( βk^T - βk(Xd) )² + C(Xd)    (5.14a)

subject to

Xl ≤ Xd ≤ Xu    (5.14b)

in which βk^T is the target reliability index corresponding to the k-th performance objective; βk(Xd) is the calculated reliability index corresponding to the k-th performance objective, associated with the design parameter vector Xd; C(Xd) is a cost function defined in terms of the design parameter vector Xd; Xl is the vector of lower bounds of the design parameter vector Xd; and Xu is the vector of upper bounds of the design parameter vector Xd.

Structural optimization is applied to calculate the optimal design parameter vector through minimization of the objective function. The optimization can be effected by executing an optimization program linked to a reliability analysis sub-program and a structural analysis sub-program. Such software is not available on the market. Even if such a program were ready to use, a large and complex structure under strong seismic excitation would require, during the optimization process, a large number of structural analyses and reliability assessments.
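A minimal sketch of the optimization problem in Eqs. (5.14a)-(5.14b) is given below, assuming the squared-deviation penalty written above. The functions beta_calculated and cost are toy stand-ins for the reliability analyses and the cost model, and differential evolution is used only as a convenient, readily available gradient-free optimizer; it is not prescribed by this methodology.

```python
import numpy as np
from scipy.optimize import differential_evolution

beta_target = np.array([1.5, 2.0, 3.0, 3.5])   # assumed targets, one per objective

def beta_calculated(x):
    # Toy stand-in for the reliability analyses beta_k(X_d); in the proposed
    # procedure these would be driven by neural-network response surrogates.
    return np.array([0.8, 1.1, 1.6, 2.0]) * np.log1p(x.sum())

def cost(x):
    # Toy stand-in for the cost function C(X_d).
    return 0.05 * x.sum()

def objective(x):
    # Eq. (5.14a): squared shortfall from the targets plus cost.
    return np.sum((beta_target - beta_calculated(x)) ** 2) + cost(x)

bounds = [(0.3, 2.5)] * 3                      # Eq. (5.14b): X_l <= X_d <= X_u
result = differential_evolution(objective, bounds, seed=0, tol=1e-8)
print("optimal design parameters:", result.x, " objective:", result.fun)
```

Any of the gradient-free search tools discussed in the next section could be substituted for the optimizer call without changing the structure of the problem.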
To reduce the computing effort and to improve computational efficiency, use will be made of approximation models that can provide acceptable accuracy and at the same time reduce the computational demand. Neural networks will be employed as a surrogate for the expensive and time-demanding structural analyses needed in the reliability assessment and in the optimization for performance-based seismic design.

The stochastic structural response is a major concern in performance-based design. In this study, the probabilistic response is estimated by fitting a series of its deterministic values to a proper probability distribution. A database of input variables is generated first. For a given input variable combination, the nonlinear dynamic structural response is calculated using the program CANNY (Li, 1996) for a set of ground motion accelerations that are characterized by the common ground parameters. Then the probability distribution of the response is found by fitting the response data to an appropriate distribution (Lognormal or Extreme Value Type I), with the mean and standard deviation obtained. This process is repeated for all the combinations in the input variable database. Finally, there is one response database for the mean value and one for the standard deviation. Two neural networks will be trained, one for the mean value and the other for the standard deviation, and they will be used for seismic reliability analysis.

Performance-based design is formulated above as a constrained optimization problem, and can in general be solved by any constrained optimization approach. Since the structural responses are very complicated for a strong ground excitation, with peaks and troughs due to resonance, gradient-based methods may encounter convergence difficulties or even diverge. Gradient-free algorithms such as simulated annealing (Kirkpatrick et al., 1983), genetic algorithms (Goldberg, 1989), the trust region method (Byrd et al., 1987, 1988; Conn et al., 2000), Tabu search (Glover, 1989, 1990; Glover and Laguna, 1993, 1997; Corne et al., 1999; Karaboga and Pham, 1999), the particle swarm algorithm (Eberhart and Kennedy, 1995; Kennedy and Eberhart, 1995; Kennedy et al., 2001), or other random search tools may be more suitable in this circumstance.

5.4 Summary and Discussion

Performance-based design has been established as the mainstream approach for structural design in seismic regions, following the structural engineering community's reexamination of the philosophy and practice of current seismic design prompted by the colossal economic losses in recent earthquakes. SEAOC Vision 2000 and FEMA 273 have laid the groundwork for performance-based design by specifying multiple levels of hazard and multiple performance objectives, as well as by presenting refined numerical analysis procedures. However, much remains to be done before performance-based design can be implemented in routine seismic design. The realistic determination of the characteristics of future earthquake ground motions on the basis of the seismic hazard at the site is a pivotal first step toward a successful seismic design, with the participation of geotechnical engineers and seismologists essential and conducive. The multiple performance objectives and the corresponding target reliability levels subject to different hazards have to be decided collectively by the owner, the structural engineer and the municipal authority.
The structural design should be carried out based on realistic modeling of the structure and its environment. A sophisticated structural model has to be developed, reflecting the nonlinear behavior of the members, connections and soil-structure interaction, as well as the effects of non-structural components and building contents. Nonlinear dynamic time history analysis should be resorted to so that the real responses of the structure are calculated. The structural detailing must rely on well-proven engineering practice or extensive experimental verification. Strict construction quality control and rigorous inspection throughout the whole process are necessary for the realization of the design. Effective maintenance and timely rehabilitation of the structure during its service life will be required to keep its performance up to standard and to reduce time-dependent risks.

Many uncertainties are involved throughout the design process with regard to ground motion, material properties, structural configuration and detailing, the analytical model, construction and maintenance. It is critical to consider all the major uncertainties in order to guarantee that the design objectives are met with a certain confidence. Hence, performance-based design should be implemented in the context of reliability-based design, with the design parameters computed by optimization.

A performance-based design framework has been proposed in this study. Four performance objectives are described, corresponding to four levels of seismic hazard. Four design criteria are discussed, regarding structural stiffness, yield strength, ultimate strength and ductility. Because of the complicated responses of the structure when subjected to earthquake motions, reliability assessment and optimal design are generally computationally intensive and time demanding. In order to improve computational efficiency and reduce the work burden, neural networks are applied to find a mapping of the input-output functional relationship, and are employed as the surrogate for the computer code in the design process, making a computationally prohibitive task tractable and executable.

Performance objectives are accomplished by optimization of an objective function that may include cost, making a design technically dependable and economically beneficial. It is expected that greater structural reliability will be achieved with performance-based design for structures with various performance requirements, and that the economic value of the buildings and their contents will be protected. With progressive development of this "controlled design" procedure, it is envisioned that performance-based design will be applied widely in the near future for the rehabilitation of existing structures and the creation of new buildings, with design implemented in a cost-effective manner and risks well controlled.

CHAPTER 6  SEISMIC RELIABILITY ANALYSES: CASE STUDIES

6.1 Introduction

Five case studies are now presented for seismic reliability assessment of structures.

(1) A low-rise, two-story reinforced concrete frame was used as a first example of an existing building. The responses of interest were the maximum floor drift and the maximum roof drift.

(2) A twenty-story reinforced concrete structure was used as an example of a high-rise building, with its seismic performance evaluated for two levels of earthquakes.
The responses chosen were the maximum values of the roof displacement, the roof acceleration, the inter-story drift ratio of the 15th floor, the inter-story drift ratio of the 5th floor, the base shear force and the base overturning moment.

(3) A bridge bent without and with seismic isolation was assessed for its seismic performance when subjected to two levels of ground shaking. The maximum values of the displacement at the cap beam, column base moment, column base shear and column ductility, and of the beam moment and beam ductility, were selected as the responses.

(4) A wood shear wall, for which the influence of the nail spacing on the structural performance was investigated.

(5) An actual instrumented building that has experienced three earthquakes and suffered damage was evaluated for its seismic performance, if it were subjected to a ground shaking similar to that of the Northridge, California earthquake of 1994.

6.2 Description of the Nonlinear Dynamic Analysis Program

Any nonlinear dynamic analysis program can be used for the calculation of structural response. In this thesis, a general-purpose 3D nonlinear static and dynamic structural analysis program, CANNY (Li, 1996), was used. Material nonlinearity is represented by a lumped plasticity model. For geometric nonlinearity, P-Δ effects can be included. The structural system is discretized into an assembly of massless elements. Altogether, seven types of element are available, i.e., beam, column, shear panel, link, support, cable and isolation elements. The mass can be lumped at the structural joints or concentrated at the center of gravity of each floor when a rigid diaphragm is assumed.

The program can be used for analyzing structural responses due to dead, live, wind and seismic loads. Nonlinear static pushover analysis of a structure subjected to monotonic or cyclic loading can be undertaken, with a specified limit on roof displacement or base shear. Nonlinear dynamic analysis is conducted step by step in the time domain using either Newmark's β method or Wilson's θ method.

A number of hysteresis models are built into the program for describing the nonlinear force-displacement (moment-curvature) behavior of members subjected to cyclic loading. Uniaxial hysteresis models are devised to simulate the inelastic behavior of uniaxial bending, shear and axial tension or compression. Multiple axial spring models can be used to simulate the flexural behavior of reinforced concrete columns under the action of varying axial load and biaxial bending. Biaxial shear models are developed to approximate the column biaxial shear deformation or the lateral stiffness of a layered rubber bearing under bi-directional lateral loads.

This program was selected because it uses the hard disk as virtual memory to store the stiffness matrix and to conduct the analysis, with no limitation as to the size of the problem. It can perform nonlinear dynamic analysis of a structure quickly, and there is a library of hysteresis models available to choose from.
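For orientation, a minimal sketch of Newmark time-stepping is given below for a linear elastic single-degree-of-freedom system under ground acceleration. It is not CANNY's implementation, which handles hysteretic multi-degree-of-freedom models, but it shows the incremental average-acceleration scheme (β = 1/4, γ = 1/2) referred to above; the system properties and the toy record are assumptions for the example only.

```python
import numpy as np

def newmark_linear_sdof(m, c, k, ag, dt, gamma=0.5, beta=0.25):
    """Linear-elastic SDOF response to a ground acceleration record ag,
    computed with the incremental Newmark method (average acceleration
    by default)."""
    n = len(ag)
    u, v, a = np.zeros(n), np.zeros(n), np.zeros(n)
    p = -m * ag                                   # effective earthquake force
    a[0] = (p[0] - c * v[0] - k * u[0]) / m
    khat = k + gamma / (beta * dt) * c + m / (beta * dt**2)
    ca = m / (beta * dt) + gamma / beta * c
    cb = m / (2 * beta) + dt * (gamma / (2 * beta) - 1.0) * c
    for i in range(n - 1):
        dp = (p[i + 1] - p[i]) + ca * v[i] + cb * a[i]
        du = dp / khat
        dv = (gamma / (beta * dt) * du - gamma / beta * v[i]
              + dt * (1.0 - gamma / (2 * beta)) * a[i])
        da = du / (beta * dt**2) - v[i] / (beta * dt) - a[i] / (2 * beta)
        u[i + 1], v[i + 1], a[i + 1] = u[i] + du, v[i] + dv, a[i] + da
    return u, v, a

# Example: 1 s period, 5% damping, a short synthetic decaying-sine record.
m, T, zeta = 1.0, 1.0, 0.05
k = m * (2 * np.pi / T) ** 2
c = 2 * zeta * np.sqrt(k * m)
dt = 0.01
t = np.arange(0.0, 10.0, dt)
ag = 2.0 * np.sin(2 * np.pi * 1.5 * t) * np.exp(-0.5 * t)   # m/s^2, toy record
u, v, a = newmark_linear_sdof(m, c, k, ag, dt)
print("peak displacement:", np.max(np.abs(u)))
```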
6.3 Case Study 1: A Two-story Reinforced Concrete Plane Frame

6.3.1 Description of the structure and ground motion

Figure 6.1 Reinforced concrete plane frame geometry

A one-bay, two-story reinforced concrete plane frame is selected as a first example of an existing structure for seismic reliability assessment. The dimensions of the frame are shown in Figure 6.1. It has a span of 9.0 m and a story height of 4.0 m. All the columns have a cross section of 400 x 500 mm, while the beams have a cross section of 300 x 750 mm. The columns are symmetrically reinforced using 5#25 steel rebars (As = 2550 mm²), and the beams have 4#25 rebars (As = 2040 mm²) at both the top and the bottom. The weights on the roof and the floor are denoted by W1 and W2 respectively.

For seismic retrofit of an existing structure, reliability assessment needs to be carried out to evaluate its performance under seismic excitation, in an effort to identify weaknesses and propose strengthening measures. It was assumed that the earthquake occurrence could be described as a Poisson process with arrival rate ν = 0.01/year, and that the PGA had a Lognormal distribution with coefficient of variation (COV) 0.6, with design PGA (return period 475 years) a_d = 400 cm/sec².

P_a(a > a_d) = 1.0 - exp(-ν P_e(a > a_d)) = 1/475    (6.1)

or

P_e(a > a_d) = 2.10748234e-3 / 0.01 = 0.210    (6.2)

The corresponding Normal variate for the event is β_e = 0.803.

Since σ_lnA = sqrt(ln(1 + Va²)) = 0.6, or Va = 0.658    (6.3)

And a_d = ā / sqrt(1 + Va²) × exp(β_e sqrt(ln(1 + Va²))) = 400 cm/sec²    (6.4)

So ā = 400 sqrt(1 + Va²) / exp(β_e sqrt(ln(1 + Va²))) = 478.8869452 / exp(0.803 × 0.6) ≈ 300 cm/sec²    (6.5)

And σ_a = ā Va = 300 × 0.658 ≈ 200 cm/sec²    (6.6)

Hence, the earthquake peak ground acceleration is assumed to have a Lognormal distribution with a mean of 300 cm/sec² and a standard deviation of 200 cm/sec².

Due to the high uncertainty associated with the expected earthquake ground motion, the seismic ground shaking was presumed to be characterized mainly by three parameters (PGA A_g, predominant ground frequency ω_g and duration T_d) that have the probability distributions given in Table 6.1.

Thirty random combinations of A_g, ω_g and T_d were generated via Latin Hypercube Sampling, as given in Table 6.2. Based on these combinations, thirty ground motion acceleration time histories were synthesized using the Hsu & Bernard modulation function (where t0 was taken as 0.2 T_d).

Table 6.1 Case study 1: Ground motion parameter distributions and statistics
Parameter        Distribution   Mean    Standard deviation
A_g (cm/sec²)    Lognormal      300     200
ω_g (rad/sec)    Normal         7.50    2.00
T_d (sec)        Normal         40.0    10.0

6.3.2 Construction of the response databases

To build response databases for neural network training, five variables were chosen as input variables, namely, the steel yield strength fy, the concrete compressive strength f'c, the modulus of elasticity of concrete Ec, the weight on the roof W1 and the weight on the floor W2. Although f'c and Ec are correlated, they were treated as independent variables.
The bounds are given in Table 6.3.

Table 6.2 Case study 1: Ground motion parameter combinations of ω_g (rad/sec), T_d (sec) and A_g (cm/sec²)
2.987 214.635 7.177 29.572 7.389 106.379 54.089 286.670 5.858 37.919 8.806 326.559
5.320 39.631 430.634 52.725 8.299 326.075 9.708 47.239 246.511 38.375 421.700 10.769
38.375 7.575 102.975 37.577 55.258 4.649 54.033 8.849 362.672 24.269 6.607 388.662
44.706 214.214 9.857 41.971 137.108 9.478 43.697 6.579 588.179 32.187 11.666 740.161
11.623 239.109 9.796 46.306 130.679 9.465 25.356 8.369 512.112 36.272 219.398 9.566
55.202 8.640 243.753 28.331 8.068 140.709 38.223 6.076 156.997 39.924 8.736 225.254
50.279 10.082 212.679 40.164 5.902 811.417 30.247 5.976 126.201 42.378 7.015 879.899
5.997 67.593 607.912 46.157 8.602 666.123

Two responses were selected as the output variables, viz., the displacement at the floor, D2, and the displacement at the roof, D1. Latin Hypercube Sampling was applied to generate a design of 150 combinations of the five input variables (fy, f'c, Ec, W1, W2). Then, for every combination of the input variables, the program CANNY was run to compute the corresponding responses D2 and D1 for the 30 synthesized ground acceleration time histories. Subsequently, for each response, its mean and standard deviation were calculated based on the 30 values for the 30 artificial ground accelerograms. Finally, for every response, two response databases were created, one for its mean and the other for its standard deviation. Altogether, four response databases were constructed. Appendix A shows just the first 10 combinations and the corresponding responses.

The cross-peak tri-linear model CP3 was adopted to simulate the hysteresis behavior of the reinforced concrete members, with its hysteresis skeleton curve shown in Figure 6.2. This model can be used to simulate the post-yield unloading stiffness degradation and strength deterioration.

Figure 6.2 Cross-peak tri-linear hysteresis model CP3 (Li, 1996)

Table 6.3 Case study 1: Input variable bounds
Variable     Lower bound   Upper bound
fy (MPa)     300           500
f'c (MPa)    15            45
Ec (MPa)     19500         25500
W1 (KN)      280           370
W2 (KN)      360           540

6.3.3 Reliability assessment

Based on the aforementioned four response databases, four neural networks were trained, for the mean and standard deviation of the two responses, as the earthquakes were changed according to Table 6.2. The neural network training program was run to learn the unknown functional dependencies between the five input variables (fy, f'c, Ec, W1, W2) and the four output variables (D̄1, S_D1, D̄2, S_D2).

Hereafter, the neural network relative error is defined as

e_k = (O_k - Y_k) / O_k    (6.7)

with the root mean square relative error (RMSRE) given by

RMSRE = sqrt( (1/P) Σ_{k=1}^{P} e_k² )    (6.8)

where O_k denotes the target output for the k-th example; Y_k denotes the neural network output for the k-th example; and P denotes the number of examples.

In this case, of the 150 examples, 120 were used for training and 30 were used for testing. The numbers of hidden neurons and the network RMSREs for the four responses are given in Table 6.4, with the relative error statistics for every response shown in Table 6.5.
Table 6.4 Case study 1: Neuron numbers and neural network RMSREs
Response   Neuron number   Training   Testing
D̄1          5               0.017      0.024
S_D1        6               0.021      0.020
D̄2          4               0.015      0.015
S_D2        3               0.019      0.015

In the table, D̄1 denotes the mean value of the roof displacement D1; S_D1 denotes the standard deviation of the roof displacement D1; D̄2 denotes the mean value of the floor displacement D2; and S_D2 denotes the standard deviation of the floor displacement D2.

Table 6.5 Case study 1: Neural network training relative error statistics
Relative error   Mean      Standard deviation
e(D̄2)            -0.0008   0.0185
e(S_D2)          0.0002    0.0152
e(D̄1)            0.0004    0.0197
e(S_D1)          -0.0008   0.0224

During the reliability analysis, the input variables were postulated to have the probability distributions and statistics shown in Table 6.6.

Table 6.6 Case study 1: Input variable probability distributions and statistics
Input variable   Distribution   Mean      Standard deviation
fy (MPa)         Lognormal      400.0     30.0
f'c (MPa)        Normal         30.0      4.5
Ec (MPa)         Normal         22500.0   1000.0
W1 (KN)          Normal         360.0     24.0
W2 (KN)          Normal         450.0     30.0

Three limit states were examined in this study, corresponding to three performance levels, i.e., collapse prevention, life safety and normal function. For each combination, the displacements over the 30 records were fitted to a Lognormal distribution. Thus, the roof displacement D1 and the floor displacement D2 were expressed as

D1 = D̄1 / sqrt(1 + (S_D1/D̄1)²) × exp( R_N sqrt( ln(1 + (S_D1/D̄1)²) ) )    (6.9a)

D2 = D̄2 / sqrt(1 + (S_D2/D̄2)²) × exp( R_N sqrt( ln(1 + (S_D2/D̄2)²) ) )    (6.9b)

where R_N is a random variable with a Standard Normal distribution.

If the responses were instead fitted to an Extreme Value Type I distribution, they could be calculated by the inverse transform as

D1 = D̄1 - (√6 S_D1 / π) [ γ + ln(-ln p) ]    (6.10a)

D2 = D̄2 - (√6 S_D2 / π) [ γ + ln(-ln p) ]    (6.10b)

where the Euler constant γ = 0.5772, and p is a random variable uniformly distributed over the interval [0, 1].

(1) Collapse prevention limit state

For the limit state of collapse prevention, one failure mode was considered with respect to the roof displacement, as indicated by the following performance function, with the displacement limit set to 3% of the building height. The associated reliability indices obtained by means of Importance Sampling (IS) and Monte Carlo Simulation (MCS) are shown in Table 6.7, with the responses calculated by Neural Networks (NN) and Local Interpolation (LI) (Foschi et al., 2002).

G = 0.240 - D1(fy, f'c, Ec, W1, W2)    (6.11)

Table 6.7 Case study 1: Reliability indices for collapse prevention limit state
Performance function   IS, NN          IS, LI          MCS, NN         MCS, LI
G = 0.240 - D1         1.828 (1.826)   1.767 (1.780)   1.825 (1.821)   1.769 (1.782)
Note: The values in parentheses are based on the Extreme Value Type I distribution.

(2) Life safety limit state

For the limit state of life safety, three failure modes were considered, with regard to the roof displacement, the floor displacement and the inter-story drift between the roof and the floor, as indicated by the following three performance functions. The drift limit was set to 1.5% of the height of the story or building. The associated reliability indices by IS and MCS for the three failure modes, as well as the system reliability, are given in Table 6.8, with the responses calculated by NN and LI.
G1 = 0.120 - D1(fy, f'c, Ec, W1, W2)    (6.12a)

G2 = 0.060 - D2(fy, f'c, Ec, W1, W2)    (6.12b)

G3 = 0.060 - [ D1(fy, f'c, Ec, W1, W2) - D2(fy, f'c, Ec, W1, W2) ]    (6.12c)

Table 6.8 Case study 1: Reliability indices for life safety limit state
Performance function      IS, NN           IS, LI           MCS, NN          MCS, LI
G1 = 0.120 - D1           0.937 (0.741)    0.887 (0.671)    0.935 (0.740)    0.887 (0.675)
G2 = 0.060 - D2           0.719 (0.418)    0.690 (0.420)    0.722 (0.423)    0.694 (0.423)
G3 = 0.060 - (D1 - D2)    0.625 (0.399)    0.614 (0.398)    0.623 (0.398)    0.614 (0.399)
System reliability        N/A              N/A              0.052 (-0.356)   0.030 (-0.339)
Note: The values in parentheses are based on the Extreme Value Type I distribution. N/A = not available.

(3) Functionality limit state

For this limit state, three failure modes were considered, regarding the roof displacement, the floor displacement and the inter-story drift between the roof and the floor. The three performance functions are listed below, with the displacement limit set to 0.5% of the height of the story or building. The associated reliability indices by IS and MCS for every failure mode, as well as the system reliability, are presented in Table 6.9.

G1 = 0.040 - D1(fy, f'c, Ec, W1, W2)    (6.13a)

G2 = 0.020 - D2(fy, f'c, Ec, W1, W2)    (6.13b)

G3 = 0.020 - [ D1(fy, f'c, Ec, W1, W2) - D2(fy, f'c, Ec, W1, W2) ]    (6.13c)

Table 6.9 Case study 1: Reliability indices for functionality limit state
Performance function      IS, NN             IS, LI             MCS, NN            MCS, LI
G1 = 0.040 - D1           -0.510 (-0.472)    -0.479 (-0.406)    -0.507 (-0.461)    -0.480 (-0.414)
G2 = 0.020 - D2           -0.091 (-0.102)    -0.084 (-0.100)    -0.061 (-0.093)    -0.056 (-0.089)
G3 = 0.020 - (D1 - D2)    -0.511 (-0.475)    -0.511 (-0.464)    -0.587 (-0.476)    -0.582 (-0.471)
System reliability        N/A                N/A                -1.499 (-0.555)    -1.529 (-0.513)

It can be observed from the above results that, subject to the probabilistic earthquake ground motion and the assumed variable statistics, the performance of the structure can be considered below standard. Though collapse is unlikely (with a probability of failure of about 2%), life safety of the occupants cannot be guaranteed (with a probability of failure of about 50%), to say nothing of normal operation (with a probability of failure of more than 90%). Hence, the structure needs to be retrofitted up to standard based on the assumed seismic hazard. For comparison with Neural Networks, another approximation scheme, Local Interpolation, was also employed. It was found that Local Interpolation took more time than Neural Networks, as for each query point (the point whose response is sought) it involves searching the response database for some nearest neighbors and estimating the response by interpolation, which is time consuming, especially for a large database. It can also be seen that, in general, the reliability prediction based on the Lognormal distribution is at about the same level as that based on the Extreme Value Type I distribution.

6.3.4 Sensitivity analysis

Sensitivity analysis was conducted to evaluate the influence of each variable on the reliability index, based on which the important variables can be identified. Only the collapse prevention limit state was considered for this purpose, with the responses fitted to the Lognormal distribution. The results are given in Table 6.10, in which each mean was varied up and down by 5% while the others were kept unchanged.
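A minimal sketch of this one-at-a-time perturbation of the means is shown below; reliability_index is a placeholder for the importance-sampling analysis with the trained neural networks, replaced here by a smooth toy function so that the loop is runnable.

```python
import numpy as np

def reliability_index(m):
    # Stand-in only: a smooth toy function roughly reproducing the trends of
    # Table 6.10 (beta rises with fy and f'c, falls with Ec, W1 and W2).
    # It is NOT the neural-network / importance-sampling model of this study.
    return (1.81 + 2.6e-3 * (m["fy"] - 400.0) + 0.029 * (m["fpc"] - 30.0)
            - 1.2e-4 * (m["Ec"] - 22500.0) - 2.7e-3 * (m["W1"] - 360.0)
            - 1.5e-3 * (m["W2"] - 450.0))

base_means = {"fy": 400.0, "fpc": 30.0, "Ec": 22500.0, "W1": 360.0, "W2": 450.0}

for name in base_means:
    for factor in (0.95, 1.00, 1.05):          # vary each mean by +/-5%
        trial = dict(base_means)
        trial[name] = factor * base_means[name]
        print(f"{name} x {factor:4.2f}: beta = {reliability_index(trial):.3f}")
```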
Table 6.1,0 Case study 1: Variation of reliability index with statistical parameters Parameter value Reliability index Parameter Variable  /,(MPa) °(fy)  tff'c) / ' (MPa) c  <*(f.) rl(E ) c  £ (MPa) c  a(E ) c  rfWi) Wj (KN) a(W ) x  »(W ) 2  <j(W ) 2  380 400 420 4 20 40 28.5 30.0 31.5 0.3 3.0 6.0 21375 22500 23625 225 450 1000 342 360 378 3.6 7.2 18.0 427.5 450.0 472.5 4.5 9.0 22.5  1.755 1.808 1.860 1.807 1.807 1.804 1.782 1.825 1.870 1.934 1.820 1.786 1.940 1.807 1.671 1.820 1.815 1.807 1.854 1.807 1.757 1.811 1.811 1.809 1.841 1.807 1.772 1.808 1.808 1.808  Based on the above results, it can be concluded that the mean values of the five input variables are important for the reliability evaluation, while their standard deviations are not so important as the reliability index is not sensitive to the variation of the standard deviations.  CHAPTER 6 SEISMIC RELAIBILITY ANALYSES: CASE STUDIES  ju(f ) and ju( f' ) have a positive influence on reliability, whereas n(E ), p(M,) and y  c  c  ju(M ) have a negative impact on reliability. 2  6.4 Case Study 2: A Tall Reinforced Concrete Frame 6.4.1 Description of the structure  The structure under investigation is a two-bay, twenty-story reinforced concrete frame (Figure 6.3), taken as an example of a tall building. The story height is 4 m, and each bay is 8 m. The beams have a constant cross section 350mm x 700 mm. The columns have varied cross sections along the height of the building:fromstories 1 to 7, BjxH,; from stories 8 to 14, B xH ; 2  2  from  stories 15 to 20, B xH . 3  The reinforcement ratio for the beams and  }  columns is assumed about 1%.  6.4.2 Construction of the response databases  Fifteen random variables were selected as the input variables, and they were, •  peak ground acceleration, A ;  •  predominant groundfrequency,co ;  •  earthquake strong motion duration, T ;  •  distributed vertical load on beam, q;  •  steel yield strength, f ;  •  concrete compression strength for columnsfromstory 1 to 7,  •  concrete compression strength for columns from story 8 to 14,  g  g  d  y  130  f; cl  f; c2  CHAPTER 6 SEISMIC RELAIBILITY ANALYSES: CASE STUDIES concrete compression strength for columnsfromstory 15 to 20,  f; c3  concrete compression strength for beams, f ; b  cross section width of columnsfromstory 1 to 7, B, ; cross section depth of columnsfromstory 1 to 7, H,; cross section width of columns from story 8 to 14, B ; 2  cross section depth of columnsfromstory 8 to 14, H ; 2  cross section width of columnsfromstory 15 to 20, B ; 3  cross section depth of columnsfromstory 15 to 20, H ; 3  |< H« 8 m  8 m  »|  Figure 6.3 Geometry of tall building  The lower bounds and upper bounds of these variables for constructing an experimental design, are given in Table 6.11.  131  CHAPTER 6 SEISMIC RELAIBILrTY ANALYSES: CASE STUDIES  Table 6.11 Case study 2: Input variable bounds Input variable Lower bound  Upper bound  A (cm/sec )  10  980  co (rad/sec)  7t  1271  T (sec)  1  60  q (KN/m)  15  60  /„(MPa)  400  450  35  45  / (MPa)  25  35  /  (MPa)  15  25  / (MPa)  15  25  2?, (mm)  700  1000  /f, (mm)  900  1200  5 (mm)  500  700  H (mm)  700  900  ^(ram)  400  500  //^ (mm)  500  700  2  g  d  /  c l  (MPa)  c2  c 5  6  2  2  Six response variables were selected; namely, the maxima of the roof displacement D  20>  roof acceleration A , the 15 story drift ratio 6 , the 5 th  20  15  th  the  story drift ratio 0 , the base S  overturning moment M and the base shear force V. 
Hammersley sequence sampling was adopted to generate 300 combinations of the fifteen input variables. For every combination of the input variables, the program CANNY was run to compute the desired responses for twenty synthesized ground acceleration time histories (characterized by the three ground motion parameters, ie., A , co and T ). Next, for each response, its mean and standard g  g  d  deviation were calculated based on the values for the twenty artificial ground accelerograms. Appendix B shows just thefirst10 combinations and the corresponding responses.  The CANNY tri-linear model CA7 was employed to simulate the hysteresis behavior of the  132  CHAPTER 6 SEISMIC RELAIBJLITY ANALYSES: CASE STUDIES reinforced concrete members, as this model can delineate stiffness degradation, strength deterioration and pinching behavior of reinforced concrete. Its hysteresis skeleton curves are shown in Figure 6.4. It was assumed that shear strengths were sufficient for both the columns and beams, so the elastic model ELI was used for shear calculations.  .fy  4'm ~~l  ^u*^**^  j  ••  —  *  «  XJ" :::::::::  :::::::::::::::::::::::::  Y* tiFy (a) Unloading Stiffness Degradation  00 Strength Deterioration  (<0 Pinching Behavior  Figure 6.4 CANNY tri-linear hysteresis model (Li, 1996)  133  •D  CHAPTER 6 SEISMIC RELAIBILrTY ANALYSES: CASE STUDIES  6.4.3. Reliability assessment  6.4.3.1 Neural networks training  Twelve neural networks were trained for the mean and standard deviation of the six responses. Of the 300 combinations, 240 were used for training and 60 were used for testing. The number of hidden neurons and network RMSREs for the twelve responses are presented in Table 12, with the relative error statistics for every response given in Table 6.13.  Table 6.12 Case study 2: Neuron numbers and network RMSREs Testing Neuron number Training Response 0.030 0.011 9 e  9  0.015  0.039  7  0.009 -  0.017  4  0.032  0.040  9  0.015  0.024  6  0.020  0.032  7  0.011  0.019  7  0.017  0.034  8 9  0.010 0.017  0.026 0.040  V  9  0.008  0.022  Sy  8  0.020  0.037  •^20 $A20  o  15  $915  G  5  M  s  M  In the table, D , S 2 0  D 2 0  A ,S 2 0  0 ,S 15  9,S 5  M,S  M  A 2 0  denote the mean and standard deviation of roof displacement D ; 20  denote the mean and standard deviation of roof acceleration A ; 2 0  denote the mean and standard deviation of the 15 story drift ratio 6 ; th  915  l5  denote the mean and standard deviation of the 5 story drift ratio 0 ; th  65  S  denote the mean and standard deviation of base overturning moment M;  134  C H A P T E R 6 SEISMIC R E L A I B I L I T Y ANALYSES: C A S E STUDIES  V,S denote the mean and standard deviation of base shear V; V  Table 6.13 Case study 2: Neural network training relative error statistics Relative error  Mean  Standard deviation  e(D )  -0.0060  0.0178  0.0158  0.0304  s(A )  0.0001  0.0120  s(S )  0.0099  0.0295  e(0„)  -0.0056  0.0202  e(S )  0.0016  0.0209  e(G )  -0.0079  0.0193  0.0016  0.0197  e(M)  0.0006  0.0158  e(S )  -0.0003  0.0246  e(V)  0.0028  0.0123  e(S )  -0.0004  0.0247  20  (^D2o)  £  20  A20  6l5  5  M  v  Two limit states were considered in this study corresponding to two performance levels, serviceability limit state and ultimate limit state. 
The responses were fitted to a Lognormal distribution as follows,  f  f  D 20  D 20  exp\ R  [1  'D20  1+  2\  K 20j  °D20  In 1 +  < ^20  (6.14a) j  J  D  f  20  x  ^20  1+  vi  exp\  'A20  \ Ao J  f  2\  In 1 +  °A20  L  2  135  20  A  (6.14b) -J  JJ  CHAPTER 6 SEISMIC RELAIBILrTY ANALYSES: CASE STUDIES  f  9 15  o = l5  exp\  s  1+  [i  v  r  \ 15 J 9  5  05 = 1+  s  M  M  f  In 1 +  expl v  r  \  rexplR  (6.14c)  9  bi f  o  2\ f In 1 + ^015 < 15 J )  fs }  )  2\  <5 j  (6.14d)  6  J  )  In 1 +  (6. He)  •  J+  M J  f V=  (  exp\ J+  <v J  11  In 7+  fs  2^  }  { UJ  (6.141) JJ  in the above, Rn is a random variable with standard Normal distribution.  If the responses were fitted to Extreme Value-I distributions, then the responses could be calculated by inverse transform as,  ^o-D -^^[Y+ln(-lnp)] D„ = D,  (6.15a)  A  (6.15b)  20  20  = * 2 o - ^ ^ f r + lnf-lnp)]  7T  15 ~ 15 ~  9  9  0 =e -^-^fr s  s  ^S,015 f + ln(-lnp)J r  + ln(-lnp)J  136  (6.15c)  (6.15d)  CHAPTER 6 SEISMIC RELAffilLITY ANALYSES: CASE STUDIES  _  |7y  M =M -——^-[y + lnf-lnpJJ n  V  -±A}L[  =  V  (6.15e)  + l (-l p)]  r  n  n  (6.15f)  n  where Euler constant y = 0.5772; with p is a random variable with uniform distribution over the interval [0,1].  6.4.3.2 Two levels of design earthquakes  Two levels of earthquake, a frequent minor earthquake for serviceability limit state evaluation and a rare strong earthquake for ultimate limit state evaluation, were considered.  (1) The earthquake for serviceability limit state  Assume that occurrence of a minor earthquake can be characterized by a Poisson process with arrival rate of v = 0.10/year, and the probability of exceedance of the design earthquake a in 50 years is 50% (annual probability of exceedance 0.013767), then d  P (a>a ) a  d  = 1.0-exp(-vP (a>a )) = 0.013 767 e  (6.16)  d  or P.(a >a ) = ' d d  0  0  1  2  8  6  0.10  2  = 0.13863  (6.17) V  J  The corresponding Normal variate for the event is B = 1.086 e  For this earthquake, assume its peak acceleration has a Lognormal distribution with coefficient of variation 0.6, and that the design earthquake is set at 0.15g, or 147.15 cm/sec , 2  °i  = Jl»0 + V ) = 0.6, or V = 0.658  (6.3)  3  na  a  a  137  CHAPTER 6 SEISMIC RELAIBILrTY ANALYSES: CASE STUDIES  Since a = . °  exp(0 Jln(l + V )) = 147.15 cm/sec 2  d  _ So a = o  (6.18)  2  e  I47.15Jl + V* *  176.170535  + Vl))  eapf &  ex  _ . = P2 cm/sec  2 2  (6.19)  P ( J- 086 *0.6)  And o = aV =92 x 0.658 = 61 cnVsec  (6.20)  2  a  a  Hence, the earthquakes peak ground acceleration is assumed to have a Lognormal distribution with a mean 92 cm/sec and a standard deviation 61 cm/sec . 2  2  (2) The earthquake for ultimate limit state  For this level of earthquake, assume that occurrence of the earthquake can be modeled by a Poisson process with arrival rate of v = 0.01/year, and the design earthquake a  d  with a  return period of 475 years is 0.4g, or, 392.4 cm/sec . As the annual risk is given by, 2  P (a >a ) = 1.0-exp(-vP (a > a )) = 1/475 a  d  e  d  (6.1)  or . 
P(a>a ) d  2.10748234e-3 „ „ , = 0.210748 0.01 f l 7  =  ,, (6.2)  0  The corresponding Normal variate for the event is B = 0.803 e  For this earthquake, assume that its peak acceleration also has a lognormal distribution with coefficient of variation 0.6, then,  o  = Jln(l + V ) = 0.6, or V = 0.658  (6.3)  2  lna  a  Since a = , ° d  a  exp(B Jln(l + V )) = 392.4 cm/sec 2  e  2  a  138  (6.21)  CHAPTER 6 SEISMIC RELAIBILITY ANALYSES: CASE STUDIES  392.4^1+ V  _  2  So a  exp(/3 ylln(J + V )) 2  e  a  469.788  (6.22)  = 290 cm/sec exp(0.803* 0.6)  2  (6.23)  And a = aV = 290x 0.658 = 191 cm/sec  2  a  a  Thus, the postulated earthquake peak ground acceleration has a Lognormal distribution with a mean 290 cm/sec and a standard deviation 191 cm/sec . 2  2  6.4.3.3 Reliability assessment for serviceability limit state  The roof displacement, the roof acceleration, the 15 story drift ratio and the 5 story drift th  th  ratio were evaluated for this limit state. The roof displacement limit was set to 1/400 of the total building height, with the acceleration limit set to 2.0 m/sec . The story drift ratio limit 2  was set to 0.0025 (1/400). The probability distributions and statistics of the input variables were given in Table 6.14. The statistics of steel yield strength and concrete compressive strengths were calculated so that the lower bound and upper bound of each variable (Table 6.11) cover the range from mean - 3*standard deviation to mean + 3*standard deviation. It was assumed that the dimensions were well controlled, so a COV of 1% was used. The performance functions are expressed as the followings,  G, = 0.200 -D20  (6.24a)  G  (6.24b)  2  =2.000-A20 l  G =0.0025-0,15  (6.24c)  G =0.0025-6  (6.24d)  3  4  5  139  CHAPTER 6 SEISMIC RELAIBILrTY ANALYSES: CASE STUDIES Reliability analysis was carried out using IS and MCS, with the responses estimated by neural networks trained beforehand. The responses were fitted to two distributions, ie, Lognormal distribution and Extreme Value-I distribution (reliability index in parenthesis). The results are presented in Table 6.15.  It can be seen that for the specified performance criteria, the structure may be considered to maintain normal operation under the considered earthquakes if a minimum target reliability index was set to be 1.5.  Compared to the 5 story, the 15 story has a lower reliability, th  th  which implies that more deformation occurs at the higher stories of the structure. Whether the responses have a Lognormal or an Extreme value-I distribution, the reliability estimates are quite similar.  
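As an indication of how such an analysis might be organized, the sketch below estimates the reliability index for the drift criterion of Eq. (6.24c) by crude Monte Carlo, regenerating the response from surrogate-predicted statistics through the lognormal transform of Eq. (6.14c). The two nn_* functions are toy stand-ins for the trained networks, and only the PGA is sampled here; the full analysis uses all fifteen input variables of Table 6.14 together with importance sampling.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
n = 200_000

# Toy stand-ins for the trained networks giving the mean and standard
# deviation of the 15th-story drift ratio as functions of the PGA (cm/s^2).
def nn_mean_theta15(ag):
    return 8.0e-4 + 6.0e-6 * ag

def nn_std_theta15(ag):
    return 0.35 * nn_mean_theta15(ag)

# Sample the PGA for the serviceability-level earthquake
# (lognormal with mean 92 and standard deviation 61 cm/s^2).
cv_a = 61.0 / 92.0
ag = np.exp(rng.normal(np.log(92.0 / np.sqrt(1.0 + cv_a**2)),
                       np.sqrt(np.log(1.0 + cv_a**2)), n))

mean_t, std_t = nn_mean_theta15(ag), nn_std_theta15(ag)
cv2 = (std_t / mean_t) ** 2
theta15 = (mean_t / np.sqrt(1.0 + cv2)
           * np.exp(rng.standard_normal(n) * np.sqrt(np.log(1.0 + cv2))))

G = 0.0025 - theta15                       # drift criterion, Eq. (6.24c)
pf = np.mean(G < 0.0)
print("Pf =", pf, " beta =", -norm.ppf(pf))
```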
Table 6.14 Case study 2: Input variable probability distributions and statistics (Serviceability limit state) Mean Standard deviation Distribution Input variable 61.0 92.0 Lognormal A (cm/sec ) 2  g  a) (rad/sec) T (sec) q (KN/m)  Normal Normal  571  7t  20.0  5.0  Normal  45.0  4.5  / „ (MPa)  Lognormal  400.0  10.0  L, (MPa)  Lognormal  40.0  1.5  / „ , (MPa)  Lognormal  30.0  1.5  L, (MPa)  Lognormal  20.0  1.5  A (MPa) B, (mm) H, (mm)  Lognormal  20.0  1.5  Normal  900.0  9.0  Normal  1100.0  11.0  B (mm) H (mm)  Normal  600.0  6.0  Normal  800.0  8.0  Normal  450.0  4.5  Normal  600.0  6.0  g  d  7  2  B, (mm) H (mm) 3  140  1  CHAPTER 6 SEISMIC RELAffilUTY ANALYSES: CASE STUDIES  Table 6.15 Case study 2: Reliability index for serviceability limit state Neural networks Performance function MCS IS 2.173 (2.168)  2.168 (2.170)  2.083 (2.079)  2.071 (2.076)  G = 0.0025-0  1.576(1.576)  1.551 (1.552)  G = 0.0025-0,  1.854(1.840)  1.815 (1.815)  G, = 0.200 -D  20  G = 2.000 -A 2  20  3  1S  4  6.4.3.4 Reliability assessment for ultimate limit state  The roof displacement, the 15 story drift ratio, the 5 story drift ratio, base overturning th  th  moment and base shear force were evaluated for this limit state. The roof displacement limit was set to 1.0% of the building height, with the story drift ratio limit set to 1.0%. The base shear capacity mean was assumed to equal to 10% of the total floor weight of the structure as: 45x16x20x0.1 = 1440KN, and the COV of base shear capacity was assumed 10%. The base overturning moment resistance depends on the column axial forces that are varying during the earthquake, so it is very difficult to estimate. For illustration purpose, it was assumed that the base moment capacity had a Normal distribution with mean 54000 KNm and standard deviation 5400 KNm.  The probability distributions and statistics of the input  variables were given in Table 6.16, with the performance functions expressed as follows,  G,= 0.800 -D  (6.25a)  20  G = 0.010-0  (6.25b)  G = 0.010 - 0  (6.25c)  G =M -M  (6.25d)  2  15  3  4  5  0  141  CHAPTER 6 SEISMIC RELAIBILrTY ANALYSES: CASE STUDIES G =V -V 5  (6.25e)  0  Again, reliability analysis was carried out using IS and MCS, with the responses estimated by the neural networks trained in advance. As before, the responses were fitted to two distributions: Lognormal and Extreme Value -1. The results are presented in Table 6.17, with values in parentheses corresponding to Extreme Value -1 distribution.  From the above calculations, it can be observed that the reliability predictions are close to each other, no matter whether the responses have a Lognormal or an Extreme Value-I distribution. In compared with the 5  th  story, the 15  th  story has a lower reliability, the  implication of which is that more deformation occurs at the upper stories of the structure.  
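The two response representations compared above can be sampled directly from a fitted mean and standard deviation; the short sketch below contrasts the lognormal form of Eq. (6.14) with the Extreme Value Type I inverse transform of Eq. (6.15) for an assumed response and limit (the numbers are illustrative only, not results of this case study).

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000

mean, std = 0.45, 0.12   # assumed mean/std of a peak response (e.g. roof displacement, m)
limit = 0.80             # assumed limit, as in G1 = 0.800 - D20 of Eq. (6.25a)

# Lognormal representation, following the form of Eq. (6.14).
cv2 = (std / mean) ** 2
x_ln = (mean / np.sqrt(1.0 + cv2)
        * np.exp(rng.standard_normal(n) * np.sqrt(np.log(1.0 + cv2))))

# Extreme Value Type I representation, inverse transform as in Eq. (6.15).
gamma = 0.5772
p = rng.uniform(0.0, 1.0, n)
x_ev1 = mean - np.sqrt(6.0) * std / np.pi * (gamma + np.log(-np.log(p)))

print("Pf, lognormal:", np.mean(x_ln > limit))
print("Pf, EV Type I:", np.mean(x_ev1 > limit))
```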
Table 6.16 Case study 2: Input variable probability distributions and statistics (Ultimate limit state) Mean Standard deviation Distribution Input variable 191.0 290.0 Lognormal A (cm/sec ) 2  g  co (rad/sec) T (sec) q (KN/m)  Normal Normal Normal  45.0  5.0 4.5  / „ (MPa)  Lognormal  400.0  10.0  L, (MPa)  Lognormal  40.0  1.5  (MPa) (MPa)  Lognormal  30.0  1.5  Lognormal  20.0  1.5  f (MPa)  Lognormal  20.0  1.5  B, (mm) H, (mm)  Normal  900.0  9.0  Normal  11.0  B (mm) H (mm) y  Normal  1100.0 600.0  2  Normal  800.0  8.0  B, (mm) H (mm)  Normal  450.0  4.5  Normal  600.0  6.0  g  d  h  3  5n 30.0  142  7t  6.0  CHAPTER 6 SEISMIC RELAffilLITY ANALYSES: CASE STUDIES  Table 6.17 Case study 2: Reliability index for ultimate limit state Neural networks Performance function MCS IS 2.884 (2.874) 2.727 (2.768) G, = 0.800-D 20  2.057 (2.086)  2.160 (2.162)  G =0.010-e  2.298 (2.333)  2.395 (2.394)  G =M -M  2.543(2.571)  G =V -V  2.322 (2.345)  2.648 (2.623) 2.442 (2.443)  G = 0.010 -e  15  2  3  4  5  0  5  0  6.5 Case Study 3: A Bridge Bent Without or With Seismic Isolation 6.5.1 Description of the structure  A bridge bent without or with seismic isolation was studied for its seismic performance. The geometry of the bridge with four Lead Rubber Bearing (LRB) isolators is shown in Figure 6.5. The two round columns have a diameter D (mm). The height of the cap beam from the ground is 8 m, with a rectangular section BxH  (B isfixedto D + 500, H = 1500 mm).  The bearings have a square section (width B ) with a round lead plug of diameter B I 4, r  r  and their height is assumed to be 0.4 B The reinforcement ratios of the column and beam r  are assumed 1.25% and 1% respectively.  6.5.2. Construction of the response databases  In the case without isolation, five variables were selected as the input variables, namely, peak ground acceleration A, predominant ground frequency co , strong motion duration T g  d  (Figure 6.6), column diameter D, and vertical load on the bearing Q. In the case with isolation, a sixth variable, the width of the isolators B was added. The lower bounds and the r  upper bounds of the variables are given in Table 6.18.  143  CHAPTER 6 SEISMIC RELAIBILrTY ANALYSES: CASE STUDIES  Six variables were chosen as the output variables, ie., the maxima of displacement at the cap beam A, column base moment M , c  column base shear V , column ductility c  beam  moment M , and beam ductility ju . h  b  Isolator, width B  r  Figure 6.5 Bridge bent with isolation  Figure 6.6 Modulation function  Table 6.18 Case study 3: Input variable jounds Lower bound Upper bound Input variable 1960 20 A (cm/sec ) 2  g  co (rad/sec)  71  1071  T (sec) D (mm)  1  60  1500  2100  Q (KN)  1200  3600  B (mm)  500  1000  g  d  r  Optimized Latin Hypercube Design was adopted to generate 200 combinations of the input variables, including all the data points on the boundary. For every combination of the input variables, twenty artificial earthquake accelerograms (characterized by the three ground motion parameters, ie., A , co and T but with different phases) were generated and the g  g  d  program CANNY was run to compute the corresponding responses. Then, for each response,  144  CHAPTER 6 SEISMIC RELAJBILITY ANALYSES: CASE STUDIES  its mean and standard deviation were calculated based on the values for the twenty artificial ground accelerograms. Finally, for every response, two response databases were constructed, one for its mean and the other for its standard deviation. 
Appendix C shows just thefirst10 combinations and the corresponding responses.  The CANNY tri-linear model CA7 was also employed here to simulate the hysteresis behavior of the reinforced concrete members. The degrading bilinear model BL2 was adopted to describe the nonlinear behavior of the isolators, with the hysteresis skeleton curve shown in Figure 6.7. The shear modulus of rubber is taken as 1.0 MPa, with the yield strength of the lead plug set to 10.0 MPa (Priestley and Calvi, 1996). The yield displacement was assumed 10% of the isolator height, and the post-yield stiffness was taken as 1/3 of the initial stiffness.  6.5.3. Reliability assessment 6.5.3.1 Neural networks training Twelve neural networks were trained for the mean and standard deviation of the six responses. Of the 200 combinations, 160 were used for training and 40 were used for testing. The number of hidden neurons and the network RMSREs for the twelve responses are given in Table 6.19 for the bridge without isolation and Table 6.20 for the bridge with isolation. The relative error statistics for every response are shown in Table 6.21.  145  CHAPTER 6 SEISMIC RELAIBILITY ANALYSES: CASE STUDIES  Figure 6.7 Degrading bilinear model (Li, 1996)  Table 6.19 Case situdy 3: Neuron numbers and network RM SREs (without isc Testing Training Neuron number Response A  K  Mb  12 5  0.018 0.026  0.024 0.039  8  0.012  0.024  12  0.027  0.033  9  0.014  0.028  6  0.045  0.060  9  0.020  0.040  3  0.024  0.030  10  0.019  0.025  8  0.040  0.046  4  0.024  0.025  10  0.033  0.030  Where A,S denote the mean and standard deviation of displacement A; A  M ,S c  Mc  denote the mean and standard deviation of column moment M ; c  146  CHAPTER 6 SEISMIC RELAIBILrTY ANALYSES: CASE STUDIES  V ,S c  Vc  denote the mean and standard deviation of column shear force V ; e  Jic^^ denote the mean and standard deviation of column rotational ductility p ; c  M ,S b  denote the mean and standard deviation of beam moment M ;  m  b  Mb>Sfi> denote the mean and standard deviation of beam rotational ductility ju ; b  Table 6.20 Case study 3: Neuron number and network RMSREs (with isolation) Training Testing Neuron number Response A  K ~Pc M  b  •  Mb Sp  b  10 11  0.017 0.021  0.020 0.034  10  0.019  0.030  7  0.046  0.044  9  0.015  0.021  8  0.041  0.040  12  0.016  0.022  7  0.026  0.029  7  0.019  0.031  10  0.025  0.041  10  0.015  0.034  7  0.032  0.040  Two limit states were investigated that correspond to two performance levels, serviceability limit state and ultimate limit state. With the responses fitted to a Lognormal distribution, they can be calculated as follows,  i R„ i In 7 + \U expl AJ J  A=  7+  ^  11  147  (6.26a)  C H A P T E R 6 SEISMIC R E L A I B I L I T Y A N A L Y S E S : C A S E STUDIES  (S  (  -  exp\R. In J  M = 1+  exp\R,  V = c  +  ^  ^Mc  \  (6.26b)  (6.26c)  7+  7+  In 1 +  ex/?  s  2^  [ i I  7+  (6.26d)  fJC  JJ  2\  (6.26e)  1  ex/? 7? 7+  Mb  1+ Mb  exp\R  In 7+  1  ^  J  In the above, Rn is a random variable with standard normal distribution.  
148  (6.26f)  CHAPTER 6 SEISMIC RELAIBILrTY ANALYSES: CASE STUDIES  Table 6.21 Case study 3: Neural network training relative error statistics With isolation Without isolation Relative error Mean Std dev Mean Std dev 0.0017 0.0176 0.0000 0.0220 e(A) e(SJ  -0.0004  0.0305  -0.0017  0.0265  s(M )  -0.0017  0.0164  -0.0011  0.0213  *(S )  -0.0036  0.0289  -0.0021  0.0460  e(V )  -0.0021  0.0179  0.0011  0.0162  B(S )  -0.0018  0.0632  -0.0015  0.0406  -0.0003  0.0254  0.0019  0.0176  0.0042  0.0269  0.0014  0.0270  s(M )  -0.0027  0.0235  0.0044  0.0235  e(S )  -0.0020  0.0433  -0.0021  0.0460  -0.0019  0.0244  0.0036  0.0199  0.0010  0.0321  -0.0019  0.0336  c  Mc  c  VC  e(Hc)  b  m  e(Sp ) b  As in previous examples, if the responses werefittedto an Extreme Type-I distribution, then they could be calculated as,  A = A -^-^-fr n  + ln(-lnp)J  (6.27a)  — JEs M  c  =M -2L-ML C  [r  i (-i )j  +  n  np  (6.27b)  7t  V<=V -^-*-[y n c  ^*  Mc=Mc~  + ln(-lnp)]  f / + In(-lnp)]  Sf  (6.27c)  (6.27d)  7t  —  M  b  J6S  =M -?-^L[y b  n  Mb =M  —f/ +  b  +  ln(-lnp)]  (6.27e)  ln(-lnp)J  (6.27f)  n  149  CHAPTER 6 SEISMIC RELAIBILITY ANALYSES: CASE STUDIES  where Euler constant y = 0.5772; and p is a random variable with uniform distribution over the interval [0,1].  6.5.3.2 Two levels of earthquakes  Two levels of earthquake were considered in this case. One was a minor earthquake for serviceability limit state, and the other was a major earthquake for ultimate limit state.  (1) The earthquake for serviceability limit state  Assume that occurrence of a minor earthquake can be characterized by a Poisson process with arrival rate of v = 0.05/year, and the probability of exceedance of the design earthquake a in 50 years is 50% (annual probability of exceedance 0.013767), then d  P (a>a ) a  = 1.0-exp(-vP (a >a )) = 0.013 767  d  e  (6.16)  d  or P (a>a )=°° ' "  13862  e  d  0.05  = 0.277253  (6.28)  The corresponding Normal variate for the event is B = 0.591 e  For this earthquake, assume its peak acceleration has a Lognormal distribution and coefficient of variation 0.6, and that the design earthquake is set to a = 180 cm/sec , then 2  d  °  = yll"(l + V ) = 0.6, or V = 0.658  (6.3)  2  lna  a  Since a = d  °  a  exp(B ^ln(l + V )) = 180 cm/sec 2  (6.29)  2  e  180jl + V 215 499 , So a = — ; = = 151 cm/sec exp(B jln(l + V )) exp( 0.591* 0.6 ) 2  2  2  e  150  (6.30)  CHAPTER 6 SEISMIC RELAIBILrTY ANALYSES: CASE STUDIES  And a = aV = 151 x0.658 = 99 cm/sec  (6.31)  2  a  a  Hereby, the earthquake peak ground acceleration has a Lognormal distribution with mean 151 cm/sec and standard deviation 99 cm/sec . 2  2  (2) The earthquake for ultimate limit state  Assume that occurrence of earthquake can be modeled by a Poisson process with arrival rate of  v = 0.01/year, and the design earthquake with a return period of 475 years is  a = 440 cm/sec , then the annual risk is given by, 2  d  PJa>a ) d  = 1.0-exp(-vP (a > a )) =1/475  (6.1)  2.10748234e-3 ^ = 0.210748 0.01  , . (6.2)  e  d  or . P„(a>a ) d  n  =  i n n A O  £  n  The corresponding Normal variate for the event is 6 = 0.803. e  &,  = ylln(l + V ) = 0.6, or V = 0.658  (6.3)  2  na  a  Since a =  °  d  So a =  a  exp(0 Jln(l + V )) = 440 cm/sec 2  (6.32)  2  e  ° f ^ = ' = 325cm/sec exp(Bjln(l + V )) exp(0.803* 0.6) 4  4  5 2 6  7 7  2  (6.33)  2  a  And a = aV =325x0.658 = 214 cm/sec  (6.34)  2  a  a  Hence, the earthquake peak ground acceleration has a Lognormal distribution with mean 325 cm/sec and standard deviation 214 cm/sec . 
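The hazard arithmetic of this subsection (and of Sections 6.3.1 and 6.4.3.2) follows a single pattern and can be reproduced in a few lines. In the sketch below the lognormal standard deviation of ln(A) is fixed at 0.6, so that the coefficient of variation is 0.658 as in the derivations above, and the function recovers approximately (325, 214) and (151, 99) cm/sec² for the two earthquake levels.

```python
import numpy as np
from scipy.stats import norm

def pga_statistics(nu, annual_p_exceed, a_design, zeta=0.6):
    """Mean and standard deviation of a lognormal PGA whose design value
    a_design has the given annual exceedance probability under a Poisson
    occurrence model with rate nu (events/year); zeta is the assumed
    standard deviation of ln(A)."""
    p_event = -np.log(1.0 - annual_p_exceed) / nu      # P(A > a_d | event)
    beta_e = norm.ppf(1.0 - p_event)                   # normal variate of the event
    cov = np.sqrt(np.exp(zeta**2) - 1.0)               # = 0.658 for zeta = 0.6
    mean = a_design * np.sqrt(1.0 + cov**2) / np.exp(beta_e * zeta)
    return mean, cov * mean

# Ultimate limit state: nu = 0.01/yr, 475-year return period, a_d = 440 cm/s^2.
print(pga_statistics(0.01, 1.0 / 475.0, 440.0))        # approx. (325, 214)
# Serviceability limit state: nu = 0.05/yr, annual exceedance 0.013767, a_d = 180 cm/s^2.
print(pga_statistics(0.05, 0.013767, 180.0))           # approx. (151, 99)
```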
2  2  151  CHAPTER 6 SEISMIC RELAIBILITY ANALYSES: CASE STUDIES 6.5.3.3 Reliability assessment for serviceability limit state  The displacement at the cap beam was checked against the limit that was set as 1/200 of the height. The distributions and statistics of the input variables were given in Table 6.22, with the performance function in the following form,  G = 8.0/200 -A(A ,a) ,T ,D,Q) g  or  g  (6.35a)  s  G = 8.0/800-A(A ,(o ,T ,D,Q) g  g  (6.35b)  s  where A denotes the cap beam lateral displacement.  Table 6.22 Case study 3: Input variable probability distributions and statistics (Serviceability limit si.ate) Standard deviation Mean Distribution Input variable A (cm/sec )  Lognormal  151.0  99.0  co (rad/sec)  Normal  571  71  T (sec) /J(mm)  Normal  20  5  Normal  1800  90  Q(KN)  Normal  2400  240  B (mm)  Normal  750  7.5  2  g  g  s  r  (1) Bridge bent without isolation  In this case, reliability analysis was conducted by IS and MCS, with responses calculated by neural networks and a Local interpolation scheme. The results are given in Table 6.23.  Table 6.23 Case study 3: Reliability index for serviceability limit state without isolation MCS IS Performance function LI NN LI NN G = 8.0/200-A  2.101(2.096) 1.526(1.517) 2.104(2.101)  1.536(1.510)  Note: The values in parentheses are based on Extreme Value-I distribution  152  CHAPTER 6 SEISMIC RELAIBILrTY ANALYSES: CASE STUDIES  (2) Bridge bent with isolation  In this case, reliability analysis was also carried out with IS and MCS, and the results were presented in Table 6.24, where the mean of B was set to 750 mm with a COV of 0.01. r  Table 6.24 Case study 3 Reliability index for serviceability limit state with isolation Performance function  MCS  IS NN  NN  LI  G = 8.0/200-A  3.760(3.719)  3.471(N/A)  G = 8.0/800 -A  2.063(2.067) 1.896(2.002) 2.064(2.068)  LI  3.860(3.812) 3.800(3.823) 1.882(1.872)  The sensitivity of reliability index (G = 8.0/800 - A) with respect to the mean of B was r  investigated and plotted in Figure 6.8, in which the COV of B was assumed 0.01. r  It can be observed from Table 6.23 and Table 6.24 that, for serviceability limit state, the reliability level of the isolated bridge where the lateral displacement limit is set to 1/800 of its height, is about the same as that of the non-isolated bridge where the lateral displacement limit is set to 1/200 of its height; hence, seismic isolation can greatly improve the bridge performance. The analysis using neural networks takes less time compared to Local Interpolation, as Local Interpolation involves searching the whole database and ranking the closest neighbors to the query point.  It can be seen from Table 6.23 and 6.24 and Figure 6.8 that the reliability indices are approximately the same, whether the response is fitted to Lognormal or Extreme Value-I distributions. Figure 6.8 shows that the reliability index decreases as the isolator mean width increases. This is explained by the fact that, as the isolator mean width increases, so does its  153  C H A P T E R 6 SEISMIC R E L A J B E J T Y A N A L Y S E S : C A S E STUDIES  stiffness; therefore, more inertial force is transmitted to the bridge bentfromthe deck, which results in larger displacement.  3A  1  500  1  600  700  1  800  1  1  900  1000  Isolator mean width B (mm)  Figure 6.8 Variation of reliability index with B (mm) r  6.5.3.4 Reliability assessment for ultimate limit state  Strong earthquakes were applied for reliability analysis of the ultimate limit states. 
The probability distributions and statistics of the input variables are given in Table 6.25.  (1) Bridge bent without isolation In this case, five performance functions were evaluated regarding cap beam displacement, column moment, column shear, column ductility, and beam moment. They are given below,  Table 6.25 Case study 3: Input variable probability distributions and statistics (Ultimate limit state) Standard deviation Distribution Mean Input variable 214.0 325.0 Lognormal A (cm/sec ) 2  g  o) (rad/sec)  Normal  571  T (sec) /J(mm)  Normal  30  5  Normal  1800  90  g(KN)  Normal  2400  240  B (mm)  Normal  750  7.5  g  d  r  154  CHAPTER 6 SEISMIC RELAIBILrTY ANALYSES: CASE STUDIES  G, = 0.2-A(A ,co ,T ,D,Q)  (6.36a)  G =M  (6.36b)  g  U  2  g  d  -M (A ,co ,T ,D,Q) c  g  g  d  G =V -V (A ,a> ,T ,D,Q)  (6.36c)  G =p -p (A ,co ,T ,D,Q)  (6.36d)  G =M  (6.36e)  3  c  U  4  0  c  y  s  g  g  g  g  d  d  -M (A ,co ,T ,D,Q) b  g  g  d  where A denotes the cap beam lateral displacement; Mu and Mc denote the column ultimate moment capacity and seismic demand; V„ and V denote the column ultimate shear capacity and seismic demand; c  u, and u. denote the column hinge rotational ductility capacity and seismic demand; 0  c  My and Mb denote the beam yield moment capacity and seismic demand;  In the above equations, the cap beam lateral displacement limit was set to 2.5% of the height, and the assumed statistics of other variable are presented in Table 6.26. Table 6.26 Case study 3: Random variable probability distributions and statistics Standard deviation Mean Distribution Random variable 445.0 8900.0 Normal M (KNm) u  V (KN)  Normal  2250.0  112.25  Ho M (KNm)  Normal  12.0 16000.0  1.2  u  v  Normal  800.0  Importance Sampling and Monte Carlo simulation were conducted for reliability calculation, with the responses fitted to Lognormal distribution and estimated by neural networks. The results are presented in Table 6.27.  155  CHAPTER 6 SEISMIC RELAIBELrTY ANALYSES: CASE STUDIES  Table 6.27 Case study 3: Reliability index for ultimate limit state without isolation MCS IS Performance function Gj =  0.2-A  G =M -M 2  U  C  G,=K-V  e  G =My-M 5  b  2.368  2.438  2.255  2.589  2.242  2.506  2.265  2.283  5.218  Not done  (2) Bridge bent with isolation  Reliability analysis was also carried out with Importance Sampling and Monte Carlo simulation, with the responses estimated by neural networks. The results are presented in Table 6.28, where the mean of B was set to 750 mm with a coefficient of variation 0.01. r  Table 6.28 Case study 3: Reliability index for ultimate limit state with isolation Reliability index Performance function G, =  3.606 (MCS)  0.2-A  G =M -M  3.588 (MCS)  G =V -V  3.716 (MCS)  G =M -Mc  6.000* (IS)  G =M -M  5.123" (IS)  2  3  4  5  U  U  C  C  0  y  b  Note: * number of samples = 5000000, coefficient of variation of probability of failure = 36.55% ** number of samples = 5000000, coefficient of variation of probability of failure = 18.62%  For performance functions G 4 and G 5 , as the reliability indices are very high, even Importance Sampling simulation with sample size 5,000,000 yielded poor reliability estimates.  156  CHAPTER 6 SEISMIC RELAIBILrTY ANALYSES: CASE STUDIES  The sensitivity of reliability index to the mean of B was investigated (G, =0.2-A) and r  plotted in Figure 6.9, in which the coefficient of variation of B was assumed as 0.01 of its r  mean.  
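The sampling variability noted for G4 and G5 can be tracked directly from the Importance Sampling weights. A minimal sketch is given below, assuming the sampling density is a unit-variance normal centred at an estimated design point in standard normal space; `g` is a placeholder for the performance function expressed on standard-normal inputs (vectorized over rows).

```python
import numpy as np
from scipy.stats import norm

def importance_sampling(g, u_star, n=1_000_000, seed=0):
    """Importance Sampling about a design point u_star in standard normal space.
    Each sample is weighted by phi(u)/h(u), where h is the shifted sampling density.
    Returns the failure probability, its coefficient of variation (the quantity
    reported in the notes to Table 6.28), and the corresponding reliability index."""
    rng = np.random.default_rng(seed)
    u_star = np.asarray(u_star, dtype=float)
    u = rng.normal(size=(n, u_star.size)) + u_star
    log_w = -0.5 * np.sum(u**2, axis=1) + 0.5 * np.sum((u - u_star)**2, axis=1)
    w = np.exp(log_w) * (g(u) <= 0.0)                 # weighted failure indicator
    pf = w.mean()
    cov_pf = w.std(ddof=1) / (np.sqrt(n) * pf) if pf > 0.0 else np.inf
    return pf, cov_pf, -norm.ppf(pf)
```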
3  .j  ,  ,  ,  ,  1  500  600  700  800  900  1000  Isolator mean width (mm)  Figure 6.9 Variation of reliability index with isolator width mean B  r  To achieve the same level of reliability as the non-isolated case, the displacement limit was set to 1/250 of the height, and the assumed statistics of other random variables were modified as given in Table 6.29. The results of reliability analysis are presented in Table 6.30.  Table 6.29 Case study 3: Random variable probability distributions and statistics Standard deviation Distribution Mean Random variable 311.5 6230.0 Normal M (KNm) u  V  u  M  v  (KN)  Normal  1668.75  83.4375  Mo  Normal Normal  1.5 12000.0  0.15 600.0  (KNm)  157  C H A P T E R 6 SEISMIC R E L A I B I L I T Y ANALYSES: C A S E STUDIES  Table 6.30 Case study 3: ]Reliability index for ultimate limit state with isolation MCS Performance function IS G, =0.032-A  2.505  2.569  G =M -M  2.184  2.410  G,=K-V  1.734  2.530  G =\i -\i  2.091  2.124  4.754  Not done  2  U  C  e  4  0  c  G =M -M 5  y  b  The above calculations show that, for ultimate limit state, seismic isolation can significantly improve seismic performance of the bridge bent by reducing the inertial force transmitted to the pierfromthe deck. In both cases, the bending capacity of the cap beam is far greater than the seismic demand. In comparison of Table 6.27 with Table 6.30, it can be seen that, to achieve the same level of reliability, the capacities in the isolated bridge can be reduced to different extents for different performance criteria. As addressed before, the reliability index decreases with increase of isolator mean width, which has been explained previously. This example is similar to the bridge design in Dicleli (2002) where hybrid seismic isolation was used.  6.6 Case Study 4 : A Wood Shear Wall 6.6.1 Description of the structure  Wood shear walls are typically used for residential construction in North America (Figure 6.10). It is composed offramingmembers and sheathing panels that are connected with the frame members by means of nails or screws. In this case study, the wood shear wall under investigation has a height of 2.4 m and a width of 2.4 m, with 12 mm thick Oriented Strand  158  CHAPTER 6 SEISMIC RELAIBILrTY ANALYSES: CASE STUDIES  Board (OSB) sheathing panels on one side and vertical elements (studs) spacing of 400 mm. The sheathing panels are connected with theframeusing common 50 mm long nails.  .Framing Member _Sheathing Panel  A  -Fastener Figure 6.10 Wood shear wall construction  6.6.2  Random variables  The response of a wood shear wall during an earthquake depends on several factors: (1) the characteristics of the ground shaking, such as the peak ground acceleration, duration and frequency content; (2) the mass carried by the wall; (3) the nail and its interaction with the wood media; (4) the nail spacing around the periphery of the wall and in the interior of the wall. In this case study, four variables were selected as the input variables, namely, the peak ground acceleration A , the mass on the wall M, the nail spacing along the perimeter e g  1  and the nail spacing in the interior e , with the probability distributions and statistics given 2  in Table 6.31.  
159  CHAPTER 6 SEISMIC RELAJBILITY ANALYSES: CASE STUDIES  Table 6.31 Case study 4: Random variable probability distributions and statistics Distribution Mean Standard deviation Random variable 0.0050 0.050 Normal e (m) t  e (m) M (KN.sec^/m)  Normal  0.120  0.0120  Normal  6.0  0.6  A (m/sec )  Lognormal  0.927  0.556  2  2  g  The earthquake considered was that of Landers, Joshua Tree Station, 1992, with its peak acceleration adjusted according to the statistics of Table 6.31. The distribution of A is g  consistent with a design acceleration of 0.25g at a return period of 475 years. The earthquake was assumed to occur, on average, once every 10 years. The coefficient of variation of A  g  was assumed as 0.6.  6.6.3  Performance evaluation  Two responses were selected to evaluate the structural performance: the drift of the top of the wall A and the nail tearing force V. 131 combinations of the input variables were generated and the structural responses were calculated by a software package DAP3D developed at the University of British Columbia for 3-dimensional analysis of wood frame structures. This software can perform nonlinear dynamic analysis of an arbitrary wood frame structure, taking into account the hysteresis behavior of the nails in the wood medium. Appendix D shows just the first 10 combinations and the corresponding responses. Based on the response databases, two neural networks were developed for the responses. Of the 131 examples, 104 were used for training and 27 were used for testing. The hidden neuron numbers and network RMSREs for the two responses are given in Table 32, with the relative error statistics for every response shown in Table 6.33.  160  CHAPTER 6 SEISMIC RELAIBILrTY ANALYSES: CASE STUDIES Table 6.32 Case study 4: Neuron number and network RMSREs Neuron number Training Testing Response 0.011 0.029 13 A 6  V  0.045  0.045  Table 6.33 Case study 4: Neural network training relative error statistics Mean Standard deviation Response e(A)  0.0034  0.0163  e(V)  0.0005  0.0453  The performance criteria were embodied by the following performance functions, G =H/200-A(e ,e ,M,A )  (6.37a)  G =V -V(e„e M,A )  (6.37b)  1  2  1  0  2  2  g  g  where V was the nail force capacity in terms of sheathing edge tearing, which, from test, 0  was assumed normal with a mean of 1.05 KN and a standard deviation of 0.105 KN.  Reliability analysis was conducted with Importance Sampling and Monte Carlo simulation, with the responses estimated by neural networks. The results are presented in Table 6.34.  Table 6.34 Case study 4: Reliabiity indices for wood shear wall Performance function IS MCS Gj  =H/200-A(e e ,M,A ) ]t  2  g  G =V -V(e ,e ,M,A ) 2  0  1  2  g  2.663  2.778  2.524  2.843  The effects of the e, and e  on reliability index associated with performance function  Gj = H /200-A(e e ,M,A )  were investigated by varying e or e independently while  2  Jt  2  g  t  2  keep the distributions of other variables unchanged, as shown in Figure 6.11 and Figure 6.12.  
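These reliability results and parametric studies all rely on the fitted response surrogates. A sketch of how such a network can be trained and its root-mean-square relative error (RMSRE) reported is given below; scikit-learn's MLPRegressor and the listed hyper-parameters are stand-ins for the thesis's own network code, and only the 104/27 split and the 13 hidden neurons for the drift response are taken from the tables above.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

def fit_response_surrogate(X, y, hidden=13, seed=0):
    """Fit a one-hidden-layer feed-forward network to the wall response database
    (X columns: e1, e2, M, Ag; y: e.g. top-of-wall drift) and report the RMSRE
    plus the relative-error mean and standard deviation on both data sets."""
    Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=27, random_state=seed)
    net = MLPRegressor(hidden_layer_sizes=(hidden,), activation='logistic',
                       solver='lbfgs', max_iter=5000, random_state=seed)
    net.fit(Xtr, ytr)
    def stats(X_, y_):
        rel = (net.predict(X_) - y_) / y_
        return np.sqrt(np.mean(rel**2)), rel.mean(), rel.std(ddof=1)
    return net, stats(Xtr, ytr), stats(Xte, yte)
```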
161  C H A P T E R 6 SEISMIC RELAIBILITY A N A L Y S E S : C A S E STUDIES  0  c  1 £  S  3 2.9 2.8 2.7 2.6  I  2.5  01 2.5  = J2  a>  0.015  0.03  0.045  0.06  0.075  e1 mean (m)  4 37 3.1 2.8 0.03  0.06  0.09  0.12  0.15  e2 mean (m)  Figure 6.12 Variation of reliability index with respect to e  Figure 6.11 Variation of reliability index with respect to e,  2  It can be observed from the above figures that as e, changed from 0.070 m to 0.010m (a reduction of 85.7%), the reliability index increased from 2.578 to 2.814 (a growth of 9%); whereas the reliability index increased from 2.647 to 3.511 (a growth of 32.6%) as e changedfrom0.125m to 0.025 m (a reduction of 80%). It seems that it is more effective to 2  improve reliability by decreasing the nail spacing in the interior of the wood shear wall.  6.7 Case Study 5: A n Instrumented Structure for Earthquake Response Measurement 6.7.1 Description of the structure  This example structure is an actual building, a Holiday Inn located in the city of Van Nuys, California. It is a seven-story reinforced concreteframe-slabstructure with a height of about 20 m, which has 8 bays in the longitudinal direction and 3 bays in the transverse direction. The typical floor plan is about 19m by 46 m, as shown in Figure 6.13. It was designed and built in the 1960s, and since has experienced three earthquakes, ie., 1971 San Fernando, 1987 Whittier and 1994 Northridge events. Instruments operated by the California Strong Motion Instrument Program (CSMTP) recorded the structural responses during these earthquakes. For  162  CHAPTER 6 SEISMIC RELAffiJLITY ANALYSES: CASE STUDIES details of the related information, readers are referred to Rahmatian (1997). The aim of this study is to investigate the influences of different components of ground motion on structural performance, and compare the seismic performance of the structure before and after a seismic retrofit. The retrofit strategy suggested here are steel cross brace dampers along longitudinal axes A and D, between transverse axes 4 and 6; and along transverse axes 1 and 9, between longitudinal axes B and C.  CP  m  ©  ®  ©  0  ®  B&ajK'atan]B m - 4 5 . 7 2 PI  0 4G> 216 nfTi Earerate slab (tjp.)  450 mm'squai* interior ccrcratp, mlinin ftp.)  -@  Exterior sp^ncrai. bBamatperipbeiy  concrete d a b (Vp J  3SBx5Dflrrnv  lBKtaior  :  concrete  CDlJFmrlfyp.)  Figure 6.13 A typical floor plan of the Holiday Inn (Ventura et al, 2002)  6.7.2 Ground motions  It was assumed that the building was subjected to the same ground motions recorded at the ground floor level due to the Northridge earthquake (CSMTP record channel 1, 13, 15, 16).  163  CHAPTER 6 SEISMIC RELAIBILrTY ANALYSES: CASE STUDIES  The ground motions had four components, in longitudinal direction, transverse direction, vertical direction and a rotational component in the horizontal plane. The peak values for the longitudinal, transverse, vertical and rotational components were, respectively, 444.5 cm/sec , 408.9 cm/sec , 295.2 cm/sec and 0.0954 rad/sec , sampling interval being 0.02 sec. 2  2  2  2  For application purpose, the peak values were scaled to 1.0 or -1.0, and they were plotted in Figure 6.14.  
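A sketch of that normalization step, assuming the records are available as simple arrays, is given below; the peak-acceleration random variable then scales this unit-peak shape in the dynamic analyses.

```python
import numpy as np

def normalize_record(acc):
    """Scale a recorded accelerogram so that its peak absolute value is 1.0;
    the peak ground acceleration random variable multiplies this shape later."""
    acc = np.asarray(acc, dtype=float)
    return acc / np.max(np.abs(acc))
```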
1 c 0.5 o  E «J  o  0  f  If)™  20  30  40  50  Time (sec) (a) Longitudinal accelerogram  -1  Time (sec) (b) Transverse accelerogram  Figure 6.14 Holiday Inn earthquake ground motions, Northridge 1994  164  6  CHAPTER 6 SEISMIC RELAIBILrTY ANALYSES: CASE STUDIES  -1 J  Time (sec) (c) Vertical accelerogram  Time(sec) (d) Rotational accelerogram Figure 6.14 Holiday Inn earthquake ground motions, Northridge 1994 (continued)  (1) The longitudinal and transverse components  The longitudinal and transverse components are supposed to have the same peak design value. Assuming that occurrence of the earthquakes can be modeled by a Poisson process with an arrival rate of v = 0.05/year, and that the design earthquake a with a return period d  of 475 years is 4.400 m/sec , 2  P (a>a ) a  d  = J.0-exp(-vP (a>a )) e  d  = 1/475  '  and from which the probability of exceeding the design acceleration during an event is,  165  (6.1)  CHAPTER 6 SEISMIC RELAIBILrTY ANALYSES: CASE STUDIES  .  P„(a>a ) d  =  2.10748234e-3 „ . * „ = 0.042149646 n  n  j  (6.2)  n  The corresponding Normal variate for the event is B = 1.726 e  Assuming that the peak accelerations have a Lognormal distribution with coefficient of variation 0.6, then = ^ri(l + V ) = 0.6, or V = 0.658  (6.3)  2  a  a = . °  a  exp(B Jln(l + V )) = 4.400 m/sec 2  d  (6.38)  2  e  4.400Jl + V  5.267756398 , . —= = 1.870 m/sec exp(Bjln(l + V )) exp(1.726* 0.6) 2  a=  2  (6.39)  v  2  a  a = aV = 1.870 x 0.658 = 1.230 m/sec  (6.40)  2  a  a  Thus, the postulated peak horizontal ground acceleration during events has a Lognormal distribution with a mean 1.870 m/sec and a standard deviation 1.230 m/sec . 2  2  (2) The vertical component  The vertical component is supposed to have a peak design value of 2/3 of the horizontal peak value. Again, assume that occurrence of the earthquakes can be modeled by a Poisson process with arrival rate of v = 0.05/year, and that the design earthquake a with a return d  period of 475 years is 2.930 m/sec . As before, 2  Pa(<* > J  = 1.0-exp(-vP (a >a )) = 1/475  (6.1)  , . 2.10748234e-3 PJa>a,) = = 0.042149646 0.05  (6.2)  a  or  e  d  n  e  d  166  CHAPTER 6 SEISMIC RELAffilLITY ANALYSES: CASE STUDIES The corresponding Normal variate for the event is B = J. 726 e  Assuming again that the peak acceleration has a Lognormal distribution with coefficient of variation 0.6, then ^ina = JWl + Va) = 0.6, or V = 0.658  (6.3)  a  a = . °  exp(B Jln(l + V )) = 2.930 m/sec 2  d  (6.41)  2  e  2.930^1 + V  3.507846874 i —= = 1.245 m/sec exp(P Jln(l + V )) exp(1.726* 0.6) 2  a  a=  (6.42)  v  2  e  a = aV = 1.245 x 0.658 = 0.819 m/sec  (6.43)  2  a  a  Thus, the postulated peak vertical ground acceleration during an event has a Lognormal distribution with a mean 1.245 m/sec and a standard deviation 0.819 m/sec . 2  2  (3) The rotational component Following the same procedures and assuming the design peak rotational acceleration of 0.1 rad/sec , the postulated peak rotational ground acceleration during an event has a Lognormal 2  distribution with a mean 0.0425 rad/sec and a standard deviation 0.0280 rad/sec . 2  2  6.7.3 Random variables  A structural analysis model has been calibrated by Ventura et al (2002), so only the uncertainties associated with ground motions are considered. 
The peak values of the four ground motion components, A^.A^.A^A^,  were selected as the random variables for the  structure before seismic retrofit; and an additional random variable, the hysteretic damper  167  CHAPTER 6 SEISMIC RELAIBILrTY ANALYSES: CASE STUDIES sectional area, A , was chosen for the structure after seismic retrofit. The variable bounds d  before and after retrofit are given in Table 6.35 and 6.36. The distributions and statistics of the random variables were listed in Table 6.37.  Table 6.35 Case study 5: Input variable bounds (before retrofit) Upper bound Lower bound Random variable 4.900 0.100 (m/sec ) 2  A  m  (m/sec )  0.100  4.900  A  x z  (m/sec )  0.100  4.900  (rad/sec )  0.005  0.120  A  2  2  g r  Table 6.36 Case study 5: Input variable bounds (after retrofit] Lower bound Upper bound Random variable 9.000 0.100 A ^ (m/sec ) 2  A ^ (m/sec )  0.100  9.000  A „ (m/sec )  0.100  9.000  A, (rad/sec')  0.002  0.200  (mm )  1500  15000  2  2  2  A  d  Table 6.37 Case study 5: Input variable probability distributions and statistics Standard deviation Mean Distribution Random variable 1.870 1.230 Lognormal A ^ (m/sec ) 1.230 1.870 Lognormal Agy (m/sec ) 2  2  A A  (m/sec )  Lognormal  1.245  0.819  (rad/sec )  Lognormal  0.0425  0.0280  Normal  9000  90  2  x z  2  g r  (mm ) 2  A  d  6.7.4 Performance evaluation  6.7.4.1 The structure before seismic retrofit  (1) Reliability analysis  168  CHAPTER 6 SEISMIC RELAIBILITY ANALYSES: CASE STUDIES  Eighty combinations of the four random variable values were generated using Optimized Latin Hypercube Design, and for each combination, CANNY was run to calculate the structural responses: roof drifts, story drift ratios, base moments and base shear forces. Only the roof drifts were used in reliability assessment. The time step integration was carried out using Newmark's method, and viscous damping (5%) was assumed proportional to mass and instantaneous stiffness.  Appendix E shows just the first 10 combinations and the  corresponding responses. Finally, for each response, a neural network was trained and used for reliability assessment. Of the 80 examples, 64 were used for training and 16 were used for testing. The hidden neuron number and network RMSREs for the two responses are given in Table 38, with the relative error statistics for every response shown in Table 6.39.  Table 6.38 Case study 5: Neural network training RMSREs (before retrofit) Testing Training Neuron number Response 0.023 0.022 4 A, 0.030  6  ,  D  0.035  Table 6.39 Case study 5: Neural network training error statistics (before retrofit) Standard deviation Mean Response 0.0222 -0.0001 x  D  0.0020  0.0315  where D denotes roof longitudinal displacement; x  D  y  denotes roof transverse displacement;  The roof displacements were evaluated against the serviceability limit state and the life safety limit state. For the serviceability limit state with drift limit of 1/200 of its height, the following performance functions were used,  169  CHAPTER 6 SEISMIC RELAffilUTY ANALYSES: CASE STUDIES  G, =  0.100-D (A ,A ,A ,A )  (6.44a)  G =  0.100-D/A^.Agy.A^.A^)  (6.44b)  x  2  gx  gy  gz  gr  and for the limit state of life safety with drift limit of 1/100 of building height, the performance functions were,  G^O.IOO-DJA^A^A^) G  (6.45a)  =0.200-D (A ,A ,A ,A )  2  y  gx  gy  g2  (6.45b)  gr  The reliability analysis was conducted by Importance Sampling and Monte Carlo Simulation, with structural responses estimated using neural networks and Local Interpolation. 
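The eighty input combinations underlying these response databases can be illustrated with a plain Latin Hypercube design over the bounds of Table 6.35; the reliability results themselves follow below. scipy's qmc module is used purely for illustration, and the inter-point distance optimization applied in the thesis is omitted from this sketch.

```python
from scipy.stats import qmc

def lhs_design(n, lower, upper, seed=0):
    """Plain Latin Hypercube design of n points scaled to the given variable bounds."""
    sample = qmc.LatinHypercube(d=len(lower), seed=seed).random(n)
    return qmc.scale(sample, lower, upper)

# 80 combinations of the four ground-motion peaks before retrofit (Table 6.35)
X = lhs_design(80, lower=[0.1, 0.1, 0.1, 0.005], upper=[4.9, 4.9, 4.9, 0.12])
```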
The results were given in Table 6.40 for serviceability limit state and Table 6.41 for life safety limit state.  Table 6.40 Case study 5: Serviceability reliability indices (before retrofit) IS (10 ) 4  Performance function  G^O.IOO-DJA^A^A^AJ G =0.100-D/A ,A ,A ,A ) 2  gx  gy  gz  gr  CPU time (sec)  MCS (10 ) 5  NN  LI  NN  LI  0.385  0.282  0.414  0.365  -0.038  0.098  -0.066  0.122  10  15  25  70  Table 6.41 Case study 5: Life safety re iability indices (before retrofit) IS (10 ) 5  Performance function  G^OJOO-DJA^A^A^) G  2  =0.200-D (A ,A ,A ,A ) y  gx  gy  gz  gr  CPU time (sec)  170  MCS (10 ) 6  NN  LI  NN  LI  1.644  1.577  1.821  1.815  0.914  1.079  1.363  1.313  50  135  195  690  CHAPTER 6 SEISMIC RELAIBILITY ANALYSES: CASE STUDIES  From the tables, it seems that for the serviceability limit state, the reliability in the longitudinal direction is higher than that of the transverse direction. The low reliability indices show that the structure is relatively flexible, as it is a column flat slab system with low lateral stiffness. The reliability prediction based on Neural Networks is a somewhat greater than thatfromLocal Interpolation, with the latter taking more time. As it takes 2525 seconds to run CANNY once, reliability assessment using Monte Carlo simulation by integrating RELAN and CANNY would take 2525x10 seconds (2922 days) on a Pentium III 5  500 MHz PC. For the life safety limit state, again, the reliability index in the longitudinal direction is higher than that of the transverse direction. The reason might be that the transverse direction has a larger radius of rotation, which is susceptible to strong rotational ground motion. Neural Networks takes far less time than Local Interpolation in reliability calculation. A direct Monte Carlo simulation using CANNY dynamic analysis would take 2525x10 seconds (29224 days) on a Pentium III 500 MHz PC. Neural Networks exhibit 6  robustness compared to Local Interpolation. Both are utilized with Importance Sampling, after the estimation of a "design point", as described. When the reliability is high, the design point could be far away from the mean value point, which renders Local Interpolation less effective as there might be fewer data around the design point.  (2) Influences of the different components of ground motion  The influence of each variable on structural performance was studied by varying its mean and keeping its coefficient of variation constant 0.66, while the statistics of other variables were kept unchanged.  The results were shown in Figure 6.15, where the solid line  corresponds to performance function G, = 0.200-D and the dashed line is associated with x  171  CHAPTER 6 SEISMIC RELAIBILrTY ANALYSES: CASE STUDIES  G = 0.200 -D 2  y  (a) Variation of reliability index with respect to A  (b) Variation of reliability index with respect to A  gy  (c) Variation of reliability index with respect to A  111  CHAPTER 6 SEISMIC RELAffilLITY ANALYSES: CASE STUDIES  2  -i—  1.8  —  x |  t  16 —  1 1.4 co  ^  01  1.2 - — 1 -— 0.035  0.04  0.045  0.05  Peak rotational acceleration  (d) Variation of reliability index with respect to A  gr  Figure 6.15 Variation of reliability index with respect to ground motion components  It can be seen from above that peak longitudinal acceleration has a great influence on longitudinal response, while peak transverse acceleration has a significant effect on transverse response. 
The peak vertical and rotational accelerations have no obvious impact on longitudinal response, though they have a slight effect on transverse response.  6.7.4.2 The structure after seismic retrofit  (1) Reliability analysis After the Northridge earthquake, the Holiday Inn suffered different levels of damage, and was repaired afterwards. Several options were available for seismic retrofit, such as adding shear walls, installing steel frames, upgrading with base isolation, providing energy dissipation devices, etc. In this study, steel cross brace type dampers with hysteretic damping (Huang et al, 2002) were used as the retrofit scheme, as they added little weight to the structure and were simple to erect. The yield strength was assumed 350 MPa.  173  CHAPTER 6 SEISMIC RELAIBILrTY ANALYSES: CASE STUDIES  96 combinations of the five input random variables were generated by Optimized Latin Hypercube Design (Appendix E), and CANNY was run for each case. Finally, for each response, a neural network model was built for reliability analysis. Of the 96 examples, 76 were used for training and 20 were used for testing. The hidden neuron number and network training RMSREs for the two responses are given in Table 42, with the relative error statistics for every output variable shown in Table 6.43.  Table 6.42 Case study 5: Neuron number and network RW[SREs (after retro Testing Training Neuron number Response 0.026 0.025 3 D x  0.036  4  y  D  0.036  Table 6.43 Case study 5: Neural network training error statistics (after retrofit) Standard deviation Mean Response 0.0256 0.0002 D x  0.0001  D  0.0378  v  Reliability analysis was carried out using Monte Carlo simulation, and the variation of reliability index with respect to damper mean area were plotted in Figure 6.16 and Figure 6.17, where the standard deviation of the area was assumed 1% of its mean. The performance functions for serviceability limit state were as follows, Gj =0.100-D (A ,A ,A ,A ,A )  (6.46a)  G = 0.100-D/A^.A^.Av.A^.AJ  (6.46b)  x  gx  gy  gz  gr  d  2  and for life safety limit state as, G, = 0.200-D (A ,A ,A ,A ,A ) x  gx  gy  gz  gr  (6.47a)  d  G = 0.200 - D ( A ,A ,A ,A ,A 2  y  gx  gy  gz  gr  d  )  174  (6.47b)  CHAPTER 6 SEISMIC RELAffilUTY ANALYSES: CASE STUDIES  x  2  1 1.7 £  i *  <L>  1.4 1.1 0.8  OL 0.5 1000 3000 5000 7000 9000 1100 1300 1500 0 0 0  Damper mean area  Figure 6.16 Variation of reliability index with respect to damper mean area A  d  (Serviceability limit state)  In the above, the solid line corresponds to performance function G, = 0.100-D  and the  X  dashed line is associated with G  2  =0.100-D . y  x 3 •o 2.6 |.2.2 5 1.8 1.4 * 1  I  —i r -i r 1000 3000 5000 7000 9000 1100 1300 1500 0 0 0  Damper mean area  Figure 6.17 Variation of reliability index with respect to damper mean area A  d  (Life safety limit state) In the above, the solid line corresponds to performance function G, =0.200- D and the x  dashed line is associated with G, = 0.200 -D . v  It is can be seen that as the mean damper sectional area A increases, the reliability in the d  longitudinal direction grows steadily when A is greater than 7000 mm ; though it decreases 2  d  175  CHAPTER 6 SEISMIC RELAIBILrTY ANALYSES: CASE STUDIES  slightly when A is less than 7000 mm ; on the contrary, the reliability increases with A 2  d  when A  d  is less than 9000 mm , it decreases afterwards. Overall, the reliability of the 2  d  retrofitted structure is improved compared to that of the original structure. 
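The sweeps plotted in Figures 6.16 and 6.17 can be reproduced, once the retrofit surrogates are available, with a Monte Carlo loop of the following form; `surrogate` is a placeholder for the trained roof-drift network, the ground-motion statistics follow Table 6.37, and the damper standard deviation is held at 1% of its mean.

```python
import numpy as np
from scipy.stats import norm

def beta_vs_damper_area(surrogate, areas, limit=0.200, n=200_000, seed=0):
    """Monte Carlo reliability index for G = limit - D_x at each candidate mean
    damper sectional area A_d (life safety limit state shown by default)."""
    rng = np.random.default_rng(seed)
    def lognormal(mean, std, size):
        s = np.sqrt(np.log(1.0 + (std / mean)**2))
        return rng.lognormal(np.log(mean) - 0.5 * s**2, s, size)
    betas = []
    for a in areas:
        X = np.column_stack([lognormal(1.870, 1.230, n),     # A_gx
                             lognormal(1.870, 1.230, n),     # A_gy
                             lognormal(1.245, 0.819, n),     # A_gz
                             lognormal(0.0425, 0.0280, n),   # A_gr
                             rng.normal(a, 0.01 * a, n)])    # A_d
        pf = np.mean(limit - surrogate(X) < 0.0)
        betas.append(-norm.ppf(pf) if pf > 0.0 else np.inf)
    return np.array(betas)
```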
Since the dampers in the two directions have the same cross sectional area throughout the height of the building, the reliability in the transverse direction begins to decrease after some point. This is due to the increasing rigidity, resulting in more inertial forces that compromise the benefit of increasing the damper size. To accomplish a better performance, the damper size should be optimized along the two horizontal directions as well as in the vertical direction.  (2) Influences of the different components of ground motion  The influence of each variable on structural performance was again investigated by varying its mean by -10% to 10% and keeping its coefficient of variation constant 0.66, while the statistics of other variables were kept unchanged. Only the life safety limit state was considered. The results were shown in Figure 6.18, where the solid line corresponds to performance  function  G =0.200-D t  and the  x  dashed  line  is  G = 0.200 -D . 2  y  31  «  •g  1  2.8  •!  26  !5 2.4 co =55 2.2 2 A 1.6  "  i  "  '  i  i  "  "  i  "  1.7 1.8 1.9 2 Peak longitudinal acceleration  (a) Variation of reliability index with respect to A,  176  1  2.1  associated with  CHAPTER 6 SEISMIC RELAIBILITY ANALYSES: CASE STUDIES  (b) Variation of reliability index with respect to A gy 2.5  x  S  2.4 2.3  3  2.2  co  =55 2.1 2  1.1  1.15  1.2  1.25  1.3  1.35  1.4  Peak vertical acceleration  (c) Variation of reliability index with respect to A  (d) Variation of reliability index with respect to A Figure 6.18 Variation of reliability index with respect to ground motion components  177  CHAPTER 6 SEISMIC RELAffilLITY ANALYSES: CASE STUDIES  It is observed that the peak longitudinal acceleration has a significant influence on longitudinal performance, while the peak transverse acceleration affects the transverse performance substantially. The peak vertical acceleration and peak rotational acceleration have minor effects on both the longitudinal and transverse performances.  6.8 Reliability Assessment: Summary and Conclusions In view of the many sources of uncertainties inherent in earthquake resistant design, reliability analyses need to be undertaken in the design process by properly taking into account those uncertainties. Five case studies of structural seismic reliability analyses were presented to demonstrate the applicability and effectiveness of the proposed approach. Two levels of performances, serviceability and ultimate limit state, are generally assessed for the corresponding two levels of earthquakes. The examples proved the near impossibility of performing seismic reliability assessment by means of standard Monte Carlo simulation, using nonlinear dynamic analysis directly. The case studies illustrate, instead, that the reliability assessment can be carried out, quickly and accurately, by using Designed Experiments and Neural Networks trained with databases of responses of the structural system, under probabilistic seismic ground excitation. The results also demonstrate that Neural Networks are more robust and efficient than local regression methods of interpolating responses. Powered by this tool, structural engineers can accomplish the seismic design objectives with explicit reliability, which will assure life safety and mitigate seismic risks by reducing possible damage.  
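As a point of reference for the comparison with local regression drawn above, a minimal inverse-distance-weighted nearest-neighbour interpolator is sketched below. It is a simple stand-in, not necessarily the exact Local Interpolation scheme implemented in this work, and it illustrates why the whole database must be searched and ranked for every query point.

```python
import numpy as np

def local_interpolation(X_db, y_db, x_query, k=8, eps=1e-9):
    """Inverse-distance-weighted k-nearest-neighbour response estimate from the
    database (X_db, y_db); inputs are normalised to [0, 1] before distances."""
    lo, hi = X_db.min(axis=0), X_db.max(axis=0)
    Xn = (X_db - lo) / (hi - lo)
    qn = (np.asarray(x_query, dtype=float) - lo) / (hi - lo)
    d = np.linalg.norm(Xn - qn, axis=1)
    nearest = np.argsort(d)[:k]                  # ranking the closest neighbours
    w = 1.0 / (d[nearest] + eps)
    return np.sum(w * y_db[nearest]) / np.sum(w)
```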
CHAPTER 7 PERFORMANCE-BASED SEISMIC DESIGN APPLICATIONS

7.1 Introduction

Performance-based seismic design requires that, for multiple seismic hazards, multiple performance criteria be satisfied explicitly with specified reliability levels. Because many uncertainties are involved, performance-based design should be carried out in the framework of reliability analysis, taking into consideration the effects of all major uncertainties, in order to achieve the pre-defined design objectives. Among these uncertainties, the earthquake ground motion is the most important and the least well understood. As the input to structural analysis, its appropriate characterization ultimately determines the success of a seismic design. The intricate structural response under ground shaking depends on the inelastic behavior of the structure and its connections, the influence of non-structural elements and building contents, soil-structure interaction, and the assumptions and simplifications of the analytical model. The real response of the structure can only be estimated using stochastic nonlinear dynamic analysis. In this manner, the uncertainties in both structural capacities and seismic demands can be considered, so that seismic design can be undertaken within a transparent and realistic methodology.

In the present state of practice, the uncertainties in structural capacity and seismic demand are generally unaccounted for. The seismic capacity is usually estimated by a nonlinear static (pushover) analysis in which only the fundamental mode is considered, with the response obtained by monotonic incremental lateral loading. The elastic analysis conducted according to the codified procedure does not consider the intrinsic uncertainties, and the computations do not reflect the actual structural behavior. The resulting designs may therefore have an unknown reliability level, one that is likely quite variable across design conditions.

Performance-based design aims at improving this procedure by attempting to meet performance requirements with specified reliabilities. The design is thus formulated as an optimization problem in the context of reliability-based design. Stochastic nonlinear dynamic structural responses are implemented within the reliability analysis when the structure is subjected to a probabilistic earthquake, and the design parameters are sought by minimizing the objective function defined in Equation (5.14). There are two ways of solving this problem: a) the reliability analysis can be conducted using FORM or simulations; or b) a reliability index database can be generated in terms of the design parameters, with a neural network model subsequently built and used for the optimization. Gradient-free algorithms are relied on for the optimization because of the possible highly nonlinear behavior of the structure.

Four applications are presented to illustrate the applicability of the proposed method. They correspond to the same examples discussed in the previous chapter, i.e., a two-story reinforced concrete frame, a tall reinforced concrete building, a bridge bent without or with seismic isolation, and a wood shear wall. Advantage is taken of the response databases and trained neural networks developed beforehand. Multiple seismic hazards are considered, with multiple performance objectives.
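A minimal sketch of the optimization step described above is given below, assuming a quadratic penalty on the target reliability indices in place of the objective of Equation (5.14); each entry of `beta_funcs` may wrap either a FORM/Importance Sampling run or a neural network trained on a reliability-index database.

```python
from scipy.optimize import minimize

def design_for_targets(beta_funcs, beta_targets, x0):
    """Gradient-free (Nelder-Mead) search for design parameters x_d that bring
    the achieved reliability indices beta_i(x_d) to their targets beta_i*."""
    def objective(x):
        return sum((f(x) - t)**2 for f, t in zip(beta_funcs, beta_targets))
    result = minimize(objective, x0, method='Nelder-Mead',
                      options={'xatol': 1e-3, 'fatol': 1e-6, 'maxiter': 2000})
    return result.x, [f(result.x) for f in beta_funcs]
```

Nelder-Mead is used here as a representative gradient-free method, consistent with the remark that gradient-free algorithms are preferred when the structural behavior is highly nonlinear.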
180  CHAPTER 7 PERFORMANCE-BASED SEISMIC DESIGN APPLICATIONS  7.2 A Two-story Reinforced Concrete Frame 7.2.1  Description of the structure  The structure has been discussed in Chapter 6, a two-story reinforced concrete frame. It is subjected to the same probabilistic earthquake ground shaking, and the mean weight on the roof Wj and the mean weight on the floor W are the design parameters to be calculated, 2  allowing for each one a coefficient of variation of 0.05.  7.2.2  Random variables  The steel yield strength f , y  the concrete compressive strength f' , the concrete elastic c  modulus E , the weight on roof W,, and the weight on floor W , were selected as the five c  2  input variables. The response variables were the drift at the roof D, and the drift at the floor D . The probability distributions and statistics of the input variables were as shown in Table 2  7.1.  Table 7.1 Reinforced concrete planeframe:Input variable probability distributions and statistics Standard deviation Mean Distribution Input variable 30.0 400.0 Lognormal / (MPa) v  /JOVlPa) £ (MPa) fF,(KN)  Normal  30.0  4.5  Normal Normal  22500.0  1000.0  ?  0.05^  W (KN)  Normal  ?  0.05W,  c  2  181  CHAPTER 7 PERFORMANCE-BASED SEISMIC DESIGN APPLICATIONS 7.2.3  Performance-based design formulation  For this probabilistic strong ground shaking, the limit state of collapse prevention was evaluated, as indicated by the following two performance functions,  G =0.240-D (f ,f ,E ,W ,W )  (7.1a)  G =0.120-D (f ,f' ,E ,W ,W )  (7.1b)  l  l  2  y  2  c  y  c  c  1  c  2  1  2  In the above, the drift limit was set to 3% of the story heightfromground. D and D were t  2  assumed Lognormally distributed, and estimated by means of neural networks that were developed in the previous chapter. The target reliability indices for the two modes were set to P| =1.800 and p2 = 1.500, respectively.  7.2.4  Results  Two approaches were used for the optimization problem, depending on how 6/X  d  ) was  obtained. (1) Conventional approach  By this approach, the reliability index is calculated each time by using standard method like FORM and Importance Sampling. The design parameters and the achieved reliability indices were found to be,  W,=370.5KN,  W  2  = 426.7KN,  B,  =1.799;  B  =1.500.  2  182  CHAPTER 7 PERFORMANCE-BASED SEISMIC DESIGN APPLICATIONS  (2) Neural network approach  Two reliability index databases were constructed by running RELAN (Foschi et al, 2002) for combinations of W, and W , while keeping the coefficients of variation constant as 0.05. 2  Appendix F shows just the first 10 combinations of the databases. Two neural networks were then developed for the two reliability indices 6, and B . 2  The design parameters and the  achieved reliability indices were obtained by optimization,  W,=367.0KN, Bj =1.800; W =455.8KN, B =1.500. 2  2  It can be seen that the two approaches produced roughly the same answer. Though building reliability index databases involves some more time, it is compensated by savings during the actual optimization. This approach using B neural networks will show its superiority as the number of design parameters increases.  7.3 A Tall Reinforced Concrete Building 7.3.1  Description of the structure  This structure has been discussed in Chapter 6. It is a two-bay, twenty-story reinforced concrete frame, with story height of 4 m and span of 8 m. The beams have a constant cross section 350 mm x 700 mm. 
The column cross sections vary along the height of the building: from stories 1 to 7, BjxH,; from stories 8 to 14, B xH ; from stories 15 to 20, 2  183  2  B xH . 3  3  C H A P T E R 7 P E R F O R M A N C E - B A S E D SEISMIC DESIGN APPLICATIONS  7.3.2  Random variables  As before, fifteen input random variables were considered and, for clarity, they were listed below, (1) peak ground acceleration, A ; g  (2) predominant ground frequency, co ; g  (3) earthquake strong motion duration, T ; d  (4) beam distributed vertical load, q; (5) steel yield strength, f ; y  (6) concrete compression strength for columnsfromstory 1 to 7,  f;  (7) concrete compression strength for columnsfromstory 8 to 14, (8) concrete compression strength for columnsfromstory 15 to 20, (9) concrete compression strength for beams, f ; b  (10) cross section width of columns from story 1 to 7, B, ; (11) cross section depth of columnsfromstory 1 to 7, H,; (12) cross section width of columnsfromstory 8 to 14, B ; 2  (13) cross section depth of columnsfromstory 8 to 14, H ; 2  (14) cross section width of columnsfromstory 15 to 20, B ; 3  (15) cross section depth of columnsfromstory 15 to 20, H ; 3  184  cl  f; c2  f; c3  CHAPTER 7 PERFORMANCE-BASED SEISMIC DESIGN APPLICATIONS  The probability distributions and statistics of the fifteen input random variables were given in Table 7.2, where B,,H B ,H lt  ,B ,H  2  2  3  3  were selected as the design parameters, and the  column dimensions were assumed well-controlled with COV 0.01.  7.3.3  Performance-based design formulation  Six structural responses were studied, namely, the roof displacement D  20<  acceleration A , 20  the 15  story drift ratio 0 ,  th  1}  the 5  th  the roof  story drift ratio 0 , the base 5  overturning moment M and the base shear V. They were assumed to have Lognormal distribution and calculated using the neural networks developed in Chapter 6. For the six ultimate limit states, the following performance functions were considered,  G, = 0.800-D  (7.2a)  20  G = 4.900 -A  (7.2b)  G = 0.0 JO -9  (7.2c)  2  20  3  15  G = 0.010 -6 4  (7-2d)  5  G =M -M  (72e)  G =V -V  (7-2f)  5  6  0  0  For illustration purpose, M was assumed Normally distributed with mean 54000 KNm and g  standard deviation 5400 KNm; and V was assumed Normally distributed with mean 1440 0  KN and standard deviation 144 KN.  The target reliability indices for the six performance functions were specified as, p| = 3.000 ;{3 = 2.500; ft =2.500 ;ft = 2.500 ;p; = 2.500 ;ft = 2.500 2  185  CHAPTER 7 PERFORMANCE-BASED SEISMIC DESIGN APPLICATIONS Table 7.2 Tall building: Input variable probability distributions and statistics Standard deviation Mean Distribution Input variable 191.0 290.0 Lognormal 4, (gal) eo (rad/sec)  Normal  5TC  7C  T (sec) q (KN/m)  Normal  30.0  5.0  Normal  45.0  4.5  / „ (MPa)  Lognormal  400.0  10.0  / . , (MPa)  Lognormal  40.0  1.5  / , , (MPa) L, (MPa) /„ (MPa)  Lognormal  30.0  1.5  Lognormal  20.0  1.5  Lognormal  20.0  1.5  B, (mm) i / , (mm)  Normal  ?  0.0 IB,  Normal  ?  0.01H,  i? (mm) H (mm)  Normal  ?  0.0IB\  Normal  ?  0.01H  5, (mm)  Normal  ?  0.0 IB  Normal  ?  0.01H  g  d  2  2  H (mm) 3  7.3.4  2  3  3  Results  Sixty-four combinations of the six design parameters B H,,B ,H ,B ,H' lt  2  2  3  were created.  3  Six reliability index databases were constructed for the six performance functions using RELAN, maintaining the coefficients of variation of B,, H , t  B, H, 2  2  B, H 3  3  as 0.01.  
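The database-construction loop just described can be sketched as follows, with `reliability_analysis` standing in for a RELAN (FORM/Importance Sampling) run that returns the six reliability indices for one set of column-dimension means.

```python
import numpy as np

def build_beta_database(design_points, reliability_analysis):
    """Assemble a reliability-index database: each row stores the six design
    means (B1, H1, B2, H2, B3, H3) followed by the six computed reliability
    indices, with every dimension's COV held at 0.01 inside the analysis.
    One surrogate per limit state is then trained on the resulting table."""
    rows = [np.concatenate([np.asarray(x, dtype=float), reliability_analysis(x)])
            for x in design_points]
    return np.vstack(rows)
```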
Appendix G shows the first 10 combinations of the reliability index databases. Next, six neural networks were developed for the six reliability indices. Finally, the design parameters and the achieved reliability indices were found by optimization as,  B, = 830mm, H, = 1129mm; B = 536mm, H = 747mm; 2  2  B = 497mm,H = 608mm; 3  3  186  CHAPTER 7 PERFORMANCE-BASED SEISMIC DESIGN APPLICATIONS 8, = 2.952, 8 = 2.525, B = 2.505, B = 2.543, B = 2.923, B = 2.574 2  3  4  5  6  7.4 A Bridge Bent Without or With Seismic Isolation 7.4.1  Description of the structure  This is the same bridge outlined in Chapter 6. It has two round columns connected on top by a cap beam, which is 8 m above the ground. In the case of seismic isolation for the deck, four identical Lead Rubber Bearing isolators are put on the cap beam. The performance of this bridge bent without or with seismic isolation, for two levels of earthquake, is studied.  7.4.2  Bridge bent without seismic isolation  7.4.2.1 Random variables  In this case, five variables were selected as the input variables, namely, peak ground acceleration A , predominant ground frequency co , strong motion duration T , column g  g  d  diameter D, and vertical load on the bearing Q. The probability distributions and statistics of the variables were given Table 7.3, where the values in parenthesis were for ultimate limit state.  Table 7.3 Bridge bent: Input variable probability distributions and statistics (without seismic isolation) Standard deviation Mean Distribution Input variable A (cm/sec )  Lognormal  151.0(325.0)  99.0 (214.0)  co (rad/sec)  Normal  5TC  K  7 ; (sec)  Normal  20 (30)  5  D(mm)  Normal  1800  18  0(KN)  Normal  2400  240  2  g  g  187  CHAPTER 7 PERFORMANCE-BASED SEISMIC DESIGN APPLICATIONS  7.4.2.2 Performance-based design for serviceability limit state  The displacement at the cap beam was evaluated against a limit of 1/200 of its height from ground, based on the following limit state function,  G =  (7.3)  0.04-A(A ,co ,T ,D,Q) g  g  s  where the displacement A was assumed to have a Lognormal distribution and estimated by the neural networks developed in Chapter 6.  The design parameter was the mean of the column diameter, D, assuming the standard deviation as 1% of the mean. The target reliability index was specified as  B' = 2.000.  From  optimization, the value of the design parameter and the achieved reliability index were found to be,  D=  1754mm,  0 =  2.000  7.4.2.3 Performance-based design for ultimate limit state  This limit state was supposed to be associated with the state of collapse prevention. The displacement at the cap beam was evaluated against a limit of 1/40 of its height from ground, as indicated by the following limit state function,  G = 0.200-A(A ,co g  ,T ,D,Q)  g  s  (7.4)  in which the displacement A was assumed to have a Lognormal distribution and estimated by neural networks.  188  CHAPTER 7 PERFORMANCE-BASED SEISMIC DESIGN APPLICATIONS The design parameter was again the mean of the column diameter, D, with a standard deviation fixed at 1% of its mean. The target reliability index was specified as P' =2.500. 
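Because a single design parameter is involved, the search reduces to one-dimensional root finding, as sketched below; `beta_of_mean` is a placeholder for a reliability analysis in which the drift is predicted by the trained network and the diameter's standard deviation is held at 1% of its mean. The resulting value follows.

```python
from scipy.optimize import brentq

def solve_single_parameter(beta_of_mean, target, lo, hi):
    """Find the mean column diameter whose reliability index equals the target
    (e.g. 2.000 for serviceability, 2.500 for collapse prevention); assumes
    beta(lo) - target and beta(hi) - target have opposite signs."""
    return brentq(lambda d: beta_of_mean(d) - target, lo, hi, xtol=1.0)

# e.g. D_mean = solve_single_parameter(beta_ultimate, 2.500, 1600.0, 2200.0)
```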
From optimization, the value of the design parameter and the achieved reliability index were found to be,  D = 1953mm, B = 2.500  7.4.3  Bridge bent with seismic isolation  7.4.3.1 Random variables In this case, six variables were selected as the input variables, namely, peak ground acceleration A , predominant ground frequency co , strong motion duration T , column g  d  diameter D, and vertical load on isolators Q, and width of the isolators B . The probability r  distributions and statistics of the variables are given in Table 7.4, in which the values in parenthesis correspond to ultimate limit state.  Table 7.4 Bridge bent: Input variable probability distributions and statistics (with seismic isolation]) Standard deviation Mean Distribution Input variable 99.0 (214.0) 151.0(325.0) Lognormal A (cm/sec ) 2  g  co (rad/sec)  Normal  5TC  71  T (sec) £>(mm)  Normal  20 (30)  5  Normal  1800.0(?)  G(KN)  Normal  5 (mm)  Normal  2400 ?  18.0(0.01 D) 240  g  d  r  189  0.0IB;  CHAPTER 7 PERFORMANCE-BASED SEISMIC DESIGN APPLICATIONS 7.4.3.2 Performance-based design for serviceability limit state  The displacement at the cap beam was evaluated against a limit of 1/800 of its height from ground, as indicated by the following limit state functions,  (7.5)  G = 0.010- A(A ,o) ,T ,D, Q, B ) g  g  s  r  where the displacement A was assumed to have a Lognormal distribution and estimated by neural networks.  The design parameter was the mean of the isolator width, B , r  assuming the standard  deviation to be 1% of the mean. The target reliability index was specified as B' = 2.000. By optimization, the value of the design parameter and the achieved reliability index were found to be,  B= r  795mm, B = 2.000  7.4.3.3 Performance-based design for ultimate limit state  The displacement at the cap beam was evaluated against a limit of 1/250 of its height from ground, and the column base moment was assessed against the yield capacity M , in terms y  of the following limit state functions,  G, =  0.032-A(A ,a) ,T ,D,Q,B ) g  g  s  r  G =M -M (AG,G> ,T ,D,Q,B ) 2  y  c  g  s  (7.6a) (7.6b)  r  190  CHAPTER 7 PERFORMANCE-BASED SEISMIC DESIGN APPLICATIONS where the displacement A and M  c  were assumed to have a Lognormal distribution and  estimated by neural networks, while M  y  was the yield moment of column which was  assumed to have a Normal distribution with mean 5800 KNm and standard deviation 290 KNm.  The design parameters were the means of the column diameter D and the isolator width B , r  assuming the standard deviations to be 1% of the means. The target reliability indices were specified as  B\ = B' =  2.500.  2  Two approaches were used to solve the problem, ie, one was the conventional approach and the other was based on reliability index database and neural networks.  (1) The conventional approach  In this approach, the reliability index is calculated by standard methods such as FORM or Importance Sampling. From optimization, the values of the design parameters and the achieved reliability indices were found to be,  D  = 1765mm,  B = r  748mm,  B  t  B  2  =2.507;  =2.502  (2) Neural network approach  Two reliability index databases were constructed by running RELAN for combinations of the design parameters, while keeping the coefficients of variation constant; and Appendix H shows thefirst10 combinations of the databases.  191  CHAPTER 7 PERFORMANCE-BASED SEISMIC DESIGN APPLICATIONS  Two neural networks were respectively built for the two reliability indices: B, and B . 
By 2  optimization of the same problem, the design parameters and the corresponding achieved reliability indices were found to be,  D= 1758mm, /?, =2.500; B = 779mm, B =2.500 r  2  The results are very close to the results obtained with the conventional approach. Though construction of reliability index databases takes somewhat more time, the calculation of design parameters is very fast.  7.5 A Wood Shear Wall 7.5.1  Description of the structure  This wood shear wall structure has been introduced in Chapter 6. It is a 2.4 m high and 2.4 m wide wall, with 12 mm thick OSB sheathing panels attached to the frame using common 50 mm long nails. The spacing of the vertical members is 400 mm. The structure is subjected to the same earthquake as before (Joshua Tree Station earthquake with amplitude adjusted).  7.5.2  Random variables  The neural networks developed in the preceding chapter were used. The four input variables were the nail spacing along the perimeter e,, the nail spacing in the interior e , the mass 2  carried by the wall M, and the peak ground acceleration A . Two responses were the wall g  192  CHAPTER 7 PERFORMANCE-BASED SEISMIC DESIGN APPLICATIONS  drift A and the nail edge tearing force V. The probability distributions and statistics of the input variables were presented in Table 7.5.  Table 7.5 Wood shear wall: Input variable probability distributions and statistics Mean Standard deviation Distribution Random variable ? Normal O.le, e, (m) e () M (KN.secVm)  Normal  ?  Normal  6.0  0.1e 0.6  A (m/sec )  Lognormal  0.927  0.556  m  2  2  2  g  7.5.3  Performance-based design  The design parameters were the mean of e,, e,, and the mean of e , e , so that the target 2  2  reliability indices were met as B\ = B = 2.5 for the following performance criteria, 2  G =A -A(e ,e M,A ) g  (7.7a)  G =V -V(e„e M,A )  (7.7b)  1  0  2  0  1  2  2  g  where Ag denotes the drift limit which was set to 1/200 of the wall height, 12.0 mm; V denotes the nail edge tearing capacity that was assumed Normal with a mean of 0  1.05 KN and a standard deviation of 0.105 KN;  The optimization was conducted using the conventional approach, in which FORM and Importance Sampling were utilized for reliability calculation. The design parameters and the corresponding achieved reliability indices were found to be,  e = 0.058m, B =2.570; l  t  e = 0.128m, B = 2.570 2  2  193  CHAPTER 7 PERFORMANCE-BASED SEISMIC DESIGN APPLICATIONS  7.6  Summary  Four performance-based seismic design applications have been presented illustrating the applicability and efficiency of the proposed method for performance-based seismic design. The proposed method allows for uncertainties in earthquake demands, and seismic design can be carried out in the framework of reliability-based design, so that multiple performance requirements can be satisfied with specified reliability levels.  194  CHAPTER 8 SUMMARY AND FUTURE WORK  CHAPTER 8  SUMMARY AND FUTURE WORK  8.1 Summary In earthquake resistant design, multiple performance criteria must be satisfied with prespecified reliabilities, and structural responses are evaluated based on probable earthquakes that may occur during the service life of the structure. The many uncertainties inherent in the design process are not explicitly coped with in current codified recommendations. Therefore, the achieved reliability level is not known and likely non-uniform across different design situations.  
This thesis has presented a methodology for 1) reliability assessment and 2) performancebased design. It is based on computer simulations utilizing neural networks for the evaluation of structural responses.  As the primary uncertainty in seismic design, earthquake ground motion needs to be properly characterized. The ground motion model should take account of all the major factors that are of engineering significance. To this end, three parameters are identified to characterize the ground accelerogram, namely, the peak ground acceleration, the frequency content and the duration. Non-stationary ground acceleration time history is synthesized by multiplying a modulation function with a stationary stochastic process that is generated on the basis of a power spectrum. To prevent spurious displacement and velocity shift, a baseline correction method has been devised to process the synthesized accelerogram. The goal of the artificial ground motion generation is not to suggest a new approach, but to produce an ensemble of  195  CHAPTER 8 SUMMARY AND FUTURE WORK artificial ground accelerograms that share the same seismic parameters, in an attempt to consider the uncertainty in ground shaking for seismic analysis and design. The generated artificial accelerograms are used as inputs to a nonlinear dynamic structural analysis program to compute the responses of a structure.  In order to build the neural network models for seismic reliability assessment, response databases have to be constructed in advance as the training data for neural network. This can be accomplished by virtue of design of experiment techniques. A new experimental design method is proposed in this study. First, a design is generated with more data points than needed using Latin Hypercube Sampling, then a minimum inter-point distance is set and the data points whose distance is less than the threshold are merged. The process is repeated until the inter-point distance threshold grows to a certain limit. Finally, the data points are sorted according to distance with one of the pair of close points eliminated, until the required number of data points is left. This method proves effective and efficient as the minimal interpoint distance is controlled, thus enforcing the uniformity of the design.  Neural networks are proposed for seismic reliability analysis and performance-based design. Multiple layer neural networks are employed that has one hidden layer of neurons. The optimal number of neurons is determined by cross validation. The data are divided into five equal groups. The neural network is trainedfivetimes, and each time four of the subsets are used for training and the remaining one for verification. Both training errors and verification errors are calculated. A criterion based on the training error and verification error is used for measurement of network goodness. The network with the minimal criterion value is judged as the optimal network. Finally, the optimal network is trained by shuffling the data in the  196  CHAPTER 8 SUMMARY AND FUTURE WORK training dataset and the testing dataset. Any sample in the testing dataset with an error larger than a threshold is put into the training dataset, while the same number of samples in the training dataset that have the least error are put into the testing dataset. The training process is repeated until the training error is reduced to a certain limit or the limit on iteration is reached. 
By doing this way, all the critical data points are included in the training dataset, ensuring that the underlying functional relationship is well represented by the neural network.  Performance-based seismic design is explored by mean of neural networks in the framework of reliability-based design. The purpose of this approach is to take into consideration all the major uncertainties in the design, to improve the computational efficiency, and to satisfy multiple performance criteria with preset reliability levels when the structure is subjected to multiple seismic hazards. Four performance levels are proposed, namely, serviceability limit state, capability limit state, stability limit state and survivability limit state corresponding respectively to four levels of earthquakes, ie, a frequent minor event, an occasional moderate event, a rare major event and a very rare event. Design criteria are outlined for each performance level. A framework for performance-based seismic design is outlined, in which performance-based seismic design is formulated as an optimization problem, with an objective function minimized to obtain the design parameters.  Five case studies on seismic reliability of structures are presented to illustrate the proposed method. A two-story reinforced concrete frame is used as the first example of existing building, and its seismic reliability is assessed in relevance to a probabilistic earthquake. A twenty-storied reinforced concreteframeis utilized as a representative of a tall building, and its seismic performances corresponding to two levels of earthquakes are evaluated. A bridge  197  CHAPTER 8 SUMMARY AND FUTURE WORK  bent without or with seismic isolation is studied as the third example, in which two levels of earthquakes are considered and seismic reliabilities of the bridge without or with seismic isolation are calculated. The results show that the reliability can be greatly enhanced by seismic isolation. As the fourth example, a wood shear wall is investigated regarding its seismic performance. The final example is concerned with an actual instrumented structure that has experienced three earthquakes and suffered damages. The seismic performance of this building during possible future earthquake is studied. The case studies prove the applicability and efficiency of the proposed approach, which can reduce the computational burden in seismic analysis and thus providing a promising tool for seismic reliability assessment.  Performance-based seismic design applications are provided to illustrate the proposed approach, where design of experiments and neural networks are used. Two methods can be used to find the optimum design parameters, one is the conventional approach that utilizes FORM and Importance Sampling for reliability analysis, and the other is the P database approach where databases of the reliability indices are created first in terms of the design parameters and then neural networks of the reliability indices are built. The pre-constructed response databases and neural network models are taken advantage of for this purpose. In the case of the two-storied reinforced concreteframe,the maximum masses that can be carried on the roof and the floor are calculated. For the tall building in the ultimate limit state, the mean values of the column dimensions are calculated to meet the predefined target reliability levels. 
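The adaptive exchange of training and testing samples summarized above can be sketched as follows; `net` is any regressor exposing fit and predict methods, and the error threshold and round limit are illustrative.

```python
import numpy as np

def adaptive_train(net, X, y, n_test, err_tol=0.05, max_rounds=20, seed=0):
    """After each fit, move testing samples whose relative error exceeds the
    threshold into the training set and move an equal number of the best-fitted
    training samples out, until no such samples remain or the round limit is hit."""
    idx = np.random.default_rng(seed).permutation(len(y))
    test, train = idx[:n_test], idx[n_test:]
    for _ in range(max_rounds):
        net.fit(X[train], y[train])
        rel_test = np.abs(net.predict(X[test]) - y[test]) / np.abs(y[test])
        hard = test[rel_test > err_tol]                    # poorly predicted test samples
        if hard.size == 0:
            break
        rel_train = np.abs(net.predict(X[train]) - y[train]) / np.abs(y[train])
        easy = train[np.argsort(rel_train)[:hard.size]]    # best-fitted training samples
        train = np.concatenate([np.setdiff1d(train, easy), hard])
        test = np.concatenate([np.setdiff1d(test, hard), easy])
    return net, train, test
```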
In the bridge example, the mean of the column diameter is calculated for the bent without isolation, whereas the means of the column diameter and the isolator width are optimized for the bent with isolation. The optimal nail spacing is found in the case of the wood shear wall. These case studies demonstrate the applicability of the proposed method for performance-based design, in which the multiple performance objectives are achieved by means of optimization.

It is concluded that designed experiments and neural networks provide a robust and efficient tool for seismic reliability analysis and performance-based design, improving the simulation of the real behavior of a complex structure while reducing the computational burden.

8.2 Future Work
This thesis has explored seismic reliability assessment and performance-based seismic design using designed experiments and neural networks; however, the fields covered are so extensive that a number of issues are left for future development, in both theory and application.

•  Earthquake ground motions, as the excitation to the structure, need to be properly characterized in order to realistically predict the structural responses, provided the structural model is able to simulate the actual behavior of the structure. At present, no ground motion model can predict the future earthquake at a given site with a high degree of reliability, and emphasis should be placed on the large uncertainty involved. Further work is needed in this direction, as the success of earthquake-resistant design hinges, to a large degree, on an appropriate characterization of future ground motions at a specific site.

•  Although uncertainties in the seismic demand are taken into account by assuming distributions for the input variables and estimating the responses with neural networks, the uncertainties in the structural capacity are harder to deal with. Numerical procedures must be developed to evaluate the uncertainties in the moment, shear and axial strengths of members subjected to complex loadings, in the global displacement ductility of the structure, and in member plastic hinge characteristics and rotational ductility.

•  Design of experiments is indispensable for response database construction. Although some efforts have been made in this regard, there is more room for exploration in order to achieve an efficient and uniform experimental design.

•  Multilayer neural networks and radial basis function networks are explored for seismic reliability assessment and performance-based design. Other machine learning paradigms, such as Gaussian processes (Williams, 1995; Rasmussen, 1996; Neal, 1997) and support vector machines (Vapnik, 1998, 2000), need to be investigated, or new learning methods devised, to further improve and develop this approach.

•  The optimization for performance-based design is slow when the number of design parameters is relatively large. Faster algorithms need to be developed to expedite the optimization process.

•  Performance-based seismic design should be undertaken in the format of reliability-based design, with the design parameters determined through life-cycle cost-benefit analysis. The decision-making process involves political, economic, societal and technical factors.
Both theory and application need further development to facilitate its successful implementation in practical engineering projects, where structural safety is guaranteed with potential hazards well assessed and risks well managed.  200  REFERENCES Amin, M. and Ang, A. H. - S. (1968). Non-stationary stochastic models of earthquake motions, Proceedings ASCE, Journal of the Engineering Mechanics Division, 94, EM2 Arora, J., (eds.), (1999). Guide to Structural Optimization, ASCE publications Atkinson, G. and I. Beresnev (1998). Compatible ground motion time histories for new national seismic hazard maps, Canadian Journal of Civil Engineering, Vol.25, pp.305-318 Bertero, R. D. and Bertero, V. V. (2002). Performance-based seismic engineering: the need for a reliable conceptual comprehensive approach, Earthquake Engineering and Structural Dynamics, Vol.31, pp.627-652 Beskos, D. E. and Anagnostopoulos, S. A.(eds.), (1997). Computer Analysis and Design of Earthquake Resistant Structures: A Handbook, Computational Mechanics Publications Boore, D. M. (1983). Stochastic simulation of high-frequency ground motions based on seismological models of the radiated spectra, Bulletin of the Seismological Society of America, Vol. 73, No.6, pp.1865-1894 Boore, D. M., Stephens, C. D. and Joyner, W. B. (2001). Comments on baseline correction of digital strong motion data: examples from the 1999 Hector Mine, California Earthquake, Bulletin of the Seismological Society ofAmerica Box, G. E. P. and Wilson, K. B. (1951). On the experimental attainment of optimum conditions, Journal of the Royal Statistical Society, Series B, 13, pp. 1-45 Box, G. E. P. and Behnken, D. W. (1960). Some new three-level designs for the quantitative study of variables, Technometrics, 2, pp.455-475 Box, G. E. P. and Jenkins, G. M. (1976). Time Series Analysis: Forecasting and Control, Holden-Day Bratley, P. and Fox, B. L. (1988). Algorithm 659: implementing Sobol's quasi-random sequence generator, ACM Transactions on Mathematical Software, Vol.14, No.l, pp.88-100 Bratley, P. and Fox, B. L. and Niederreiter, H. (1992). Implementation and tests of lowdiscrepancy sequences, ACM Transactions on Modeling and Computer Simulation, Vol.2, No.3, pp. 195-213 Bratley, P. and Fox, B. L. and Neiderreiter, H. (1994). Algorithm 738: programs to generate Niederreiter's low-discrepancy sequences, ACM Transactions on Mathematical Software, Vol.20, No.4, pp.494-495  201  Breitung, K. and Faravelli, L. (1996). Response surface methods and asymptotic approximations, Mathematical Models for Structural Reliability Analysis, (Fabio Casciati and Brian Roberts eds.), CRC Press Bucher, C.G., Bourgund, U. (1990). A fast and efficient response surface approach for structural reliability problems, Structural Safety, Vol.7, pp.57-66 Byrd, R. H., Schnable, R. B. and Shultz, G. A. (1987). A Trust Region algorithm for nonlinearly constrained optimization, SIAM Journal of Numerical Analysis, Vol.24, pp. 11521170 Byrd, R. H., Schnable, R. B. and Shultz, G. A. (1988). Approximate solution of the Trust Region problem by minimization over two-dimensional subspace, Mathematical Programming, Vol.40, pp.247-263 Cai, G. Q. and Lin, Y. K. (1998). Reliability of nonlinear structural frame under seismic excitation, Journal of Engineering Mechanics, Vol.124, No.8, pp.852-856 Cakmak, A.S. , Sherif I. and Ellis, G.W. (1985). Modeling earthquake ground motions in California using parametric times series methods, Soil Dynamics and Earthquake Engineering, Vol.4, No.3, pp 124-131 Capon, J. 
(1969). High resolutionfrequency-wavenumber analysis, Proc. IEEE, No.57, ppl408-1418 Chang, M.K. et al (1982). ARMA models for earthquake ground motions, Earthquake Engineering and Structural Dynamics, Vol.10, No.5, pp.651-662 Cherkassky, V. andMulier, F. (1998). Learningfrom Data: Concepts, Theory, and Methods, John Wiley & Sons Clough, R. W. and Penzien, J. (1993). Dynamics of Structures, McGraw-Hill Collins, K. R., Wen, Y. K. and Foutch, D. A. (1996). Dual-level seismic design: a reliability-based methodology, Earthquake Engineering and Structural Dynamics, Vol.25, No. 12, pp. 1433-1468 Conn, A. R., Goulg, N. I. M. and Toint, P. L. (2000). Trust-region Methods, SIAM Converse, A. (1992). BAP Basic Strong Motion Accelerogram Processing Software, Version 1.0, USGS open-file Report 92-296A Corne, D., Dorigo, M. and Glover, F. (eds.), (1999). New Ideas in Optimization, McGrawHill Cressie, N. A. C. (1991). Statistics for Spatial Data, John Wiley & sons  202  Cunba, A. (1994). The role of the stochastic equivalent linearization method in the analysis of the nonlinear seismic response of building structures, Earthquake Engineering and Structural Dynamics, Vol.23, No.8, pp.837-858 Das, P. K., Zheng, Y. (2000). Cumulative formation of response surface and its use in reliability analysis, Probabilistic Engineering Mechanics, Vol.15, pp.309-315 Deodatis, G., Shinozuka, M. and Papageorgiou, A. (1990a). Stochastic wave representation of seismic ground motion I: F-K spectra, Journal of Engineering Mechanics, Vol. 116, No. 11, pp.2363-2379 Deodatis, G., Shinozuka, M. and Papageorgiou, A. (1990b). Stochastic wave representation of seismic ground motion II: simulation, Journal ofEngineering Mechanics, Vol. 116, No. 11, pp.2381-2399 Deodatis, G. (1996). Non-stationary stochastic vector process: seismic ground motion applications, Probabilistic Engineering Mechanics, Vol.11, No.3, pp. 149-168 Dicleli, M. (2002). Seismic design of lifeline bridge using hybrid seismic isolation, Journal of Bridge Engineering, Vol.7, Issue. 2, pp. 94-103 Drucker, H., Burges, C , Kaufman, L., Smola, A. and Vapnik, V. (1997). Support vector regression machines, in Advances in Neural Information Processing Systems 9, Mozer, M . Jordan, M. and Petsche, T. (eds.), MIT Press, pp. 155-161 Duahe, S, Kennedy, A. D., Pendleton, B. J., and Roweth, D. (1987). Hybrid Monte Carlo, Physics Letters B, Vol. 195, pp.216-222 Eberhart, R. C. and Kennedy, J. (1995). A new optimizer using particle swarm theory, Proceedings of the Sixth International Symposium on Micromachine and Human Science, pp.39-43, Nagoya, Japan Ellis, G. W., Srinivasan, M . and Calmak, A. S. (1990). A Program to Generate Site Dependent Time Histories: EQGEN, Technical report NCEER-90-0009, National Center for Earthquake Engineering Research Enevoldensen, I., Faber, M. H. and S<j>rensen, J. D. (1993). Adaptive response surface * techniques in reliability estimation, Structural Safety and Reliability, Schuller, Shinozuka & Yao(eds), pp. 1257-1264 Etter, D. M . (1987). Structured Fortran?? for Engineers and Scientists, Benjamin/Cummings Publishing Company  The  Fajfar, P. and Krawinkler, H. (eds.), (1997). Seismic Design Methodologies for the Next Generation of Codes, A. A. Balkema  203  Fang, K.-T. (1980). Experimental design by uniform distribution, Acta Mathematice Applicatae Sinica, Vol.3, pp.363-372 Fang, K.- T. and Wang, Y. (1994). Number-theoretic Methods in Statistics, Chapman & Hall Fang, K.-T., Lin, D. K. J., Winker, P. and Zhang, Y. (2000). 
Uniform design: theory and applications, Technometrics, Vol. 42, pp. 237-248 FEMA (1997). NEHRP Guidelines for the Seismic Rehabilitation of Buildings, FEMA report 273 FEMA (1996). Performance-based Seismic Design of Buildings, FEMA report 283 Foschi, R. O. and Li, H. (1997). Inverse reliability method and application in offshore engineering, Structural Safety and Reliability, Shiraishi, Shinozuka & Wen (eds) Foschi, R. O., Li, H., Folz, B., Yao, F., Zhang, J. and Baldwin, J. (2000). RELAN: A General Software Package for Reliability Analysis. Department of Civil Engineering, University of British Columbia, Vancouver, Canada Foschi, R. O. and Li, H. (2001). Reliability and performance-based design in earthquake engineering, Structural Safety and Reliability, Corotis, Shueller & Shinozuka (eds) Foschi, R. O., Li, H., and Zhang, J. (2002). Reliability and performance-based design: a computational approach and applications, Structural Safety, Vol.24, pp.205-218 Foschi, R. O. and Zhang, J. (2003). Performance-based design and seismic reliability using designed experiments and neural networks, Proceedings of the 5 International Conference on Stochastic Structural Mechanics, August 11-13, Hangzhou, China th  Foschi, R. O. and Zhang, J. (2003). Neural networks application in seismic reliability and performance-based design, Proceedings of the 9 International Conference on Statistics and Probability in Civil Engineering, July 6-9, San Francisco, California, USA th  Fox, B. L. (1986). Algorithm 647: implementation and relative efficiency of quasirandom sequence generation, ACM Transactions on Mathematical Software, Vol.12, No.3, pp.362376 Friedman, J. H. (1991). Vol.19, No.l  Multivariate adaptive regression splines, Annals of Statistics,  Gasparini, D. and Vanmarcke, E. H. (1976). Simulated earthquake motions compatible with prescribed response spectra, Technical Report, Department of civil Engineering, Massachusetts Institute of Technology, Publication No. R76-4  204  Gersch, W. and Kitagawa, G. (1985). A time varying AR coefficient model for modeling and simulating earthquake ground motion, Earthquake Engineering and Structural Dynamics, Vol.13, No.2, pp.243-254 Ghaboussi, J. and Lin, C.-C. (1998). New method of generating spectrum compatible accelerograms using neural networks, Earthquake Engineering and Structural Dynamics, Vol.27, No. 12, pp.377-396 Ghaboussi, J. and Lin, C.-C. J. (2000). Performance-based design using structural optimization, Earthquake Engineering and Structural Dynamics, Vol.29, pp. 1677-1690 Ghosh, J., Deuser, 1. and Beck, S. (1992). A neural network based hybrid system for detection, characterization and classification of short-duration oceanic signals, IEEE Journal of Oceanic Engineering, Vol.17, No.4, pp.351-363 Ghosh, S. and Rao, C. R. (eds.), (1996). Handbook of Statistics 13: Design and Analysis of Experiments, Elsevier Gibbs, M. N. (1997). Bayesian Gaussian Processes for Regression and Classification, PhD Thesis, University of Cambridge Gioncu, V. and Mazzolani, F. (eds), (2002). Ductility of Seismic Resistant Structures, Spon Press Glover, F. (1989). Tabu Search, Part I, ORSA Journal of Computing 1, pp. 190-206 Glover, F. (1990). Tabu Search, Part II, ORSA Journal of Computing 1, pp.4-32 Glover, F. and Laguna, M . (1993). Tabu Search, Modern Heuristic Techniques for Combinatorial Problems (Reeves, C. R. eds.), Blackwell, Oxford, pp. 10-150 Glover, F. and Laguna, M. (1997). Tabu Search, Kluwer Academic Press Goldberg, D. E. (1989). 
Genetic Algorithms in Search, Optimization and Machine Learning, Addison-Wesley, Reading, MA Haddon, R.A. W. (1996). Use of empirical Green's function, spectral Ratios, and kinematic source models for simulating strong ground motion, Bulletin of the Seismological Society of America, Vol. 86, No.3, pp.597 - 615 Hadley, D. M. and Helmberger, D. V. (1980). Simulation of strong ground motions, Bulletin of the Seismological Society ofAmerica, Vol. 70, No. 2, pp. 617-630 Hagan, M. T., Howard, B. D. & Beale, M. (1996). Neural Network Design, PWS Publishing Company  205  Halton, J. H. (1960). On the efficiency of certain quasi-random sequences of points in evaluating multi-dimensional integrals, Numer. Math. , Vol.2, pp.84-90 Hamburger, R. (1996). Performance-based seismic engineering: the next generation of structural engineering practice, EQE Review Hammersley, J. M . (1960). Monte Carlo methods for solving multivariable problems, Ann. New York Acad. Sci., Vol.86, pp.844-874 Hartzell, S. H. (1978). Earthquake aftershock as Green's functions, Geophysical Research Letters, 53, pp. 1425-1436 Hedayat, A. S., Sloane, N. J. and Stufken, J. (1999). Applications, Springer-Verlag, New York  Orthogonal Arrays: Theory and  Holland, J. H. (1975). Adaptation in Natural and Artificial Systems, University of Michigan Press, Michigan Hornik, K. (1991). Approximation capabilities of multilayered feedforward networks, Neural Networks, 4, pp.251-257 Howlett, R. J. and Jain, L. (Eds.), (2001). Radial Basis Function Networks, Physica-Verlag Hsu, H.-I. and Bernard, M. C. (1978). A random process for earthquake, Earthquake Engineering and Structural Dynamics, Vol.6 No.4  Huang, Y., Wada, A., Iwata, M., Mahin, S. A. & Connor, J. J. (2002). Design of damagecontrolled structures, Innovative Approaches to Earthquake Engineering, WIT Press  Hutchings, L. and Wu, F. (1990). Empirical Green's functions from small earthquakes: a waveform study of locally recorded aftershocks of the 1971 San Fernando earthquake, Journal of Geophysical Research, Vol. 95, No. B2, pp. 1187-1214  Hutchings, L. (1991). Prediction of strong ground motion for the 1989 Loma Prieta earthquake using empirical Green's functions, Bulletin of the Seismological Society of America, Vol. 81, No. 5, pp. 1813-1837 Hutchings, L. (1994). Kinematic earthquake models and synthesized ground motion using empirical Green's functions, Bulletin of the Seismological Society of America, Vol.81, No. 5, pp. 1813-1837 Hwang, H. H. M. and Huo, J. R. (1994). Generation of hazard-consistent ground motion, Soil Dynamics and Earthquake Engineering, Vol. 13, No.6, pp. 377-386  International Organization for Standardization (ISO) (1998) General Principles on Reliability for Structures, ISO/FDIS 2394  206  Iyama, J. and Kawamura, H. (1999). Application of wavelets to analysis and simulation of earthquake motions, Earthquake Engineering and Structural Dynamics, Vol.28, No.3, pp. 255-272 Jennings, P.C, Housner, G. W. and Tsai, N. C. (1968). Simulated earthquake motions, Report, Earthquake Engineering Research Laboratory, California Institute of Technology, April Johnson, M. E., Moore, I. M. and Ylvisaker, D. (1990). Minimax and Maximin distance designs, Journal of Statistical Planning and Inference, Vol.26, No.2, pp. 131-148 Kalagnanam, J. R. and Diwekar, U. M. (1997). An efficient sampling technique for offline quality control, Technometrics, Vol. 39, No.3, pp. 308-319 Kamae, K., Irikura, K. and Pitarka, A. (1998). 
A technique for simulating strong ground motion using hybrid Green's function, Bulletin of the Seismological Society of America, Vol. 88, pp. 357-367 Kanai, K. (1957). Semi-empirical formula for the seismic characteristics of the ground, Bulletin of Earthquake Research Institute, Vol. 35, pp.309-325 Karaboga, D., Pham, D. T. (1999). Intelligent Optimization Techniques: Genetic Algorithms, Tabu Search, Simulated Annealing and Neural Networks, Springer Verlag Kartam, N., Flood, I., and Garrett, J. (eds.), (1997). Artificial Neural Networks for Civil Engineers: Fundamentals and Applications, ASCE publications Kecman, V. (2001). Learning and Soft Computing, MIT Press Kennedy, J. and Eberhart, R. C. (1995). Particle swarm optimization, Proceedings of the IEEE International Conference on Neural Networks, Vol. IV, pp. 1942-1948, Piscataway, NJ Kennedy, J. and Eberhart, R. C. and Shi, Y. (2001). Swarm Intelligence, Morgan Kaufmann Publishers Kirkpatrick, S., Gelatt, C. D., Vecchi, M. P. (1983) Optimization by simulated annealing, Science, May Koehler, J. R. and Owen, A. B. (1996). Computer experiments, Handbook of Statistics (Ghosh, S. and Rao, C. R., eds.), Elsevier Science, New York, pp.261-308 Kramer, S. L. (1996). Geotechnical Earthquake Engineering, Prentice-Hall Kumar, B. (eds.) (1996). Information Processing in Civil and Structural Engineering Design, Civil-Comp Press  207  Kumar, B. and Topping, B. H. V. (eds.), (1999). Artificial Intelligence Applications in Civil and Structural Engineering, Civil-Comp Press Li, H. and Foschi, R. O. (1997). A inverse reliability method and its application, Structural Safety, Vol.20, pp.257-270 Li, H. (1999). An inverse reliability method and its applications in engineering design, Ph.D. Thesis, Department of Civil Engineering, University of British Columbia Li, K. N. (1996). Three-dimensional Nonlinear Dynamic Structural Analysis Computer Program Package CANNY-E User's Manual, Canny Consultants PTE LTD, Singapore MacKay, D. J. C. (1992). A practical Bayesian framework for backpropagation networks, Neural Computation, Vol.4, No.3, pp.448-472. MacKay, D. J. C. (1993). Bayesian methods for backpropagation networks, In van Hemmen, J. L., Domany, E. and Schulten, k., editors, Models of Neural Networks II, Springer. MacKay, D. J. C. (1997). Gaussian processes - A replacement for supervised neural networks? Lecture notesfor a tutorial at NIPS 1997 Mazzolani, F. and Gioncu, V. (eds.), (2000). Seismic Resistant Steel Structures, SpringerVerlag Mckay, M. D., Beckman, R. J. and Conover, W. J. (1979). A comparison of three methods for selecting values on input variables in the analysis of output from a computer code, Technometrics, Vol. 21, No.2, pp. 239-245 Morris, M. D. and Mitchell, T. J. (1995). Exploratory designs for computer experiments, Journal of Statistical Planning and Inference, Vol.39, No. 1, pp.95-111 Myers, R. H. and Montgomery, D. C. (1995). Response Surface Methodology: Process and Product Optimization Using Designed Experiments, John Wiley & Sons, New York National Building Code of Canada (1995). Codes, National Research Council, Ottawa.  Canadian Commission on Building and Fire  Nau, R. F., Oliver, R. M. and Pister, K. S. (1982). Simulation and analyzing artificial nonstationary earthquake ground motions, Bulletin of the Seismological Society of America, Vol. 72, No.2, pp. 615-636 Neal, R. M. (1993a). Bayesian Learning via Stochastic Dynamics, In Hanson, S. J., Cowan, J. D. and Giles, L. 
L., editors, Neural Information Processing Systems 5, pp.475-482, Morgan Kaufmann, San Meteo, CA  208  Neal, R. M . (1993b). Probabilistic inference using Markov chain Monte Carlo method, Technical Report CRG-TR-93-1, Department of Computer Science, University of Toronto. Neal, R. M . (1995). Bayesian learning for neural networks, Ph.D. Thesis, Department of Computer Science, University of Toronto. Neal, R. M . (1997). Monte Carlo implementation of Gaussian process models for Bayesian regression and classification, Technical Report No. 9702, Department of statistics, University of Toronto Niederreiter, H. (1987). Point sets and sequences with small discrepancy, Monatsh. Math., Vol.104, pp.273-337 Niederreiter, H. (1988). Low-discrepancy and low-dispersion sequences, Journal of Number Theory, Vol.30, pp.51-70 Niederreiter, H. (1992). Random Number Generation and Quasi-Monte Carlo Methods, SIAM, Philadelphia Olafsson, S.and Sigbjornsson,R. (1995). Application of ARMA models to estimate earthquake ground motion and structural response, Earthquake Engineering and Structural Dynamics, Vol. 24, No.7, pp.951-966 Owen, A. B. (1992). Orthogonal Arrays for computer experiments, integration and visualization, Statistica Sinica, Vol.2, pp.439-452 Owen, A. B. (1994). Randomly permuted (t,m,s)-nets and (t,s)-sequences, in H. Niederreiter and P. J.-S. Shiue, editors, Monte Carlo and Quasi-Monte Carlo Methods in Scientific Computing, pp.299-317, Springer, New York Papadrakakis, M . , Papadopoulos, V., Lagaros, N. (1996). Structural reliability analysis of elastic-plastic structures using neural networks and Monte Carlo simulation, Computer Methods in Applied Mechanics and Engineering, Vol. 136, pp. 145-163 Papageorgiou, A. S. and Aki, K. (1983a). A specific barrier model for the quantitative description of inhomogeneous faulting and the prediction of a strong ground motion I. Description of the model. Bulletin of the Seismological Society of America, Vol. 73, pp.693722 Papageorgiou, A. S. and Aki, K. (1983b). A specific barrier model for the quantitative description of inhomogeneous faulting and the prediction of a strong ground motion II. Applications of the model. Bulletin of the Seismological Society ofAmerica, Vol. 73, pp.953978  209  Park, J.-S. (1994). Optimal Latin-Hypercube designs for computer experiments, Journal of Statistical Planning and Inference, Vol.39, No. 1, pp.95-111 Patterson, D. W. (1996). Artificial Neural Networks: Theory and Application, Prentice Hall Paz, M . (1997). Structural Dynamics: Theory and Computation, Chapman & Hall  Polhermus, N. W. and Cakmak, A. S. (1981). Simulation of earthquake ground motions using Autoregressive Moving Average models, Earthquake Engineering and Structural Dynamics, Vol.9, No.4, pp. 343-354 Priestly, M. J. N., Calvi, G. M. (1996). Seismic Design and Retrofit of Bridges, John Wiley & Sons Rahmatian, P. (1997). Three-dimensional nonlinear dynamic seismic behavior of a seven story reinforced concrete building, M. A. Sc. Thesis, Department of Civil Engineering, University of British Columbia Rasmussen, C. E. (1996a). Evaluation of Gaussian processes and other methods for nonlinear regression, PhD Thesis, Department of computer Sciences, University of Toronto Rasmussen, C. E. (1996b). A practical Monte Carlo implementation of Bayesian learning, In Tourtzky, D. S., Mozer., M. C. and Hasselmo, M. E., editors, Advances in Neural Information Processing Systems 8, MIT Press  Rumelhart, D. E., Hinton, G. E. & Williams, R. J. (1986). 
Learning internal representations by error propagation, In Parallel Distributed Processing, Volume I: Foundations, ed.  David E . Rumelhart, James L. McClelland and the PDP Research Group, pp.318-362. The MIT Press. Sacks, J., Welch, W. J., Mitchell T. J. and Wynn P. (1989). Design and analysis of computer experiments, Statistical Sciences, Vol. 4, No. 4, pp. 409-435 SEAOC (1995). Vision 2000: Performance-based Seismic Engineering of Buildings. Shepherd, A. J. (1997). Second-Order Methods for Neural Networks - Fast and Reliable Training Methods for Multi-layer Perceptron, Springer  Shinozuka, M.,and Deodatis, G. (1988). Stochastic process models for earthquake ground motion, Probabilistic Engineering Mechanics, Vol.3, No.3, pp. 114-123 Shinozuka, M . , Zhang, R. and Deodatis, G. (1994). Sine-square modification to KanaiTajimi earthquake ground motion spectrum, Structural Safety & Reliability, Schuller, Shinozuka & 7ao(eds.), pp.2217-2223  210  Simpson, T., Lin, D. K. J., and Chen, W. (2001). Sampling strategies for computer experiments: design and analysis, InternationalJournal of Reliability and Applications Smith, M. (1993). Neural Networks for Statistical Modeling, Van Nostrand Reinhold Solnes, J. (1997). Stochastic Processes and Random Vibrations: Theory and Practice, John Wiley & Sons Spanos, P. D. and Zeldin, B. A. (1996). Efficient iterative ARMA approximation of multivariate random process for structural dynamics application, Earthquake Engineering and Structural Dynamics, Vol.25, No. 5, pp.497-508 Srinivasan, M., Corotis, R. and Ellingwood, B. (1992). Generation of critical stochastic earthquake, Earthquake Engineering and Structural Dynamics, Vol. 21, No. 4, pp.275-288 Sunder, S. and Conner, J. (1982). A new procedure for processing strong-motion earthquake signals, Bulletin of the Seismological Society ofAmerica, Vol. 72, No. 2, pp. 643-661 Sunder, S. and Schumacker, B. (1982). Earthquake motions using a new data processing scheme, Journal of Engineering Mechanics, ASCE, Vol. 108, No.6, pp. 1313-1329 Suzuki, S., Hada, K. and Asano, K. (1998). Simulation of strong ground motions based on recorded accelerograms and the stochastic method, Soil Dynamics and Earthquake Engineering, Vol.17, No.7-8, pp.551-556 Tajimi, H. (1960). A statistical method of determining the maximum response of a building structure during an earthquake, Proceedings of the 2 World conference of Earthquake Engineering, Vol.11, pp.781-798 nd  Tang, B. (1993). Orthogonal Array-based Latin Hypercubes, Journal ofAmerican Statistical Association, Vol.88, No.424, pp. 1392-1397 Topping, B. H. V. and Kumar, B. (eds.), (1999). Optimization and Control in Civil and Structural Engineering, Civil-Comp Press Topping, B. H. V. (eds.), (2000). Nature, Civil-Comp, Edinburgh  Computational Engineering Using Metaphors from  Trifunac, M. D. (1971). Zero baseline correction of strong motion accelerograms, Bulletin of the Seismological Society ofAmerica, Vol. 61, No.5, pp. 1201-1211 Trujillo, D. M. and Carter, A. L. (1982). A new approach to the integration of accelerometer data, Journal of Earthquake Engineering and Structural Dynamics, Vol.10, pp. 529-535 Vapnik, V. (1998). Statistical Learning Theory, John Wiley & Sons  211  Vapnik, V. (2000). The Nature of Statistical Theory, Springer-Verlag Vapnik, V., Golowich, S. E. and Smola, A. (1997). Support vector method for function approximation, regression estimation, and signal processing, in Advances in Neural Information Processing Systems 9, Mozer, M . , Jordan, M. 
and Petsche, T. (eds), MIT Press, pp.282-287 Ventura, C. E., Rahmatian, P., Li, K. and Kubo, T. (2002). Reliability of 3-D nonlinear dynamic analysis of a seven story reinforced concrete building, Proceedings of the 12  th  European Conference on Earthquake Engineering, London, September Waarts, P. H. (2000). Structural Reliability Using Finite Element Methods - An Appraisal of DARS: Directional Adaptive Response surface Sampling, Delft University Press Waszczyszyn, Z. (eds.), (1999). Neural Networks in the Analysis and Design of  Structures, Springer-Wien Wen, Y. K. (2001). Reliability and performance-based design, Structural Safety, Vol.23, pp.407-428 Williams, C. K. I. (1995). Regression with Gaussian processes, Mathematics of Neural Networks: Models, Algorithms and Applications (Ellacott, S. W., Mason, J. C. and  Anderson, I. J., eds), Kluwer, 1997 Williams, C. K. I. and Rasmussen, C. E. (1996).  Gaussian processes for regression,  Advances in Neural Information Processing Systems 8, MIT Press  Ye, K. Q. (1998). Orthogonal column Latin Hypercubes and their application in computer experiments, Journal of American Statistical Association, Vol. 93, No.444, pp. 1430-1439  Zeng, Y. (1994). A composite source model for computing realistic synthetic strong ground motions, Geophysical Research Letter, Vol. 21, pp. 725-728  212  Appendix A Database for the two-story reinforced concrete frame 1. Database of the input variable combinations  f (MPa)  f;(MPa)  E (MPa)  W,(KN)  W (KN)  394.384  25.212  22173  307.79  435.15  415.878  28.848  22870  375.62  493.47  381.933  27.984  22751  354.41  471.60  378.513  39.612  24779  363.44  452.97  390.501  27.984  22751  364.07  498.06  388.632  30.180  23148  381.50  414.63  421.400  36.336  24225  375,62  513.18  421.400  25.356  22194  320.18  406.53  392.496  42.420  25170  401.45  468.90  383.194  26.796  22515  291.41  455.40  y  c  2  2. Database of the responses  D (m)  S (m)  D (m)  S (m)  0.083  0.071  0.051  0.048  0.089  0.082  0.052  0.054  0.09  0.087  0.052  0.056  0.079  0.078  0.047  0.052  0.087  0.084  0.052  0.056  0.082  0.076  0.047  0.048  0.083  0.077  0.050  0.053  0.072  0.061  0.042  0.041  0.084  0.083  0.049  0.054  0.074  0.068  0.046  0.049  3  D3  2  213  D2  Appendix B Database for the tall reinforced concrete building 1. Database of the input variable combinations  <°s  T  q  C  4  4  f  35  25  15  15  H H B B 713 910 506 705 402 504  Bi  Hi  10  22 20.7  24  fy 408  13  12 40.3  33  417  36  26  16  16  726 920 512 710 404 509  16  31  7.56  42  425  37  27  16  16  739 931 519 716 407 513  19  7.6 27.2  51  434  38  28  17  17  752 941 525 721 409 518  22  26 46.9 16.8 442  39  28  17  17  765 951 532 727 412 523  26  17  14.1 25.8 451  40  29  18  18  778 962 538 732 414 527  29  35  33.8 34.8 401  41  30  19  18  791 972 545 737 417 532  32  5.3 53.4 43.8 409  42  31  19  19  804 982 551 743 419 537  35  24  3.19 52.8 418  43  31  20  19  817 993 558 748 421 541  39  15 22.9 18.6 426  44  32  20  20  830 1003 564 754 424 546  s  b  2  2  3  2. 
Database of the corresponding responses (mean value)  •^20  e„  e  M  V  0.005001  0.159223  0.0121  0.009  8.49E+02  21.68  0.009044  0.184871  0.0223  0.0167  1.52E+03  40.49  0.006888  0.203147  0.0193  0.0128  1.13E+03  36.41  0.025976  0.250284  0.0612  0.0452  4.17E+03  96.77  0.006775  0.36076  0.0171  0.0131  1.22E+03  33.34  0.018344  0.473205  0.0399  0.0338  3.31E+03  81.45  0.00671  0.32982  0.0219  0.0133  1.14E+03  43.47  0.036875  0.388779  0.0885  0.0676  5.71E+03  141.62  0.018662  0.428509  0.0478  0.0375  3.34E+03  102.68  0.017346  0.652247  0.0384  0.0324  3.11E+03  79.97  D  20  5  214  3  3. Databases of the responses (standard deviation)  D20  J  ^ A 2 0  s  ei5  $95  S  M  s  v  0.001526  0.022897  0.0032  0.0024  2.47E+02  5.38  0.002704  0.034733  0.0063  0.005  5.09E+02  10.32  0.001176  0.029231  0.0034  0.0023  2.56E+02  5.57  0.007111  0.03689  0.0178  0.0107  1.08E+03  19.53  0.002096  0.068635  0.0045  0.0035  4.05E+02  8.72  0.00584  0.071256  0.0109  0.0101  1.00E+03  20.19  0.002717  0.060925  0.0051  0.0039  4.47E+02  8.87  0.014607  0.051566  0.0275  0.0251  2.13E+03  36.07  0.005389  0.061194  0.0115  0.0096  1.01E+03  23.69  0.004441  0.110517  0.0066  0.009  7.66E+02  12.91  215  Appendix C Databases of the bridge without or with seismic isolation 1. Database of the input variable combinations  A , (gal)  o (rad/sec)  T (sec)  D(rnm)  B (mm)  Q (KN)  47.3  8.3  46.45  2067  873  3142  67.9  22.4  33.61  1608  555  1658  228.7  3.7  7.42  1767  845  1829  627.8  18  27.05  2033  680  1854  702.5  27.1  40.83  1740  663  2168  197.2  30.3  2.87  1821  967  2849  271.5  19.8  7.66  1721  597  2967  369.3  18.1  51.35  1704  680  1767  376.3  12.2  32.72  1692  543  2616  34  15.2  22.17  1668  727  1932  g  s  r  2. Responses (mean) database for bridge bent without seismic isolation  A(m)  M (KNm)  V (KN)  He  M (KNm)  H  1.5478  6567.751  1685.356  88.7234  5923.284  0 3542  2.8191  10481.97  2686.742  61.6259  9401.672  0 6043  0.6442  12910.6  3321.84  51.532  11813.51  0 5659  1.2087  18468.42  4734.917  96.6833  16472.42  0 8465  1.439  5563.964  1427.193  82.5027  5010.298  0 2885  2.2794  8811.976  2259.199  130.6768  7917.119  0 4988  0.2215  10449.36  2642.931  17.7033  9270.353  0 4361  0.813  13464.31  3451.355  65.0327  12104.14  0 5846  0.1733  3624.891  928.498  9.9191  3330.208  0 1705  0.2636  3995.262  1019.045  15.1239  3681.271  0 1961  c  C  216  b  b  3. Responses (standard deviation) database for bridge bent without seismic isolation  (KNm)  S (m)  S (KNm)  S c(KN)  0.0004  243.91  57.767  0.0298  183.478  0.0121  0.0006  139.467  33.711  0.0389  102.272  0.0028  0.002  518.08  131.827  0.1268  439.568  0.0288  0.0069  105.144  84.351  0.5428  1185.951  0.0262  0.0284  54.383  30.166  1.8965  244.438  0.0151  0.0017  585.322  143.612  0.1109  453.483  0.0288  0.008  129.36  52.568  0.5282  350.509  0.0228  0.0051  410.695  116.739  0.3351  535.487  0.0347  0.0108  41.454  29.104  0.6957  243.132  0.0153  0.0003  74.579  18.397  0.0188  54.603  0.0014  A  Mc  ^ Mb  V  4. 
Responses (mean) database for bridge bent with seismic isolation  M (KNm)  A(m)  M (KNm)  V (KN)  0.0126  1098.438  249.9  0.0347  926.658  0.0186  0.0121  699.679  171.1  0.0517  719.342  0.0194  0.0781  4220.083  1032.32  0.6905  3489.402  0.1534  0.1147  5354.663  1270.148  0.5442  4164.677  0.1679  0.0938  4357.546  1075.5  0.7858  3788.365  0.1612  0.0239  2232.411  546.755  0.2663  1951.901  0.054  0.0831  2665.8  659.856  0.4351  2291.845  0.0788  0.0647  2691.394  664.914  0.4609  2287.903  0.0781  0.115  2737.568  673.086  0.4807  2320.041  0.0809  0.0074  471.07  112.983  0.029  540.318  0.0142  c  b  C  217  U  b  Responses (standard deviation) database for bridge bent with seismic isolation  S (m)  S c(KNm)  SvoCKN)  0.0036  271.644  62.498  0.0086  197.867  0.004  0.0038  99.738  26.661  0.0118  92.098  0.0025  0.0136  440.268  107.2  0.0978  331.833  0.0217  0.0281  632.524  156.214  0.0874  488.48  0.0284  0.0212  579.226  152.06  0.1822  615.21  0.034  0.0056  393.488  92.972  0.0727  325.443  0.0152  0.021  321.78  80.52  0.0722  321.954  0.0183  0.0136  314.446  78.103  0.0726  231.165  0.0154  0.0399  324.316  81.263  0.0733  317.919  0.0206  0.0016  90.662  21.678  0.0056  62.415  0.0016  A  M  Sivib (KNm)  218  Appendix D Response database of the wood shear wall  el(m)  e2 (m)  M (KNs /m)  0.01  0.025  0.01  A(mm)  V(KN)  2  * (m/sec ) 0.5  0.343  0.3006  0.025  2  1  0.695  0.2983  0.01  0.025  2  1.5  1.04  0.3026  0.01  0.025  2  2  1.388  0.3521  0.01  0.025  2  2.5  1.717  0.3853  0.01  0.025  4  0.5  0.933  0.4988  0.01  0.025  4  1  1.652  0.674  0.01  0.025  4  1.5  2.532  0.5028  0.01  0.025  4  2  3.367  0.5753  0.01  0.025  4  2.5  4.065  0.4914  2  219  A  2  Appendix E Response database of the Holiday Inn 1. Response database of the Holiday Inn before seismic retrofit  D (m)  Dy(m)  Ax (m/sec )  Ay (m/sec )  Az (m/sec )  A (rad/sec )  0.1  0.1  0.1  0.005  0.005694  0.007909  0.1  0.1  0.1  0.12  0.006339  0.007562  0.1  0.1  4.9  0.005  0.005459  0.008592  0.1  0.1  4.9  0.12  0.005999  0.008365  0.1  4.9  0.1  0.005  0.011515  0.329426  0.1  4.9  0.1  0.12  0.010811  0.344291  0.1  4.9  4.9  0.005  0.012482  0.27927  0.1  4.9  4.9  0.12  0.009658  0.29652  4.9  0.1  0.1  0.005  0.22921  0.090296  4.9  0.1  0.1  0.12  0.215678  0.090498  2  2  2  2  x  R  2. Response database of the Holiday Inn after seismic retrofit  A (m/sec )  A (m/sec )  A (m/sec )  A (m/sec )  Ad (mm )  0.1  0.1  0.1  0.002  1500  0.00403  0.1  0.1  0.1  0.002  10500  0.002024 0.002666  0.1  0.1  0.1  0.2  1500  0.004296  0.00395  0.1  0.1  0.1  0.2  15000  0.002181  0.005977  0.1  0.1  9  0.002  1500  0.004151  0.003832  0.1  0.1  9  0.002  15000  0.002267  0.003439  0.1  0.1  9  0.2  1500  0.004493  0.004541  0.1  0.1  9  0.2  15000  0.002393  0.00748  0.1  9  0.1  0.002  1500  0.014179  0.450605  0.1  9  0.1  0.002  15000  0.002889  0.350399  x  2  Y  2  z  2  R  2  220  2  D (m) x  D  Y  (m)  0.003481  Appendix F Reliability index database for the two-story reinforced concrete building w,  W  310.000  P.  P  390.000  o.ooo  2.020  310.000  403.300  1.675  2.003  310.000  416.700  1.648  1.987  310.000  430.000  1.621  1.972  310.000  443.300  1.595  1.955  310.000  456.700  1.570  1.939  310.000  470.000  1.546  1.922  310.000  483.300  1.523  1.905  310.000  496.700  1.500  1.889  310.000  510.000  1.480  1.873  2  221  2  Appendix G Reliability index database for the tall building 1. 
Input variable combinations  B, (mm)  H, (mm)  B (mm)  H (mm)  B (mm)  H (mm)  836  1008  590  778  407  591  812  984  574  787  410  626  787  1008  556  777  455  585  784  1027  579  779  460  648  850  979  586  839  409  600  787  971  597  837  404  635  805  954  562  803  466  567  763  1029  578  814  477  640  844  992  603  775  425  577  829  1038  609  757  409  642  Ps  P  Ps  2  2  3  3  2. Reliability index database  p,  P  2  4  P  6  3.032  2.874  2.047  2.256  2.857  2.669  2.981  2.758  2.209  2.189  2.908  2.659  2.914  2.811  2.265  2.237  2.782  2.685  2.897  2.507  2.581  2.262  2.749  2.690  2.974  2.786  2.084  2.189  2.843  2.601  3.054  2.767  2.339  2.254  3.063  2.774  2.923  2.851  2.288  2.194  2.844  2.719  2.962  2.463  2.672  2.313  2.772  2.692  2.946  2.881  2.059  2.226  2.809  2.690  3.011  2.641  2.228  2.281  2.798  2.661  •  222  Appendix H Reliability index database for the bridge bent with isolation  D(mm)  B (mm)  ft  P  1550.000  550.000  2.210  3.801  1550.000  616.700  2.082  3.641  1550.000  683.300  1.906  3.449  1550.000  750.000  1.644  3.161  1550.000  816.700  1.360  2.546  1550.000  883.300  1.229  1.735  1550.000  950.000  1.190  1.102  1633.000  550.000  2.491  3.763  1633.000  616.700  2.422  3.650  1633.000  683.300  2.277  3.522  r  223  2  
