International Conference on Applications of Statistics and Probability in Civil Engineering (ICASP) (12th : 2015)

Targeted random sampling for time-invariant reliability analysis Shields, Michael D.; Sundar, V. S. 2015-07

Full Text
12th International Conference on Applications of Statistics and Probability in Civil Engineering, ICASP12
Vancouver, Canada, July 12-15, 2015

Targeted Random Sampling for Time-invariant Reliability Analysis

Michael D. Shields
Assistant Professor, Department of Civil Engineering, The Johns Hopkins University, Baltimore, MD, USA

V. S. Sundar
Postdoctoral researcher, Department of Civil Engineering, The Johns Hopkins University, Baltimore, MD, USA

ABSTRACT: The Targeted Random Sampling (TRS) method is extended to moderate-to-high dimensional time-invariant reliability analysis. TRS is an adaptive sampling method rooted in the stratified sampling variance reduction technique and represents a specific algorithm for Refined Stratified Sampling. In this work, a Markov chain is used to sample the space densely in the vicinity of the limit state, and multi-dimensional stratum division is proposed along with a neural network based approximation of the performance function. The method is shown to perform efficiently and accurately for estimating the probability of failure for a 20-dimensional problem.

In a probabilistic reliability analysis, the uncertainties associated with the external loading, structural and material properties are quantified in terms of an n-dimensional random vector X with joint probability density function (pdf) pX(x). In general, the components of the random vector are taken to be mutually correlated and non-Gaussian. When random fields or random process models are used, it is typically assumed that these can be adequately represented in terms of random variables through a suitable discretization scheme (Ghanem and Spanos (1991); Shinozuka and Deodatis (1991); Sudret and Der Kiureghian (2000)).
Defining a function g(X) that serves as a metric for structural performance, such that g(X) < 0 represents unsatisfactory performance of the structure under consideration and g(X) = 0 is referred to as the limit surface separating the “safe” and “failure” domains, the probability of failure of the structural system with respect to g(X) is defined as

PF = ∫_{g(x)≤0} pX(x) dx = ∫_{R^n} I[g(x) ≤ 0] pX(x) dx = ⟨I[g(x) ≤ 0]⟩   (1)

where I[•] is the indicator function and ⟨•⟩ is the expectation operator.

There exist broadly two alternative approaches for solving Eq. (1): Taylor series expansion-based methods utilizing reliability indices, and sampling-based methods (e.g. Monte Carlo simulation, MCS) (Melchers (1999)). In the first class of methods, the performance function, in the standard normal space, is expanded as a first order Taylor series (first order reliability method, FORM) and the shortest distance from the origin to the limit surface is taken as a measure of reliability, termed the Hasofer-Lind reliability index, βHL (Hasofer and Lind (1974)). Denoting by U the vector of random variables in the standard normal space and by G(U) the corresponding performance function, the problem of reliability analysis is posed as: minimize √(UᵀU) subject to G(U) = 0. The probability of failure is then expressed as PF = Φ(−βHL), where Φ(•) is the standard normal cumulative distribution function.

The transformation to the standard normal space from the basic random variable space is carried out using either the Nataf transformation or the Rosenblatt transformation (Melchers (1999), Madsen et al. (2006)). For structures with simple limit states, these approaches also lead to the determination of a design point (e.g.
the point of most probable failure) and measures of sensitivity of the failure probability with respect to the basic random variables. An important feature of these methods is that the problem of evaluating a multi-dimensional integral over an irregular domain is posed as an equivalent problem in constrained non-linear optimization. In nearly all practical applications, however, the method remains approximate in nature, and it produces poor approximations for problems with complex (e.g. nonlinear or discontinuous) limit states. These are the problems of specific interest in this work.

On the other hand, sampling-based methods utilizing MCS produce a point estimate of the probability of failure that is exact as the sample size becomes large. The basic idea of MCS is to simulate realizations of the random vector X according to the prescribed joint pdf and use statistical tools to estimate the probability of failure. Denoting a set of N realizations of the basic random variable vector by {x^(i)}, i = 1, ..., N, an estimate of the probability of failure is obtained as

P̂F = (1/N) Σ_{i=1}^{N} I[g(x^(i)) ≤ 0]   (2)

It can be shown that P̂F is an unbiased and consistent estimator of PF with variance equal to PF(1 − PF)/N (Rubinstein and Kroese (2008)). It should be noted that the sampling variance is independent of n, the size of the random vector X. Also, by virtue of the strong law of large numbers, it can be shown that P̂F → PF with probability 1 as N → ∞. It is clear that the variance of the estimator is inversely proportional to the sample size; hence, in order to estimate a probability of failure as low as 10^-4, a minimum of 10^5 to 10^6 samples is required. This poses severe computational issues, especially if the performance function is defined implicitly using, for example, a finite element code, as the computational time for one structural analysis may be substantial depending on the size and type of problem being solved.

Building on the foundations laid by FORM and MCS, several improvements have been proposed.
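Before turning to those improvements, the crude MCS estimator of Eq. (2) can be sketched as follows (the limit state and sampler here are made-up stand-ins for illustration, not taken from the paper):

```python
import numpy as np

def mcs_failure_probability(g, sample_X, N, seed=0):
    """Crude Monte Carlo estimate of P_F = P[g(X) <= 0], as in Eq. (2)."""
    rng = np.random.default_rng(seed)
    x = sample_X(rng, N)                  # N realizations of X, shape (N, n)
    indicator = (g(x) <= 0.0)            # I[g(x^(i)) <= 0]
    pf_hat = indicator.mean()            # (1/N) * sum of indicators
    # Sampling variance of the estimator: P_F (1 - P_F) / N
    var_hat = pf_hat * (1.0 - pf_hat) / N
    return pf_hat, var_hat

# Illustrative 2-d example: X ~ N(0, I), failure when X1 + X2 >= 6,
# so the exact P_F is the small tail probability Phi(-6/sqrt(2))
g = lambda x: 6.0 - x[:, 0] - x[:, 1]
sample_X = lambda rng, N: rng.standard_normal((N, 2))
pf_hat, var_hat = mcs_failure_probability(g, sample_X, N=1_000_000)
```

Even for this mildly rare event (PF on the order of 10^-5), a million samples yield only a handful of failures, which illustrates the computational burden the text describes.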
In the case of reliability index based methods, the focus has been on accounting for multiple design points and implicitly defined performance functions (e.g. Bucher and Bourgund (1990), Der Kiureghian and Dakessian (1998)). Improving the performance of MCS equates to reducing the variance of the statistical estimator given in Eq. (2), i.e. obtaining a better estimate with a smaller sample size. Such methods are generally referred to as variance reduction techniques.

Numerous variance reduction methods have been developed for reliability analysis, the most widely used being importance sampling (Engelund and Rackwitz (1993)), subset simulation (Au and Beck (2001)), and line sampling (Koutsourelakis et al. (2004)). While these methods produce significant improvement over classical MCS, they often produce estimates with a large coefficient of variation (Schuëller and Pradlwarter (2007)). The motivation for the Targeted Random Sampling (TRS) methodology proposed in this work is to produce sample-based reliability estimates for moderate-dimensional problems with complex limit states in a small number of samples, with a reduced coefficient of variation when compared to existing methods.

The TRS method (Shields and Sundar (2014)) is based upon the application of a variance reduction technique called stratified sampling to reliability analysis. In particular, the Refined Stratified Sampling (RSS) method developed by Shields et al. (2014), wherein the strata that decompose the probability space are divided to efficiently add samples to a stratified design, underpins the TRS method. The method presents a distinct break from the classical Monte Carlo approach in certain regards. In particular, it operates by decomposing the probability space of the random vector into a space-filling set of disjoint regions (strata) that are defined from a carefully selected (“targeted”) set of random samples (i.e. samples are targeted in the vicinity of the limit state).
The probability of failure is then estimated by determining the total volume associated with strata in the failure domain. In other words, rather than utilizing classical unbiased statistical estimators (e.g. Eq. (2), where the denominator N must be very large), the method operates by targeting samples in such a way that the limit state can be approximated through an efficient partitioning of the probability space.

The focus of the present work is on extending the original TRS method to problems in high dimensions. The method is illustrated for a 20-dimensional time invariant reliability problem with an explicitly defined limit state function. Note, however, that the method provides a generalized framework for any complex limit state (explicit or implicit). We begin with a brief background of the Refined Stratified Sampling concepts upon which TRS is constructed. A few drawbacks of the original TRS method are discussed along with possible solution strategies. An algorithmic description of the proposed multi-cut TRS method is then provided, followed by numerical illustration and conclusions.

1. STRATIFIED AND REFINED STRATIFIED SAMPLING

Stratified sampling operates by dividing the probability space Ω into a set of M subdomains (strata) Ωi that are space filling (∪_{i=1}^{M} Ωi = Ω) and disjoint (Ωi ∩ Ωj = ∅ for i ≠ j). Samples are then drawn from within each of the strata and the quantity of interest, Θ, is estimated by

Θ̂ = Σ_{i=1}^{M} (pi / Mi) Σ_{j=1}^{Mi} yij   (3)

where pi is the volume of the ith stratum, Mi is the number of samples drawn from the ith stratum, and yij is the response quantity evaluated from the jth sample drawn from stratum i.

In this setting, the Refined Stratified Sampling method can be employed to add samples to sparsely sampled regions of the space by dividing specifically selected strata to obtain a more desirable distribution of samples in the space.
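The stratified estimator of Eq. (3) can be sketched as follows, here for a rectilinear stratification of the unit hypercube (the example strata and response function are made up for illustration):

```python
import numpy as np

def stratified_estimate(strata, response, samples_per_stratum, seed=0):
    """Stratified sampling estimate of E[y] per Eq. (3).

    strata: list of (lower, upper) corner pairs of rectilinear strata
            partitioning the unit hypercube [0, 1]^n.
    """
    rng = np.random.default_rng(seed)
    theta_hat = 0.0
    for (lo, up) in strata:
        lo, up = np.asarray(lo, float), np.asarray(up, float)
        p_i = np.prod(up - lo)                           # stratum volume p_i
        M_i = samples_per_stratum
        x = lo + (up - lo) * rng.random((M_i, lo.size))  # uniform within stratum
        theta_hat += (p_i / M_i) * np.sum(response(x))   # (p_i/M_i) * sum_j y_ij
    return theta_hat

# 2-d example: four equal quadrants of [0,1]^2, response y = x1 + x2 (mean = 1)
strata = [((0, 0), (.5, .5)), ((.5, 0), (1, .5)),
          ((0, .5), (.5, 1)), ((.5, .5), (1, 1))]
y = lambda x: x[:, 0] + x[:, 1]
theta_hat = stratified_estimate(strata, y, samples_per_stratum=100)
```

Because each stratum is sampled separately and weighted by its volume, the estimate typically has lower variance than crude MCS with the same total number of samples.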
In its initial implementation (Shields et al. (2014)), an initial stratification of the space is provided with a single sample drawn from each stratum. Strata are then selected for division based upon their associated probability weights (i.e. larger strata are divided first). To clarify this, consider N n-dimensional vectors scattered in an n-dimensional space. Assume that there exists an initial rectilinear stratification. The volumes of the strata are given as pi = Π_{j=1}^{n} Li^j, where Li^j is the length of stratum i in the jth dimension. The stratum corresponding to pmax = max_{1≤i≤M} pi is chosen as the stratum to divide. The cut direction is defined as the dimension in which this stratum is longest, i.e. j = argmax_{1≤k≤n} L^k. The stratum is then divided in half along this direction, and a sample is drawn randomly from the newly created empty stratum. In this way, the RSS method creates an even distribution of samples across the space, and it has been proven to reduce variance when compared to simply adding samples to existing strata without stratum division. Additional details can be found in Shields et al. (2014).

In general, an even distribution of samples throughout the space may reduce variance for certain statistics (like estimates of the mean). However, for reliability analysis the desire is to produce samples that are concentrated in the vicinity of the limit state. This can be accomplished by modifying the RSS algorithm to divide strata according to a combination of the stratum size and the value of the performance function g(X). That is, the stratum to break is chosen as the one with the highest probability content amongst the strata that lie in the vicinity of the limit surface. Specifically, the TRS method represents a specific RSS algorithm that divides strata that are near the limit state in proportion to the stratum size (i.e. large strata in the vicinity of the limit state are divided first).
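The basic RSS refinement step (divide the largest stratum in half along its longest dimension, then sample the new empty half) can be sketched as follows; the stratum representation is a made-up convenience, not the authors' implementation:

```python
import numpy as np

def rss_divide(strata, rng):
    """One RSS refinement step on rectilinear strata in [0, 1]^n.

    strata: list of dicts with 'lo' and 'up' corner arrays.
    Returns the random sample drawn from the newly created stratum.
    """
    # Select the stratum with maximum probability content (volume).
    volumes = [np.prod(s['up'] - s['lo']) for s in strata]
    i = int(np.argmax(volumes))
    lo, up = strata[i]['lo'].copy(), strata[i]['up'].copy()

    # Cut direction: the dimension in which the stratum is longest.
    j = int(np.argmax(up - lo))
    mid = 0.5 * (lo[j] + up[j])

    # Shrink the old stratum to the lower half; append the upper half as new.
    strata[i]['up'][j] = mid
    new_lo, new_up = lo.copy(), up.copy()
    new_lo[j] = mid
    strata.append({'lo': new_lo, 'up': new_up})

    # Draw a random sample from the newly created empty stratum.
    return new_lo + (new_up - new_lo) * rng.random(new_lo.size)

rng = np.random.default_rng(1)
strata = [{'lo': np.zeros(2), 'up': np.ones(2)}]
x_new = rss_divide(strata, rng)   # the unit square becomes two half-strata
```

The TRS variant described in the text differs only in the selection rule: the division is restricted to strata adjacent to the limit state rather than the globally largest stratum.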
In this way, the definition of the strata is refined to partition the space into “failure” strata and “safe” strata. It is to be noted that the stratification is performed in the probability space, where the random variables are independent and uniformly distributed on [0,1]; thus, calculating volumes of rectilinear strata in high dimensions does not pose a problem. Also, it is assumed that there exists a transformation, T, that relates the basic correlated non-Gaussian random variable space to the probability space, i.e. X = T(Z), and vice-versa, where Z ∼ U[0,1] (e.g. the Nataf transformation).

2. TARGETED RANDOM SAMPLING

The TRS method detailed in Shields and Sundar (2014) is briefly described below. Its drawbacks are discussed, and an improved algorithm that overcomes these deficiencies is presented in the next section. The steps for the implementation of the TRS method are as follows:

• Produce an initial stratified design. Note that this initial stratified design must possess at least one sample in each disjoint failure region.
• Identify all “fail-safe” pairs composed of points adjacent to one another but on opposite sides of the limit state. The strata corresponding to the “fail-safe” pairs make up the so-called target sampling subdomain. This is the domain from which future samples are drawn.
• Select the stratum from the target sampling subdomain that has the maximum probability content (i.e. largest volume). Divide this stratum in one direction by interpolating to identify the approximate location of the limit state.
• Generate a random sample from the newly defined empty stratum.
• The probability of failure is reported as the sum of the probability content (strata volumes) associated with all the failure samples.
• Repeat until satisfactory convergence is achieved.

2.1. Drawbacks

A few shortcomings of the original TRS method are discussed along with solution strategies.

2.1.1. Locating the failure region

The TRS method requires at least one sample to lie within each disjoint failure region in order to initiate the algorithm. Obtaining these samples may not be a trivial task, possibly due to the implicit and highly non-linear nature of the limit state function. Intuitively, one may sample from the tails of the pdfs of the random variables. However, such intuition fails for problems whose failure domains are not associated with rare values of individual random variables. Instead, failure may be associated with some combination of undesirable values of the random variables that need not lie in the tails of the distributions. This is certainly the case in the “failure island” scenario, where the limit surface forms a closed region in the probability space. Additionally, this strategy may be computationally inefficient when applied to high dimensional problems because the number of strata grows exponentially with dimension. Consider, for example, a simple stratification where each dimension is cut once to produce two strata. In 2 dimensions, this produces 2^2 = 4 strata. But in 20 dimensions, this produces 2^20 = 1,048,576 strata, clearly an unreasonable initial stratification.

To identify an initial sample set possessing the requisite samples in the failure domain, it may be possible to use the end results from a FORM analysis as a starting point. Using the design point obtained from FORM, importance sampling may be performed to obtain a small number of samples (say 10 or fewer) in the failure domain. However, this does not address the challenge of identifying multiple failure domains. To solve this problem in a robust and computationally efficient manner, we propose to initialize multiple Markov chains in the standard normal space that are designed to converge to the failure domain, much like the approach used in the subset simulation method.
Multiple chains are initiated in order to facilitate convergence to multiple failure domains. This process is described in Section 3.1.

2.1.2. Curse of dimensionality

For high dimensional problems, dividing strata in a single dimension, as proposed in the original TRS method, results in slow convergence due to the poor resolution of the performance function at the limit surface. One way to overcome this problem is to employ simultaneous cuts along all the dimensions. Though this approach seems intuitive, the associated computational cost is high because at each cut, the number of new samples required (and thus the number of performance function evaluations) is often equal to the dimension of the problem. In order to alleviate this burden, an approach utilizing neural networks is used. The neural network is trained and validated using the samples obtained as states of the initial Markov chains. Once the neural network is trained, the performance function value at each of the added samples is approximated by the trained neural network. Additional “real” samples can be strategically placed within large strata where it is expected that the neural network may be poorly resolved.

3. MULTI-CUT TARGETED RANDOM SAMPLING

This section provides a step-wise procedure for implementing the multi-cut TRS algorithm designed to address the drawbacks of the original TRS method outlined above.

3.1. Locating failure domains

A set of nc suitably designed Markov chains is used to identify and explore the failure domain. Each Markov chain is constructed in such a way that forward propagation of the chain implies that the chain is marching toward the failure domain, i.e. if ḡ1, ḡ2, ..., ḡi denote the performance function values evaluated at the states of the chain, then ḡ1 > ḡ2 > ... > ḡi. For each of the Markov chains, starting from points ui, i = 1, 2, ..., nc, run the chain such that nf samples are obtained in the vicinity of the limit surface. The propagation of the Markov chain from the ith state to the (i+1)th state proceeds as follows:

• Choose a proposal density function q(•), say N(xi, σ). Let g(xi) = gi. The proposal density function is, in general, chosen to be a pdf which is easy to sample from, for example normal or uniform. Choosing a normal/uniform pdf as the proposal simplifies the calculation of the acceptance ratio, α, due to its symmetric form (Rubinstein and Kroese (2008)).
• Sample y ∼ q(xi).
• Calculate the acceptance probability α = min[1, φ(y)/φ(xi)], where φ(•) is the n-dimensional standard normal pdf.
• Evaluate gy = g(y).
• Generate z ∼ U[0,1].
• Acceptance criterion before the Markov chain crosses the limit surface (i.e. while g(xi) > 0 for all states so far):

  xi+1 = y if α > z and gy < gi; otherwise xi+1 = xi.

• Acceptance criterion after the Markov chain crosses the limit surface:

  xi+1 = y if α > z and gf < gy < gs; otherwise xi+1 = xi,

  where gs is the value of the performance function at the sample state immediately before the Markov chain leaves the safe domain for the first time, and gf is the performance function value at the sample state immediately after the Markov chain crosses into the failure domain for the first time.

• Through a suitable post-stratification technique (here using rectilinear grids), assign strata to each of the samples generated from the Markov chains. One possible methodology is explained in Section 4.

Each Markov chain initialized above is designed to converge toward the limit state. Once the limit state is crossed, the Markov chain is redesigned to concentrate samples in the immediate vicinity of the limit state. In this way, the samples converge to the failure domain and then “trace” its boundary. In any case, the motivation behind this step is to obtain as many samples as possible in the vicinity of the limit surface.
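A minimal sketch of the pre-crossing phase of such a chain, assuming standard normal coordinates and a symmetric Gaussian proposal (the function names and the example limit state are illustrative, not the authors' code):

```python
import numpy as np

def limit_seeking_chain(g, x0, n_steps, sigma=0.5, seed=0):
    """Metropolis-type chain that drifts toward the failure domain:
    a proposal is accepted only if alpha > z AND it decreases g
    (the pre-crossing acceptance rule described in the text)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, float)
    g_x = g(x)
    states = [x.copy()]
    for _ in range(n_steps):
        y = x + sigma * rng.standard_normal(x.size)    # y ~ N(x_i, sigma^2 I)
        # Symmetric proposal, so alpha = min(1, phi(y)/phi(x)) in closed form
        alpha = min(1.0, np.exp(0.5 * (x @ x - y @ y)))
        g_y = g(y)
        z = rng.random()
        if alpha > z and g_y < g_x:                    # accept only if closer to failure
            x, g_x = y, g_y
        states.append(x.copy())
    return np.array(states), g_x

# Illustrative 2-d limit state: g(x) = 5 - x1 - x2, safe at the origin
g = lambda x: 5.0 - x[0] - x[1]
states, g_final = limit_seeking_chain(g, x0=[0.0, 0.0], n_steps=200)
```

By construction the performance function values along the chain are non-increasing, so the states march monotonically toward (and eventually across) the limit surface; the post-crossing rule in the text then confines the chain to a band gf < gy < gs around it.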
Post-stratification of the samples allows the volume of the failure domain to be assessed and an initial estimate of the probability of failure to be produced.

3.2. Multiple cuts

The process for utilizing multiple cut planes for stratum refinement is now described. In the same manner as the original TRS algorithm, identify “fail-safe” sample pairs and the associated strata (i.e. the target sampling subdomain). From these pairs, let xfa, a = 1, 2, ..., nf, and xsb, b = 1, 2, ..., ns denote the nf samples in the failure domain and the ns samples in the safe domain, respectively. Denote their corresponding strata by Ωfa, a = 1, 2, ..., nf and Ωsb, b = 1, 2, ..., ns. For the sake of demonstration, consider a failure point xf whose stratum is adjacent to k strata corresponding to safe points xsi, i = 1, 2, ..., k. For each xsi, i = 1, 2, ..., k, let di denote the dimension along which its stratum shares a boundary with the stratum containing xf. For each pair xf and xsi, the stratum pair is divided along the dimension di at the interpolated approximate location of the limit state function g(x) = 0. This creates k new strata from which new random samples are drawn. As previously, the probability of failure is determined as the sum of the strata weights corresponding to all failure samples. The iterations are terminated after a satisfactory level of convergence in the desired estimate is observed.

3.3. Neural network-based sampling

The above multi-cut TRS method necessitates k new samples for every existing sample in the failure domain, which is computationally quite expensive. To mitigate this cost, an artificial neural network is trained to approximate the performance function.
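The paper trains a neural network for this surrogate; as a dependency-free stand-in, the sketch below fits a least-squares polynomial surrogate to previously evaluated samples and then uses it in place of the true g for new points (the basis, sample sizes, and limit state are all illustrative assumptions, not the authors' network):

```python
import numpy as np

def fit_surrogate(X_train, g_train):
    """Least-squares polynomial surrogate of the performance function.
    Stands in for the paper's neural network approximation."""
    def features(X):
        # [1, x_1..x_n, x_1^2..x_n^2]: a simple quadratic basis
        return np.hstack([np.ones((X.shape[0], 1)), X, X**2])
    coef, *_ = np.linalg.lstsq(features(X_train), g_train, rcond=None)
    return lambda X: features(np.atleast_2d(X)) @ coef

# Train on previously evaluated samples (drawn at random for illustration;
# in TRS these would be the Markov chain states near the limit surface)
rng = np.random.default_rng(2)
g = lambda X: 5.0 - X.sum(axis=1)          # true (expensive) performance function
X_train = rng.standard_normal((300, 20))
g_hat = fit_surrogate(X_train, g(X_train))

# Cheap approximate evaluations at new points replace real g calls
X_new = rng.standard_normal((5, 20))
approx = g_hat(X_new)
```

Because this linear limit state lies in the span of the basis, the surrogate reproduces g essentially exactly here; for a genuinely nonlinear g the fidelity would depend on the basis (or network) and on the concentration of training samples near the limit surface, exactly as the text discusses.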
Using this artificial neural network, it is possible to approximate the performance function at the randomly generated points in the new strata, thus replacing the expensive performance function evaluation through, for example, a finite element model.

In general, the neural network based performance function approximation will be very accurate in the vicinity of the limit state, given the concentration of samples in this important region produced by the Markov chains. The neural network is likely to be far less accurate in regions some distance from the limit state. However, given the nature of the stratification process, specifically that strata are refined in the vicinity of the limit state only, the neural network will not be utilized in these inaccurate regimes. Additionally, sparsity of samples in certain regions near the limit state may cause local inaccuracies in the neural network. In such cases, it is possible to produce a small number of true performance function evaluations to locally refine the approximation and improve the strata refinement.

4. NUMERICAL ILLUSTRATION

The proposed multi-cut TRS algorithm is illustrated through a 20-dimensional problem (Engelund and Rackwitz (1993)). The performance function is defined as

g(X) = Σ_{i=1}^{20} Xi + C   (4)

where C = −5.343, producing a convex limit surface. The probability of failure is defined as PF = P[g(X) ≤ 0].

Twenty Markov chains are initiated in order to obtain approximately 400 samples in the vicinity of the limit surface. Around 70% of these samples are used to train the neural network and the remainder are used for validation and testing. On average, between 700 and 1000 total performance function evaluations are required to obtain these samples, including those necessary for convergence to the limit state and those rejected by the chains. Post-stratification of the Markov chain sample set is performed by adding each sample sequentially.
As a sample is added, it will fall within a stratum that is already occupied by a previously added sample. The stratum in which it falls is divided along the direction of the maximum distance between it and the other sample in that stratum. This method of post-stratification is admittedly simplistic and may not be well-suited for many problems. More advanced initial stratification methods that more accurately approximate the limit state are currently under development. The multi-cut TRS algorithm is run for 20 iterations. The number of Markov chains is problem dependent and needs to be selected in such a way that at least one sample is obtained per disjoint failure region. A heuristic approach is to set the number of chains equal to the number of dimensions. This, however, may not always be the optimal number and, to the extent possible, should be inferred from the problem itself.

The proposed method is compared with subset simulation (SS) (Au and Beck (2001)) and importance sampling (IS) based on the design point (Melchers (1999), Madsen et al. (2006)). The efficiency of the methods is compared in terms of the coefficient of variation, COV (over 20 independent trials), and the number of performance function evaluations Ng, as reported in Table 1. The exact probability of failure for the problem considered is 1×10^-6 (Engelund and Rackwitz (1993)). For the subset simulation method, 500 samples are used for determining the intermediate conditional probabilities of failure in increments of 0.1.
The importance sampling estimate is obtained using 1000 samples, with the importance sampling density taken as normal with mean located at the design point and identity covariance matrix.

As witnessed from Table 1, the performance of the proposed multi-cut TRS method is comparable to importance sampling, and it produces approximately the same COV as subset simulation with approximately 1/3 the number of function evaluations. The large COV in the TRS estimates is likely due to the limited number of TRS iterations along with the crudeness of the initial stratification. This can also be witnessed from Figure 1, which shows the convergence of the proposed method over 20 iterations for 10 independent trials. The estimate of the probability of failure will converge to the true value as the number of iterations is increased beyond 20, provided the Markov chains have located all the regions that contribute significantly to the probability of failure. Given that most of the trials produce a good estimate of PF, 20 iterations can be deemed suitable for the chosen example. The trials that do not converge to the true value are included to emphasize the fact that the number of iterations, and the estimate of PF, crucially depend on the identification of all the failure regions by the Markov chains.

Table 1: Comparison of estimates of PF obtained using Targeted Random Sampling (TRS), Importance Sampling (IS), and Subset Simulation (SS).

Method   Estimate (COV)        Ng
TRS      1.2002×10^-6 (56%)    1000 (approx.)
IS       8.9474×10^-7 (40%)    1000
SS       1.0888×10^-6 (51%)    2700
FORM     5.9942×10^-4          –
SORM     6.4885×10^-4          –

5. CONCLUSIONS

The Targeted Random Sampling method has been extended to handle moderate-to-high dimensional problems.
The approach is based on the application of multi-directional stratum division and neural network based performance function approximation. Identification of all disjoint failure domains and a good initial stratification technique are crucial to the performance of the algorithm. The authors are working on the development of improved initial stratification methodologies, and on extending the method to time variant reliability problems.

Figure 1: Convergence of the multi-cut TRS method for 10 independent trials.

6. REFERENCES

Au, S.-K. and Beck, J. L. (2001). “Estimation of small failure probabilities in high dimensions by subset simulation.” Probabilistic Engineering Mechanics, 16(4), 263–277.

Bucher, C. and Bourgund, U. (1990). “A fast and efficient response surface approach for structural reliability problems.” Structural Safety, 7(1), 57–66.

Der Kiureghian, A. and Dakessian, T. (1998). “Multiple design points in first and second-order reliability.” Structural Safety, 20(1), 37–49.

Engelund, S. and Rackwitz, R. (1993). “A benchmark study on importance sampling techniques in structural reliability.” Structural Safety, 12(4), 255–276.

Ghanem, R. G. and Spanos, P. D. (1991). Stochastic finite elements: a spectral approach. Springer.

Hasofer, A. M. and Lind, N. C. (1974). “Exact and invariant second-moment code format.” Journal of the Engineering Mechanics Division, 100(1), 111–121.

Koutsourelakis, P., Pradlwarter, H., and Schuëller, G. (2004). “Reliability of structures in high dimensions, part I: algorithms and applications.” Probabilistic Engineering Mechanics, 19(4), 409–417.

Madsen, H. O., Krenk, S., and Lind, N. C. (2006). Methods of structural safety. Courier Dover Publications.

Melchers, R. E. (1999).
Structural reliability analysis and prediction. Wiley.

Rubinstein, R. Y. and Kroese, D. P. (2008). Simulation and the Monte Carlo method. Wiley-Interscience.

Schuëller, G. and Pradlwarter, H. (2007). “Benchmark study on reliability estimation in higher dimensions of structural systems: an overview.” Structural Safety, 29(3), 167–182.

Shields, M., Teferra, K., Hapij, A., and Daddazio, R. (2014). “Refined stratified sampling for efficient Monte Carlo based uncertainty quantification.” Reliability Engineering and System Safety (under review).

Shields, M. D. and Sundar, V. S. (2014). “Targeted random sampling: A new approach for efficient reliability estimation for complex systems.” International Journal of Reliability and Safety (under review).

Shinozuka, M. and Deodatis, G. (1991). “Simulation of stochastic processes by spectral representation.” Applied Mechanics Reviews, 44(4), 191–204.

Sudret, B. and Der Kiureghian, A. (2000). Stochastic finite element methods and reliability: a state-of-the-art report. Department of Civil and Environmental Engineering, University of California.

