12th International Conference on Applications of Statistics and Probability in Civil Engineering, ICASP12
Vancouver, Canada, July 12-15, 2015

Rare Event Simulation: A Point Process Interpretation with Application in Probability and Quantile Estimation

Clément Walter
Laboratoire de Probabilités et Modèles Aléatoires, Univ. Paris Diderot, Paris, France
CEA, DAM, DIF, F-91297 Arpajon, France

Gilles Defaux
CEA, DAM, DIF, F-91297 Arpajon, France

ABSTRACT: This paper addresses the issue of estimating extreme probabilities and quantiles on the output of complex computer codes. We introduce a new approach to this problem in terms of a random walk in the output space, leading to two main results: (1) the number of samples required to get a realisation of a random variable in a given domain of probability measure p is drastically reduced, following a Poisson law with parameter log 1/p; and (2) we obtain parallel algorithms for estimating probabilities and quantiles, in particular the optimal parallel Multilevel Splitting algorithm, in which there is no longer any subset to define.

1. INTRODUCTION

Extreme event simulation and quantification stem from the need to ensure that undesirable events will not occur. Typically such events are failures of critical industrial systems, i.e. systems whose failure is regarded as a massive catastrophic situation. Usually the system is a "black box" whose output determines safety/failure domains.

Formally, let X ∈ ℝ^d be a random vector and g a measurable function from ℝ^d to ℝ defining the failure domain F = {x ∈ ℝ^d | g(x) > q}. We seek either to measure F with respect to the distribution of X, µ_X(F) = P[X ∈ F] = p, or to recover q from a given p. The two main difficulties of this problem arise from the order of magnitude of p (say p < 10⁻⁵) and the computational cost of g (from a couple of hours to several months per call).

While the naive Monte Carlo (MC) approach fails to solve this problem because of the rarity of the sought event (its squared coefficient of variation is δ² ≈ 1/(Np), with N the sample size), two developments have brought improvements in estimating extreme probabilities. On the one hand, Importance Sampling (Robert and Casella, 2004; Asmussen and Glynn, 2007) modifies the distribution of X to lower the variance of the naive Monte Carlo estimator; unfortunately this method is intrusive and the search for an appropriate change of measure is not obvious. In particular, it is known that the optimal change depends on the quantity of interest and is thus not directly available. On the other hand, Multilevel Splitting (MS) methods use a finite increasing sequence (q_k)_{k=0..m} with q_0 = −∞ and q_m = q such that F = {g(X) > q} can be written as an intersection of nested subsets:

$$F = \bigcap_{k=1}^m \{g(X) > q_k\}$$

each of these subsets being measured by Markov chain Monte Carlo (McMC) (see Au and Beck (2001) or Glasserman et al. (1999) for an in-depth review of splitting techniques). This algorithm was further improved by Cérou et al. (2012), and Guyader et al. (2011) proposed a limit case that is optimal in terms of computational efficiency but disables parallel computing.

Recasting results from Guyader et al. (2011) and Huber et al. (2011), we introduce here a general approach to this issue in terms of a random walk in the output space, i.e. we focus on the real-valued random variable Y = g(X) ∈ ℝ.
This approach brings three main results: (1) the number of samples needed to get a realisation of X in the failure domain follows a Poisson law with parameter log 1/p, to be compared with the classical Geometric law with parameter p of naive Monte Carlo; (2) we present two new parallel estimators for probabilities and quantiles; (3) we link this framework with the MS algorithm and show that it yields the optimal implementation in terms of computational time against variance of the estimator. In particular, the optimal MS algorithm of Guyader et al. (2011) appears to be a specific implementation of the new estimator.

The outline of the paper is as follows: first we introduce the new framework and derive the two estimators for probability and quantile; then we compare the probability estimator with usual Multilevel Splitting algorithms; finally we discuss its implementation and provide examples of probability and quantile estimation on academic test cases.

2. RARE EVENT SIMULATION

We introduce here the key building block of our new estimators. Unlike usual Monte Carlo methods, we do not work with independent and identically distributed (iid) samples of the input random variable of interest X, but with Markov chains defined on the output real-valued random variable Y = g(X). Denote µ_Y its distribution and assume its cdf F_Y is continuous.

Definition 1 (Increasing random walk) Let Y_0 = −∞ and define recursively the Markov sequence (Y_n)_n such that for all n ∈ ℕ:

$$P[Y_{n+1} \in A \mid Y_0, \dots, Y_n] = \frac{\mu_Y(A \cap (Y_n, +\infty))}{\mu_Y((Y_n, +\infty))}$$

In other words, (Y_n)_n is a random increasing sequence where each element is generated conditionally greater than the previous one. Considering the sequences (Y_n)_{n≥1} and (T_n)_{n≥1} with T_n = −log(1 − F_Y(Y_n)), it can be shown that (T_n)_{n≥1} is distributed as the arrival times of a Poisson process with parameter 1 (Huber et al., 2011; Guyader et al., 2011; Simonnet, 2014). Thus the counting random variable of the number of events before q, M_q = card{n ≥ 1 | Y_n ≤ q}, follows a Poisson law with parameter −log(1 − F_Y(q)) = −log p = log 1/p.

Firstly, this means that simulating this Markov chain gives access to extreme realisations of a real-valued random variable (e.g. the output of a computer code) much faster than iid sampling, because of this "log attribute": on average log 1/p samples against 1/p with iid sampling. Secondly, we have a natural estimator of the quantity log 1/p, namely the maximum likelihood estimator (MLE) of the parameter of a Poisson law: given N random counting variables (M_q^i)_{i=1..N}, one has:

$$\widehat{\log 1/p}_{\mathrm{MLE}} = \frac{1}{N}\sum_{i=1}^N M_q^i \overset{\mathrm{def}}{=} \frac{M_q}{N}$$

M_q is then the counting random variable of a marked Poisson process with parameter N. Hence, for estimating p given q, one can build the following estimator, optimal by the Lehmann-Scheffé theorem:

$$\hat{p} = \left(1 - \frac{1}{N}\right)^{M_q} \qquad (1)$$
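A minimal numerical sketch of this mechanism may help fix ideas. It assumes Y is standard exponential, so that the conditional sampling is exact thanks to the memoryless property (this anticipates Remark 1 in Section 4.1); the function name count_events, the seed and the sample sizes are illustrative choices, not part of the method itself:

    import numpy as np

    rng = np.random.default_rng(0)

    def count_events(q, n_walks):
        # Number of states of each increasing random walk below q, for Y ~ Exp(1):
        # the memoryless property gives exact conditional sampling, Y_{n+1} = Y_n + Z
        counts = np.zeros(n_walks, dtype=int)
        for i in range(n_walks):
            y = rng.exponential()           # Y_1 ~ mu_Y
            while y <= q:
                counts[i] += 1
                y += rng.exponential()      # next state, conditionally greater than y
        return counts

    p_true = 1e-5
    q = -np.log(p_true)                     # for Exp(1), P[Y > q] = exp(-q)
    N = 1000
    M_q = count_events(q, N).sum()          # Poisson with parameter N*log(1/p)
    print(M_q / N)                          # MLE of log(1/p), close to 11.5
    print((1 - 1 / N) ** M_q)               # estimator (1), close to 1e-5

Each walk reaches the 10⁻⁵-rare level with about log 1/p ≈ 11.5 samples instead of the 1/p = 10⁵ an iid search would need on average.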
Alternatively, for estimating q given p, one can consider the m-th event of the marked Poisson process with m = ⌊N log 1/p⌋, i.e. the m-th smallest element of the N Markov chains. Denote q_m this element and q_{m+1} the following one; we then consider the following quantile estimator:

$$\hat{q} = \frac{1}{2}\,(q_m + q_{m+1}) \qquad (2)$$

Proposition 1 (Statistical properties of p̂)

$$E[\hat{p}] = p, \qquad \operatorname{var}[\hat{p}] = p^2\left(p^{-1/N} - 1\right) \underset{N\to\infty}{\sim} \frac{p^2 \log 1/p}{N}$$

This estimator has a smaller variance than the Monte Carlo one, exhibits logarithmic efficiency and almost achieves the Cramér-Rao bound −p² log p/N (Walter, 2015).

Proposition 2 (Limit distribution of q̂) Given that Y has a density f_Y, one has:

$$\sqrt{N}\,(\hat{q} - q) \xrightarrow[N\to\infty]{\mathcal{L}} \mathcal{N}\left(0,\; \frac{p^2 \log 1/p}{f_Y(q)^2}\right)$$

As for a Monte Carlo estimator, it has a bias of order 1/N, which in this case vanishes for the exponential distribution. Furthermore, this estimator has the nice property of providing a confidence interval without any estimation of the density f_Y: recalling that we have used m instead of the random variable M_q, we can consider m− and m+ such that P[M_q ∈ [m−, m+]] = 0.95, build q̂− and q̂+ accordingly, and thus obtain a 95% confidence interval for q̂.

To close this section, we stress that these estimators can really be seen as the twins of the Monte Carlo ones with a "log attribute", i.e. the statistical properties of p̂ and q̂ are similar up to a log added to the 1/p factor: denoting p̂_MC and q̂_MC the usual Monte Carlo estimators of a probability and a quantile, one has:

$$\operatorname{var}[\hat{p}_{MC}] \approx \frac{p^2}{Np} \;\longrightarrow\; \operatorname{var}[\hat{p}] \approx \frac{p^2 \log 1/p}{N}$$

$$\operatorname{var}[\hat{q}_{MC}] \approx \frac{p^2}{N f_Y(q)^2\, p} \;\longrightarrow\; \operatorname{var}[\hat{q}] \approx \frac{p^2 \log 1/p}{N f_Y(q)^2}$$

3. LINK WITH MULTILEVEL SPLITTING METHODS

The purpose of this section is to link this theoretical probability estimator with well-known MS algorithms. We first recall the main steps of this kind of algorithm and point out some of its principal issues. Finally, we show that our approach brings the best one could expect from MS methods in terms of variance of the estimator against computational time.

3.1. Definition of Multilevel Splitting

Originally, Multilevel Splitting (MS) methods work with a finite increasing sequence (q_i)_{i=0..m} such that q_0 = −∞ and q_m = q, write the target probability as a product of conditional ones:

$$P[g(X) > q] = \prod_{i=1}^m P[g(X) > q_i \mid g(X) > q_{i-1}]$$

and then seek to estimate each conditional probability with Markov chain Monte Carlo (McMC). It has been shown (Cérou et al., 2012) that, to lower the total variance of the estimator, the sequence (q_i)_i should be chosen so that all conditional probabilities are equal. When such a sequence cannot be inferred beforehand, the idea is to define it adaptively so that each conditional probability equals a given constant p_0 (p_0 = 0.1 in Au and Beck (2001)); this is known as the Adaptive Multilevel Splitting method. While providing an efficient way to implement the MS algorithm, it introduces a bias in the estimator. Furthermore, the question of a proper choice for p_0 is still open.

Basically, the workflow of adaptive MS is as follows (a code sketch is given at the end of this subsection):

1. Sample N iid random variables (X_n)_{n=1..N}
2. Get q_1 the p_0-quantile; i = 1
3. While q_i < q:
   • Regenerate the (1 − p_0) fraction of the samples from the p_0 fraction such that g(X_n) > q_i, with a Markov chain
   • Get q_{i+1} the p_0-quantile; i = i + 1
4. Set q_i = q
5. Get the final conditional probability p_f
6. p̂ = p_0^{i−1} · p_f

The regeneration step is crucial, as one requires iid samples to estimate the conditional probability properly. To this end, the Metropolis-Hastings algorithm is often used, with a burn-in parameter b meant to improve the convergence of the Markov chain to its stationary distribution, i.e. b samples are generated for each finally kept X_n.
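To make this workflow concrete, here is a minimal Python sketch of it for a standard Gaussian input. It restarts all chains from the survivors (a common variant of the regeneration step above) and uses the reversible Gaussian kernel introduced in Section 4.1 as Algorithm 2; the function adaptive_ms and all tuning values are illustrative assumptions, not the paper's reference implementation:

    import numpy as np

    rng = np.random.default_rng(1)

    def adaptive_ms(g, d, q, N=1000, p0=0.1, b=20, sigma=0.5):
        # Sketch of the adaptive MS workflow above, for X ~ N(0, I_d)
        X = rng.standard_normal((N, d))
        Y = np.array([g(x) for x in X])
        n_levels = 0
        while True:
            qi = np.quantile(Y, 1 - p0)      # level exceeded by a fraction p0 of samples
            if qi >= q:
                break
            n_levels += 1
            survivors = np.flatnonzero(Y > qi)
            idx = rng.choice(survivors, size=N)   # restart every chain from a survivor
            X, Y = X[idx], Y[idx]
            for _ in range(b):                    # b burn-in Metropolis moves
                Xstar = (X + sigma * rng.standard_normal((N, d))) / np.sqrt(1 + sigma**2)
                Ystar = np.array([g(x) for x in Xstar])
                acc = Ystar > qi                  # reject moves falling below the level
                X[acc], Y[acc] = Xstar[acc], Ystar[acc]
        return p0**n_levels * np.mean(Y > q)      # steps 5-6: p_hat = p0^(i-1) * p_f

    # toy check: g(X) ~ N(0, 1), so P[g(X) > 3] = 1 - Phi(3) ~ 1.35e-3
    print(adaptive_ms(lambda x: x.sum() / np.sqrt(2), d=2, q=3.0))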
The bias in this adaptive version comes from the estimation of a quantile at each step. Alternatively, it has been proposed to set p_0 = 1 − k/N; Bréhier et al. (2014) studied this parametrisation extensively and showed unbiasedness whatever k. However, while the optimal variance is found for k = 1, this choice disables parallel computing since at each step only one sample is regenerated.

3.2. Effective computing time

We now intend to challenge our new approach against MS methods in terms of effective computing time (ECT). By ECT we mean the unparallelised sequence of simulated samples required to reach a given precision on the variance of the final estimator. More precisely, if one considers that n_c cores are available for parallel computation, the ECT is the number of samples calculated by each core; for example a naive Monte Carlo estimator with sample size N has an ECT t_MC = ⌈N/n_c⌉ = ⌈(n_c p δ²)⁻¹⌉, with δ its coefficient of variation and p the sought probability.

In an adaptive MS algorithm, Cérou et al. (2012) showed that the number of levels m converges almost surely to ⌈(log p)/(log p_0)⌉ when N → ∞. If one considers that p_f ≈ p_0 and that the samples are iid within each subset, then one gets for the coefficient of variation δ_MS:

$$\delta_{MS}^2 \approx \frac{\log p}{\log p_0}\,\frac{1 - p_0}{N p_0}$$

The first step of MS requires N simulations, and then N(1 − p_0)·b samples are regenerated at each step. This gives:

$$t_{MS} = \frac{N}{n_c} + \left\lceil \frac{\log p}{\log p_0} \right\rceil b \,\max\!\left(\frac{N(1-p_0)}{n_c},\; 1\right)$$

Considering that the first term is negligible, we will have either N(1 − p_0) ≥ n_c, and so:

$$t_{MS} \approx \frac{b\,(\log p)^2}{n_c\,\delta_{MS}^2}\,\frac{(1-p_0)^2}{p_0\,(\log p_0)^2} \qquad (3)$$

or t_MS ≈ b log p/log p_0. The first expression decreases in p_0 while the second one increases. On the one hand this gives an optimal value of p_0 depending on n_c, p and the targeted precision; on the other hand one always has t_MS ≥ b(log p)²/(n_c δ²).

For our algorithm presented in Section 2, one core generates N/n_c Markov chains, and each Markov chain requires ≈ −log p simulations, which gives t ≈ (−log p) N/n_c ≈ (log p)²/(n_c δ²). As we shall see in the practical implementation part, a burn-in parameter has to be included here too, which finally gives t ≈ b(log p)²/(n_c δ²). Therefore our algorithm brings the best MS method in terms of ECT; basically, it allows for taking p_0 → 1 without losing the possibility of parallel computation.

To conclude this section, we point out that this optimality result is expected to hold every time an MS method is applied (maximum of a stochastic process, counting...), because the only assumption is that the output of the system is real-valued with a continuous cdf. Indeed, we argue that this random walk is the underlying mechanism of each MS method.
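As a numerical illustration of this comparison, the following lines evaluate the overhead factor of Eq. (3) against the ideal t ≈ b(log p)²/(n_c δ²); the values of p, δ² and n_c are arbitrary (chosen here to match the test case of Section 4.3) and the helper ect_ms is ours:

    import numpy as np

    # Illustrative values: p = 1e-7, CV = 0.12, nc = 100 cores, burn-in b = 20
    p, delta2, nc, b = 1e-7, 0.12**2, 100, 20

    def ect_ms(p0):
        # ECT of adaptive MS in the regime N(1 - p0) >= nc, Eq. (3)
        return b * np.log(p)**2 / (nc * delta2) * (1 - p0)**2 / (p0 * np.log(p0)**2)

    ect_walk = b * np.log(p)**2 / (nc * delta2)   # ECT of the random-walk estimator
    print(ect_ms(0.1) / ect_walk)   # ~ 1.53, the overhead of MS with p0 = 0.1
    print(ect_ms(0.5) / ect_walk)   # ~ 1.04: the overhead shrinks as p0 -> 1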
4. PRACTICAL IMPLEMENTATION

In this section we specify how to implement the simulation of the random walk and then present results on test cases.

4.1. Simulating conditional distributions

The simulation of the random walks requires access to conditional simulators. Indeed, the estimator requires generating the output Y according to the conditional law µ_Y(· | Y > y), i.e. generating the input X according to µ_X(· | g(X) > y) for a given y. A general idea is to use the convergence of an ergodic Markov chain to its unique invariant probability distribution to sample from a given distribution. Assuming µ_X has a pdf f_X, this means we intend to generate a Markov chain with stationary pdf 1_{g(x)>y} f_X(x)/P[g(X) > y]. This implementation is rather simple when a reversible transition kernel K is available, and the random-walk Metropolis-Hastings method (Hastings, 1970) provides such a kernel.

Algorithm 1 Metropolis-Hastings method
    Generate W, a standard multivariate Gaussian sample or a sample from a symmetric uniform distribution over a compact set of ℝ^d
    X* ← x + σW
    ρ ← min(1, f_X(X*)/f_X(x)) · 1_{g(X*)>y}
    Accept the transition X* with probability ρ; return x otherwise

In the special case where X ~ N(0, I), a direct construction is also possible (Cérou et al., 2012).

Algorithm 2 Direct construction for X ~ N(0, I)
    Generate W, a standard multivariate Gaussian sample
    X* ← (x + σW)/√(1 + σ²)
    Return X* if g(X*) > y, x otherwise

Remark 1 Depending on the distribution of Y, it may be possible to use iid samples directly. For example, if Y is exponentially distributed, then Y_{n+1} ~ Y_n + Z_n with Z_n an iid copy of Y; and if Y is a Pareto random variable, then Y_{n+1} ~ Y_n Z_n with Z_n an iid copy of Y.

4.2. Algorithms

We give here pseudo-code for estimating a probability or a quantile. We want to underline that there is nothing more to do than simulating several random walks. However, to make use of the Markov chain sampling presented in Section 4.1, one needs to handle several chains simultaneously: the starting point of a Markov chain has to be selected in a population already following the targeted distribution.

Algorithm 3 Generating N random walks
    Require: N, q
    Generate N copies (X_i)_{i=1..N} according to µ_X
    Y ← (g(X_1), ..., g(X_N)); N_event ← (0, ..., 0)
    while min Y < q do
        ind ← which(Y_i < q)
        for i in ind do
            N_event[i] ← N_event[i] + 1
            Generate X* ∼ µ_X(· | g(X) > g(X_i))
            X_i ← X*; Y_i ← g(X*)
        end for
    end while
    return N_event, (X_i)_{i=1..N}, (Y_i)_{i=1..N}

For the conditional generation step, it may be necessary to store the history of the positions of the (X_i)_{i=1..N} in order to select a starting point among samples generated according to thresholds lower than or equal to the current one. If one does not want to do so, the algorithm can be adapted by using "ind ← argmin Y" instead of "ind ← which(Y_i < q)". With this slight change, all the current states of the other Markov chains are valid candidates. However, it disables the sequential parallelisation possibility of this algorithm.

The probability estimator only requires the number of events of each random walk and is described in Algorithm 4. Basically, it is a wrap-up of Algorithm 3; note that Algorithm 3 returns a vector N_event containing the number of events of each random walk, not a single integer.

Algorithm 4 Probability estimator
    Require: N, n_c, q
    Run n_c instances of Algorithm 3 with parameters N and q
    for i in 1..n_c do
        N_event[i] ← Σ_{j=1}^N N_event^{(i)}[j], the total count of instance i
    end for
    N_event ← Σ_{i=1}^{n_c} N_event[i]
    p̂ ← (1 − 1/(n_c·N))^{N_event}
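The following Python sketch combines Algorithm 3 (in its argmin variant, which avoids storing the chain histories) with the aggregation step of Algorithm 4, for a standard Gaussian input and the kernel of Algorithm 2; the functions move and random_walks and all tuning values (b, σ, N, n_c) are illustrative assumptions:

    import numpy as np

    rng = np.random.default_rng(2)

    def move(x, level, g, b=20, sigma=0.3):
        # b Metropolis moves with the kernel of Algorithm 2 (X ~ N(0, I)),
        # targeting mu_X( . | g(X) > level); x must already satisfy g(x) > level
        for _ in range(b):
            xstar = (x + sigma * rng.standard_normal(x.shape)) / np.sqrt(1 + sigma**2)
            if g(xstar) > level:
                x = xstar
        return x

    def random_walks(g, d, q, N=10):
        # Algorithm 3, argmin variant: only the lowest walker is regenerated,
        # restarting from the current state of another chain (which lies above it)
        X = rng.standard_normal((N, d))
        Y = np.array([g(x) for x in X])
        n_event = 0
        while Y.min() < q:
            i = int(Y.argmin())
            n_event += 1
            j = rng.choice([k for k in range(N) if k != i])
            X[i] = move(X[j].copy(), Y[i], g)
            Y[i] = g(X[i])
        return n_event

    # Algorithm 4: aggregate the event counts of nc independent runs
    nc, N, q = 10, 10, 3.0
    M = sum(random_walks(lambda x: x.sum() / np.sqrt(2), d=2, q=q, N=N) for _ in range(nc))
    print((1 - 1 / (nc * N)) ** M)   # p_hat, to compare with 1 - Phi(3) ~ 1.35e-3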
The parallel implementation of the quantile estimator is a bit more involved, as the stopping criterion is expressed as a number of events over the total random walk. Indeed, to sum several partially simulated point processes, one has to make sure that they have all passed a given threshold q; the sum process is then complete up to this threshold q. We therefore suggest a two-fold algorithm: during the first step, the stopping criterion is a number of events on each single random walk; one then considers the greatest reached time and relaunches all the random walks until they also reach this time. The issue of choosing a suitable number of events m_0 for the first step is discussed in Walter (2015); the main idea is that the final number of simulated events at the end of the algorithm should be as close as possible to the targeted one. We give here the guideline directly:

$$m_0 = \left\lceil N \log 1/p + \frac{\beta^2}{2} - \frac{\beta\sqrt{\Delta}}{2} \right\rceil \quad \text{with} \quad \beta = b_{n_c} - \frac{\log\log 1/\alpha}{\sqrt{2\log n_c}}$$

b_{n_c} being the localisation parameter of the Gaussian law in the framework of Extreme Value Theory:

$$b_{n_c} = \sqrt{2\log n_c} - \frac{\log\log n_c + \log 4\pi}{2\sqrt{2\log n_c}},$$

Δ = β² + 4N log 1/p, and α a probability of not having enough events in the end (typically α = 0.05). Another requirement is to store a copy of the vector Y of Algorithm 3 instead of replacing its values, so as to keep a record of the times of the events and not only their number. This is all summed up in Algorithm 5.

Algorithm 5 Quantile estimator
    Require: N, n_c, p
    Run n_c instances of Algorithm 3 with statement "ind ← argmin Y" and stopping criterion "Σ N_event > m_0"
    q ← max_{i=1..n_c} min_{j=1..N} Y_j^{(i)}
    Relaunch the n_c instances with the original stopping criterion "min Y > q"
    Merge the recorded times of the n_c instances and sort them: (q_i)_i
    m ← ⌊−(N·n_c) log p⌋
    q̂ ← (q_m + q_{m+1})/2
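As a sanity check of the guideline above, the following direct transcription evaluates m_0; the function name and the example values are ours, and the parse of the formula follows the expressions given above:

    import numpy as np

    def m0(N, nc, p, alpha=0.05):
        # First-step event count m0 following the guideline above (Walter, 2015)
        log1p = np.log(1 / p)
        s = np.sqrt(2 * np.log(nc))
        b_nc = s - (np.log(np.log(nc)) + np.log(4 * np.pi)) / (2 * s)
        beta = b_nc - np.log(np.log(1 / alpha)) / s
        delta = beta**2 + 4 * N * log1p
        return int(np.ceil(N * log1p + beta**2 / 2 - beta * np.sqrt(delta) / 2))

    # e.g. with the watermarking case of Section 4.3.2: p = 4.704e-11, N = 10, nc = 100
    print(m0(N=10, nc=100, p=4.704e-11))   # a bit below N*log(1/p) ~ 238

The guideline deliberately stops the first step slightly before the targeted total N log 1/p, so that the completion step has probability at least 1 − α of providing enough events.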
4.3. Test cases

We now present the performance of Algorithms 4 and 5 on academic examples.

4.3.1. Probability estimator

The two-degrees-of-freedom damped oscillator sketched in Figure 1 was first proposed by Der Kiureghian and De Stefano (1991) and then used by Bourinet et al. (2011) and Dubourg et al. (2011). Table 1 presents the physical parameters and the probabilistic model used.

[Figure 1: A two-degrees-of-freedom damped oscillator (from Dubourg et al. (2011)).]

Igusa and Der Kiureghian (1985) showed that the mean-squared relative displacement of the secondary spring under a white-noise base acceleration with intensity S_0 writes:

$$E[x_s^2] = \pi \frac{S_0}{4\zeta_s \omega_s^3}\, \frac{\zeta_a\zeta_s}{\zeta_p\zeta_s\,(4\zeta_a^2+\theta^2)+\gamma\zeta_a^2}\, \frac{(\zeta_p\omega_p^3+\zeta_s\omega_s^3)\,\omega_p}{4\zeta_a\omega_a^4}$$

with γ = m_s/m_p, ω_p² = k_p/m_p, ω_s² = k_s/m_s, ω_a = (ω_p + ω_s)/2, ζ_a = (ζ_p + ζ_s)/2 and θ = (ω_p − ω_s)/ω_a. Finally, Der Kiureghian and De Stefano (1991) proposed that the limit-state function can be written, under reasonable approximations, as:

$$g(x) = C - \eta\, k_s \sqrt{E[x_s^2]}$$

with C the force capacity of the secondary spring and η a peak factor, here set to 3 as in Dubourg et al. (2011).

Table 1: Stochastic model of the oscillator; all parameters follow log-normal distributions. CV: coefficient of variation.

    Variable | Mean | CV (%) | Description
    m_p      | 1.5  | 10     | primary mass
    m_s      | 0.01 | 10     | secondary mass
    k_p      | 1    | 20     | primary stiffness
    k_s      | 0.01 | 20     | secondary stiffness
    ζ_p      | 0.05 | 40     | primary damping ratio
    ζ_s      | 0.02 | 50     | secondary damping ratio
    C        | 27.5 | 10     | force capacity
    S_0      | 100  | 10     | acceleration intensity

Since the model uses lognormal distributions while the reversible kernel of Algorithm 2 is defined in the standard space, an isoprobabilistic transformation is performed before each call to the limit-state function g.

Figure 2 compares the estimator of P[g(X) < 0] defined in Eq. (1) across different implementations, for a total of n_c·N = 1000 random walks, to illustrate the effect of the Markov chain sampling of Section 4.1. Results are presented as boxplots over 100 simulations, whiskers extending to the extreme values. The reference value (red dashed line) is obtained from a usual Subset Simulation (Au and Beck, 2001) with p_0 = 0.1 and N = 4 × 10⁶ samples, and the horizontal grey dotted lines stand for a 95% confidence interval around it.

[Figure 2: Failure probability (×10⁻⁷) estimated with a total of 1000 random walks simulated by batches of size N ∈ {1, 2, 5, 10, 20, 50, 100, 1000}.]

Hence one can say that the estimator converges well and that the parameter N of Algorithm 3 should not be too small; as a matter of fact, N ≳ 5 to 10 appears to be a conservative choice. We then benchmark Algorithm 4 against usual MS methods: the original Subset Simulation (SS) algorithm presented by Au and Beck (2001) with p_0 = 0.1, and the optimal non-parallel Last Particle Algorithm (LPA) of Guyader et al. (2011) with k = 1. All algorithms require simulating according to conditional laws, and the burn-in parameter is set to 20 for all of them. We run 100 simulations with n_c = 100 cores; Table 2 summarises the results. The total number of calls to the limit-state function, N_call, includes the burn-in.

Table 2: Comparison of Algorithm 4 against usual MS methods with n_c = 100 cores available. Methods: SS: Au and Beck (2001) with p_0 = 0.1 and N = 4500; LPA: Guyader et al. (2011) with N = 1000; Algorithm 4 with batches of size N = 10. Results from 100 simulations; p̂ is the empirically averaged probability, CV the empirical coefficient of variation; N_call and ECT are averaged over the 100 simulations.

    Method  | p̂ (×10⁻⁷) | CV   | N_call  | ECT
    SS      | 3.81      | 0.12 | 568 800 | 5 688
    LPA     | 3.85      | 0.13 | 292 446 | 292 446
    Algo. 4 | 3.75      | 0.12 | 296 932 | 3 400

The three algorithms have similar performance in terms of coefficient of variation, so a comparison in terms of number of calls to the limit-state function and of effective computing time is relevant. Firstly, the number of calls of SS is much greater than for LPA and Algorithm 4: LPA is the optimal MS algorithm in terms of computational efficiency and is also a particular implementation of Algorithm 4 (Walter, 2015). Secondly, the ECT of LPA is much greater because it is not parallel, making this implementation inefficient in practice. Furthermore, the gain in ECT between Algorithm 4 and SS is in good agreement with the theoretical formula (3): with p_0 = 0.1, (1 − p_0)²/(p_0 (log p_0)²) ≈ 1.53, while the empirical ratio is 5688/3400 ≈ 1.67. These results lead to the conclusion that Algorithm 4 is the parallel optimal MS algorithm, being 35% to 40% faster than the original Subset Simulation algorithm.

4.3.2. Quantile estimator

The watermarking detection example is used by Cérou et al. (2012) and Guyader et al. (2011). Let d ∈ ℕ* be the dimension of the input space and u a unit vector of ℝ^d; the failure domain is regarded as the interior of a double cone of axis u (see Merhav and Sabbag (2008)):

$$F = \left\{ x \in \mathbb{R}^d \;\middle|\; \Phi(x) = \frac{|x^T u|}{\|x\|} > q \right\} \qquad (4)$$

The analytic relation between p and q writes:

$$p = P[\Phi(X) > q] = 1 - G\left(\frac{(d-1)\,q^2}{1-q^2}\right)$$

with G the cdf of a Fisher variable with (1, d−1) degrees of freedom. We set here d = 20 and p = 4.704 × 10⁻¹¹ and estimate q = 0.95, as in Guyader et al. (2011) and Cérou et al. (2012).
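This closed-form relation is easy to check numerically; the snippet below is merely a verification of the formula above using the Fisher survival function from scipy:

    from scipy import stats

    d, q = 20, 0.95
    # p = 1 - G((d-1) q^2 / (1 - q^2)), with G the cdf of a Fisher F(1, d-1) variable
    p = stats.f.sf((d - 1) * q**2 / (1 - q**2), dfn=1, dfd=d - 1)
    print(p)   # ~ 4.704e-11, the value used in the text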
Results are presented as boxplots over 100 simulations, whiskers extending to the extreme values. As for the probability estimator, Figure 3 compares different implementations for a total of n_c·N = 1000 random walks.

[Figure 3: 4.704 × 10⁻¹¹ quantile estimated with a total of 1000 random walks simulated by batches of size N ∈ {1, 2, 5, 10, 20, 50, 100, 1000}; the estimates range roughly from 0.948 to 0.954 around the true value q = 0.95.]

The same conclusions as for the probability estimator apply, while the dimension is more than twice as large. As far as we know, this is the first parallel alternative to naive Monte Carlo for this problem, and it thus allows for a massive gain in quantile estimation.

5. CONCLUSION

This paper has presented a new approach to rare event simulation for real-valued random variables in terms of a random walk, and has shown its relation with a Poisson process with parameter 1. As soon as the performance function of a complex computer code is real-valued, this framework applies. It has been shown that it is related to Multilevel Splitting algorithms and that it gives their optimal implementation in terms of variance against computational time. This optimality has been proved theoretically and verified on the examples. In particular, it enables full parallelisation of the estimator. However, it also requires generating samples according to conditional distributions, which can bring some limitations to the parallel implementation.

ACKNOWLEDGEMENTS

The author would like to thank his advisor Josselin Garnier (University Paris Diderot) for his numerous suggestions and advice.

6. REFERENCES

Asmussen, S. and Glynn, P. W. (2007). Stochastic Simulation: Algorithms and Analysis, Vol. 57. Springer.

Au, S.-K. and Beck, J. L. (2001). "Estimation of small failure probabilities in high dimensions by subset simulation." Probabilistic Engineering Mechanics, 16(4), 263-277.

Bourinet, J.-M., Deheeger, F., and Lemaire, M. (2011). "Assessing small failure probabilities by combined subset simulation and support vector machines." Structural Safety, 33(6), 343-353.

Bréhier, C.-E., Lelièvre, T., and Rousset, M. (2014). "Analysis of adaptive multilevel splitting algorithms in an idealized case." arXiv preprint arXiv:1405.1352.

Cérou, F., Del Moral, P., Furon, T., and Guyader, A. (2012). "Sequential Monte Carlo for rare event estimation." Statistics and Computing, 22(3), 795-808.

Der Kiureghian, A. and De Stefano, M. (1991). "Efficient algorithm for second-order reliability analysis." Journal of Engineering Mechanics, 117(12), 2904-2923.

Dubourg, V., Deheeger, F., and Sudret, B. (2011). "Metamodel-based importance sampling for the simulation of rare events." arXiv preprint arXiv:1104.3476.

Glasserman, P., Heidelberger, P., Shahabuddin, P., and Zajic, T. (1999). "Multilevel splitting for estimating rare event probabilities." Operations Research, 47(4), 585-600.

Guyader, A., Hengartner, N., and Matzner-Løber, E. (2011). "Simulation and estimation of extreme quantiles and extreme probabilities." Applied Mathematics & Optimization, 64(2), 171-196.

Hastings, W. K. (1970). "Monte Carlo sampling methods using Markov chains and their applications." Biometrika, 57(1), 97-109.

Huber, M., Schott, S., et al. (2011). "Using TPA for Bayesian inference." Bayesian Statistics 9, 9, 257.

Igusa, T. and Der Kiureghian, A. (1985). "Dynamic characterization of two-degree-of-freedom equipment-structure systems." Journal of Engineering Mechanics, 111(1), 1-19.
Merhav, N. and Sabbag, E. (2008). "Optimal watermark embedding and detection strategies under limited detection resources." IEEE Transactions on Information Theory, 54(1), 255-274.

Robert, C. P. and Casella, G. (2004). Monte Carlo Statistical Methods. Springer.

Simonnet, E. (2014). "Combinatorial analysis of the adaptive last particle method." Statistics and Computing, 1-20.

Walter, C. (2015). "Moving particles: A parallel optimal multilevel splitting method with application in quantiles estimation and meta-model based algorithms." Structural Safety, 55, 10-25.
