International Conference on Applications of Statistics and Probability in Civil Engineering (ICASP) (12th : 2015)

Adaptive Kriging reliability-based design optimization of an automotive body structure under crashworthiness… Moustapha, Maliki; Sudret, Bruno; Bourinet, Jean-Marc; Guillaume, Benoît 2015-07

Full Text

12th International Conference on Applications of Statistics and Probability in Civil Engineering, ICASP12
Vancouver, Canada, July 12-15, 2015

Adaptive Kriging Reliability-Based Design Optimization of an Automotive Body Structure under Crashworthiness Constraints

Maliki Moustapha
Graduate student, Institut Pascal & PSA Peugeot-Citroën, Vélizy-Villacoublay, France
Bruno Sudret
Professor, Chair of Risk, Safety & Uncertainty Quantification, ETH Zurich, Switzerland
Jean-Marc Bourinet
Associate Professor, Institut Pascal, IFMA, Clermont-Ferrand, France
Benoît Guillaume
Research Engineer, PSA Peugeot Citroën, Vélizy-Villacoublay, France

ABSTRACT: The increasing use of surrogate models has widened the range of application of classical reliability-based design optimization (RBDO) techniques to industrial problems. In this paper, we consider such an approach to the lightweight design of an automotive body structure. Solving this problem while approximating the complex models (nonlinear, noisy and high-dimensional) with a single metamodel would require a very large and unaffordable design of experiments (DOE). We therefore investigate and propose a methodology for adaptive Kriging-based RBDO in which an initial DOE is iteratively updated so as to improve the Kriging models only in the regions that actually matter. The nested reliability analysis is expressed in terms of quantile assessment. Two stages of enrichment are performed. The first seeks to gradually improve the accuracy of the metamodels where the probabilistic constraints are likely to be violated. The second is embedded in an evolution-strategy optimization scheme where, at each iteration, the accuracy of the quantile estimation is improved if necessary.
The methodology is applied to an analytical problem and to a crashworthiness design problem, showing good performance by enhancing accuracy and efficiency with respect to a traditional approach.

The computational cost of the latest high-fidelity simulation codes makes them unaffordable when it comes to structural design problems such as optimization or reliability analyses. Designers are now familiar with so-called metamodelling approaches, where an easy-to-evaluate function is used as a proxy of the true model M: x ∈ X ⊂ R^s ↦ y = M(x). The metamodel is fitted by learning over an initial design of experiments, which is basically a set of known input-output pairs of the code: D = {(x^(i), y_i), i = 1, ..., n}, with x^(i) ∈ X ⊂ R^s and y_i = M(x^(i)) ∈ Y ⊂ R.

In this paper, we consider the application of such an approach to the lightweight design of an automotive body structure. The associated constraints have been shown to be very noisy because of the chaotic nature of a vehicle frontal impact. To account for the various uncertainties associated with this problem, reliability-based design optimization (RBDO) is performed. However, performing the RBDO on surrogate models built on a single space-filling design neither yields accurate solutions nor is it efficient. This is because, ultimately, the region of interest for optimization is very often only a small subset of the entire design space. Alternatively, an initial surrogate model can be iteratively updated to accurately approximate M only where necessary, as introduced in the framework of efficient global reliability analysis (Bichon et al. (2008)).

We propose here a methodology for adaptive Kriging-based RBDO embedded in an evolution strategy where, at each iteration, the accuracy of the surrogate model is checked and, if necessary, improved by local enrichment. The paper is organized as follows.
First, we set up the RBDO problem and briefly review Kriging. Then we introduce the techniques for adaptive design of experiments, followed by the entire methodology. Finally, an analytical and a finite-element-based example are considered for application.

1. DESIGN OPTIMIZATION UNDER UNCERTAINTY

Uncertainties are ubiquitous in structural systems and play a key role in the robustness and reliability of optimized solutions. The reliability aspect is most often handled with RBDO. Such a technique seeks to balance some cost and a predefined level of reliability. Generally, the RBDO problem is stated in the following terms:

d* = arg min_{d ∈ D} c(d)  s.t.  f_j(d) ≤ 0, j = 1, ..., n_s,
                                 P(g_k(X(d), Z) < 0) ≤ P̄_{f_k}, k = 1, ..., n_p   (1)

where a cost function c is minimized with respect to the design variables d while satisfying a collection of n_s soft constraints and n_p performance constraints, respectively denoted by f and g. The former simply bound the design space and the latter split it into safety and failure domains. To account for the uncertainties, the RBDO approach addresses the problem in terms of probabilities of failure being lower than given thresholds, herein P̄_f = {P̄_{f_k}, k = 1, ..., n_p}. In this respect, the random variables X ∼ f_{X|d} and Z ∼ f_Z are introduced and respectively stand for the design and environment variables.

The solution of Eq. (1) requires the assessment of the probability of failure for any given design during the optimization process. This is usually done by integrating the joint probability density function of X and Z over the failure domain, which can turn out to be a cumbersome task (Dubourg et al. (2011)).

Although many techniques exist in the literature to solve this problem, we adopt a simpler yet efficient approach by replacing the probabilistic constraint with a quantile assessment. This is especially justified since the target probabilities of failure are relatively high in the problems we are addressing (actually around 5%). Eq.
(1) is then rewritten as:

d* = arg min_{d ∈ D} c(d)  s.t.  f(d) ≤ 0,  g_α(d) ≥ 0   (2)

where, for any performance function g_k, the quantile g_{k,α}(X(d), Z) ≡ g_{k,α}(d) is defined such that:

P(g_k(X(d), Z) ≤ g_{k,α}(d)) = α   (3)

so that, with α = P̄_{f_k}, the probabilistic constraint P(g_k < 0) ≤ P̄_{f_k} holds whenever the α-quantile is non-negative.

The quantile estimation mostly resorts to Monte Carlo (MC) sampling. More specifically, for a given design d^(i), an MC population is sampled following f_{X|d} and f_Z:

C_q^(i) = {(x_j(d^(i)), z_j), j = 1, ..., N_q}   (4)

The model is then evaluated on these points and the results are ranked in ascending order. The estimated quantile corresponds to the ⌊N_q α⌋-th term, where ⌊·⌋ denotes the floor function.

This simulation technique is nonetheless very time-consuming, as it relies on multiple model evaluations. To make the RBDO affordable, a surrogate-based approach is adopted. In this paper, Kriging has been chosen as the default surrogate and is briefly reviewed in the next section.

2. KRIGING SURROGATE

Kriging, or Gaussian process modeling, relies on a major hypothesis, which is to assume that the function to emulate is one realization of a stochastic process (Santner et al. (2003)) that reads:

M(x) = Σ_{j=1}^p β_j f_j(x) + Z(x)   (5)

where Σ_{j=1}^p β_j f_j(x) is a linear combination of basis functions which captures a global trend, as conventionally assumed in universal Kriging, and Z(x) is a zero-mean Gaussian process with auto-covariance function Cov[Z(x), Z(x′)] = σ² R(x, x′; θ). In this setting, σ² is the variance, R(x, x′; θ) the auto-correlation function and θ a vector gathering its hyperparameters.

Building the model first requires making some choices about the basis and auto-correlation functions, the most popular ones being respectively low-order polynomials and parametric multivariate stationary functions.
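Before moving on, the Monte Carlo quantile estimator of Eqs. (3)-(4) is simple enough to write in a few lines. The sketch below is ours, not from the paper: the model and the samplers for X | d and Z are hypothetical placeholders.

```python
import numpy as np

def mc_quantile(model, x_of_d, z_sampler, d, alpha=0.05, n_q=10_000, rng=None):
    """Empirical alpha-quantile of g(X(d), Z) by plain Monte Carlo.

    `model`, `x_of_d` and `z_sampler` are placeholders for the performance
    function g, a sampler of X | d and a sampler of Z, respectively.
    """
    rng = np.random.default_rng(rng)
    x = x_of_d(d, n_q, rng)          # sample X following f_{X|d}
    z = z_sampler(n_q, rng)          # sample Z following f_Z
    g = np.sort(model(x, z))         # evaluate g and rank in ascending order
    return g[int(np.floor(n_q * alpha)) - 1]  # the floor(Nq*alpha)-th term (1-based)
```

For instance, with g(x, z) = x + z, X | d ∼ N(d, 1) and Z ∼ N(0, 1), the estimator should approach the analytical 5% quantile of N(d, √2).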
The latter encode the assumptions about the underlying process, such as its regularity.

The Kriging approximation for a new point x^(0) is provided by a realization of a Gaussian variable G ∼ N(μ_Ĝ, σ²_Ĝ) with:

μ_Ĝ = f(x^(0))^T β̂ + r_0^T R^{-1} (y − F β̂)
σ²_Ĝ = σ̂² (1 − r_0^T R^{-1} r_0 + u^T (F^T R^{-1} F)^{-1} u)   (6)

where β̂ = (F^T R^{-1} F)^{-1} F^T R^{-1} y and σ̂² = (1/n)(y − F β̂)^T R^{-1} (y − F β̂) are the generalized least-squares estimates of the Kriging parameters for a polynomial trend, and u = F^T R^{-1} r_0 − f(x^(0)). The following matrix notation has been introduced: F = {F_ij = f_j(x^(i)), i = 1, ..., n, j = 1, ..., p}, R = {r_ik = R(x^(i), x^(k)), i, k = 1, ..., n} and r_0 = {R(x^(0), x^(i)), i = 1, ..., n}.

The parameters θ are inferred from the data. This learning stage usually resorts to various techniques, among which the widely used maximum likelihood estimation. It turns out to be an optimization problem which reads:

θ̂ = arg min_{θ ∈ R^{d_θ}} σ̂²(θ) (det R(θ))^{1/n}   (7)

where d_θ is the number of hyperparameters. This crucial optimization problem is solved here by a hybrid algorithm (genetic, followed by BFGS) as implemented in the R package DiceKriging we are using (Roustant et al. (2012)).

Eq. (6) displays the mean and variance of the Kriging prediction. This variance provides Kriging with a local error estimator, mainly reflecting the sparsity of the data. This feature is exploited in so-called adaptive designs.

3. ADAPTIVE DESIGNS

3.1. Learning function

Following the ideas developed in the framework of efficient global optimization, various approaches were introduced to adaptively update an initial design of experiments. Depending on whether it is the objective or the constraint function that is surrogated, different families of learning functions are used. In our context of constraint handling with a perfectly known objective function, we focus on AK-MCS (Active Kriging - Monte Carlo Simulation), proposed by Echard et al. (2011). Many other learning functions exist and are, in the authors' experience, almost equally efficient.
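As an aside on the Kriging equations above, the predictor of Eq. (6) is only a few lines of linear algebra once the hyperparameters are fixed. The sketch below is ours (an actual implementation such as DiceKriging uses Cholesky solves rather than explicit inverses, among many other refinements):

```python
import numpy as np

def kriging_predict(x0, X, y, corr, F, f0):
    """Mean and variance of the universal-Kriging predictor, Eq. (6),
    for a fixed correlation function `corr(A, B)`; F and f0 are the
    trend design matrix and the trend basis evaluated at x0."""
    n = len(y)
    R = corr(X, X)                                        # correlation matrix R
    Ri = np.linalg.inv(R)                                 # explicit inverse for clarity only
    beta = np.linalg.solve(F.T @ Ri @ F, F.T @ Ri @ y)    # GLS trend estimate (beta-hat)
    sigma2 = (y - F @ beta) @ Ri @ (y - F @ beta) / n     # process variance (sigma-hat^2)
    r0 = corr(X, x0[None, :]).ravel()                     # correlations to the new point
    mu = f0 @ beta + r0 @ Ri @ (y - F @ beta)             # Kriging mean
    u = F.T @ Ri @ r0 - f0
    var = sigma2 * (1.0 - r0 @ Ri @ r0
                    + u @ np.linalg.solve(F.T @ Ri @ F, u))  # Kriging variance
    return mu, var
```

A quick sanity check is interpolation: at a training point the mean equals the observed value and the variance vanishes.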
However, AK-MCS has a very simple formulation and is easy to interpret. The idea in AK-MCS is to sample a very large MC population of candidates for enrichment. The learning function then selects among these points the one which promises the highest expected gain of information. This gain of information is considered here with respect to either quantile estimation or contour approximation, which results in two slightly different formulations.

For quantile estimation, we follow the methodology proposed in Schöbi and Sudret (2014). Let us consider that we are at iteration i of the optimization process. With the current design d^(i), the MC population C_q^(i) considered for enrichment is sampled according to Eq. (4). The learning function then reads:

U_q(x, z) = |μ_Ĝ(x, z) − ĝ_α(d^(i))| / σ_Ĝ(x, z)   (8)

where ĝ_α(d^(i)) is the estimate of the quantile computed through the Kriging model. The best next point is the one that minimizes U_q. It simply corresponds to points with estimates close to the current quantile or with a high variance. By iteratively adding points in this fashion, the accuracy of the quantile estimate is improved.

For contour approximation, the candidate MC population is defined in the joint space of design and environment variables, C_c = {(x^(j), z^(j)), j = 1, ..., N}, and the learning function reads:

U_c(x, z) = |ĝ_α(d)| / σ_Ĝ(x, z)   (9)

The same mechanisms as for U_q are at play here. Therefore, samples with a high probability of constraint violation and/or a high variance are likely to be added to the design. The computational cost of U_c is quite high, as it implies N × N_q calls to the Kriging model. To reduce this computational burden, the quantile is estimated here with a very limited number of samples in a bootstrapping approach.
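A bootstrapped quantile estimate of this kind might look as follows; this is a rough stand-in for the paper's scheme (the function name and interface are ours), returning both the estimate and its bootstrap spread:

```python
import numpy as np

def bootstrap_quantile(g, alpha=0.05, n_boot=200, rng=None):
    """Quantile estimate and bootstrap spread from a small MC sample `g`
    of performance-function values."""
    rng = np.random.default_rng(rng)
    n = len(g)
    reps = np.empty(n_boot)
    for b in range(n_boot):
        resample = rng.choice(g, size=n, replace=True)        # resample with replacement
        reps[b] = np.sort(resample)[int(np.floor(n * alpha)) - 1]  # empirical quantile
    return reps.mean(), reps.std()  # bootstrap mean and standard deviation
```

The bootstrap standard deviation gives an inexpensive indication of how much the small-sample quantile estimate can be trusted.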
More specifically, in this paper we go from the usual N_q = 10^4 down to N_q = 500 samples, with respectively 200 and 500 bootstrap replicates for the first and second applications. This setting offers a fair trade-off between accuracy and computational time.

3.2. Multi-constrained and multi-point enrichment

As stated above, we are concerned with multi-constrained optimization problems. Furthermore, the computational power of clustered PCs allows us to launch many evaluations of the true model simultaneously. In such a case, the adaptive scheme should allow for multi-point enrichment. Considering these two aspects, the enrichment methodology described above is slightly modified.

First, let us consider the presence of multiple constraints. Fauriat and Gayton (2014) proposed a composite criterion for system reliability problems. That is, the learning function is evaluated only with respect to the constraint that plays the largest role in the system failure. For Eqs. (8) and (9), this translates respectively to:

U_q^s(x, z) = |μ_Ĝs(x, z) − ĝ_{s,α}(d^(i))| / σ_Ĝs(x, z)   (10)

U_c^s(x, z) = |ĝ_{s,α}(d)| / σ_Ĝs(x, z)   (11)

where s is the index of the performance function with the highest value at the evaluated point.

On the other hand, to allow for multi-point enrichment, we first define the following 95% margins of uncertainty:

M_q = ∪_{k=1}^{n_p} {(x, z) ∈ C_q : ĝ_{k,α} − 2σ_Ĝk ≤ μ_Ĝk ≤ ĝ_{k,α} + 2σ_Ĝk}   (12)

M_c = ∪_{k=1}^{n_p} {(x, z) ∈ C_c : −2σ_Ĝk ≤ ĝ_{k,α} ≤ 2σ_Ĝk}   (13)

where M_q and M_c correspond respectively to the quantile estimation and contour approximation problems.

The learning function is evaluated on this subset of points. K clusters are identified by means of weighted K-means clustering with weights φ(−U_c) (or φ(−U_q)), where φ is the standard Gaussian probability density function. By this weighting, regions where the criterion is high are favored. The best next points to add are selected as the ones in M_c (or M_q) which are closest to the cluster centroids.

4.
ADAPTIVE KRIGING-BASED RBDO

The above-mentioned ingredients are now embedded in an optimization scheme so as to propose a methodology for adaptive Kriging-based RBDO.

We start by applying a few iterations of enrichment for contour approximation. This is meant to roughly locate and enrich the regions of the space where the constraints are likely to be violated, which allows us to start from a very scarce initial design of experiments. We stop this enrichment procedure when the size of the 95% margin of uncertainty has been considerably reduced. Specifically, we use the following criterion: the ratio of Card(M_c) between the current and the initial iteration falls below a given threshold, say 0.15. The residual epistemic uncertainty is then reduced through enrichment for quantile estimation during the optimization.

Let us now consider that we are solving the RBDO problem in an iterative way:

d^(i+1) = d^(i) + ν^(i)   (14)

where ν^(i) is a step in the search space promising an improvement of the objective function.

Since the true functions have been replaced by Kriging approximations, a sufficient level of accuracy of the metamodels should be ensured before proceeding to the updating scheme at any given iteration. We consider that the following relationship should hold:

|ĝ⁺_{k,α} − ĝ⁻_{k,α}| / (μ^max_Ĝk − μ^min_Ĝk) ≤ ε_{g_k},  ∀k = 1, ..., n_p   (15)

where ĝ^±_{k,α} are the quantiles estimated with the functions μ_Ĝk ± 2σ_Ĝk, μ^max_Ĝk and μ^min_Ĝk are respectively the maximum and minimum values of μ_Ĝk evaluated on C_q^(i), and ε_{g_k} is a predefined threshold.

If Eq. (15) holds for some small value of ε_{g_k}, then the metamodels are deemed accurate enough to be trusted as surrogates of the true functions, and one can proceed to the updating scheme.
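In code, the check of Eq. (15) amounts to comparing the empirical quantiles of the two bounding predictors μ ± 2σ over the current MC population; a minimal sketch (our names, single constraint):

```python
import numpy as np

def quantile_accuracy(mu, sigma, alpha=0.05, eps=0.05):
    """Accuracy check of Eq. (15) for one constraint. `mu` and `sigma` are the
    Kriging mean and standard deviation evaluated on the MC population C_q."""
    k = int(np.floor(len(mu) * alpha)) - 1
    q_plus = np.sort(mu + 2.0 * sigma)[k]    # quantile of the upper predictor
    q_minus = np.sort(mu - 2.0 * sigma)[k]   # quantile of the lower predictor
    spread = abs(q_plus - q_minus) / (mu.max() - mu.min())  # normalized gap
    return spread <= eps, spread
```

When the Kriging variance vanishes on the population, the two bounding quantiles coincide and the check passes trivially; a large variance widens the gap and triggers enrichment.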
However, if this relationship does not hold, instances of enrichment should be performed so as to improve the accuracy of the quantile estimates.

For this strategy to make sense, the optimization algorithm should handle only one point per iteration. Otherwise, the accuracy of the quantiles would have to be estimated at all the points generated in an iteration, which would make the strategy cumbersome. The condition of one point per iteration is generally well fulfilled by gradient-based approaches. Globally speaking, however, local search algorithms are not suited to the multimodal problems we intend to solve. In this paper, we use the covariance matrix adaptation evolution strategy (CMA-ES), more specifically the (1+1)-CMA-ES for constrained optimization proposed by Arnold and Hansen (2012). The overall strategy is summarized in the algorithm in Figure 1.

1. Initialize D, d^(0)
2. Enrich for contour approximation and update D - Eq. (9)
3. Start optimization, i = 0
4. Build MC population C_q^(i) - Eq. (4)
5. Compute ĝ_α, ĝ⁺_α, ĝ⁻_α, μ^max_Ĝ and μ^min_Ĝ
6. Check for accuracy - Eq. (15)
7. While not accurate enough, enrich for quantile estimation - Eqs. (4), (8)
8. Do one iteration of CMA-ES, i ← i + 1
9. While no convergence of CMA-ES, go to 4.

Figure 1: Methodology for adaptive Kriging-based RBDO.

4.1. A short introduction to (1+1)-CMA-ES for constrained optimization

In a nutshell, CMA-ES is an evolution strategy which relies on multivariate normal distributions to iteratively sample solutions in the descent direction of the objective function. Considering Eq. (14), ν^(i), the so-called mutation term, is a realization of a zero-mean Gaussian distribution with covariance C^(i).
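Sampling such a zero-mean Gaussian mutation is conventionally done through a Cholesky factor of the covariance matrix; a short sketch (ours, independent of any particular CMA-ES implementation):

```python
import numpy as np

def sample_mutation(C, sigma, rng=None):
    """Draw a mutation step sigma * A z, with A the Cholesky factor of C,
    so that the step is distributed as N(0, sigma^2 C)."""
    rng = np.random.default_rng(rng)
    A = np.linalg.cholesky(C)              # C = A A^T
    z = rng.standard_normal(C.shape[0])    # standard Gaussian vector
    return sigma * A @ z
```

Since Cov[A z] = A A^T = C, the empirical covariance of many such draws converges to sigma^2 C.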
The entire strategy relies on an appropriate update of C^(i) so as to iteratively increase the probability of sampling offspring that promise the largest fitness progress.

From a practical point of view, the updating scheme in the constrained (1+1)-CMA-ES reads:

d^(i+1) = d^(i) + σ^(i) A^(i) z_c   (16)

where z_c is a realization of a standard Gaussian random vector, σ^(i) is an optimal step size and A^(i) is the Cholesky factor of C^(i), introduced as a means to sample from N(0, C^(i)).

The adaptation of the covariance is directly carried out on A to avoid costly successive Cholesky decompositions. Besides, directions pointing to the unfeasible region in the vicinity of the parent are identified, and the variance of the distribution in these directions is appropriately decreased; this is how the constraints are handled. Many additional heuristics are introduced by the authors to fine-tune the efficiency of the algorithm. The reader is referred to Arnold and Hansen (2012) for further details.

5. APPLICATIONS

In this section, we apply the methodology to two examples which share the following settings.

The anisotropic Matérn 5/2 auto-correlation function is chosen for the Kriging models. They are built in the so-called augmented space, as defined in Dubourg et al. (2011). This allows us to build a single Kriging model for all the nested reliability analyses (quantile estimations). In other words, we have x ∈ ∏_{j=1}^{s_d} [d_j^- − 5σ_j; d_j^+ + 5σ_j], where d_j^- and d_j^+ are respectively the minimum and maximum admissible values of the j-th component of d, and σ_j is its associated standard deviation. For the case where there is no randomness in d, σ_j = 0 for all j = 1, ..., s_d. An L2-discrepancy-optimized Latin Hypercube is used to sample in this hypercube.
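The hypercube part of this sampling can be sketched as follows; note that this is plain (unoptimized) Latin Hypercube sampling over the augmented bounds, whereas the paper uses an L2-discrepancy-optimized design:

```python
import numpy as np

def augmented_lhs(d_min, d_max, sigma, n, rng=None):
    """Latin Hypercube sample of the augmented design space
    prod_j [d_j^- - 5 sigma_j, d_j^+ + 5 sigma_j] (notation from the text)."""
    rng = np.random.default_rng(rng)
    lo = np.asarray(d_min, float) - 5.0 * np.asarray(sigma, float)
    hi = np.asarray(d_max, float) + 5.0 * np.asarray(sigma, float)
    s = len(lo)
    # one point per stratum in [0, 1], then an independent shuffle per column
    u = (rng.random((n, s)) + np.arange(n)[:, None]) / n
    for j in range(s):
        u[:, j] = u[rng.permutation(n), j]
    return lo + u * (hi - lo)   # rescale to the augmented bounds
```

By construction, each of the n equal-width strata of every coordinate contains exactly one sample.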
The environment variables, on the other hand, are uniformly sampled in a hypersphere of radius r_0: Z = {z ∈ R^{s_e} : ‖z‖_2 ≤ r_0}, where ‖·‖_2 denotes the L2-norm in R^{s_e}. For the following applications, r_0 is set to 5 and 3, respectively.

Considering CMA-ES, the initial step size σ^(0) is set equal to 1/3 of the length of the widest search direction in the design space. Besides, the threshold for quantile accuracy is set loose in the early iterations of CMA-ES for the sake of efficiency. This is because CMA-ES is likely to be exploring in these iterations, and it might not be necessary to have a very accurate estimate of the quantile when far away from the potential optimum. Gradually decreasing values of the threshold are therefore set in a simulated-annealing fashion, with four levels.

5.1. The modified Choi problem

Consider the modified Choi problem, which writes (Lee and Jung (2008)):

d* = arg min_{d ∈ [0,10]^2} 10 − d_1 + d_2  s.t.
g_1(x) = x_1^2 x_2 / 20 − 1
g_2(x) = (x_1 + x_2 − 5)^2 / 30 + (x_1 − x_2 − 12)^2 / 120 − 1
g_3(x) = 80 / (x_1^2 + 8 x_2 + 5) − 1   (17)

To solve the RBDO problem, we introduce the random design variables X_i ∼ N(d_i, 0.6^2) and a target probability of failure of 5.4% (corresponding to a reliability index of 2). The reference solution is taken as the one found with the analytical functions while using the quantile assessment for the reliability analysis, and reads d*_ref = {5.7060, 3.4836}.

Let us first illustrate the methodology with some diagnostic plots. Figure 2 refers to the enrichment for contour approximation. The thick blue, red and green lines respectively represent the contours ĝ_{k,α} = 0, k = {1, 2, 3}. In Figure 2a, the blue triangles are the initial DOE and the red squares the points added by enrichment. Additionally, the small marks show M_c, each color corresponding to a specific constraint (blue '+', red '*' and green 'x', respectively for g_{1,α}, g_{2,α} and g_{3,α}). One can see how this margin of uncertainty is reduced at the last iteration, as the constraints are more accurately approximated.
A non-negligible level of uncertainty remains in the design space and will be reduced during the optimization process, only where necessary. Figure 2b shows contours of the learning function φ(−U_c^s). The marks with different colors highlight which constraint is considered for the computation of U_c^s.

[Figure 2: Kriging models and enrichment for contour approximation. (a) Kriging models and added points; (b) contour of the learning function U_c^s.]

After this enrichment for contour approximation, the optimization is performed with CMA-ES. Figure 3 shows the history of the points sampled during optimization. Points for which an enrichment has been done are circled in cyan, with the following rule: the larger the radius, the more points were added to the corresponding C_q^(i). The red triangles are the points which do not improve the current best point during CMA-ES, and the blue squares those which improve it but are not feasible. The green circles are the successive admissible improved solutions sampled during CMA-ES. The final solution is shown as a black diamond.

[Figure 3: Illustration of CMA-ES with enrichment.]

To assess the accuracy of the solutions, we repeat the optimization five times. The results are gathered in Table 1, where MRE = |d* − d*_ref| / d*_ref is the mean relative error. Two cases are considered. The first one gives fairly accurate solutions with an average design of size n̄ = 30.4. The quantile accuracy criterion was set to 0.05 in this case. This threshold is decreased to 0.01 for case #2. We then obtain a very accurate solution, but at the cost of function evaluations averaging 33.4.
Note that, with the same number of points but without enrichment, the optimization does not even converge. This shows the great improvement brought by the adaptive procedure.

Table 1: Summary of the results for replicate optimizations.

Case      Case #1               Case #2
n̄         30.4                  33.4
d*        d_1        d_2        d_1        d_2
Mean      5.7249     3.4792     5.7047     3.4832
MRE       3.3·10^-3  1.3·10^-3  2.1·10^-4  1.3·10^-4

5.2. The sidemember sub-system

[Figure 4: The sidemember subsystem, comprising the forward side-member, rear side-member, lower bulkhead, forward side-member base and wheel arch.]

This application concerns the lightweight design of a so-called sidemember subsystem under frontal impact. As its name suggests, it is a subsystem of an automotive front end; globally speaking, it is a collection of parts which drive the crash behavior of the car. The associated finite element model is time-consuming and, most importantly, prone to noise. This noise is due to the high sensitivity of crash simulations to the initial conditions. To account for the uncertainties in the initial conditions, a probabilistic model is set up using information from crash certification procedures. As a result, the initial speed of the car V (in km/h) and the lateral position of the barrier P (in mm) are considered random, with the following uniform and normal distributions:

Z = {V, P} :  V ∼ U(34, 35),  P ∼ N(0, 2)   (18)

The thicknesses of five parts are considered as design variables. Their nominal values are d = {1.95, 1.95, 2.44, 1.97, 0.87}, corresponding to a weight of 9.67 kg. Two constraints are considered, namely the maximum wall force (y_1) and the sidemember compression (y_2). Their maximum admissible values are respectively 170 kN and 525 mm.
It should be stressed at this point that all numerical values in this application differ from those of an entire car model.

To solve this problem, we start with an initial design of 64 points in the 7-dimensional augmented space. The first stage of enrichment results in 78 additional points (8 iterations of 10 points each, among which two failed). It allows us to globally reduce the margin of uncertainty of the models with respect to the probabilistic constraints. The optimization converged in 650 iterations, among which only 14 were selected for enrichment, summing up to 48 more points (3 to 5 points per enrichment according to whether the sampled point improves the current best solution or not; some simulations failed). The final size of the DOE is then 190. The overall procedure leads to an optimum estimated at d* = {2.1601, 1.6991, 2.0278, 1.5981, 0.6002}, corresponding to a weight of 8.48 kg (a 12.30% weight saving).

To validate this optimum, we simulate the associated probabilistic constraints with the true finite element model. 200 points are generated for the quantile estimation. The results are gathered in Table 2 below. They show that the optimum found is indeed feasible, as expected. As a comparison, a previous work without adaptive design resulted in an optimum which was not feasible with respect to the true model, despite a DOE of size 285.

Table 2: Results of simulation with respect to the true finite element model. Computed on 200 points with 500 bootstrap replicates.

Model            ĝ_{1,α}   g_{1,α}   ĝ_{2,α}   g_{2,α}
Bootstrap mean   156.3     163.1     513.6     518.4

6. CONCLUSION

The proposed methodology for adaptive Kriging RBDO provides an updating procedure which is first global, then local. The first stage simply seeks to reduce the overall Kriging epistemic uncertainty.
The second stage, embedded in an optimization scheme, focuses on improving the evaluation of the probabilistic constraints expressed in the nested reliability problem. Special care is given to efficiency by saving a larger computational budget for the iterations where (1+1)-CMA-ES is exploiting (as opposed to the early iterations of space exploration). The two applications have shown improvement over a traditional approach, both in terms of solution feasibility and number of model evaluations. In an industrial context, however, this splitting of the enrichment into many iterations might be an issue in situations where the overall project lead time matters.

7. REFERENCES

Arnold, D. V. and Hansen, N. (2012). "A (1+1)-CMA-ES for constrained optimisation." GECCO, T. Soule and J. H. Moore, eds., ACM, 297-304.

Bichon, B., Eldred, M., Swiler, L., Mahadevan, S., and McFarland, J. (2008). "Efficient global reliability analysis for nonlinear implicit performance functions." AIAA Journal, 46(10), 2459-2468.

Dubourg, V., Sudret, B., and Bourinet, J.-M. (2011). "Reliability-based design optimization using Kriging and subset simulation." Struct. Multidisc. Optim., 44(5), 673-690.

Echard, B., Gayton, N., and Lemaire, M. (2011). "AK-MCS: an active learning reliability method combining Kriging and Monte Carlo simulation." Struct. Saf., 33(2), 145-154.

Fauriat, W. and Gayton, N. (2014). "AK-SYS: an adaptation of the AK-MCS method for system reliability." Rel. Eng. & Sys. Safety, 123, 137-144.

Lee, T. and Jung, J. (2008). "A sampling technique enhancing accuracy and efficiency of metamodel-based RBDO: Constraint boundary sampling." Computers & Structures, 86, 1463-1473.

Roustant, O., Ginsbourger, D., and Deville, Y. (2012). "DiceKriging, DiceOptim: two R packages for the analysis of computer experiments by Kriging-based metamodeling and optimization." J. Stat. Softw., 51(1), 1-55.

Santner, T., Williams, B., and Notz, W. (2003). The design and analysis of computer experiments. Springer, New York.

Schöbi, R.
and Sudret, B. (2014). "PC-Kriging: A new meta-modelling method and its application to quantile estimation." Proc. 17th IFIP WG7.5 Conference on Reliability and Optimization of Structural Systems, Huangshan, China, T. Francis, ed.

