International Conference on Applications of Statistics and Probability in Civil Engineering (ICASP) (12th : 2015)

Low-rank tensor approximations for reliability analysis Konakli, Katerina; Sudret, Bruno 2015-07

Full Text

12th International Conference on Applications of Statistics and Probability in Civil Engineering, ICASP12
Vancouver, Canada, July 12-15, 2015

Low-Rank Tensor Approximations for Reliability Analysis

Katerina Konakli
Post-doc, Chair of Risk, Safety and Uncertainty Quantification, ETH Zürich, Switzerland

Bruno Sudret
Professor, Chair of Risk, Safety and Uncertainty Quantification, ETH Zürich, Switzerland

ABSTRACT: Low-rank tensor approximations have recently emerged as a promising tool for efficiently building surrogates of computational models with high-dimensional input. In this paper, we shed light on issues related to their construction with greedy approaches and demonstrate that meta-models built with small experimental designs can be used to estimate tail probabilities with high accuracy.

1. INTRODUCTION

Advances in computer science, in combination with an improved understanding of physical laws, have led to the development of increasingly complex computational models for simulating the behavior of physical and engineering systems. Unfortunately, uncertainty propagation through such models becomes intractable in many practical situations when the computational cost of a single simulation is high. A remedy is the use of surrogate models, also called meta-models, which possess similar statistical properties to the original models but have simple functional forms and are thus inexpensive to evaluate.

Polynomial Chaos Expansions (PCE) represent the model response as an expansion onto a basis of orthonormal multivariate polynomials obtained as tensor products of appropriate univariate polynomials.
Although this meta-modeling approach has proven powerful in a wide range of applications, it suffers from the curse of dimensionality, namely the exponential growth of the basis size - and therefore of the number of unknown coefficients - with the dimension of the random input.

A promising alternative for efficiently building meta-models in high-dimensional spaces using polynomial functions is offered by Low-Rank Approximations (LRA) (e.g. Nouy (2010), Doostan et al. (2013)). LRA exploit the tensor-product form of the polynomial basis to express the random response as a sum of a small number of rank-one functions. Such representations drastically reduce the number of unknown coefficients with respect to PCE, with this number growing only linearly with the input dimension.

Existing algorithms for building LRA are based on greedy approaches, where the polynomial coefficients in separate dimensions are alternately updated and the rank of the approximation is progressively increased. These algorithms involve a sequence of small error-minimization problems that can be easily solved with standard techniques. However, stopping criteria and selection of the optimal rank are open questions that call for further investigation.

The aim of the present paper is to introduce and demonstrate the potential of LRA in the context of reliability analysis, as well as to shed light on aspects of their construction. The paper is organized as follows: In Section 2, the mathematical setup of the problem is described. Following a brief review of PCE in Section 3, LRA are presented in Section 4.
In Section 5, we employ LRA to develop meta-models of the responses of a beam and a truss structure; in these applications, we investigate properties of the greedy constructions and demonstrate the efficiency of LRA for evaluating small probabilities of failure, also in comparison to PCE.

2. SURROGATE MODELS WITH NON-INTRUSIVE APPROACHES

Let us consider a physical or engineering system whose behavior is represented by a - possibly complex - computational model M. We denote by X = {X_1, ..., X_M} the M-dimensional random input and by Y a scalar response quantity of interest. Our goal is to develop a surrogate Ŷ of the exact model response Y = M(X), i.e. an approximate model that possesses similar statistical properties but has a simple functional form.

Non-intrusive methods for building surrogate models are of interest in the present study. Such methods rely on a series of calls to the deterministic computational model, which may be used without any modification. Building a meta-model in a non-intrusive manner requires an Experimental Design (ED) comprising a set of realizations of the input vector, E = {χ^(1), ..., χ^(N)}, and the corresponding model evaluations at these points, Y = {M(χ^(1)), ..., M(χ^(N))}.

Let us consider a set of realizations of the input vector, X = {x_1, ..., x_n}. In order to define measures of accuracy of a meta-model, we introduce the semi-inner product

⟨a, b⟩_X = (1/n) Σ_{i=1}^{n} a(x_i) b(x_i),   (1)

leading to the semi-norm ‖a‖_X = ⟨a, a⟩_X^{1/2}. A good measure of accuracy is the generalization error, Err_G = E[(Y − Ŷ)²], which can be estimated by

Êrr_G = ‖Y − Ŷ‖²_{X_val},   (2)

where X_val = {x_1, ..., x_{n_val}} is a sufficiently large set of realizations of the input vector, denoted the validation set. The relative generalization error can be estimated by normalizing Êrr_G with the empirical variance of Y_val = {M(x_1), ...
, M(x_{n_val})}, i.e.

êrr_G = Êrr_G / Var[Y_val].   (3)

In cases where one cannot afford the additional model evaluations required to compute Êrr_G, an error estimate based on the ED may be used instead. This is the empirical error, Êrr_E, given by

Êrr_E = ‖Y − Ŷ‖²_E.   (4)

The respective relative error is obtained by normalizing Êrr_E with the empirical variance of Y = {M(χ^(1)), ..., M(χ^(N))}, i.e.

êrr_E = Êrr_E / Var[Y].   (5)

It should be noted that Êrr_E tends to underestimate Err_G, an effect that may be severe in cases of overfitting.

3. POLYNOMIAL CHAOS EXPANSIONS

Let us assume that the components of X are independent, with joint Probability Density Function (PDF) f_X(x) and marginal PDFs f_{X_i}(x_i), i = 1, ..., M. Polynomial Chaos Expansions (PCE) approximate the exact model response Y = M(X) as

Ŷ = Σ_{α ∈ A} y_α Ψ_α(X),   (6)

where {Ψ_α, α ∈ A} is a set of multivariate polynomials with multi-indices α = (α_1, ..., α_M) that are orthonormal with respect to f_X(x), and y_α denotes the corresponding polynomial coefficients. The multivariate polynomials are obtained by tensorization of univariate polynomials, i.e.

Ψ_α(X) = Π_{i=1}^{M} ψ^(i)_{α_i}(X_i),   (7)

where ψ^(i)_{α_i}(X_i) is a polynomial of degree α_i in the i-th input variable, belonging to a family of polynomials that are orthonormal with respect to f_{X_i}(x_i). For standard distributions, the associated family of orthonormal polynomials is well known; for instance, a uniform variable with support [−1, 1] is associated with the family of Legendre polynomials, whereas a standard normal variable is associated with the family of Hermite polynomials. Other cases can be treated through an isoprobabilistic transformation of X to a basic random vector U, e.g. a standard normal or a standard uniform vector. Cases with mutually dependent input variables can also be treated through an isoprobabilistic transformation (e.g.
Nataf transformation) to a vector of independent standard variables.

The set of multi-indices A is determined by an appropriate truncation scheme. A common scheme consists in selecting multivariate polynomials up to a total degree p_t, i.e. {Ψ_α, α ∈ N^M : |α| ≤ p_t}, where |α| = Σ_{i=1}^{M} α_i. The corresponding number of terms in the truncated series is

card A = C(M + p_t, p_t) = (M + p_t)! / (M! p_t!).   (8)

For other, more advanced truncation schemes, the reader is referred to Blatman and Sudret (2011).

Once the basis has been specified, the set of coefficients y = {y_α, α ∈ A} may be computed as the solution of

y = arg min_{υ ∈ R^{card A}} E[(M(X) − Σ_{α ∈ A} υ_α Ψ_α(X))²].   (9)

By replacing the expectation operator with the empirical mean over a sample set, the above equation becomes a standard least-squares minimization problem, which may be solved with well-known techniques. A more efficient approach, leading to sparse PCE, is the Least Angle Regression (LAR) method (see Blatman and Sudret (2011) for further details).

Note that in Eq.(8) the number of basis elements grows exponentially with the input dimension M. Consequently, the number of model evaluations required to compute the polynomial coefficients may be prohibitively large in high-dimensional problems. This limitation, known as the curse of dimensionality, constitutes a bottleneck of the PCE approach. A promising alternative is offered by canonical decompositions, described in the sequel.

4. LOW-RANK APPROXIMATIONS

4.1. Canonical decompositions

A rank-1 function of the input vector X = {X_1, ..., X_M} is a function of the form

w(X) = Π_{i=1}^{M} v^(i)(X_i),   (10)

where v^(i)(X_i) is a univariate function of X_i. A representation of the random response Y = M(X) as a sum of rank-1 functions constitutes a canonical decomposition; this reads

Ŷ = Σ_{l=1}^{R} b_l (Π_{i=1}^{M} v^(i)_l(X_i)),   (11)

where v^(i)_l(X_i) denotes a univariate function of X_i in the l-th rank-1 component, b_l are normalizing constants and R defines the rank of the decomposition. Herein, we consider decompositions in which each univariate function v^(i)_l(X_i) is expanded onto a polynomial basis that is orthonormal with respect to f_{X_i}(x_i), i.e.

Ŷ = Σ_{l=1}^{R} b_l (Π_{i=1}^{M} (Σ_{k=0}^{p_i} z^(i)_{k,l} P^(i)_k(X_i))),   (12)

where P^(i)_k is the univariate polynomial of degree k in the i-th input variable, p_i is the maximum degree in that variable, and z^(i)_{k,l} is the coefficient of P^(i)_k in the l-th rank-1 term. A representation in the form of Eq.(12) drastically reduces the number of unknowns compared to Eq.(6). In the case when p_i = p, i = 1, ..., M, the number of unknowns in a rank-R decomposition is P = ((p + 1)M + 1)R, which grows only linearly with M. Naturally, decompositions with small R are of interest, hence the name Low-Rank Approximations (LRA).

4.2. Construction of low-rank approximations

Algorithms proposed in the literature for building LRA are based on greedy approaches, where the polynomial coefficients along each dimension are sequentially updated and the rank of the decomposition is progressively increased. The algorithm proposed by Chevreuil et al. (2013b) involves a sequence of pairs of a correction step and an updating step. In a correction step, a rank-1 tensor is built, whereas in an updating step, the set of normalizing coefficients b_l is determined. A modified version of this algorithm is employed in the subsequent example applications; details are given next.

Let us denote by Ŷ_r the rank-r approximation of Y = M(X), i.e.

Ŷ_r = Σ_{l=1}^{r} b_l w_l,   (13)

where w_l represents the l-th rank-1 component

w_l = Π_{i=1}^{M} (Σ_{k=0}^{p_i} z^(i)_{k,l} P^(i)_k(X_i)).   (14)
In the r-th correction step, the rank-1 tensor w_r is built by solving the minimization problem

w_r = arg min_{ω ∈ W} ‖R_{r−1} − ω‖²_E,   (15)

where W is the space of rank-1 tensors and R_{r−1} = Y − Ŷ_{r−1} denotes the residual after the (r−1)-th step. The sequence of Ŷ_r is initiated by setting Ŷ_0 = 0. Eq.(15) is solved through successive minimizations along each direction i = 1, ..., M. In the minimization along direction j, the polynomial coefficients in all other directions are "frozen" at their current values and the coefficients z^(j)_r = {z^(j)_{0,r}, ..., z^(j)_{p_j,r}} are determined as

z^(j)_r = arg min_{ζ ∈ R^{p_j + 1}} ‖R_{r−1} − C^(j) · (Σ_{k=0}^{p_j} ζ_k P^(j)_k(X_j))‖²_E,   (16)

where

C^(j) = Π_{i≠j} Σ_{k=0}^{p_i} z^(i)_{k,r} P^(i)_k(X_i).   (17)

A correction step may involve several iterations over the set of directions {1, ..., M}. We propose a stopping criterion that combines the number of iterations over the set {1, ..., M}, denoted I_r, with the decrease in the relative empirical error between two successive iterations, denoted Δêrr_r, where the empirical error is given by

êrr_r = ‖R_{r−1} − w_r‖²_E / Var[Y].   (18)

We require that the algorithm exit the r-th correction step if either I_r reaches a maximum allowable value I_max or Δêrr_r becomes smaller than a threshold Δêrr_min.

After the completion of a correction step, the algorithm moves to an updating step, in which the set of coefficients b = {b_1, ..., b_r} is obtained as

b = arg min_{β ∈ R^r} ‖Y − Σ_{l=1}^{r} β_l w_l‖²_E.   (19)

Note that in each updating step, the size of the vector b increases by one. In the r-th updating step, the value of the new element b_r is determined for the first time, whereas the values of the existing elements {b_1, ..., b_{r−1}} are updated.

The above algorithm relies on the solution of several small least-squares minimization problems (of size p_i + 1, i = 1, ..., M, in each correction step and of size r in the r-th updating step), which can be solved using the Ordinary Least Squares (OLS) method.
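As a concrete illustration, the alternating-minimization scheme of Eqs.(13)-(19) can be sketched with NumPy. This is a simplified, unregularized sketch of a greedy rank-R construction; the function names and the use of an (unnormalized) Legendre basis are our own choices, not the authors' implementation, and the stopping criterion only monitors the error decrease between sweeps.

```python
import numpy as np

def legendre_design(x, p):
    """Design matrix of Legendre polynomials P_0..P_p evaluated at x (shape (n, p+1))."""
    return np.polynomial.legendre.legvander(x, p)

def build_lra(X, Y, R, p, i_max=50, tol=1e-8):
    """Greedy rank-R canonical decomposition, in the spirit of Eqs.(13)-(19).
    X: (n, M) inputs scaled to [-1, 1]; Y: (n,) model evaluations at the ED."""
    n, M = X.shape
    Phi = [legendre_design(X[:, i], p) for i in range(M)]    # per-dimension bases
    z, b = [], np.empty(0)
    W = np.empty((n, 0))                                     # columns: w_l at the ED
    for r in range(R):
        res = Y - W @ b                                      # residual R_{r-1}
        zr = [np.ones(p + 1) for _ in range(M)]              # init current rank-1 term
        err_prev = np.inf
        for _ in range(i_max):                               # correction step, Eq.(15)
            for j in range(M):                               # minimize along dir. j, Eq.(16)
                cj = np.prod([Phi[i] @ zr[i] for i in range(M) if i != j], axis=0)
                A = cj[:, None] * Phi[j]                     # frozen factors C^(j)
                zr[j], *_ = np.linalg.lstsq(A, res, rcond=None)
            w = np.prod([Phi[i] @ zr[i] for i in range(M)], axis=0)
            err = np.sum((res - w) ** 2) / (n * np.var(Y))   # Eq.(18)
            if err_prev - err < tol:                         # stopping criterion
                break
            err_prev = err
        z.append(zr)
        W = np.column_stack([W, w])
        b, *_ = np.linalg.lstsq(W, Y, rcond=None)            # updating step, Eq.(19)
    return z, b
```

Each inner least-squares problem is only of size p + 1, which is what makes the alternating scheme cheap even for large M.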
More efficient solution schemes can be developed by replacing Eq.(16) and Eq.(19) with respective regularized problems.

The progressive construction results in a set of LRA of increasing rank. Chevreuil et al. (2013a) propose selecting the optimal rank using 3-fold Cross-Validation (CV). In the general case of k-fold CV, the ED is randomly partitioned into k sets of approximately equal size. A meta-model is built considering all but one of the partitions (the training set) and the excluded set is used to evaluate the generalization error (the testing set). By alternating through the k sets, k meta-models are obtained; their average generalization error provides an estimate of the error of the meta-model built with the full ED. In the context of LRA, the above technique yields k meta-models of progressively increasing rank, as well as the respective error estimates. The rank yielding the smallest average generalization error is identified as optimal, and a new decomposition of the indicated rank is built using the full ED. The average generalization error corresponding to the selected rank provides an estimate of the actual error of the final meta-model.

5. EXAMPLE APPLICATIONS

5.1. Beam deflection

We consider a simply supported beam with a rectangular cross section of width b and height h, length L and Young's modulus E. The beam is subjected to a concentrated load P at the midpoint of the span. The aforementioned quantities are modeled by independent random variables following the distributions listed in Table 1. Of interest is to construct LRA of the mid-span deflection, u = PL³/(4Ebh³), in terms of the M = 5 input random variables.
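For illustration, the beam model and a Sobol-sequence experimental design under the Table 1 distributions can be set up as follows. This is a sketch with helper names of our own choosing; it assumes SciPy's `qmc` module for Sobol sampling and maps uniform Sobol points to the lognormal marginals through the inverse-CDF (isoprobabilistic) transformation.

```python
import numpy as np
from scipy.stats import norm, qmc

# Table 1: (mean, CoV) of the five lognormal inputs b, h, L, E, P
PARAMS = {"b": (0.15, 0.05), "h": (0.3, 0.05), "L": (5.0, 0.01),
          "E": (3e4, 0.15), "P": (0.01, 0.20)}

def lognormal_params(mean, cov):
    """Convert (mean, CoV) to the (mu, sigma) of the underlying normal."""
    sigma = np.sqrt(np.log(1.0 + cov**2))
    return np.log(mean) - 0.5 * sigma**2, sigma

def beam_deflection(x):
    """Mid-span deflection u = P L^3 / (4 E b h^3); columns of x: b, h, L, E, P."""
    b, h, L, E, P = x.T
    return P * L**3 / (4.0 * E * b * h**3)

def sobol_ed(n, seed=0):
    """Experimental design: Sobol points mapped to the lognormal marginals."""
    u = qmc.Sobol(d=5, scramble=True, seed=seed).random(n)
    cols = []
    for ui, (mean, cov) in zip(u.T, PARAMS.values()):
        mu, sigma = lognormal_params(mean, cov)
        cols.append(np.exp(mu + sigma * norm.ppf(ui)))  # inverse-CDF map
    return np.column_stack(cols)

X = sobol_ed(64)           # ED of size N = 64
Y = beam_deflection(X)     # corresponding model evaluations
```

With E in MN/m² and P in MN, the deflection u comes out in meters, consistent with the millimeter-range thresholds used later.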
We use ED of varying sizes, N, drawn with Sobol sampling and assess the accuracy of the meta-models with a validation set of size n_val = 10⁴ drawn with Monte Carlo Simulation (MCS).

Table 1: Distributions of random variables.

Variable    Distribution  Mean    CoV
b (m)       Lognormal     0.15    0.05
h (m)       Lognormal     0.3     0.05
L (m)       Lognormal     5       0.01
E (MN/m²)   Lognormal     3e4     0.15
P (MN)      Lognormal     0.01    0.20

First, we investigate rank selection by means of 3-fold CV. After preliminary investigations, we set p_1 = ... = p_5 = 5, I_max = 50 and Δêrr_min = 10⁻⁸. The candidate ranks considered vary from 1 to 20. For different N, Figure 1 compares the rank R selected with 3-fold CV with the actual optimal rank yielding the minimum generalization error estimated with the validation set. The corresponding relative generalization errors are shown in Figure 2. The figures demonstrate that although the two ranks do not coincide in all cases, the corresponding generalization errors differ only slightly. Overall, the meta-models are highly accurate; it is noteworthy that an accuracy of the order of 10⁻⁵ is achieved with an ED of size as small as N = 50.

Next, we examine optimal values of the error threshold in the correction step. The other parameters are fixed to their values above. For three different sizes of ED and Δêrr_min varying from 10⁻⁹ to 10⁻⁴, Figure 3 shows the relative generalization errors for ranks selected with 3-fold CV. It is observed that the accuracy of LRA strongly depends on Δêrr_min, particularly for the smaller ED.
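The 3-fold CV selection used in these investigations can be sketched generically. In the sketch below, a 1-D polynomial degree stands in for the LRA rank as the tuned hyperparameter, since a full LRA fitter is beyond the scope of a snippet; the structure (k folds, average validation error, refit on the full ED with the winning candidate) follows Section 4.2. All names here are our own.

```python
import numpy as np

def kfold_cv_error(X, Y, fit, k=3, seed=0):
    """Average held-out squared error over k folds for a given fitting routine."""
    idx = np.random.default_rng(seed).permutation(len(Y))
    errs = []
    for fold in np.array_split(idx, k):
        train = np.setdiff1d(idx, fold)
        predict = fit(X[train], Y[train])          # train on k-1 partitions
        errs.append(np.mean((Y[fold] - predict(X[fold])) ** 2))
    return np.mean(errs)                           # estimate of the generalization error

def select_hyperparameter(X, Y, candidates, make_fit, k=3):
    """Pick the candidate (the rank, in the LRA case) with the smallest k-fold
    CV error, then refit on the full experimental design."""
    scores = [kfold_cv_error(X, Y, make_fit(c), k=k) for c in candidates]
    best = candidates[int(np.argmin(scores))]
    return best, make_fit(best)(X, Y)

# Stand-in model family: 1-D polynomials of degree d (playing the role of rank R).
def make_poly_fit(d):
    def fit(x, y):
        coef = np.polyfit(x, y, d)
        return lambda xs: np.polyval(coef, xs)
    return fit
```

The average CV error of the winning candidate also serves, as in the paper, as an estimate of the final meta-model's generalization error.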
Selection of the optimal error threshold is a compromise between the desired accuracy and the number of iterations in the correction step.

Figure 1: Rank selected with 3-fold CV and actual optimal rank based on validation set.

Figure 2: Relative generalization errors for rank selected with 3-fold CV and for actual optimal rank.

Figure 3: Relative generalization errors for different stopping criteria.

In the following, LRA are confronted with PCE, considering meta-models of optimal polynomial degrees for both approaches. In LRA, the optimal common maximum degree p is selected using 3-fold CV (the other parameters are the same as in Figures 1 and 2). In PCE, the candidate basis is determined by setting a maximum total degree p_t and then evaluating the coefficients with the LAR method (see Section 3); the optimal p_t is selected by means of the Leave-One-Out (LOO) error (see Blatman and Sudret (2011) for details). Figure 4 shows the relative generalization errors of the resulting meta-models for N varying in 50−1,000 (the corresponding optimal p varies in 3−6, whereas the optimal p_t varies in 2−5). Clearly, LRA outperform PCE for all considered N, yielding meta-models that are 2 to 3 orders of magnitude more accurate. It is remarkable that LRA achieve an accuracy of the order of 10⁻⁶ with only N = 50 points (p = 3).

Figure 4: Comparison of relative generalization errors of LRA and PCE.

Finally, we assess LRA versus PCE in the evaluation of the tail probabilities required in reliability analysis.
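For the beam, all five inputs are lognormal, so u = PL³/(4Ebh³) is itself lognormal (a power product of independent lognormals) and the reference failure probability P(u > u_lim) has a closed form. A minimal sketch (helper names are ours) comparing that closed form with a crude Monte Carlo estimate N_f/N_t, using the Table 1 parameters:

```python
import numpy as np
from scipy.stats import norm

# Table 1 (mean, CoV); all lognormal, so ln u is a sum of normals plus a constant
PARAMS = {"b": (0.15, 0.05), "h": (0.3, 0.05), "L": (5.0, 0.01),
          "E": (3e4, 0.15), "P": (0.01, 0.20)}
EXPONENT = {"b": -1, "h": -3, "L": 3, "E": -1, "P": 1}   # powers in u = P L^3 / (4 E b h^3)

def ln_params(mean, cov):
    """(mu, sigma) of the normal underlying a lognormal with given mean and CoV."""
    s = np.sqrt(np.log(1.0 + cov**2))
    return np.log(mean) - 0.5 * s**2, s

# ln u = sum_i a_i ln X_i - ln 4  =>  normal with these moments
mu_u = -np.log(4.0) + sum(EXPONENT[k] * ln_params(*PARAMS[k])[0] for k in PARAMS)
s_u = np.sqrt(sum((EXPONENT[k] * ln_params(*PARAMS[k])[1]) ** 2 for k in PARAMS))

def pf_analytical(ulim):
    """Closed-form P(u > ulim) from the lognormality of u."""
    return norm.sf((np.log(ulim) - mu_u) / s_u)

def pf_mcs(ulim, n=10**6, seed=0):
    """Crude Monte Carlo estimate N_f / N_t by sampling the five inputs."""
    rng = np.random.default_rng(seed)
    x = {k: rng.lognormal(*ln_params(*PARAMS[k]), n) for k in PARAMS}
    u = x["P"] * x["L"]**3 / (4.0 * x["E"] * x["b"] * x["h"]**3)
    return np.mean(u > ulim)
```

In the paper the same N_f/N_t estimator is applied to the LRA and PCE surrogates instead of the exact model, which is what makes 10⁷-point MCS affordable.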
Of interest is the probability that the beam deflection exceeds a prescribed threshold u_lim, called the failure probability. We compare the failure probabilities evaluated using the actual model to the failure probabilities evaluated using LRA and PCE; these are respectively denoted P_f, P̂_f^LRA and P̂_f^PCE. Because u follows a lognormal distribution, an analytical solution is available for P_f. P̂_f^LRA and P̂_f^PCE are evaluated with an MCS approach as N_f/N_t, where N_t = 10⁷ is the total number of points in the MCS and N_f is the number of points at which u > u_lim. For u_lim varying between 4 mm and 10 mm, Figure 5 compares the three failure probabilities considering the LRA and PCE meta-models of Figure 4 for N = 50. The approximation of all failure probabilities using LRA is excellent, whereas use of PCE considerably underestimates small failure probabilities when P_f < 10⁻³. We note that although the PCE meta-model has a relatively small relative generalization error (of the order of 10⁻³), it provides particularly poor approximations at the tails of the response distribution.

Figure 5: Comparison of failure probabilities evaluated with LRA and PCE to a reference value (N = 50).

5.2. Truss deflection

We now consider the simply supported truss in Figure 6, loaded with vertical loads P_1, ..., P_6. The cross-sectional area and Young's modulus of the horizontal bars are respectively denoted A_1 and E_1, whereas the cross-sectional area and Young's modulus of the vertical bars are respectively denoted A_2 and E_2. The aforementioned quantities are modeled by independent random variables following the distributions listed in Table 2. The response quantity of interest is the mid-span deflection, u, which is computed with a finite-element analysis code. We construct LRA of u in terms of the M = 10 input variables using Sobol-sampling-based ED of varying sizes, N.
The accuracy of the meta-models is assessed with an MCS-based validation set of size n_val = 10⁴.

Figure 6: Truss structure (six 4 m bays, height 2 m, vertical loads P1, ..., P6, mid-span deflection u).

Table 2: Distributions of random variables.

Variable          Distribution  Mean     CoV
A1 (m²)           Lognormal     0.002    0.10
A2 (m²)           Lognormal     0.001    0.10
E1, E2 (N/m²)     Lognormal     2.1e11   0.10
P1, ..., P6 (N)   Gumbel        5e4      0.15

Figure 7 compares the rank R ∈ [1, 20] selected with 3-fold CV with the actual optimal rank (based on the validation set) for the case p_1 = ... = p_10 = 3, I_max = 50 and Δêrr_min = 10⁻⁵. The corresponding relative generalization errors are shown in Figure 8. We observe that 3-fold CV always selects R = 1, which is the actual optimal rank except for N = 1,000. Note that in this case, the size of the ED only slightly affects the accuracy of the meta-model.

Figure 7: Rank selected with 3-fold CV and actual optimal rank based on validation set.

Figure 8: Relative generalization errors for rank selected with 3-fold CV and for actual optimal rank.

Figure 9 shows the relative generalization errors of LRA with rank selected by 3-fold CV, for Δêrr_min varying from 10⁻⁶ to 10⁻¹ and the other parameters fixed to their previous values. It is observed that Δêrr_min has a significant effect on the meta-model accuracy only for the smallest considered ED.

Figure 9: Relative generalization errors for different stopping criteria.

As in the former example, LRA are confronted with PCE, considering meta-models of optimal polynomial degrees (common degree p of univariate polynomials in LRA and total degree p_t of multivariate polynomials in PCE).
Figure 10 shows the relative generalization errors of the resulting meta-models for N varying in 50−1,000 (the corresponding optimal p varies in 2−3, whereas the optimal p_t varies in 2−4). In this case, LRA perform better than PCE for the smaller ED, whereas the reverse is true for the larger ones. Note, however, that although LRA yield fairly accurate meta-models for all N, this is not true for PCE and the smaller N.

Figure 10: Comparison of relative generalization errors of LRA and PCE.

We conclude the example by assessing LRA versus PCE in reliability analysis. Of interest is the probability u > u_lim, where u_lim varies between 10 cm and 16 cm. The reference failure probabilities P_f are taken from Sudret (2007), whereas P̂_f^LRA and P̂_f^PCE are computed with an MCS sample of size N_t = 10⁷. Figure 11 compares the three failure probabilities considering the LRA and PCE meta-models of Figure 10 for N = 100. We observe that use of LRA leads to fairly accurate estimates of the reference probabilities, whereas use of PCE is largely inaccurate (for u_lim = 14 cm and u_lim = 16 cm, use of PCE yields N_f = 0).

Figure 11: Comparison of failure probabilities evaluated with LRA and PCE to a reference value (N = 100).

6. CONCLUSIONS

By drastically reducing the number of unknowns with respect to Polynomial Chaos Expansions (PCE), Low-Rank Approximations (LRA) constitute a promising tool against the curse of dimensionality. After examining issues in their construction with greedy approaches, we have shown that LRA may be used to accurately evaluate tail probabilities in reliability analysis with experimental designs that prove inadequately small for the PCE approach.

7. REFERENCES

Blatman, G. and Sudret, B. (2011).
"Adaptive sparse polynomial chaos expansion based on Least Angle Regression." J. Comput. Phys., 230, 2345–2367.

Chevreuil, M., Lebrun, R., Nouy, A., and Rai, P. (2013a). "A least-squares method for sparse low rank approximation of multivariate functions." arXiv preprint arXiv:1305.0030.

Chevreuil, M., Rai, P., and Nouy, A. (2013b). "Sampling based tensor approximation method for uncertainty propagation." Proceedings of the 11th International Conference on Structural Safety and Reliability, New York, June 2013.

Doostan, A., Validi, A., and Iaccarino, G. (2013). "Non-intrusive low-rank separated approximation of high-dimensional stochastic models." Comput. Methods Appl. Mech. Eng., 263, 42–55.

Nouy, A. (2010). "Proper generalized decompositions and separated representations for the numerical solution of high dimensional stochastic problems." Archives of Computational Methods in Engineering, 17, 403–434.

Sudret, B. (2007). Uncertainty propagation and sensitivity analysis in mechanical models – Contributions to structural reliability and stochastic spectral methods. Habilitation à diriger des recherches, Université Blaise Pascal, Clermont-Ferrand, France, 173 pages.

