12th International Conference on Applications of Statistics and Probability in Civil Engineering, ICASP12
Vancouver, Canada, July 12-15, 2015

Small-sample Probabilistic Simulation Software Tool FReET

Drahomír Novák
Professor, Institute of Structural Mechanics, Faculty of Civil Engineering, Brno University of Technology, Brno, Czech Republic

Miroslav Vořechovský
Professor, Institute of Structural Mechanics, Faculty of Civil Engineering, Brno University of Technology, Brno, Czech Republic

ABSTRACT: The objective of the paper is to present methods and software for the efficient statistical, sensitivity and reliability assessment of infrastructure. Special attention is devoted to small-sample simulation techniques which have been developed for the analysis of computationally intensive problems. The paper shows the possibility of "randomizing" computationally intensive problems in the sense of Monte Carlo type simulation. In order to keep the number of required simulations at an acceptable level, optimized Latin Hypercube Sampling is utilized. The technique is used for the simulation of random variables and random fields. Sensitivity analysis is based on nonparametric rank-order correlation coefficients. Statistical correlation is imposed by a stochastic optimization technique, simulated annealing. A hierarchical sampling approach has been developed for the extension of the sample size in Latin Hypercube Sampling, enabling the addition of simulations to a current sample set while maintaining the desired correlation structure. The paper continues with a brief description of the user-friendly implementation of the theory within the FReET commercial multipurpose reliability software.

1. INTRODUCTION

The presence of uncertainty in the analysis and design of engineering systems has always been recognized. Uncertainties are involved in every part of the system Structure – Load – Environment. Traditional approaches simplified the problem by considering the uncertain parameters to be deterministic, and accounted for the uncertainties through the use of partial safety factors in the context of limit states. Such approaches do not guarantee the required reliability, and they provide no information on the reliability actually achieved or on the influence of individual parameters on it. Therefore, in recent years, attention has been given to fully probabilistic approaches and to software tools which can be used for such purposes. Important topics can thus be treated in an advanced manner, e.g. the probabilistic vulnerability assessment of civil infrastructure systems followed by efficient decision-making processes.

The standard definition of an engineering problem featuring uncertainty or randomness, which is to be analyzed using computers, is as follows. A random response of the studied engineering system (e.g. a structure) is represented by the random variable Z. In statistical analyses, Z may represent a random response of a system (e.g. deflection, stress, ultimate capacity, etc.); in reliability calculations, Z is called a safety margin. The random variable Z is a function of the basic random variables $\mathbf{X} = (X_1, X_2, \ldots, X_{N_\mathrm{var}})$ (or random fields):

$Z = g(\mathbf{X})$  (1)

where the function g(X), a computational model, is a function of the random vector X (and also of other, deterministic quantities). The random vector X follows a joint probability distribution function (PDF) $f_{\mathbf{X}}(\mathbf{X})$ and, in general, its marginal variables can be statistically correlated.
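To make Eq. (1) concrete, here is a minimal sketch (not from the paper); the closed-form model g is an invented placeholder for what is, in practice, e.g. a nonlinear finite element computation:

```python
# Toy illustration of Eq. (1): the model g maps one realization of the random
# vector X to the response Z. The function below is an invented placeholder.
import numpy as np

def g(x):
    # any deterministic computation can stand here (FEM solver, spreadsheet, ...)
    return x[0] * x[1] - 0.5 * x[2] ** 2

rng = np.random.default_rng(seed=0)
x_realization = rng.normal(loc=[4.0, 3.0, 2.0], scale=[0.4, 0.3, 0.2])
print("Z =", g(x_realization))  # one evaluation of Z = g(X)
```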
This paper deals with situations when the information about $f_{\mathbf{X}}(\mathbf{X})$ is limited to the knowledge of the univariate marginal distributions $f_1(x_1), \ldots, f_{N_\mathrm{var}}(x_{N_\mathrm{var}})$ and a correlation matrix T (a symmetric square matrix of order Nvar). The output variable (or, generally, a vector) Z represents a transformed variable, and the task is to perform statistical, sensitivity and possibly reliability analyses upon it. It is assumed that an analytical treatment of the transformation of the input variables to Z is not possible.

Approaches focused on the estimation of statistical moments of response quantities, such as means or variances, are commonly termed statistical analyses. In sensitivity analysis, approaches aiming at the quantification of the sensitivity of the output (response, failure probability) to variations in the input variables are applied. The main result of reliability analysis is an estimate of the theoretical failure probability.

If g(X) represents a failure condition, then it is called the limit state function and Z becomes the safety margin. Usually, the convention is that it takes a negative value if a failure event occurs, $Z \le 0$, and a survival event is defined as $Z = g(\mathbf{X}) > 0$. The limit state function can be an explicit or implicit function of the basic random variables and it can take either a simple or a rather complicated form (e.g. a computer program). The performance of the system and its components may be described using a number of limit states (multiple limit state functions). The aim of reliability analysis is the estimation of unreliability using a probability measure called the theoretical failure probability, defined as

$p_f = \mathrm{P}(Z \le 0)$.  (2)

This failure probability is calculated as a probabilistic integral:

$p_f = \int I[g(\mathbf{X})]\, f_{\mathbf{X}}(\mathbf{X})\,\mathrm{d}\mathbf{X} = \int_{D_f} f_{\mathbf{X}}(\mathbf{X})\,\mathrm{d}\mathbf{X}$  (3)

The function $I[g(\mathbf{X})]$ is an indicator function that equals one for a failure event ($g \le 0$) and zero otherwise. In this way, the domain of integration of the joint PDF above is limited to the failure domain $D_f$ where $g(\mathbf{X}) \le 0$. The explicit calculation of the failure probability integral in Eq. (3) is generally impossible. A large number of efficient stochastic analysis methods have therefore been developed during the last seven decades.

A straightforward solution for these tasks is numerical simulation. Interest in simulation methods started in the early 1940s with the purpose of developing inexpensive techniques for testing engineering systems by imitating their real behavior. These methods are commonly called Monte Carlo simulation techniques. The principle behind the method is to develop an analytical model – a computer-based response or limit state function (Eq. 1) – that predicts the behavior of the studied system, and to evaluate it many times under all possible conditions. This simulation principle has remained formally the same up until the present day.

The common feature of the many different techniques covering all the above-mentioned categories is the fact that they require repetitive evaluation (simulation) of the response or limit state function g(X). The development of methods has, from a historical perspective, been a struggle to decrease the number of simulations, or to avoid an excessive number of them.
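As a point of reference for what follows, a minimal sketch of crude Monte Carlo estimation of Eq. (3); the lognormal resistance/normal load safety margin is a hypothetical example, not from the paper:

```python
# Minimal sketch: crude Monte Carlo estimate of p_f = P(Z <= 0), Eqs. (2)-(3).
# The safety margin Z = R - S (resistance minus load effect) is hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=1)
n_sim = 1_000_000  # crude MC needs many evaluations to resolve a small p_f

R = stats.lognorm.rvs(s=0.1, scale=100.0, size=n_sim, random_state=rng)  # resistance
S = stats.norm.rvs(loc=60.0, scale=10.0, size=n_sim, random_state=rng)   # load effect

Z = R - S                # safety margin, Eq. (1)
p_f = np.mean(Z <= 0.0)  # average of the indicator function, Eq. (3)
print(f"p_f ~ {p_f:.2e} (std. error {np.sqrt(p_f * (1.0 - p_f) / n_sim):.1e})")
```

The sample size needed here is exactly what rules crude Monte Carlo out for models costing minutes or hours per run, which motivates the small-sample techniques of Section 2.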
Crude Monte Carlo simulation cannot be applied to time-consuming problems, as it requires a large number of simulations (repeated calculations of the structural response) to deliver statistically significant estimates of the outputs.

In the context of reliability analyses, this obstacle was historically solved by the approximation techniques FORM and SORM, e.g. (Hasofer and Lind 1974, Madsen et al. 1986). In spite of some problems concerning accuracy, these techniques are widely accepted today and have in some cases become standard tools in code calibration. Once this was achieved, research focused on the development of advanced simulation techniques which concentrate simulations in the failure region (Schuëller 1998). Among the many efficient methods developed during the last decades, Latin Hypercube Sampling and response surface methodologies are often used for computationally demanding continuum mechanics problems.

The objective of this paper is to present the methods for efficient statistical, sensitivity and reliability assessment implemented in the FReET software (Novák et al. 2013, 2014). Attention is given to those techniques that have been developed for the analysis of computationally intensive problems, nonlinear FEM analysis being a typical example. The paper shows the possibility of "randomizing" computational tasks in the sense of Monte Carlo type simulation. The stratified simulation technique Latin Hypercube Sampling is used in order to achieve variance reduction of the estimated outputs at a given number of simulations. The paper contains basic information on the FReET software and the implemented methods, with relevant references.

2. UNCERTAINTY SIMULATION

2.1. A small-sample Monte Carlo type simulation

For time-intensive calculations, small-sample simulation techniques based on stratified sampling of the Monte Carlo type represent a rational compromise between feasibility and accuracy. Therefore, Latin Hypercube Sampling (LHS) (Conover 1975, McKay et al. 1979, Novák et al. 1998), which is well known today, has been selected as the key fundamental technique. LHS belongs to the category of advanced stratified sampling techniques, which yield very good estimates of the statistical moments of the response using small-sample simulation. More accurately, LHS is considered to be a variance reduction technique, as it yields lower variance in statistical moment estimates than crude Monte Carlo at the same sample size. This is the reason the technique became very attractive for dealing with computationally intensive problems, e.g. complex finite element simulations.
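The following sketch illustrates the stratification idea for the median-based LHS variant (one common choice; the paper does not prescribe this exact variant): each variable is sampled once from each of the Nsim intervals of equal probability 1/Nsim via the inverse CDF.

```python
# Minimal sketch of median-based Latin Hypercube Sampling (independent marginals).
import numpy as np
from scipy import stats

def lhs_median(marginals, n_sim, rng):
    """Return an (n_sim, n_var) sampling plan; `marginals` are frozen scipy dists."""
    probs = (np.arange(1, n_sim + 1) - 0.5) / n_sim  # midpoints of the strata
    plan = np.empty((n_sim, len(marginals)))
    for j, dist in enumerate(marginals):
        # an independent random permutation per column pairs the strata randomly
        plan[:, j] = dist.ppf(rng.permutation(probs))
    return plan

rng = np.random.default_rng(seed=2)
plan = lhs_median([stats.norm(10.0, 2.0), stats.lognorm(s=0.2, scale=5.0)], 16, rng)
print(plan.mean(axis=0))  # close to the target means even with only 16 points
```

Even with 16 points every region of each marginal is represented exactly once, which is where the variance reduction comes from.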
2.2. Statistical correlation control

Once Nsim samples of each marginal variable have been generated separately, the correlation structure prescribed by the target correlation matrix must be taken into account. There are generally two problems related to statistical correlation. First, during sampling an undesired correlation can occur between the random variables (Vořechovský 2012). For example, instead of a correlation coefficient of zero for uncorrelated random variables, an undesired correlation of e.g. 0.4 can be generated. This can happen especially when only a very small number of simulations (in the order of tens) is carried out, where the number of interval combinations is rather limited.

The second task is to introduce the prescribed statistical correlation between the random variables defined by the correlation matrix. This can be achieved by rearranging the order of the samples of each variable in the LHS simulation plan in such a way that they either diminish the undesired random correlation, when a unit matrix T is required, or introduce a target correlation structure. Such a rearrangement of the sample ordering can be achieved via several different techniques published in the literature on LHS (e.g. Iman and Conover 1982, Owen 1994); however, some serious limitations have been found by the authors while using them.

A robust technique for imposing statistical correlation, based on the stochastic optimization method called simulated annealing, has been proposed by Vořechovský and Novák (2009). Extensive studies of the performance of the algorithm (Vořechovský 2011) show that it performs considerably better than other widely used algorithms for correlation control, namely both Iman and Conover's (1982) Cholesky decomposition and Owen's (1994) Gram-Schmidt orthogonalization.
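Purely as an illustration of the reordering idea (a sketch, not the published algorithm of Vořechovský and Novák (2009)): random pair swaps within a column are accepted or rejected, annealing style, according to whether they bring the Spearman correlation matrix of the plan closer to the target T. Swapping only reorders each column, so the LHS marginals are untouched.

```python
# Illustrative sketch of correlation control by reordering samples within columns,
# with a simulated-annealing-style acceptance rule (not the FReET implementation).
import numpy as np
from scipy import stats

def spearman_matrix(X):
    """Spearman rank-order correlation matrix of the columns of X."""
    return np.corrcoef(stats.rankdata(X, axis=0), rowvar=False)

def impose_correlation(plan, T, rng, n_steps=20000, temp=0.05, cooling=0.9995):
    """Reorder samples column-wise so spearman_matrix(plan) approaches T."""
    X = plan.copy()
    n_sim, n_var = X.shape
    err = np.linalg.norm(spearman_matrix(X) - T)
    for _ in range(n_steps):
        j = rng.integers(n_var)              # pick a random column ...
        a, b = rng.integers(n_sim, size=2)   # ... and two rows to swap
        X[[a, b], j] = X[[b, a], j]
        new_err = np.linalg.norm(spearman_matrix(X) - T)
        # always accept improvements; sometimes accept worsenings early on
        if new_err <= err or rng.random() < np.exp((err - new_err) / temp):
            err = new_err
        else:
            X[[a, b], j] = X[[b, a], j]      # undo the swap
        temp *= cooling                      # cool down
    return X, err

rng = np.random.default_rng(seed=3)
probs = (np.arange(20) + 0.5) / 20.0
plan = np.column_stack([stats.norm().ppf(rng.permutation(probs)) for _ in range(3)])
X, err = impose_correlation(plan, np.eye(3), rng)  # target: uncorrelated
print(f"residual norm of (S - T): {err:.3f}")      # far below a random ordering
```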
2.3. Hierarchical sampling

When using Monte Carlo type simulation, the adequacy of a given sample for the purpose of providing acceptable estimates of the desired statistical quantities cannot be determined a priori, and thus the ability to extend or refine an experimental design may be important. This can be done very easily in crude Monte Carlo sampling. Very often, though, running each realization (as either a physical or virtual experiment) is very expensive. In conventional Latin Hypercube Sampling, however, it is necessary to specify the number of simulations in advance. If too small a sample set is used (i.e. a set that does not give acceptable statistical results), the analyst normally has to abandon the results and run new analyses with a larger sample set. It is thus desirable to start with a small sample and then extend (or refine) the design if deemed necessary. The extension permits the use of a larger sample set without the loss of any of the already performed, and possibly quite expensive, calculations.

This problem has been overcome by the method called Hierarchical Latin Hypercube Sampling, proposed recently in (Vořechovský 2009, 2014). Note that a similar solution has been published in (Sallaberry et al. 2008). The method combines the addition of simulations to the current sample set (hierarchical refinement of sampling probabilities) with the maintenance of the desired correlation structure, achieved by employing an advanced correlation control algorithm (Vořechovský and Novák 2009) for the extended part of the sample. The initial LH-sample can have an arbitrary number of simulations, and the added sample must contain an even integer multiple of the current number of sampling points (e.g. twice as many). Numerical studies presented in (Vořechovský 2014) have shown that the extended sample has all the properties that the same LH-sample would have if simulated in a single LHS run. The advantage in sample size flexibility is obvious.

2.4. Sensitivity and reliability analyses

An important task in structural reliability analysis is to determine the significance of the random variables. With respect to the small-sample simulation techniques described above, the most straightforward and simplest approach uses the non-parametric rank-order statistical correlation between the basic random variables and the structural response variable (Iman and Conover 1980, Novák et al. 1993). The sensitivity analysis is obtained as an additional result of LHS, and no additional computational effort is necessary (a minimal sketch follows at the end of this section). The relative effect of each basic variable on the structural response can be measured using the partial correlation coefficient between each basic input variable and the response variable. The method is based on the assumption that the random variable which influences the response variable most considerably (either in a positive or negative sense) will have a higher correlation coefficient than the other variables. Because the model of the structural response is generally nonlinear, a non-parametric rank-order correlation is used, by means of the Spearman correlation coefficient or Kendall's tau. Sensitivity analysis can be depicted using parallel coordinates (Inselberg 2009); a strong positive influence (high correlation coefficient) results in parallel lines between the input variable and the response variable, while a strong negative influence results in a bundle of intersecting lines.

In cases when we are constrained to the use of only a small number of simulations (tens, hundreds) it can be difficult to estimate the failure probability. The following approaches are therefore utilized here; they are approximately ordered from elementary (extremely small number of simulations, inaccurate) to more advanced techniques:
• Cornell's reliability index ‒ calculation of the reliability index from an estimate of the statistical characteristics of the safety margin,
• the curve fitting approach ‒ based on the selection of the most suitable probability distribution of the safety margin,
• FORM approximation (Hasofer-Lind's index),
• importance sampling techniques,
• response surface methods.

These approaches are not described here, as they are well known in the reliability literature, and the provision of all details is beyond the aim of this paper. These techniques do not always belong to the category of very accurate reliability techniques (especially the first three in the list). However, they represent a feasible alternative in many practical cases.
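A minimal sketch of the rank-order sensitivity measure described above; the simply supported beam model and all its input distributions are invented for the example:

```python
# Sketch: Spearman rank-order sensitivity of a response to its random inputs.
# The closed-form "model" is a hypothetical stand-in for an expensive solver.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=4)
n_sim = 64  # small LHS-style sample of stratified percentiles
probs = (np.arange(n_sim) + 0.5) / n_sim

E = stats.lognorm(s=0.15, scale=30e9).ppf(rng.permutation(probs))  # modulus [Pa]
b = stats.norm(0.20, 0.01).ppf(rng.permutation(probs))             # width   [m]
h = stats.norm(0.40, 0.02).ppf(rng.permutation(probs))             # depth   [m]
q = stats.norm(20e3, 4e3).ppf(rng.permutation(probs))              # load    [N/m]

L = 6.0  # span [m]
deflection = 5 * q * L**4 / (384 * E * (b * h**3 / 12))  # midspan deflection

for name, x in [("E", E), ("b", b), ("h", h), ("q", q)]:
    rho, _ = stats.spearmanr(x, deflection)
    print(f"Spearman rho({name}, deflection) = {rho:+.2f}")
# q dominates here, followed by E and h (which enters cubed); b barely matters
```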
3. FREET SOFTWARE

FReET, the multipurpose probabilistic software for the statistical, sensitivity and reliability analysis of engineering problems (Novák et al. 2003, 2009, 2013), is based on the efficient reliability techniques described above. There are three basic parts.

The "Random Variables" window (Figure 1) allows the user-friendly input of the basic random variables of the analyzed problem. Uncertainties are modeled as random variables described by their probability density functions (PDFs). The user can choose from a set of selected theoretical models such as normal, lognormal, Weibull, rectangular, etc. Random variables can be described in three ways. The first option is to describe them by their statistical characteristics (statistical moments): the mean value, standard deviation (or coefficient of variation), coefficient of skewness and kurtosis excess. Alternatively, they can be set based on their parameters, or on a combination of parameters and moments. The number of free parameters is identical in all three modes (moments, parameters or a mixture of both), and it represents the "degrees of freedom" of the distribution.

A special feature is also enabled: the user can work with a variable that represents the i-th greatest or smallest of n independent and identically distributed (iid) variables selected from the basic (elemental) distribution (order statistics). In this way, e.g. the smallest of n iid random variables can be selected, and the software works with this transformed distribution as if it were on the list of available elemental distributions. This feature is accessible from the "Distribution details" window, which also provides the option of performing basic computations with a single random variable. Another option allowing the definition of the distribution of a single random variable is the use of raw data. Upon loading an arbitrary list of values, the program either enables the use of a histogram or proposes the best-matching available parametric distribution based on the Kolmogorov-Smirnov test.

The "Statistical Correlation" window serves for the input of the target correlation matrix T. The user can work at the level of subsets of the correlation matrix (each related to a group of random variables) or at the global level (all random variables, resulting in one large correlation matrix). The level of correlation during interactive input is highlighted, and positive definiteness is checked. Note that the Simulated Annealing approach applied for correlation control does not require positive definiteness, as it automatically delivers a sample having the positive semidefinite correlation matrix nearest to the target matrix T.

The random input parameters are generated according to their PDFs using LHS sampling. The samples are reordered by the Simulated Annealing approach in order to match the required correlation matrix as closely as possible. The generated realizations of the random parameters are used as inputs for the analyzed function (computational model). The solution is performed Nsim times and the results (structural responses) are saved. At the end of the whole simulation process the resulting set of structural responses is statistically evaluated. The results are: estimates of the mean value, variance, and coefficients of skewness and kurtosis, together with the empirical cumulative distribution function estimated from the histogram of the structural response. This basic statistical assessment is visualized through the "Histograms" window. It is followed by reliability analysis based on several approximation techniques: (i) the basic estimate of reliability by the Cornell safety index, (ii) the curve fitting approach applied to the computed empirical histogram of the response variables, and (iii) the simple estimate of the probability of failure based on the ratio of failed trials to the total number of simulations.
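Estimates (i)-(iii) admit a compact sketch, assuming a sample z of safety margin values has already been obtained from the Nsim model evaluations (illustrative only, not the FReET implementation; the normal fit in step (ii) stands in for FReET's selection among many candidate distributions):

```python
# Sketch of the three elementary reliability estimates for a safety margin sample z.
import numpy as np
from scipy import stats

def reliability_estimates(z):
    m, s = z.mean(), z.std(ddof=1)
    beta = m / s                           # (i) Cornell reliability index
    p_cornell = stats.norm.cdf(-beta)      #     corresponding p_f (normal assumption)

    loc, scale = stats.norm.fit(z)         # (ii) curve fitting, here a normal model
    p_fit = stats.norm.cdf(0.0, loc, scale)

    p_ratio = np.mean(z <= 0.0)            # (iii) ratio of failed trials
    return beta, p_cornell, p_fit, p_ratio

z = np.random.default_rng(seed=5).normal(3.0, 1.0, size=100)  # stand-in sample
beta, p1, p2, p3 = reliability_estimates(z)
print(f"beta = {beta:.2f}; p_f: Cornell {p1:.1e}, fit {p2:.1e}, ratio {p3:.1e}")
```

With only 100 simulations, estimate (iii) is typically zero for failure probabilities of practical interest, which is exactly why the distribution-based estimates (i) and (ii) are offered.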
Additional information regarding the problem being solved is obtained via the sensitivity analysis of each response function, based on its rank-order correlation coefficients. Even though this is actually a byproduct of the simulation which does not require any special additional effort, it provides very useful information in many cases. If the correlation coefficient between a certain input variable and the output variables is close to zero, we can conclude that the input variable has (in its simulated range) a small or even negligible effect on the output. This can sometimes help to decrease the probabilistic dimension of the problem, because such an input can be considered deterministic.

3.1. Summary of main features

State-of-the-art probabilistic algorithms are implemented in FReET to compute the probabilistic response and reliability. FReET is a modular computer system for performing probabilistic analysis, developed mainly for computationally intensive deterministic modeling and the running of user-defined subroutines. The main features of the software are:

3.1.1. Stochastic model (inputs)
The fundamental part of the software is the user-friendly handling of the inputs ‒ the basic random variables and their statistical correlation. The main features are:
• A friendly Graphical User Environment (GUE).
• 30 probability distribution functions (PDFs), mostly 2-parametric, some 3-parametric, two 4-parametric (Beta PDF and normal PDF with a Weibullian left tail).
• Unified description of random variables with the optional use of statistical moments or parameters, or a combination of moments and parameters.
• PDF calculator.
• Extreme value distributions and order statistics for any available parametric distribution.
• Statistical correlation (there is also a weighting option).
• Categories and comparative values for PDFs.
• Visualization of the basic random variables, including statistical correlation, in both Cartesian and parallel coordinates.

3.1.2. Response/Limit state function
The user has several options for defining the analyzed function. The complexity of the task is decisive for the selection of an appropriate interface. Several efficient and user-friendly options are implemented (a sketch of the text-file interface idea follows after Figure 1 below):
• Closed form (direct), using the implemented Equation Editor (simple problems).
• Numerical (indirect), using a user-defined DLL function that can be prepared in practically any programming language (C++, Fortran, Delphi, etc.).
• General interface to third-party software using user-defined *.BAT or *.EXE programs based on input and output text communication files.
• Multiple response functions assessed in the same simulation run.

3.1.3. Results (outputs)
The assessment of the outputs (the results of the Monte Carlo type simulation) consists of:
• Histograms of output variables.
• Sensitivity analyses.
• Reliability estimates by various simulation and approximation methods.
• Limit state functions.
• Parametric studies.
• Cost/Risk assessment.

3.1.4. Probabilistic techniques
Both standard and advanced statistical, simulation and reliability techniques are implemented:
• Crude Monte Carlo simulation.
• Latin Hypercube Sampling (3 alternatives).
• Hierarchical Latin Hypercube Sampling (extension of sample size).
• First Order Reliability Method (FORM).
• Curve fitting.
• Simulated Annealing employed for correlation control over inputs.
• Bayesian updating.
• Response surface.
• Importance sampling around mean values.

Figure 1: "Random variables" window (above); "Reliability" window with empirical histogram, curve fitting, Cornell safety index and Monte Carlo sampling estimates (below).
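To illustrate the third-party text-file interface option from Section 3.1.2, here is a hypothetical sketch; the file names, the one-value-per-line format and the solver executable are all invented, not FReET's actual protocol:

```python
# Hypothetical sketch of a text-file interface to an external solver, in the
# spirit of the *.BAT/*.EXE option above (names and formats are invented).
import subprocess
from pathlib import Path

def evaluate_model(x, workdir=Path("run")):
    """Write inputs, call the external solver, read the response back."""
    workdir.mkdir(exist_ok=True)
    inp, out = workdir / "inputs.txt", workdir / "response.txt"
    # one input value per line -- whatever format the wrapped solver expects
    inp.write_text("\n".join(f"{v:.10e}" for v in x))
    # 'solver.exe' is a placeholder for the user's batch file or executable
    subprocess.run(["solver.exe", str(inp), str(out)], check=True)
    return float(out.read_text().strip())

# The probabilistic loop then simply calls evaluate_model() once per realization:
# responses = [evaluate_model(row) for row in lhs_plan]
```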
4. CONCLUSION

The paper describes the main software features and the stochastic methods implemented in the FReET software. Efficient stochastic simulation techniques were combined in order to offer an advanced tool for the probabilistic assessment of user-defined problems at the ultimate capacity and serviceability limit states.

The presented software tools may be applied in the advanced design of structures, when making decisions about alternatives, when searching for optimum life-cycle cost solutions, and in cost-effective decision-making processes concerning maintenance inspection and planning. With regard to this, the time aspect emphasizes the urgent need for the consideration of durability limit states.

Real-world engineering structural design, development and assessment is very challenging, as it is subject to a whole host of sources of variation. Probabilistic techniques are therefore used in various engineering fields, offering advantages over the alternative, but more traditional, deterministic methods that might otherwise be employed. Small-sample probabilistic simulation of the Monte Carlo type can address many of the shortcomings of classical deterministic approaches, and a ready-to-use software program has been developed for the analysis of any user-defined problem. Its wide range of applicability, both practical and theoretical, provides the opportunity for the further intensive development of the software tools.

5. ACKNOWLEDGEMENTS

This paper has been worked out under the project No. LO1408 "AdMaS UP - Advanced Materials, Structures and Technologies", supported by the Ministry of Education, Youth and Sports under the "National Sustainability Programme I", and under the project No. P105-14-10930S "SPADD", supported by the Czech Science Foundation (GAČR).

6. REFERENCES

Conover, W. J. (1975) On a better method for selecting input variables. Unpublished Los Alamos National Laboratories manuscript, reproduced as Appendix A of Latin Hypercube Sampling and the Propagation of Uncertainty in Analyses of Complex Systems by J. C. Helton and F. J. Davis, Sandia National Laboratories report SAND2001-0417; 2002.

Hasofer, A. M., and Lind, N. C. (1974) Exact and invariant second-moment code format. Journal of Engineering Mechanics, ASCE; 100 (EM1): 111–121.

Iman, R. C., and Conover, W. J. (1980) Small sample sensitivity analysis techniques for computer models, with an application to risk assessment. Communications in Statistics: Theory and Methods; A9: 1749–1842.

Inselberg, A. (2009) Parallel Coordinates: Visual Multidimensional Geometry and its Applications. Springer.

Madsen, H. O., Krenk, S., and Lind, N. (1986) Methods of Structural Safety. Prentice Hall, Englewood Cliffs.

McKay, M. D., Conover, W. J., and Beckman, R. J. (1979) A comparison of three methods for selecting values of input variables in the analysis of output from a computer code. Technometrics; 21: 239–245.

Novák, D., Teplý, B., and Shiraishi, N. (1993) Sensitivity analysis of structures: A review. In: Proc. of the Fifth International Conference on Civil and Structural Engineering Computing, Edinburgh, Scotland; 201–207.

Novák, D., Teplý, B., and Keršner, Z. (1998) The role of the Latin Hypercube Sampling method in reliability engineering. In: Proc. of ICOSSAR '97, Kyoto, Japan; 403–409.

Novák, D., Vořechovský, M., and Rusina, R. (2003) Small-sample probabilistic assessment – software FREET. In: Proc. of the 9th Int. Conf. on Applications of Statistics and Probability in Civil Engineering – ICASP 9, San Francisco, USA. Millpress, Rotterdam; 91–96.

Novák, D., Vořechovský, M., and Rusina, R. (2009) Statistical, sensitivity and reliability analysis using software FReET. In: Furuta, Frangopol and Shinozuka (Eds.), Safety, Reliability and Risk of Structures, Infrastructures and Engineering Systems, Proc. of ICOSSAR 2009. Taylor & Francis Group, London; 2400–2403.

Novák, D., Vořechovský, M., and Rusina, R. (2013) FREET version 1.6 – program documentation, User's and Theory Guides. Brno/Červenka Consulting, Prague; http://www.freet.cz.

Novák, D., Vořechovský, M., and Teplý, B. (2014) FReET: Software for the statistical and reliability analysis of engineering problems and FReET-D: Degradation module. Advances in Engineering Software (Elsevier); 72: 179–192. doi:10.1016/j.advengsoft.2013.06.011.

Owen, A. B. (1994) Controlling correlations in Latin hypercube samples. Journal of the American Statistical Association (Theory and Methods); 89 (428): 1517–1522.

Sallaberry, C. J., Helton, J. C., and Hora, S. C. (2008) Extension of Latin hypercube samples with correlated variables. Reliability Engineering & System Safety; 93 (7): 1047–1059.

Schuëller, G. I. (1998) Structural reliability – recent advances (Freudenthal lecture). In: Proc. of ICOSSAR '97, Vol. I. Balkema; 3–35.

Vořechovský, M. (2009) Performance of correlation control by combinatorial optimization for Latin Hypercube Sampling. In: Furuta, Frangopol and Shinozuka (Eds.), Safety, Reliability and Risk of Structures, Infrastructures and Engineering Systems, Proc. of ICOSSAR 2009, 10th International Conference on Structural Safety and Reliability, Osaka, Japan. Taylor & Francis Group, London; 3844–3851.

Vořechovský, M., and Novák, D. (2009) Correlation control in small sample Monte Carlo type simulations I: A Simulated Annealing approach. Probabilistic Engineering Mechanics; 24 (3): 452–462.

Vořechovský, M. (2011) Correlation in probabilistic simulation. In: Faber, Köhler and Nishijima (Eds.), ICASP11 – Applications of Statistics and Probability in Civil Engineering, Zürich, Switzerland. Taylor & Francis Group, London; 2931–2939.

Vořechovský, M. (2012) Correlation control in small sample Monte Carlo type simulations II: Analysis of estimation formulas, random correlation and perfect uncorrelatedness. Probabilistic Engineering Mechanics (Elsevier); 29: 105–120.

Vořechovský, M. (2014) Hierarchical refinement of Latin Hypercube Samples. Computer-Aided Civil and Infrastructure Engineering, in press, available online. DOI: 10.1111/mice.12088.
