UBC Theses and Dissertations
Sensitivity analysis and section positioning for road design model. Sarafrazi, Soroor, 2016.

Sensitivity analysis and section positioning for road design model

by

Soroor Sarafrazi

M.Sc., Shahid Bahonar University of Kerman, 2011

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF SCIENCE in THE COLLEGE OF GRADUATE STUDIES (Interdisciplinary Studies - Optimization)

THE UNIVERSITY OF BRITISH COLUMBIA (Okanagan)

September 2016

© Soroor Sarafrazi, 2016

The undersigned certify that they have read, and recommend to the College of Graduate Studies for acceptance, a thesis entitled: Sensitivity analysis and section positioning for road design model, submitted by Soroor Sarafrazi in partial fulfillment of the requirements of the degree of Master of Science.

Prof. Yves Lucet (Arts & Sciences, UBC Okanagan), Supervisor
Dr. Jason Loeppky (Arts & Sciences, UBC Okanagan), Co-Supervisor
Dr. Warren Hare (Arts & Sciences, UBC Okanagan), Supervisory Committee Member
Dr. Kasun Hewage (School of Engineering, UBC Okanagan), University Examiner

September 2016 (Date Submitted to Grad Studies)

Abstract

A strong economy is one of the most important factors in a country's development, and roads are vital to expediting economic growth. Although road networks lead to social and economic development, road construction and maintenance are expensive activities. Therefore, there is a strong interest in minimizing the construction cost while satisfying safety and environmental constraints.
Different road design models have been proposed, but the sensitivity of these models to their input parameters has not yet been analyzed. Sensitivity analysis of a road model before building the road is highly recommended because it helps to investigate the stability of the model and to identify the parameters that have the most effect on the variability of the model output. In addition, sensitivity analysis helps to enhance the communication between modelers and decision makers (managers). Therefore, different sensitivity analysis methods are presented in this thesis and the sensitivity of a road model to its inputs is studied.

Moreover, in a road design model, the ground is discretized into sections, and the number of sections is highly correlated with the optimization time. Designing roads that consider all sections is too time consuming to be practical. Thus, different methods are presented in this thesis to reduce the number of sections while keeping the accuracy of the ground profile.

Preface

A paper co-authored with Dr. Jason Loeppky and Dr. Yves Lucet is adapted from the second chapter of this thesis. This paper has been submitted for publication to the journal Reliability Engineering & System Safety [SLL16]. In addition, a presentation, joint work with Dr. Jason Loeppky and Dr. Yves Lucet, was adapted from the third chapter and given at the 58th Canadian Operational Research Society Annual Conference in Banff, Canada.

Table of Contents

Abstract
Preface
Table of Contents
List of Tables
List of Figures
Acknowledgments
Dedication
Chapter 1: Introduction
Chapter 2: A Survey on the Sensitivity Analysis Methods for Understanding Complex Models
  2.1 Sensitivity analysis
  2.2 Basic notation and definitions
  2.3 Linear and monotonic Sensitivity Analysis (SA) methods
  2.4 Nonlinear SA methods
    2.4.1 Variance
    2.4.2 Variance-mean
    2.4.3 Factor screening methods
    2.4.4 Derivative-based method
    2.4.5 Distance between distributions
    2.4.6 Moment independence measures
  2.5 Surrogate-based methods for expensive function models
  2.6 Sensitivity indices for multivariate outputs
    2.6.1 Multivariate Sobol sensitivity analysis
    2.6.2 Sequential sensitivity analysis
    2.6.3 Principal Components Analysis (PCA)-based multivariate sensitivity analysis
    2.6.4 Variance-based multivariate sensitivity analysis
  2.7 Numerical examples and discussion
    2.7.1 Study 1
    2.7.2 Study 2
    2.7.3 Study 3
  2.8 Discussion
Chapter 3: Section positioning in the road design vertical alignment problem
  3.1 The road design problem
  3.2 The road design model
  3.3 Section positioning methods
    3.3.1 Method 1: The heuristic method
    3.3.2 Method 2: The regression tree
    3.3.3 Method 3: Piecewise constant approximation
  3.4 Results and Discussion
    3.4.1 Comparing the three methods
  3.5 Summary of the section positioning methods
Chapter 4: Conclusion
Bibliography

List of Tables

Table 2.1 The Pearson, Spearman and the main Sobol sensitivity indices of four functions (2.27).
Table 2.2 The sensitivity indices of the linear and monotonic methods for the Ishigami function.
Table 2.3 The numerical values of the Mckay and Sobol indices obtained from symbolic integration in Maple for the Ishigami function.
Table 2.4 The average of sensitivity indices of different nonlinear methods for the Ishigami function.
Table 2.5 The Sobol and FAST indices for Moreb's problem.
Table 3.1 Approximation of cut volumes using side-slopes for a cut area.
Table 3.2 Four different grounds.
Table 3.3 The performance of the heuristic method on Ground A.
Table 3.4 The performance of the regression tree on Ground A.
Table 3.5 The performance of the piecewise constant approximation method on Ground A.
Table 3.6 Comparing methods on Ground A according to two stopping criteria.
Table 3.7 Comparing methods on Ground B according to two stopping criteria.
Table 3.8 Comparing methods on Ground C according to two stopping criteria.
Table 3.9 Comparing methods on Ground D according to two stopping criteria.
Table 3.10 Comparing methods on Ground A based on the material volumes and according to two stopping criteria.
Table 3.11 Comparing methods on Ground B based on the material volumes and according to two stopping criteria.
Table 3.12 Comparing methods on Ground C based on the material volumes and according to two stopping criteria.
Table 3.13 Comparing methods on Ground D based on the material volumes and according to two stopping criteria.

List of Figures

Figure 2.1 The CSM_{X_i}(q) for three inputs of the Ishigami function.
Figure 2.2 The effect of the symmetric reduction of the range to [q, 1 - q] on variance for the three inputs of the Ishigami function.
Figure 2.3 The scatter plot of the Ishigami function with N = 1000.
Figure 2.4 The road and ground profiles.
Figure 2.5 Summary of SA methods for inexpensive models.
Figure 3.1 The vertical alignment of the ground profile.
Figure 3.2 The cross-section of a road with and without side-slopes.
Figure 3.3 Approximation of side-slopes for a cut area.
Figure 3.4 Three iterations of the heuristic method.
Figure 3.5 Three iterations of the regression tree.
Figure 3.6 The piecewise constant approximation.
Figure 3.7 Four ground profiles.
Figure 3.8 An example for Algorithm 2, which approximates the side-slopes of merged sections.

Acknowledgments

I would like to express my deep sense of gratitude to my supervisors Dr. Yves Lucet and Dr.
Jason Loeppky for their up-to-date knowledge, continuous support, great guidance and patience during my research.

I would also like to thank my committee members Dr. W. Hare, Dr. Yves Lucet and Dr. Jason Loeppky for their great advice and inspiration.

I truly appreciate the support of our industrial partner (Softree Technical Systems Inc.). In particular, David Mills, Craig Speirs, and Alexis Guigue were always very supportive, and their technical information is highly appreciated.

I owe my deepest gratitude to my parents and family members for their support and encouragement throughout my life.

This research is partially funded by the Natural Sciences and Engineering Research Council of Canada, the Canada Foundation for Innovation, the British Columbia Knowledge Development Fund, and Softree Technical Systems Inc.

Dedication

To my beloved parents and family members.

Chapter 1: Introduction

A high quality road network can increase the standard of living, facilitate trading and lead to social and economic development. However, a road network also has negative effects, including damage to the environment, accidents, and air and noise pollution [FJM+01]. Therefore, designing a road implies trade-offs between safety, social, environmental and economic impacts; it is a complicated and challenging process [JSJ06]. The road design is usually split into three interconnected stages: horizontal alignment, vertical alignment, and earthwork [HKL11].

In the horizontal alignment, the road curve is examined from a satellite's eye view, and the primary objective in this stage is minimizing environmental issues and construction costs [EM07]. After the horizontal alignment is fixed, the starting and end points can be connected by a curve and the vertical ground profile can be examined along this curve. In other words, a road's change in elevation can be studied in the vertical alignment stage.
The primary objective in this stage is minimizing the construction cost [HKL11]. After the vertical alignment stage, the earth is rearranged with minimum cost in the earthwork optimization stage. In this stage, the ground profile is changed into the desired smooth road profile [MS81].

In all stages of the road design model, there are several inputs, and their uncertainties lead to uncertainty in the model output. SA studies how the model output uncertainty can be attributed to the different sources of uncertainty in the inputs [SAA+10]. SA indicates whether the model is robust to small changes in the inputs and determines the robustness of the model when one or several of the model parameters are varied simultaneously [STCR04]. In other words, SA can help to improve the road design model and to test the robustness of a decision. Therefore, different SA methods are presented and compared in this thesis, and the most appropriate SA methods are applied to a road design model.

In addition to the SA of the road design model, the optimization time of this model should also be considered. In the road design model, the ground profile is discretized into sections [HHLR14] and the optimization time is proportional to the number of sections. Optimizing long roads with many sections is too time consuming to be practical. Therefore, the number of sections should be reduced while considering a trade-off between time and accuracy of approximation. Consequently, different methods for section positioning and for reducing the number of sections are presented and compared in this thesis.

This thesis is organized as follows. Chapter 2 is a survey paper on SA which has been submitted for publication to the journal Reliability Engineering & System Safety [SLL16]. This paper also includes SA of a road design model. Chapter 3 introduces and compares different methods for section positioning.
Finally, conclusions with directions for future research are provided in Chapter 4.

Chapter 2: A Survey on the Sensitivity Analysis Methods for Understanding Complex Models

Mathematical models representing real world situations are now ubiquitous across science, engineering and the humanities. One major challenge with these models is understanding and quantifying the effect of uncertain inputs on the output of the function. In this chapter, we review fundamental sensitivity analysis methods and classify them to allow non-specialists to select the method most appropriate for their specific problem. The classification is based on each SA method's mechanism and assumptions. The SA methods for inexpensive models are summarized with a decision tree. We also consider the sensitivity analysis of expensive models and of models with multivariate outputs. The accuracy and computational costs of the methods are compared on three numerical examples and on a real-world application in road design.

2.1 Sensitivity analysis

Mathematical models widely used in the natural sciences, engineering and network science are subject to different sources of uncertainty, e.g., inaccurate parameter values, uncertain model structure, and unknown parameter ranges [GMPS14]. Some of the biggest challenges for modelers are dealing with uncertainty in the input parameters and finding the parameters that have the largest effects on the model output. Different terms are used interchangeably for these parameters, such as "important", "sensitive", "most influential", "effective", "major contributor", or "correlated (with output)" [Ham94, SCS+00]. Identifying the important parameters and understanding their contributions to the output is one goal of sensitivity analysis (SA) [STCR04, OO04]; this goal is called factor prioritization [BP16].

The other goals of SA are factor fixing, model structure, direction of change and stability.
Factor fixing is associated with finding the least important inputs, which can then be fixed anywhere in their variability ranges. The model structure goal asks whether the model inputs affect the output individually or whether the interactions between inputs play an important role in influencing the output. Direction of change studies whether an increase (decrease) in a model input causes an increase (decrease) in the model output. Stability determines the input space over which the optimal solution does not change [BP16].

SA gives insight into a model's behavior through these five goals [Alv09, Ham94]. SA also helps validate a model via the factor fixing and model structure goals [IL14, AIGMA05, CP02].

SA has seen applications in diverse settings. For example, it is used to assess the risk caused by the uncertainty in the parameters of a CO2 storage model [AON13]. Cannavo [Can12] implemented SA to validate and develop a model that can best describe the state of a volcano. Sellers and Crompton [SC04] implemented SA on a biomechanical model, and they strongly encourage the use of SA as a validation technique for modeling. Saint-Geours et al. [SGBG14] implemented a flood damage assessment model, and the quality of the conclusions drawn from the model output is studied by SA. In [LMM14], SA is used to assess the dependability of complex systems and helps to predict failures and schedule maintenance.

SA methods are based on producing a quantitative evaluation of the contribution of each parameter in the form of sensitivity indices. Using the values of these indices, one can rank the input parameters from most important to least important. SA methods with different assumptions may lead to different rankings of the inputs that influence the model output. Therefore, SA methods should be chosen properly to find the ranking most appropriate for a given application [CHT00].
Using two or more methods with different mechanisms is recommended to identify the key inputs with more confidence [CP02].

SA methods are usually categorized into local and global methods. Local SA methods concentrate on the sensitivity of the output with respect to small perturbations around the input values. In local SA, the values of the input parameters of interest are perturbed while the other parameters are fixed at their nominal values (all inputs are perturbed if the local sensitivity of the output to all inputs is required). While local SA methods do not study the whole ranges of the inputs, they can capture co-dependencies using higher order derivatives or profile-based measures [SK11]. Global SA methods take one or a combination of input parameters and study the model output over the entire ranges of the input parameters [Sud08, SK11].

One can classify global SA methods as mathematical, statistical or graphical [CP02]. However, in this paper, we choose to focus on the methods' assumptions and mechanisms. The global SA methods in this paper are classified into linear, monotonic and nonlinear methods.

Linear SA methods assume a linear approximation of the model output with respect to the variable of interest (where all others are ignored).
If these methods find a highly important input, then the model has one active input and can be approximated by a linear function of that important input. Monotonic SA methods assume a monotonic relationship between the model output and the variable of interest (where all others are ignored). Nonlinear SA methods do not assume any relationship between the model output and the inputs.

In this paper, unlike other review papers [IL14, Ham94, AIGMA05, Ham95], a wide range of global SA methods is studied, and the presented framework helps to choose an appropriate method, where appropriateness is based on the type of problems a method can solve and its computational cost.

When the evaluation of a function, provided as a black box, is expensive in terms of computation time, SA methods become too time consuming. In that situation, one needs to approximate the function using a surrogate. There are numerous methods to construct a surrogate model, such as moving least-squares [Lev98], polynomial chaos expansion [Sud08, Ale13, DNM+03], Gaussian process (GP) [OO04], artificial neural network [MCLH14], linear model [PPS09], support vector machine [BM08], Chebyshev polynomials [WLZZ15], etc. In the present survey, we focus on the GP methods, since sensitivity indices are obtained while building the GP surrogate (applying a Bayesian approach to the GP). For other surrogate models, the sensitivity indices must be estimated after the surrogate is developed.

When a model is vector-valued (multivariate output), the model output is either time dependent (dynamic) or not (static). Depending on the type of model output, different SA methods are presented in this chapter.

For static models with high dimensional outputs, Gamboa et al. [GJKL13] generalize the Sobol method. The sensitivity of dynamic models can be presented graphically or shown by indices.
The sequential sensitivity analysis method introduced in [PCS+96] and the PCA-based method used in [LMM11, LML+09, CMW06] present the sensitivity indices graphically. The factor and time sensitivity indices introduced by Cao et al. [CDD13] are suggested for the sensitivity analysis of dynamic models. We discuss both static and dynamic multivariate sensitivity analysis in Section 2.6.

The paper is organized as follows. Section 2.2 is devoted to basic notation and definitions. Section 2.3 presents the linear and monotonic SA methods. The nonlinear SA methods are described in Section 2.4. Sections 2.3 and 2.4 consider SA methods for inexpensive models that can be evaluated quickly, while the methods for expensive models are discussed in Section 2.5. Section 2.6 presents the SA methods for models with multivariate outputs. In Section 2.7, the SA methods are implemented on three numerical examples and also on an engineering model, with a comparison of accuracy and computational costs. Finally, conclusions are provided in Section 2.8. These conclusions discuss how to pick a proper SA method according to the SA goal, the number of model inputs, and the complexity of the model and of the SA methods.

2.2 Basic notation and definitions

In what follows, a bold capital letter ($\mathbf{X}$) denotes a matrix, a bold lower-case letter ($\mathbf{x}$) is a vector, a regular capital letter ($X$) is a random variable, and a scalar is denoted by a regular lower-case letter ($x$).

The model input parameters are denoted by $x = (x_1, \ldots, x_d)$, which has dimension $d$. Let $G(x)$ be the model function we wish to study; the possibly vector-valued output is denoted by $y = (y_1, \ldots, y_l) = G(x)$.

The vector $x_{-i}$ of dimension $d-1$ denotes all the inputs of $x$ except $x_i$. A subset of inputs is denoted by $x_R = (x_{i_1}, x_{i_2}, \ldots, x_{i_{d'}})$ with $R = \{i_1, \ldots, i_{d'}\}$ ($d' \le d$).
The complementary set for this subset is given by $x_{R^c}$ ($c$ denotes the set complement) and contains the remaining inputs, such that $R \cup R^c = \{1, \ldots, d\}$ and $R \cap R^c = \emptyset$.

Let $f_X(x)$ denote the joint distribution of a vector of random variables $x = (X_1, \ldots, X_d)$; then
$$f_{X_i}(x_i) = \int \cdots \int f_X(x) \prod_{t \neq i} dx_t$$
is the marginal distribution and $F_{X_i}(x_i) = \int_{-\infty}^{x_i} f_{X_i}(s)\,ds$ is the cumulative distribution function (CDF) of $X_i$.

The expectation, the variance, the standard deviation and the covariance of random variables $X_i$ and $Y$ are respectively
$$E(X_i) = \int x_i f_{X_i}(x_i)\,dx_i,$$
$$V(X_i) = E[(X_i - E[X_i])^2],$$
$$\sigma_{X_i} = \sqrt{V(X_i)},$$
$$\mathrm{Cov}(X_i, Y) = E[(X_i - E[X_i])(Y - E[Y])].$$

When the inputs are viewed as random variables, the output $Y = G(X)$ is also a random variable and $E(Y) = \int G(x) f_X(x)\,dx$ is the output mean. The mean of $G(X)$ conditional on knowing $X_R$ is given by
$$E(Y \mid X_R) = \int G(x)\, f_{X_{R^c}}(x)\,dx_{R^c},$$
where $f_{X_{R^c}}(x)$ is the conditional distribution of $X_{R^c}$.

To estimate the sensitivity indices, $N$ samples of each random variable are used. The $s$th sample of the input is denoted by $x_s = (x_{1,s}, \ldots, x_{d,s})$, which leads to the output $y_s$. Further, let $\bar{x}_i$ and $\sigma^2_{X_i}$ denote the usual sample mean and variance of the $i$th coordinate.

Remark 2.1 (Dependent inputs). Most SA methods assume the inputs are independent. Consequently, these methods cannot be directly implemented on models with dependent inputs. For example, if $x_1$ is dependent on $x_2$, then the sensitivity index of $x_1$ indicates the contributions of both $x_1$ and $x_2$.

A procedure to define a set of independent parameters from a set of dependent inputs is suggested by Mara and Tarantola [MT12]. These transformed inputs are then used to study the sensitivities using the methods discussed in Sections 2.3 and 2.4.
The new orthogonal input parameters $x' = (X'_1, X'_2, \ldots, X'_d)$ can be obtained from the dependent normal inputs $x = (X_1, X_2, \ldots, X_d)$ using the following procedure [MT12]:
$$X'_1 = X_1, \qquad X'_i = X_i - E(X_i \mid X'_1, \ldots, X'_{i-1}), \quad \forall i = 2, \ldots, d.$$
It is assumed that the conditional expectation captures the dependency between the inputs. The new inputs $x' = (X'_1, X'_2, \ldots, X'_d)$ are independent and orthogonal.

It should be noted that this procedure produces new independent inputs, and the sensitivity indices of these independent inputs do not indicate the importance of the original (dependent) inputs. Depending on the problem, a change of variables to obtain independent inputs, followed by SA of the output with respect to the new independent inputs, may be acceptable. Otherwise, the methods in Remark 2.4 or Subsection 2.4.6 should be used.

Remark 2.2 (Monte Carlo integration). Numerical integration is a particular usage of Monte Carlo methods, which were named by Ulam and Metropolis [MU49]. Consider the integral $I = \int_\Omega f(x)\,dx$ and let $V = \int_\Omega dx$. Monte Carlo integration uses $N$ samples of $x$ to estimate
$$I \approx I' \equiv \frac{V}{N} \sum_{s=1}^{N} f(x_s).$$
The error of Monte Carlo integration is independent of the number of dimensions and proportional to $1/\sqrt{N}$. Thus, the convergence of Monte Carlo integration can be slow, but it does not suffer from the "curse of dimensionality". Techniques to improve the performance of Monte Carlo integration are discussed in [Wei00].

2.3 Linear and monotonic SA methods

In this section, SA methods that find the linear and monotonic effects of each input $X_i$ on the output $Y = G(x)$ are described. Because these methods are mainstream, only a brief sketch of each is provided. Both methods assume the inputs are independent. Pearson's method is a correlation-based, linear SA method. An example of a monotonic SA method is Spearman's method, which is also a correlation-based sensitivity analysis method. For finding the Pearson and Spearman coefficients, $N$
For finding the Pearson and Spearman coefficients, Nsamples for input x, which lead to N samples for the output Y , (y1, .., yN ),are considered.The concept of correlation was first introduced by Galton in 1888 [Gal88]and the Pearson (product-moment) correlation r was proposed by Pearsonin the 1890s [Pea95]. The correlation between the input Xi and the outputY = G(x) that shows the sensitivity of Xi is given byri =Cov(Xi, Y )σXiσY≈∑s(xi,s − x¯i)(ys − y¯)√∑s(xi,s − x¯i)2√∑s(ys − y¯)2, (2.1)where cov(X,Y ) denotes the covariance of X and Y .82.4. Nonlinear SA methodsThe positive or negative value of cov(X,Y ) shows the positive or negativeslope of Y as a function of X. The correlation coefficient (−1 ≤ ri ≤ 1)measures both the strength and direction of a linear relationship betweenXi and Y . If ri = ±1, it means input i is linearly important. If Xi andY are independent, then the covariance and the correlation are zero, butthe reverse is not always true. This method does not identify nonlinearrelationships between the input and output [Con80].The Spearman correlation is computed by replacing the data with theirranks in the Pearson correlation coefficient. In this situation, the Spearmancorrelation coefficient can be used to determine sensitivities of monotonicrelationships between the model output with the variable of interest (whereall others are ignored). SA methods that do not consider linear or mono-tonic relationships between model output and inputs are studied in the nextsection.2.4 Nonlinear SA methodsIn this section, nonlinear SA methods that do not assume any specificrelationship between model output and inputs are presented . These meth-ods are based on various mechanisms such as variance, metric distance anddifferentiation. In what follows we assume that the inputs are independent.2.4.1 VarianceIn this subsection, SA methods based on variance are introduced. Theyare classified into decomposing and classical approaches. 
In a decomposing approach, such as Sobol's method, a decomposition of the original model is first computed. From this decomposition, the mapping between the inputs and the output and all the sensitivity indices can be obtained. In the classical family of methods, the Fourier Amplitude Sensitivity Test (FAST) estimates the variances directly to obtain the sensitivity indices [CFS+73, CLS78]. Higher order interactions can be computed using the Extended FAST method [STC99].

Sobol's method

Sobol's method is a global SA method for nonlinear models. It decomposes the output variance and then analyzes this decomposition [Sob01]. In other words, Sobol's method is an analysis of variance (ANOVA) decomposition. The Sobol sensitivity indices are obtained from the ANOVA decomposition of $G(x)$:
$$G(x) = G_0 + \sum_i G_i(x_i) + \sum_{i<j} G_{ij}(x_i, x_j) + \cdots + G_{12\ldots d}(x_1, x_2, \ldots, x_d). \qquad (2.2)$$
In this method, $G(x)$ must be square-integrable, and Equation (2.2) is unique only if the inputs are independent. The expectations of all orthogonal components in Equation (2.2) except $G_0$ are zero. These orthogonal components can be obtained by
$$G_0 = E[Y],$$
$$G_i(x_i) = E[Y \mid X_i] - G_0,$$
$$G_{i,j}(x_i, x_j) = E[Y \mid X_i, X_j] - G_0 - G_i(x_i) - G_j(x_j).$$
The variance of $G(x)$ equals
$$V[G(x)] = V[G_0] + \sum_i V[G_i(x_i)] + \sum_{i<j} V[G_{ij}(x_i, x_j)] + \cdots + V[G_{12\ldots d}(x_1, x_2, \ldots, x_d)].$$

Finally, the sensitivity index of the $i$th input, denoted $SI_i$, is the contribution of the input $x_i$ to the output variability. This sensitivity index is defined as
$$SI_i = \frac{V[G_i(x_i)]}{V[G(x)]} \in [0, 1]. \qquad (2.3)$$
Note that
$$\sum_i SI_i + \sum_{i<j} SI_{i,j} + \cdots + SI_{1,2,\ldots,d} = 1.$$

Fact 2.3. [JKLN13] The following relation holds:
$$SI_i = \frac{V[G_i(x_i)]}{V[G(x)]} = \frac{\mathrm{Cov}(G_i(x_i), G(x))}{V[G(x)]}.$$

Proof. Because the components in Equation (2.2) are orthogonal, $E[G_i(x_i)] = 0$ and $E[G_i(x_i)\,G(x)] = E[G_i(x_i)^2]$. Hence
$$V[G_i(x_i)] = E[G_i(x_i)^2] - (E[G_i(x_i)])^2 = E[G_i(x_i)^2],$$
$$\mathrm{Cov}(G_i(x_i), G(x)) = E[G_i(x_i)\,G(x)] - E[G_i(x_i)]\,E[G(x)] = E[G_i(x_i)^2],$$
so $V[G_i(x_i)] = \mathrm{Cov}(G_i(x_i), G(x))$ and the result follows.
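The first-order index in Equation (2.3) can be estimated without any decomposition by approximating $E[Y \mid X_i]$ empirically: bin the samples on $x_i$ and compare the variance of the bin means with the total output variance. A minimal stdlib-only sketch (the additive test function, the sample size and the bin count are illustrative choices, not from the thesis):

```python
import random
import statistics


def first_order_index(model, d, i, n_samples=20000, n_bins=50):
    """Estimate SI_i = V[E[Y|X_i]] / V[Y] for independent inputs uniform on [0, 1]."""
    xs = [[random.random() for _ in range(d)] for _ in range(n_samples)]
    ys = [model(x) for x in xs]
    # Group the outputs into bins according to the value of x_i.
    bins = [[] for _ in range(n_bins)]
    for x, y in zip(xs, ys):
        bins[min(int(x[i] * n_bins), n_bins - 1)].append(y)
    # The variance of the conditional (bin) means approximates V[E[Y|X_i]].
    cond_means = [statistics.fmean(b) for b in bins if b]
    return statistics.pvariance(cond_means) / statistics.pvariance(ys)


# Additive test model y = x1 + 0.5*x2: analytically SI_1 = (1/12) / (1.25/12) = 0.8.
random.seed(0)
si1 = first_order_index(lambda x: x[0] + 0.5 * x[1], d=2, i=0)
```

Finer bins reduce the bias from averaging within a bin, while more samples per bin reduce the noise in each conditional mean; for this additive model the estimate lands close to the analytic value 0.8.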
The complete sensitivity information for input $x_i$ is given by its main effect on the output variability ($SI_i$) together with its interactions with the other inputs. The main effect of input $x_i$ and its interactions are summarized by its total sensitivity index $SI^{tot}_i$, which is computed as
$$SI^{tot}_i = 1 - \frac{V(G_{-i})}{V(G)} = 1 - SI_{-i}, \qquad (2.4)$$
where $V(G_{-i})$ denotes the variance of all the orthogonal components in Equation (2.2) that do not involve input $x_i$. For example, $SI^{tot}_1$ for a model with 3 inputs is
$$SI^{tot}_1 = SI_1 + SI_{1,2} + SI_{1,3} + SI_{1,2,3} = 1 - SI_2 - SI_3 - SI_{2,3}. \qquad (2.5)$$
The Sobol total sensitivity indices $SI^{tot}_i$ help to find the inputs that can be fixed anywhere in their ranges of variability without influencing the output [SRTC05] (an input $i$ can be fixed when $SI^{tot}_i$ is zero; note that $SI_i$ may be close to zero while $SI^{tot}_i$ is not [SRTC12]). Sobol used the Monte Carlo method to estimate the sensitivity indices and suggested using Sobol sequences for faster convergence [Sob01, SRA+08]. Equations (2.3) ($SI_i$) and (2.4) ($SI^{tot}_i$) together give a deep understanding of the model. Estimating Equation (2.4) by the Monte Carlo method helps avoid the curse of dimensionality, since there is no need to estimate all the terms in Equation (2.2) (or, for example, the terms in Equation (2.5)) [SRTC05].

Janon et al. [JKLN13] presented the Homma [HS96] and Monod [MNM06] estimators to approximate the Sobol indices. Two independent samples with identical distributions, $x_s = (x_{R,s}, x_{R^c,s})$ and $x'_s = (x'_{R,s}, x'_{R^c,s})$, are considered ($s = 1, \ldots, N$). In addition, two outputs $y_s = G(x_s)$ and $y^*_s = G(x_{R,s}, x'_{R^c,s})$ are computed. The Sobol index estimators are
$$H_{x_R} = \frac{\frac{1}{N}\sum y_s y^*_s - \left(\frac{1}{N}\sum y_s\right)\left(\frac{1}{N}\sum y^*_s\right)}{\frac{1}{N}\sum y_s^2 - \left(\frac{1}{N}\sum y_s\right)^2} \approx \frac{\mathrm{Cov}(Y, Y^*)}{V(Y)},$$
$$M_{x_R} = \frac{\frac{1}{N}\sum y_s y^*_s - \left(\frac{1}{2N}\sum (y_s + y^*_s)\right)^2}{\frac{1}{2N}\sum (y_s^2 + y^{*2}_s) - \left(\frac{1}{2N}\sum (y_s + y^*_s)\right)^2}, \qquad (2.6)$$
where $H_{x_R}$ and $M_{x_R}$ denote the Homma estimator and the Monod estimator, respectively. The asymptotic variance of the Monod estimator is less than or equal to that of the Homma estimator [JKLN13]. Consequently, $M_{x_R}$ is
Consequently, $M_{x_R}$ is more accurate than $H_{x_R}$ and should be used instead.

There are other Monte Carlo based estimators for the Sobol indices [GI12, Sal02, STG+07, Jan99, SAA+10, LT09]. The Glen formula [GI12] estimates the Sobol indices using the Pearson correlation. Consider $y_s = G(x_s)$, $y^*_s = G(x_{R,s}, x'_{R^c,s})$, $y^{**}_s = G(x'_{R,s}, x_{R^c,s})$ and $y'_s = G(x'_s)$. The Glen estimators of the first-order and total sensitivity of $x_R$, denoted $G_{x_R}$ and $G^{tot}_{x_R}$, are defined as

    G_{x_R} = \frac{1}{2}\left[ r(Y^*, Y) + r(Y', Y^{**}) - r(Y', Y) - r(Y^{**}, Y^*) \right],

    G^{tot}_{x_R} = 1 - \frac{1}{2}\left[ r(Y^{**}, Y) + r(Y', Y^*) - r(Y', Y) - r(Y^{**}, Y^*) \right],

where $r$ denotes the Pearson correlation. The term $-r(Y', Y) - r(Y^{**}, Y^*)$ improves the accuracy of the index estimation [GI12].

In [GI12], this estimator is applied to a test function and compared with other Monte Carlo based estimation formulas, such as the Mauntz formulas [Sal02, STG+07], the Jansen formula [Jan99, SAA+10], the Lilburne and Tarantola formula [LT09] and eight other formulas. The comparison on this test function shows that the Glen formula [GI12] and the Lilburne and Tarantola formula [LT09] are more accurate than the other methods.

Remark 2.4 (Dependent input parameters). Chastaing et al. [CGP+12] introduce new indices measuring the sensitivity of the output to dependent inputs:

    SI_i = \frac{V[G_i(X_i)] + \sum_{v \neq \emptyset,\, i \notin v} \mathrm{Cov}(G_i(X_i), G_v(X_v))}{V[Y]},   (2.7)

where $G_v(X_v)$ is any component in Equation (2.2) that does not involve input $i$.

Fourier Amplitude Sensitivity Test (FAST)

The FAST method is also based on the ANOVA decomposition; the conditional variances are obtained from the Fourier coefficients. In other words, this method uses periodic sampling and the Fourier transform to find the output variance and the conditional variances [CFS+73]. To obtain the FAST indices, frequencies are assigned to the inputs by an arbitrary function

    x_i(s') = g_i(\sin(w_i s')),

where the frequency $w_i$ ($i = 1, \dots, d$) is an integer [CLS78].
One of the transformations $g$ suggested by Saltelli et al. [STC99] is $x_i = \frac{1}{2} + \frac{1}{\pi} \arcsin(\sin(w_i s + \varphi_i))$, where $\varphi_i$ is a random phase-shift chosen uniformly in $[0, 2\pi)$. Cukier et al. [CFS+73] explain how to select the set of integer frequencies. One frequency is assigned to each input parameter, and each input is a function of $w$ and $s'$ ($-\infty < s' < \infty$). The Weyl theorem [Wey38] gives

    \int_0^1 G(x)\, dx_1 \dots dx_d = \lim_{T \to \infty} \frac{1}{2T} \int_{-T}^{T} G(x(s'))\, ds',

therefore $G_0 = \lim_{T \to \infty} \frac{1}{2T} \int_{-T}^{T} G(x(s'))\, ds'$. As the function $G(x(s'))$ is periodic with period $2\pi$, $\lim_{T \to \infty} \frac{1}{2T} \int_{-T}^{T} G(x(s'))\, ds' = \frac{1}{2\pi} \int_{-\pi}^{\pi} G(x(s'))\, ds'$. Hence, the expectation and the variance of the output are approximated as

    G_0 = \frac{1}{2\pi} \int_{-\pi}^{\pi} G(x(s'))\, ds',
    V(G(x)) = \frac{1}{2\pi} \int_{-\pi}^{\pi} G^2(x(s'))\, ds' - G_0^2.   (2.8)

The application of Parseval's theorem to Equation (2.8) leads to

    V(G(x)) = 2 \sum_{l=1}^{\infty} (A_l^2 + B_l^2),

where $A_l$ and $B_l$ are the cosine and the sine Fourier coefficients at harmonic $l$ [SB98].

The conditional variance $V(G_i(x_i))$ is obtained from the cosine and the sine Fourier coefficients at frequency $w_i$ and the higher harmonics $l$. The cosine and the sine Fourier coefficients at frequency $w_i$ and harmonic $l$, denoted $A_{l,w_i}$ and $B_{l,w_i}$, are given by

    A_{l,w_i} = \frac{1}{\pi} \int_{-\pi/2}^{\pi/2} G(x(s')) \cos(l w_i s')\, ds',
    B_{l,w_i} = \frac{1}{\pi} \int_{-\pi/2}^{\pi/2} G(x(s')) \sin(l w_i s')\, ds'.

The variances are estimated as

    V(G_i(x_i)) = 2 \sum_{l=1}^{\infty} (A_{l,w_i}^2 + B_{l,w_i}^2) \approx 2 \sum_{l=1}^{M} (A_{l,w_i}^2 + B_{l,w_i}^2),
    V(G(x)) \approx 2 \sum_{i=1}^{d} \sum_{l=1}^{M} (A_{l,w_i}^2 + B_{l,w_i}^2),

and the sensitivity index is computed as

    SI_i = \frac{V(G_i(x_i))}{V(G(x))} \approx \frac{\sum_{l=1}^{M} (A_{l,w_i}^2 + B_{l,w_i}^2)}{\sum_{i=1}^{d} \sum_{l=1}^{M} (A_{l,w_i}^2 + B_{l,w_i}^2)},   (2.9)

where $M$ is the interference factor (4 or higher) [STC99]. To estimate $A_{l,w_i}$ and $B_{l,w_i}$, $N = 2 M w_{max} + 1$ samples are used ($s' = 1, \dots, N$), where $w_{max}$ is the maximum frequency among the $w_i$ ($i = 1, \dots, d$).

Extended Fourier Amplitude Sensitivity Test (EFAST)

The FAST method above only allows estimation of the main effects [AIGMA05]. To overcome this limitation, the extended FAST (EFAST) method was introduced by Saltelli et al. [STC99].
The EFAST method is based on the FAST method and also finds the higher order interactions. The EFAST sensitivity indices are computed as

    x_i = g(\sin(w_i s')),
    V(G_i) = 2 \sum_{l=1}^{\infty} (A_{l,w_i}^2 + B_{l,w_i}^2) \approx 2 \sum_{l=1}^{M} (A_{l,w_i}^2 + B_{l,w_i}^2),
    V(G) \approx 2 \sum_{i=1}^{d} \sum_{l=1}^{M} (A_{l,w_i}^2 + B_{l,w_i}^2),
    SI_i = \frac{V(G_i)}{V(G)},
    SI^{tot}_i = 1 - SI_{-i}.   (2.10)

To estimate $V(G_i)$, $N = 2 M w_{max} + 1$ samples are used ($s' = 1, \dots, N$) for $x_i$, and the cosine and the sine Fourier coefficients [MTS82] can be estimated as

    A_j = 0 when l is odd,
    A_j = \frac{1}{N}\left[ y(N_0) + \sum_{k=1}^{q} \mathrm{CosS}(k) \cos\left(\frac{\pi j k}{N}\right) \right] when l is even,

where $\mathrm{CosS}(k) = y(N_0 + k) + y(N_0 - k)$, and

    B_j = 0 when l is even,
    B_j = \frac{1}{N}\left[ y(N_0) + \sum_{k=1}^{q} \mathrm{SinS}(k) \sin\left(\frac{\pi j k}{N}\right) \right] when l is odd,

where $\mathrm{SinS}(k) = y(N_0 + k) - y(N_0 - k)$, $q = \frac{N-1}{2}$ and $N_0 = q + 1$.

The EFAST method (like the FAST method) needs fewer samples than Sobol's method, and (like Sobol's method) is also able to find higher order interactions [AIGMA05, STC99].

First-order variance coefficient

The first-order variance coefficient is another interpretation of ANOVA and is defined [M+95, MMM08] as

    \theta_i = V[Y] - E[V[Y | X_i]],
    \eta_i = \frac{\theta_i}{V[Y]},   (2.11)

where $\eta_i$ is equal to the Sobol sensitivity index $SI_i$. For the Monte Carlo estimation of these sensitivity indices, Morris et al. [MMM08] consider a sampling method for the inputs called unbiased permuted column sample (UPCS) plans, which yields a precise estimate of the first-order sensitivity indices. As the UPCS plan can estimate the first-order sensitivity index precisely, and this index only gives information about the main effects of the inputs, this method can be used when the interactions between inputs need not be studied.

UPCS plan

The UPCS is generated as follows. Suppose matrix $A^a$ ($a = 1, 2, \dots, r$) has $N'$ rows and $d$ columns, and each column contains a random permutation of the integers from 1 to $N'$. The random integer $i$ in column $j$ denotes the $i$th sample in the sample set of input $j$.
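The construction of the permuted-column matrices can be sketched as follows (a minimal version assuming, for illustration, that all inputs share one sample set; the pairwise unbiasedness condition of the UPCS plan is not enforced here):

```python
import numpy as np

def permuted_column_samples(sample_set, d, r, rng):
    """Build r permuted-column matrices X^a from r index matrices A^a.

    Each column of A^a is a random permutation of 0..N'-1 (0-based here),
    so each column of X^a is a random reordering of the sample set.
    Note: the "unbiasedness" condition on repeated index pairs across
    matrices is not checked in this sketch.
    """
    s = np.asarray(sample_set)
    n_prime = len(s)
    matrices = []
    for _ in range(r):
        A = np.column_stack([rng.permutation(n_prime) for _ in range(d)])
        matrices.append(s[A])            # X^a[i, j] = sample_set[A[i, j]]
    return matrices

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    for X in permuted_column_samples([0.125, 0.375, 0.625, 0.875], d=3, r=2, rng=rng):
        print(X)
```

Each of the $r$ matrices yields $N'$ model runs, for a total of $N = r N'$ output samples.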
The $N'$ samples of the $d$ inputs are collected in matrix $X^a$, which is obtained from matrix $A^a$ and the input sample sets. Consequently, $N = r \times N'$ input samples are obtained from the $r$ matrices $X^a$, which lead to $N$ samples of $Y$.

For example, suppose $d = 3$, $N' = 4$ and all inputs share the same random sample set $\{0.125, 0.375, 0.625, 0.875\}$. If the matrix $A^a$ is equal to

    A^a = [ 2  1  4
            1  3  1
            3  4  3
            4  2  2 ],

then $X^a$ becomes

    X^a = [ 0.375  0.125  0.875
            0.125  0.625  0.125
            0.625  0.875  0.625
            0.875  0.375  0.375 ].

Note that this permuted column sample is unbiased when $(A^a_{i,j}, A^a_{i,j'}) \neq (A^{a'}_{i',j}, A^{a'}_{i',j'})$ for all $a, a' = 1, 2, \dots, r$ with $a \neq a'$, $i, i' = 1, 2, \dots, N'$ with $i \neq i'$, and $j, j' = 1, 2, \dots, d$ with $j \neq j'$.

The $\eta_i$ in Equation (2.11) is estimated by the Monte Carlo method and the UPCS in [MMM08].

Wei's method

Wei et al. [WLS13] introduced a variance-based SA method, which provides different information than Sobol's method. The ranking of the proposed indices indicates which input parameters most reduce the output variance when their ranges are reduced. The goal is to evaluate the impact of the range of the input parameters on the output. Reducing the size of the interval of an input parameter can result in significant computational savings while providing the same information.

In this method, one input or a set of inputs ($x_R = (x_{i_1}, x_{i_2}, \dots, x_{i_{d'}})$ with $d' \le d$) is selected. For each selected input, two quantile values that are correlated and uniformly distributed are considered. Let $q_{i_k}(1)$ and $q_{i_k}(2)$ ($i_k \in \{i_1, \dots, i_{d'}\}$) be the two quantile values for the $k$th member of the selected set of inputs, such that $0 < q_{i_k}(1) < q_{i_k}(2) < 1$. The quantile values of different inputs should be independent since the inputs are assumed independent. These quantile values lead to the values $u_{i_k}(1) = F^{-1}_{i_k}(q_{i_k}(1))$ and $u_{i_k}(2) = F^{-1}_{i_k}(q_{i_k}(2))$ that define the reduced range of the input, $x_{i_k} \in [u_{i_k}(1), u_{i_k}(2)]$.

Consider the two matrices

    Q_R = [ q_{i_1}(1)  q_{i_2}(1)  ...  q_{i_{d'}}(1)
            q_{i_1}(2)  q_{i_2}(2)  ...  q_{i_{d'}}(2) ]^T,

    U_R = [ u_{i_1}(1)  u_{i_2}(1)  ...  u_{i_{d'}}(1)
            u_{i_1}(2)  u_{i_2}(2)  ...  u_{i_{d'}}(2) ]^T.

The main and the total sensitivity indices for the selected inputs, denoted $W_R$ and $W_{T_R}$, are defined [WLS13] as

    W_R = \frac{E_{Q_R}[V(Y) - V_X(Y | x_R \in U_R)]}{V(Y)} = 1 - \frac{E_{Q_R}[V_X(Y | x_R \in U_R)]}{V(Y)},

    W_{T_R} = \frac{E_{Q_{-R}}[V_X(Y | x_{-R} \in U_{-R})]}{V(Y)},   (2.12)

where $Y$ denotes the model output, and $V_X$ is the variance with respect to $x_R$ over the reduced ranges $U_R$ and with respect to $x_{-R}$ over the full ranges. The expectation with respect to all elements of $Q_R$ is denoted $E_{Q_R}$, and these sensitivity indices are bounded in $[0, 1]$.

Sensitivity index $W_R$ studies the effect of reducing the ranges of the selected inputs (to matrix $U_R$) on the output variability. A small value of $W_R$ indicates that reducing the range (distribution) of $x_R$ has a small effect on the reduction of output variability. For example, if $W_R = 0$, then $E_{Q_R}[V_X(Y | x_R \in U_R)] = V(Y)$, which means that when the ranges of $x_R$ are reduced to $U_R$, the variance of the output stays constant: reducing the range of $x_R$ does not reduce the output variability. If $W_R = 1$, then $E_{Q_R}[V_X(Y | x_R \in U_R)] = 0$, which means that by reducing the range of $x_R$ ($x_R \in U_R$), the output variability becomes 0.

Sensitivity index $W_{T_R}$ studies the effect of reducing the ranges of the remaining inputs on the output variability. If $W_{T_R} = 1$, then $E_{Q_{-R}}[V_X(Y | x_{-R} \in U_{-R})] = V(Y)$, which means that if the ranges of the remaining inputs $x_{-R}$ are reduced, the output variance stays at $V(Y)$. This reveals that the output variance is fully driven by $x_R$. If $W_{T_R} = 0$, then the output variance is fully driven by $x_{-R}$ and no variance is caused by $x_R$.

Consequently, the information provided by Wei's and Sobol's methods is different.
High values of the Wei indices indicate the inputs whose range reduction most reduces the output variability, whereas high values of the Sobol indices indicate which inputs have the highest contribution to the output variability when the distributions of the inputs are fixed.

2.4.2 Variance-mean

The contribution to the sample mean (CSM) plot and the contribution to the sample variance (CSV) plot are two graphical SA methods. The CSV was presented by Tarantola in [TKBL+12], who combined it with the CSM proposed by Sinclair [Sin93]. The CSV and the CSM for input $X_i$ show the influence of this input on the output $Y$. They are defined as

    E(Y) = \int_{-\infty}^{\infty} \dots \int_{-\infty}^{\infty} G(x) f_X(x)\, dx,

    CSM_{X_i}(q) = \frac{1}{E(Y)} \int_{-\infty}^{\infty} \dots \int_{-\infty}^{\infty} \int_{-\infty}^{F^{-1}_{X_i}(q)} G(x) \prod_{i=1}^{d} f_{X_i}(x_i)\, dx_i\, dx_1 \dots dx_{i-1}\, dx_{i+1} \dots dx_d,   (2.13)

    V(Y) = \int_{-\infty}^{\infty} \dots \int_{-\infty}^{\infty} f_X(x) (G(x) - E(G(x)))^2\, dx,

    CSV_{X_i}(q) = \frac{1}{V(Y)} \int_{-\infty}^{\infty} \dots \int_{-\infty}^{\infty} \int_{-\infty}^{F^{-1}_{X_i}(q)} (G(x) - E(G(x)))^2 \prod_{i=1}^{d} f_{X_i}(x_i)\, dx_i\, dx_1 \dots dx_{i-1}\, dx_{i+1} \dots dx_d,   (2.14)

where $q \in [0, 1]$.

The $CSM_{X_i}$ values are plotted for different values of $q$, and the sensitivity of $X_i$ can be measured by how much the $CSM_{X_i}(q)$ curve deviates from the diagonal. As an example, the $CSM_{X_i}(q)$ curves for the three inputs of the Ishigami function (2.28), shown in Figure 2.1, indicate that the most important input is $x_1$.

[Figure 2.1: The CSM_{X_i}(q) for the three inputs of the Ishigami function.]

The $CSV_{X_i}(q)$ is used to study the reduction of output variability obtained by reducing the ranges of the inputs. Let us define $VarRatio_{X_i}(q_1, q_2)$ (which can be bigger than 1) as

    VarRatio_{X_i}(q_1, q_2) = \frac{CSV_{X_i}(q_2) - CSV_{X_i}(q_1)}{q_2 - q_1},   (0 \le q_1 < q_2 \le 1).

The function $VarRatio_{X_i}(q_1, q_2)$ is the ratio of the output variance when the range of input $X_i$ is reduced to $[q_1, q_2]$ to the output variance when the range of input $X_i$ is not reduced ($[0, 1]$).
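An empirical CSM curve is easy to compute from a Monte Carlo sample when the output is nonnegative. The sketch below is a plain sample-based version of Equation (2.13) on a hypothetical toy model where the output is driven entirely by $x_1$, so the $x_1$ curve bends away from the diagonal while the $x_2$ curve stays on it:

```python
import numpy as np

def csm_curve(xi, y, qs):
    """Empirical CSM_{X_i}(q): fraction of the output sum contributed by
    the samples whose x_i lies below its q-quantile (assumes y >= 0)."""
    total = y.sum()
    return np.array([y[xi <= t].sum() / total for t in np.quantile(xi, qs)])

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    x = rng.random((20_000, 2))
    y = x[:, 0]                                  # toy model: output = x_1
    qs = np.linspace(0.05, 0.95, 19)
    dev1 = np.max(np.abs(csm_curve(x[:, 0], y, qs) - qs))  # far from diagonal
    dev2 = np.max(np.abs(csm_curve(x[:, 1], y, qs) - qs))  # near the diagonal
    print(dev1, dev2)
```

For this model the $x_1$ curve is approximately $q^2$, giving a maximum deviation of about 0.25 from the diagonal, while the $x_2$ deviation is only sampling noise.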
Plotting $VarRatio_{X_i}(q_1, q_2)$ shows the effect of reducing the range of input $X_i$ on the output variance. The effect of reducing the input ranges on the output variance of the Ishigami function is shown in Figure 2.2.

[Figure 2.2: The effect of symmetrically reducing the range to [q, 1 − q] on the variance for the three inputs of the Ishigami function.]

As Figure 2.2 shows, if $q = 0.05$ for $X_3$ (which means reducing the range of $X_3$ to $[0.05, 0.95]$, i.e., by 10%), then the variance of the output is reduced by 32% (100% − 68%): the output variance ratio is 0.68, which indicates that the ratio of the output variance when the range of $X_3$ is reduced (by 10%) to the output variance with no input range reduction is 0.68. Figure 2.2 also shows that reducing the range of $X_2$ toward zero ($q = 0.5$) increases $VarRatio_{X_i}(q_1, q_2)$. Consequently, an effective variance reduction can be obtained by reducing the range of $X_3$.

Plischke [Pli12] introduces the CUSUNORO approach, which improves the CSM for non-positive data or an output with mean zero.

2.4.3 Factor screening methods

In this section, two methods based on factor screening, proposed by Morris [Mor91] and Saltelli et al. [SRTC12], are explained.

Morris method

Morris introduced a SA method for models that have a moderate-to-large number of inputs [Mor91]. This method can measure nonlinear relationships between the inputs and the output. The important input parameters are identified and ranked efficiently using a One-At-a-Time (OAT) design. The OAT SA design changes the input parameters by the same relative amount; the input that produces the largest variation in the output is the most important input. The Elementary Effect (EE) is defined as

    EE_i = \frac{y(x_1, \dots, x_{i-1}, x_i + \Delta, \dots, x_d) - y(x)}{\Delta},   (2.15)

where $\Delta$ is a multiple of $1/(p - 1)$ and $p$ is the number of values (levels) an input can have in its range.
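Equation (2.15), together with the ($\mu^*$, $\sigma$) measures defined below in Equation (2.16), can be sketched in a few lines. The example uses a hypothetical test function $y = x_1^2 + x_2$, for which $EE_2$ is exactly 1 (so $\sigma_2 = 0$, flagging the purely linear input):

```python
import numpy as np

def elementary_effects(f, x0, delta):
    """One OAT pass: EE_i for each input from a common base point x0."""
    y0 = f(x0)
    ee = np.empty(len(x0))
    for i in range(len(x0)):
        xp = x0.copy()
        xp[i] += delta
        ee[i] = (f(xp) - y0) / delta
    return ee

def morris_measures(f, d, n, p=4, rng=None):
    """mu* (Campolongo's absolute mean) and sigma over n random base points."""
    rng = rng or np.random.default_rng(0)
    delta = 1.0 / (p - 1)
    base_levels = np.arange(p - 1) / (p - 1)   # levels that allow a +delta step
    ees = np.array([elementary_effects(f, rng.choice(base_levels, size=d), delta)
                    for _ in range(n)])
    return np.abs(ees).mean(axis=0), ees.std(axis=0)

if __name__ == "__main__":
    f = lambda x: x[0] ** 2 + x[1]
    mu_star, sigma = morris_measures(f, d=2, n=10)
    print(mu_star, sigma)   # sigma for x2 is 0: x2 enters linearly
```

A nonzero $\sigma_i$ (as for $x_1$ here) signals nonlinearity or interactions, exactly as described below.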
The EE evaluates the effect of changing one input by $\Delta$. For example, if $p = 4$, then the possible values for the inputs are $\{0, 1/3, 2/3, 1\}$, and an input matrix for $d = 3$ ($d$ is the input dimension) and $\Delta = 1/3$ can be

    X = [ 1    1/3  0
          1    2/3  0
          1    2/3  1/3
          2/3  2/3  1/3 ],

where column $i$ shows 4 samples for input $X_i$; one of the inputs is changed by a positive or negative $\Delta$ in each input sample (row). Putting the outputs of the first and second rows of this matrix in Equation (2.15) gives $EE_2$ (because only $x_2$ is changed). $EE_3$ is obtained by putting the outputs of the second and third rows of this matrix in Equation (2.15) (as input $x_3$ is changed), while the outputs of the third and fourth rows give $EE_1$.

The Morris sensitivity indices are calculated as the mean ($\mu_i$) and the standard deviation ($\sigma_i$) of $N$ values of $EE_i$. The largest values of $\mu_i$ and $\sigma_i$ indicate the most important input parameters. In addition, $\sigma_i$ reveals interactions with other inputs and/or a nonlinear effect on the output. They are computed as

    \mu_{EE_i} = \frac{\sum_{s=1}^{N} EE_{i,s}}{N},
    \sigma_i = \sqrt{\frac{1}{N} \sum_{s=1}^{N} (EE_{i,s} - \mu_{EE_i})^2}.   (2.16)

Saltelli et al. [STCR04] suggest setting $N$ between 4 and 10, and Campolongo [CCS07] proposes using $\mu^*_{EE_i} = \sum_{s=1}^{N} |EE_{i,s}| / N$ instead of $\mu_{EE_i}$ because $\mu^*_{EE_i}$ prevents the cancellation of effects that may happen in $\mu_{EE_i}$.

Radial screening measure

Saltelli et al. introduce the following elementary effect

    EE_i = \left| \frac{G(X^i_{XX}) - G(X)}{X^i_{XX} - X} \right|,   (2.17)

where matrices $X$ and $XX$ are two independent input samples (matrices with $N$ rows and $d$ columns), and all the columns of matrix $X^i_{XX}$ are the same as the columns of $X$ except that its $i$th column is taken from matrix $XX$. Matrices $X$ and $X^i_{XX}$ span the input space since the $N(d+2)$ samples of $X$, $XX$ and $X^i_{XX}$ form a "radial" design. In other words, the sample from the first row of matrix $X$ and the first rows of the $d$ matrices $X^i_{XX}$ form a star: the core of the star is the sample from $X$, while the $d$ samples from the $d$ matrices $X^i_{XX}$ form the rays of the star. As there are $N$ samples, the number of stars is $N$.

Finally, the radial screening measure for input $i$ is the average of the $N$ ($N \approx 4$–$8$) elementary effects $EE_i$. The advantage of this method is that it explores the input space more efficiently than the Morris method.

2.4.4 Derivative-based method

In what follows, two methods based on derivatives, namely derivative-based global sensitivity measures (DGSM) [K+09] and Wang's method [WLHZ14], are described. DGSM is based only on the partial derivatives, while both the partial derivatives and the ANOVA-like decomposition (described in Subsection 2.4.1) are used in Wang's method.

Derivative-based global sensitivity measures

Kucherenko et al. [K+09] use local sensitivity measures based on the partial derivatives $\partial G / \partial x_i$ and define three measures:

    M^*_i = \int \left| \frac{\partial G}{\partial x_i} \right| dx,
    \Sigma^*_i = \sqrt{\int \left( \left| \frac{\partial G}{\partial x_i} \right| - M^*_i \right)^2 dx},
    \Gamma_i = \Sigma^*_i + M^*_i.   (2.18)

The set $\{M^*_i, \Sigma^*_i, \Gamma_i\}$ is called the derivative-based global sensitivity measures (DGSM) [K+09]. DGSM are compared with the Sobol and Morris methods in [K+09], and the results show this approach is faster and more accurate than the Morris method. In addition, there is a link between DGSM and the Sobol indices, and the computational time for computing DGSM is lower than the computational time for the Sobol indices.

Wang's method

The distribution parameters of the inputs (such as the means and variances of the input distributions) are not taken into account in the variance-based sensitivity indices [WLHZ14]. Thus, Wang et al. propose sensitivity indices based on derivatives and an ANOVA-like decomposition that indicate the influence of the input distribution parameters [WLHZ14]. In other words, the proposed sensitivity indices are complementary to the variance-based sensitivity indices.

The proposed sensitivity indices are obtained as the partial derivatives of the first-order variance contribution (FOVC) with respect to the distribution parameters.
These indices are simplified using kernel functions that are obtained analytically for different distributions. For example, the partial derivatives of $V_i$ with respect to the distribution parameters $\theta_{X_i}$ of input $X_i$ and $\theta_{X_j}$ of input $X_j$ ($X_j \in x_{-R}$) are derived as

    \frac{\partial V_i}{\partial \theta_{X_i}} = E\left( (E(Y | X_i) - E(Y))^2 k_i \right),

    \frac{\partial V_i}{\partial \theta_{X_j}} = 2 E\left( (E(Y | X_i) - E(Y)) (E(Y k_j | X_i) - E(Y k_j)) \right),   (2.19)

where $k_i$ is the kernel function of $\theta_{X_i}$ and equals $\frac{\partial f_{X_i}}{\partial \theta_{X_i}} \frac{1}{f_{X_i}(x_i)}$.

For example, assume the model is a quadratic polynomial without cross terms, such as

    Y = a_0 + \sum_{i=1}^{d} a_i X_i + \sum_{i=1}^{d} b_i X_i^2,

where input $X_i$ has a normal distribution with mean $\mu_i$ and variance $\sigma_i^2$. The kernel functions of the normal distribution parameters $\mu_i$ and $\sigma_i$ are found as

    k_{\mu_i} = \frac{X_i - \mu_i}{\sigma_i^2},
    k_{\sigma_i} = \frac{1}{\sigma_i}\left[ \left( \frac{X_i - \mu_i}{\sigma_i} \right)^2 - 1 \right].

Finally, the proposed sensitivity indices, which are the derivatives of the FOVC with respect to $\mu_i$ and $\sigma_i$, are

    \frac{\partial V_i}{\partial \mu_{X_i}} = 4 b_i \sigma_i^2 (a_i + 2 b_i \mu_i),
    \frac{\partial V_i}{\partial \sigma_{X_i}} = 2 \sigma_i \left( a_i^2 + 4 a_i b_i \mu_i + 4 b_i^2 (\mu_i^2 + \sigma_i^2) \right).

The FOVC $V_i$ are independent of $\mu_j$ and $\sigma_j$ ($j \neq i$) in a model without cross terms.

Since the FOVC requires no additional computational cost while computing the variance-based sensitivity indices, this method can be used as a by-product of Sobol's method (or any variance-based method). In addition, the FOVC and the variance-based sensitivity indices may provide different information, and using the information of both methods leads to a better insight.

2.4.5 Distance between distributions

Chun [CHT00] examines the problem of characterizing the uncertainty in the output with respect to changes in the distribution of an input. In other words, the sensitivity indices are found by computing the distance between two different cumulative distributions. This method differs from other methods, where the output uncertainty is studied by sampling a particular input parameter with a fixed distribution.

In this method, the Euclidean distance between two output cumulative distributions is computed.
The output CDF when all the inputs have their nominal distributions is denoted $F_Y$. The output CDF when the distribution of input $i$ is changed is denoted $F^i_Y$. In this method, the distance is normalized by the mean of $Y$:

    SI_i = \frac{\left( \int [F^i_Y(y) - F_Y(y)]^2\, dy \right)^{1/2}}{E(Y)}.   (2.20)

This method can measure the effect of an input distribution on the output distribution (both the mean and the variance of the output). In addition, the concept of this method is simple and the indices can be calculated easily. However, the precise distribution of the output is required.

2.4.6 Moment independence measures

Borgonovo [Bor07] explains that the variance-based SA methods cannot provide complete information about the influence of the input parameters on the output, because the variance is just one moment of the distribution (the mean is another), and the variance-based methods also assume that the inputs are independent.

In addition, a monotonic transformation $T$ is used in some engineering applications to obtain convergence and stable estimations. However, using variance-based SA with a transformation can be misleading because the variance-based indices are not invariant under monotonic transformations: given any monotonic function $T$, the variance-based sensitivity of $T(Y)$ to any input depends on $T$ [BB13, BTPM14]. Consequently, sensitivity indices that are independent of any given monotonic transformation $T$ are required.

Sensitivity indices that are invariant under monotonic transformations (the sensitivity of $T(Y)$ to any input is independent of the monotonic function $T$) are proposed in [BB13, BTPM14]. These sensitivity indices are based on the distance between distributions [BB13, Bor07, BTPM14] and are defined as

    SI_i = E_i[\mathrm{distance}(P_Y, P_{Y|X_i})],   (2.21)

where $P$ denotes a probability distribution. The measure used in [Bor07] is based on the $L^1$-distance between densities:

    SI^{Bor}_i = \frac{1}{2} \int f_{X_i}(x_i) \left[ \int \left| f_Y(y) - f_{Y|X_i}(y) \right| dy \right] dx_i.   (2.22)

If $Y$ is independent of $X_i$, then $SI_i$ equals zero.
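A crude sample-based estimate of the index (2.22) can be obtained by slicing the sample on $X_i$ and comparing the conditional and unconditional output histograms. The sketch below assumes fixed-width histograms are an adequate density estimate (more refined estimators exist); on a toy model where $Y$ depends only on $x_1$, the index for $x_1$ is large and the index for $x_2$ is near zero:

```python
import numpy as np

def borgonovo_delta(xi, y, n_slices=20, n_bins=30):
    """Histogram estimate of the L1-based index (2.22): average over
    equal-probability slices of x_i of half the L1 distance between the
    conditional and the unconditional output densities."""
    lo, hi = y.min(), y.max()
    f_y, edges = np.histogram(y, bins=n_bins, range=(lo, hi), density=True)
    width = edges[1] - edges[0]
    order = np.argsort(xi)
    delta = 0.0
    for chunk in np.array_split(order, n_slices):
        f_c, _ = np.histogram(y[chunk], bins=n_bins, range=(lo, hi), density=True)
        delta += (len(chunk) / len(y)) * 0.5 * np.abs(f_y - f_c).sum() * width
    return delta

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    x = rng.random((20_000, 2))
    y = x[:, 0]                      # toy model: Y depends on x_1 only
    print(borgonovo_delta(x[:, 0], y), borgonovo_delta(x[:, 1], y))
```

Note the estimate for an independent input is biased upward by sampling noise in the histograms, so it approaches zero only as the sample grows.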
The sensitivity index of the subset including all inputs, $SI_{1,2,\dots,d}$, is 1. If the inputs are independent, then $SI_{1,2,\dots,d} = SI_1 + \dots + SI_d = 1$ [Bor07]. This sensitivity index is efficiently estimated in [PBS13]. It should be noted that Borgonovo's method and the variance-based methods may provide different importance rankings of the inputs because they are based on different mechanisms [Bor07].

The sensitivity measures in [BB13, BTPM14] are based on the cumulative distribution functions ($SI_i = E_i[\mathrm{distance}(F_Y, F_{Y|X_i})]$) and use the Kolmogorov–Smirnov (KS) metric and the Kuiper (Ku) metric:

    SI^{KS}_i = E\left[ \sup_y \left\{ \left| F_Y(y) - F_{Y|X_i}(y) \right| \right\} \right],

    SI^{Ku}_i = E\left[ \sup_y \left\{ F_Y(y) - F_{Y|X_i}(y) \right\} + \sup_y \left\{ F_{Y|X_i}(y) - F_Y(y) \right\} \right].   (2.23)

The indices $SI^{KS}_i$ and $SI^{Ku}_i$ are preferable to $SI^{Bor}_i$ because they are based on cumulative distribution functions, and cumulative distribution functions are defined for all distributions, even if the distribution of the output does not admit a probability density function (if the distribution of $X$ admits a probability density function, then $E[X] = \int_{-\infty}^{\infty} x f_X(x)\, dx$) [BB13]. Therefore, $SI^{Bor}_i$ cannot be used in cases where the output distribution does not admit a probability density function (for example, the Cantor distribution).

In addition, $SI^{KS}_i$ and $SI^{Ku}_i$ always converge, whereas $SI^{Bor}_i$ may not [BTPM14]. Baucells et al. also suggest using $SI^{KS}_i$ and $SI^{Ku}_i$, with a preference for $SI^{Ku}_i$ [BB13].

2.5 Surrogate-based methods for expensive function models

In some SA methods, computing the sensitivity indices requires many function evaluations, which is impractical for expensive model functions. Therefore, constructing an approximation model becomes necessary. The approximating model is known as a surrogate model, metamodel, emulator or response-surface model.
We now describe the Gaussian Process ($GP$), which not only builds a surrogate model but also accounts for the uncertainty in estimating its parameters.

Let $(x_s, y_s)$, $s = 1, 2, \dots, N$ be a sample set of $d$-dimensional inputs and outputs. Then the GP surrogate is defined as

    \hat{G}(x) = g(x)\beta + Z(x),   (2.24)

where $g(\cdot)$ is a $1 \times h$ vector of arbitrary regression functions of $x$ (the choice of $g(x)$ is arbitrary, but it should fit the samples), $\beta$ is an $h \times 1$ vector of parameters to be estimated, and the function $Z(x)$ is a Gaussian process with mean 0 and variance $\sigma^2$ [OO04]. The covariance between $Z(x')$ and $Z(x'')$ ($x'$ and $x''$ are two input vectors) is

    \mathrm{Cov}[Z(x'), Z(x'')] = \sigma^2 C(x', x''),   (2.25)

where $C(\cdot, \cdot)$ is a correlation function that decreases as $|x' - x''|$ increases and satisfies $C(x, x) = 1$ [OO04]. A popular choice for $C(\cdot, \cdot)$ is

    C(x', x'') = \prod_{j=1}^{d} \exp(-\theta_j |x'_j - x''_j|^{v_j}),   (2.26)

where $\theta_j$ and $0 < v_j < 2$ are estimated from the sample set [SW06]. Schonlau and Welch [SW06] use maximum likelihood to estimate $\beta$ in Equation (2.24), $\sigma^2$ in Equation (2.25), and $\theta_j$ and $v_j$ in Equation (2.26). Alternatively, Oakley and O'Hagan [OO04] adopt a Bayesian approach for finding the unknown parameters.

GP — variance

Schonlau et al. [SW06] compute the ANOVA decomposition of the GP surrogate, which can then be used to compute the Sobol indices. Alternatively, Oakley and O'Hagan [OO04] and Farah and Kottas [FK14] apply a Bayesian approach to the GP surrogate and account for the uncertainty in estimating the parameters when computing the Sobol indices.

GP advantages and disadvantages

The GP interpolates the observed data and is highly efficient for sensitivity analysis [OO04].
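A minimal GP interpolator in the spirit of Equations (2.24)–(2.26) can be sketched as follows. This is a stripped-down illustration, not a full implementation: the regression part $g(x)\beta$ is replaced by the constant sample mean, the correlation is squared-exponential (Equation (2.26) with $v_j = 2$), and $\theta$ is fixed rather than estimated by maximum likelihood:

```python
import numpy as np

def gp_predict(X, y, X_new, theta=1.0, nugget=1e-10):
    """Predict with a constant-mean GP: y_hat = ybar + k K^{-1} (y - ybar).

    Correlation C(x', x'') = exp(-theta * ||x' - x''||^2); the small nugget
    on the diagonal is only for numerical stability of the solve.
    """
    def corr(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
        return np.exp(-theta * d2)
    K = corr(X, X) + nugget * np.eye(len(X))
    alpha = np.linalg.solve(K, y - y.mean())
    return y.mean() + corr(X_new, X) @ alpha

if __name__ == "__main__":
    X = np.linspace(0.0, 3.0, 10).reshape(-1, 1)
    y = np.sin(X).ravel()
    print(gp_predict(X, y, np.array([[1.5]])))   # close to sin(1.5) ≈ 0.997
```

Because the kernel matrix is inverted against the residuals, the predictor reproduces the training data (up to the nugget), which is the interpolation property discussed above.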
Unlike other metamodelling methods, applying a Bayesian approach to the GP accounts for the uncertainty introduced by using the surrogate in place of the original function when estimating the Sobol indices.

The disadvantages of the GP are that various parameters must be estimated and that the correlation function $C$ is difficult to choose properly. Inappropriate values for these parameters or a bad choice of correlation function may lead to a poor surrogate [BO09].

2.6 Sensitivity indices for multivariate outputs

This section is devoted to the sensitivity analysis of models with multivariate output. The multivariate output can be static, e.g., a vector of different measurements, or dynamic (the same measurement observed over multiple time points, sometimes referred to as functional output). A static model is time-invariant while a dynamic model is time-dependent. In other words, the output of a dynamic model at time $t^*$ depends on the inputs at times $t < t^*$. The dynamic output is multivariate although it is a scalar at each specific time.

Most of the multivariate SA methods are devoted to dynamic models, while Gamboa et al. [GJKL13] generalize Sobol's method to static models with high-dimensional outputs.

2.6.1 Multivariate Sobol sensitivity analysis

Gamboa et al. [GJKL13] study the Sobol sensitivity indices for a static model that has a vector output. With a Monte Carlo estimator, the sensitivity index for a set of inputs ($X_R$) is computed as

    SI_R = \frac{\sum_{j=1}^{l} \left( \sum_{s=1}^{N} y_{j,s} y^*_{j,s} - \frac{1}{2N} \left( \sum_{s=1}^{N} (y_{j,s} + y^*_{j,s}) \right)^2 \right)}{\sum_{j=1}^{l} \left( \frac{1}{2} \sum_{s=1}^{N} \left( y_{j,s}^2 + (y^*_{j,s})^2 \right) - \frac{1}{2N} \left( \sum_{s=1}^{N} (y_{j,s} + y^*_{j,s}) \right)^2 \right)},

where $y = (Y_1, \dots, Y_l) = G(x)$, $y^* = (Y^*_1, \dots, Y^*_l) = G(x_R, x'_{-R})$ and $x'_{-R}$ is another independent sample of $x_{-R}$. We denote by $y_{j,s}$ the $s$th sample of $Y_j$ and by $y^*_{j,s}$ the $s$th sample of $Y^*_j$.

The advantage of this method is that the influence of each input on the outputs of a multivariate model is summarized by a single index.
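The aggregated index above can be estimated by applying the pick-freeze scheme to each output coordinate and summing the per-output numerators and denominators before dividing. A sketch (using mean-based rather than sum-based normalization, which is equivalent) on a hypothetical two-output model:

```python
import numpy as np

def aggregated_sobol(g, d, idx, n, rng):
    """Aggregated first-order index for a vector-valued model: the per-output
    pick-freeze covariances and variances are summed over the l outputs."""
    x = rng.random((n, d))
    x_star = rng.random((n, d))
    x_star[:, idx] = x[:, idx]               # freeze x_R, redraw x_{-R}
    Y, Y_star = g(x), g(x_star)              # arrays of shape (n, l)
    m = 0.5 * (Y + Y_star).mean(axis=0)
    num = ((Y * Y_star).mean(axis=0) - m**2).sum()
    den = (0.5 * (Y**2 + Y_star**2).mean(axis=0) - m**2).sum()
    return num / den

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    # Toy model with outputs (x1, x1 + x2): the aggregated index of x1 is
    # (1/12 + 1/12) / (1/12 + 2/12) = 2/3.
    g = lambda x: np.column_stack([x[:, 0], x[:, 0] + x[:, 1]])
    print(aggregated_sobol(g, 2, [0], 200_000, rng))
```

The single returned number summarizes the influence of $x_1$ across both outputs, as described above.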
This method may be expensive when the model output is high-dimensional.

2.6.2 Sequential sensitivity analysis

The sequential sensitivity analysis method was introduced as a multivariate global SA for dynamic models [PCS+96]. The output is stored in an $N \times T$ matrix: each column of this matrix shows the value of the output at a given time, and each row presents the output for a given input.

Using ANOVA (described in Subsection 2.4.1), the sensitivity of the output at each time (each column) is computed and the sensitivity indices are plotted.

The benefit of this method is that it shows when the influence of the input parameters decreases or increases. The interactions between input parameters can be computed and shown for any dynamic model with one or more outputs. This method is practical when the outputs at different times are not correlated.

2.6.3 PCA-based multivariate sensitivity analysis

Campbell et al. [CMW06, LML+09, LMM11] suggest applying PCA to the output to find uncorrelated principal components. The sensitivity index for each principal component is computed by ANOVA (described in Subsection 2.4.1).

The benefit of this method is that it can be used not only for time-dependent outputs but also for outputs indexed by any continuous variable, such as distance. Both the first-order and the total sensitivity indices can be studied. The weakness of this method is that different principal components lead to different importance rankings of the inputs; therefore, the sensitivity indices obtained from all the principal components should be presented graphically.

2.6.4 Variance-based multivariate sensitivity analysis

The output of a dynamic model at time $t$ is affected by the inputs at time $t$ ($x(t)$) and also by the inputs before time $t$ ($x(t')$, $t' < t$). Therefore, factor sensitivity indices and time sensitivity indices based on variance were proposed by Cao et al.
[CDD13]. The factor sensitivity index measures the whole influence of an input on the current output, while the time sensitivity index finds the contribution of an input's variance at a specific time to the current output variance (both indices are based on ANOVA).

The time sensitivity index, which studies the sensitivity of the current output $y(t_y)$ to the input $x_i$ at time $t^*$, is computed as

    \text{time } SI_i(t^*) = \frac{V[E[Y(t_y) | X_i(t^*)]]}{V[Y(t_y)]},

and the factor sensitivity index is obtained by

    \text{factor } SI_i = \frac{V[E[Y(t_y) | X_i]]}{V[Y(t_y)]},

where factor $SI_i$ studies the sensitivity of the current output to the input $x_i$ at all times prior to $t_y$.

2.7 Numerical examples and discussion

In this section, three studies are presented. In the first study (2.7.1), the Pearson, Spearman and Sobol methods (the latter being a popular nonlinear SA method) are applied to four functions to determine the differences between linear, monotonic and nonlinear sensitivity indices. In the second study (2.7.2), several SA methods are compared with each other, and in the third study (2.7.3) the sensitivity of an engineering model is analyzed.

2.7.1 Study 1

The Pearson, Spearman and Sobol methods (which are respectively linear, monotonic and nonlinear SA methods) are applied to the four following functions:

    F_1(x) = 0.70 x_1 + 0.05 x_2 + 0.25 x_3,
    F_2(x) = 0.70 x_1 + 0.05 x_2 + 0.25 x_3 + x_1 x_2 + x_1 x_3 + x_2 x_3 + x_1 x_2 x_3,
    F_3(x) = 0.3 x_1 + 0.3 x_2 + 0.3 x_3,
    F_4(x) = x_1^3 + x_2 + x_3.   (2.27)

Among these functions, the first and third functions ($F_1$ and $F_3$) are linear. The Pearson, Spearman and main Sobol sensitivity indices of these functions (2.27) are shown in Table 2.1.

Table 2.1: The Pearson, Spearman and main Sobol sensitivity indices of the four functions (2.27).

    Function | SI   | Pearson | Spearman | Sobol
    F1       | SI_1 | 0.94    | 0.94     | 0.88
    F1       | SI_2 | 0.07    | 0.06     | 0.00
    F1       | SI_3 | 0.34    | 0.32     | 0.11
    F2       | SI_1 | 0.68    | 0.71     | 0.46
    F2       | SI_2 | 0.45    | 0.44     | 0.20
    F2       | SI_3 | 0.52    | 0.52     | 0.27
    F3       | SI_1 | 0.58    | 0.57     | 0.33
    F3       | SI_2 | 0.58    | 0.57     | 0.33
    F3       | SI_3 | 0.58    | 0.57     | 0.33
    F4       | SI_1 | 0.91    | 0.94     | 0.98
    F4       | SI_2 | 0.10    | 0.17     | 0.01
    F4       | SI_3 | 0.10    | 0.17     | 0.01

As Table 2.1 shows, all the SA methods find the important input regardless of whether the function is linear or nonlinear. However, the linear SA methods also indicate whether the function can be approximated by a linear function once the unimportant inputs are ignored.

For example, the Pearson and Spearman indices indicate that functions $F_1$ and $F_4$ can be approximated by a linear function of the important input $x_1$ while ignoring the unimportant inputs $x_2$ and $x_3$. It should be noted that function $F_1$ is linear while function $F_4$ is nonlinear.

In addition, the linear SA methods show that the effects of the inputs of $F_2$ and $F_3$ on the function outputs are not dominated by a single linear contribution; therefore, these functions cannot be approximated by a linear function of one input, regardless of the fact that $F_2$ is nonlinear and $F_3$ is linear.

Consequently, these methods do not give information about the models themselves but about the effect of the inputs on the model output.

2.7.2 Study 2

In this second study, the complexity and the assumptions of the Pearson, Spearman, Borgonovo, McKay, Sobol, EFAST, Morris and Wei methods are compared by applying them to the Ishigami function; in the third study (2.7.3), the sensitivity of an engineering model is analyzed. The Ishigami function [IH90] is

    I(x) = \sin(x_1) + a \sin^2(x_2) + b x_3^4 \sin(x_1),   (2.28)

where $x_i \in [-\pi, \pi]$, $i = 1, 2, 3$, $a = 7$ and $b = 1/8$; see Figure 2.3.

The Morris (2.16), Sobol (2.3) and FAST (2.9) codes can be found in the SALib [Her15], SIMLAB [SH08], R package [GBA+15], SAinterface [Jac06] and GSAT [Can14] libraries.
The SIMLAB [SH08] library also includes code for the Tarantola (2.13), (2.14), Mckay (2.11) and Pearson (2.1) formulas.

The Pearson and Spearman indices are computed using the MATLAB function corr. The codes for Borgonovo's, Mckay's and Morris' methods are obtained from [Jac06], but the FAST code is obtained from GSAT [Can14] because the maximum number of inputs in GSAT is greater than that in SAinterface [Jac06]. The code for EFAST (Equation (2.10)) is obtained from SAinterface [Jac06]. For estimating the Sobol indices, Equation (2.6) is used. The Wei indices for the Ishigami function (2.28) are obtained from [WLS13].

Figure 2.3: The scatter plots of the Ishigami function (output versus x_1, x_2 and x_3) with N = 1000.

The number of model runs in Pearson's, Spearman's and Borgonovo's methods is dN (d is the dimension and N is the number of samples). The number of model runs in Sobol's and Morris' methods is (d + 1)N. Let w_max be the maximum frequency in the FAST method. The number of model runs in the FAST method is 20w_max + 1, and the number of model runs for finding the total sensitivity indices of the EFAST method is (20w_max + 1)d. The maximum number of samples considered in the Mckay method code is N_max = 100 and the number of model runs is (2d + 1)(N²/2).

The level p in Morris' method is set to 4 (it can be between 4 and 10). The number of samples N is not the same for all the methods, but the maximum number of model runs considered for all the methods is about 1000(d + 1). The results are reported as the average (ave) and the standard deviation (std) of the sensitivity indices obtained from 100 runs of the SA methods. The estimated sensitivity indices of the linear SA methods for the Ishigami function (2.28) are shown in Table 2.2.
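These run budgets are simple arithmetic and can be collected into small helpers (an illustrative sketch; the function names are mine). The value w_max = 15 used below is my own inference from the 301-run EFAST figure reported in Table 2.4, not a value stated in the text:

```python
def runs_correlation(d, n):        # Pearson, Spearman, Borgonovo: d*N
    return d * n

def runs_sobol_morris(d, n):       # Sobol and Morris: (d+1)*N
    return (d + 1) * n

def runs_fast(w_max):              # FAST: 20*w_max + 1
    return 20 * w_max + 1

def runs_efast_total(d, w_max):    # EFAST, all total indices: (20*w_max + 1)*d
    return (20 * w_max + 1) * d

def runs_mckay(d, n):              # Mckay: (2d+1)*(N^2/2)
    return (2 * d + 1) * (n * n // 2)

# Ishigami function: d = 3.
print(runs_sobol_morris(3, 1000), runs_fast(15), runs_efast_total(3, 15))
# → 4000 301 903
```

With N = 1000 this reproduces the 4,000-run Sobol and Morris budgets and the 301/903 EFAST budgets that appear in Table 2.4.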
The variable x_1 has a small linear effect on the output, and the methods in Table 2.2 do not provide any information about the influence of the inputs x_2 and x_3 on the output.

Table 2.2: The sensitivity indices of the linear and monotonic methods for the Ishigami function.

inputs/SI      | x_1  | x_2  | x_3  | Model runs
Spearman (ave) | 0.46 | 0    | 0    | 3,000
Spearman (std) | 0.02 | 0.03 | 0.03 |
Pearson (ave)  | 0.45 | 0    | 0    | 3,000
Pearson (std)  | 0.02 | 0.03 | 0.3  |

Table 2.3: The numerical values of the Mckay and Sobol indices obtained from symbolic integration in Maple for the Ishigami function.

inputs/SI        | x_1   | x_2   | x_3   | sum
Mckay (η_i)      | 0.341 | 0.354 | 0     | 0.695
Sobol (SI_i)     | 0.342 | 0.354 | 0     | 0.695
Sobol (SI_i^tot) | 0.645 | 0.354 | 0.304 | -

The sensitivity indices of the nonlinear methods for the Ishigami function are presented in Tables 2.3 and 2.4. Table 2.3 shows the numerical values of the Mckay and Sobol indices obtained from symbolic integration in Maple. Table 2.4 displays the estimated sensitivity indices of the nonlinear SA methods; in this table, "ave" and "std" denote the average and the standard deviation over 100 runs of the SA methods. All the methods in Table 2.4 except Morris' and Wei's show that x_2 is the most important input, but the Morris and Wei sensitivity indices indicate that x_1 is the most important input, although by a small difference.

The Sobol, EFAST and Wei total sensitivity indices show that x_1 has the highest interaction with the other inputs. In the following, the information provided by each method is explained.

The Borgonovo indices indicate the influence of the input distributions on the output distribution. The Mckay, Sobol and EFAST indices show the contribution of the inputs to the output variance. Table 2.4 shows that EFAST is more efficient than Sobol's and Mckay's methods because it uses the smallest number of model runs. The Morris indices show the average and the standard deviation of the output change when one input is varied.
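The Maple values of Table 2.3 can be reproduced from the closed-form ANOVA decomposition of the Ishigami function. For inputs uniform on [−π, π], the nonzero partial variances are V_1 = (1 + bπ⁴/5)²/2, V_2 = a²/8 and the x_1–x_3 interaction V_13 = 8b²π⁸/225 — a standard result for this test function, stated here on my own authority rather than taken from the thesis:

```python
import math

a, b = 7.0, 1.0 / 8.0                      # Ishigami parameters used in (2.28)
pi = math.pi

V1  = 0.5 * (1.0 + b * pi**4 / 5.0) ** 2   # variance carried by x1 alone
V2  = a**2 / 8.0                           # variance carried by x2 alone
V13 = 8.0 * b**2 * pi**8 / 225.0           # x1-x3 interaction variance
V   = V1 + V2 + V13                        # total output variance

S1, S2, S3 = V1 / V, V2 / V, 0.0           # first-order indices
S1_tot = (V1 + V13) / V                    # total index of x1
S3_tot = V13 / V                           # total index of x3

print(round(S1, 3), round(S2, 3), round(S1_tot, 3), round(S3_tot, 3))
# → 0.341 0.354 0.646 0.305
```

Up to rounding, these match the Mckay and Sobol rows of Table 2.3.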
The Morris indices rank the inputs as x_1 > x_2 > x_3.

Table 2.4: The average of the sensitivity indices of different nonlinear methods for the Ishigami function.

inputs/SI          | x_1  | x_2  | x_3  | sum  | Model runs
Borgonovo ave      | 0.28 | 0.30 | 0    | 0.58 | 3,000
Borgonovo std      | 0.02 | 0.02 | 0    | -    |
Mckay η_i ave      | 0.34 | 0.36 | 0    | 0.70 | 4,046
Mckay η_i std      | 0.04 | 0.04 | 0.02 | -    |
Sobol SI_i ave     | 0.34 | 0.35 | 0    | 0.69 | 4,000
Sobol SI_i std     | 0.03 | 0.03 | 0.05 | -    |
Sobol SI_i^tot ave | 0.64 | 0.36 | 0.30 | -    | 4,000
Sobol SI_i^tot std | 0.05 | 0.02 | 0.02 | -    |
EFAST SI_i ave     | 0.34 | 0.39 | 0    | 0.73 | 301
EFAST SI_i^tot ave | 0.64 | 0.37 | 0.30 | -    | 903
Morris μ*_i ave    | 9.24 | 7.87 | 7.82 | -    | 4,000
Morris μ*_i std    | 0.22 | 0    | 0.23 | -    |
Morris σ_i ave     | 0.24 | 0.24 | 0.34 | -    |
Morris σ_i std     | 0    | 0    | 0    | -    |
Wei SI_i ave       | 0.25 | 0.19 | 0.06 |      | 2,048
Wei SI_i std       | 0    | 0    | 0    |      |
Wei SI_i^tot ave   | 0.72 | 0.53 | 0.56 |      | 2,048
Wei SI_i^tot std   | 0    | 0    | 0    |      |

The Wei indices show that x_1 is the most important input because reducing its distribution strongly affects the output variability. The Wei total sensitivity indices confirm that, if the input distributions are reduced, the input x_1 has the most interactions with the others. It should be noted that the small number of model runs (2,048) in Wei's method is due to the use of a surrogate.

As each method gives specific information, it can be concluded that the inputs x_1 and x_2 have the most influence on the output and that x_1 has the most interactions with the other inputs (the total sensitivity of x_1 is higher than that of the other inputs). The results match Formula (2.28): the first-order sensitivity index of x_1 (resp. x_2) is responsible for around 34% (resp. 35%) of the variation, due to the term sin(x_1) (resp. a sin²(x_2)). There is a nonlinear interaction since the first-order indices sum to 69%; it is the b x_3^4 sin(x_1) term that appears as a 64% total variation index for x_1, so 30% is due to higher order interactions, while x_3 only exhibits a 30% higher order
interaction.

Figure 2.4: The road and ground profiles (height in m versus length in m; the three segments are marked).

2.7.3 Study 3

We consider a simple road design model [KL10, Mor96] which minimizes the construction cost. Figure 2.4 shows the ground and the road. The vertical dashed lines indicate the sections. The road is divided into 3 segments (pieces on which the road is a quadratic function), indicated by darker dashed vertical lines. The first, second and third segments are divided into 2, 2 and 4 sections, respectively.

The general parameters in this model are n the number of sections, n_g the number of sections in each segment, A_i the area of section i, h_i the average ground height of section i, x_i the start point of section i, and t_g the x-coordinate at the end of segment g.

The cost parameters are q_d the cost of embanking to a landfill (dump) pit, p_b the cost of excavating from a borrow pit, p the cost of excavating from each section, q the cost of embanking to each section, C_ij (or C_id) the cost of transporting one cubic unit from section i to j (or to the dump pit), and C_bi the cost of transporting one cubic unit from the borrow pit to section i.

The decision variables are u_i the height of earth removed from section i, v_i the height of earth added to section i, T_ij the number of units transported from section i to j (A_i T_ij is the transferred volume), and a_gk the coefficient of x^k in the quadratic model of the road on segment g.
The quadratic model for each segment is

\[ P_g(x) = a_{g0} + a_{g1} x + a_{g2} x^2, \quad x \in [t_{g-1}, t_g], \quad g = 1, 2, 3. \]

The objective of this linear programming model is to minimize the total cost of embankment, hauling and excavation, i.e.

\[ \min \text{Cost} = \sum_{i=1}^{n} (p A_i u_i + q A_i v_i) + \sum_{i=1}^{n} \sum_{j=1, j \neq i}^{n} C_{ij} A_i T_{ij} + \sum_{i=1}^{n} \left( (C_{id} + q_d) A_i T_{id} + (C_{bi} + p_b) A_i T_{bi} \right). \]

The following constraint ensures that the volume transported to or from a section equals the work done at that section,

\[ \sum_{j=1, j \neq i}^{n} A_i (T_{ij} - T_{ji}) = A_i (u_i - v_i), \quad i = 1, 2, \dots, n, \]

which can be simplified into

\[ \sum_{j=1, j \neq i}^{n} (T_{ij} - T_{ji}) = u_i - v_i, \quad i = 1, 2, \dots, n. \]

The smoothness and continuity constraints of the splines are

\[ P_g(t_g) = P_{g+1}(t_g), \qquad P'_g(t_g) = P'_{g+1}(t_g), \quad g = 1, 2. \]

The gap constraint is

\[ h_i (x_{g,i+1} - x_{g,i}) - \int_{x_{g,i}}^{x_{g,i+1}} P_g(x)\,dx = (u_i - v_i)(x_{g,i+1} - x_{g,i}), \quad i = 1, \dots, n_g, \; g = 1, 2, 3. \]

The slope constraints are

\[ \max_{x \in [t_{g-1}, t_g]} P'_g(x) \le U, \qquad \min_{x \in [t_{g-1}, t_g]} P'_g(x) \ge L, \quad g = 1, 2, 3, \]

where t_0 = x_1 = 0. The road heights at the initial and final points should be 32 m and 31 m, respectively. Sensitivity analysis is applied to this model to find the importance of the ground height and cost parameters. The sensitivity of the ground heights at the start and end points is not studied because changing them may make the problem infeasible, and these values are usually known with much greater accuracy. The range of the cost parameters is (0, 5) and the range of the height parameters is (20, 40) (these ranges were suggested by experts).

All the SA methods could be applied to this problem, but we chose the FAST method as a nonlinear variance based SA method. This problem is not high dimensional (7 height parameters and 5 cost parameters) and the number of inputs is smaller than the maximum number of inputs for the FAST method.
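The gap constraint simply equates a section's net cut-minus-fill height with the difference between the average ground height and the average road height over the section, and for a quadratic P_g the integral has a closed form. A small illustration with toy numbers of my own (not data from the thesis):

```python
def avg_road_height(a0, a1, a2, x_lo, x_hi):
    """Average of P(x) = a0 + a1*x + a2*x^2 over [x_lo, x_hi]."""
    antider = lambda x: a0 * x + a1 * x**2 / 2 + a2 * x**3 / 3
    return (antider(x_hi) - antider(x_lo)) / (x_hi - x_lo)

def cut_minus_fill(h_ground, coeffs, x_lo, x_hi):
    """Solve the gap constraint for u_i - v_i on one section."""
    return h_ground - avg_road_height(*coeffs, x_lo, x_hi)

# A flat road at 30 m under ground at 32 m needs a net 2 m cut:
print(cut_minus_fill(32.0, (30.0, 0.0, 0.0), 0.0, 100.0))  # → 2.0
```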
In addition, when a model is not high dimensional, FAST is the cheapest of the variance based SA methods.

Since the FAST and Sobol methods provide the same information, we also implemented Sobol's method to increase confidence in the results. Another verification method is to run Sobol's method several times and check that the standard deviation is small enough.

Table 2.5: The Sobol and FAST indices for Moreb's problem.

inputs/SI | Sobol SI ave | Sobol SI sd | Sobol SI^tot ave | Sobol SI^tot sd | FAST SI
p         | 0.07 | 0.03 | 0.07 | 0.00 | 0.06
q         | 0.06 | 0.03 | 0.07 | 0.00 | 0.06
p_b       | 0.00 | 0.03 | 0.00 | 0.03 | 0
q_d       | 0.00 | 0.03 | 0.00 | 0.02 | 0
C         | 0.26 | 0.02 | 0.33 | 0.01 | 0.25
h_2       | 0.11 | 0.03 | 0.20 | 0.01 | 0.11
h_3       | 0.03 | 0.03 | 0.13 | 0.01 | 0.03
h_4       | 0.00 | 0.03 | 0.05 | 0.01 | 0.01
h_5       | 0.01 | 0.03 | 0.09 | 0.01 | 0.01
h_6       | 0.02 | 0.03 | 0.10 | 0.01 | 0.02
h_7       | 0.02 | 0.03 | 0.11 | 0.01 | 0.03
h_8       | 0.12 | 0.03 | 0.23 | 0.01 | 0.13
sum       | 0.70 | -    | -    | -    | 0.71

In our tests, Equation (2.3) produced large negative indices, so we used Equation (2.6). To verify the results, we compared the Sobol indices with the FAST indices (Sobol's method uses 13,000 model runs and the FAST method uses 5,861 model runs). In addition, Sobol's method was run 20 times. The average and standard deviation of the Sobol indices, together with the FAST indices, are shown in Table 2.5. For expensive models, we suggest running Sobol's method (Formula (2.6)) once and comparing the result with the FAST method.

The FAST method is deterministic, so we ran it once. We could not use the EFAST method from the SAinterface [Jac06] library since it is limited to 10 inputs for finding the total sensitivity indices.

As Table 2.5 shows, the parameter C, the transportation cost, is the most important parameter, while p_b and q_d are unimportant and have no interaction with the other inputs.
However, we cannot simplify the model by removing the dump and borrow pits since they are used at the optimal solution.

2.8 Discussion

Figure 2.5 summarizes the SA methods for inexpensive models with a decision tree.

The SA methods have different assumptions and mechanisms, and a method appropriate for the specific goal should be used. For the factor prioritization, factor fixing and model structure goals, the screening or the variance based methods can be applied. The monotonic SA methods and plotting the ANOVA decomposition (plotting G(x) versus G_i(x_i), Equation (2.2)) can be used for the model structure goal. The Wei method can be used for the stability goal.

While the stability goal applies to linear programming (LP) problems [BP16], a parametric LP analysis (which is not studied in this paper) can also be used [BBM03, Fil11]. A Matlab-based toolbox for parametric LP analysis can be found in [MPT].

In addition to the sensitivity analysis objectives, the complexity of each SA method should be taken into account when choosing a method. The maximum numbers of inputs for FAST (in the GSAT library [Can14]) and EFAST (in the SAinterface library [Jac06]) are 50 and 10, respectively (in their implementations). However, using EFAST with 50 inputs is still expensive, and Saltelli et al. [SRTC05, SRTC12] suggest using the variance based methods when the number of model inputs is smaller than 20 or the model requires a small amount of CPU time (up to 1 min per run). When the model is low dimensional, the number of function evaluations of EFAST is smaller than that of the Sobol method. In this case, we suggest using the EFAST method instead of the Sobol method.

Saltelli et al. [SRTC05, SRTC12] suggest using the screening method
Figure 2.5: Summary of SA methods for inexpensive models. The decision tree groups the linear correlation based method (Pearson), the monotonic correlation-ranking based method (Spearman), and the nonlinear methods: variance based (Sobol first-order variance coefficient, Wei, FAST, EFAST), moment independence measures (Borgonovo, Chun's distance), factor screening (Morris, radial screening measure), derivative-based (DGSM, Wang) and variance-mean based (Tarantola).

based on radial sampling (Subsection 2.4.3) when each run of the model takes up to 10 min of CPU time or the number of inputs is up to 100.

Increasing the number of samples increases the accuracy of the sensitivity indices. In this paper, the number of samples for Sobol's method is set to 1,000. If 1,000 samples are too expensive for a model (because each run takes more than 10 min or the number of inputs is more than 100), we suggest using the surrogate methods described in Section 2.5 for this expensive model. Among the surrogate methods, we suggest the Gaussian process because it accounts for uncertainty.

In the case of multivariate output models, which are specific problems, different methods are discussed in Section 2.6. If the model is static, the Gamboa et al. [GJKL13] method is suggested. The sequential SA or the PCA based SA is recommended to graphically present the sensitivity indices of a dynamic model. Cao et al. [CDD13] introduce two different indices (time and factor sensitivity indices) for dynamic models. The time index finds the sensitivity of the output at time t_y (y(t_y)) to an input at a specific time t^*, and the factor index finds the sensitivity of y(t_y) to an input at all times prior to t_y.

Sobol's and FAST methods are applied to a road design model. The importance of the cost and height parameters is studied for specific input ranges. The most important parameter was found to be the transportation cost.
In the future, a more thorough study of the importance of all parameters of this model is needed to compute a range for each input on which the optimal solution is meaningful.

Chapter 3

Section positioning in the road design vertical alignment problem

The road design problem is usually split into horizontal alignment, vertical alignment, and earthwork. After the horizontal alignment is fixed, the vertical alignment problem considers a discretization of the ground into sections. The number of sections is directly linked to the computation time: a large number of sections increases the time needed to find the optimal vertical alignment. We study different methods to approximate the ground with non-uniformly spaced sections. The presented methods are compared with a commercial road design software.

3.1 The road design problem

A country's economic development is highly correlated with the quality of its road network. A reliable road network connects people, markets, services and knowledge. In other words, the road network has a huge impact on economic growth. However, road construction and maintenance are expensive activities. Therefore, the road design model should be optimized by minimizing the construction cost while considering safety constraints and environmental and socioeconomic impacts [HHLR14, HAMM13].

In the road design problem, the ground profile is discretized into segments and sections as shown in Figure 3.1. In this figure, the ith section is denoted S_i and segment j is denoted Seg_j. Considering numerous sections when optimizing the road design model is too time consuming to be practical. Therefore, we present different methods to select a reasonable number of sections and to position them in a way that still provides enough accuracy on the resulting road alignment.

The inputs to these methods consist of the number of sections (N), their
Figure 3.1: The vertical alignment of the ground profile (sections S_1 to S_15 grouped into segments Seg_1 to Seg_3, with the ground and road profiles).

distances from the starting point (x = (x_0, x_1, ..., x_N)) and their heights (h = (h_0, h_1, ..., h_N)). The outputs are denoted (N', x', h') where N' < N.

This paper is organized as follows. Section 3.2 presents the model. Section 3.3 is devoted to explaining the methods used for section positioning and for reducing the number of sections. These methods are implemented on different grounds and compared with each other in Section 3.4. Finally, conclusions are provided in Section 3.5.

3.2 The road design model

For a good accuracy, the ground profile and the construction cost should be estimated well. The cost depends on the material volume and the ground profile. A linear programming road design model with multi-material and side slope features is presented in this section. This model consists of several parameters, variables and constraints, which are introduced as they appear.

In this model, the ground profile is discretized into N sections and each section is represented by its height h_i and its start point x_i. Sections are grouped into segments and the number of sections in each segment is denoted n_g [HHLR14, KL10, Mor96]. For example, the numbers of sections n_g in segments 1, 2 and 3 of Figure 3.1 are 4, 8 and 3, respectively. The x-coordinate at the end of segment g is denoted t_g and the quadratic road model for each segment is defined as

\[ P_g(x_i) = a_{g0} + a_{g1} x_i + a_{g2} x_i^2, \quad x_i \in [t_{g-1}, t_g], \quad i = 1, \dots, n_g, \quad g \in G, \]

where G is the index set of segments. A quadratic function is used to avoid sharp connections and to keep a linear programming problem.
If cubic or higher order splines are used, the constraints cannot be enforced with linear programming.

The road should be smooth and continuous, which means the height and the slope at the ending section of a segment should equal the height and slope at the beginning section of the next segment. The smoothness and continuity constraints of the splines are

\[ P_g(t_g) = P_{g+1}(t_g), \qquad P'_g(t_g) = P'_{g+1}(t_g), \quad g = 1, \dots, |G| - 1. \tag{3.1} \]

The slope constraints, which are imposed for safety, are

\[ \max_{x \in [t_{g-1}, t_g]} P'_g(x) \le U, \qquad \min_{x \in [t_{g-1}, t_g]} P'_g(x) \ge L, \quad g \in G. \tag{3.2} \]

During road construction, not only may some material need to be moved between sections, but material may also be brought from Borrow pits to sections or dumped from sections into Waste pits. Thus, the Borrow and Waste pits are treated as external sections in the road design problem.

Let the variables V_i^+ and V_i^- represent the total cut and fill material volumes of section i, respectively. The cost parameters are

q_w: the cost of embanking to the Waste pit,
p_b: the cost of excavating from the Borrow pit,
p: the cost of excavating from each section,
q: the cost of embanking to each section,
C_ij: the cost of transporting one cubic unit from section i to j,
C_iw: the cost of transporting one cubic unit from section i to the Waste pit, and
C_bi: the cost of transporting one cubic unit from the Borrow pit to section i.

The objective of this linear programming model is to minimize the total cost of embankment, hauling and excavation,

\[ \min \text{Cost} = \sum_{i=1}^{n} (p V_i^+ + q V_i^-) + \sum_{i=1}^{n} \sum_{j=1, j \neq i}^{n} C_{ij} T_{ij} + \sum_{i=1}^{n} \left( (C_{iw} + q_w) T_{iw} + (C_{bi} + p_b) T_{bi} \right), \tag{3.3} \]

where the variables T_ij, T_bi and T_iw represent the volume transferred from section i to j, from the Borrow pit to section i, and from section i to the Waste pit, respectively.

The material volume moved from section i to other sections or to the Waste pit should equal the volume cut from section i.
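The smoothness constraints (3.1) leave only one degree of freedom for each subsequent segment: once its quadratic coefficient is chosen, matching the height and slope at the junction fixes the other two coefficients. A small sketch with coefficients of my own choosing (not values from the thesis):

```python
def p(coeffs, x):
    a0, a1, a2 = coeffs
    return a0 + a1 * x + a2 * x * x

def dp(coeffs, x):
    _, a1, a2 = coeffs
    return a1 + 2 * a2 * x

t = 100.0                         # junction between segments 1 and 2
seg1 = (30.0, 0.02, -1e-4)        # P_1 on [0, t]
a2 = 5e-5                         # free choice for segment 2
a1 = dp(seg1, t) - 2 * a2 * t     # forces P_2'(t) = P_1'(t)
a0 = p(seg1, t) - a1 * t - a2 * t * t   # forces P_2(t) = P_1(t)
seg2 = (a0, a1, a2)

assert abs(p(seg1, t) - p(seg2, t)) < 1e-9    # continuity holds
assert abs(dp(seg1, t) - dp(seg2, t)) < 1e-9  # smoothness holds
```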
Similarly, the material volume brought from other sections or from the Borrow pit to section i should equal the volume filled at section i. These constraints, called the balance constraints, are defined as

\[ \sum_{j=1, j \neq i}^{N} T_{ij} + T_{iw} = V_i^+, \qquad \sum_{j=1, j \neq i}^{N} T_{ji} + T_{bi} = V_i^-, \quad i \in N. \tag{3.4} \]

In order to make the road durable, side-slopes are used. A side-slope is a gradual increase (decrease) in height from a fill (cut) section of the ground profile to the height of the road profile [HHLR14]. Figure 3.2 shows the cross-section of a road with and without side-slopes for a cut and a fill, respectively.

To model the side-slopes, trapezoid shaped cross-sections are approximated with several rectangles. Each rectangle is represented by an index (level); the altitude of the lth rectangle at section i is the Offset L_{i,l}, and the variable u_i is the difference between the ground and road heights at section i:

\[ P(x_i) - h_i = u_i, \quad i \in N. \tag{3.5} \]

An example of a side-slope for a cut area is shown in Table 3.1 and Figure 3.3. Assume u_i = 7 m; then l = 4 and the cut volume can be computed easily by

\[ R^+_{i,4} + \frac{R^+_{i,5} - R^+_{i,4}}{L_{i,5} - L_{i,4}} (L_{i,5} - u_i) = 215.25. \]

Figure 3.2: The cross-section of a road with and without side-slopes.
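The level lookup and linear interpolation of this worked example can be sketched as follows, using the offset and volume data of Table 3.1 (an illustrative sketch of my own, not the thesis implementation; it reproduces the 215.25 value):

```python
def cut_volume(u, offsets, volumes):
    """Interpolate the cut volume at depth u from per-level (offset, volume)
    data, following the computation R_l + (R_{l+1}-R_l)/(L_{l+1}-L_l)*(L_{l+1}-u)
    used in the worked example."""
    # find level l (0-indexed) with L_l <= u < L_{l+1}
    l = max(k for k in range(len(offsets) - 1) if offsets[k] <= u)
    L0, L1 = offsets[l], offsets[l + 1]
    R0, R1 = volumes[l], volumes[l + 1]
    return R0 + (R1 - R0) / (L1 - L0) * (L1 - u)

# Table 3.1 data for one cut section:
L = [0, 2, 4, 6, 8]
R = [0, 0.08, 70.06, 168.44, 262.06]
print(round(cut_volume(7.0, L, R), 2))   # → 215.25
```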
Figure 3.3: Approximation of side-slopes for a cut area (levels L_{i,1} to L_{i,5} and the depth u_i between the ground and road profiles).

In other words, the gap constraints are

\[
\begin{aligned}
V_i^+ &= R^+_{i,l} + \frac{R^+_{i,l+1} - R^+_{i,l}}{L_{i,l+1} - L_{i,l}} (L_{i,l+1} - u_i), \quad i \in N, \; l \in \{l' \mid L_{i,l'} \le u_i < L_{i,l'+1}\},\\
V_i^- &= R^-_{i,l} + \frac{R^-_{i,l+1} - R^-_{i,l}}{L_{i,l+1} - L_{i,l}} (L_{i,l+1} - u_i), \quad i \in N, \; l \in \{l' \mid L_{i,l'} \le u_i < L_{i,l'+1}\},
\end{aligned}
\tag{3.6}
\]

where R^+_{i,l} and R^-_{i,l} are the cut and fill volumes at section i and level l, respectively.

While the shrink and swell factors and compaction are material dependent, the model uses banked units to compute the transportation costs, and thus avoids the complexity of tracking material dependent factors without any loss in approximation [Koc10, Remark 1.6].

Table 3.1: Approximation of cut volumes using side-slopes for a cut area.

Level l | Offset L_{i,l} | Cut volume R^+_{i,l}
1       | 0              | 0
2       | 2              | 0.08
3       | 4              | 70.06
4       | 6              | 168.44
5       | 8              | 262.06

The optimization time for this model depends on the number of sections (the cost model is an aggregate model that does not go into the details of construction duration). Considering a lot of sections approximates the ground profile well but increases the optimization time. Keeping the approximation accuracy while reducing the optimization time can be achieved with the section positioning methods presented in the following section.

3.3 Section positioning methods

The methods used for section positioning are the heuristic method used in the RoadEng software, the regression tree [JWH14], and the piecewise constant approximation method [KK88]. These methods are explained in the following subsections.

3.3.1 Method 1: The heuristic method

In this method, N_1 uniform sections are picked from the N uniform sections and the optimal cost is computed. If the test |Cost_{N_1} - Cost_{2N_1}| / Cost_{N_1} < ε is not satisfied, then N_1 is doubled; this is repeated until the test passes, and 2N_1 is returned as the number of sections.
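The doubling loop of Method 1 can be sketched in a few lines. Here `cost_of` stands for solving the vertical alignment model with a given number of uniform sections; the toy oracle below is made up purely for illustration:

```python
def heuristic_sections(cost_of, n_max, n1, eps=0.05):
    """Double the number of uniform sections until the relative cost
    change between consecutive iterations drops below eps."""
    c1 = cost_of(n1)
    while 2 * n1 <= n_max:
        c2 = cost_of(2 * n1)
        if abs(c1 - c2) / c1 < eps:
            return 2 * n1
        n1, c1 = 2 * n1, c2
    return n1

# Toy cost oracle: the cost converges to 100 as sections are added.
cost_of = lambda n: 100.0 * (1.0 + 10.0 / n)
print(heuristic_sections(cost_of, 2368, 37))   # → 296
```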
Three iterations of this method are presented in Figure 3.4.

Figure 3.4: Three iterations of the heuristic method.

3.3.2 Method 2: The regression tree

In this method, x is the feature of the sections and h is the response. First, all sections are placed in a single leaf. Then h̄_leaf and S, given in Equation (3.7), are calculated, and two leaves are added by examining all binary splits of all variable values to find the value that leads to the largest decrease in S. This procedure continues until a stopping criterion is reached (S_new < δ S_old, where δ is small and defined by the user) [JWH14]. The pseudo-code of this method is presented in Algorithm 1.

\[ \bar{h}_{leaf} = \frac{1}{n_{leaf}} \sum_{i \in leaf} h_i, \qquad S = \sum_{leaf \in leaves(Tree)} \sum_{i \in leaf} (h_i - \bar{h}_{leaf})^2. \tag{3.7} \]

Algorithm 1: The regression tree
1: Consider all sections in a single leaf.
2: Calculate S (3.7).
3: Search over all binary splits. Take a split which reduces S as much as possible and add two new leaves.
4: If the largest decrease in S is less than δS, go to Step 5. Otherwise, go to Step 2.
5: Return the number of sections N' (N' ≤ N).

Three iterations of the regression tree are presented in Figure 3.5. As this figure shows, two sections (leaves) are added by each binary split.

Figure 3.5: Three iterations of the regression tree.

3.3.3 Method 3: Piecewise constant approximation

Suppose the ground profile f(x) is a piecewise constant function on an interval. The piecewise constant approximation approximates f(x) by the function g*(x) that has the minimum number of constant pieces n(g*) (the number of sections). In other words, the piecewise constant approximation solves

\[ \min \; n(g) \quad \text{s.t.} \quad \|g - f\|_\infty \le \epsilon', \tag{3.8} \]

where ‖g‖_∞ is the infinity norm of g [KK88].
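Algorithm 1 can be sketched directly (an illustrative greedy implementation of my own; for sections already sorted by x, splitting on the section index is equivalent to splitting on the x values):

```python
def regression_tree_leaves(h, delta):
    """Greedily split [0, len(h)) into leaves until the best split
    reduces the total within-leaf sum of squares S by less than delta*S."""
    def sse(seg):                       # within-leaf sum of squared errors
        m = sum(seg) / len(seg)
        return sum((v - m) ** 2 for v in seg)

    leaves = [(0, len(h))]
    while True:
        S = sum(sse(h[a:b]) for a, b in leaves)
        best = None                     # (decrease in S, leaf index, cut point)
        for idx, (a, b) in enumerate(leaves):
            for cut in range(a + 1, b):
                gain = sse(h[a:b]) - sse(h[a:cut]) - sse(h[cut:b])
                if best is None or gain > best[0]:
                    best = (gain, idx, cut)
        if best is None or S == 0 or best[0] < delta * S:
            return leaves
        _, idx, cut = best
        a, b = leaves.pop(idx)
        leaves[idx:idx] = [(a, cut), (cut, b)]

h = [30, 30, 30, 35, 35, 35, 32, 32]    # toy ground heights
print(len(regression_tree_leaves(h, 0.01)))   # → 3
```

On the toy profile the three constant-height runs are recovered as three leaves (sections).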
For each section, the following variables are computed:

\[ u_i = f_i + \epsilon', \qquad l_i = f_i - \epsilon', \tag{3.9} \]

and the algorithm connects section j to section i if

\[
\begin{aligned}
(i)\;& \min\{u_t \mid i < t \le j\} \ge \max\{l_t \mid i < t \le j\},\\
(ii)\;& u_i \ge \max\{l_t \mid i < t \le j\},\\
(iii)\;& l_i \le \min\{u_t \mid i < t \le j\}.
\end{aligned}
\tag{3.10}
\]

This method has linear complexity and is optimal for finding the number of constant pieces under the constraint in Equation (3.8) [KK88]. Three iterations of this method are shown in Figure 3.6.

In this method, the best value of the input ε′ is unknown; it can also be obtained by

\[ \min \|g - f\|_2 \quad \text{s.t.} \quad n(g) \le k, \tag{3.11} \]

where k is the minimum value of n(g) obtained from Equation (3.8). The complexity of problem (3.11) is O(kn²) [KK88], but Fulop et al. improved it to O(n² + kn log n) [FP92].

In the road design model, the number of sections that provides a trade-off between optimization time and approximation accuracy is unknown. Therefore, Equation (3.11) cannot be used to obtain ε′, since k in this equation, which is the appropriate number of sections, is unknown. Consequently, the value of ε′ for the section positioning problem needs to be determined by numerical tests.

The values of δS and ε′ are not related in this research, as there are different metrics to relate these values and we do not know which one is the best. More details about these values are provided in Subsection 3.4.1.

Figure 3.6: The piecewise constant approximation (each piece lies in a band of width 2ε′).

3.4 Results and Discussion

The three methods presented in this thesis are implemented on four different grounds. The number of sections N, the number of levels n_l, the optimal cost and the time for building a road on these grounds are shown in Table 3.2 and Figure 3.7. In Table 3.2, N is the number of sections, n_l is the number of levels used to approximate the cut and fill volumes, and T_Solve is the optimization time (sec).
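For consecutive sections, condition (i) of (3.10) says that a run of sections can share one constant piece exactly while the band [max_t l_t, min_t u_t] stays nonempty, i.e. while the height range of the run is at most 2ε′. The greedy left-to-right sketch below is my own minimal illustration of that merging rule, not the [KK88] implementation:

```python
def piecewise_constant_pieces(f, eps):
    """Greedily merge consecutive sections whose heights fit in a 2*eps band;
    returns the (start, end) index ranges of the constant pieces."""
    pieces, start = [], 0
    lo, hi = f[0] - eps, f[0] + eps          # current feasible band
    for j in range(1, len(f)):
        lo, hi = max(lo, f[j] - eps), min(hi, f[j] + eps)
        if lo > hi:                          # band empty: close the piece
            pieces.append((start, j))
            start = j
            lo, hi = f[j] - eps, f[j] + eps
    pieces.append((start, len(f)))
    return pieces

heights = [30.0, 30.1, 29.9, 35.0, 35.2]
print(piecewise_constant_pieces(heights, 0.2))   # → [(0, 3), (3, 5)]
```

With ε′ = 0.2 the five toy sections collapse into two constant pieces, i.e. two sections.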
It should be noted that these grounds are small parts of real grounds and are used as case studies in the present thesis.

Road construction on these grounds was optimized using a C++ code in Visual Studio 2012 that called CPLEX, on a Windows 7 64-bit computer with 4 hyper-threaded cores at 3.2 GHz.

In order to reduce the number of sections, similar and close sections are merged and represented as one section. The height of the new section is the height of the first of the merged sections. The average or median of the merged section heights could also be used as the height of the new section; since the sectioning methods merge consecutive sections with similar heights, all these metrics lead to similar results.

Table 3.2: Four different grounds.

Ground | N     | n_l | Cost ($)  | T_Solve (sec)
A      | 2,368 | 22  | 493,306   | 27
B      | 941   | 42  | 163,953   | 12
C      | 2,309 | 62  | 3,780,000 | 26
D      | 948   | 42  | 232,582   | 64

In order to approximate the side-slopes of such a new section, Algorithm 2 is used; an example of merging three sections with four levels is shown in Figure 3.8. According to this figure, the properties of the new section obtained by merging three sections are as described in the following paragraphs.

Algorithm 2: The algorithm for approximating the side-slopes of merged sections
1: Find the minimum (h'_{j,min}) and the maximum (h'_{j,max}) (j ∈ N') of h_i + L_{i,l}, i ∈ the merged sections.
2: Divide |h'_{j,max} - h'_{j,min}| into n_l equal Offsets and obtain all L'_{j,l}.
3: The material volume associated with the Offset L'_{j,l} is the sum of the volumes of the rectangles (slabs) that satisfy the constraints h'_j + L'_{j,l'-1} < h_i + L_{i,l} and h'_j + L'_{j,l'} ≥ h_i + L_{i,l} (j ∈ N', i ∈ the merged sections and l, l' ≥ 1).

The height of the new section is equal to the height of the first section (h' = h_1), and the number of levels in the new section and in the merged sections are equal.
In addition, the maximum height of the new section's slabs (h'_max) is equal to the maximum height of the merged sections' slabs (h_1 + L_{1,4}). Similarly, the minimum height of the new section's slabs (h'_min = h' + L'_0) is equal to the minimum height of the merged sections' slabs (h_3 + L_{3,0}).

Figure 3.7: Four ground profiles (height h in m versus x in m for Grounds A, B, C and D).

In Figure 3.8, the volumes in the slabs of the merged sections appear in the slabs of the new section according to the third step of Algorithm 2. By this algorithm, the volume of each slab of the new section is the sum of the volumes of the slabs of the merged sections that satisfy h'_j + L'_{j,l'-1} < h_i + L_{i,l} ≤ h'_j + L'_{j,l'} (l, l' ≥ 1). For example, the bottom slab of the new section contains V_{3,1} but not V_{2,1} and V_{1,1}, because h' + L'_0 < h_3 + L_{3,1} ≤ h' + L'_1 while h_2 + L_{2,1} ≰ h' + L'_1 and h_1 + L_{1,1} ≰ h' + L'_1. Consequently, V_{2,1} and V_{1,1} are in the second slab of the new section because h' + L'_1 < h_1 + L_{1,1} ≤ h' + L'_2 and h' + L'_1 < h_2 + L_{2,1} ≤ h' + L'_2.

3.4.1 Comparing the three methods

As δ in the regression tree and ε′ in the piecewise constant approximation method are unknown, 20 values are considered for each of them (10^{-9} ≤ δ ≤ 10^{-1} and 10^{-2} ≤ ε′ ≤ 10^{0.5}). For the heuristic method, six iterations are considered and, in each iteration, N_1 sections are picked with N_1 ≈ N/2^{7-iteration} (7 is the number of iterations + 1). We considered a smaller number of iterations for the heuristic method than for the other methods because, with 20 iterations, the number of iterations + 1 = 21 and N_1 ≈ N/2^{21-iteration} returns a very small number of sections in the first
Figure 3.8: An example for Algorithm 2, which approximates the side-slopes of merged sections (three sections with heights h_1, h_2, h_3 and slab volumes V_{i,l} are merged into one new section of height h' = h_1).

iterations.

As each δ in the regression tree, each ε′ in the approximation method and each iteration of the heuristic method returns a specific number of sections, comparing these methods over many different numbers of sections is hard. Therefore, two stopping criteria are considered to return a specific number of sections for each method (stopping criterion 1 is obtained from the heuristic method explained in Subsection 3.3.1).

Stopping criterion 1: the relative cost error between two iterations of a method is ≤ 0.05 (the results that satisfy this criterion are shown in cyan).

Stopping criterion 2: the relative cost error between the original ground and each iteration of a method is ≤ 0.05; in addition, the root mean squared error (RMSE) between the heights of the approximated ground and the original ground is ≤ 0.25 m (the results that satisfy this criterion are shown in green).

The following tables present the performance of the methods on the grounds shown in Table 3.2. In these tables, the number of sections is shown by Section. MaxE and RMSE are the maximum absolute error and the root mean squared error between the heights of the original and approximated grounds. The relative cost error between two consecutive iterations is denoted RelCost_{i,i+1}. The relative and absolute errors between the cost of an iteration and the original ground cost are denoted RelCostE and AbsCostE, respectively.
The implementation time of the methods (in Matlab R2013b) is denoted by T_method, and T_Solve is the solving time (optimizing the model with the C++ code). The cost and the height of the original ground are denoted by Cost and h, respectively. The height and the cost of the approximated ground at iteration i are denoted by h′ and Cost′_i. The number of sections of the approximated ground is smaller than the number of sections of the original ground; therefore, h′ has fewer elements than h.

In order to compute MaxE and RMSE in (3.12), h and h′ must have the same number of elements. Assume x_{j′} (j′ ∈ N′, where N′ is the number of sections of the approximated ground) is the number of sections of the original ground that were merged to make the j′th section of the approximated ground. Each section of the approximated ground is then replicated x_{j′} times so that h and h′ have the same length. Consequently, MaxE, RMSE, RelCost_{i,i+1}, RelCostE and AbsCostE are defined as

MaxE = max |h − h′|,
RMSE = √( (1/N) Σ_{j∈N} (h_j − h′_j)² ),
RelCost_{i,i+1} = |Cost′_{i+1} − Cost′_i| / Cost′_i,
RelCostE = |Cost − Cost′_i| / Cost,
AbsCostE = |Cost − Cost′_i|.        (3.12)

For stopping criterion 1, Total_Time is the sum of T_method and T_Solve from the first iteration until the iteration that satisfies this criterion. According to stopping criterion 1, the method that returns the smallest Section, Total_Time and RelCost_{i+1,i+2} is the winner. We also consider RelCost_{i+1,i+2}, since the approximation is good enough only if both RelCost_{i,i+1} and RelCost_{i+1,i+2} are small.

For stopping criterion 2, Total_Time is the sum of T_method and T_Solve from the iteration that has RMSE ≤ 0.25 until the iteration that satisfies this criterion. Based on stopping criterion 2, the method that returns the smallest Section and Total_Time is the winner.

To compare the performances of the regression tree and the piecewise constant approximation method, two studies are done.
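The replication step and the error measures of (3.12) are straightforward to express in code. The sketch below is an illustrative Python version (the thesis computed these in Matlab; all names are mine):

```python
import math

def expand(h_approx, counts):
    # replicate the j'-th approximated height x_{j'} times so that
    # h and h' end up with the same number of elements
    return [hj for hj, x in zip(h_approx, counts) for _ in range(x)]

def max_e(h, hp):                     # MaxE = max |h - h'|
    return max(abs(a - b) for a, b in zip(h, hp))

def rmse(h, hp):                      # RMSE = sqrt((1/N) sum (h_j - h'_j)^2)
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(h, hp)) / len(h))

def rel_cost(cost_i, cost_next):      # RelCost_{i,i+1}
    return abs(cost_next - cost_i) / cost_i

def rel_cost_e(cost, cost_i):         # RelCostE (AbsCostE is the numerator)
    return abs(cost - cost_i) / cost

h  = [1.0, 2.0, 2.0, 4.0]                   # original heights, N = 4
hp = expand([1.0, 2.0, 4.5], [1, 2, 1])     # 3 approximated sections -> 4
print(max_e(h, hp), rmse(h, hp))            # 0.5 0.25
```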
In the first study, the methods are implemented as explained in Subsections 3.3.2 and 3.3.3; in other words, the ground elevation is used as the criterion to merge sections. In the second study, the material volume is used instead of the ground elevation as the merging criterion.

It should be noted that the heuristic method requires neither ground elevations nor material volumes; it simply picks uniform sections. Therefore, the heuristic results are the same in both studies.

Study 1: Comparison based on ground elevation

Ground elevation is used as the criterion for merging sections, and the performances of the heuristic method, the regression tree and the piecewise constant approximation method on Ground A (2,368 sections; road construction cost $493,306) are presented in Tables 3.3, 3.4 and 3.5, respectively. These tables are summarized according to the two stopping criteria in Table 3.6.

Table 3.3: The performance of the heuristic method on Ground A.

Iteration  Section  MaxE  RMSE  RelCost_{i,i+1}  RelCostE  AbsCostE  T_method  T_Solve
1          37       3.46  1.11  0.20             0.30      149830    0.0       0.5
2          74       1.92  0.57  0.02             0.05      24238     0.5       1.0
3          148      1.16  0.28  0.01             0.02      11315     1.6       0.8
4          296      0.94  0.14  0.01             0.01      7249      2.4       1.2
5          592      0.82  0.06  0.01             0.01      3762      3.5       2.8
6          1184     0.68  0.03  0.00             0.00      1000      6.3       9.2

In Table 3.4, the relative cost error between δ = 0.0008 and δ = 0.0003 is small (RelCost_{i,i+1} = 0.05), but RelCost_{i,i+1} between δ = 0.0003 and δ = 0.0001 is not small (RelCost_{i,i+1} = 0.12). Therefore, satisfying stopping criterion 1 does not necessarily yield a good approximation, since RelCost_{i,i+1} may exceed 0.05 in later iterations. In addition, the absolute cost error for this delta (0.0003) is huge.
Table 3.4: The performance of the regression tree on Ground A.

Delta  Section  MaxE   RMSE   RelCost_{i,i+1}  RelCostE  AbsCostE  T_method  T_Solve
1E-01  3        29.65  12.87  0.39             2.76      1362194   0.0       0.1
4E-02  4        18.34  9.34   0.16             4.23      2086554   0.0       0.1
1E-02  6        16.03  6.67   0.51             5.08      2507834   0.0       0.3
5E-03  8        9.77   4.68   0.08             2.00      987854    0.0       0.4
2E-03  10       9.11   4.02   0.34             1.76      867594    0.0       0.4
8E-04  16       5.03   2.36   0.05             0.82      405703    0.0       0.6
3E-04  18       4.80   2.13   0.12             0.74      363373    0.0       0.5
1E-04  30       3.70   1.28   0.15             0.53      260887    0.1       0.6
4E-05  35       2.54   1.08   0.09             0.30      149966    0.0       0.6
2E-05  57       2.09   0.68   0.06             0.19      94012     0.1       0.7
6E-06  71       1.33   0.54   0.01             0.12      59974     0.1       0.5
2E-06  110      1.03   0.35   0.05             0.11      52876     0.0       0.6
9E-07  138      0.66   0.27   0.00             0.05      26039     0.0       1.2
3E-07  208      0.62   0.18   0.03             0.06      28095     0.1       1.0
1E-07  270      0.33   0.13   0.00             0.03      13114     0.1       1.6
5E-08  397      0.25   0.09   0.02             0.03      15291     0.06      1.82
2E-08  535      0.17   0.06   0.00             0.01      7309      0.1       3.3
7E-09  718      0.11   0.04   0.01             0.02      8102      0.1       3.8
3E-09  979      0.07   0.03   0.01             0.03      12706     0.1       5.2
1E-09  1288     0.04   0.02   0.02             0.02      7729      1.4       9.4

Based on stopping criterion 1 in Table 3.6, the regression tree returns the smallest number of sections. However, its RelCost_{i+1,i+2}, RelCostE and RMSE are the worst. Consequently, there is no winner according to stopping criterion 1 for Ground A. Based on stopping criterion 2 in Table 3.6, the numbers of sections of all methods are close to each other, but the Total_Time of the piecewise approximation method is the largest. Therefore, the heuristic method and the regression tree are the winners for Ground A.

The results for Grounds B, C and D are presented in Tables 3.7, 3.8 and 3.9, respectively.

Based on stopping criterion 1, the heuristic method has the smallest RelCost_{i+1,i+2} and Total_Time for Ground B but returns the largest number of sections. For Grounds C and D, the regression tree returns the smallest number of sections and Total_Time but the largest RelCost_{i+1,i+2}.
Therefore, stopping criterion 1 is not good enough to find the best method for the section positioning problem.

Table 3.5: The performance of the piecewise constant approximation method on Ground A.

Epsilon  Section  MaxE  RMSE  RelCost_{i,i+1}  RelCostE  AbsCostE  T_method  T_Solve
3.1623   21       2.08  0.90  0.12             0.64      313255    1.7       1
2.3357   29       1.55  0.65  0.04             0.43      214204    1.7       0
1.7252   39       1.23  0.49  0.10             0.37      183334    1.5       1
1.2743   52       0.86  0.36  0.08             0.23      113097    1.4       0
0.9412   72       0.82  0.27  0.04             0.13      66533     1.2       1
0.6952   95       0.43  0.20  0.01             0.09      46664     1.2       1
0.5135   128      0.34  0.15  0.04             0.10      50860     1.2       1
0.3793   172      0.24  0.11  0.01             0.06      29256     1.1       1
0.2801   230      0.19  0.08  0.02             0.05      25590     1.0       2
0.2069   303      0.15  0.06  0.00             0.03      17189     1.0       2
0.1528   403      0.11  0.05  0.01             0.03      15973     1.0       2
0.1129   533      0.08  0.04  0.00             0.03      12537     1.0       3
0.0834   689      0.05  0.03  0.01             0.02      11165     1.0       4
0.0616   867      0.04  0.02  0.01             0.02      7673      1.0       6
0.0455   1101     0.03  0.02  0.01             0.01      3403      1.0       7
0.0336   1452     0.02  0.01  0.01             0.02      8222      1.0       12
0.0248   1786     0.01  0.01  0.00             0.01      4396      1.0       14
0.0183   2036     0.01  0.00  0.02             0.01      6637      1.0       19
0.0135   2237     0.01  0.00  0.00             0.00      975       1.0       23
0.0100   2284     0.01  0.00  0.00             0.00      610       1.0       24

Table 3.6: Comparing methods on Ground A according to two stopping criteria.

Criterion  Method                   Section  RMSE  RelCost_{i+1,i+2}  RelCostE  Total_Time
1          Heuristic                148      0.28  0.01               0.02      2.4
1          Regression tree          18       2.13  0.12               0.74      2.6
1          Piecewise approximation  39       0.49  0.10               0.37      6.5
2          Heuristic                296      0.14  0.01               0.01      3.5
2          Regression tree          270      0.13  0.02               0.03      2.8
2          Piecewise approximation  230      0.08  0.00               0.05      9.0
Table 3.7: Comparing methods on Ground B according to two stopping criteria.

Criterion  Method                   Section  RMSE  RelCost_{i+1,i+2}  RelCostE  Total_Time
1          Heuristic                118      0.54  0.01               0.01      3.0
1          Regression tree          9        2.89  0.01               0.25      1.1
1          Piecewise approximation  33       0.62  0.04               0.10      3.5
2          Heuristic                471      0.11  0.00               0.00      7.9
2          Regression tree          185      0.23  0.00               0.01      1.8
2          Piecewise approximation  125      0.20  0.00               0.00      1.9

Table 3.8: Comparing methods on Ground C according to two stopping criteria.

Criterion  Method                   Section  RMSE  RelCost_{i+1,i+2}  RelCostE  Total_Time
1          Heuristic                145      1.13  0.04               1.29      10
1          Regression tree          70       1.52  0.18               1.30      15
1          Piecewise approximation  83       0.65  0.02               2.25      20
2          Heuristic                -        -     -                  -         -
2          Regression tree          -        -     -                  -         -
2          Piecewise approximation  -        -     -                  -         -

Table 3.9: Comparing methods on Ground D according to two stopping criteria.

Criterion  Method                   Section  RMSE  RelCost_{i+1,i+2}  RelCostE  Total_Time
1          Heuristic                119      0.43  0.02               0.70      15
1          Regression tree          4        4.44  0.40               0.62      2
1          Piecewise approximation  32       0.45  0.16               0.72      76
2          Heuristic                -        -     -                  -         -
2          Regression tree          -        -     -                  -         -
2          Piecewise approximation  -        -     -                  -         -

Table 3.7 confirms that stopping criterion 1 is not good enough to evaluate the methods, because the regression tree and the piecewise approximation method return small numbers of sections while their ground approximations are poor (large RMSE and RelCostE). As Table 3.7 shows, based on stopping criterion 2 the regression tree and the piecewise approximation method are the winners for Ground B.

Tables 3.8 and 3.9 again highlight that stopping criterion 1 is not good enough to evaluate the methods. In addition, no method can satisfy stopping criterion 2, so those rows are blank.
Therefore, there is no winner based on stopping criterion 2 in these tables.

As the cost approximation error should be small and the cost depends on the material volume, the material volumes are used instead of the ground elevation in the following study.

Study 2: Material volumes instead of ground elevation

In this study, the effect of using material volumes instead of ground elevation in the regression tree and the piecewise constant approximation method is studied. The methods are compared, and the summarized results based on the two stopping criteria are presented in the following tables.

Table 3.10: Comparing methods on Ground A based on the material volumes and according to two stopping criteria.

Criterion  Method                   Section  RMSE  RelCost_{i+1,i+2}  RelCostE  Total_Time
1          Heuristic                148      0.28  0.01               0.02      2.4
1          Regression tree          213      0.22  0.00               0.00      3.3
1          Piecewise approximation  577      0.25  0.00               0.00      28.3
2          Heuristic                296      0.14  0.01               0.01      3.5
2          Regression tree          213      0.22  0.01               0.00      0.7
2          Piecewise approximation  577      0.25  0.01               0.00      2.2
Table 3.11: Comparing methods on Ground B based on the material volumes and according to two stopping criteria.

Criterion  Method                   Section  RMSE  RelCost_{i+1,i+2}  RelCostE  Total_Time
1          Heuristic                118      0.54  0.01               0.01      3.0
1          Regression tree          90       1.02  0.02               0.07      2.9
1          Piecewise approximation  624      1.05  0.00               0.16      8.9
2          Heuristic                471      0.11  0.00               0.00      7.9
2          Regression tree          -        -     -                  -         -
2          Piecewise approximation  733      0.18  0.00               0.01      8.3

Table 3.12: Comparing methods on Ground C based on the material volumes and according to two stopping criteria.

Criterion  Method                   Section  RMSE  RelCost_{i+1,i+2}  RelCostE  Total_Time
1          Heuristic                145      1.13  0.04               1.29      10
1          Regression tree          389      0.38  0.23               1.67      12
1          Piecewise approximation  2090     0.04  0.21               1.90      80
2          Heuristic                -        -     -                  -         -
2          Regression tree          -        -     -                  -         -
2          Piecewise approximation  -        -     -                  -         -

Table 3.13: Comparing methods on Ground D based on the material volumes and according to two stopping criteria.

Criterion  Method                   Section  RMSE  RelCost_{i+1,i+2}  RelCostE  Total_Time
1          Heuristic                119      0.43  0.02               0.70      15
1          Regression tree          96       1.17  0.39               0.67      10
1          Piecewise approximation  382      0.14  0.02               0.72      399
2          Heuristic                -        -     -                  -         -
2          Regression tree          -        -     -                  -         -
2          Piecewise approximation  -        -     -                  -         -

In this study, stopping criterion 2 is not successful in evaluating the methods, and the heuristic method is mostly the winner based on stopping criterion 1.

3.5 Summary of the section positioning methods

This chapter presents the heuristic method, the regression tree and the piecewise constant approximation method for section positioning in the road design model. Two stopping criteria are considered for these methods, and the methods' performances are compared in two studies.

In the first study, the ground elevation is used in the regression tree and the piecewise constant approximation method; in the second study, the material volumes are used for these methods.
The heuristic method results are the same in both studies, as this method just picks uniform sections regardless of ground elevation or volumes.

The results show that no single method is the best for all grounds. Therefore, all methods should be run on a small part of a real ground, and the method with the best performance should then be applied to the whole ground.

Chapter 4

Conclusion

This thesis is based on two research projects. In the first project (Chapter 2), an extensive survey of sensitivity analysis methods is carried out, which helps practitioners decide on the most appropriate SA method. In addition, the most appropriate methods are applied to a road design model. In the second project (Chapter 3), three methods are compared to obtain a trade-off between reducing the number of sections and accuracy.

The contributions of this thesis are:

1. Sensitivity analysis (Chapter 2): categorizing the most efficient sensitivity analysis methods, comparing the sensitivity analysis methods within each category, helping practitioners decide on the most appropriate sensitivity analysis method, and analyzing the sensitivity of a road design model.

2. Section positioning (Chapter 3): a proof of concept with great potential for reducing the number of variables considered and, subsequently, the computation time. In this thesis, up to a 90% reduction in the number of sections was observed while maintaining acceptable accuracy, which translates to 88% savings in computation time. This proof of concept is supported by four use cases that show tremendous potential.

The limitations of this thesis are:

1. Only the construction cost was considered in the road design model, while in practice environmental, user, and other costs should be taken into account.

2. Only the vertical alignment was considered, but SA should also be applied to the horizontal alignment; similarly, section reduction was applied to the vertical alignment but may also give huge savings on the horizontal alignment.
3. Even with only building costs considered, several approximation assumptions were made, e.g., no details on which machines were used, wages, construction duration, etc., in order to keep the model tractable.

Future work based on Chapter 2 could consider obtaining safe ranges for the road design model inputs, such as the cost parameters. The true cost parameters cannot be known until after construction, and even then they may be difficult to estimate. Therefore, ranges for the unit costs should be found such that the construction cost is not sensitive to variations within these ranges.

Suppose UC is the actual unit cost and UC1 is the uncertain unit cost. For a given set of standards and constraints (assumed constant), the function Fa returns an alignment for a given set of unit costs. For a given alignment, the total earthwork construction cost C can be estimated by the cost function Fc:

A = Fa(UC),
C = Fc(A, UC),        (4.1)

where A and C are the actual alignment and construction cost, respectively. As the designers do not have the actual unit costs, they use the uncertain unit costs UC1 and find an alignment A1 = Fa(UC1). The cost of constructing this alignment is C1T, which can be calculated by estimating the actual unit costs and plugging them into the cost function: C1T = Fc(A1, UC).

The designers expect C1T and C to be close to each other; in other words, they expect (C1T − C)/C to be small. Therefore, designing a cost model that helps to find an insensitive construction cost is required.

Future work based on Chapter 3 is comparing the section positioning methods on a large number of grounds. In addition, the grounds should be categorized into flat, hilly, etc. For each category of grounds, specific values of the method parameters (δ and ε′) could be defined.
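As a toy illustration of this future-work idea, the gap (C1T − C)/C can be computed mechanically once Fa and Fc are given. The stand-ins below (a one-number "alignment" and a linear cost) are hypothetical and only demonstrate the mechanics, not the thesis cost model:

```python
def relative_cost_gap(fa, fc, uc_actual, uc_uncertain):
    # A = Fa(UC), C = Fc(A, UC): design and cost with the actual unit cost
    c_actual = fc(fa(uc_actual), uc_actual)
    # A1 = Fa(UC1), C1T = Fc(A1, UC): design with the uncertain unit cost,
    # then evaluate that alignment at the actual unit cost
    c1t = fc(fa(uc_uncertain), uc_actual)
    return (c1t - c_actual) / c_actual

# Hypothetical stand-ins: the "alignment" is a single earthwork volume and
# the cost is linear in that volume.
fa = lambda uc: 1000.0 / uc          # cheaper earthwork -> move more earth
fc = lambda volume, uc: volume * uc  # cost = volume x unit cost
print(relative_cost_gap(fa, fc, uc_actual=10.0, uc_uncertain=8.0))  # 0.25
```

A "safe" unit-cost range would be one over which this gap stays below a prescribed tolerance.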

