Probabilistic assessment of rock slope stability using response surfaces determined from finite element models of geometric realizations

by

Seyedeh Elham Shamekhi

B.Sc., University of Tehran, 2006
M.Sc., University of Tehran, 2009

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY in The College of Graduate Studies (Applied Science)

THE UNIVERSITY OF BRITISH COLUMBIA (Okanagan)

July 2014

© Seyedeh Elham Shamekhi, 2014

Abstract

A new methodology for probabilistic rock slope stability assessment was developed. This methodology enables the analyst to estimate the probability of failure by incorporating the variability of geometric parameters, such as joint orientation and trace length, into the stability analysis. This improves the reliability of any future design, remedial action, or risk assessment. Although incorporating the variability of material strength parameters into numerical models is common in geotechnical engineering, a similar analysis is very challenging when geometric parameters such as dip, dip direction, and trace length are treated non-deterministically. The challenge lies in the mesh generation required for each numerical model as the geometric parameters change, which results in high computational effort. For practical stability assessment, a representative yet computationally efficient number of realizations, or numerical models, is required. Therefore, commonly used sampling techniques such as the Monte-Carlo method, which generate a large number of slope realizations, cannot be used. The new methodology uses the Point Estimate Method to replace each probabilistic variable with its two point estimates. As the number of probabilistic input parameters increases, the number of point estimates, and accordingly the number of realizations to be modelled, increases exponentially.
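The exponential growth described above can be illustrated with a minimal sketch. Each probabilistic variable is reduced to two point estimates, commonly the mean plus and minus one standard deviation, and a full Point Estimate Method run would then model every low/high combination, i.e. 2^n realizations for n variables. The parameter names and values below are purely illustrative and are not taken from the thesis:

```python
import itertools

def two_point_estimates(mean, std):
    """Low/high point estimates for a symmetric variable (mean ± one std)."""
    return (mean - std, mean + std)

# Hypothetical geometric parameters as (mean, standard deviation);
# the names and numbers are illustrative only.
params = {
    "dip_set1": (35.0, 3.0),
    "dip_dir_set1": (120.0, 5.0),
    "trace_length_set1": (8.0, 1.5),
}

estimates = {name: two_point_estimates(m, s) for name, (m, s) in params.items()}

# A full Point Estimate Method run models every low/high combination:
realizations = list(itertools.product(*estimates.values()))
print(len(realizations))  # 2**3 = 8; with n variables this grows as 2**n
```

With six probabilistic variables this would already require 64 finite element models, each needing its own mesh, which is why a design-of-experiments framework is used to prune the combinations.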
To mitigate this problem, the methodology uses a ‘design of experiments’ framework to minimize the number of representative realizations needed. While the Phase2 finite element software is used in this thesis, the methodology is flexible and can be used to increase the efficiency of other numerical tools that require re-meshing to accommodate changes in geometric parameters. The developed methodology was evaluated for two different slope configurations. Analysis of variance was implemented to identify the significant parameters affecting the factor of safety. To estimate the probability of failure, central composite design was used to generate more realizations of the significant parameters and to fit response surfaces to the factor of safety values. A method is presented to select the most accurate response surface for estimating the probability of failure. This response surface can be used to predict the stability behaviour of any arbitrary geometric realization of the slope without the need for further numerical modelling. The sensitivity of the methodology to the selection of the initial point estimates was investigated, and the methodology was shown to be unbiased.

Preface

This research was conducted under the supervision of Dr. Dwayne D. Tannant. Lab experiments were performed in the Civil Engineering Lab of the University of British Columbia Okanagan. The equipment and tools for the field investigations and data collection were provided by Dr. Tannant. During this PhD study, four papers were published in the proceedings of several international conferences. Parts of the third and fourth papers (see below) were presented in Chapters 1 and 2 of this thesis.

Zheng, W., Tannant, D.D., & Shamekhi, E. 2014. Effect of joint shear strength variability on stepped-planar rock slope failure mechanisms. Proc. International Discrete Fracture Network Engineering Conference, Vancouver, Canada.

Shamekhi, E. & Tannant, D.D. 2011.
Can geohazards be assessed with Google Earth Street View images and photogrammetry software? Proc. 5th Canadian Conf. on Geotechnique and Natural Hazards, Kelowna.

Shamekhi, E. & Tannant, D.D. 2011. Highwall and fault monitoring with photogrammetry at Elkview Mine. Proc. Int. Symp. on Rock Slope Stability in Open Pit Mining and Civil Engineering, Vancouver.

Shamekhi, E. & Tannant, D.D. 2010. Risk assessment of a road cut above Highway #1 near Chase, B.C. Proc. 63rd Canadian Geotechnical Conference, Calgary, 8 p.

Two journal papers prepared from Chapters 4 and 5 are in the editing process and are anticipated to be submitted to the International Journal of Rock Mechanics and Mining Sciences and the Canadian Geotechnical Journal.

Table of Contents

Abstract
Preface
List of Tables
List of Figures
List of Symbols
Acknowledgments
Dedication
Chapter 1: Introduction
1.1 Problem Statement
1.1.1 Conventional slope stability analysis
1.1.2 Uncertainty in slope stability analysis
1.1.3 Integrated statistical and numerical method for rock slope stability assessment
1.2 Research Objectives
1.3 Contribution and Originality
Chapter 2: Background and Literature Review
2.1 Fundamental Concepts in Geotechnics
2.2 Acquiring Field Data
2.2.1 Scanline and window mapping
2.2.2 Borehole sampling
2.2.3 Remote sensing techniques (digital photogrammetry)
2.3 Modelling Techniques in Slope Stability Analysis
2.3.1 Conventional methods
2.3.2 Numerical methods
2.4 Methods to Transfer Field Data into Numerical Models
2.4.1 Deterministic approach
2.4.2 Probabilistic approach
2.4.3 Discrete Fracture Network (DFN) and Fracture System Modelling (FSM)
Chapter 3: New Methodology to Integrate Statistical and Numerical Techniques in Rock Slope Stability Assessment
3.1 Overview
3.2 Case Studies
3.3 Data Collection
3.3.1 Digital photogrammetry
3.4 Generating Efficient Number of Realizations
3.4.1 Point Estimate Method
3.4.2 Factorial design and the Analysis of the Variance
3.5 Generating Prediction Model
3.5.1 Central Composite Design (CCD)
3.5.2 Response Surface Method (RSM)
3.5.3 Probability of failure
3.6 Finite Element Simulation of Slope Realizations
3.6.1 General description
3.6.2 Phase2 convergence criteria
3.6.3 Shear Strength Reduction (SSR)
3.6.4 Phase2 output
Chapter 4: Probabilistic Stability Assessment for a Synthetic Configuration
4.1 Problem description
4.2 Generating Synthetic Geometric Parameters
4.3 Generating Realizations Using PEM and Factorial Design
4.4 Constructing Numerical Realizations in Phase2
4.4.1 Non-geometric input parameters
4.4.2 Discretization and mesh sensitivity
4.4.3 Output parameters
4.5 Sensitivity Analysis
4.5.1 Sensitivity analysis for critical SRF
4.5.2 Sensitivity analysis for the total displacement
4.5.3 Sensitivity analysis for the normalized yielded elements
4.6 Generating Prediction Models
4.6.1 Central Composite Design (CCD)
4.6.2 Rotatable Central Composite Design (R-CCD)
4.6.3 Face-centre CCD (F-CCD)
4.6.4 Response surfaces of the prediction models
4.7 Evaluating the Prediction Capability of the Fitted Models
4.7.1 Fitted models in the rotatable design
4.7.2 Fitted models in the face-centre design (F-CCD)
4.8 Curse of Dimensionality
4.8.1 New point estimates
4.8.2 Rotatable CCD based on the new point estimates
4.8.3 Analyzing the prediction capability of the fitted models
4.9 Probability of Failure
4.9.1 Selecting the candidate fitted model
4.9.2 Statistical analysis of SRF
4.10 Summary of the Results
Chapter 5: Little Tunnel Case History
5.1 Problem description
5.2 Field Investigations and Laboratory Tests
5.2.1 Aerial Photogrammetry of the Ridge
5.2.2 Ground-based Photogrammetry of the North and South Portals
5.2.3 Laboratory Measurements
5.3 Statistical Analysis on the Acquired Data
5.3.1 Geometric parameters
5.3.2 Strength parameters
5.4 Generating Efficient Number of Realizations
5.5 Constructing Numerical Models of the Realizations in Phase2
5.5.1 Initial boundary conditions
5.5.2 Discretization and mesh sensitivity
5.5.3 Output results
5.6 ANOVA and Sensitivity Analysis
5.7 Generating Prediction Models
5.7.1 Rotatable Central Composite Design (CCD)
5.7.2 Response surfaces of the fitted (prediction) models
5.8 Evaluating the Prediction Capability of the Prediction Models
5.8.1 Variance of prediction
5.8.2 PRESS and R²prediction
5.8.3 Mean Sum of Squares of Errors (MSE)
5.9 Probability of Failure
5.10 Summary of the Results
Chapter 6: Conclusions and Recommendations
6.1 Conclusions
6.2 Recommendations
References

List of Tables

Table 2.1. Definitions of geometric and distributional parameters of discontinuities
Table 3.1. Basic design table for a 2³ factorial design (Montgomery, 2001)
Table 3.2. Analysis of Variance for a full factorial design (Montgomery, 2001)
Table 3.3. Axial points in a CCD
Table 3.4.
CCD matrix designs: rotatable and face-centre with three factors
Table 4.1. Geometric parameters in the current and Hammah et al. (2009) studies
Table 4.2. Statistical moments for the geometric parameters
Table 4.3. Point estimate values for discontinuity properties
Table 4.4. Initial half-factorial design with coded variables
Table 4.5. Factors and aliases, 6 variables, resolution VI, F = ABCDE
Table 4.6. Properties used in the Phase2 models
Table 4.7. Half-factorial design table with calculated responses
Table 4.8. Effect estimate summary (response: critical SRF)
Table 4.9. ANOVA for the critical SRF
Table 4.10. Residuals and normal probability
Table 4.11. ANOVA on the transformed SRF values (Box-Cox transformation)
Table 4.12. Effect estimate summary (response: total displacement)
Table 4.13. Effect estimate summary (response: normalized yielded elements)
Table 4.14. Coded variables of five point estimates, rotatable design
Table 4.15. Natural variables of five point estimates, rotatable design
Table 4.16. R-CCD design with estimated responses
Table 4.17.
ANOVA results for R-CCD, quadratic and linear, R² = 0.94
Table 4.18. ANOVA results for R-CCD, quadratic, R² = 0.4
Table 4.19. ANOVA results for R-CCD, quadratic, R² = 0.6
Table 4.20. Regression coefficients for three prediction models, R-CCD
Table 4.21. F-CCD design with estimated responses
Table 4.22. ANOVA results for face-centre CCD, quadratic and linear, R² = 0.91
Table 4.23. ANOVA results for face-centre CCD, quadratic, R² = 0.33
Table 4.24. ANOVA results for face-centre CCD, linear, R² = 0.7
Table 4.25. Regression coefficients for three prediction models, F-CCD
Table 4.26. Prediction errors for the fitted models in the rotatable design
Table 4.27. PRESS and R²prediction for the fitted models in R-CCD
Table 4.28. Simulated and predicted SRF values, selected with Monte-Carlo
Table 4.29. Prediction errors for the fitted models in the face-centre design
Table 4.30. PRESS and R²prediction for the fitted models in F-CCD
Table 4.31. Simulated and predicted SRF values, selected with Monte-Carlo
Table 4.32. New point estimates for the significant parameters D1, DD1 and T1
Table 4.33. CCD design based on the new point estimates
Table 4.34. ANOVA results for the new CCD, Q/L, R² = 0.99
Table 4.35. ANOVA results for the new CCD, Q, R² = 0.3
Table 4.36. ANOVA results for the new CCD, L, R² = 0.8
Table 4.37. Regression coefficients for three prediction models, new CCD
Table 4.38. Prediction errors for the fitted models in the new design
Table 4.39. PRESS and R²prediction for the fitted models in new CCD
Table 4.40. Simulated and predicted SRF values, selected with Monte-Carlo
Table 4.41. Comparison of the prediction capability of nine fitted models
Table 5.1. Point load test results
Table 5.2. Summary of the rock mass properties
Table 5.3. Summary of the statistical analysis on geometric parameters
Table 5.4. Statistical moments and point estimates of the probabilistic variables
Table 5.5. Generated realizations and constructed numerical models for screening
Table 5.6. Numerical simulation results, full factorial design
Table 5.7. Effect estimate summary
Table 5.8. ANOVA for the critical SRF
Table 5.9. Coded variables of the five point estimates, rotatable design
Table 5.10. Natural variables of the five point estimates, rotatable design
Table 5.11.
Rotatable Central Composite Design
Table 5.12. ANOVA results for rotatable CCD, quadratic and linear, R² = 0.99
Table 5.13. ANOVA results for rotatable CCD, quadratic, R² = 0.24
Table 5.14. ANOVA results for rotatable CCD, linear, R² = 0.82
Table 5.15. Regression coefficients for three prediction models, rotatable CCD
Table 5.16. Prediction errors for the fitted models in the rotatable design (Q/L, Q, L)
Table 5.17. PRESS and R²prediction for the fitted models in rotatable CCD
Table 5.18. Simulated and predicted SRF values, selected with Monte-Carlo

List of Figures

Figure 1.1. Typical steps in a conventional slope stability analysis
Figure 1.2. Sources of uncertainty in a rock slope stability analysis
Figure 2.1. Different forms of discontinuities in a highwall, Elkview coal mine
Figure 2.2. Scanline and borehole sampling
Figure 2.3. Camera positions for stereo photogrammetry
Figure 2.4. Digital Terrain Model; a) triangulation, b) texture, c) digital mapping
Figure 2.5. Modelling methods in rock engineering
Figure 2.6. Discontinuity sets projected on a stereonet, Elkview mine
Figure 3.1. Steps in a rock slope stability analysis using the new methodology
Figure 3.2.
Algorithm developed for the new methodology
Figure 3.3. Geometry and two discontinuity sets in the synthetic slope
Figure 3.4. Aerial image of the ridge and the Little Tunnel (Penticton museum)
Figure 3.5. Design points in a two-variable CCD (reproduced from Statistica glossary, 2013)
Figure 3.6. Algorithm of SSR
Figure 4.1. Distributions fitted to the geometric parameters of set 1
Figure 4.2. Distributions fitted to the geometric parameters of set 2
Figure 4.3. Point estimates of the orientation parameters on a stereonet
Figure 4.4. Mesh dependency analysis for five realizations
Figure 4.5. Discretization of the FE model for the mean realization
Figure 4.6. Total displacement contours for three different realizations of the slope
Figure 4.7. Contours of percentage yielded elements for four realizations of the slope
Figure 4.8. Yielded elements filtered based on type for a portion of realization #1
Figure 4.9. Normal probability plot of SRF residuals
Figure 4.10. Normal probability plot of SRF residuals after transformation
Figure 4.11. Residuals versus realization running sequence
Figure 4.12. Residuals versus fitted values
Figure 4.13.
Total displacement versus SRF
Figure 4.14. Normalized yielded elements versus SRF values
Figure 4.15. Contours of total displacement for two realizations in the R-CCD
Figure 4.16. Contours of total displacement for realization #4, face-centre CCD
Figure 4.17. Response surfaces for the Q/L model, R-CCD
Figure 4.18. Response surfaces for the quadratic model, R-CCD
Figure 4.19. Response surfaces for the linear model, R-CCD
Figure 4.20. Response surfaces for the quadratic/linear model, F-CCD
Figure 4.21. Response surfaces for the quadratic model, F-CCD
Figure 4.22. Response surfaces for the linear model, F-CCD
Figure 4.23. Contours of the variance of prediction for the R-CCD
Figure 4.24. Simulated vs. predicted SRF for the 30 random realizations, R-CCD
Figure 4.25. Contours of the variance of prediction for the F-CCD
Figure 4.26. Simulated vs. predicted SRF for 30 random realizations, F-CCD
Figure 4.27. New point estimates of Set 1 orientation on stereonet
Figure 4.28. Contours of total displacement for realization #2, old and new CCD
Figure 4.29. Response surfaces for three fitted models in the new CCD
Figure 4.30. Variance of prediction contours for the new CCD
Figure 4.31.
Simulated vs. predicted SRF values, new CCD
Figure 4.32. Log-normal distribution fitted to SRF values, Q/L initial CCD
Figure 4.33. Johnson distribution fitted to SRF values, Q/L new CCD
Figure 4.34. CDF of the lognormal and Johnson distributions
Figure 5.1. Little Tunnel (modified after Regional District Okanagan-Similkameen, 2014)
Figure 5.2. Aerial photo of the ridge (Penticton Museum & Archives)
Figure 5.3. North portal of Little Tunnel
Figure 5.4. Location of the camera stations and generated points of the ridge
Figure 5.5. Digital Terrain Model of the ridge showing a) TIN and b) texture
Figure 5.6. Location of the camera stations and generated points of the north portal
Figure 5.7. Digital Terrain Model of the north portal; a) TIN, b) texture
Figure 5.8. Digital mapping of the geological structures, north portal
Figure 5.9. Potential zone for rock mass bridging and its relation with persistence
Figure 5.10. Stereographic projection of discontinuities mapped on DTM
Figure 5.11. Best-fit Weibull distribution to dip measurements of joint set #1
Figure 5.12. Normal distribution representing persistence values for Set #1
Figure 5.13. Normal distribution representing friction angle for Set #1
Figure 5.14. Normal distribution representing friction angle for bridging rock
Figure 5.15.
Geometry of the mean realization in Phase2
Figure 5.16. Mesh sensitivity analysis for Little Tunnel configuration
Figure 5.17. Total displacement contours for different realizations of the slope
Figure 5.18. Total displacement for two realizations of the slope
Figure 5.19. Normal probability plot of SRF residuals
Figure 5.20. Residuals versus fitted values
Figure 5.21. Residuals versus realization running sequence
Figure 5.22. CCD total displacement and deformation contours for realization #5
Figure 5.23. CCD total displacement and deformation contours for realization #3
Figure 5.24. Response surfaces for the Q/L, Q and L prediction models
Figure 5.25. Predicted versus simulated SRF values for 30 random realizations
Figure 5.26. Johnson SB distribution fitted to SRF values, Q/L model
Figure 5.27. CDF of Johnson distribution
Figure 6.1. Algorithm developed for the new methodology (duplication of Figure 3.2)
232

List of Symbols

Alow  Low point estimate of variable A
Ahigh  High point estimate of variable A
A-  Low point estimate of coded variable A
A+  High point estimate of coded variable A
Cohesion
Reduced cohesion
D1  Dip of joint set #1
D2  Dip of joint set #2
DD1  Dip direction of joint set #1
DD2  Dip direction of joint set #2
Error
F  Internal force
Number of variables
Friction angle
Reduced friction angle
Ho  Null hypothesis
H1  Alternative hypothesis
K  Stiffness
Number of variables
L  Linear model
MSS  Mean Sum of Squares
P  Applied force
Q  Quadratic model
Q/L  Quadratic and linear model
Sum of Squares
T1  Trace length of set #1
T2  Trace length of set #2
Shear strength
Shear strength required for equilibrium
U  Vector of current displacement
XX’  Correlation matrix
α  Axial distance
βj  Effect of jth level of factor B
βi, βj, βij  Dependent variables in a regression model
ϵijk  Effect of error
η  Independent variables in a regression model
μ  Overall mean
μx  Mean of parameter x
νx  Skewness of parameter x
σx  Variance of parameter x
τi  Effect of ith level of factor A
(τβ)ij  Effect of the interaction of ith level of factor A and jth level of factor B

Acknowledgments

My sincerest gratitude goes to my supervisor, Dr. Dwayne D. Tannant, who always challenged me to do my utmost yet, at the same time, assured me I was in good hands, and who led me to the right path in the end. Not only did he guide me to thrive in my academic profession, but he also helped my personality to flourish. His instruction helped me face my fears and enhanced my confidence. I had the pleasure of working with such an experienced and professional supervisor for the past four years and benefitting from his unlimited support. I also had the honour of working with my research committee members, Drs. Rehan Sadiq and Solomon Tesfamariam, who greatly directed and advised me during the course of this research. I would also like to express my gratitude to Dr.
Abbas Milani, who facilitated the path towards my research objectives by sharing his knowledge with me. I would like to mention my extended family and also my friends, Aida, Shahob and Alireza, who, although far away from me, never left me alone when I needed them. I would also like to thank Sina, the brightest part of my journey. He took every single step with me and, while holding my hand, taught me that every difficulty in life can be conquered with peace of mind, thoughtfulness, faith in my capabilities, and positive thinking. He showed me that sometimes silence is much louder than any voice. And last but not least, I would like to thank Kelowna, my first host in Canada, which embraced me with the blue of its Okanagan Lake, the green of its vineyards, and the warmth of its happiest sunshine.

Dedication

To my grandfather, who was the first person to teach me that the most precious stones are formed in the hardest rocks
To my grandmother, who showed me that patience and unconditional love are the only keys to real happiness
To my mother PARVIN, who is my “Power”, “Ambition”, “Relief”, “Virtue”, “Inspiration” and “Nirvana”, simply by being her
To my father, whose presence replaces all the obstacles with the most soothing sound of music
To my sister, my lifetime teacher, who always took my hand when I was lost in the darkness of the exam nights

Chapter 1: Introduction

1.1 Problem Statement

Rock slope hazards are among the important hazards identified by Public Safety Canada (2014). Western Canada continues to develop new public and industrial infrastructure, including mines, railways, roads, pipelines, power lines, telephone, and recreation facilities, in mountainous terrain. The rock instability risks associated with exposure to steep natural and excavated rock slopes must be quantified as a first step in protecting the Canadian public and industrial sectors.
As the population density increases and more infrastructure is developed in the mountains, the consequences of rock-related instabilities will increase, and more effort must be devoted to hazard assessment and mitigation to maintain the risk at acceptable levels. Representative numerical models that analyse the safety of rock slopes will help geotechnical engineers gain a better understanding of slope behaviour and manage the consequences of potential failures. In rock slope stability assessment, an engineer deals with a natural, inhomogeneous, and anisotropic rock mass that exhibits complex and, in some cases, unpredictable behaviour. Understanding the failure mechanism (mainly governed by the geological structures and the geometry of the slope) is one of the most important concerns in every rock slope stability assessment. The general modes of failure can be categorized as planar, wedge, toppling, and circular. However, in reality, the failure mechanisms are much more complicated. To realistically model such complex behaviour, the slope geometry and the properties of the geological structures need to be determined. The concern is not limited to acquiring sufficient data using field investigations or laboratory tests; it is mostly related to developing a procedure through which the acquired variability of the data can be best represented in the numerical models. The parameters of concern in a typical slope stability assessment fall into two categories: those describing the physical characteristics of the rock and discontinuities (such as cohesion and friction angle), and geometric parameters, such as those describing the orientation, location, or size of the discontinuities in 3D space. Recently, several improvements have been made in integrating the obtained variability of the strength parameters into the numerical models.
However, despite recent improvements in acquiring representative geometric data through site investigations, incorporating the geometric data into the numerical models has remained a challenge. This challenge is mainly associated with the fact that a single change in the geometry necessitates re-meshing the whole model, resulting in an increase in the computational cost. Hence, it is not feasible to directly transfer the inherent variability of the geometric parameters into the numerical models. To compensate for this limitation, undesirable simplifying assumptions are implemented. As a result, the constructed numerical models will not be a good representation of reality, and the output results will not be reliable in many cases.

1.1.1 Conventional slope stability analysis

In general, the first step of every slope stability analysis requires identification of the important rock slope parameters using expert judgment or basic geotechnical knowledge. Field investigations or laboratory tests are conducted to acquire sufficient data for the parameters of interest. The extent of the required information depends on the selected modelling technique, the desired level of reliability in the output results, and the complexity of the studied environment. Hence, the investigation techniques may vary from simple window mapping to more complicated methods such as remote sensing techniques. Several strength parameters should also be estimated by conducting laboratory tests. In the next step of the stability analysis, the obtained data are characterized using deterministic or statistical techniques to provide the required input parameters for a selected numerical model. The numerical model, depending on its capabilities, can evaluate a “stability factor” and/or a possible failure surface. The term “stability factor” is used in this context as an output parameter that describes the stability of the slope, and it can be defined in different ways in each numerical calculation.
The most commonly used stability factors are the limit equilibrium factor of safety, the Shear Strength Reduction Factor (SRF), the reliability index, and the probability of failure.

Figure 1.1 presents a schematic view of the sequence of steps for a typical slope stability analysis, while highlighting in red the key need for a methodology to realistically transfer the acquired data into the selected numerical model. This becomes a critical issue when the acquired data for one or more parameters show a high level of inherent spatial variability. In this case, a deterministic mean value may not sufficiently describe the parameters. On the other hand, due to the excessive calculation time needed for most numerical models, it is not practical to directly transfer the acquired variability of the input parameters into the numerical models. Therefore, it is essential to develop a methodology that is able to integrate the inherent variability of the parameters into numerical models, yet maintain a desirable level of computational efficiency. Ignoring this stage may result in a slope stability analysis that depends on deterministic values that do not truly represent the inherent variability in slope behaviour. Such an analysis neglects possible extreme scenarios.

Figure 1.1. Typical steps in a conventional slope stability analysis

1.1.2 Uncertainty in slope stability analysis

An important factor, often neglected in a conventional rock slope stability assessment, is identifying the sources of uncertainty that affect the reliability of the output results (Duncan 2000). Uncertainty can exist in the input parameters, the calculations, and the procedure by which the input parameters are transferred into the numerical model. Since this study focuses on the geometric parameters and their implementation in numerical models, the uncertainty related to or caused by the geometric parameters is of particular interest.
Although improvements in data acquisition techniques and modelling methods have reduced the level of uncertainty in the geometric parameters and the output results, uncertainty still exists and has a direct influence on the reliability of the analysis or the design (Baecher & Christian 2003). Generally, sources of uncertainty in a rock slope stability analysis can be categorized as natural variability, knowledge uncertainty, and operational/decision uncertainty (Figure 1.2). The latter source of uncertainty exists due to time or cost constraints or impracticality of the design in reality, and it will not be considered in this research. Natural variability, however, is one of the most important sources of uncertainty, especially when dealing with a rock mass and its discontinuity network (Baecher & Christian 2003). The natural or inherent variability is related either to different characteristics of the geological structures at different locations (spatial) or to different characteristics of them at a single location but at different times (temporal). Spatial variability, a common source of uncertainty observed in the geometric parameters that describe discontinuities, such as orientation and size, should be considered in the numerical simulations. Knowledge uncertainty is another important source of uncertainty that exists due to: 1) the lack of knowledge of the governing geometric parameters, 2) the simplifying assumptions in the modelling methods (model uncertainty), and 3) the low level of accuracy in the acquired data (Baecher & Christian 2003). Human errors in data collection, such as field measurements, or lack of data due to the inaccessibility of some of the geological structures can be categorized as knowledge uncertainty. It should be noted that, even with a low level of knowledge uncertainty in the input parameters, the output results might still suffer from this disadvantage.
This is mainly due to the simplifying assumptions that exist in the numerical simulations, such as those involved in Limit Equilibrium Methods (LEM).

Figure 1.2. Sources of uncertainty in a rock slope stability analysis

Both natural variability and knowledge uncertainty can be reduced significantly by incorporating field investigation techniques that are capable of collecting sufficient and representative field data with an acceptable level of accuracy. However, model uncertainty may not be resolved simply by relying on complicated numerical simulations, which in many cases are also computationally expensive. Although it is important to select an adequate numerical technique, it is equally essential to develop a methodology to transfer the obtained accuracy and variability of the geometric parameters into the numerical model.

1.1.3 Integrated statistical and numerical method for rock slope stability assessment

One of the main objectives of this study is to develop a methodology that improves the incorporation of geometric parameter variability into numerical models, while maintaining an affordable computational cost. To achieve this objective, numerical models of different slope realizations are constructed. A realization represents a combination of geometric data obtained from statistical analysis of the geometric parameters. Multiple realizations of the slope can be generated based on the inherent variability of the input parameters. To generate an efficient yet representative number of realizations of the slope, statistical tools such as the Point Estimate Method (PEM) and Design of Experiment techniques such as factorial design and Analysis of Variance (ANOVA) are combined. A numerical model of each realization is constructed and its corresponding “stability factor” is estimated.
To probabilistically assess the stability of the slope, response surfaces are fitted to the numerical results of all the modeled realizations. A response surface is a mathematical relation in which the geometric parameters and the stability factor are the independent and dependent variables, respectively. The response surface captures the variability of the input parameters as well as their influence on the values of the stability factor. Another advantage of constructing the response surfaces is that they can be used later to predict the stability of any arbitrary realization of a specific slope, without the need for any further numerical simulation. A response surface can help the analyst gain a better understanding of the behaviour of the slope for different combinations of input (geometric) parameters.

1.2 Research Objectives

The main objective of this research is to overcome the shortcomings that exist in the analysis of slope stability when dealing with variability in the geometric parameters of the discontinuities. To accomplish this objective, a new methodology was developed that integrates structural and numerical models by avoiding some of the undesired simplifying assumptions that have been used to date. It is anticipated that applying this methodology in a slope stability analysis will provide geotechnical engineers with more reliable predictions of failure probability. The work presented in this thesis involved the following main tasks. The first objective is to develop a methodology to transfer the inherent variability of the geometric parameters into numerical models with fewer simplifying assumptions. The Point Estimate Method (PEM) is used to select a computationally efficient number of realizations of the structural geology.
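The chain just described, two point estimates per parameter, a small factorial set of realizations, a fitted response surface, and Monte Carlo sampling of that surface, can be sketched as follows. This is a minimal illustration, not the thesis's implementation: the parameter names and values are hypothetical, and `fake_srf` is a stand-in for the Phase2 finite element run that would supply each realization's SRF.

```python
import itertools

import numpy as np

rng = np.random.default_rng(42)

# Hypothetical parameters (mean, standard deviation); the real analysis
# would use distributions fitted to field data for dip, trace length, etc.
params = {
    "dip1":   (45.0, 5.0),   # dip of joint set #1, degrees
    "dip2":   (60.0, 8.0),   # dip of joint set #2, degrees
    "trace1": (10.0, 2.0),   # trace length of set #1, metres
}

# Two-point estimates (mu - sigma, mu + sigma) for each parameter, and
# their full factorial combination: 2^k geometric realizations.
estimates = {k: (mu - s, mu + s) for k, (mu, s) in params.items()}
realizations = [dict(zip(params, combo))
                for combo in itertools.product(*estimates.values())]

# Stand-in for the finite element model: returns an SRF per realization.
def fake_srf(r):
    return (2.3 - 0.012 * r["dip1"] - 0.008 * r["dip2"]
            - 0.02 * r["trace1"] + 0.0001 * r["dip1"] * r["trace1"])

X = np.array([[r["dip1"], r["dip2"], r["trace1"]] for r in realizations])
y = np.array([fake_srf(r) for r in realizations])

# Fit a linear-plus-interaction response surface by least squares
# (a quadratic surface would simply add squared columns to the basis).
def design(pts):
    x1, x2, x3 = pts.T
    return np.column_stack([np.ones(len(pts)), x1, x2, x3, x1 * x3])

beta, *_ = np.linalg.lstsq(design(X), y, rcond=None)

# Monte Carlo on the cheap surface instead of on the FE model:
# P(failure) is estimated as the fraction of sampled SRF values below 1.
n = 100_000
samples = np.column_stack([rng.normal(mu, s, n) for mu, s in params.values()])
prob_failure = np.mean(design(samples) @ beta < 1.0)
```

Only eight finite element runs are needed here, while the surface absorbs the tens of thousands of samples a direct Monte Carlo analysis would otherwise demand; this is the computational saving the methodology targets.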
The different realizations lead to different numerical models that better capture the inherent variability of the geometric parameters. The second objective of this research is to construct a response surface based on the different deterministic values of the stability factor obtained from the numerical models of each realization. The response surface defines the relation between the geometric parameters involved in the analysis and the corresponding value of the stability factor. The main achievement of this step is a mathematical relation that is able to predict the stability factor of any arbitrary realization of the slope. The third objective is to apply a sampling technique such as the Monte Carlo method to the obtained response surface to estimate the probability of failure. Considering a probability of failure rather than a deterministic value of the stability factor, while accounting for the inherent variability of the geometric parameters, results in more realistic interpretations and a better understanding of the slope behaviour, as will be demonstrated. The fourth objective is to evaluate the applicability of the integrated statistical and numerical method to a real case study: a ridge that contains the Little Tunnel, located in Naramata, BC. To fulfill this objective, the following additional tasks were performed.

o Digital photogrammetry was used as a remote sensing technique to acquire field data associated with the geometry and orientation of the discontinuities that define the potential failure mechanism in the ridge.

o The acquired data were statistically analyzed to characterize the inherent spatial variability of geometric parameters such as dip and persistence. The use of remote sensing techniques provides sufficient data, and subsequently adequate knowledge, to reduce the level of uncertainty in the geometric parameters.
o The obtained data are incorporated into the developed methodology and the probability of failure for the ridge is estimated.

Accomplishing the aforementioned objectives and combining the obtained results with the estimated consequences of a probable failure may lead to better-informed decisions regarding possible mitigation or remedial actions.

1.3 Contribution and Originality

The main contribution of this research is to develop and demonstrate a new methodology for the analysis of rock slope stability. The method uses all the geometric data acquired from the field, first to characterize the geometric variability and then to selectively choose geometric input parameters, using statistical processing techniques, to construct a series of numerical finite element models of the rock slope (realizations). The number of models to be constructed for a given slope will depend on the characteristics of the dominant discontinuities, but the proposed methodology keeps the number of models to a minimum for practical modelling purposes, while also considering the inherent variability in the geometric parameters and avoiding undesired simplifying assumptions in the numerical models. Using a response surface developed from a combination of all the numerical model results, this methodology aims to produce a general solution that can predict the behaviour of a specific rock slope for any desired combination of its geometric parameters, while only a limited number of these combinations are modelled numerically. This general solution approach should permit an improved level of reliability in any slope design or related risk assessment and risk management.

Chapter 2: Background and Literature Review

As described in Chapter 1, the main focus of this research is to increase the reliability of slope stability assessment. This objective is achieved by introducing a methodology that integrates the inherent variability of the geometric parameters into a numerical model.
In the first section of the present chapter, some fundamental terminology and concepts in geotechnics are described. The second and third sections briefly review the available tools for site investigations and numerical calculations in slope stability assessment, respectively. In the last section, the currently available procedures to incorporate the obtained data into numerical models are discussed, and their shortcomings and advantages are highlighted.

2.1 Fundamental Concepts in Geotechnics

The presence of discontinuities in a rock mass is one of the main reasons for slope failures, which may or may not be triggered by an environmental change such as an earthquake (seismic force) or high precipitation (water pressure) (Park et al. 2005). To obtain a better knowledge of the behaviour of rock slopes and their failure mechanisms, the first step is to identify and characterize the important parameters that govern the behaviour of discontinuities as reliably as possible (Hudson & Harrison 2000). Discontinuities are formed under different thermal, chemical, and mechanical processes acting on the intact rock over millions of years. The term “discontinuity” is a generic term that defines any kind of separation in the main body of the intact rock. The process under which discontinuities are formed categorizes them as joints, faults, bedding planes, and tension cracks (Figure 2.1).

Figure 2.1. Different forms of discontinuities in a highwall, Elkview coal mine

The characteristics of a discontinuity are defined by strength, geometric, and distributional parameters (Dershowitz 1984). Strength parameters mostly describe the physical characteristics of the discontinuities. These parameters are the main components of many conventional failure criteria, such as Mohr-Coulomb. Shear strength is one of the influential strength parameters and is defined by its two main components, known as cohesion and friction angle (Hoek 2000).
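As a simple illustration of how these two components enter a stability check, the Mohr-Coulomb strength, tau = c + sigma_n tan(phi), feeds directly into the limit-equilibrium factor of safety for dry planar sliding, FS = (cA + W cos(psi) tan(phi)) / (W sin(psi)). This is a standard textbook case, sketched here with illustrative numbers only (not values from the thesis):

```python
import math

def mohr_coulomb_shear_strength(c, sigma_n, phi_deg):
    """Shear strength of a discontinuity: tau = c + sigma_n * tan(phi)."""
    return c + sigma_n * math.tan(math.radians(phi_deg))

def planar_factor_of_safety(c, phi_deg, weight, area, psi_deg):
    """Limit-equilibrium FoS for dry planar sliding on a joint dipping psi:
    FS = (c*A + W*cos(psi)*tan(phi)) / (W*sin(psi))."""
    psi = math.radians(psi_deg)
    resisting = c * area + weight * math.cos(psi) * math.tan(math.radians(phi_deg))
    driving = weight * math.sin(psi)
    return resisting / driving

# Illustrative numbers: a 1000 kN block resting on a 20 m^2 joint dipping
# 35 degrees, with cohesion c = 25 kPa and friction angle phi = 30 degrees.
fs = planar_factor_of_safety(c=25.0, phi_deg=30.0, weight=1000.0,
                             area=20.0, psi_deg=35.0)
```

Raising the joint dip psi or lowering c and phi drives FS toward 1, the conventional threshold between stable and unstable conditions.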
Information about the strength parameters is usually provided by in-situ or laboratory tests. Considerable effort has been made to characterize the strength parameters and to capture their spatial and temporal variability as representatively as possible (Patton 1966, Nilsen 1985, Bandis et al. 1981, Phoon & Kulhawy 1999, Duzgun et al. 2002). These parameters are considered direct inputs to constitutive laws or limit equilibrium equations in order to predict the resistance of the discontinuities against driving forces, such as gravity or water pressure. Besides strength parameters, the geometric parameters play an important role in the stability of the slope and its failure mechanism. The geometric parameters are categorized as two-dimensional parameters, such as the shape and the size of the discontinuity, and three-dimensional parameters, such as its orientation and location (Dershowitz 1984). The combination of these parameters forms the discontinuity system of a rock mass and is used in the kinematic analysis of rock slopes. The definitions of some important geometric parameters are summarized in Table 2.1. To describe the possible inherent variability of the strength and geometric parameters, distributional parameters such as the mean, variance, and coefficient of variation should be defined. Since it is not always possible to acquire enough data for each geometric parameter to be statistically defined, different types of distributions are suggested in the literature for the governing geometric parameters (Table 2.1). The geometric parameters are the main factors that define the slope failure mechanisms and their magnitudes (Goodman et al. 1968). A change in the geometry of a discontinuity system, while keeping the strength parameters constant, may turn a stable slope into an unstable one or totally change the failure mechanism.
As a result, a true understanding of the geometry of the problem, which requires a representative characterization of the geometric parameters of the discontinuities, is crucial. Different site investigation techniques are commonly used to capture the inherent spatial variability in the geometric parameters. However, acquiring sufficient field data is very challenging when dealing with highly fractured, unstable, or remote areas.

Table 2.1. Definitions of geometric and distributional parameters of discontinuities

Orientation: Represents the position of the discontinuity plane in space, with two components (Dershowitz 1984). Dip is the angle between the discontinuity plane and a horizontal plane, measured in a plane normal to both planes. Dip direction is the angle between the north axis in the horizontal plane and the intersection between the horizontal plane and the vertical plane in which the dip is measured. Suggested distribution: Fisher (Dershowitz 1979, Hack 1998).

Size: Represents the extension of a discontinuity in a rock mass, defined by (Cruden 1977): persistence, the measured areal extent of a discontinuity (Einstein et al. 1983), and trace length, the length measured in the exposure view direction. Suggested distributions: exponential (Call et al. 1976, Robertson 1970), lognormal (Barton 1977, Einstein et al. 1980), gamma (Raiffa & Schlaifer 1961), hyperbolic (Segall & Pollard 1983).

Spacing: Represents the perpendicular distance between two adjacent discontinuities in one set. Suggested distributions: lognormal, normal (Dershowitz 1984).

Shape: Represents the shape of the surface of a discontinuity (circle, rectangle, ellipsoid, etc.).
Roughness: Represents the irregularity and unevenness of the surface of a discontinuity (Barton & Choubey 1977).

Location: Represents the coordinates of the centre or the end points of a discontinuity in space (x, y, z). Suggested distribution: uniform (Dershowitz 1984).

2.2 Acquiring Field Data

Prior to any site investigation, the parameters of interest should be identified and the level of required detail and accuracy should be determined. As previously mentioned, in most problems the required data and the level of detail are dictated by the selected modelling method. For some simple modelling techniques that are mainly based on the conventional limit equilibrium equations, the required parameters are limited to some strength parameters, such as the cohesion, friction angle, Young’s modulus, and Poisson’s ratio, and some geometric parameters, such as the orientation of the discontinuities and the geometry of the slope. In this type of modelling, the failure mechanism has been identified in advance using kinematic analysis on stereonets. In more complicated modelling methods, such as the Finite Element or Discrete Element methods, more initial information associated with either the strength parameters or the geometric parameters is required. Since these methods are capable of analyzing more complicated discontinuity systems, the failure mechanism is considered an unknown parameter. Hence, detailed information about the geometric parameters is necessary. Since in these methods the relations between the parameters are defined by constitutive laws, more data about the boundary conditions or the in-situ stresses are also needed (Feng & Hudson 2010). Different site investigation techniques are available to provide the required information for any of the selected modelling methods. In more complex cases, more than one technique may be conducted to obtain the necessary information.
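The Fisher distribution suggested in Table 2.1 for orientations can be sampled in closed form, which is how synthetic joint-set orientations are typically generated for slope realizations. A sketch with illustrative concentration values (kappa; larger kappa means orientations cluster more tightly about the mean pole):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_fisher_deviation(kappa, n, rng):
    """Angular deviations (degrees) from the mean pole of a Fisher
    distribution with concentration kappa, drawn via the inverse CDF:
    cos(theta) = 1 + ln(1 - u * (1 - exp(-2*kappa))) / kappa, u ~ U(0,1).
    u = 0 gives theta = 0 (the mean pole); u = 1 gives theta = 180 deg."""
    u = rng.random(n)
    cos_t = 1.0 + np.log(1.0 - u * (1.0 - np.exp(-2.0 * kappa))) / kappa
    return np.degrees(np.arccos(np.clip(cos_t, -1.0, 1.0)))

# A tightly clustered joint set (kappa = 100) vs a dispersed one (kappa = 10).
tight = sample_fisher_deviation(100.0, 10_000, rng)
loose = sample_fisher_deviation(10.0, 10_000, rng)
```

Rotating each sampled deviation about the set's mean pole (with a uniform azimuth) then yields full dip/dip-direction pairs for use in a realization.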
Some of the most commonly used techniques are scanline mapping, window mapping, borehole sampling, and remote sensing techniques, which are explained briefly below.

2.2.1 Scanline and window mapping

If a large rock surface is exposed, scanlines and window mapping are the commonly used conventional techniques. Scanlines are tape measures with a length between 2 and 30 metres (depending on the scale of the exposed surface). The scanlines are mounted on the face along the strike of the rock slope or in the line of maximum dip (Priest 1993). The orientation, trace length, roughness, and curvature of the structures that intersect the scanline are measured and recorded in a table (Figure 2.2). To measure the orientation of the structures, a geological compass or a clinometer is used. With this technique, there is variability in the values of dip and dip direction measured by different operators (Ewan & West 1981). Depending on the dimensions of the exposed rock face, several scanlines may be used. One of the main shortcomings of this method is the parallax effect, in which the geological structures that are parallel to the scanline are ignored (Singhal & Gupta 2010). The window sampling method has the same principles as the scanline. However, the mapping zone is defined by a window rather than a line. In this method, only the geometric parameters of the discontinuities that have a specified portion of their trace lengths within the window are considered. This method eliminates the bias of linearity (parallax effect) that exists in the scanline method (Priest 1993). One of the advantages of the two discussed methods is their ability to obtain data from larger structures. However, these techniques are not suitable for unstable or remote areas where the lives of the operators may be at risk. Moreover, depending on the mapper’s skill, the knowledge uncertainty in the measurements can become noticeable.
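The orientation (parallax) bias described above can be quantified with the standard weighting used for scanline surveys: the apparent linear frequency of a joint set falls off with the cosine of the angle between the scanline and the set's normal, so sets nearly parallel to the line are barely sampled at all. A minimal sketch (the 2 joints/m true frequency is illustrative, not a value from the text):

```python
import math

def apparent_frequency(true_frequency, delta_deg):
    """Linear frequency of intersections along a scanline inclined at
    angle delta to the joint-set normal: lambda_app = lambda * cos(delta).
    As delta approaches 90 degrees (scanline parallel to the joints),
    the set is barely intersected: the parallax bias noted in the text."""
    return true_frequency * math.cos(math.radians(delta_deg))

# True frequency 2.0 joints/m, sampled at increasingly oblique angles.
freqs = [apparent_frequency(2.0, d) for d in (0, 30, 60, 85)]
```

Dividing each observed count by cos(delta) reverses the bias when compiling set statistics from scanline data.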
2.2.2 Borehole sampling

If a large-scale exposed surface is not available, or some information at depth is required, borehole sampling is the only available technique (Singhal & Gupta 2010). In this technique, the main source of information is a high-quality extracted drill core. Some information about the orientation of the discontinuities and their spacing can be obtained from the extracted cores (Figure 2.2). More information can also be obtained during the coring operation from observations such as the colour of the coring fluid or the rate of coring progress. However, since the cores have small diameters, information about the size of the discontinuities cannot be obtained. Another shortcoming of this method is the possibility of core rotation during the extraction process. As a result, the true orientation cannot be obtained from the core sample unless the rotation angle is known and a correction factor for the error is applied (Priest 1993). Borehole sampling is an expensive technique and may not be feasible for many smaller-scale projects.

Figure 2.2. Scanline and borehole sampling

2.2.3 Remote sensing techniques (digital photogrammetry)

Another site investigation technique widely accepted in the field of geotechnical engineering is traditional photogrammetry. This technique utilizes visual human interpretation of stereo photographs in order to extract information about the geological structures. Aerial photographs have also been used to obtain information about the topography of an area. Recently, advances in digital technology have been incorporated into traditional photogrammetry and have elevated this method to a remote sensing technique known as digital photogrammetry. This enhancement has increased the ability of the technique to cover unstable and remote regions with higher accuracy (Mikhail et al. 2001).
The primary tools for digital photogrammetry are a digital camera, a total station or a hand-held GPS, and photogrammetry software. The total station or the GPS is used to survey target points that are later georeferenced in the digital model. The camera stations are established along the exposed surface and overlapping photographs of the object are taken (Figure 2.3). The distance between two adjacent camera stations should be between 1/6 and 1/8 of the distance between the cameras and the exposed surface (Haneberg 2008). The arrangement of camera stations should provide ample overlap of the photographs and overlap of the resulting digital rock surface models generated from each pair of camera stations (Shamekhi & Tannant 2010).

Figure 2.3. Camera positions for stereo photogrammetry

Digital Terrain Models (DTMs) are constructed from the georeferenced stereo photographs as a triangulated irregular network (Figure 2.4) generated by the photogrammetry software (Birch 2006). The discontinuities that are captured in the DTMs can be digitally mapped and information about their orientations, locations, and trace lengths can be extracted (Sturzenegger et al. 2009) (Figure 2.4). Since the discontinuities are mapped digitally in an office environment, more discontinuities can be mapped in a shorter period. This advantage improves the ability to capture the inherent geometric variability. Moreover, the same DTM can be mapped by different experts, or by the same mapper several times, to evaluate the uncertainty or the error in the acquired data due to human judgment. The accuracy of the mapping on a DTM depends on the accuracy of the surveyed points, the distance between the cameras and the object, the lens distortion, and the field conditions (Haneberg 2008).
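The 1/6 to 1/8 rule quoted above translates directly into a recommended range of camera-station spacings for a given stand-off distance. A trivial sketch (the 60 m stand-off is illustrative):

```python
def station_spacing_range(distance_to_face):
    """Recommended spacing between adjacent camera stations:
    1/8 to 1/6 of the camera-to-face distance (Haneberg 2008)."""
    return distance_to_face / 8.0, distance_to_face / 6.0

# For a camera standing 60 m from the exposed face, adjacent stations
# should be roughly 7.5 to 10 m apart.
lo, hi = station_spacing_range(60.0)
```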
Digital photogrammetry has overcome many of the obstacles that exist in the conventional techniques, in a more accurate and more time-efficient manner. A summary of its advantages is listed below (Sturzenegger & Stead 2009):

- Able to cover inaccessible or remote areas that are impossible or too dangerous for a mapper or an operator to reach.
- Able to cover a wider region in order to increase the quantity of the collected data. (This advantage enables the analyst to capture the inherent variability of the geometric parameters.)
- Able to perform digital structural mapping over larger areas (not limited to scanlines or windows) in a short period.
- Able to obtain more accurate data.
- Able to re-map the area by the same mapper or by different experts without the need for more field work.

Figure 2.4. Digital Terrain Model; a) triangulation b) texture c) digital mapping

Since the main purpose of this research is to increase the reliability of a slope stability analysis by reducing the level of uncertainty, digital photogrammetry is used. It is demonstrated that the capabilities of digital photogrammetry can significantly reduce uncertainties in the input parameters that are associated with natural variability and lack of knowledge.

2.3 Modelling Techniques in Slope Stability Analysis

Modelling in rock engineering deals with a complex natural material combined with a complicated system of discontinuities that has been exposed to changes in temperature, water pressure, and other dynamic forces.
To construct a representative model of a rock structure, the following factors should be considered in the analysis (Jing 2003):

- Relations between different parameters, usually embedded in the constitutive laws
- Pre-existing or in-situ conditions of the rock regarding stresses, temperature, and water pressure
- The presence of discontinuities, their geometry and physical properties
- Variability of parameters (intact rock or discontinuities) in different locations (spatial variability)
- Variability of parameters over different periods of time (temporal variability)
- Variability of parameters in different directions due to the anisotropic nature of rocks
- Variability of parameters at different scales (depending on the scale, the behaviour can be governed either by the discontinuities or by the intact rock)
- Influence of artificial changes in the geometry, usually caused by engineering constructions such as tunnel excavation

The available modelling methods in rock engineering can be categorized in two main groups: a) conventional or analytical methods and b) numerical methods (Figure 2.5). The conventional methods are commonly used for cases in which the failure mechanisms are predicted using stereo projection techniques. However, as the failure mechanisms become more complicated, the conventional methods become less capable of representing the behaviour of the slopes, and they are replaced by numerical methods.

Figure 2.5. Modelling methods in rock engineering

2.3.1 Conventional methods

Two steps are considered essential in rock stability assessment using the conventional methods. In the first step, the presence of any probable failure is kinematically analyzed by projecting the orientation data on a stereonet (usually an equal-angle, lower-hemisphere projection). On a 2D stereonet, each plane is represented by a pole that contains information about the dip and dip direction of its corresponding plane (Figure 2.6).
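The plane-to-pole relationship used in such projections can be sketched as follows. This is the standard conversion (pole plunge = 90° − dip, pole trend = dip direction + 180°); the example orientation values are illustrative only.

```python
def plane_to_pole(dip, dip_direction):
    """Convert a plane (dip / dip direction, degrees) to its pole
    (plunge / trend, degrees) for a lower-hemisphere projection."""
    plunge = 90.0 - dip
    trend = (dip_direction + 180.0) % 360.0
    return plunge, trend

print(plane_to_pole(60.0, 135.0))   # (30.0, 315.0)
```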
Discontinuities with the same orientation form a cluster of poles on a stereonet and are considered a discontinuity set. These sets are described by the orientation of their mean pole along with other distributional parameters, such as a variance or a probability distribution, that represent their variability. Each discontinuity system usually contains one to four different sets (Price & Cosgrove 1990). Although there are several methods to identify the discontinuity sets, the process depends heavily on expert judgment and is very subjective (Priest 1993, Grenon & Mukendi 2012). To reduce the subjectivity of such analysis, numerical algorithms have been suggested and are widely used (Shanely & Mahtab 1976, Mahtab & Yegulalp 1982, Fisher et al. 1993, Zhou & Maerz 2002). Recently, improvements in commercial software such as DIPS (Rocscience 2013) have enabled automatic discontinuity set detection using a fuzzy clustering algorithm. Although some initial information, such as the number of clusters and their potential mean values, is defined by the user, the algorithm noticeably reduces the human bias of such analysis (Hammah & Curran 1998). Figure 2.6 demonstrates the potential difference between the identified discontinuity sets for the same orientation data when different approaches are used. Less complicated failure mechanisms, such as planar failure1, toppling2 or wedge failure3, can be directly identified by kinematic analysis of the orientation data. However, more complicated mechanisms such as stepped-planar failure may need further analysis.
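The clustering idea can be sketched with a deterministic spherical k-means, a simplification of the fuzzy clustering used in DIPS (Hammah & Curran 1998): the number of sets k is still user-supplied, and all orientations below are illustrative, not field data.

```python
import math

def pole_to_vector(plunge, trend):
    """Unit vector for a pole given plunge/trend in degrees."""
    p, t = math.radians(plunge), math.radians(trend)
    return (math.cos(p) * math.sin(t), math.cos(p) * math.cos(t), math.sin(p))

def _dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cluster_poles(poles, k, iterations=20):
    """Group (plunge, trend) poles into k sets by angular closeness (|dot|)."""
    vecs = [pole_to_vector(*p) for p in poles]
    # farthest-point initialization: start from the first pole, then add the
    # pole least aligned with the centres chosen so far
    centres = [vecs[0]]
    while len(centres) < k:
        centres.append(min(vecs, key=lambda v: max(abs(_dot(v, c)) for c in centres)))
    labels = [0] * len(vecs)
    for _ in range(iterations):
        labels = [max(range(k), key=lambda j: abs(_dot(v, centres[j]))) for v in vecs]
        for j in range(k):
            members = [v for v, lab in zip(vecs, labels) if lab == j]
            if members:  # normalized mean vector becomes the new set centre
                m = [sum(c) / len(members) for c in zip(*members)]
                norm = math.sqrt(sum(c * c for c in m)) or 1.0
                centres[j] = tuple(c / norm for c in m)
    return labels

# two tight clusters of poles should separate into two sets
poles = [(30, 315), (32, 313), (28, 317), (60, 120), (58, 122), (62, 118)]
print(cluster_poles(poles, 2))   # → [0, 0, 0, 1, 1, 1]
```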
The kinematic analysis mostly considers the orientation of the joints while ignoring other important geometric parameters such as trace length, persistence, and roughness (Eberhardt 2003). However, because of its simplicity, kinematic analysis can be integrated with statistical analysis of the orientation data, which may lead to a better understanding of the inherent variability in the orientation parameters (Coggan et al. 1998).

1 - Sliding along a discontinuity face without any rotation (Goodman and Kieffer, 2000)
2 - Forward rotation of steeply inward dipping planes on an edge (Goodman and Kieffer, 2000)
3 - Sliding along the line of intersection of two non-parallel discontinuity planes (Goodman and Kieffer, 2000)

Figure 2.6. Discontinuity sets projected on a stereonet - Elkview mine

If any kind of kinematic instability, such as a wedge or planar sliding, is identified, the second step, a kinetic analysis, is performed. In this step, the strength parameters of the discontinuities are also taken into account. For simple geometries, in which the internal deformations or the bridging can be ignored, limit equilibrium techniques are commonly used in the kinetic analysis. This technique compares the magnitudes of the resisting and driving forces acting along the planes of the discontinuities to estimate the factor of safety or the probability of failure (Coggan et al. 1998). Conventionally, the kinetic analysis of slopes was conducted deterministically and each parameter was represented by a single mean value. The output of such analysis was a single value for the factor of safety, which was used as the parameter of choice for stability assessment.
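The resisting-versus-driving-force comparison can be sketched for the simplest case, dry planar sliding with Mohr-Coulomb joint strength. All symbols and numerical values here are illustrative; real limit equilibrium analyses add water pressure, tension cracks, and external forces.

```python
import math

def planar_fs(weight_kN, dip_deg, cohesion_kPa, friction_deg, area_m2):
    """FS = (c*A + W*cos(psi)*tan(phi)) / (W*sin(psi)) for a dry planar slide."""
    psi = math.radians(dip_deg)      # dip of the sliding plane
    phi = math.radians(friction_deg) # joint friction angle
    resisting = cohesion_kPa * area_m2 + weight_kN * math.cos(psi) * math.tan(phi)
    driving = weight_kN * math.sin(psi)
    return resisting / driving

print(round(planar_fs(weight_kN=1200, dip_deg=35, cohesion_kPa=10,
                      friction_deg=30, area_m2=40), 2))   # → 1.41
```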
However, in this approach, the factor of safety is highly sensitive to the level of uncertainty in the input values, and the approach is only acceptable when input data with low scatter and a low level of uncertainty are available. To overcome this shortcoming, probabilistic analysis is utilized. In this approach, the inherent variability of each input parameter should be characterized by a probability distribution function and its first and second statistical moments (mean and standard deviation). Since the geometry is simple, the failure mechanism is known in advance, and discretization of the geometry is not required, the input parameters, either geometric or physical, can easily be defined stochastically. In the limit equilibrium methods, Monte-Carlo sampling is commonly used to generate different realizations of the slope based on the inherent variability of the input parameters. Therefore, if limit equilibrium techniques are selected as the method of stability analysis, transferring the variability of the geometric and strength parameters is not a major challenge. However, due to the high level of simplification in such analysis, the modelling uncertainty is a concern. In this method, the behaviour of the materials is assumed to be linear. In addition, dynamic forces, loading sequences, and some aspects of the boundary conditions are ignored entirely. These limitations increase the level of modelling uncertainty and restrict the application of conventional methods to less complex mechanisms, whereas complex mechanisms are the dominant situations in reality. To avoid such limitations, numerical methods are considered more reliable choices for the stability analysis of rock slopes. However, these methods suffer from other disadvantages, especially when incorporated in a probabilistic analysis.
2.3.2 Numerical methods

2.3.2.1 Continuum methods

The continuum models are usually suitable for slopes whose behaviour is governed mainly by the intact rock rather than the joint sets. In the late 1960s, the concept of the joint element was introduced by Goodman et al. (1968). Later, other improvements were made to the principles of the continuum methods that allowed the user to take into consideration the presence of fractures such as faults and bedding planes (Zienkiewicz et al. 1970, Ghaboussi et al. 1973). Despite all these modifications, the domain in a continuum model is still assumed continuous. Moreover, continuum methods do not consider block movements or rotations (Eberhardt 2003). As a result, they should be used to model intact rock behaviour or rock masses with a small number of discontinuities that are not large enough to create a block system (Jing 2003). The most commonly used continuum methods are the Finite Difference Method (FDM), Finite Element Method (FEM), and Boundary Element Method (BEM). The Finite Difference Method is the oldest method in the family of continuum modelling. In this method, the domain is discretized into smaller elements and a grid system is defined. The governing partial differential equations (PDEs) are then discretized by replacing the partial derivatives with approximated differences at the adjacent grid points (Wheel 1996). One of the most commonly used rock engineering software packages based on FDM is FLAC, available in both 2D and 3D (Itasca 2012). The Finite Element Method is widely used in engineering due to its flexibility and its capability to handle non-linear behaviour, the non-homogeneous nature of rocks, and dynamic forces, such as the changes in deformation during a rock excavation, at an efficient computational cost (Griffith & Lane 1999, Duncan 1996). In FEM, the domain is discretized into smaller elements such as triangles.
The deformation and stress values are approximated locally for each element. The assemblage of the local equations forms a global matrix, which is solved to obtain the final results. FEM is the principal method used in rock engineering software tools such as Phase2 (Rocscience 2013) or ANSYS. Since FDM and FEM are based on discretization of the main domain, their efficiency decreases as the dimensions of the domain increase or as the degrees of freedom of the model exceed a certain value. In this situation, BEM is a more appropriate modelling technique among the continuum methods (Jing 2003). In the Boundary Element Method, only the boundary of the model is discretized. As a result, the number of calculations and the computational time for larger domains decrease significantly. The main application of BEM is in modelling fracture propagation behaviour. BEM is computationally more efficient than FEM and, at the same level of discretization, yields more accurate results. However, for modelling non-linear material behaviour, such as plasticity or damage, FEM provides better results due to the larger number of elements in the domain (Jing 2003). One of the most important applications of BEM is in the hybrid methods, which are discussed later.

2.3.2.2 Discrete methods

When the rock mass is highly fractured, or the length of the discontinuities is large enough to create distinct blocks, the domain cannot be modeled as a continuum. In this scenario, the rock domain is best represented by an assemblage of separate blocks that can be either rigid (for which explicit DEM is used) or deformable (for which Discontinuous Deformation Analysis (DDA) is applied) (Shi 1988, Goodman 1995). In continuum models, the position of the elements remains fixed relative to their adjacent elements.
In contrast, in discrete methods, the blocks can move or rotate, and as a result their positions must be updated during the analysis using an edge or contact detection procedure. The main difference between the discrete methods and continuum methods is therefore the procedure by which the contacts between adjacent blocks are updated during the loading process. In the Discrete Element Method (DEM), the discontinuity system is characterized based on the acquired field data. The discontinuity system defines the block assemblage in the discrete model. In the next steps, the movement of the blocks is evaluated, their new contact information is updated, and the deformation (in the case of non-rigid blocks) is calculated. DEM has been implemented in some of the most commonly used commercial software tools such as UDEC and 3DEC (Cundall 1980, Cundall 1988, Cundall & Hart 1993).

2.3.2.3 Hybrid Models

Due to the inhomogeneous and anisotropic nature of rocks, in some cases it is necessary to use a combination of the available numerical methods, either to construct a more realistic model or to reduce the computational effort. The best-known hybrid models are BEM/FEM and DEM/BEM (Jing 2003). One important advantage of hybrid models is that they eliminate some of the shortcomings that exist in each of the numerical methods individually. As a result, in a BEM/FEM or DEM/BEM combination, BEM can be applied to the far-field zones where the material behaviour can be assumed linear elastic, while FEM or DEM can be applied to the near-field zone where realistic modelling of the material behaviour and of the geometry of the discontinuities is crucial (Eberhardt 2003). The choice of the numerical method is very important to the reliability of the obtained result.
A good understanding of the environment of study, acquisition of sufficient data, and thorough knowledge of the restrictions and advantages of the numerical methods can help the analyst choose the most appropriate approach. Numerical methods consider more of the essential parameters in their slope stability calculations compared with the conventional methods. However, since these calculations are complex and mainly based on discretization of the geometry, it is very challenging to consider the inherent variability of the input parameters efficiently. Hence, contrary to conventional methods, the numerical methods are mostly used in deterministic slope stability analysis. As mentioned, one of the main objectives of this study is to develop a new methodology that is capable of integrating the inherent variability of the geometric parameters into a finite element model in an efficient manner. FEM is used here because the geometry of the studied slope configurations can be best represented by a continuum model. More details about this topic are provided in Section 3.6 of this thesis.

2.4 Methods to Transfer Field Data into Numerical Models

2.4.1 Deterministic approach

Both the conventional and numerical methods are efficient techniques for deterministic slope stability analysis. However, as discussed in Section 1.1.2, a deterministic analysis neglects many sources of uncertainty and results in less reliable outputs. In such approaches, regardless of the details captured through site investigation, each parameter is described by its mean value. The stability of the slope is likewise described by a single value of the stability factor. As noted by Tabba (1984), for a factor of safety between 1.2 and 1.8 (a nominally stable slope), the probability of failure is very sensitive to the degree of uncertainty and the inherent variability in the input parameters.
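Tabba's observation can be illustrated numerically: the same "stable" mean factor of safety can hide very different probabilities of failure depending on the scatter. A normal distribution for the factor of safety is assumed here purely for illustration.

```python
from statistics import NormalDist

mean_fs = 1.4                        # nominally stable deterministic result
for std in (0.1, 0.3, 0.5):          # increasing uncertainty in the inputs
    pf = NormalDist(mean_fs, std).cdf(1.0)   # P(FS < 1)
    print(f"std = {std}: P_f = {pf:.4f}")    # ≈ 0.00003, 0.0912, 0.2119
```

The deterministic answer (FS = 1.4) is identical in all three cases, yet the probability of failure spans four orders of magnitude.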
In addition, the same factor of safety for different failure modes may correspond to different probabilities of failure (Einstein & Baecher 1982, Low 1997). Therefore, a deterministic approach used with the numerical methods can only be trusted for a general evaluation of slope stability. Due to the high level of uncertainty, such results are not reliable enough to be used in practical designs or decision-making.

2.4.2 Probabilistic approach

To overcome many of the shortcomings of deterministic slope stability analysis, stochastic analyses are used instead. Although fuzzy theory has also been applied in this field (mostly in soils) (Juang et al. 1992, Davis & Keller 1997, Daftaribesheli et al. 2011, Park et al. 2012), probabilistic analyses are still among the most commonly used techniques for non-deterministic slope stability analysis. To evaluate the stability of slopes in terms of the probability of failure, different realizations of the slope should be constructed and their stability behaviour analyzed. To generate representative realizations of the slope, statistical techniques are engaged in the process of rock slope stability assessment. Different statistical techniques, such as the First Order Second Moment method (FOSM), the Point Estimate Method (PEM), the Hasofer-Lind approach (FORM), and Monte Carlo simulation, have been practiced in the field of geotechnical engineering and incorporated with some of the numerical methods, especially FEM and DEM. Since the geometry and the discretization of a model are independent of its physical parameters, the inherent variability of the strength parameters can be transferred into a numerical model at an affordable computational cost. Many authors have efficiently implemented probabilistic strength parameters in different numerical models (Wolff 1996, Duncan 2000, Griffiths & Fenton 2004, Griffiths et al. 2009, Duzgan & Bhasin 2009).
In most cases, Monte Carlo and Point Estimate methods have been used due to their accuracy and flexibility. Each combination of the strength parameters, which may be selected randomly or non-randomly, creates a different realization of the slope while the geometry remains the same. By analyzing each realization, a deterministic value of the factor of safety is obtained. Finally, a probability distribution can be fitted to the factor of safety values and, accordingly, the probability of failure can be estimated (Miller et al. 2004, Chiwaye & Stacy 2010). Due to the less complex nature of such analysis, it can easily be combined with available numerical techniques. Therefore, probabilistic slope stability analysis, when only the strength parameters are of concern, has been thoroughly investigated. Some commercial software tools, such as Phase2 8.00, have incorporated probabilistic analysis (PEM) of the strength parameters in their simulations. Their analysis is continuous and directly produces a value for the probability of failure without the need to re-run the model for each realization. For the same geometry, the difference in computational time between a deterministic and a probabilistic analysis is within a few minutes (Rocscience 2013). The statistical analysis of slope stability, however, is not limited to the inherent variability of the strength parameters. The variability in the geometric parameters can have a noticeable impact on the probability of failure. More importantly, the geometric parameters define the failure mechanisms. If these parameters are defined deterministically, the uncertainty in the predicted failure mechanism will be very high. Some successful efforts have recently been made to perform stability assessments with probabilistic geometric parameters.
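The Monte-Carlo strength-parameter workflow described above can be sketched as follows: sample cohesion and friction angle, compute one factor of safety per realization, then estimate the probability of failure from the empirical distribution. A planar limit equilibrium formula is used here only as a cheap stand-in for the numerical model, and all values are illustrative.

```python
import math
import random

random.seed(1)
W, psi, area = 1200.0, math.radians(38.0), 40.0   # weight (kN), plane dip, area (m^2)

fs_values = []
for _ in range(10000):                            # one realization per sample
    c = random.gauss(8.0, 2.0)                    # cohesion (kPa), illustrative
    phi = math.radians(random.gauss(32.0, 3.0))   # friction angle, illustrative
    fs = (c * area + W * math.cos(psi) * math.tan(phi)) / (W * math.sin(psi))
    fs_values.append(fs)

pf = sum(fs < 1.0 for fs in fs_values) / len(fs_values)   # empirical P(FS < 1)
print(f"mean FS = {sum(fs_values) / len(fs_values):.2f}, P_f = {pf:.3f}")
```

With a re-meshed numerical model in place of the one-line formula, each of the 10000 samples would require a full re-run, which is exactly why this brute-force approach fails for geometric parameters.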
In these approaches, similar to what has been done for the strength parameters, random samples are selected from the probability distributions of the geometric parameters. The main difference, however, is that a small change in the geometric parameters changes the model geometry entirely. As a result, the model must be reconstructed, re-discretized, and re-meshed. This complexity forces the user to re-run the model for each single change in the geometry, which makes continuous modelling impossible. Accordingly, a statistical technique such as Monte Carlo is no longer efficient, since it is not practical to model 1000 realizations one by one. Hammah et al. (2009) have suggested integrating the Point Estimate Method as a sampling technique for the geometric parameters. Each geometric random variable is represented by two values and the model is run for 2^n realizations (n is the number of variables). For each realization, a value of the factor of safety is obtained. By assuming a normal distribution for the output factor of safety, the probability of failure is estimated (Hammah & Yacoub, 2009). Since the variability in the geometric parameters has been partially considered, the Hammah et al. approach has improved the reliability of the output results. However, four assumptions have been made to perform this analysis:

1. Among the geometric parameters, only spacing and trace length have been considered probabilistically. The orientation of the discontinuity sets, which is one of the most influential parameters on the failure mechanism, is considered deterministically.

2. The probabilistic variables are assumed to be normally distributed. This may not always be the case, especially for the size and orientation distributions.

3. A normal distribution is assumed for the factor of safety, since there is not enough information to accurately identify the best fit to this parameter.
This assumption can be acceptable when all input parameters are defined normally. However, in reality, the distribution of the factor of safety, which directly impacts the estimated value of the probability of failure, may not be normal. Although some authors have studied such influence (Elmouttie et al. 2009, Brideau et al. 2012), their output parameters of concern were block shape and volume, and less attention was paid to the changes in the stability factor associated with the variability of the dominant parameters.

4. As the number of input parameters (n) increases, the number of simulated realizations (2^n) increases as well, to a point that may jeopardize the efficiency. Therefore, a modification of the methodology is required to guarantee the practicality of this analysis for more complicated scenarios.

2.4.3 Discrete Fracture Network (DFN) and Fracture System Modelling (FSM)

The Discrete Fracture Network (DFN) approach was initially introduced to simulate fluid flow in fractured rock masses. Although this method can be categorized as a probabilistic approach, it is worth describing separately due to some major differences. In recent years, the application of DFN in the field of slope stability has attracted considerable attention. DFN is mostly considered a methodology that can transfer the spatial variability of the discontinuities (acquired from field investigations) into numerical or conventional methods for further stability assessment. Generally, in DFN, the spatial data associated with the geometric parameters of the discontinuities (orientation, spacing, and location) are used in a three-dimensional space to create a network of fractures (Jambayev 2013). Like many methods, DFN can be applied deterministically (in which the space is divided into similar cubes) or stochastically (in which the spatial arrangement is sampled from the true variability of the geometric parameters).
In the latter, depending on the selected approach, the discontinuities are represented by planar disks of finite size. The fracture models suggested by Baecher (1977) and Dershowitz and Einstein (1988) are the most commonly used models in these analyses. Recently, many efforts have been made to use DFN or Fracture System Modelling (FSM) for probabilistic rock slope assessment (Grenon & Hadjigeorgiou 2012), with a focus on capturing the inherent variability of the geometric parameters. In these analyses, Monte-Carlo sampling is used to generate possible realizations of the slope, each represented by a discrete network of fractures. Although DFN is an efficient and reliable method for describing possible realizations of the slope, it is usually integrated with conventional limit equilibrium techniques when it comes to the stability calculation. Therefore, many factors are ignored, and this may increase the level of modelling uncertainty. If numerical models are to be used (Brideau et al. 2012), only a limited number of realizations can be generated in order to achieve reasonable efficiency. In this scenario, Monte-Carlo may not be the best sampling tool since it tends to sample close-to-the-mean values and ignore the extreme cases, which may in fact cause unpredictable instability behaviours in the slope. This becomes more of an issue when the fitted distributions for the input parameters diverge from a normal distribution and show a highly skewed trend. Moreover, the geometric parameters involved are selected and distributed randomly in each realization of the slope. This makes it very challenging to quantify the influence of the variability of the input parameters on the values of the factor of safety, as well as to define a mathematical relation between the input and output variables. Since this issue is one of the main focuses of this study, DFN is not used as the method for probabilistic slope stability analysis.
In summary, selecting an appropriate method for the slope stability assessment is very important. What makes a model more representative and the results more reliable is not limited to the way the stability assessment is performed or the factor of safety is calculated. In fact, the quality of the input parameters and the way these parameters are transferred to the numerical model is a much more significant concern prior to any stability assessment.

Chapter 3: New Methodology to Integrate Statistical and Numerical Techniques in Rock Slope Stability Assessment

3.1 Overview

The main objectives of this research are to 1) develop a methodology to efficiently incorporate the inherent variability of the discontinuity geometry into a set of related finite element models, and 2) statistically analyze the finite element results, construct a prediction function, and evaluate the probability of failure. The methodology developed to fulfil these objectives is referred to as the "new methodology" in this thesis. The new methodology (Figure 3.1) adopts a few steps from conventional slope stability assessment and introduces new statistical and numerical steps (highlighted in red) that enable the analyst to efficiently transfer the variability of the geometric parameters into the numerical models and, accordingly, evaluate the probability of failure.

Figure 3.1. Steps in a rock slope stability assessment using the new methodology

In the new methodology, statistical techniques and design of experiment methods are combined to characterise the variability of the discontinuity geometry acquired from field investigation and to generate representative realizations of the slope geometry. A key goal is to minimize the number of different finite element models that need to be built to capture the inherent variability in the discontinuity geometry.
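The two-point-estimate sampling that the methodology starts from (Section 2.4.2) can be sketched as follows: each probabilistic parameter is replaced by its two point estimates, mu − sigma and mu + sigma, giving 2^n candidate realizations. The design-of-experiment step then prunes this set; the parameter names and values below are illustrative only.

```python
from itertools import product

params = {                      # (mean, std) for each probabilistic input
    "dip1": (45.0, 5.0),        # illustrative values, not field data
    "trace1": (6.0, 1.5),
    "dip2": (70.0, 4.0),
}

names = list(params)
realizations = [dict(zip(names, combo))
                for combo in product(*[(m - s, m + s) for m, s in params.values()])]

print(len(realizations))        # 2**3 = 8 candidate finite element models
print(realizations[0])          # all parameters at mu - sigma
```

Each dictionary corresponds to one finite element model to be meshed and run; with six probabilistic parameters, as in the synthetic problem below, the full set would already contain 64 models, which motivates the pruning.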
A Shear Strength Reduction technique is used to assess the slope stability in each numerical realization of the rock mass, i.e., in each finite element model. To identify the dominant parameters controlling the overall behaviour of the slope, an Analysis of Variance process is used to analyze the results from the finite element models. One of the most important advantages of this analysis is that it provides information about the extent of each parameter's contribution to the variability of the calculated stability factor. Customarily, the influence of the non-significant parameters can be ignored, and their variability can therefore be replaced by their mean values in further analysis. Reducing the number of probabilistic input parameters results in faster and more affordable computation. Conversely, more detailed analysis can be performed on the variability of the significant parameters to better understand their influence on the slope behaviour. To accomplish this task, Central Composite Design is used to generate more realizations of the slope, based mainly on the variability of the significant parameters. The results from the predetermined and limited set of finite element models are used to build a regression model that can predict the slope stability as a function of the significant parameters. The regression model can be presented as response surfaces that are capable of predicting the slope stability for any realistic combination of input parameters generated from the variability of the significant parameters. With this ability, the need to model these realizations numerically is eliminated. Using the response surfaces, a random sampling technique such as Monte-Carlo or Latin hypercube sampling can be used to generate slope stability estimates for thousands of slope realizations. A statistical distribution is then fitted to the obtained results.
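The response-surface idea can be sketched in one dimension: fit a quadratic surface through a handful of "numerical model" results, push Monte-Carlo samples of the significant parameter through the surface instead of re-running the model, and fit a distribution to the resulting factor of safety values. The three (dip, FS) pairs below are fabricated for illustration, not model output.

```python
import random
from statistics import NormalDist, mean, stdev

runs = [(40.0, 1.65), (45.0, 1.30), (50.0, 1.02)]   # (dip, FS) from 3 "models"

def response_surface(x):
    """Quadratic Lagrange interpolant through the three model results."""
    (x0, y0), (x1, y1), (x2, y2) = runs
    return (y0 * (x - x1) * (x - x2) / ((x0 - x1) * (x0 - x2))
            + y1 * (x - x0) * (x - x2) / ((x1 - x0) * (x1 - x2))
            + y2 * (x - x0) * (x - x1) / ((x2 - x0) * (x2 - x1)))

random.seed(0)
# thousands of realizations cost only a function call each, not a re-meshed model
fs = [response_surface(random.gauss(45.0, 3.0)) for _ in range(20000)]

fitted = NormalDist(mean(fs), stdev(fs))   # distribution fitted to the FS values
print(f"P_f = {fitted.cdf(1.0):.3f}")
```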
The probability of failure, along with confidence intervals, can be extracted from this distribution. It should be noted that the response surfaces generated from the prediction model have some error due to the regression estimation, which should be quantified and considered when predicting the behaviour of a specific realization. The algorithm of the new methodology is summarized in Figure 3.2. In the next section of this chapter, the two slope stability problems investigated in this study are introduced. The remainder of the chapter focuses on the statistical and numerical tools and techniques that are used to develop the new methodology.

Figure 3.2. Algorithm developed for the new methodology

3.2 Case Studies

The slope assessment methodology is applied to two different slope configurations. The first configuration is a synthetic problem involving a simple slope geometry (Hammah et al. 2009). Two discontinuity sets are included in the geometry of the slope (Figure 3.3). The strength parameters are considered deterministically in this analysis, since the main focus is on capturing the inherent variability of the geometric parameters. The dip, dip direction, and trace length of the two discontinuity sets (six parameters) are considered probabilistically, while spacing is fixed at its mean value. To generate distributions for the probabilistic parameters, 25 random points are generated as synthetic measurements for the dip, dip direction, and trace length of each discontinuity set. The trace lengths are generated with a lognormal distribution, as suggested in the literature (Barton 1977, Einstein et al. 1980). For the discontinuity orientation parameters, Fisher and uniform distributions are recommended (Dershowitz 1979, Hack 1998). However, due to the spherical nature of the Fisher distribution, it is not applicable here, as will be discussed later.
The uniform distribution, on the other hand, is not capable of sufficiently representing the true variability of the orientation parameters. Therefore, the random points describing the orientation parameters are generated according to the Weibull distribution. One important advantage of the Weibull distribution is its ability to capture rare events (worst-case scenarios) as well as frequent incidents (Weibull 1939). The distributions for the six probabilistically defined parameters are used in the process of data sampling and generating representative realizations. All deterministic parameters involved in this analysis, including the strength parameters, spacing, and in-situ stresses, are selected according to expert judgment or adopted from Hammah's model (Hammah et al. 2009). More detailed discussion of the synthetic problem and the data generation process is provided in Chapter 4.

Figure 3.3. Geometry and two discontinuity sets in the synthetic slope

After successfully evaluating the applicability of the integrated statistical and numerical technique on the synthetic slope stability problem, the methodology is applied to a second slope configuration. This configuration consists of a rock ridge pierced by an old railway tunnel. The Little Tunnel is located on the Kettle Valley Railway trail system (Figure 3.4) near Naramata, BC.

Figure 3.4. Aerial image of the ridge and the Little Tunnel (Penticton museum)

To collect representative data on the geometric parameters of the ridge and the tunnel, digital photogrammetry is used as a remote sensing technique. This method improves the accuracy and representativeness of the acquired field data. Using goodness-of-fit tests along with the measurements acquired from digital photogrammetry for each geometric parameter, appropriate statistical distributions are fitted to the measurements.
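The synthetic data generation described above can be sketched with the standard library: 25 lognormal trace-length measurements and 25 Weibull-distributed dip measurements per discontinuity set. The shape and scale values below are illustrative placeholders, not the parameters actually fitted in Chapter 4.

```python
import math
import random

random.seed(7)

# 25 synthetic trace lengths (m): lognormal with median ~5 m (illustrative)
trace_lengths = [random.lognormvariate(math.log(5.0), 0.4) for _ in range(25)]

# 25 synthetic dips (deg): Weibull(scale=46, shape=12) (illustrative);
# the heavier lower tail admits rare, worst-case orientations
dips = [random.weibullvariate(46.0, 12.0) for _ in range(25)]

print(f"trace length: min {min(trace_lengths):.1f} m, max {max(trace_lengths):.1f} m")
print(f"dip: mean {sum(dips) / len(dips):.1f} deg")
```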
Some laboratory tests are also conducted to gather information regarding the strength parameters of the rock. More detailed information about the data collection and further analyses is provided in Chapter 5. The next section of this chapter introduces digital photogrammetry as the tool that was used to acquire field data for the case of the Little Tunnel. Sections 3.3 to 3.6 discuss the techniques and theories that are used in both problems.

3.3 Data Collection

3.3.1 Digital photogrammetry

One of the main sources of uncertainty in a rock slope stability analysis is the lack of knowledge about the input parameters. The geometric parameters and the strength parameters are typically the most important. Sufficient effort should be made to accurately capture the inherent variability of these parameters and to decrease the level of measurement error. This is an important initial step in any rock slope stability analysis. Even if the most comprehensive methodology is used to analyse the stability of a slope, representative data are required to ensure the reliability of the assessment. Depending on the type of problem and the parameters of concern, field investigations or laboratory tests can be used to obtain the necessary information. Since in this research the geometric parameters such as orientation, size, and spacing of the discontinuities are of primary concern, it is essential to choose a site investigation technique that provides sufficient information about these parameters. As discussed earlier, the data associated with the synthetic problem are randomly generated. However, to acquire representative geometric data for the Little Tunnel problem, digital photogrammetry is used as a remote sensing technique. The main steps to construct Digital Terrain Models for this location follow the theory discussed in Section 2.2.3. More details about the field work, along with the constructed DTMs and digital mappings, will be presented in Chapter 5.
3.4 Generating an Efficient Number of Realizations

As mentioned earlier, to increase the reliability of a rock slope stability assessment, it is very important to account in the numerical models for the inherent variability of the input parameters captured from field investigation (Christian 2004). If the variability in the strength parameters is important, random sampling techniques such as Monte-Carlo can be used to simulate models with different strengths, each representing one possible scenario. Many authors have implemented non-deterministic strength parameters in rock slope stability problems (Wolff 1996, Duncan 2000, Griffiths & Fenton 2004, Griffiths et al. 2009, Duzgan & Bhasin 2009). Moreover, several available software packages enable users to input the variability of the strength parameters as either distributions or statistical moments (Rocscience 2013). The geometry of a numerical model is independent of the strength parameters of the rock or the discontinuities. Therefore, different realizations of the slope can be developed using different values of the strength parameters, while maintaining the original discretization and meshing patterns. For this reason, hundreds of different realizations of the slope can be numerically modeled at an affordable computational cost. The time and cost efficiency of a probabilistic rock stability analysis is far more challenging when the variability in the geometric parameters is included in the analysis. In these types of analyses, a small change in the discontinuity geometry parameters triggers the need to create an entirely new model with new discretization, and the numerical calculations must start again from the initial stage. As a result, continuous probabilistic analyses such as those applied for non-deterministic strength parameters are impractical.
If hundreds or thousands of geometric realizations obtained by Monte-Carlo sampling had to be numerically modeled, the analysis of a slope with variable discontinuity geometry would be beyond the bounds of practicality. On the other hand, using Monte-Carlo sampling to characterize only a few realizations will result in modelling that does not sufficiently capture and represent the geometric variability. Therefore, to properly incorporate the inherent variability of the geometric parameters in the numerical models, a sampling technique is needed that is capable of generating a limited number of realizations that truly represent the variability in the non-deterministic geometric parameters. The next section discusses the Point Estimate Method as a sampling tool to select representative points from the distributions of the geometric parameters. Factorial design is then introduced as a technique to efficiently combine the obtained point estimates and generate a computationally efficient number of slope realizations.

3.4.1 Point Estimate Method

Different statistical techniques such as Monte-Carlo sampling, the First Order Second Moment method (FOSM), the Hasofer-Lind approach (FORM), and the Point Estimate Method (PEM) have been practiced in the field of geotechnical engineering (Baecher & Christian 2003, Peschl & Schweiger 2003, Schweiger & Thurner 2007). Some of these have also been incorporated in Finite Element and Finite Difference computer codes (Rocscience 2013). FOSM and FORM suffer from mathematical simplifications and do not necessarily increase the efficiency of the probabilistic analysis when dealing with more complicated geometries. The Point Estimate Method is frequently used in the field of geotechnical engineering due to its simplicity, accuracy, and adaptability to different problems (Harr 1987, Wolff 1996, Duncan 1999, Miller et al. 2004, Chiwaye & Stacey 2010).
In the original PEM proposed by Rosenblueth (1975), each variable is replaced by two point estimates, defined by adding/subtracting one standard deviation to/from the mean value (Eq. 3.1). This approach allows a probabilistic analysis to be performed while avoiding the complexity of more rigorous probabilistic methods. If Y is a function of the variables x1, x2, …, xn, the first few statistical moments of Y can be approximated using the point estimates of the xi variables. As a result, if Y is a function of n variables, 2^n calculations are necessary to estimate the moments of Y. PEM is useful for estimating the statistical moments of Y; however, it does not provide enough information to fit the best probabilistic distribution to Y. Rosenblueth's original theory of point estimates is applicable to three different scenarios (Baecher & Christian 2003):

1. If Y is a function of only one variable (x) with known values of mean, variance, and skewness, the point estimates representing x are calculated using Equation 3.2.

x± = μx ± σx    (3.1)

x± = μx + σx [ νx/2 ± √(1 + (νx/2)²) ]    (3.2)

where μx, σx, and νx are the mean, standard deviation, and skewness of x.

2. If Y is a function of only one variable (x) which has a symmetrical and Gaussian-like distribution, then three point estimates are obtained from Equation 3.3.

3. If Y is a function of n variables (x1, x2, …, xn) which have symmetrical distributions with zero skewness and may or may not be correlated, the point estimates representing the variables xi are calculated using Equation 3.4. This scenario is a more general form of the first case with skewness ignored.

Recently, Hammah et al. (2009) attempted to use Rosenblueth's third scenario to integrate the inherent variability of the geometric parameters into numerical models. However, several simplifying assumptions were involved in their analysis. In most of the cases, a maximum of two input variables were included in the analysis. Moreover, it was assumed that both the input and the output parameters are normally distributed.
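Rosenblueth's first scenario can be sketched numerically. The helper below uses the common statement of the two-point estimates for one skewed variable, x± = μ + σ(ν/2 ± √(1 + (ν/2)²)), with weights chosen so that the mean and variance are reproduced exactly; the input moments are illustrative assumptions.

```python
import math

def rosenblueth_points(mean, std, skew):
    """Two point estimates for one skewed variable (after Rosenblueth 1975).

    x± = mean + std * (skew/2 ± sqrt(1 + (skew/2)**2)); the weights
    p± sum to one and reproduce the first two moments of x.
    """
    half_nu = skew / 2.0
    root = math.sqrt(1.0 + half_nu ** 2)
    x_plus = mean + std * (half_nu + root)
    x_minus = mean + std * (half_nu - root)
    p_plus = 0.5 * (1.0 - half_nu / root)
    p_minus = 1.0 - p_plus
    return (x_minus, p_minus), (x_plus, p_plus)

# Zero skewness collapses to mean ± one standard deviation (Eq. 3.1).
(lo, p_lo), (hi, p_hi) = rosenblueth_points(mean=40.0, std=5.0, skew=0.0)
print(lo, hi)  # 35.0 45.0
```

For a skewed variable the two points shift toward the long tail while the weighted mean and variance remain those of the input distribution.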
In most practical problems, however, more than one discontinuity set, each defined by at least two or three geometric parameters, dominates the stability behaviour of the slope. Moreover, neither the significant input parameters nor the output parameters are necessarily normally distributed; spacing and trace length, for example, have been shown to follow exponential and lognormal distributions (Call et al. 1976, Robertson 1970, Barton 1977, Einstein et al. 1980, Dershowitz 1984). As a result, the original PEM may not be sufficient for more complicated slope stability analyses. In this research, Hong's (1996, 1998) modification to PEM is used to develop an efficient and representative number of slope realizations. In this theory, Y is a function of multiple, skewed, and uncorrelated variables. The point estimates of each parameter are functions of the mean, variance, skewness, and number of input variables and are calculated using Equation 3.5. Hong's modification is believed to increase the flexibility of PEM for a wider range of probabilistic distributions of the input parameters (Baecher & Christian 2003).

x0 = μx,  x± = μx ± √3 σx    (3.3)

xi± = μi ± σi    (3.4)

It should be noted that in this research, PEM is used only as a sampling technique to substitute the input distributions by their point estimate values. Therefore, PEM is not used for prediction purposes as has been practiced in the literature. PEM is only capable of estimating the statistical moments of the output parameters, which are less of a concern in this research, as will be discussed further in this chapter. Despite all the important capabilities of PEM, it suffers from a shortcoming known as the "curse of dimensionality", which negatively affects the accuracy of the method when the number of skewed input parameters exceeds a certain value (Baecher & Christian 2003). As previously mentioned, in Hong's theory the point estimates are a function of the skewness and the number of variables.
As the values of these two parameters increase, the calculated point estimates disperse further from the mean values towards the tails of the distribution. If the two point estimates move close to the tails of their parameter's distribution, the values obtained represent infrequent occurrences. The severity of this problem increases when the number of variables exceeds six or the input variables are highly skewed. One alternative to compensate for the curse of dimensionality is to reduce the number of input variables to only those that significantly control the behaviour of the output parameters. Therefore, a preliminary sensitivity analysis is recommended to identify the significant parameters beforehand. The influence of the curse of dimensionality is especially noticeable when PEM is used to predict the statistical moments of the output parameters. Since in this research other techniques are used for prediction purposes, the curse of dimensionality should have a negligible effect. Nevertheless, this shortcoming is considered in the analysis and its possible effects on the output results are quantified. Further discussions on this topic are presented in Chapters 4 and 5 of this thesis.

xi± = μi + σi [ νi/2 ± √(n + (νi/2)²) ]    (3.5)

Having extracted the point estimates (Eq. 3.5) for each input parameter distribution, factorial design is used to efficiently generate realizations of the slope by considering all the possible combinations of the point estimates. This design, along with the Analysis of Variance technique, is further used as a screening tool to identify the parameters with a significant influence on the behaviour of the slope. Section 3.4.2 presents the theory and background of this technique and its application in the present research. The outcomes of this analysis are not intended to be used for constructing response surfaces or for prediction purposes. Hence, substituting the variability of each parameter by only two point estimates is sufficient at this stage.
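These two steps, Hong's point estimates followed by their full factorial combination, can be sketched as below. The formula used is the common statement of Hong's (1998) 2n-point scheme, xi± = μi + σi(νi/2 ± √(n + (νi/2)²)); the parameter moments are hypothetical, not the thesis values.

```python
import itertools
import math

def hong_points(mean, std, skew, n_vars):
    """Hong's (1998) two point estimates for one of n skewed variables:
    x± = mean + std * (skew/2 ± sqrt(n + (skew/2)**2))."""
    half_nu = skew / 2.0
    root = math.sqrt(n_vars + half_nu ** 2)
    return mean + std * (half_nu - root), mean + std * (half_nu + root)

# Hypothetical moments (mean, std, skewness) for three geometric parameters.
params = {"dip": (40.0, 5.0, 0.3),
          "dip_direction": (120.0, 8.0, -0.2),
          "trace_length": (4.0, 1.5, 1.1)}

levels = {name: hong_points(m, s, g, len(params))
          for name, (m, s, g) in params.items()}

# Full factorial combination of the point estimates: 2^3 = 8 realizations,
# each a candidate slope geometry to be meshed and modelled numerically.
realizations = list(itertools.product(*levels.values()))
print(len(realizations))  # 8
```

Note how the √n term spreads the estimates further into the tails as the number of variables grows, which is the "curse of dimensionality" discussed above.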
However, in the subsequent analysis, different techniques that consider more point estimates for each variable are applied. This will be discussed in Section 3.5.1.

3.4.2 Factorial design and the Analysis of Variance

3.4.2.1 Factorial design

In general, factorial designs are commonly used to evaluate the effect of one or more input parameters (factors) and their interactions on the overall behaviour of the outputs (responses). In these types of designs, each factor has different levels (values), and all the possible combinations of these levels are investigated. Each run or experiment that represents one possible combination of the factors' levels is called a treatment. In this research, the factors are the geometric parameters and their levels are obtained from the calculated point estimates. The treatments, which are the combinations of the point estimates, represent the geometric realizations of the discontinuities in the slope. When factorial design is used for complicated experiments, it is very common to repeat each treatment more than once to account for possible human or experimental errors. The response obtained for each replication is recorded and averaged to represent the response value for that treatment. Since in this research each treatment (realization) is numerically modeled, human or environmental errors are absent and only one replication of each treatment is considered. To evaluate the influence of the input parameters, two types of effects are estimated in a factorial design: main effects, which describe the influence of the individual factors on the behaviour of the responses, and interaction effects, which describe the influence on the responses of changing one factor at a specific level of the other factors. It can be shown that one-half of the main effects and interaction effects are equal to the estimated least-squares coefficients in a linear regression model.
Therefore, the outputs of a factorial design are not only used for sensitivity analysis but can also form a linear prediction model (Montgomery, 2001). However, if the behaviour of the responses is better characterized as nonlinear, as is the case in this research, a nonlinear prediction model is required. For this class of problems, techniques such as the Central Composite Design (CCD) can be used, as discussed in the next section. If all the factors involved in a factorial design have only two levels, 2^k treatment combinations are investigated, in which k is the number of factors. This type of design can be considered a subcategory of factorial design and is known as a 2^k factorial design. In such a design, the factors' levels can be defined quantitatively or qualitatively, such as "high" and "low". It is very common in a 2^k factorial design to substitute the natural variables (levels of the factors) with coded variables, in which the low and high levels of each factor are represented by "−1" and "+1". The relation between the natural and coded variables is described by Equation 3.6:

xcoded = [ xnatural − (xhigh + xlow)/2 ] / [ (xhigh − xlow)/2 ]    (3.6)

This type of design is suitable for this research since initially all factors have two levels defined by their distribution's two point estimates. The high point estimate and the low point estimate are taken as the natural variables and are converted to coded variables using Equation 3.6. A basic design table for a 2^3 factorial design with coded variables is presented in Table 3.1. Column "I" is called the identity column and its values for all treatments are equal to +1. Treatment combination (1) refers to a treatment in which all three factors (A, B, and C) are at their low levels. The lower-case letter "a" represents a treatment in which factor A is at its high level while factors B and C are at their low levels.
Similarly, "ab" represents the treatment in which factors A and B are at their high levels while factor C is at its low level. The rest of the notation is interpreted with the same logic. One of the most important characteristics of 2^k factorial designs is their orthogonality: the numbers of "+" and "−" signs are equal in every column except the identity column, the sum of each of those columns is zero, and multiplying any column by the identity column leaves the column unchanged.

Table 3.1. Basic design table for a 2^3 factorial design (Montgomery, 2001)

Treatments   I   A   B   AB   C   AC   BC   ABC
(1)          +   −   −   +    −   +    +    −
a            +   +   −   −    −   −    +    +
b            +   −   +   −    −   +    −    +
ab           +   +   +   +    −   −    −    −
c            +   −   −   +    +   −    −    +
ac           +   +   −   −    +   +    −    −
bc           +   −   +   −    +   −    +    −
abc          +   +   +   +    +   +    +    +

As presented in Table 3.1, a 2^3 factorial design consists of three main effects, three two-factor interactions, and one three-factor interaction. In general, a 2^k factorial design consists of k main effects, (k choose 2) two-factor interactions, (k choose 3) three-factor interactions, …, and one k-factor interaction. As the number of factors increases, the number of treatment combinations that must be either experimentally tested or numerically modeled increases as well. Moreover, as the number of factors increases, higher-order interactions become involved in the sensitivity analysis. In many practical cases in slope stability analysis, the negligibility of the higher-order interactions is known in advance, or the main concern of the analyst is to estimate only the main effects and the lower-order interactions. These parameters can be estimated by studying a fraction of the treatment combinations suggested in a full 2^k factorial design. This way, the number of treatments decreases noticeably and the design becomes more affordable when dealing with a larger number of factors.
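The 2^3 design table and its orthogonality property can be reproduced programmatically; the sketch below builds every effect column as a product of factor sign columns in standard (Yates) order and verifies the properties stated above.

```python
import itertools
import numpy as np

factors = ["A", "B", "C"]
# Rows in standard (Yates) order: (1), a, b, ab, c, ac, bc, abc.
runs = np.array(list(itertools.product([-1, 1], repeat=3)))[:, ::-1]

# Build every effect column (A, B, AB, C, AC, BC, ABC) as sign products.
columns = {"I": np.ones(8, dtype=int)}
for r in range(1, 4):
    for combo in itertools.combinations(range(3), r):
        name = "".join(factors[i] for i in combo)
        columns[name] = np.prod(runs[:, combo], axis=1)

# Orthogonality: every non-identity column sums to zero, and any two
# distinct effect columns have a zero dot product.
assert all(columns[name].sum() == 0 for name in columns if name != "I")
assert columns["A"] @ columns["BC"] == 0
```

The resulting sign pattern matches Table 3.1 row for row.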
In a fractional factorial design, the 2^k treatment combinations are replaced by 2^(k−p) combinations; that is, the number of treatments is reduced by a factor of 2^p relative to the full factorial design. In the case where p = 1, only half of the initial combinations are studied and the design is known as a half-factorial design. Due to its efficient number of runs, this design is widely used for screening purposes. The half-factorial design reduces the number of realizations that must be numerically modelled while still providing acceptable estimates of the factor effects. Hence, this design is used in this research to identify the significant geometric parameters affecting the variability of the slope stability. Although only a half or a smaller fraction of the treatments is investigated in a fractional factorial design, all effect estimates, including the higher-order interactions, are calculated. Using a "defining relation", in which a column in the design (the generator) is set equal to the identity column "I", the aliased effect estimates are identified. In a general half-factorial design with k factors, a basic full factorial design table with k−1 factors (ignoring the last factor) is constructed. The kth factor is then substituted by the highest-order interaction ABC…(k−1) and is added to the design table as the last column. The generator is selected as ABCD…K, which forms the defining relation I = ABCD…K. Multiplying the factorial effects included in the design table by the generator identifies the aliases (Box et al. 1978, Montgomery 2001). Depending on the selection of the generator and the identified aliases, different resolutions are assigned to a half-factorial design. In resolution III designs, no main effect is aliased with any other main effect, but main effects are aliased with two-factor interactions and the two-factor interactions are aliased with each other.
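The generator construction described above can be sketched for a 2^(4−1) half-fraction: start from the full 2^3 table in A, B, C and define the fourth factor through the highest-order interaction.

```python
import itertools
import numpy as np

# 2^(4-1) half-fraction: start from a full 2^3 design in A, B, C and set
# D = ABC (the generator), giving the defining relation I = ABCD.
base = np.array(list(itertools.product([-1, 1], repeat=3)))[:, ::-1]
d_col = base.prod(axis=1)                 # D = ABC
design = np.column_stack([base, d_col])   # 8 runs instead of 16

# With I = ABCD, each effect is aliased with its product with ABCD,
# e.g. A with BCD and AB with CD; main effects are clear of each other
# and of two-factor interactions (resolution IV).
ab = design[:, 0] * design[:, 1]
cd = design[:, 2] * design[:, 3]
print((ab == cd).all())  # True: the AB and CD columns are identical
```

The identical AB and CD columns make the aliasing concrete: the design cannot distinguish those two interaction effects.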
In a resolution IV factorial design, the main effects are not aliased with each other or with any two-factor interactions; however, the two-factor interactions are aliased with each other. Other resolutions can be interpreted following the same rule. Higher resolution designs imply fewer assumptions about the negligibility of higher-order interactions (Montgomery 2001).

3.4.2.2 Analysis of Variance

Section 3.4.2.1 discussed a technique to generate an efficient number of treatment combinations (realizations) for studying the influence of the factors on the behaviour of the responses. After numerical or physical experiments on each treatment combination in a factorial design, response values are obtained and added to the design table. At this stage, the Analysis of Variance (ANOVA) technique is used to estimate the contribution of each factor and its interactions to the variability of the obtained responses. In general, the main concern in such an analysis is to study the deviation of the response values (corresponding to each factor level/treatment) from the overall mean of the responses. The results of the experiments or simulations in the factorial design are best described by a model known as the "effects model". In the simplest scenario, with two factors, the model is (Eq. 3.7):

yijk = μ + τi + βj + (τβ)ij + ϵijk    (3.7)

in which μ is the overall mean, τi is the effect of the ith level of factor A, βj is the effect of the jth level of factor B, (τβ)ij is the effect of the interaction between factors A and B, ϵijk is the random error (Box et al. 1978), and the subscript k indexes the replications. The effects model can be extended to a design with k factors. Based on the described model, statistical hypotheses that describe the problem situation are defined. The number of statistical hypotheses to be studied is a function of the total number of main and interaction effects.
Each set of statistical hypotheses consists of a null and an alternative hypothesis. The null hypothesis assumes that the treatment effect associated with a factor or interaction is equal to zero at all levels. This implies that the factor, or its interaction with other factors, does not contribute to the variability of the response. The alternative hypothesis, on the other hand, assumes that the treatment effect at at least one level of a factor or interaction is not equal to zero. If the null hypothesis is rejected at the chosen significance level for any of the factors or interactions, the contribution of that factor or interaction effect to the behaviour of the response is significant. Equation 3.8 states the statistical hypotheses associated with the effects model in Equation 3.7 (Montgomery, 2001):

Factor A:        H0: τ1 = τ2 = … = 0        H1: at least one τi ≠ 0
Factor B:        H0: β1 = β2 = … = 0        H1: at least one βj ≠ 0    (3.8)
Interaction AB:  H0: (τβ)ij = 0 for all i, j    H1: at least one (τβ)ij ≠ 0

To test the hypotheses, Analysis of Variance calculations are required. Table 3.2 illustrates the Analysis of Variance procedure for a 2^k factorial design (Montgomery 2001). If the F0 computed for any of the sources of variance exceeds the F-value obtained from reference tables (Box 1953) at a selected significance level (α), the null hypothesis corresponding to that source of variation is rejected and the effect is categorized as influential on the behaviour of the response; otherwise, the parameter can be ignored in further analysis. Equation 3.9 gives the effect estimates and sums of squares for a 2^k design, in which the contrast for each effect is formed from the treatment totals a, b, …, abc…k defined in Table 3.1, k is the number of factors, and n is the number of replications:

Effect = Contrast / (n 2^(k−1)),    SS = Contrast² / (n 2^k)    (3.9)

The effect estimates and the contribution percentages give a general idea of the significance of each factor or interaction effect. If, for any reason, the F-test is not applicable, these two quantities can be used to interpret the factors' influences.
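The contrast-based effect and sum-of-squares calculations of Eq. 3.9 can be sketched for an unreplicated 2^3 design; the factor-of-safety responses below are purely illustrative values, not thesis results.

```python
import itertools
import numpy as np

# Unreplicated 2^3 example with hypothetical factor-of-safety responses
# in Yates order (1), a, b, ab, c, ac, bc, abc (illustrative values only).
y = np.array([1.32, 1.18, 1.40, 1.25, 1.10, 0.95, 1.21, 1.02])
signs = np.array(list(itertools.product([-1, 1], repeat=3)))[:, ::-1]

k, n = 3, 1  # three factors, one replication per treatment
for idx, name in [((0,), "A"), ((1,), "B"), ((2,), "C"), ((0, 1), "AB")]:
    contrast = (np.prod(signs[:, idx], axis=1) * y).sum()
    effect = contrast / (n * 2 ** (k - 1))   # average effect estimate
    ss = contrast ** 2 / (n * 2 ** k)        # sum of squares
    print(f"{name}: effect={effect:+.4f}, SS={ss:.5f}")
```

With n = 1 there is no pure-error term, so in practice (as in this thesis) the negligible higher-order interactions are pooled to estimate the error.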
The effect estimates cannot be used to formally test the statistical hypotheses; however, they allow a comparison of the factors' relative dominance. The effect estimates are also related to the linear regression coefficients and, as a result, can be used to form a regression model. According to Table 3.2, in the case of an unreplicated design in which n = 1, the degrees of freedom for the error are equal to zero and the mean sum of squares of the error cannot be defined. In this type of problem, the higher-order interactions with negligible contribution percentages are usually pooled as the model error, and the rest of the calculations remain the same (Box et al. 1978). Since an unreplicated design is used in the present study, this modification is applied in the calculations.

Table 3.2. Analysis of Variance for a full factorial design (Montgomery, 2001)

Source of variation      Sum of squares   Degrees of freedom   Mean sum of squares              F0
k main effects
  A                      SSA              1                    MSSA = SSA/1                     MSSA/MSSError
  B                      SSB              1                    MSSB = SSB/1                     MSSB/MSSError
  …                      …                …                    …                                …
  K                      SSK              1                    MSSK = SSK/1                     MSSK/MSSError
(k choose 2) two-factor interactions
  AB                     SSAB             1                    MSSAB = SSAB/1                   MSSAB/MSSError
  AC                     SSAC             1                    MSSAC = SSAC/1                   MSSAC/MSSError
  …                      …                …                    …                                …
  JK                     SSJK             1                    MSSJK = SSJK/1                   MSSJK/MSSError
…
k-factor interaction
  ABC…K                  SSABC…K          1                    MSSABC…K = SSABC…K/1             MSSABC…K/MSSError
Error                    SSError          2^k(n−1)             MSSError = SSError/[2^k(n−1)]
Total                    SSTotal          n·2^k − 1

3.4.2.3 Model adequacy test

To ensure the validity of the Analysis of Variance, it should be shown that the specified effects model, i.e. Equation 3.7, adequately describes the problem. In this analysis, a normal distribution with a mean of zero and an unknown variance is assumed for the error ϵijk. This assumption is often violated in practical problems; however, by investigating the residuals, the model adequacy can be evaluated.
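When the residual checks fail, a variance-stabilizing transformation such as Box-Cox can be applied before repeating the ANOVA; a minimal sketch with synthetic right-skewed responses, assuming scipy is available:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# Synthetic right-skewed responses that would violate the ANOVA
# normality assumption (illustrative stand-in data only).
y = rng.lognormal(mean=0.2, sigma=0.6, size=40)

# Box-Cox estimates the power lambda that best normalizes the data;
# the ANOVA would then be repeated on the transformed responses y_t.
y_t, lam = stats.boxcox(y)
print(stats.skew(y), stats.skew(y_t))  # skewness shrinks markedly
```

For lognormal-like data the estimated lambda tends toward zero, i.e. the transformation approaches a simple logarithm.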
The residuals are defined as the difference between the observed response for a treatment and the value predicted for that treatment by the fitted regression model. If the model is adequate, the residuals should be normally distributed, independent, and structureless (Montgomery 2001). Each of these three conditions must be investigated and, if not satisfied, remedial steps must be taken. To study the normality of the residuals, the residuals are plotted on a normal probability plot. If the plot resembles a straight line, the normality assumption is met; small departures from the straight line usually occur in practical situations and can be ignored. To check the independence assumption, the residuals are plotted against the treatment run sequence. If no specific pattern is observed and the plot is sufficiently scattered, the errors can be considered independent. To ensure that the errors are structureless, the residuals are plotted against the predicted (fitted) values. If no obvious pattern is observed, the condition is met. As mentioned earlier, the model adequacy test is very important: if any of the conditions are violated, the ANOVA results cannot be trusted. One common method to correct a model adequacy problem is a variance-stabilizing transformation. In this technique, a function is selected according to which the treatment simulation results are transformed, and the ANOVA is repeated on the transformed responses. In this research, as will be discussed in Chapter 4, the normality assumption is violated; therefore, a Box-Cox transformation is applied in the analysis.

3.4.2.4 Design of Experiments for computer simulations

The different methods of Design of Experiments were initially developed for physical experiments.
There are several essential concepts in these designs, such as replication, randomness, and blocking, which are characteristics of physical experiments (Johnson et al. 2008). In computer simulations, however, the noise variables (blocking) do not exist or can be controlled to some extent, and randomness might not be achieved as fully as in actual physical experiments. For these reasons, there has been debate about the validity of extending such designs to computer simulations (Sacks et al. 1989, Johnson et al. 2008, Myers 1991, Kleijnen 2005). Kleijnen (2005) states that the design of numerical simulation experiments is similar to the design of physical experiments: the factors are the inputs, the levels are the values, and the treatments are the numerical runs. Provided that the limitations of the design of experiments, such as the number of factors (a maximum of 10) and the number of levels (a maximum of 5), are not violated, such designs can be used as a screening tool for computer simulations (Kleijnen 2005). Although several designs, such as space-filling designs, have been developed specifically for computer simulations, the traditional Design of Experiments is still practiced for less complicated simulation problems. It is recommended, however, to apply space-filling methods when dealing with deterministic simulations, since replication and randomness are not involved in the analysis; for stochastic problems, some level of randomness exists and the Design of Experiments approach is applicable (Johnson et al. 2008). If the randomness of the residual errors is questionable, or the model adequacy tests are rejected despite any transformation technique, the F-test results might not be trustworthy. However, the estimated effects of the factors and their interactions (Eq. 3.9) are calculated regardless of the randomness assumption; hence, they remain valid for sensitivity analysis or regression purposes.
Since this research involves a stochastic problem with some level of randomness, and two levels are considered for all factors, the assumptions of a traditional Design of Experiments (factorial design) are not violated. Therefore, this design can be used as a screening tool.

3.5 Generating Prediction Model

One of the main purposes of the overall methodology presented in this thesis is to obtain a prediction function that describes the stability behaviour of the studied slope as a function of the significant geometric parameters. With this function, the need to perform further numerical simulations is eliminated, and the stability factor can be estimated directly for any arbitrary realization. To construct the prediction model, a rigorous design is required to generate additional geometric realizations of the slope. These realizations must be generated from geometric combinations obtained from new levels assigned to the significant parameters. Since it is important to capture the inherent variability of these parameters in the analysis, more than two levels will be selected for each factor. Hence, the design must handle more than two levels for the dominant factors, as described in the next section. In addition, the outcome of such a design should provide enough information to fit first- or second-order response surfaces to the design variables. The response surfaces are mathematical relationships between the independent and the dependent variables. A first-order response surface is described by a linear relationship between the independent and the dependent variables (Eq. 3.10), while a second-order response surface includes quadratic terms (Eq. 3.11). In Equations 3.10 and 3.11, Xi, βij, and η represent the independent variables, the regression coefficients, and the dependent variable, respectively.
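A second-order surface of the kind in Eq. 3.11 can be fitted by ordinary least squares; the sketch below uses synthetic data generated from an assumed quadratic relationship between two coded variables and a factor of safety.

```python
import numpy as np

rng = np.random.default_rng(1)
# Two coded design variables and a hypothetical factor-of-safety response.
x1 = rng.uniform(-1, 1, 30)
x2 = rng.uniform(-1, 1, 30)
y = (1.3 - 0.2 * x1 + 0.1 * x2 + 0.15 * x1 ** 2
     - 0.1 * x1 * x2 + rng.normal(0, 0.01, 30))

# Second-order model: intercept, linear, quadratic, and interaction terms.
X = np.column_stack([np.ones_like(x1), x1, x2, x1 ** 2, x2 ** 2, x1 * x2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# The regression stays *linear* in the coefficients beta even though the
# fitted surface is quadratic in x1 and x2.
print(np.round(beta, 3))
```

This illustrates the point made above: however curved the surface, the fit is still a linear regression in the coefficients.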
Response surfaces corresponding to linear fitted models have no curvature, while adding quadratic or interaction terms to the model creates noticeable curvature in the shape of the response surface. It should be noted that, regardless of the type of relation defined between the dependent and the independent variables in such models, the regression is always categorized as linear regression, since the model remains a linear function of the regression coefficients (Montgomery 2001, Myers & Montgomery 2002).

η = β0 + Σi βi Xi    (3.10)

η = β0 + Σi βi Xi + Σi βii Xi² + ΣΣi<j βij Xi Xj    (3.11)

Using the outputs of a factorial design, the regression coefficients of a first-order fitted model can be estimated. However, in many problems, such as the one studied in this research, a first-order model cannot sufficiently describe the behaviour of the output as a function of the design variables. In these cases, the R-squared and adjusted R-squared values of the fitted first-order models may be in an acceptable range, yet the anticipated level of predictability, often judged by the R-squared of prediction and the PRESS statistic, is not achieved. This will be demonstrated in Chapter 4. As a result, a design that provides enough information to fit a second-order model is necessary. In the following sections, the Central Composite Design (CCD) and the procedure for fitting quadratic response surfaces are discussed.

3.5.1 Central Composite Design (CCD)

The Central Composite Design (CCD) and the Box-Behnken Design (BBD) are two commonly used designs for fitting second-order models. The BBD (Box & Behnken 1960) is a three-level design and belongs to the class of balanced incomplete block designs. The CCD (Box & Wilson 1951) is a five-level design in its conventional form and is the most popular design in the class of second-order models. Both the BBD and the CCD are efficient designs, and the number of their sequential runs is comparable when fewer than four factors are involved in the analysis.
In this research, CCD is used to generate the realizations necessary to fit a second-order prediction model. In a conventional CCD, each factor is defined by five levels, two of which are identical to the point estimates used in the factorial design. The third level is the variable mean value. The other two levels are axial points, obtained by adding/subtracting an axial distance (α) to/from the mean value of each variable. The design points in a CCD therefore consist of the 2^k factorial-point combinations, 2k axial points, and nc centre points. Here, k is the number of independent variables and nc is the number of replications of the combination consisting of the variable means. Figure 3.5 presents a schematic view of the design points in a two-variable CCD.

Figure 3.5. Design points in a two-variable CCD (re-produced from Statistica glossary 2013)

The combinations of the design points generate realizations that should be numerically modelled. The factorial points are mostly involved in the estimation of the linear and interaction terms of a second-order fit, while the axial points and the centre points are used to estimate the quadratic terms and the curvature of the response surfaces, respectively (Myers & Montgomery 2002). The combinations of the axial points in a CCD, in coded variables, are presented in Table 3.3, in which α is the axial distance. The selection of α and nc has a noticeable influence on the rotatability of a CCD (Myers & Montgomery 2002). A rotatable design holds a similar scaled prediction variance, N·Var[ŷ(x)]/σ², for any pair of design points that are equidistant from the design centre. This characteristic ensures a similar prediction quality for the second-order model within the region of interest.
To achieve this goal, it is recommended (Khuri & Cornell 1987) to use Equation 3.12 to obtain the number of centre runs and to select α so as to satisfy Equation 3.13 (Box & Draper 1987, Mason et al. 1989).

nc ≈ 4√(F + 1) − 2k    3.12

in which F = 2^k is the number of factorial points.

Table 3.3. Axial points in a CCD

X1    X2    …    Xk
−α     0    …     0
 α     0    …     0
 0    −α    …     0
 0     α    …     0
 0     0    …    −α
 0     0    …     α

In practical situations, it is not always desirable to have a spherical (rotatable) design. There are many cases in which the region of interest is better described by a cuboid rather than a sphere. In these scenarios, a face-centred CCD is recommended. In this design, the axial distance α does not follow Equation 3.13; regardless of the number of factorial points, it is set equal to one. Therefore, the axial points are located at the centres of the cuboid faces. Although the rotatability condition is not satisfied in this situation, the design better represents the region of interest. Table 3.4 compares the design matrices of a spherical (rotatable) and a cuboidal design with three factors (Mason et al. 1989, Myers & Montgomery 2002). The rows of each matrix with all elements equal to "0" represent the centre runs. When applying CCD to a slope stability problem, each combination of design points provides a different realization of the slope. Therefore, the realizations investigated in the spherical and cuboidal designs are different. Hence, in this research, response surfaces are constructed for both spherical and cuboidal designs, and the predictability of the second-order models obtained from each analysis is quantified. The probability of failure is estimated according to each design's prediction model. A detailed comparison is provided in Chapters 4 and 5 of this thesis.
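The coded design points described above can be assembled programmatically. The sketch below assumes the conventional CCD construction, with the rotatable axial distance α = (2^k)^(1/4) and the face-centred alternative α = 1; the function name is an illustrative choice:

```python
import itertools
import numpy as np

def ccd_matrix(k, face_centred=False, n_centre=1):
    """Coded design points of a central composite design:
    2**k factorial points, 2k axial points, and n_centre centre runs.
    The rotatable design uses alpha = (2**k)**0.25; the face-centred
    design uses alpha = 1."""
    alpha = 1.0 if face_centred else (2 ** k) ** 0.25
    factorial = np.array(list(itertools.product([-1.0, 1.0], repeat=k)))
    axial = np.zeros((2 * k, k))
    for i in range(k):
        axial[2 * i, i] = -alpha        # axial pair on factor i only
        axial[2 * i + 1, i] = alpha
    centre = np.zeros((n_centre, k))
    return np.vstack([factorial, axial, centre])

rotatable = ccd_matrix(3)                    # 8 factorial + 6 axial + 1 centre = 15 runs
cuboidal = ccd_matrix(3, face_centred=True)  # same layout with alpha = 1
```

For k = 3, the rotatable axial distance evaluates to 8^0.25 ≈ 1.682, matching the spherical design matrix of Table 3.4.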
It should be noted that prior to using the results of a CCD to obtain a second-order fit, the model adequacy tests (as described in Section 3.4.2.3) should be verified, and if any condition is violated, the necessary variance transformation should be adopted.

α = (2^k)^(1/4)    3.13

Table 3.4. CCD design matrices: a rotatable and a face-centred design with three factors

Rotatable design            Face-centred design
X1       X2       X3        X1   X2   X3
−1       −1       −1        −1   −1   −1
 1       −1       −1         1   −1   −1
−1        1       −1        −1    1   −1
 1        1       −1         1    1   −1
−1       −1        1        −1   −1    1
 1       −1        1         1   −1    1
−1        1        1        −1    1    1
 1        1        1         1    1    1
−1.682    0        0        −1    0    0
 1.682    0        0         1    0    0
 0       −1.682    0         0   −1    0
 0        1.682    0         0    1    0
 0        0       −1.682     0    0   −1
 0        0        1.682     0    0    1
 0        0        0         0    0    0

3.5.2 Response Surface Method (RSM) To obtain a mathematical model of the response (a response surface) as a function of the design variables, Analysis of Variance is performed on the results of the CCD and the coefficients of the first- and second-order models are estimated (Hill & Hunter 1966). While the ultimate purpose is to fit a second-order model, both the first- and second-order models are fitted for comparison. Although R-squared values are commonly used to describe the best fit to the simulated responses, they are not an adequate indicator of the predictability of the fitted models. In fact, the R-squared values simply describe how well the model fits the simulated responses; they do not evaluate the ability of the model to predict new points that were not previously simulated or included in the design. In the present study, three parameters are estimated to evaluate the predictability of the response surfaces in order to select the best fit. One of the most commonly used parameters to describe the prediction capability of a response surface is the R-squared of prediction. This parameter is computed using the Prediction Error Sum of Squares (PRESS), proposed by Allen (1971).
To calculate the PRESS value for a specific design, the output from one finite element slope realization "i" is eliminated from the design and a prediction model is fitted to the remaining finite element model outputs. The response corresponding to the eliminated slope realization is then predicted using the fitted model. The difference between the prediction and the finite element response for that slope realization is recorded as a prediction error. This procedure is repeated for all "n" slope realizations in the design. The PRESS value is the sum of the squares of the prediction errors for all slope realizations involved in the CCD (Eq. 3.14). If the prediction error for a specific slope realization is noticeably larger than the conventional residual, the model fits that point well, but removing that point from the design diminishes the capability of the model to predict; in other words, the point strongly influences the fit. Moreover, according to the relation between PRESS and the R-squared of prediction, it is generally assumed (Montgomery 2001) that lower PRESS values indicate a better prediction capability for the fitted response surface (Eq. 3.15). Another important characteristic of a prediction model, as discussed in Section 3.5.1, is its ability to hold the same level of prediction accuracy within the region of interest. This characteristic is initially addressed by constructing a rotatable design. However, to better understand the predictability of a fitted model, especially in cases where face-centred designs are more desirable, it is necessary to quantify the Variance of Prediction for each simulated slope realization. If the combination of factor levels in slope realization "i" is defined by the vector x0, the Variance of Prediction corresponding to that slope realization is computed by Equation 3.16, in which X and σ² (estimated with the MSE) denote the matrix of levels of the independent variables and the model variance, respectively.
PRESS = Σi=1..n (yi − ŷ(i))²    3.14

R²prediction = 1 − PRESS / SST    3.15

in which ŷ(i) is the prediction at realization "i" from the model fitted without that realization, and SST is the total sum of squares. If the plot of the Variance of Prediction, Var[ŷ(x0)], versus the design points provides symmetrical and balanced contours, it is concluded that the fitted model predicts consistently within the region of concern (Myers & Montgomery 2002). The R-squared of prediction and the Variance of Prediction are both parameters that evaluate the ability of the fitted model to predict the behaviour of the realizations involved in the design. Although quantifying these two parameters is essential, it is not sufficient. In practical cases, the prediction model is used to predict the response values of slope realization combinations that were neither numerically or physically tested nor included in the initial design. To evaluate the predictability of the model in these cases, the Monte-Carlo technique is used to randomly select 30 different slope realizations from the distributions of the significant parameters. These 30 combinations are implemented in the prediction model and their fitted responses are computed. In addition, the 30 realizations are run in the finite element software and their simulated responses are recorded. The differences between the fitted and simulated responses are the residuals. A good estimator of the model predictability is then the mean sum of squares of these errors (Eq. 3.17). In this equation, "n" represents the number of randomly generated slope realizations. Using the results obtained from Equations 3.14 to 3.17, it is possible to conclude which fitted model, obtained from a rotatable or a face-centred design, first-order or second-order, with or without interaction terms, best describes the stability behaviour of the slope in terms of its geometric parameters. The details of this analysis are provided in Chapters 4 and 5.
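The leave-one-out PRESS computation, the R-squared of prediction, and the variance of prediction described above can be sketched as follows; the small demonstration design is illustrative, not a thesis dataset:

```python
import numpy as np

def press_and_r2_pred(A, y):
    """PRESS (Eq. 3.14): drop each realization i, refit the model,
    predict the dropped point, and sum the squared prediction errors.
    Also returns the R-squared of prediction (Eq. 3.15)."""
    n = len(y)
    press = 0.0
    for i in range(n):
        keep = np.arange(n) != i
        beta, *_ = np.linalg.lstsq(A[keep], y[keep], rcond=None)
        press += float(y[i] - A[i] @ beta) ** 2
    ss_tot = float(((y - y.mean()) ** 2).sum())
    return press, 1.0 - press / ss_tot

def prediction_variance(A, x0, sigma2):
    """Variance of prediction at design point x0 (Eq. 3.16):
    sigma^2 * x0' (A'A)^-1 x0, with sigma^2 estimated by the MSE."""
    return sigma2 * float(x0 @ np.linalg.solve(A.T @ A, x0))

# illustrative case: the response depends exactly on one coded factor
x = np.array([-1.0, -0.5, 0.0, 0.5, 1.0, -0.75, 0.75])
A = np.column_stack([np.ones_like(x), x])
y = 1.2 + 0.4 * x
press, r2_pred = press_and_r2_pred(A, y)

# a symmetric design predicts equally well at mirrored points
v_plus = prediction_variance(A, np.array([1.0, 0.5]), 1.0)
v_minus = prediction_variance(A, np.array([1.0, -0.5]), 1.0)
```

Because the demonstration response is exactly linear, PRESS is essentially zero and the R-squared of prediction approaches one; the equal prediction variances at the mirrored points illustrate the balanced contours discussed above.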
Var[ŷ(x0)] = σ² x0′ (X′X)⁻¹ x0    3.16

MSE = (1/n) Σi=1..n (yi − ŷi)²    3.17

3.5.3 Probability of failure The main motivation for the proposed methodology is to probabilistically describe the stability behaviour of a slope in terms of its geometric parameters. In other words, the main purpose is to replace the deterministic terms used to describe slope stability, such as the stability factor, with a probabilistic term such as the probability of failure. The probability of failure is best defined when a statistical distribution for the stability factor is available. Once the best-fit model describing the stability behaviour of the slope is obtained, the variability in the stability factor can be captured. To do so, Monte-Carlo sampling is used to randomly select 1000 points from the distributions of the significant geometric parameters. It should be noted that Monte-Carlo is a cost-efficient tool at this stage, since the prediction model makes it possible to estimate the stability factor for all 1000 slope realizations without conducting time-consuming finite element simulations. Using goodness-of-fit tests, the distribution that best describes the variability in the stability factor is identified, and the probability of failure and the mean factor of safety, along with their confidence intervals, are obtained. 3.6 Finite Element Simulation of Slope Realizations 3.6.1 General description As discussed in previous sections, the slope realizations generated either in the factorial design or in the Central Composite Design should be simulated numerically to obtain the corresponding outputs. Several numerical methods are available for slope stability analysis. Each method has advantages and limitations for different types of problems (Section 2.3). For the rock mass and discontinuity conditions present within the slopes studied in this thesis, a continuum scheme is selected.
Although the discontinuities are sufficiently persistent to create a blocky and discrete system, the expected slope deformations are small. Moreover, due to the geometry of the slopes and the arrangement of the in situ stresses, a plane strain condition is dominant. Therefore, the 2D continuum finite element software Phase2 (Rocscience 2013) is used to numerically simulate the generated realizations of the slopes in both the synthetic and case study problems. Phase2 is capable of modelling elastic, plastic, and elasto-plastic materials. The structural features of a slope can be introduced into the software as either individual discontinuities or discontinuity networks. The latter describes sets of discontinuities that share similar orientation, size, and spacing characteristics. The software can create different types of discontinuity networks, such as parallel, cross-jointed, Baecher, and Voronoi networks. Discontinuity networks are typically used in cases where a clear pattern can be observed in the orientation of the discontinuities, such as in bedded sedimentary rocks. Commonly used failure criteria for the intact rock and discontinuities, such as Mohr-Coulomb, Hoek-Brown, Drucker-Prager, and Barton-Bandis, are available in the software to model the stability behaviour of the rock and the discontinuities. In Phase2, the relation between stress and strain is defined by elasto-plastic constitutive laws. 3.6.2 Phase2 convergence criteria Similar to many available finite element codes, Phase2 provides four different convergence criteria that can be selected by the user based on the characteristics of the problem. These four criteria are the Absolute Energy, Square Root Energy, Force, and Displacement criteria (Rocscience 2013). For all these criteria, the conventional finite element relation for equilibrium at the nth load step and ith iteration is defined as (Eq.
3.18):

K ΔUi = Pn − Fn,i−1    3.18

in which K is the stiffness matrix, ΔU is the vector of current nodal displacement increments, and P and F represent the vectors of applied and internal forces, respectively. It should be noted that in Phase2 an initial value of the stiffness, K0, is used for all computations. Since the Absolute Energy criterion is recommended as the most reliable stopping criterion in this software (Rocscience 2013), it is used in all numerical computations of this research. According to this criterion, numerical convergence is achieved when Equation 3.19 is satisfied. The specified energy tolerance and the maximum number of iterations are set to 0.001 and 500, respectively, for all simulations in this study. 3.6.3 Shear Strength Reduction (SSR) Different techniques are available to evaluate the stability factor of a slope. In its conventional form, the factor of safety is a parameter that describes the stability behaviour of a slope and is widely used in limit-equilibrium calculations. In these methods, the factor of safety is defined as the ratio between the resisting and the driving forces. For more complicated geometries, Duncan (1996) proposed a definition of the factor of safety that is incorporated in several numerical techniques such as FEM and DEM. In this definition, slope instability initiates in the numerical model when the actual shear strength is reduced by a factor called the Strength Reduction Factor (SRF or F, Eq. 3.20). In other words, in the Shear Strength Reduction (SSR) technique, the true shear strength of the material is divided by the factor SRF and the stability of the slope is assessed according to the reduced shear strength. The factor SRF is increased in a systematic manner (causing a reduction in shear strength) until the convergence criterion in Equation 3.19 is violated. The SRF obtained at this stage is recorded as the factor of safety (Duncan 1996, Dawson et al. 1999, Hammah et al. 2004).
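The systematic search on SRF can be illustrated as a bisection on the reduction factor. In the sketch below, the finite element solve is replaced by a hypothetical converges() stub based on a toy limit-equilibrium check on a trial plane, so the function names, stresses, and the resulting number are illustrative assumptions rather than a Phase2 computation:

```python
import math

def reduced_strength(c, phi_deg, srf):
    """Mohr-Coulomb strength parameters reduced by the factor SRF
    (cf. Eq. 3.21): c* = c/SRF and tan(phi*) = tan(phi)/SRF."""
    c_star = c / srf
    phi_star = math.degrees(math.atan(math.tan(math.radians(phi_deg)) / srf))
    return c_star, phi_star

def ssr_factor_of_safety(c, phi_deg, converges, f_lo=0.01, f_hi=10.0, tol=1e-4):
    """Bisect on SRF: the largest SRF for which the model still
    converges is reported as the factor of safety."""
    while f_hi - f_lo > tol:
        f_mid = 0.5 * (f_lo + f_hi)
        if converges(*reduced_strength(c, phi_deg, f_mid)):
            f_lo = f_mid           # still stable: push the reduction further
        else:
            f_hi = f_mid           # non-convergence: back off
    return 0.5 * (f_lo + f_hi)

def toy_converges(c_star, phi_star_deg, normal_stress=100.0, shear_stress=80.0):
    # stand-in for the FEM convergence check: the reduced strength must
    # still resist the driving shear stress on the trial plane
    capacity = c_star + normal_stress * math.tan(math.radians(phi_star_deg))
    return capacity >= shear_stress

fs = ssr_factor_of_safety(20.0, 35.0, toy_converges)
```

For this toy check, the bracketing converges to the SRF at which the reduced capacity exactly balances the driving stress, mirroring how the last converging SRF is reported as the factor of safety.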
It should be noted that for slopes with an initial factor of safety of less than 1, the procedure remains the same; however, the SRF gradually decreases in magnitude, causing an increase in the shear strength values used in the numerical model. The SRF is decreased until the convergence criterion (Eq. 3.19) is satisfied. The general algorithm coded into Phase2 and used in this study to estimate the stability factors of the simulated realizations is summarized in Figure 3.6.

‖ΔUi′ (Pn − Fi)‖ ≤ tolerance · ‖ΔU1′ (Pn − F0)‖    3.19

τ* = τ / SRF    3.20

Figure 3.6. Algorithm of SSR

The SSR technique can be applied to the most commonly used failure criteria, such as the Generalized Hoek-Brown and Mohr-Coulomb criteria. The Hoek-Brown criterion works best for problems in which the ratio of the discontinuity spacing to the slope height is small and the failure of the slope is not governed by one or two dominant discontinuity sets (Dawson et al. 1999). As these conditions are not satisfied in the synthetic or the Little Tunnel problem, the Mohr-Coulomb criterion is used in this research to model the failure behaviour of the intact rock and the discontinuities. Equation 3.21 describes the computation of the reduced shear strength for a Mohr-Coulomb material (Hammah et al. 2004), in which c* and tan(φ*) represent the reduced shear strength parameters.

c* = c / SRF,    tan(φ*) = tan(φ) / SRF    3.21

The SSR method implicitly assumes that cohesion and friction angle are uncorrelated, since they are both divided by the same value of F. This can be considered a potential shortcoming of the method. However, due to its accuracy and computational efficiency, this shortcoming is commonly accepted and SSR calculations are widely used in the simulation of geotechnical problems. The SSR technique is very easy to apply when the Mohr-Coulomb failure criterion is used. Dawson et al. (1999) and Hammah et al.
(2004) have made several comparisons between the results obtained from FEM models using SSR and conventional limit equilibrium techniques that use the Bishop or Janbu methods to find the failure surfaces. The results obtained from the different approaches are in good agreement. Dawson et al. (1999) concluded that the results from SSR and limit equilibrium are very close when a fine discretization is applied to the model geometry; however, for coarser element meshes some differences are observed. The factor of safety obtained from SSR is usually lower than the one obtained from limit equilibrium. This difference can be explained by the fact that in SSR the most critical failure surface is found automatically, while in limit equilibrium it is dictated to the model a priori (Dawson et al. 1999). SSR can only be applied to the regions or zones of a numerical model where the material behaviour is described as non-elastic or perfectly plastic. In these regions, yielding is allowed to occur in the intact rock and along the discontinuities. One of the shortcomings of such modelling is the expensive computational cost, especially when the geometry is large and many different slope realizations need to be simulated. In this study, several realizations are generated using statistical techniques, and although the number of realizations is believed to be selected efficiently, the plastic simulations are still time consuming. This is an even bigger problem when a finer mesh is adopted for better accuracy of the computations. To compensate for this problem, the plastic properties and the SSR calculation are assigned only to sections of the model that are expected to yield. The rest of the geometry is assigned a linear elastic material. This strategy reduces the computational time significantly. In the synthetic problem, the SSR region is selected large enough to contain the slope and the possibly affected areas.
In the Little Tunnel problem, the SSR region is selected to cover the tunnel and the total height of the ridge from top to bottom. More details about the simulated geometry for both problems are provided in Chapters 4 and 5. 3.6.4 Phase2 output Phase2 models can be used to interpret the stability behaviour of each realization. The factor of safety (SRF), the percentage of elements that have yielded, the locations of the yielded elements (intact rock and discontinuities), and the contours of the total displacement are some of the outputs generated by the software. Although the factor of safety and its variability (as discussed in Section 3.5.3) are essential for estimating the probability of failure, it is worthwhile to also monitor the variability in the total displacement and the yielded elements. In this study, developing a prediction model for the total displacement and the yielded elements is not a primary concern, since they have no influence on the probability of failure. However, the variability in the yielded elements and the total displacement values is recorded in order to detect the possible failure mechanisms and their volumes, as well as the formation and location of rock bridges. An ANOVA is also performed on these responses to identify the significant model parameters that affect their variability. It is important to observe whether the significant variables for the three analyzed responses are similar or not. Chapter 4: Probabilistic Stability Assessment for a Synthetic Configuration The integrated statistical and numerical methodology described in Chapter 3 is applied to two slope configurations: a synthetically generated slope with two discontinuity sets and the Little Tunnel case history.
This chapter focuses on the synthetic configuration representing a general slope stability problem, in which the discontinuities are mainly defined in sets rather than as individual, scattered joints. It is anticipated that the arrangement of the discontinuity sets with respect to the slope geometry creates a potential planar or stepped-planar failure. The influence of the variability of the geometric parameters on the overall slope stability, total displacement, and yielded elements of the intact rock and the discontinuities is discussed. This influence is quantified by integrating PEM, factorial design, ANOVA, and numerical simulations of the obtained realizations. Ultimately, a response surface capable of predicting any arbitrary realization of the studied slope is generated and the probability of failure is estimated. In Sections 4.1 and 4.2, a description of the studied slope configuration is presented and the procedure to generate the synthetic data is discussed. Section 4.3 explains the procedure by which an efficient number of realizations is created. Section 4.4 focuses on the finite element computations in Phase2 to construct numerical simulations of the realizations. Section 4.5 discusses the screening process using ANOVA to identify the significant parameters that influence the values of SRF, total displacement, and yielded elements. The remainder of the chapter focuses on creating the prediction functions and response surfaces based on the identified significant parameters, and compares the obtained prediction models and estimates of the probability of failure for the synthetic configuration. 4.1 Problem description The overall geometry of the slope and the discontinuities for the synthetic configuration is adopted from the simple slope geometry studied by Hammah et al. (2009).
Other parameters involved in the finite element simulations (discussed in Section 4.4.1) are selected either based on judgment or according to the Hammah et al. model. Because some shear strength properties and boundary conditions are selected differently in this study, this thesis is not intended to make comparisons with Hammah et al.'s results. Hammah et al. (2009) used PEM to estimate the mean value of the factor of safety for two scenarios. In the first, the variability in the strength parameters was considered, and PEM was used to select two point estimates from the distributions assigned to the cohesion and friction angle of the intact rock. PEM was also used to estimate the first two statistical moments of the factor of safety (mean and standard deviation). Hammah et al. then compared these results with those obtained from 50 randomly selected Monte-Carlo simulations. This procedure was repeated for the second scenario, in which the variability in the geometric parameters was considered. Two discontinuity sets (similar to those used in this thesis) were added to the initial geometry. Among the geometric parameters, dip and persistence were defined deterministically, while spacing and trace length were considered probabilistically. Using point estimate values of the probabilistic distributions, eight realizations were generated, from which the mean value of the factor of safety was estimated. To evaluate the correctness of the results obtained from PEM, 40 randomly generated Monte-Carlo realizations were simulated. The main purpose of the Hammah et al. (2009) study was to demonstrate the ability of PEM in probabilistic analysis of rock slope stability. PEM is a much simpler sampling technique than Monte-Carlo.
Although the number of simulations obtained from PEM was noticeably smaller than the number obtained from Monte-Carlo, the outcome of the PEM analysis was in good agreement with analyses based on Monte-Carlo and conventional limit equilibrium methods. Therefore, they recommended PEM as a more efficient technique for probabilistic slope stability assessment. The Hammah et al. (2009) work will be denoted hereafter as the "reference analysis" in this thesis. Although Hammah et al. (2009) conducted a probabilistic slope stability analysis in which more than one realization was investigated, some limitations were still involved. These limitations should be eliminated to decrease the level of uncertainty in the output results. Limitation 1 - In the reference analysis, the parameters describing the size and location of the two discontinuity sets (persistence and spacing) were selected probabilistically, while orientation parameters such as dip and dip direction were considered deterministically. No further analysis was made to quantify the contribution of each parameter to the estimated values of the factor of safety. If the dominant geometric parameters are identified for a specific geometry, the results can be generalized to a similar range of geometries. This advantage enables the analyst, in future problems, to narrow down the number of parameters that should be considered probabilistically, without the need to perform a sensitivity analysis, which results in a faster and more affordable assessment. Moreover, if the dominant parameters are known beforehand, more effort can be devoted to investigating the inherent variability of those parameters only. Limitation 2 - In the reference analysis, a normal distribution was assigned to describe the variability of the probabilistic parameters. In reality, however, geometric parameters such as the size and location of the discontinuities show lognormal or exponential behaviour.
As discussed in Section 3.4.1, PEM calculations are very sensitive to the skewness of the distributions. Therefore, if non-normal distributions are assigned to the input variables, more statistical complications are involved in the analysis. Limitation 3 - In the reference analysis, PEM was used to estimate the statistical moments of the factor of safety. No information could be obtained about the probability of failure or the type of distribution that appropriately describes the factor of safety values obtained from the different realizations. If other statistical techniques are integrated with PEM, an appropriate distribution of the factor of safety can be fitted without the need to simulate realizations obtained from Monte-Carlo sampling. Limitation 4 - The results from the reference analysis cannot be used for general prediction purposes, because they do not provide enough information to evaluate the stability of a new, arbitrary realization of the slope. If the stability of a specific realization is of interest, a new numerical simulation must be conducted to estimate the corresponding stability factor. This becomes a computational efficiency issue if several realizations must be investigated. The current study aims to eliminate the above limitations of the reference analysis using the integrated statistical and numerical methodology introduced in Chapter 3. 4.2 Generating Synthetic Geometric Parameters The process used to generate synthetic values of the geometric parameters for the discontinuities is described here. This process is equivalent to data acquisition using site investigation techniques when a real case history is analyzed. The two discontinuity sets studied by Hammah et al. (2009) were characterized by four geometric parameters (dip, trace length, spacing, and persistence). In the present study, five geometric parameters are considered: dip, dip direction, trace length, spacing, and persistence.
As discussed in Section 3.6, the trace plane option is activated in Phase2 to introduce the dip direction values of the two discontinuity sets into the analysis. The trace plane dip direction is assumed fixed and equal to 330°. The dip, dip direction, and trace length values are considered probabilistically, while spacing and persistence are fixed to their mean values. The persistence values assigned to both discontinuity sets are the same as those used by Hammah et al. (2009). The spacing values are also set equal to the mean values of the spacing distributions used in the reference analysis (Table 4.1). To define the other three parameters probabilistically for each discontinuity set, 20 random values are generated as synthetic measurements. These values represent the spatial variability of each geometric parameter. For the trace length, the values are generated such that they fit lognormal distributions with mean values approximately equal to the mean values used in the reference analysis. For the orientation parameters (dip and dip direction), the values are generated to satisfy Weibull distributions. As discussed in Section 3.2, the Weibull distribution is selected for the orientation parameters because it can capture the worst-case scenarios as well as the frequent incidents. This feature later results in the generation and investigation of slope realizations that are typically ignored in a conventional analysis. The mean values of dip for both sets are approximately equal to the deterministic values used in the reference analysis. Dip direction, however, was not included in the reference analysis. Therefore, the mean values of the dip direction for each discontinuity set are selected such that the dip of the discontinuity set within the selected Phase2 trace plane is equivalent to that in the reference analysis.
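The generation step for one parameter can be sketched as follows. The log-space parameters are taken from the fitted trace-length distribution of set 1 shown in Figure 4.1, while the random seed and the hand-rolled Kolmogorov-Smirnov distance are illustrative assumptions:

```python
import math
import numpy as np

rng = np.random.default_rng(7)                       # illustrative seed
mu_log, sigma_log = 2.3209, 0.0961                   # lognormal parameters, trace length set 1
trace = rng.lognormal(mu_log, sigma_log, size=20)    # 20 synthetic measurements

def lognormal_cdf(x, mu, sigma):
    # CDF of a lognormal variable via the error function
    return 0.5 * (1.0 + math.erf((math.log(x) - mu) / (sigma * math.sqrt(2.0))))

def ks_statistic(sample, cdf):
    """Two-sided Kolmogorov-Smirnov distance between the empirical
    distribution of the sample and a reference CDF."""
    xs = np.sort(sample)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs, start=1):
        f = cdf(x)
        d = max(d, i / n - f, f - (i - 1) / n)
    return d

d_stat = ks_statistic(trace, lambda x: lognormal_cdf(x, mu_log, sigma_log))
```

A small KS distance (equivalently, a large p-value in a statistics package) indicates that the fitted distribution is a representative description of the synthetic sample.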
Table 4.1 summarizes the deterministic and probabilistic values assigned to the geometric parameters in the current study, as well as the values used in the Hammah et al. (2009) analysis.

Table 4.1. Geometric parameters in the current and Hammah et al. (2009) studies

                            Current study                           Hammah et al.
                      Type           Distribution  Mean       Type           Distribution  Mean
Set 1
  Dip                 Probabilistic  Weibull       42°        Deterministic  -             40°
  Dip direction       Probabilistic  Weibull       231°       -              -             -
  Trace length (m)    Probabilistic  Lognormal     10         Probabilistic  Normal        10
  Spacing (m)         Deterministic  -             5          Probabilistic  Normal        5
  Persistence         Deterministic  -             0.8        Deterministic  -             0.8
Set 2
  Dip                 Probabilistic  Weibull       84°        Deterministic  -             85°
  Dip direction       Probabilistic  Weibull       269°       -              -             -
  Trace length (m)    Probabilistic  Lognormal     4          Probabilistic  Normal        4
  Spacing (m)         Deterministic  -             4          Probabilistic  Normal        4
  Persistence         Deterministic  -             0.5        Deterministic  -             0.5

The fitted distributions and their statistical details are presented in Figure 4.1, Figure 4.2, and Table 4.2, respectively. Although the random values are initially generated to satisfy a specified distribution type, Kolmogorov-Smirnov goodness-of-fit tests are subsequently performed for each distribution to ensure its representativeness. The goodness-of-fit result for each distribution can be described by a p-value, which ranges from zero to one. For a specific set of data, a fitted distribution with a higher p-value (closer to one) is identified as a more representative distribution (Massey 1951). Among the six probabilistic parameters, four have acceptable goodness-of-fit results (p-values of 0.8 or more). For the other two parameters, although the goodness-of-fit p-values are low, the presented distributions were accepted. Any fitted distribution can be acceptable in this configuration if the desired distribution type, mean, and standard deviation are achieved.
Therefore, using the two distributions with lower goodness-of-fit values does not impede further analyses.

Table 4.2. Statistical moments for the geometric parameters

                      Mean   Standard deviation  Skewness  Kolmogorov-Smirnov p-value
Set 1
  Dip                 42°    6.3°                -1        0.4
  Dip direction       231°   6.7°                -0.2      0.8
  Trace length (m)    10     1.1                 0.6       0.87
Set 2
  Dip                 84°    1.7°                -0.9      0.8
  Dip direction       269°   9.4°                -0.4      0.5
  Trace length (m)    4      0.4                 0.3       0.99

Figure 4.1. Distributions fitted to the geometric parameters of set 1 (histograms of the 20 synthetic values of dip, dip direction, and trace length, with fitted Weibull and lognormal distributions)

Figure 4.2. Distributions fitted to the geometric parameters of set 2 (histograms of the 20 synthetic values of dip, dip direction, and trace length, with fitted Weibull and lognormal distributions)

4.3 Generating Realizations Using PEM and Factorial Design Using a statistical sampling technique, different realizations of the discontinuity geometries in the slope can be generated from the fitted distributions obtained for each probabilistic parameter in Section 4.2.
To generate an efficient number of representative realizations, PEM is used to extract two point estimates from each fitted distribution. These point estimates are substituted for the corresponding distributions in further analysis. Because all fitted distributions are moderately to highly skewed, and the variables are uncorrelated, Equation 3.5 is used to calculate the two point estimates. As discussed earlier, the two point estimates are functions of the distribution's mean, standard deviation, skewness, and the total number of contributing variables. A sample calculation for the two point estimates of "dip-set 1" is presented in Equation 4.1, using the mean (42°), standard deviation (6.3°), and skewness (-1) from Table 4.2 together with the six contributing variables:

x_i± = μ_i + ξ_i± σ_i                                                     (4.1)

where ξ_i+ and ξ_i- are the standardized point locations obtained from Equation 3.5 for the variable's skewness and the number of probabilistic variables. For dip-set 1, this yields x+ = 54.7° and x- = 29.3°. The two point estimates for all six probabilistic variables are obtained using the same routine, and their values are summarized in Table 4.3.

Table 4.3. Point estimate values for discontinuity properties

       Variable           Mean   Point estimate+ (xi+)   Point estimate- (xi-)
Set 1  Dip                42°    54.7°                   29.3°
       Dip direction      231°   246.8°                  215.6°
       Trace length (m)   10     14.1                    6.3
Set 2  Dip                84°    86.8°                   80°
       Dip direction      269°   290.3°                  247.7°
       Trace length (m)   4      5                       3

The synthetic measurements of the orientation parameters for each discontinuity set should create a cluster when projected on an equal angle, lower hemisphere stereonet. The point estimates of the orientation parameters of each discontinuity set should also lie within the associated cluster. Figure 4.3 illustrates a stereonet plot of the random measurements for each discontinuity set along with their calculated point estimates. The red square symbols represent the random measurements, while the circle and triangle symbols represent the point estimates. Since six parameters are treated probabilistically in this analysis, 2^6 = 64 combinations of the point estimates can be defined, each describing one possible realization of the slope.
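The point estimate calculation above can be sketched in a few lines. This minimal sketch assumes Hong (1998)-style standardized locations for a skewed variable; the exact coefficients of the thesis's Equation 3.5 are not reproduced here, so computed values may differ slightly from those in Table 4.3.

```python
import math

def point_estimates(mean, std, skew, n):
    """Two point estimates for one skewed random variable.

    ASSUMPTION: Hong (1998)-type standardized locations are used here;
    the thesis's Equation 3.5 may use different coefficients.
    n is the total number of probabilistic variables in the analysis.
    """
    half_nu = skew / 2.0
    root = math.sqrt(n + half_nu ** 2)
    xi_plus = half_nu + root
    xi_minus = half_nu - root
    return mean + xi_plus * std, mean + xi_minus * std

# Symmetric single-variable case reduces to the classic mu +/- sigma:
print(point_estimates(42.0, 6.3, 0.0, 1))   # approximately (48.3, 35.7)
```

With skewness -1 and six variables (dip-set 1), this particular formula places the upper estimate near 54.6°, close to the 54.7° reported in Table 4.3.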
When using finite element software, each realization must be appropriately discretized prior to any computations. In this analysis, to enhance the accuracy of the numerical modelling and to avoid mesh sensitivity problems (discussed later in Section 4.4.2), the geometry of the slope in each realization is discretized with a very fine mesh. Although this ensures the reliability of the results, it noticeably increases the computational time. Therefore, it is not efficient to simulate all 64 realizations generated from the point estimate combinations.

Figure 4.3. Point estimates of the orientation parameters on a stereonet

To conduct a more efficient analysis, a smaller number of realizations should be selected from the 64 geometry combinations. As discussed in Section 3.4.2.1, half-factorial design can be used to cut the computational effort to half that of the full design. Thus, using half-factorial design, only 32 realizations need to be numerically simulated. Integrating PEM and half-factorial design provides an efficient yet representative number of realizations to simulate numerically, with the purpose of identifying the significant parameters affecting the stability of the investigated slope. To incorporate the point estimates in a general half-factorial design, their values are converted to coded ones using Equation 3.6. Therefore, for all six variables, the higher point estimate is replaced by "+1" and the lower point estimate is replaced by "-1". The mean values are replaced by zero. Table 4.4 shows the initial half-factorial design, with 32 realizations of the slope, based on the coded variables. Table 4.4.
Initial half-factorial design with coded variables

Realization   Set 1                               Set 2
ID            Dip   Dip direction  Trace length   Dip   Dip direction  Trace length
 1            -     -              -              -     -              -
 2            +     -              -              -     -              +
 3            -     +              -              -     -              +
 4            +     +              -              -     -              -
 5            -     -              +              -     -              +
 6            +     -              +              -     -              -
 7            -     +              +              -     -              -
 8            +     +              +              -     -              +
 9            -     -              -              +     -              +
10            +     -              -              +     -              -
11            -     +              -              +     -              -
12            +     +              -              +     -              +
13            -     -              +              +     -              -
14            +     -              +              +     -              +
15            -     +              +              +     -              +
16            +     +              +              +     -              -
17            -     -              -              -     +              +
18            +     -              -              -     +              -
19            -     +              -              -     +              -
20            +     +              -              -     +              +
21            -     -              +              -     +              -
22            +     -              +              -     +              +
23            -     +              +              -     +              +
24            +     +              +              -     +              -
25            -     -              -              +     +              -
26            +     -              -              +     +              +
27            -     +              -              +     +              +
28            +     +              -              +     +              -
29            -     -              +              +     +              +
30            +     -              +              +     +              -
31            -     +              +              +     +              -
32            +     +              +              +     +              +

It can be observed in this table that the lower and the higher point estimates of all variables are combined to create realizations #1 and #32, respectively. All other realizations are obtained from mixed combinations of the variables' point estimates. The 32 realizations listed in Table 4.4 contribute to the screening process. Although at this stage the "mean realization" (a realization constructed from the mean value of each variable) is not involved in the half-factorial design screening process, this realization is nevertheless numerically simulated. In a deterministic analysis, the mean realization is the only representation of the slope and the discontinuity geometry that is modelled. Therefore, it is worthwhile to simulate this realization to monitor the dispersion and the difference between the SRF values obtained from the 32 realizations and the deterministic one. Although the mean realization is not involved in the screening stage, it has an important role as the "centre point" in the CCD prediction design, discussed later in Section 4.6. Considering six variables in the analysis and constructing a half-factorial design with 32 realizations creates a design with resolution VI.
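The 2^(6-1) design of Table 4.4 can be generated programmatically. This minimal sketch lets the first five coded factors vary freely and sets the sixth from the generator F = ABCDE (the run order differs from the table's listing):

```python
from itertools import product

def half_factorial_6():
    """2^(6-1) resolution VI design: factors A..E vary freely,
    and F is fixed by the design generator F = ABCDE."""
    runs = []
    for a, b, c, d, e in product((-1, 1), repeat=5):
        runs.append((a, b, c, d, e, a * b * c * d * e))
    return runs

design = half_factorial_6()
print(len(design))   # 32 runs
# Every run satisfies the defining relation I = ABCDEF:
print(all(a * b * c * d * e * f == 1 for a, b, c, d, e, f in design))  # True
```

The all-low and all-high runs produced by this generator match realizations #1 and #32 in Table 4.4.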
As discussed in Section 3.4.2, in this resolution the design generator is described as F = ABCDE, in which F is the sixth variable (trace length of set 2) and A, B, C, D, and E are the first five variables. In a resolution VI design, no main effect or two-factor interaction is aliased with any other main effect, two-factor interaction, or three-factor interaction. Nevertheless, the main effects and the two-factor interactions are aliased with five-factor and four-factor interactions, respectively. Table 4.5 provides a list of the factors and their aliases in this analysis. In this table, D1, DD1, T1, D2, DD2, and T2 represent dip-set 1, dip direction-set 1, trace length-set 1, dip-set 2, dip direction-set 2, and trace length-set 2, respectively. This labelling format will be used frequently in this thesis. To complete the initial half-factorial design table (Table 4.4) and to perform the Analysis of Variance, all 32 realizations are numerically modelled in Phase2 and their corresponding SRF values, along with their total displacements and yielded elements, are determined. Section 4.4 discusses the details of the modelling procedure and the numerical computations.

Table 4.5. Factors and aliases, 6 variables, resolution VI, F = ABCDE

Factor    Alias             Factor    Alias
D1        DD1T1D2DD2T2      DD1D2     D1T1DD2T2
DD1       D1T1D2DD2T2       T1D2      D1DD1DD2T2
T1        D1DD1D2DD2T2      D1DD2     DD1T1D2T2
D2        D1DD1T1DD2T2      DD1DD2    D1T1D2T2
DD2       D1DD1T1D2T2       T1DD2     D1DD1D2T2
T2        D1DD1T1D2DD2      D2DD2     D1DD1T1T2
D1DD1     T1D2DD2T2         D1T2      DD1T1D2DD2
D1T1      DD1D2DD2T2        DD1T2     D1T1D2DD2
DD1T1     D1D2DD2T2         T1T2      D1DD1D2DD2
D1D2      DD1T1DD2T2        D2T2      D1DD1T1DD2
                            DD2T2     D1DD1T1D2

4.4 Constructing Numerical Realizations in Phase2

In this study, Phase2 is used to conduct numerical simulations of the realizations generated by PEM and half-factorial design. In addition to the geometric parameters, other input parameters are needed for the Phase2 finite element models.
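Under the defining relation I = ABCDEF, the alias of any effect is simply the complementary set of factors (multiplying the effect word by the defining relation cancels repeated letters). A small sketch using the thesis's factor labels reproduces the pairs in Table 4.5:

```python
FACTORS = ("D1", "DD1", "T1", "D2", "DD2", "T2")

def alias(effect):
    """Alias of an effect under I = D1*DD1*T1*D2*DD2*T2.

    Multiplying by the defining relation cancels each factor that
    appears in the effect, leaving the complementary factors.
    """
    return tuple(f for f in FACTORS if f not in effect)

print(alias(("D1",)))        # ('DD1', 'T1', 'D2', 'DD2', 'T2')
print(alias(("D1", "T1")))   # ('DD1', 'D2', 'DD2', 'T2')
```

Main effects thus alias with five-factor interactions and two-factor interactions with four-factor interactions, exactly as the resolution VI property states.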
Since the influences of the geometric parameters are the prime concern in this study, the other input parameters are set to fixed values for all 32 models. Strength parameters and the in-situ stress conditions are two of the important non-geometric variables needed for the FE models. Other non-geometric input parameters that deal with the FE calculations are kept at the software default values.

4.4.1 Non-geometric input parameters

4.4.1.1 Strength parameters

All strength parameters are considered deterministically in this study. The values of these parameters are adopted, and in some cases modified, from the Hammah et al. (2009) model. The parameters describing the strength of the intact rock are higher than those selected in the reference study, while the strength parameters of the discontinuities have similar values. Equal values are assigned to the strength parameters of both discontinuity sets. Table 4.6 summarizes the strength parameters used in this study. It should be noted that the Mohr-Coulomb failure criterion is adopted to model the behaviour of both the intact rock and the discontinuities.

Table 4.6. Properties used in the Phase2 models

Parameter                   Rock    Discontinuities
Unit weight (MN/m³)         0.03    -
Poisson ratio               0.3     -
Cohesion (MPa)              1       0.01
Friction angle (°)          30      20
Tensile strength (MPa)      0.3     0
Deformation modulus (GPa)   11      -
Normal stiffness (MPa/m)    -       100,000
Shear stiffness (MPa/m)     -       10,000

4.4.1.2 In-situ stress condition

To avoid complexity, loading is defined by the "gravity" setting for the in situ field stresses. Thus, the in situ stresses increase as a function of depth, measured from the actual ground surface of the model. The horizontal/vertical stress ratio (k) is set to one.

4.4.1.3 FEM computation parameters

For all 32 models in this analysis, 500 iterations at each loading step are computed. The number of load steps is set to "Auto".
For the convergence settings, the absolute energy criterion is used and the tolerance is fixed at 0.001. If the imbalance energy in an iteration is less than the selected tolerance value, convergence is achieved and the computations in that iteration stop. For the SSR analysis, the calculations start with the default value of one for the first SRF (Figure 3.6). The tolerance for the SRF values is set to 0.01. This parameter has similar functionality to the energy tolerance discussed above; however, it is activated when the SRF calculations start and applies only to the selected SSR zone (plastic region).

4.4.2 Discretization and mesh sensitivity

One factor that controls the accuracy of the FEM calculations is the size, or the number, of the elements used to construct the model. Usually the accuracy of the results increases when a finer mesh (larger number of elements) is used, especially around the critical areas. However, this trend continues only down to a certain mesh size; beyond that, the results remain consistent. Once that optimum mesh size is determined for a specific geometry, the mesh dependency problem in the FEM calculations is avoided. In the current study, 3-noded triangular elements are used in the FE models. Because different geometric parameters are assigned to the discontinuity sets in each realization, each realization has a different meshing pattern. This difference is negligible if each realization is discretized with its corresponding optimum number of elements. To determine the optimum number of elements for a realization, the numerical model is first run with a coarse mesh, and a value for the critical SRF is obtained from that discretization. Next, the number of elements is increased and the critical SRF is obtained again. This process continues until the difference between successive critical SRFs becomes negligible.
The corresponding mesh discretization is taken as the optimum for that specific geometry. However, because each of the 32 realizations has a different mesh configuration, the procedure to determine the optimum mesh size would have to be performed for all 32 FE models, leading to a very high computational cost. To avoid this difficulty, only five realizations are analysed for mesh dependency, and the general findings with respect to mesh size from these models were used to select an appropriate discretization for the other realizations. These five realizations are #1, #7, #16, #32, and the mean realization. The FE model for each realization started with a coarse mesh, in which the five realizations are discretized with fewer than 50,000 elements. Each model is simulated and the corresponding critical SRF recorded. Subsequently, the number of elements is increased (approximately by a factor of two each time) and the corresponding critical SRF is re-calculated. This continues until the difference between two consecutive critical SRFs for each realization becomes less than 10%. For the five investigated realizations, the mesh dependency issue became negligible when the number of elements exceeded 200,000 (Figure 4.4). Although the geometries in the five realizations are noticeably different, their corresponding SRF values converged to constant values when the number of elements exceeded approximately 200,000. Hence, this value is selected as the optimum number of elements and is used for all 32 realizations. The FE mesh for the mean realization is provided in Figure 4.5. Using a fine mesh leads to at least two hours of computing time for each model on a 3.07 GHz dual processor desktop computer with 16 GB RAM. Since the main SRF calculations are made in the SSR region of each FE model (not the elastic section), the finer discretization is applied to those zones only.
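The refinement procedure above can be sketched as a simple driver loop. Here `run_model` is a hypothetical stand-in for a Phase2 (or any SSR-capable FE) analysis that returns the critical SRF for a given element count; the starting size, doubling factor, and 10% tolerance follow the text.

```python
def refine_until_converged(run_model, n_start=50_000, tol=0.10, n_max=500_000):
    """Double the element count until the critical SRF changes by
    less than `tol` (10%) between consecutive meshes.

    `run_model` is a HYPOTHETICAL solver callback: n_elements -> SRF.
    """
    n = n_start
    srf_prev = run_model(n)
    while n < n_max:
        n *= 2                                   # roughly double the mesh
        srf = run_model(n)
        if abs(srf - srf_prev) / srf_prev < tol:
            return n, srf                        # mesh-independent result
        srf_prev = srf
    return n, srf_prev

# Toy solver whose SRF converges toward 0.89 as the mesh is refined
# (mimicking the realization #32 behaviour described in the text):
toy = lambda n: 0.89 + 40_000.0 / n
print(refine_until_converged(toy))
```

The toy solver illustrates the trend reported in Figure 4.4: coarse meshes overestimate the SRF, and the estimate settles as the element count grows.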
This way, computational time is reduced and finer meshing is used only in the more critical zones of the model. It should be noted that the SSR region is selected similarly for all models.

Figure 4.4. Mesh dependency analysis for five realizations (critical SRF versus number of elements)

Figure 4.5. Discretization of the FE model for the mean realization

It is worth mentioning that ignoring mesh dependency may lead to misinterpretation of the slope behaviour. For the slope geometry being analysed, a coarse mesh gives higher values of SRF. The results obtained for realization #32 give SRF = 1.4 for a coarse mesh, but the critical SRF decreases as the mesh becomes finer and converges to SRF = 0.89 (Figure 4.4). As a result, to increase the reliability of slope stability analyses simulated with a FE code, mesh dependency should be examined beforehand (Shukra & Baker 2003). This issue has frequently been overlooked in publications using Phase2 for slope stability analysis (Rocscience 2004, Hammah et al. 2004, Hammah et al. 2009).

4.4.3 Output parameters

As discussed in Section 3.6.4, three output parameters provided by the FEM calculations are used in this study to assess the stability of the slope: SRF, total displacement, and the number of yielded elements. The critical SRF from each model is the first response value added to an ANOVA table for further analysis. The critical SRF for each realization is the main parameter used to assess the stability of the slope. In this analysis, realizations with SRF values less than one are categorized as unstable, while those with SRF values greater than one are considered stable. The total displacement parameter can be plotted as total displacement contours as well as the direction of displacement at various points in the slope.
The maximum total displacement of the slope can also be obtained from the software. This value is used as the second response of each realization in the ANOVA table. However, it should be noted that in some cases it may not truly represent the slope behaviour, being simply a result of stress concentrations at different locations in the slope, as discussed in Section 3.6.4. The contours of the total displacements are used to provide information about the possible failure mechanism in each simulated realization. Figure 4.6 displays the total displacement contours for three different realizations. As can be observed, the variability of the geometric parameters has a noticeable impact on the values of the total displacement. The third output parameter considered in the ANOVA table is the number of yielded elements around each node in the FE model. This value is assigned to each node and is described as the ratio of the number of yielded elements to the total number of elements connected to a specific node. This ratio may vary between zero, where no yielded element is connected to the node, and 100%, where all elements connected to the node have yielded.

Figure 4.6. Total displacement contours for three different realizations of the slope (the mean realization, SRF = 0.71; realization #1, SRF = 1.1; and realization #28, SRF = 1.68)

The number of yielded elements obtained as a Phase2 output parameter cannot be directly used in the ANOVA table. Typically, a FE model contains many nodes, each of which has a percentage of yielded elements. However, these per-node values do not directly represent the percentage of yielded elements in the slope as a whole. Therefore, to obtain a single value characterizing the overall number of yielded elements in a model, a filtering procedure is used.
In this process, the nodes that have at least one yielded element in their vicinity (i.e., their assigned number of yielded elements is more than zero) are identified. The number of identified nodes is counted and divided by the total number of nodes in the realization. The obtained value, expressed as a percentage, is used to represent the yielding status of each realization. Since the number of nodes with yielded elements is normalized by the total number of nodes in each realization, any differences in the total number of nodes between realizations create no concern. It should be noted that in the above process, it was important to identify the nodes with yielded elements regardless of the yield type (tension or shear). The percentage of yielded elements at the nodes can be displayed using contours. In Phase2, the location and type of the yielded elements are shown using small circles and crosses, representing tension and shear, respectively. Figure 4.7 presents contours of yielded elements (percentage) for four different realizations. Similar to the total displacement values, these results are very sensitive to the variability in the geometric parameters. As discussed in Section 3.6.3, in the realizations with higher values of critical SRF, the shear strength values required to reach critical equilibrium (t* in Eq. 3.20) are reduced noticeably. Hence, the number of yielded elements for those realizations is higher than in the models with low values of SRF, as observed in Figure 4.7. The yielded elements can also be identified according to their type of yielding. In Figure 4.8, the yielded elements of realization #1 are categorized into shear and tension yielded elements. As presented, more elements yield in tension. This indicates that the material tends to fail in tension rather than shear.
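The node-filtering procedure just described reduces to a few lines: a node counts as "yielded" if any element touching it has yielded (per-node ratio greater than zero), and the realization's yield measure is the percentage of such nodes. A minimal sketch:

```python
def normalized_yielded(node_ratios):
    """Percentage of nodes touched by at least one yielded element.

    `node_ratios` holds, for each node, the ratio of yielded elements
    to total elements connected to that node (0.0 to 1.0).
    """
    yielded_nodes = sum(1 for r in node_ratios if r > 0)
    return 100.0 * yielded_nodes / len(node_ratios)

# Toy example: 2 of 8 nodes touch at least one yielded element.
print(normalized_yielded([0, 0, 0.25, 0, 0, 1.0, 0, 0]))  # 25.0
```

Because the count is normalized by the total number of nodes, models with different mesh sizes remain comparable, as the text notes.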
This trend is similar for all 32 realizations since equal strength parameters are assigned to all of them.

Figure 4.7. Contours of percentage yielded elements for four realizations of the slope

Figure 4.8. Yielded elements filtered based on type for a portion of realization #1

4.5 Sensitivity Analysis

Table 4.4 (Section 4.3) is completed by obtaining the response values from the numerical simulations of the realizations in Phase2. The next step is to use the numerical results in an Analysis of Variance to perform a sensitivity analysis and to identify the significant parameters affecting the slope stability. Table 4.7 presents the completed factorial design table, in which three responses (critical SRF, total displacement, and normalized yielded elements) are determined for each realization. The number of elements used for the discretization of each realization is also provided. Since all realizations are discretized with more than the optimum number of elements (200,000), this parameter is not considered a source of variability (see Section 4.4.2). As expected, the values of critical SRF are very sensitive to the geometric combinations of each realization. Realization #22, in which all parameters except DD1 and D2 have their highest point estimate values, has the lowest SRF (0.24), while realization #28, in which all variables are at their highest point estimates except for T1 and T2, has the highest SRF (1.68). Even with such a quick comparison, the significant influence of the trace length values is obvious. The values of maximum total displacement, on the other hand, vary between 4.3 and 7.9 mm, and no specific pattern is observed in their variability. One possible reason, as discussed earlier, is local zones of high stress concentration that increase the value of the maximum total displacement in some of the realizations.
Hence, some of the obtained values might not be representative of the maximum total displacement of the slope. Unlike the maximum displacement, the values obtained for normalized yielded elements are in agreement with the corresponding SRF values: as the SRF values increase, more elements yield in the slope.

Table 4.7. Half-factorial design table with calculated responses

ID    D1  DD1  T1  D2  DD2  T2   Elements (×1000)  SRF   Max total displ. (mm)  Normalized yielded elements (%)
 1    -   -    -   -   -    -    223               1.1   5.11                   3.16
 2    +   -    -   -   -    +    309               1.31  6.6                    4
 3    -   +    -   -   -    +    377               1     5.2                    2.13
 4    +   +    -   -   -    -    337               1.41  6.7                    5
 5    -   -    +   -   -    +    338               0.84  5.8                    1.14
 6    +   -    +   -   -    -    305               0.62  7                      1.19
 7    -   +    +   -   -    -    345               0.77  5.18                   1.17
 8    +   +    +   -   -    +    270               1     5.4                    2.4
 9    -   -    -   +   -    +    201               1.08  5.4                    2.4
10    +   -    -   +   -    -    200               1.53  6.8                    6.64
11    -   +    -   +   -    -    214               1.07  5.18                   2.7
12    +   +    -   +   -    +    340               1.42  7.4                    5
13    -   -    +   +   -    -    218               0.86  5.4                    1.35
14    +   -    +   +   -    +    348               0.5   5.9                    0.68
15    -   +    +   +   -    +    200               0.78  5.1                    1.12
16    +   +    +   +   -    -    215               1.11  5.45                   3.4
17    -   -    -   -   +    +    337               0.83  5.5                    1.7
18    +   -    -   -   +    -    336               1.19  5.6                    3.5
19    -   +    -   -   +    -    208               1.09  5.5                    3.7
20    +   +    -   -   +    +    298               1.4   6.7                    5.3
21    -   -    +   -   +    -    213               0.85  5.4                    1.7
22    +   -    +   -   +    +    278               0.24  4.3                    0.13
23    -   +    +   -   +    +    343               0.75  5.4                    1.35
24    +   +    +   -   +    -    327               1     7.3                    2.13
25    -   -    -   +   +    -    221               1.07  5.17                   2.4
26    +   -    -   +   +    +    347               1.44  7.3                    4.9
27    -   +    -   +   +    +    200               0.98  5.4                    2.28
28    +   +    -   +   +    -    200               1.68  7.3                    7.2
29    -   -    +   +   +    +    376               0.83  5.4                    1.02
30    +   -    +   +   +    -    395               0.58  6.2                    1.06
31    -   +    +   +   +    -    216               0.78  5.01                   1.38
32    +   +    +   +   +    +    323               0.89  7.9                    1.7
Mean  0   0    0   0   0    0    200               0.71  6.8                    1.2

4.5.1 Sensitivity analysis for critical SRF

To identify which of the studied parameters have the most influence on the SRF values, the sums of squares and the main and interaction effect estimates of the six variables are calculated. The percentage contribution of each parameter to the variability of the output is also obtained using Equation 3.9. Table 4.8 summarizes the effect estimate results.
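For a two-level factorial, the effect estimate and sum of squares for any coded column follow directly from the design: the effect is the difference between the mean response at the high and low levels, and its sum of squares is N·effect²/4 for N runs. The percentage contribution (Equation 3.9) is then the SS divided by the total SS. A minimal sketch for one coded column:

```python
def effect_and_ss(column, y):
    """Effect estimate and sum of squares for one coded (+/-1) column.

    effect = mean(y at +1) - mean(y at -1)
    SS     = N * effect**2 / 4   (standard 2-level factorial result)
    """
    n = len(y)
    hi = [yi for c, yi in zip(column, y) if c > 0]
    lo = [yi for c, yi in zip(column, y) if c < 0]
    effect = sum(hi) / len(hi) - sum(lo) / len(lo)
    ss = n * effect ** 2 / 4.0
    return effect, ss

# Toy 4-run example:
print(effect_and_ss([-1, 1, -1, 1], [1.0, 2.0, 1.0, 2.0]))  # (1.0, 1.0)
```

Applied to the 32-run T1 column of Table 4.7, this is the calculation behind the large negative trace-length effect reported in Table 4.8.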
As can be observed in this table, the three geometric parameters of set 1 have a significant effect on the critical SRF values, while the geometric parameters of set 2 have a negligible effect for this specific slope configuration. Among the geometric parameters of set 1, trace length (T1) has a much higher percentage contribution (52%) to the SRF values compared to all other significant parameters (Table 4.8). This indicates the significant role of T1 in the stability of the slope. The next most influential parameter is the two-factor interaction between D1 and T1, i.e., D1T1. Although the main effect estimated for D1 is the third most influential source of variability, its interaction with T1 is about twice as significant. Therefore, for this specific configuration, if trace length values are considered probabilistically, the variability of the corresponding dip values should also be considered, as their interaction significantly affects the slope stability. Although DD1 and its interaction with D1 have high effect estimates, their influence is considerably less than those mentioned above. Moreover, no important interaction is observed between T1 and DD1 (0.58% contribution). As for the geometric parameters of set 2, trace length (T2) has the highest percentage contribution; however, the estimated effects are negligible for all geometric parameters in this set. No significant interaction (in some cases a zero effect) is identified between the set 1 and set 2 parameters. Table 4.8.
Effect estimate summary (response: critical SRF)

Factor    Effect estimate  Sum of squares (SS)  Percentage contribution (%)
D1        0.17             0.22                 6.98
DD1       0.14             0.16                 5.12
T1        -0.45            1.62                 52.0
D2        0.07             0.04                 1.44
DD2       -0.05            0.02                 0.64
T2        -0.09            0.06                 2.02
D1 DD1    0.17             0.23                 7.52
D1 T1     -0.23            0.42                 13.6
DD1 T1    0.05             0.02                 0.58
D1 D2     -0.01            0.00                 0.03
DD1 D2    -0.03            0.01                 0.18
T1 D2     0.08             0.05                 1.59
D1 DD2    -0.04            0.01                 0.39
DD1 DD2   0.05             0.02                 0.67
T1 DD2    0.00             0.00                 0.00
D2 DD2    -0.04            0.01                 0.46
D1 T2     -0.02            0.00                 0.10
DD1 T2    0.00             0.00                 0.00
T1 T2     0.04             0.01                 0.36
D2 T2     -0.01            0.00                 0.01
DD2 T2    -0.02            0.00                 0.12
Error     -                0.20                 6.26
Total     -                3.12                 -

To verify the results summarized in Table 4.8, ANOVA is also performed. As discussed in Section 3.4.2, half-factorial design is only used for screening purposes in this study and, therefore, the Analysis of Variance could be skipped. However, due to the important role of the identified significant parameters in the prediction of SRF values and the evaluation of the probability of failure, ANOVA is discussed for the case in which SRF is the studied response.

Table 4.9. ANOVA for the critical SRF

Source of variability  SS    df  MS    F0    P
D1                     0.22  1   0.22  11.2  0.01
DD1                    0.16  1   0.16  8.17  0.02
T1                     1.62  1   1.62  82.9  0.00
D2                     0.04  1   0.04  2.30  0.16
DD2                    0.02  1   0.02  1.02  0.34
T2                     0.06  1   0.06  3.23  0.10
D1 DD1                 0.23  1   0.23  12.0  0.01
D1 T1                  0.42  1   0.42  21.7  0.00
DD1 T1                 0.02  1   0.02  0.92  0.36
D1 D2                  0.00  1   0.00  0.04  0.84
DD1 D2                 0.01  1   0.01  0.28  0.61
T1 D2                  0.05  1   0.05  2.54  0.14
D1 DD2                 0.01  1   0.01  0.61  0.45
DD1 DD2                0.02  1   0.02  1.08  0.32
T1 DD2                 0.00  1   0.00  0.00  0.96
D2 DD2                 0.01  1   0.01  0.74  0.41
D1 T2                  0.00  1   0.00  0.16  0.69
DD1 T2                 0.00  1   0.00  0.01  0.94
T1 T2                  0.01  1   0.01  0.58  0.47
D2 T2                  0.00  1   0.00  0.02  0.90
DD2 T2                 0.00  1   0.00  0.18  0.68
Error                  0.20  10  0.02
Total                  3.12  31

The F0 values in Table 4.9 are calculated by dividing the mean squares (MS) of each variable by the mean squares of the error.
The obtained values are compared with the equivalent F-values taken from standard tables at the selected significance level (α) of 0.05. The validity of the results summarized in Table 4.9 is investigated with the model adequacy tests, as described in Section 3.4.2.3. To check the normality assumption, the residuals are plotted against the normal probability values. The residuals (eij) are defined as the difference between the simulated and the fitted SRF values for each realization. The residuals are sorted and ranked from smallest to largest. Based on each residual's rank, the corresponding normal probability is calculated as

P = 100 (J − 0.5) / n

in which J is the residual's rank and n is the number of simulations (realizations). Table 4.10 summarizes the simulated and fitted SRFs along with their residuals and the corresponding normal probability values. Figure 4.9 illustrates the normal probability plot for the SRF residuals.

Table 4.10. Residuals and normal probability

Simulated SRF  Fitted SRF  Residual  Sorted residuals  Rank  Normal probability (%)
1.10           1.10        0.00      -0.13             1     1.56
1.31           1.22        0.09      -0.12             2     4.69
1.00           0.95        0.05      -0.11             3     7.81
1.41           1.53        -0.12     -0.11             4     10.9
0.84           0.83        0.01      -0.10             5     14.1
0.62           0.62        0.00      -0.10             6     17.2
0.77           0.90        -0.13     -0.09             7     20.3
1.00           0.91        0.09      -0.08             8     23.4
1.08           1.13        -0.05     -0.05             9     26.6
1.53           1.48        0.05      -0.05             10    29.7
1.07           0.98        0.09      -0.05             11    32.8
1.42           1.53        -0.11     -0.02             12    35.9
0.86           0.86        0.00      -0.01             13    39.1
0.50           0.60        -0.10     0.00              14    42.2
0.78           0.77        0.01      0.00              15    45.3
1.11           1.01        0.10      0.00              16    48.4
0.83           0.94        -0.11     0.00              17    51.6
1.19           1.20        -0.01     0.00              18    54.7
1.09           0.99        0.10      0.00              19    57.8
1.40           1.40        0.00      0.01              20    60.9
0.85           0.74        0.11      0.01              21    64.1
0.24           0.34        -0.10     0.05              22    67.2
0.75           0.80        -0.05     0.05              23    70.3
1.00           0.95        0.05      0.05              24    73.4
1.07           1.16        -0.09     0.09              25    76.6
1.44           1.31        0.13      0.09              26    79.7
0.98           0.98        0.00      0.09              27    82.8
1.68           1.70        -0.02     0.10              28    85.9
0.83           0.71        0.12      0.10              29    89.1
0.58           0.63        -0.05     0.11              30    92.2
0.78           0.86        -0.08     0.12              31    95.3
0.89           0.89        0.00      0.13              32    98.4
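The plotting positions in Table 4.10 can be generated directly from the residuals; this minimal sketch applies P_J = 100·(J − 0.5)/n to the sorted values:

```python
def normal_prob_positions(residuals):
    """Normal probability plotting positions, as in Table 4.10:
    a residual of rank J (1-based, sorted ascending) plots at
    P_J = 100 * (J - 0.5) / n percent."""
    n = len(residuals)
    return [(r, 100.0 * (j - 0.5) / n)
            for j, r in enumerate(sorted(residuals), start=1)]

# With n = 32, the smallest residual plots at 100 * 0.5 / 32 = 1.5625%
# and the largest at 98.4375%, matching the rounded values in Table 4.10.
```

On a normal probability plot, residuals from a normal distribution fall close to a straight line, which is the check applied in Figure 4.9.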
As observed in Figure 4.9, the residuals are scattered somewhat about the linear dotted line, implying that the normality assumption for the residuals is violated. Although the residuals close to the tails fall near the linear line, the residuals with values between -0.05 and 0.05 show considerable scatter and dispersion from the line. Ideally, all the residuals should lie close to the linear line. In this case, however, the normality assumption is significantly violated and a suitable variance-stabilizing transformation is necessary. After transformation of the initial responses, it is necessary to repeat the ANOVA and investigate the normality test for the newly obtained residuals. This procedure continues until a suitable transformation that satisfies the adequacy tests is attained.

Figure 4.9. Normal probability plot of SRF residuals

The most commonly used transformation equations, such as the square root (y* = √y), logarithmic (y* = ln y), and arcsine (y* = arcsin √y) transformations, were examined for the SRF responses. The ANOVA was repeated on the transformed responses obtained from each transformation technique; however, the normality test results were not satisfactory either. Therefore, a Box-Cox transformation was applied to the original responses. Using the STATISTICA software, the Box-Cox lambda (λ) was estimated to be 1.79. This value is used in Equation 4.2 to transform the initial values of SRF, where yi and y(λ)i represent the initial and transformed response values, respectively:

y(λ)i = (yi^λ − 1) / λ    for λ ≠ 0
y(λ)i = ln(yi)            for λ = 0                                       (4.2)

Table 4.11 presents the ANOVA on the transformed responses. As can be observed in this table, D1, DD1, and T1, along with the interactions D1T1 and D1DD1, are identified as the significant parameters, similar to the results obtained prior to the transformation.
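The Box-Cox transformation with the λ estimated for the SRF data can be sketched in a few lines. The simple, un-normalized form of the transformation is assumed here (STATISTICA may apply a geometric-mean scaling variant):

```python
import math

def box_cox(y, lam=1.79):
    """Box-Cox transformation with the lambda estimated for the SRF
    data (lambda = 1.79). Simple un-normalized form ASSUMED:
        y(lambda) = (y**lambda - 1) / lambda   for lambda != 0
        y(lambda) = ln(y)                      for lambda == 0
    Requires y > 0, which holds for SRF values.
    """
    if lam == 0:
        return math.log(y)
    return (y ** lam - 1.0) / lam

print(box_cox(1.0))   # 0.0  (SRF = 1, the stability threshold, maps to 0)
```

Because the transformation is monotonic for positive y, the ranking of realizations by SRF is preserved, which is why the same parameters remain significant after transformation.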
The ranking of the parameters' percentage contributions also remains the same, with trace length identified as the most significant parameter. However, the percentage contribution of D1 increases from 7% to 12%, while the contributions of the error and D1DD1 decrease marginally (Table 4.11). To validate the results obtained from the ANOVA on the transformed responses, the model adequacy tests are investigated. Figure 4.10 shows the plot of the transformed residuals versus the normal probability values. As seen in this figure, the residuals plot close to the linear dotted line, and the dispersion is noticeably decreased compared to the un-transformed residuals plotted in Figure 4.9. Therefore, the normality test for the transformed responses is satisfied. In addition to the normality tests, the residuals should be plotted against the realization running sequence and the fitted values. If no specific pattern is observed in the residuals versus running sequence plot and a constant variance is detected in the residuals versus fitted values plot, the model adequacy tests are all satisfied and the results summarized in Table 4.11 are reliable. Figure 4.11 and Figure 4.12 illustrate the plots of the residuals versus the realization running sequence and the fitted values, respectively. As observed in Figure 4.11, no specific pattern is identified between the residuals and the running sequence, and the residuals are well scattered, satisfying the criterion. In addition, Figure 4.12 provides no evidence of a non-constant variance pattern. Table 4.11.
ANOVA on the transformed SRF values (Box-Cox transformation)

Source of variability  SS    df  MS    F0    P     Percent contribution (%)
D1                     0.44  1   0.44  28.3  0.00  12.6
DD1                    0.13  1   0.13  8.41  0.02  3.76
T1                     1.79  1   1.79  114   0.00  51.0
D2                     0.06  1   0.06  3.77  0.08  1.68
DD2                    0.01  1   0.01  0.74  0.41  0.33
T2                     0.06  1   0.06  3.77  0.08  1.68
D1 DD1                 0.19  1   0.19  12.4  0.01  5.52
D1 T1                  0.52  1   0.52  33.3  0.00  14.9
DD1 T1                 0.03  1   0.03  1.82  0.21  0.81
D1 D2                  0.00  1   0.00  0.00  0.97  0.00
DD1 D2                 0.00  1   0.00  0.25  0.63  0.11
T1 D2                  0.02  1   0.02  1.11  0.32  0.50
D1 DD2                 0.01  1   0.01  0.53  0.48  0.24
DD1 DD2                0.02  1   0.02  1.48  0.25  0.66
T1 DD2                 0.00  1   0.00  0.19  0.67  0.08
D2 DD2                 0.04  1   0.04  2.40  0.15  1.07
D1 T2                  0.00  1   0.00  0.10  0.76  0.05
DD1 T2                 0.00  1   0.00  0.18  0.68  0.08
T1 T2                  0.01  1   0.01  0.49  0.50  0.22
D2 T2                  0.01  1   0.01  0.40  0.54  0.18
DD2 T2                 0.00  1   0.00  0.06  0.81  0.03
Error                  0.16  10  0.02              4.46
Total                  3.51  31

Figure 4.10. Normal probability plot of SRF residuals after transformation

From Figure 4.10 to Figure 4.12, it is concluded that the model adequacy tests for the transformed responses are satisfied and the results in Table 4.11 are validated.

Figure 4.11. Residuals versus realization running sequence

Figure 4.12. Residuals versus fitted values

The investigations in Section 4.5.1 concluded that the significant parameters controlling the estimated SRF values are the geometric parameters of set 1 only. Ranked by their contributions, they are: trace length, the trace length and dip interaction, dip, the dip and dip direction interaction, and dip direction.
Therefore, for further analysis of the slope stability, the geometric parameters of set 2, which were initially considered probabilistically, are fixed to their mean values, and additional realizations are generated with more focus on the significant parameters. This is discussed in Section 4.6.

4.5.2 Sensitivity analysis for the total displacement

As discussed earlier, the estimated maximum total displacement value for each realization is the second response that is added to the ANOVA table and is investigated to identify the significant parameters. The results obtained from the screening analysis on the displacement values are beneficial in understanding the influence and significance of the geometric parameters on the values of the maximum total displacement. Table 4.7 summarizes the maximum total displacement values obtained for each realization. The realization with the combination of “D1+, DD1-, T1+, D2-, DD2+, T2+” has the lowest total displacement (4.7 mm), while the realization with the combination of “D1+, DD1+, T1+, D2+, DD2+, T2+” has the highest total displacement (7.9 mm). A total displacement value of 6.8 mm is recorded for the mean realization. Figure 4.13 plots the values of the total displacement computed for each realization versus their corresponding SRF values. Although the realization with the lowest SRF has the lowest value of the total displacement, this trend is not repeated for the other realizations. In addition, no specific pattern that implies a distinctive relation between these two response values is observed in this figure. To identify the influential parameters on the estimated values of the total displacement, the main and interaction effect estimates and sums of squares, along with the percentage contribution of each variable, are calculated and summarized in Table 4.12.

Figure 4.13. Total displacement versus SRF

As presented in Table 4.12, the parameter D1 has the greatest influence on the computed values of the total displacement.
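The effect estimates and contribution percentages in screening tables such as Table 4.12 follow from simple contrast arithmetic on a two-level design. A minimal sketch with a made-up 2² design in D1 and T1 (the responses are illustrative, not the thesis data):

```python
# Effect of a factor in a two-level screening design:
#   effect = mean(response at +1) - mean(response at -1)
#   SS     = n * effect^2 / 4 for n total runs
#   percent contribution = 100 * SS / total SS

def effect_and_ss(levels, response):
    n = len(response)
    hi = [r for l, r in zip(levels, response) if l > 0]
    lo = [r for l, r in zip(levels, response) if l < 0]
    effect = sum(hi) / len(hi) - sum(lo) / len(lo)
    return effect, n * effect ** 2 / 4.0

# Illustrative 2^2 design in D1 and T1 (made-up responses, not Table 4.12 data)
d1 = [-1, +1, -1, +1]
t1 = [-1, -1, +1, +1]
disp = [5.3, 6.7, 5.4, 5.8]  # hypothetical total displacements (mm)

eff_d1, ss_d1 = effect_and_ss(d1, disp)
eff_t1, ss_t1 = effect_and_ss(t1, disp)
# Interaction SS omitted for brevity in this tiny illustration
pct_d1 = 100.0 * ss_d1 / (ss_d1 + ss_t1)
```

In the full 2⁶ screening design the same arithmetic is applied to every main effect and two-way interaction, with the higher-order interactions pooled into the error term as described in Section 3.4.2.2.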
Contrary to the sensitivity results obtained when SRF was the investigated response, the variability of the trace length of either set 1 or set 2 has no impact on the estimated values of the total displacement. In addition to D1, the effect estimated for the interaction of DD1 DD2 is noticeable, which implies the important role of dip direction variability. One of the main differences between the results obtained here and those obtained when SRF was the investigated response is the percentage contribution of error. When SRF is considered as the response value, the percentage of error is 6%, while this value increases to 19% when the total displacement values are studied. This can be explained by the fact that, as discussed in Section 3.4.2.2, the design in this analysis is unreplicated and therefore, to avoid zero degrees of freedom for the error, the higher-order interactions (involving more than two factors) are considered as error. Therefore, the percentage contribution calculated for the error variable can be caused by the cumulative contribution of the higher-order interactions. This may imply that the higher-order interactions have more influence on the values of total displacement than on the SRF values. Another possible reason may be the stress concentration problem mentioned previously. This may cause some systematic errors in the estimated values of the total displacement. These errors are also involved in the screening process. Table 4.12.
Effect estimate summary (response: total displacement)

Factor    Effect estimate   Sum of squares (SS)   Percentage contribution (%)   p-value
D1        1.17              10.9                  43.6                          0.0008
DD1       0.20              0.33                  1.31                          0.43
T1        -0.30             0.70                  2.78                          0.26
D2        0.23              0.41                  1.63                          0.38
DD2       0.11              0.10                  0.39                          0.67
T2        0.03              0.01                  0.02                          0.92
D1 DD1    0.35              1.00                  4.00                          0.19
D1 T1     -0.32             0.84                  3.35                          0.22
DD1 T1    0.35              1.01                  4.02                          0.18
D1 D2     0.06              0.03                  0.11                          0.82
DD1 D2    -0.13             0.14                  0.55                          0.61
T1 D2     -0.04             0.01                  0.04                          0.89
D1 DD2    -0.06             0.03                  0.10                          0.83
DD1 DD2   0.50              2.02                  8.06                          0.07
T1 DD2    0.09              0.06                  0.23                          0.74
D2 DD2    -0.15             0.19                  0.75                          0.55
D1 T2     0.10              0.08                  0.32                          0.70
DD1 T2    -0.24             0.47                  1.88                          0.35
T1 T2     0.27              0.59                  2.35                          0.30
D2 T2     0.39              1.19                  4.76                          0.15
DD2 T2    0.03              0.01                  0.02                          0.91
Error     -                 4.94                  19.71                         -
Total     -                 25.06                 -                             -

The p-values provided in Table 4.12 are obtained from the ANOVA analysis. The model adequacy tests are investigated and, since the three criteria are met, no further transformation is required. As can be seen in this table, the p-value corresponding to D1 is significantly less than the selected significance level (α = 0.05). This confirms that D1 is the most significant parameter affecting the total displacement values. Although DD1 DD2 has a high percentage contribution, its p-value equals 0.07, which is marginally greater than α; therefore, its effect is considered negligible.

4.5.3 Sensitivity analysis for the normalized yielded elements

The last important response for which the influential parameters are investigated is the normalized yielded elements. The value of the normalized yielded elements represents the percentage of the nodes in each realization that are connected to at least one yielded element. Models giving higher SRF values are those that must reduce the shear strength values more significantly to reach critical equilibrium; thus a higher percentage of normalized yielded elements is expected. Figure 4.14 shows the values of SRF versus the percentage of the normalized yielded elements.
As expected, a second-order relation between these two responses can be identified. Accordingly, the realization with the lowest SRF has the lowest percentage of yielded elements, and the realization with the highest SRF has the highest values of the normalized yielded elements. To identify the parameters having the most significant impact on the values of the normalized yielded elements, the main and interaction effect estimates, sums of squares, and percentage contributions are calculated and summarized in Table 4.13. Given that some relationship exists between the SRF and normalized yielded element values, it is anticipated that the parameters that strongly influence the SRF also affect the normalized yielded elements.

Figure 4.14. Normalized yielded elements versus SRF values (fitted quadratic: y = 3.44x^2 - 1.62x + 0.47, R2 = 0.96)

As can be seen in Table 4.13, T1, with a 47% contribution, is identified as the most significant parameter. D1 and D1 T1 are the other two important parameters, with 18% and 11% contributions, respectively. Besides the geometric parameters of set 1, T2 is identified as an influential source of variability. Although this parameter has the lowest percent contribution among the significant parameters, its corresponding null hypothesis is rejected in the ANOVA analysis (p-value = 0.024). This suggests that the variability of T2 should be considered if yielded element values are the response of concern. It should be noted again that, similar to the SRF and total displacement analyses, the model adequacy tests are investigated. Based on the obtained results, no further transformation is found necessary. Table 4.13.
Effect estimates summary (response: normalized yielded elements)

Factor    Effect estimate   Sum of squares (SS)   Percentage contribution (%)   p-value
D1        1.49              17.8                  18.1                          0.001
DD1       0.71              3.99                  4.07                          0.02
T1        -2.42             47.0                  47.9                          0.000003
D2        0.38              1.13                  1.15                          0.17
DD2       -0.15             0.17                  0.17                          0.58
T2        -0.68             3.73                  3.80                          0.02
D1 DD1    0.54              2.30                  2.34                          0.06
D1 T1     -1.19             11.4                  11.6                          0.0009
DD1 T1    0.50              2.01                  2.05                          0.08
D1 D2     -0.14             0.16                  0.16                          0.60
DD1 D2    -0.08             0.05                  0.05                          0.76
T1 D2     0.08              0.05                  0.05                          0.76
D1 DD2    -0.17             0.22                  0.22                          0.54
DD1 DD2   0.42              1.43                  1.46                          0.13
T1 DD2    0.00              0.00                  0.00                          0.99
D2 DD2    -0.30             0.73                  0.75                          0.27
D1 T2     -0.09             0.06                  0.07                          0.73
DD1 T2    0.19              0.29                  0.30                          0.47
T1 T2     -0.02             0.00                  0.00                          0.93
D2 T2     -0.20             0.31                  0.31                          0.46
DD2 T2    0.05              0.02                  0.02                          0.86
Error     -                 5.30                  5.41                          -
Total     -                 98.12                 -                             -

4.6 Generating Prediction Models

In Section 4.5, the influential parameters on the values of each studied response were identified. Using these parameters, prediction models can be constructed to evaluate the probability of failure and predict the stability of any arbitrary realization without the need for further numerical simulations. These models should be capable of predicting the values of the desired response for any arbitrary realization within an acceptable confidence level. To construct the prediction models for each investigated response, only the identified significant parameters are involved, and the rest of the parameters are kept fixed at their mean values. Since all the geometric parameters of set 1 are identified as the dominant parameters affecting the values of SRF and normalized yielded elements, and the dip of set 1 was identified as the most important parameter for the values of the total displacement, these variables are considered probabilistically at this stage. However, none of the geometric parameters of set 2 was characterized as being significant when SRF and total displacement were studied.
Although T2 was identified as an influential parameter on the values of the normalized yielded elements, its contribution was negligible and the obtained p-value was very close to the selected significance level. Therefore, all the geometric parameters of set 2, initially described probabilistically, are fixed to their distribution mean values at this stage. By decreasing the number of probabilistic variables, the prediction model can be generated according to the variability of the significant parameters only (Section 4.6.2). It should be noted that one important use of a prediction model is to evaluate the probability of failure. The probability of failure can be estimated from a best-fit distribution to the values of predicted SRF. It was concluded in Section 4.5.3 that there is a second-order relation between the SRF and yielded element values. Therefore, if a prediction model for the SRF values is constructed, it can also be used to predict the normalized yielded elements for any arbitrary realization. The maximum total displacement has no significant contribution to either evaluating the probability of failure or identifying the failure mechanism. Hence, in this study, only the procedure to construct a prediction model for the SRF values is discussed. Constructing the prediction models is performed in two steps. In the first step, Central Composite Design (CCD) is used to efficiently select more point estimates from the distributions of the significant parameters than were initially selected in the factorial design. Then CCD combines the newly selected point estimates to generate an efficient number of realizations. These realizations are numerically simulated in Phase2, and their corresponding response values are estimated. The theory of CCD was discussed in detail in Section 3.5.1. In the second step, ANOVA is performed on the CCD matrix that contains the CCD combinations and their corresponding SRF values.
Accordingly, the coefficients of the possible prediction functions are estimated. These prediction functions express the stability of the slope as a function of the geometric parameters of set 1. The relation between the SRF values and the significant parameters can be described in different orders. In this analysis, three types of prediction models are investigated: a) a second-order relation in which quadratic, linear, and interaction terms are considered (hereafter referred to as Q/L), b) a second-order relation in which the quadratic and interaction terms are involved (hereafter referred to as Q), and c) a relation which is constructed based on the linear and interaction terms of the significant parameters (hereafter referred to as L). These prediction functions can also be illustrated as response surfaces in a three-dimensional environment that depict the changes of the response versus each pair of input parameters (Section 4.6.4).

4.6.1 Central Composite Design (CCD)

In a general CCD, five point estimates are selected from each variable distribution. In the current study, the five point estimates are selected for the geometric parameters of set 1 only. These point estimates are later combined in an efficient pattern to construct the CCD design. For each variable, two of the selected point estimates are similar to those selected initially in the factorial design. These points (factorial points) were calculated using the PEM equations (xi-, xi+). The third point estimate is the variable mean value, termed the centre point. The other two point estimates are selected by adding and subtracting the axial distance, α, from the centre point. The axial distance is calculated using Equation 3.13. As discussed in Section 3.5.1, depending on the type of the region of interest, i.e. spherical or cuboid, a rotatable CCD or a face-centre CCD is constructed.
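The CCD point selection described above can be sketched by building the coded design matrix directly. The function below is a minimal illustration in coded units only; `face_centred` is a hypothetical flag, not part of the thesis workflow, used here to switch between the two variants.

```python
from itertools import product

def ccd(k, face_centred=False, centre_runs=2):
    """Central composite design in coded units for k factors.

    face_centred=False gives the rotatable design with axial distance
    alpha = (2^k)^(1/4); face_centred=True places the axial points on
    the faces of the factorial cube (alpha = 1).
    """
    alpha = 1.0 if face_centred else (2 ** k) ** 0.25
    runs = [list(p) for p in product([-1.0, 1.0], repeat=k)]  # factorial points
    for i in range(k):                                        # axial points
        for s in (-alpha, alpha):
            pt = [0.0] * k
            pt[i] = s
            runs.append(pt)
    runs += [[0.0] * k for _ in range(centre_runs)]           # centre runs
    return alpha, runs

alpha, design = ccd(3)  # 8 factorial + 6 axial + 2 centre = 16 realizations
```

For three significant parameters this reproduces the 16-run structure used in the following sections, with alpha ≈ 1.68 for the rotatable case; the coded runs are then mapped to natural values through the point estimates of each variable.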
The main difference between these two designs is the procedure by which the axial point estimates are calculated for each variable. Therefore, different realizations of the slope are generated in the rotatable and face-centre designs. Consequently, different simulations are run in Phase2, and different prediction models are constructed. To quantify the sensitivity of the obtained prediction models with respect to the rotatability of the CCD and to make a comparison between the prediction ability of the fitted models, each design is separately investigated.

4.6.2 Rotatable Central Composite Design (R-CCD)

In the rotatable design, the axial distance (α) is calculated using Equation 3.13. Since eight factorial points are involved in this analysis, α is equal to 1.68. Table 4.14 lists the five point estimates calculated for each geometric parameter of set 1 in terms of the coded values. The third and fourth columns of this table define the coded variables representing the axial points of each variable. These coded values are equal to the negative and positive values of the axial distance, respectively. The last column represents the mean value of each variable distribution. Table 4.15 summarizes the selected point estimates in terms of their natural values. As can be seen in this table, the values listed in the first and second columns (factorial points) are similar to what was initially selected by PEM and used in the screening process.

Table 4.14. Coded variables of five point estimates - rotatable design

Variable   Factorial point-   Factorial point+   Axial point-   Axial point+   Centre point
D1         -1                 +1                 -1.68          +1.68          0
DD1        -1                 +1                 -1.68          +1.68          0
T1         -1                 +1                 -1.68          +1.68          0

Table 4.15.
Natural variables of five point estimates - rotatable design

Variable   Factorial point-   Factorial point+   Axial point-   Axial point+   Centre point
D1         29.3               54.7               20.6           63.3           42
DD1        216                247                205            257            231
T1         6.3                14.1               3.6            16.7           10.2

Once the five levels are determined, CCD is used to generate representative combinations (realizations) of the obtained point estimates. Since three significant parameters are identified in this study, 14 realizations are generated. In addition, two centre runs, which combine the mean values of the input variables, are added to the design to satisfy the orthogonality characteristic of CCD. Each realization is numerically modeled in Phase2 and the corresponding SRF, total displacement, and normalized yielded elements are evaluated. Table 4.16 shows the rotatable and orthogonal CCD design and summarizes the response values obtained from the numerical simulations. As can be observed in this table, realization #10 has the highest SRF value, equal to 1.82, while realizations #6 and #14 have the lowest SRF value, equal to 0.55. It can be seen that in realization #10 the dip value is at its maximum level, whereas in realizations #6 and #14 the trace length is at the maximum level. This result attests to the important influence of T1 on the estimated SRF values. Figure 4.15 presents the geometry and the displacement contours of realizations #13 and #14, respectively. To obtain the coefficients of the prediction models for SRF, the CCD matrix is further analyzed by ANOVA. The main role of ANOVA in this analysis is to estimate the regression coefficients of each prediction model based on the main/interaction effect estimates of each variable and the order of the prediction model (Q/L, Q, or L). Moreover, ANOVA provides information about the significance of the linear and quadratic terms of each variable in terms of percentage contribution and p-values. This information is very useful for a general understanding of the dominant terms in a prediction model.
However, the selection of the best prediction model will not be limited to the ANOVA results and will be further analyzed in Section 4.7 by quantifying the prediction capability of each of the three obtained fitted models.

Table 4.16. R-CCD design with estimated responses

Realization ID    D1   DD1   T1   D2   DD2   T2   Elements (10^3)   SRF    Max total displacement (10^-3 m)   Normalized yielded elements (%)
1                 -1   -1    -1   0    0     0    228               1.1    5.3                                2.8
2                 +1   -1    -1   0    0     0    375               1.4    6.7                                4.45
3                 -1   +1    -1   0    0     0    230               1.06   5.7                                2.5
4                 +1   +1    -1   0    0     0    372               1.44   6.8                                4.66
5                 -1   -1    +1   0    0     0    208               0.85   5.4                                1.31
6                 +1   -1    +1   0    0     0    350               0.55   5.8                                0.74
7                 -1   +1    +1   0    0     0    211               0.79   5.3                                1.2
8                 +1   +1    +1   0    0     0    337               0.97   8.4                                2.14
9                 -α   0     0    0    0     0    233               1.12   5.3                                1.7
10                +α   0     0    0    0     0    315               1.82   8.6                                6.5
11                0    -α    0    0    0     0    423               0.66   5                                  1.03
12                0    +α    0    0    0     0    400               0.67   5.5                                1.01
13                0    0     -α   0    0     0    231               1.56   6.5                                7
14                0    0     +α   0    0     0    210               0.55   5.1                                0.81
15 (centre run)   0    0     0    0    0     0    200               0.71   6.8                                1.71
16 (centre run)   0    0     0    0    0     0    200               0.71   6.8                                1.71

Figure 4.15. Contours of total displacement for two realizations in the R-CCD (realization #13: T1 at -α, SRF = 1.56; realization #14: T1 at +α, SRF = 0.55)

Table 4.17 summarizes the effect estimates and the percentage contribution of the linear and quadratic terms of each variable, along with their corresponding p-values for the rotatable design. In this table, the linear and quadratic terms are labelled as “L” and “Q”, respectively. As can be seen in this table, dip direction values considered either as a linear or quadratic term in the fitted model are not significant in comparison with trace length and dip values. Hence, it is anticipated that eliminating this parameter from the prediction model has a negligible effect on its prediction capability.
On the other hand, the linear term of trace length has the highest percent contribution in comparison with the other involved parameters. Since the quadratic term of trace length does not contribute significantly to the obtained values of SRF, it is concluded that SRF changes linearly in association with the trace length values. As for the dip values, both the linear and quadratic terms are influential, yet the quadratic term contributes approximately three times more than the linear term. From the obtained results, it can be interpreted that a purely linear or purely quadratic prediction model may not sufficiently predict the values of SRF, as both quadratic and linear terms are influential. As for the interaction terms, no significant contribution is detected, yet D1 T1 has the highest effect estimate.

Table 4.17. ANOVA results for R-CCD, quadratic and linear, R2 = 0.94

Factor    Effect estimate   Sum of squares (SS)   Percentage contribution (%)   p-value
D1(L)     0.25              0.22                  9.90                          0.02
D1(Q)     0.51              0.60                  26.7                          0.001
DD1(L)    0.06              0.01                  0.47                          0.50
DD1(Q)    -0.06             0.01                  0.40                          0.53
T1(L)     -0.52             0.92                  41.1                          0.0004
T1(Q)     0.21              0.11                  4.74                          0.06
D1 DD1    0.14              0.04                  1.76                          0.21
D1 T1     -0.20             0.08                  3.58                          0.09
DD1 T1    0.09              0.02                  0.73                          0.40
Error     -                 0.12                  5.32                          -
Total     -                 2.23                  -                             -

To obtain a prediction model in which only quadratic and interaction terms (Q) are considered, Table 4.16 is used in the ANOVA analysis with the linear terms considered as a source of error rather than an individual source of variability. Hence, the results from ANOVA are different from the results in Table 4.17. These results are summarized in Table 4.18. According to this table, none of the investigated parameters is identified as a significant source of variability. Moreover, the obtained R2 is reduced noticeably from 0.94 to 0.4, and the percentage contribution of error is increased to 56%.
Since the most important parameters, T1(L) and D1(L), are ignored in the analysis and considered as sources of error, the obtained results are much less satisfactory than the previous model in which both quadratic and linear terms were included. It can be concluded that the linear terms (at least T1(L) and D1(L)) must be included in the candidate fitted model.

Table 4.18. ANOVA results for R-CCD, quadratic, R2 = 0.4

Factor    Effect estimate   Sum of squares (SS)   Percentage contribution (%)   p-value
D1(Q)     0.51              0.60                  26.7                          0.07
DD1(Q)    -0.06             0.01                  0.40                          0.81
T1(Q)     0.21              0.11                  4.74                          0.41
D1 DD1    0.14              0.04                  1.76                          0.61
D1 T1     -0.20             0.08                  3.58                          0.47
DD1 T1    0.09              0.02                  0.73                          0.74
Error     -                 1.27                  56.8                          -
Total     -                 2.23                  -                             -

To investigate the third fitted model (L), the linear and interaction terms are analyzed as main/interaction sources of variability in the ANOVA table while the quadratic terms are considered as the error. These results are summarized in Table 4.19. Among the studied sources of variability, only trace length is identified as a dominant term. Since the quadratic term of dip is not considered in this analysis, the R2 is much lower and the error contribution much higher than the values estimated for the Q/L fitted model (R2 changes from 0.94 to 0.6 and error changes from 5% to 42%). This implies the significant role of the quadratic terms in the fitted model.

Table 4.19. ANOVA results for R-CCD, linear, R2 = 0.6

Factor    Effect estimate   Sum of squares (SS)   Percentage contribution (%)   p-value
D1(L)     0.25              0.22                  9.90                          0.18
DD1(L)    0.06              0.01                  0.47                          0.76
T1(L)     -0.52             0.92                  41.1                          0.02
D1 DD1    0.14              0.04                  1.76                          0.56
D1 T1     -0.20             0.08                  3.58                          0.41
DD1 T1    0.09              0.02                  0.73                          0.70
Error     -                 0.95                  42.5                          -
Total     -                 2.23                  -                             -

The above preliminary assessment indicates that the SRF values change quadratically in response to the changes in dip and linearly in response to the changes in trace length.
Therefore, the candidate prediction model that considers both the quadratic and linear terms may provide a better estimation of the SRF values for an arbitrary realization. However, it should be noted that the obtained results and R2 values best describe the level of agreement between the fitted models and the simulated realizations, and do not provide sufficient information regarding the prediction capability of such models. Besides identifying the significant terms in the estimation of SRF values, the regression coefficients of the terms involved in each fitted model are calculated based on the effect estimate values. Table 4.20 summarizes the obtained coefficients for each type of prediction model along with their standard errors.

Table 4.20. Regression coefficients for three prediction models - R-CCD

                        Quadratic and linear       Quadratic                  Linear
Source of variability   Coefficient   Std. error   Coefficient   Std. error   Coefficient   Std. error
D1(L)                   -0.1832       0.0637       -             -            -0.0511       0.1362
D1(Q)                   0.0016        0.0003       0.0014        0.0004       -             -
DD1(L)                  0.0383        0.0889       -             -            -0.0206       0.0315
DD1(Q)                  -0.0001       0.0002       0.0000        0.0000       -             -
T1(L)                   -0.2960       0.2036       -             -            -0.1526       0.4476
T1(Q)                   0.0070        0.0030       0.0061        0.0038       -             -
D1 DD1                  0.0004        0.0003       -0.0003       0.0001       0.0004        0.0006
D1 T1                   -0.0020       0.0010       -0.0029       0.0014       -0.0020       0.0023
DD1 T1                  0.0007        0.0008       -0.0003       0.0004       0.0007        0.0019
Intercept               1.5710        10.9182      0.6458        0.4286       5.1558        7.3684

Regardless of the type of prediction model, the regression coefficients representing D1(L) and D1(Q) are negative and positive, respectively. This implies a linear decrease and/or a quadratic increase in the estimated values of SRF as the dip value assigned to set 1 increases. In a simple limit equilibrium calculation, an increase in the dip value increases the potential for sliding (D1(L)). However, it is observed that in more complicated computations, the quadratic component of the dip also contributes positively to the stability of the slope.
As for the trace length, the negative sign of T1(L) implies that as the value of this parameter increases, the stability of the slope decreases. This trend was observed previously in the SRF values obtained from simulating realizations of the CCD and factorial tables, in which the minimum SRF values correspond to the realizations that had higher values of trace length in their geometric combinations. Equations 4.2 to 4.4 employ the obtained regression coefficients in the prediction models corresponding to the Q/L, Q, and L functions. These functions are used to predict the SRF values for any arbitrary realization generated based on different combinations of the geometric parameters of set 1 within the region of interest.

SRF = 1.5710 - 0.1832 D1 + 0.0016 D1^2 + 0.0383 DD1 - 0.0001 DD1^2 - 0.2960 T1 + 0.0070 T1^2 + 0.0004 D1 DD1 - 0.0020 D1 T1 + 0.0007 DD1 T1    (4.2)

SRF = 0.6458 + 0.0014 D1^2 + 0.0000 DD1^2 + 0.0061 T1^2 - 0.0003 D1 DD1 - 0.0029 D1 T1 - 0.0003 DD1 T1    (4.3)

SRF = 5.1558 - 0.0511 D1 - 0.0206 DD1 - 0.1526 T1 + 0.0004 D1 DD1 - 0.0020 D1 T1 + 0.0007 DD1 T1    (4.4)

4.6.3 Face-centre CCD (F-CCD)

The axial points in a CCD design can be selected in different ways depending on the type of the region of interest. To quantify the influence of the design type on the prediction capability of the obtained models, a face-centre design is also investigated, and the three fitted models obtained from this design are analyzed. In the F-CCD, axial points beyond the faces of the factorial cube are not considered. Therefore, the F-CCD is defined as a three-level design as opposed to the R-CCD, which is a five-level design. Despite this difference, the face-centre design generates 16 realizations, 10 of which are similar to the realizations in the rotatable design (factorial combinations and face centre). The other six realizations are generated based on new combinations of the factorial and centre points that were not included in the rotatable design. Table 4.21 presents the realizations generated in the face-centre design along with the estimated responses for each realization. As can be seen in this table, the first eight combinations and the centre runs in the face-centre design are similar to the combinations generated in the R-CCD.
However, realizations 9 to 14 are obtained based on different combinations of the variables’ point estimates.

Table 4.21. F-CCD design with estimated responses

Realization ID    D1   DD1   T1   D2   DD2   T2   Elements (10^3)   SRF    Max total displacement (10^-3 m)   Normalized yielded elements (%)
1                 -1   -1    -1   0    0     0    228               1.1    5.3                                2.8
2                 +1   -1    -1   0    0     0    375               1.4    6.7                                4.45
3                 -1   +1    -1   0    0     0    230               1.06   5.7                                2.5
4                 +1   +1    -1   0    0     0    372               1.44   6.8                                4.66
5                 -1   -1    +1   0    0     0    208               0.85   5.4                                1.31
6                 +1   -1    +1   0    0     0    350               0.55   5.8                                0.74
7                 -1   +1    +1   0    0     0    211               0.79   5.3                                1.2
8                 +1   +1    +1   0    0     0    337               0.97   8.4                                2.14
9                 -1   0     0    0    0     0    216               0.86   5.6                                1.83
10                +1   0     0    0    0     0    201               1.3    7.2                                3.37
11                0    -1    0    0    0     0    225               0.72   5.7                                1.36
12                0    +1    0    0    0     0    216               0.65   6.6                                1.69
13                0    0     -1   0    0     0    215               1.32   7.2                                5.49
14                0    0     +1   0    0     0    205               0.59   5.3                                0.95
15 (centre run)   0    0     0    0    0     0    200               0.71   6.8                                1.71
16 (centre run)   0    0     0    0    0     0    200               0.71   6.8                                1.71

In this design, the most unstable realization is identified to be realization #6, with an SRF value equal to 0.55. This result is similar to what was achieved in the rotatable design. The highest SRF, i.e. 1.44, is associated with realization #4, which is lower than the highest SRF estimated in the rotatable design. Figure 4.16 illustrates the geometry and the total displacement contours of realization #4 in the face-centre design. It should be noted that neither of the extreme values of SRF, the highest nor the lowest, corresponds to the realizations specifically generated in the face-centre design.

Figure 4.16. Contours of total displacement for realization #4, face-centre CCD

ANOVA is performed to identify the significant terms in the prediction models obtained from the face-centre design. The effect estimates along with the percentage contributions and the corresponding p-values for the Q/L fitted model are summarized in Table 4.22. These results are in general agreement with the results summarized previously for the rotatable design in Table 4.17. However, the percentage contribution of T1(L) is increased by 8%.
Moreover, the linear dip term is not identified as a dominant parameter. From this table, it can be concluded that the values of SRF can be best predicted with a function that includes linear terms for trace length and quadratic terms for dip. When considering the interaction terms, none are identified as significant, although the interaction between the trace length and dip values has the highest percentage contribution.

Table 4.22. ANOVA results for face-centre CCD, quadratic and linear, R2 = 0.91

Factor    Effect estimate   Sum of squares (SS)   Percentage contribution (%)   p-value
D1(L)     0.20              0.10                  7.57                          0.06
D1(Q)     0.51              0.17                  12.8                          0.03
DD1(L)    0.06              0.01                  0.63                          0.54
DD1(Q)    -0.28             0.05                  3.96                          0.15
T1(L)     -0.51             0.66                  49.8                          0.001
T1(Q)     0.26              0.04                  3.31                          0.18
D1 DD1    0.14              0.04                  2.92                          0.21
D1 T1     -0.20             0.08                  6.03                          0.09
DD1 T1    0.09              0.02                  1.24                          0.39
Error     -                 0.12                  8.84                          -
Total     -                 1.33                  -                             -

The ANOVA results for the fitted models with quadratic/interaction terms (Q) and linear/interaction terms (L) are presented in Table 4.23 and Table 4.24, respectively. As anticipated, the percentage contribution of error increases noticeably when either the quadratic terms or the linear terms are eliminated. Since the linear term associated with the trace length has the highest percentage contribution, the results from the quadratic prediction model show the highest percentage error and the lowest R2 value.

Table 4.23. ANOVA results for face-centre CCD, quadratic, R2 = 0.33

Factor    Effect estimate   Sum of squares (SS)   Percentage contribution (%)   p-value
D1(Q)     0.51              0.17                  12.8                          0.22
DD1(Q)    -0.28             0.05                  3.94                          0.48
T1(Q)     0.26              0.04                  3.31                          0.52
D1 DD1    0.14              0.04                  2.90                          0.55
D1 T1     -0.20             0.08                  6.03                          0.39
DD1 T1    0.09              0.02                  1.29                          0.69
Error     -                 0.89                  66.8                          -
Total     -                 1.33                  -                             -

Table 4.24.
ANOVA results for face-centre CCD, linear, R2 = 0.7

Factor    Effect estimate   Sum of squares (SS)   Percentage contribution (%)   p-value
D1(L)     0.20              0.10                  7.57                          0.18
DD1(L)    0.06              0.01                  0.64                          0.68
T1(L)     -0.51             0.66                  49.8                          0.005
D1 DD1    0.14              0.04                  2.92                          0.39
D1 T1     -0.20             0.08                  6.03                          0.22
DD1 T1    0.09              0.02                  1.24                          0.57
Error     -                 0.42                  31.8                          -
Total     -                 1.33                  -                             -

According to the obtained effect estimates for each term involved in the prediction models, the regression coefficients and their standard errors are calculated. These values are summarized in Table 4.25 and their corresponding prediction functions are listed in Equations 4.5 to 4.7.

Table 4.25. Regression coefficients for three prediction models, F-CCD

                        Quadratic and linear       Quadratic                  Linear
Source of variability   Coefficient   Std. error   Coefficient   Std. error   Coefficient   Std. error
D1(L)                   -0.19         0.07         -             -            -0.05         0.09
D1(Q)                   0.0016        0.0005       0.0007        0.0005       -             -
DD1(L)                  0.25          0.16         -             -            -0.02         0.02
DD1(Q)                  -0.0006       0.0004       0.000020      0.000021     -             -
T1(L)                   -0.33         0.22         -             -            -0.15         0.30
T1(Q)                   0.0085        0.0057       0.0050        0.0060       -             -
D1 DD1                  0.0004        0.0002       -0.0001       0.0002       0.0004        0.0004
D1 T1                   -0.0020       0.0010       -0.0026       0.0013       -0.0020       0.0015
DD1 T1                  0.0007        0.0008       -0.0002       0.0006       0.0007        0.0013
Intercept               -22.36        18.70        0.76          0.46         5.15          4.94

SRF = -22.36 - 0.19 D1 + 0.0016 D1^2 + 0.25 DD1 - 0.0006 DD1^2 - 0.33 T1 + 0.0085 T1^2 + 0.0004 D1 DD1 - 0.0020 D1 T1 + 0.0007 DD1 T1    (4.5)

SRF = 0.76 + 0.0007 D1^2 + 0.000020 DD1^2 + 0.0050 T1^2 - 0.0001 D1 DD1 - 0.0026 D1 T1 - 0.0002 DD1 T1    (4.6)

SRF = 5.15 - 0.05 D1 - 0.02 DD1 - 0.15 T1 + 0.0004 D1 DD1 - 0.0020 D1 T1 + 0.0007 DD1 T1    (4.7)

From a quick comparison between the regression coefficients obtained from the rotatable and the face-centre designs for each of the three fitted models (Eqs. 4.2 to 4.7), it can be interpreted that eliminating the axial points beyond the faces of the design (as in the F-CCD) has a significant effect on the coefficients of the quadratic and/or linear terms of the variables. However, its influence on the coefficients of the interaction terms is almost negligible. This difference may alter the prediction capability of the fitted models obtained from the two different designs, as will be discussed in Section 4.7.
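As noted in Section 4.6, one use of a fitted prediction function is to estimate the probability of failure as P(SRF < 1) without further numerical modelling. The sketch below samples the set 1 parameters from assumed normal distributions (means taken as the centre points and standard deviations back-calculated as half the PEM spread in Table 4.15, an assumption of this sketch) and evaluates the Q/L function of Equation 4.2. Because the published coefficients are rounded (note the large standard error of the intercept in Table 4.20), this illustrates the procedure rather than reproducing the thesis numbers.

```python
import random

random.seed(1)

def srf_ql(d1, dd1, t1):
    # Q/L prediction function (Eq. 4.2, rounded published coefficients)
    return (1.5710 - 0.1832 * d1 + 0.0016 * d1 ** 2
            + 0.0383 * dd1 - 0.0001 * dd1 ** 2
            - 0.2960 * t1 + 0.0070 * t1 ** 2
            + 0.0004 * d1 * dd1 - 0.0020 * d1 * t1 + 0.0007 * dd1 * t1)

# Assumed normal distributions: mean = centre point, sd = half the PEM spread
N = 20000
fails = sum(
    1 for _ in range(N)
    if srf_ql(random.gauss(42.0, 12.7),     # dip, set 1
              random.gauss(231.0, 15.5),    # dip direction, set 1
              random.gauss(10.2, 3.9))      # trace length, set 1
    < 1.0
)
pof = fails / N  # estimated probability of failure, P(SRF < 1)
```

Sampling the inexpensive prediction function, rather than re-meshing and re-running the finite element model for each realization, is what makes this Monte-Carlo step affordable; a distribution can also be fitted to the predicted SRF values, as the thesis suggests.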
4.6.4 Response surfaces of the prediction models

Prediction models, obtained from the rotatable and the face-centre designs, can be presented as response surfaces. These surfaces are illustrations of the SRF values for all possible combinations of each pair of input variables. The curvature in the response surfaces reflects the order of the fitted model. Therefore, for a linear model with no interaction, no curvature is detected on the response surfaces, whereas for functions with either interaction terms or quadratic terms, curvature can be observed. Response surfaces are usually presented in three-dimensional plots that show the values of SRF versus any two input parameters, while fixing the other parameters at their mean values. The response surfaces can indicate the range within which SRF values change in the region of interest. Moreover, from the response surfaces, the realizations that yield the maximum and minimum SRF values, i.e. the realizations that create the worst- and best-case scenarios, can be identified.

4.6.4.1 Rotatable design

Figure 4.17 to Figure 4.19 show the response surfaces representing the three fitted models obtained from the rotatable design. Each response surface is plotted for two independent variables and one dependent variable (SRF). In the current analysis, three independent variables are investigated. Therefore, three response surfaces can be generated for each fitted model. In each response surface, the SRF values are calculated by considering one independent variable fixed at its mean value. In plots (a), (b), and (c) of each figure, D1, DD1, and T1 are fixed as shown in the legend. The blue points on each response surface represent the results of the simulated realizations for the design. These points were initially used to construct the fitted models. Figure 4.17 illustrates the three response surfaces constructed based on the Q/L fitted model.
Although the R2 evaluated for this fitted model (when all three variables are included) is very high (R2 = 0.94), large dispersion of the simulated points from the response surfaces is observed in Figure 4.17(a) and Figure 4.17(c), in which dip and trace length, respectively, are fixed at their mean values. This dispersion indicates the inability of the fitted model to describe the simulated results when one of the involved parameters is fixed at its mean value. The more significant the parameter, the greater the dispersion. Since the analyses of variance identified trace length and dip as more significant than dip direction, fixing those variables at their mean values results in less representative response surfaces, as in Figure 4.17(a) and Figure 4.17(c). In contrast, in Figure 4.17(b), in which DD1 is fixed at 231°, the dispersion is negligible and the simulated results lie close to the response surface. This trend is observed in all the response surfaces regardless of the order of the fitted models and the type of CCD design. The effect of the order of the fitted models can be discussed as well. The response surfaces for SRF obtained from the Q/L and Q fitted models are very similar in shape and range. When dip is the fixed parameter (Figure 4.17(a) and Figure 4.18(a)), the SRF values are stronger functions of trace length: as trace length increases, SRF decreases. In Figure 4.17(b) and Figure 4.18(b), however, dip and trace length contribute almost equally to the SRF values. In this case, the most critical realizations are generated by combining dip values close to the mean (42°) with high trace length values (more than 9 m). On the other hand, when the dip values lie within the range of 25° to 45°, all obtained realizations have SRF less than one regardless of the trace length value.
The stability of any other realization (outside the aforementioned range of dip values) is a direct function of trace length. In Figure 4.17(c) and Figure 4.18(c), in which trace length is the fixed parameter, the SRF values are mostly a direct function of dip, with values below 1 lying within the 25° to 45° dip range. The response surfaces obtained from the linear model, illustrated in Figure 4.19, are slightly different, both in shape and values, from their corresponding Q/L and Q fitted models. Since the quadratic term of dip, which contributes positively to the SRF values, is eliminated in the linear model, the SRF values are mostly underestimated, ranging from 0.3 to a maximum of 2.2. The trend of change in the SRF values shown in Figure 4.19(a) is similar to that in Figure 4.17(a) and Figure 4.18(a). However, in Figure 4.19(b) and Figure 4.19(c), the dip range within which instabilities occur is limited to shallow configurations (less than 25°) unless trace length increases. The other important observation from these three figures (Figure 4.17 to Figure 4.19) is that, for the same combination of input parameters, the level of dispersion of the simulated points from the response surfaces is almost identical for the Q/L and Q models, both noticeably lower than that observed for the L model. This result confirms the inability of a linear model to describe the SRF values as a function of the discontinuity geometric parameters.
Figure 4.17. Response surfaces for the Q/L model-R-CCD
Figure 4.18. Response surfaces for the quadratic model-R-CCD
Figure 4.19. Response surfaces for the linear model-R-CCD
4.6.4.2 Face-centre design
Response surfaces corresponding to the results obtained from the F-CCD are illustrated in Figure 4.20 to Figure 4.22. Since the axial points are not considered in this design, the region of interest is more limited than in the rotatable design.
Therefore, the maximum and minimum SRF values obtained are different. Because of this characteristic, the capability of these models to predict extreme combinations is questionable and is evaluated later. Despite the difference in the SRF values, their trend of change is similar to the rotatable design: an increase in trace length decreases the SRF values and, beyond a certain point (approximately 45°), an increase in dip increases SRF. Moreover, changes in dip direction do not alter the SRF contours on the response surfaces, which confirms its insignificance compared with the other independent variables. One important difference between the face-centre response surfaces and the corresponding rotatable response surfaces is the higher dispersion of the simulated points. Figure 4.17(b) and Figure 4.20(b) are considered the best response surfaces describing the rotatable and face-centre designs, respectively. A comparison between these two figures shows noticeably high dispersion of the simulated points for the face-centre design and much less dispersion for the rotatable design. From these observations it can be concluded that response surfaces are better constructed using simulated realizations in which axial points are considered. Therefore, not only the order of the fitted models but also the type of CCD design influences the capability of a fitted model to describe the simulated realizations. It should be noted that all the conclusions made in this section focus on the ability of the models to describe the already simulated realizations included in the CCD designs. Hence, these results cannot be directly extended to the prediction capability of the fitted models; this characteristic is investigated and quantified independently in Section 4.7.
Figure 4.20. Response surfaces for the quadratic/linear model-F-CCD
Figure 4.21.
Response surfaces for the quadratic model-F-CCD
Figure 4.22. Response surfaces for the linear model-F-CCD
4.7 Evaluating the Prediction Capability of the Fitted Models
Section 4.6 focused on constructing fitted models and response surfaces that best describe the behaviour of the simulated realizations. However, the main purpose of constructing such models is to predict the stability of any arbitrary realization within the region of interest, without the need for further numerical simulations. Although quantitative parameters such as R2 and estimated errors are indicators of the goodness of fit of the obtained models, they alone are not sufficient to provide information about a model's prediction capability. The fitted models should not only adequately represent the simulated realizations but also be capable of predicting the stability of both simulated and new realizations that are not included in the design. In this section, three parameters that evaluate the prediction capability of each fitted model are quantified. As discussed in Section 3.5.2, the three parameters are the variance of prediction, PRESS/R2prediction, and the mean squared error (MSE). The model(s) with the highest prediction capability will be used later to evaluate the probability of failure for the synthetic configuration.
4.7.1 Fitted models in the rotatable design
4.7.1.1 Variance of prediction
One important characteristic of a fitted model is its capability to maintain a consistent level of error for all realizations constructed within the region of interest. As discussed earlier in Section 3.5.2, this characteristic is quantified in terms of the variance of prediction, Var[ŷ(x0)], and is directly affected by the selection of the design points. Since the three fitted models (Q/L, Q, and L) in the rotatable design are constructed from the same design points, they share similar characteristics with respect to the variance of prediction.
To validate the adequacy of the rotatable design in terms of the variance of prediction, equal variances of prediction must be obtained for realizations that are equally distant from the centre of the region of interest. Using the design points in the R-CCD (Table 4.16), the correlation matrix (X′X) in terms of the natural variables is constructed. Twenty realizations within the region of interest (X0i, i = 1, 20) are also selected such that each pair or triplet of realizations is equidistant from the centre point. It should be noted that the dip direction value for all twenty realizations is fixed at its mean value, for simplicity of plotting the results in a 2D environment. Using Equation 3.16, the variances of prediction for the twenty selected realizations are calculated. Figure 4.23 presents a contour plot of the variance of prediction versus its corresponding X0i, where X0i represents the geometric combination of each of the twenty realizations in terms of dip and trace length. This plot is obtained by interpolation of the 20 computed values. The X0i values are kept at their natural values in the calculation but converted to coded variables to obtain equal scaling on the plot axes. Figure 4.23 shows that the contours are symmetrical over the region of interest and form concentric circles. This indicates that the rotatability condition is satisfied and that the constructed prediction models hold a consistent variance of prediction for all realizations equally distant from the centre of the design.
Figure 4.23. Contours of the variance of prediction for the R-CCD (axes: dip and trace length)
The shape of the variance of prediction contours remains unchanged for the three fitted models constructed in the R-CCD, within the specified region of interest. However, since the estimated model variances (σ2) used in the calculation differ (Eq. 2.16), the values of the variance of prediction contours vary.
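The rotatability check described above can be reproduced in a few lines: for a CCD whose axial distance satisfies α = (number of factorial points)^(1/4), the scaled prediction variance x0′(X′X)⁻¹x0 of a full second-order model depends only on the distance of x0 from the design centre. A sketch for two coded factors (as in Figure 4.23, with dip direction fixed):

```python
import numpy as np

# Sketch of the rotatability property: with alpha = (2^k)^(1/4), the scaled
# prediction variance x0' (X'X)^-1 x0 is the same for all points at equal
# radius from the centre, which is why the contours in Figure 4.23 are circles.
# Shown for k = 2 coded factors (e.g. dip and trace length).

def model_row(x1, x2):
    return np.array([1.0, x1, x2, x1 ** 2, x2 ** 2, x1 * x2])

a = 4 ** 0.25                                        # alpha = (2^2)^(1/4) = sqrt(2)
pts = ([(-1, -1), (1, -1), (-1, 1), (1, 1)]          # factorial points
       + [(a, 0), (-a, 0), (0, a), (0, -a)]          # axial points
       + [(0, 0)] * 3)                               # centre runs
X = np.array([model_row(*p) for p in pts])
XtX_inv = np.linalg.inv(X.T @ X)

def scaled_pred_var(x1, x2):
    x0 = model_row(x1, x2)
    return float(x0 @ XtX_inv @ x0)                  # Var[y_hat(x0)] / sigma^2

v_axis = scaled_pred_var(1.0, 0.0)                   # two points at radius 1 ...
v_diag = scaled_pred_var(1 / 2 ** 0.5, 1 / 2 ** 0.5)
# ... yield the same scaled prediction variance.
```

Multiplying the scaled value by the estimated model variance gives the variance of prediction itself, which is why the three fitted models share the contour shape but not the contour values.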
For the Q/L model, the variance of prediction changes from 0.0015 to 0.0035 (Figure 4.23), while for the Q and L models it changes from 0.07 to 0.33 and from 0.06 to 0.25, respectively (results not shown here). As anticipated, the variance of prediction for the Q/L model varies over the lowest range.
4.7.1.2 PRESS and R2prediction
As described in Section 3.5.2, to calculate the PRESS value for each model, one slope realization (treatment) at a time is eliminated from the initial R-CCD. The model coefficients are then re-estimated based on the reduced design. Using the new model, the stability of the eliminated slope realization is predicted and compared with the simulated (actual) result. The squared value of this difference is recorded as the prediction residual (error) corresponding to the eliminated slope realization. This procedure is repeated for all realizations involved in the R-CCD. The PRESS value and the corresponding R2prediction are then calculated by employing the estimated prediction errors in Equations 3.14 and 3.15. A high prediction error for a realization suggests a significant influence of that specific realization, as a design point, on the prediction capability of the fitted model. The prediction errors corresponding to the sixteen simulated realizations for each of the fitted models in the R-CCD, along with their percentages of error, are listed in Table 4.26.
Table 4.26.
Prediction errors for the fitted models in the rotatable design

Eliminated    Simulated   Predicted SRF         Prediction error     Percentage of error (%)
realization   SRF         Q/L    Q     L        Q/L   Q     L        Q/L   Q    L
1             1.10        1.29   0.8   1.2      0.0   0.1   0.0      3     7    0
2             1.40        1.64   1.7   1.5      0.1   0.1   0.0      4     8    1
3             1.06        0.84   1.2   0.7      0.0   0.0   0.1      5     3    12
4             1.44        1.87   1.6   1.7      0.2   0.0   0.1      13    3    5
5             0.85        0.56   0.6   0.4      0.1   0.0   0.2      10    5    21
6             0.55        0.91   1.2   0.8      0.1   0.4   0.0      24    70   8
7             0.79        0.70   0.9   0.6      0.0   0.0   0.1      1     2    7
8             0.97        0.92   0.6   0.8      0.0   0.2   0.0      0     16   4
9             1.12        1.37   1.1   0.7      0.1   0.0   0.2      6     0    20
10            1.82        1.38   1.2   1.0      0.2   0.3   0.7      11    18   39
11            0.66        0.46   0.7   1.1      0.0   0.0   0.2      6     1    23
12            0.67        0.68   0.9   1.2      0.0   0.0   0.3      0     6    39
13            1.56        1.29   1.2   1.4      0.1   0.1   0.0      5     7    2
14            0.55        0.63   0.6   0.6      0.0   0.0   0.0      1     0    0
15            0.71        0.72   0.7   1.0      0.0   0.0   0.1      0     0    13
16            0.71        0.72   0.7   1.0      0.0   0.0   0.1      0     0    13

As can be seen in this table, each of the three fitted models in the R-CCD produces a high percentage of error for several realizations. The criterion for the maximum acceptable error is based on expert judgment and is defined as 10% here. When the Q/L model is used to predict the eliminated realizations, only three realizations are predicted with noticeable error, the maximum being 24%. As the models change from Q/L to Q and then to L, the deficiency of the models in predicting the stability of the eliminated design points becomes more evident. For instance, the linear model could not sufficiently predict the stability behaviour of 8 realizations. The information summarized in Table 4.26 is used to estimate the PRESS and R2prediction values for each fitted model in the rotatable design. As anticipated from the obtained prediction errors, the Q/L model has the highest R2prediction while the linear model has a very low R2prediction (Table 4.27).
Table 4.27.
PRESS and R2prediction for the fitted models in the R-CCD

              Quadratic and linear   Quadratic   Linear
PRESS         0.9                    1.34        1.98
R2prediction  0.6                    0.4         0.11

4.7.1.3 Mean Squared Error (MSE)
Although in many problems R2prediction is sufficient to describe the prediction capability of a fitted model, it only represents the capability of the model to predict the stability of realizations that were initially used as design points. To evaluate the prediction capability of the obtained fitted models when the stability of a random realization is of concern, new realizations are randomly generated and their corresponding SRF values are predicted using the fitted models. The results are then compared with the actual SRF values. To fulfil this objective, 30 random realizations are generated by Monte-Carlo sampling from the distributions of D1, DD1, and T1. The geometric parameters of Set 2 are fixed at their mean values. The strength parameters assigned to the new realizations are identical to those used in the factorial design and CCD realizations. Moreover, to avoid the mesh sensitivity problem, all the new realizations are discretized with more than 200,000 elements, in accordance with Section 4.4.2. The 30 generated realizations are first simulated in Phase2 and their corresponding SRF values are found. Next, the fitted models are used to predict the SRF value of each randomly generated realization using Equations 4.2 to 4.4. The geometric combinations of the new realizations, along with their simulated and predicted SRF values, are summarized in Table 4.28. This table shows that the mean squared errors (MSE) for the Q and Q/L fitted models are almost equivalent. However, the MSE obtained for the linear fitted model, as expected, is almost twice as large as that of the other models.
Table 4.28.
Simulated and predicted SRF values for realizations selected with Monte-Carlo sampling

ID   D1   DD1   T1   Simulated   Q/L               Q                 L
                     SRF         Pred.    SE       Pred.    SE       Pred.    SE
1    46   233    8   0.92        0.95     0.00     0.96     0.00     1.18     0.07
2    35   223    8   0.77        0.88     0.01     0.81     0.00     1.06     0.09
3    48   236    9   1.00        0.94     0.00     0.93     0.00     1.16     0.03
4    48   224    9   1.07        0.94     0.02     0.97     0.01     1.15     0.01
5    30   222   10   0.83        0.85     0.00     0.76     0.01     0.93     0.01
6    46   223   12   0.61        0.63     0.00     0.67     0.00     0.87     0.07
7    49   235    9   1.05        0.98     0.00     0.98     0.01     1.18     0.02
8    31   224    9   0.87        0.89     0.00     0.79     0.01     0.98     0.01
9    45   238    9   1.06        0.83     0.05     0.84     0.05     1.10     0.00
10   55   218    9   1.18        1.13     0.00     1.25     0.00     1.16     0.00
11   47   234    9   1.05        0.95     0.01     0.95     0.01     1.17     0.02
12   50   233    9   1.10        1.00     0.01     0.99     0.01     1.17     0.00
13   34   232   10   0.76        0.76     0.00     0.75     0.00     0.94     0.03
14   45   236   10   0.75        0.78     0.00     0.77     0.00     1.04     0.09
15   36   220    8   0.77        0.83     0.00     0.77     0.00     1.05     0.08
16   47   224   11   0.81        0.68     0.02     0.72     0.01     0.93     0.01
17   40   224    9   0.91        0.79     0.02     0.78     0.02     1.06     0.02
18   46   221    9   0.89        0.81     0.01     0.85     0.00     1.08     0.04
19   51   241   10   1.09        0.93     0.02     0.88     0.04     1.11     0.00
20   32   240   12   0.79        0.71     0.01     0.76     0.00     0.82     0.00
21   29   238   12   0.82        0.79     0.00     0.84     0.00     0.77     0.00
22   34   222    9   0.81        0.82     0.00     0.75     0.00     1.00     0.04
23   22   237   12   1.05        1.09     0.00     1.11     0.00     0.72     0.11
24   38   225   10   0.76        0.70     0.00     0.69     0.00     0.96     0.04
25   48   216   11   0.92        0.67     0.06     0.79     0.02     0.92     0.00
26   49   242   12   0.75        0.77     0.00     0.70     0.00     0.96     0.04
27   33   232    6   1.08        1.05     0.00     1.00     0.01     1.09     0.00
28   36   239    7   1.09        0.93     0.03     0.96     0.02     1.09     0.00
29   45   232   10   0.79        0.75     0.00     0.76     0.00     1.02     0.05
30   42   224   11   0.67        0.66     0.00     0.69     0.00     0.95     0.08
                                 MSE = 0.1         MSE = 0.09        MSE = 0.18

The simulated versus predicted SRF values for the 30 randomly generated realizations are plotted in Figure 4.24 for the Q/L, Q, and L models. The error bars indicate the squared error values corresponding to each prediction. An upward error bar represents an overestimation of SRF, while a downward error bar represents an underestimation in the predicted values.
In addition, a high dispersion from the dotted line on the plot indicates a large difference between the simulated and predicted values.
Figure 4.24. Simulated vs. predicted SRF for the 30 random realizations, R-CCD (three panels: Q/L, Q, and L fitted models; axes: simulated SRF vs. predicted SRF, 0.0 to 2.0)
Considering the Q/L model, good agreement is observed between the simulated and predicted values. Two significant underestimations, related to realizations #9 and #25, are observed, with squared errors of 0.05 and 0.06, respectively. For the other realizations, the predictions are within an acceptable range (less than 10% error). Similar results are observed for the Q model. Although the total MSE values of the Q/L and Q models are very close (0.1 vs. 0.09), the predictions of the Q model are more clustered around the dotted line, specifically for SRF values less than 1. This can be explained by the fact that, in the Q/L model, the linear terms of trace length and dip have large negative coefficients. For realizations in which the trace length value, the dip value, or both are at their high levels (i.e. close to the tail of the distribution), the Q/L model underestimates SRF. This does not extend to the Q model, where the only negative influence comes from the interaction of dip and trace length, which has a low contribution. The linear fitted model gave unsatisfactory predictions: except for a few realizations, the stabilities are poorly predicted. Moreover, in the linear model, owing to the elimination of the quadratic dip term, most of the errors are positive, which indicates significant overestimation.
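The MSE comparison above can be illustrated with the first five realizations listed in Table 4.28 (a subset, so the values differ from the full-table MSEs of 0.1, 0.09, and 0.18):

```python
import numpy as np

# MSE comparison of the three fitted models, using the first five realizations
# of Table 4.28 (simulated SRF and the corresponding predicted SRF values).

simulated = np.array([0.92, 0.77, 1.00, 1.07, 0.83])
predicted = {
    "Q/L": np.array([0.95, 0.88, 0.94, 0.94, 0.85]),
    "Q":   np.array([0.96, 0.81, 0.93, 0.97, 0.76]),
    "L":   np.array([1.18, 1.06, 1.16, 1.15, 0.93]),
}
mse = {m: float(np.mean((p - simulated) ** 2)) for m, p in predicted.items()}
best = min(mse, key=mse.get)   # on this subset, too, the L model shows the largest error
```

Even on this subset, the linear model's systematic overestimation dominates its MSE, while the Q/L and Q models remain close to each other.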
The results obtained from the mean squared errors (MSE) of the 30 new realizations confirm the results summarized in Section 4.7.1.2. Both approaches showed that the linear fitted model is unable to satisfactorily predict the stability of an arbitrary realization, whereas the Q/L and Q fitted models could be used almost equally well for prediction purposes, with the Q/L model being slightly on the conservative side.
4.7.2 Fitted models in the face-centre design (F-CCD)
To compare the fitted models obtained from the rotatable and the face-centre CCDs, the three parameters investigated in Section 4.7.1 (variance of prediction, PRESS/R2prediction, and MSE) are re-estimated for the fitted models obtained from the face-centre design. This comparison is worth investigating because the choice of the design points may have a significant influence on the prediction capabilities of the fitted models.
4.7.2.1 Variance of prediction
For comparison purposes, the twenty realizations for which the variances of prediction were estimated in Section 4.7.1.1 are used again in this analysis. The corresponding variance of prediction for each of these realizations is re-estimated by substituting the correlation matrix of the rotatable design with the new correlation matrix constructed from the face-centre design points. Since the design points in the F-CCD are selected differently from the rotatable design, the correlation matrix has different entries. If the variances of prediction corresponding to realizations with equal distance from the design centre point are equal, the contour plot of the variance of prediction versus the factor levels of the twenty realizations must be symmetrical. It should be noted that, since the region of interest in the face-centre design is a cuboid, the contours of the variance of prediction are anticipated to form symmetrical squares rather than circles.
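The only difference between the two designs compared here lies in the axial distance of the coded design points. A sketch that generates both sets of 16 design points (8 factorial, 6 axial, 2 centre runs), assuming k = 3 factors as in this chapter:

```python
from itertools import product

# Sketch of the coded design points for the two CCDs compared here:
# alpha = (2^3)^(1/4) ~ 1.68 for the rotatable design versus alpha = 1 for the
# face-centred design, whose axial points lie on the faces of the cube.

def ccd_points(k=3, alpha=1.682, centre_runs=2):
    factorial = [list(p) for p in product([-1.0, 1.0], repeat=k)]   # 2^k corner points
    axial = []
    for j in range(k):
        for s in (-alpha, alpha):                                   # 2k axial points
            pt = [0.0] * k
            pt[j] = s
            axial.append(pt)
    centres = [[0.0] * k for _ in range(centre_runs)]
    return factorial + axial + centres

rotatable = ccd_points(alpha=(2 ** 3) ** 0.25)   # R-CCD: axial points outside the cube
face_centred = ccd_points(alpha=1.0)             # F-CCD: axial points on the faces
```

Because the F-CCD points never leave the unit cube, its fitted models see a smaller region of interest, which is consistent with the differences in extreme SRF values noted in Section 4.6.4.2.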
Here, the calculations are made using the natural variables for the factor levels and the correlation matrix. For plotting purposes, however, the obtained variances of prediction are plotted versus the coded values of the factor levels (Figure 4.25). As can be seen in Figure 4.25, the contours of the variance of prediction are symmetrical squares. Although in the face-centre design the region of interest is not spherical and the axial distance (α) is not selected according to the rotatability condition, the design maintains a consistent variance of prediction for realizations equally distant from the centre point. The shapes of these contours remain unchanged for the three fitted models in the F-CCD; however, their values differ because of the differences in the model variances. The variance of prediction for the Q/L fitted model in the F-CCD changes from 0.11 to 0.2, while this parameter changes from 0.08 to 0.15 and from 0.05 to 0.09 for the Q and L fitted models, respectively. It should be noted that the obtained variances of prediction for all of the fitted models in the face-centre design are noticeably higher than those of the equivalent fitted models in the rotatable design. This difference is mainly due to the higher model variance (σ2) in the face-centre design. Although the high model variance does not affect the symmetry of the obtained contours, it reduces the prediction capability of the fitted models obtained from the face-centre design.
Figure 4.25. Contours of the variance of prediction for the F-CCD
4.7.2.2 PRESS and R2prediction
To evaluate the capability of the three fitted models in predicting the stability of the realizations involved in the F-CCD, the values of PRESS and R2prediction are evaluated. Since the procedure is similar to that presented in Section 4.7.1.2, only the results are discussed here.
Table 4.29 summarizes the prediction errors estimated for the Q/L, Q, and L fitted models obtained from the face-centre design when one realization is eliminated at a time. Using these values along with Equations 3.14 and 3.15, the corresponding PRESS and R2prediction for each fitted model are also calculated (Table 4.30).

Table 4.29. Prediction errors for the fitted models in the face-centre design

Eliminated    Simulated   Predicted SRF          Prediction error     Percentage of error (%)
realization   SRF         Q/L    Q      L        Q/L   Q     L        Q/L   Q    L
1             1.10        0.8    0.88   1.02     0.1   0.0   0.0      8     4    1
2             1.40        1.56   1.40   1.18     0.0   0.0   0.0      2     0    3
3             1.06        0.76   1.20   0.48     0.1   0.0   0.3      8     2    32
4             1.44        1.93   1.67   1.47     0.2   0.1   0.0      17    4    0
5             0.85        0.45   0.59   0.60     0.2   0.1   0.1      19    8    7
6             0.55        0.93   1.22   0.50     0.1   0.4   0.0      26    82   0
7             0.79        0.72   0.88   0.39     0.0   0.0   0.2      1     1    20
8             0.97        0.70   0.53   0.41     0.1   0.2   0.3      8     20   32
9             1.12        1.37   1.12   0.65     0.1   0.0   0.2      6     0    20
10            1.82        1.38   1.24   0.98     0.2   0.3   0.7      11    18   39
11            0.66        0.46   0.73   1.05     0.0   0.0   0.2      6     1    23
12            0.67        0.68   0.87   1.18     0.0   0.0   0.3      0     6    39
13            1.56        1.29   1.23   1.38     0.1   0.1   0.0      5     7    2
14            0.55        0.63   0.55   0.56     0.0   0.0   0.0      1     0    0
15            0.71        0.72   0.72   1.01     0.0   0.0   0.1      0     0    13
16            0.71        0.72   0.72   1.01     0.0   0.0   0.1      0     0    13

Table 4.30. PRESS and R2prediction for the fitted models in the F-CCD

              Quadratic and linear   Quadratic   Linear
PRESS         1.14                   1.12        1.31
R2prediction  0.14                   0.1         0.007

From the above tables it can be concluded that the prediction capability of the fitted models constructed in the face-centre design is considerably lower than that of the fitted models obtained in the rotatable design. In both designs, the Q/L model has the highest prediction capability; however, the R2prediction corresponding to the Q/L model of the face-centre design is almost one-fifth of that of the equivalent model in the rotatable design. It is concluded that the fitted models obtained from the R-CCD can describe the stability of the design points more reliably.
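The leave-one-out procedure behind the PRESS and R2prediction values reported for both designs can be sketched as below; the design matrix and responses are synthetic placeholders, not the thesis data:

```python
import numpy as np

# Sketch of the leave-one-out PRESS / R2_prediction calculation: eliminate one
# design point, refit the model, predict the eliminated point, and accumulate
# the squared prediction residuals (Eqs. 3.14- and 3.15-style quantities).

def press_r2_prediction(X, y):
    n = len(y)
    press = 0.0
    for i in range(n):
        keep = np.arange(n) != i                       # eliminate one realization
        b, *_ = np.linalg.lstsq(X[keep], y[keep], rcond=None)
        press += float(y[i] - X[i] @ b) ** 2           # squared prediction residual
    ss_total = float(np.sum((y - y.mean()) ** 2))
    return press, 1.0 - press / ss_total

# Placeholder second-order (Q/L-type) model matrix and noisy responses:
rng = np.random.default_rng(1)
x = rng.uniform(-1.68, 1.68, size=(20, 2))
X = np.column_stack([np.ones(20), x, x ** 2, x[:, 0] * x[:, 1]])
y = X @ np.array([1.0, -0.1, 0.05, 0.02, 0.01, -0.03]) + rng.normal(0, 0.02, 20)
press, r2_pred = press_r2_prediction(X, y)
```

For linear least-squares models this refit-every-time loop gives the same result as the closed-form PRESS based on leverages; the explicit loop mirrors the procedure as described in Section 4.7.1.2.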
4.7.2.3 Mean Squared Error (MSE)
To evaluate the ability of the fitted models to predict the stability of any arbitrary realization, the SRF values corresponding to the 30 realizations of Section 4.7.1.3 are predicted. The predicted values, along with their corresponding squared errors and mean squared errors, are summarized in Table 4.31. The predicted SRFs and their corresponding simulated SRFs are also plotted in Figure 4.26 for the three fitted models.

Table 4.31. Simulated and predicted SRF values for realizations selected with Monte-Carlo sampling

ID   D1   DD1   T1   Simulated   Q/L              Q                L
                     SRF         Pred.    SE      Pred.    SE      Pred.    SE
1    46   233    8   0.92        0.99     0.00    1.00     0.01    1.11     0.04
2    35   223    8   0.77        1.07     0.09    0.86     0.01    1.02     0.06
3    48   236    9   1.00        0.94     0.00    0.97     0.00    1.09     0.01
4    48   224    9   1.07        0.96     0.01    0.98     0.01    1.07     0.00
5    30   222   10   0.83        1.07     0.06    0.79     0.00    0.94     0.01
6    46   223   12   0.61        0.65     0.00    0.74     0.02    0.83     0.05
7    49   235    9   1.05        0.99     0.00    1.00     0.00    1.11     0.00
8    31   224    9   0.87        1.11     0.06    0.81     0.00    0.96     0.01
9    45   238    9   1.06        0.83     0.05    0.90     0.03    1.04     0.00
10   55   218    9   1.18        1.06     0.01    1.12     0.00    1.05     0.02
11   47   234    9   1.05        0.97     0.01    0.99     0.00    1.11     0.00
12   50   233    9   1.10        1.01     0.01    1.00     0.01    1.09     0.00
13   34   232   10   0.76        0.87     0.01    0.79     0.00    0.92     0.02
14   45   236   10   0.75        0.79     0.00    0.83     0.01    0.98     0.05
15   36   220    8   0.77        0.99     0.05    0.83     0.00    1.02     0.06
16   47   224   11   0.81        0.70     0.01    0.77     0.00    0.88     0.00
17   40   224    9   0.91        0.89     0.00    0.84     0.00    1.01     0.01
18   46   221    9   0.89        0.84     0.00    0.89     0.00    1.02     0.02
19   51   241   10   1.09        0.91     0.03    0.90     0.03    1.03     0.00
20   32   240   12   0.79        0.75     0.00    0.79     0.00    0.80     0.00
21   29   238   12   0.82        0.88     0.00    0.84     0.00    0.78     0.00
22   34   222    9   0.81        0.99     0.03    0.80     0.00    0.98     0.03
23   22   237   12   1.05        1.26     0.04    0.98     0.00    0.76     0.08
24   38   225   10   0.76        0.81     0.00    0.77     0.00    0.94     0.03
25   48   216   11   0.92        0.63     0.09    0.80     0.01    0.87     0.00
26   49   242   12   0.75        0.76     0.00    0.76     0.00    0.88     0.02
27   33   232    6   1.08        1.28     0.04    1.00     0.01    1.02     0.00
28   36   239    7   1.09        1.02     0.00    0.98     0.01    1.03     0.00
29   45   232   10   0.79        0.78     0.00    0.82     0.00    0.96     0.03
30   42   224   11   0.67        0.72     0.00    0.76     0.01    0.92     0.06
                                 MSE = 0.15       MSE = 0.09       MSE = 0.15

Figure 4.26. Simulated vs. predicted SRF for the 30 random realizations, F-CCD (three panels: Q/L, Q, and L fitted models; axes: simulated SRF vs. predicted SRF, 0.0 to 2.0)

According to the MSE values in Table 4.31 and the plots in Figure 4.26, the predicted SRFs obtained from the Q/L and L models are not in good agreement with the simulated values. In the linear model, most SRF values are noticeably overestimated. In the Q/L model, when the SRF values range between 0.6 and 0.8, the predictions are in good agreement with the simulated values; however, the fitted model is not capable of satisfactorily predicting higher SRF values. The fitted Q model is capable of predicting the stability of all realizations equally acceptably, with the highest error being 0.04. Although the R2prediction of the Q/L fitted model (Table 4.30) is slightly higher than that of the Q model, the latter predicts the stability of the randomly generated realizations much better than the former. To conclude, considering the values of R2prediction and MSE for each fitted model in either design, the Q/L and Q fitted models in the R-CCD better predict the stability of realizations generated within the region of interest than the corresponding fitted models in the face-centre design. Therefore, the type of design affects the prediction capability of the prediction models, and, to construct more reliable models, the region of interest associated with the geometric parameters of the discontinuities is better described spherically.
4.8 Curse of Dimensionality
As discussed earlier in Section 3.4.1, one of the important shortcomings of the point estimate method is the curse of dimensionality. This shortcoming becomes more of an issue when the distributions are highly skewed and/or the number of input variables exceeds seven. In these scenarios, the point estimates are mainly selected from the tails of the distributions, where the frequency of occurrence is low (Baecher & Christian 2003). In the current study, the distributions representing the input variables are highly skewed and the initial number of variables (prior to the screening phase) is six. Therefore, the obtained point estimates may suffer from the curse of dimensionality. Since the design points in a CCD (either rotatable or face-centre) are selected based on the point estimates of the variable distributions, the obtained fitted (prediction) models may also be affected by the curse of dimensionality. Hence, it is necessary to quantify this effect and evaluate its significance on the prediction capability of the obtained fitted models. As suggested by Baecher and Christian (2003), one possible approach to reduce the effect of the curse of dimensionality is to identify the significant parameters and re-calculate the point estimates considering only those parameters. Consequently, the number of variables used in the PEM equations (N) is reduced to the number of significant parameters, and the new point estimates lie closer to the distribution means, where a higher frequency of occurrence exists. In the current study, to evaluate the significance of the curse of dimensionality, new point estimates representing the significant parameters are calculated. Accordingly, new design points and a new CCD are generated. The new design points, which are in fact new realizations of the slope, are simulated in Phase2.
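The pull of the point estimates toward the mean as N decreases can be illustrated under the assumption that Equation 3.5 has the Harr-type form x± = μ ± √N·σ for uncorrelated variables; the standard deviation used below is a placeholder, not a thesis input:

```python
import math

# Illustration of the curse-of-dimensionality remedy described above, assuming
# a Harr-type point-estimate formula x(+/-) = mean +/- sqrt(N) * sigma.
# sigma_dip is a placeholder value, not taken from the thesis data.

def point_estimates(mean, sigma, n_vars):
    shift = math.sqrt(n_vars) * sigma
    return mean + shift, mean - shift

sigma_dip = 5.0                                   # placeholder, degrees
hi6, lo6 = point_estimates(42.0, sigma_dip, 6)    # all six variables retained
hi3, lo3 = point_estimates(42.0, sigma_dip, 3)    # only the significant variables
# Reducing N pulls both estimates toward the mean, where the frequency of
# occurrence is higher, consistent with the trend shown in Table 4.32.
```

The skewed distributions used in the thesis shift the two estimates asymmetrically, so the sketch reproduces the trend of Table 4.32 rather than its exact values.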
Next, the Q/L, Q, and L fitted models are constructed in the new CCD, and their corresponding variance of prediction, R2prediction, and MSE are calculated and compared with the results obtained from the initial CCD (discussed in Section 4.7.1). It should be noted that the new CCD is rotatable and is compared with the initial rotatable CCD (R-CCD).
4.8.1 New point estimates
In the screening process (Section 4.5.1), three input variables, D1, DD1, and T1, were identified as the significant parameters. The new point estimates are re-calculated for these three variables using Equation 3.5, with the number of variables (N) in this equation reduced from six to three. The new point estimates are summarized in Table 4.32; the initially selected point estimates from Section 4.3 are also provided for comparison. The table shows that the new point estimates of the significant parameters are closer to the mean values.

Table 4.32. New point estimates for the significant parameters D1, DD1 and T1

                              Initial point estimates   New point estimates
Variable              Mean    (xi+)      (xi-)          (xi+)            (xi-)
Set 1
  Dip                 42°     54.7°      29.3°          50°              34°
  Dip direction       231°    246.8°     215.6°         242°             220°
  Trace length (m)    10      14.1       6.3            12               8
Set 2
  Dip                 84°     86.8°      80°            non-significant
  Dip direction       269°    290.3°     247.7°         non-significant
  Trace length (m)    4       5          3              non-significant

The new point estimates of the orientation parameters (dip and dip direction) are plotted on an equal-angle lower-hemisphere stereonet to show their location relative to the Set 1 cluster boundary (Figure 4.27). This figure can be compared with Figure 4.3, presented in Section 4.3, to illustrate the effect of the curse of dimensionality on the location of the selected point estimates relative to the orientation measurements of Set 1. Figure 4.27.
New point estimates of Set 1 orientations on the stereonet

4.8.2 Rotatable CCD based on the new point estimates

Based on the newly obtained point estimates for the significant parameters involved in the stability assessment of the synthetic configuration, the new design points for the CCD are generated. The factorial points in this design are equivalent to the new point estimates, and the axial points are calculated using Equation 3.13. Since the number of factorial points is the same as in the initial CCD, α is kept the same (i.e., 1.68). The new CCD is summarized in Table 4.33 in terms of the coded variables. Each row is one possible combination of the design points and is considered as one new realization of the slope.

The lowest and highest values of SRF are obtained from simulating realization #14 and realization #7, equal to 0.52 and 1.32, respectively. Similar to the results obtained from the R-CCD, in realization #14 the trace length is at its highest level. Considering the negative dominance of trace length on the values of SRF, minimum stability was expected for this realization. Although the highest SRF in the R-CCD was obtained from realization #13, in the new CCD realization #7 is identified as the most stable combination. Nevertheless, in the new CCD the SRF value of realization #13 is still very high, equal to 1.26.

To illustrate the influence of the new point estimates on the geometry of the numerical models and on the total displacement contours of the corresponding realizations, the Phase2 results for realization #2 in the two CCDs are shown in Figure 4.28. Realization #2 in the new CCD has shallower discontinuities and a higher trace length value for Set 1. The estimated SRF in the new CCD is almost 7% lower than for the corresponding realization in the initial CCD. Table 4.33.
CCD design based on the new point estimates

| Realization ID | D1 | DD1 | T1 | D2 | DD2 | T2 | Number of elements (10³) | SRF |
| 1 | -1 | -1 | -1 | 0 | 0 | 0 | 216 | 0.92 |
| 2 | +1 | -1 | -1 | 0 | 0 | 0 | 222 | 1.31 |
| 3 | -1 | +1 | -1 | 0 | 0 | 0 | 220 | 0.76 |
| 4 | +1 | +1 | -1 | 0 | 0 | 0 | 204 | 1.32 |
| 5 | -1 | -1 | +1 | 0 | 0 | 0 | 200 | 0.73 |
| 6 | +1 | -1 | +1 | 0 | 0 | 0 | 201 | 0.61 |
| 7 | -1 | +1 | +1 | 0 | 0 | 0 | 210 | 0.69 |
| 8 | +1 | +1 | +1 | 0 | 0 | 0 | 207 | 0.87 |
| 9 | -α | 0 | 0 | 0 | 0 | 0 | 219 | 0.85 |
| 10 | +α | 0 | 0 | 0 | 0 | 0 | 365 | 1.21 |
| 11 | 0 | -α | 0 | 0 | 0 | 0 | 200 | 0.69 |
| 12 | 0 | +α | 0 | 0 | 0 | 0 | 206 | 0.68 |
| 13 | 0 | 0 | -α | 0 | 0 | 0 | 203 | 1.26 |
| 14 | 0 | 0 | +α | 0 | 0 | 0 | 205 | 0.57 |
| 15 (centre run) | 0 | 0 | 0 | 0 | 0 | 0 | 200 | 0.71 |
| 16 (centre run) | 0 | 0 | 0 | 0 | 0 | 0 | 200 | 0.71 |

Figure 4.28. Contours of total displacement for realization #2, initial and new CCD. (New CCD realization #2: dip #1 = 50°, dip direction #1 = 220°, trace length #1 = 8 m, dip #2 = 83°, dip direction #2 = 269°, trace length #2 = 4 m, SRF = 1.31. Initial CCD realization #2: dip #1 = 55°, dip direction #1 = 215°, trace length #1 = 6 m, dip #2 = 83°, dip direction #2 = 269°, trace length #2 = 4 m, SRF = 1.41.)

To identify the significance of the quadratic and linear terms of the three parameters in the new Q/L, Q, and L fitted models, an Analysis of Variance (ANOVA) is performed. Table 4.34, Table 4.35, and Table 4.36 summarize the ANOVA results for these models. A comparison between the ANOVA tables of the new CCD (Table 4.34) and the initial CCD (Table 4.17) shows that, regardless of the values of the design points, the significant main effects are the linear and quadratic terms of dip (D1) and the linear term of trace length (T1). The latter, in both models, contributes significantly more to the values of SRF. In the ANOVA results obtained from the new CCD, however, the contribution of error is smaller than in the ANOVA analysis of the initial CCD (0.6% vs. 5%). This may suggest that in the new analysis the contributions of the higher-level interactions, included in the estimated error, are negligible (Section 3.4.2). Table 4.34.
ANOVA results for the new CCD, Q/L model, R2 = 0.99

| Factor | Effect estimate | Sum of squares (SS) | Contribution (%) | p-value |
| D1(L) | 0.24 | 0.19 | 18.9 | 0.00 |
| D1(Q) | 0.23 | 0.13 | 12.5 | 0.00 |
| DD1(L) | 0.01 | 0.00 | 0.02 | 0.68 |
| DD1(Q) | -0.01 | 0.00 | 0.02 | 0.72 |
| T1(L) | -0.37 | 0.49 | 48.1 | 0.00 |
| T1(Q) | 0.14 | 0.05 | 5.24 | 0.00 |
| D1·DD1 | 0.12 | 0.03 | 2.74 | 0.00 |
| D1·T1 | -0.22 | 0.10 | 9.82 | 0.00 |
| DD1·T1 | 0.09 | 0.02 | 1.70 | 0.01 |
| Error | - | 0.01 | 0.67 | - |
| Total | - | 1.01 | - | - |

Table 4.35. ANOVA results for the new CCD, Q model, R2 = 0.3

| Factor | Effect estimate | Sum of squares (SS) | Contribution (%) | p-value |
| D1(Q) | 0.23 | 0.13 | 12.5 | 0.23 |
| DD1(Q) | -0.01 | 0.00 | 0.02 | 0.96 |
| T1(Q) | 0.14 | 0.05 | 5.24 | 0.43 |
| D1·DD1 | 0.12 | 0.03 | 2.74 | 0.56 |
| D1·T1 | -0.22 | 0.10 | 9.82 | 0.28 |
| DD1·T1 | 0.09 | 0.02 | 1.70 | 0.65 |
| Error | - | 0.68 | 7.53 | - |
| Total | - | 1.01 | - | - |

Table 4.36. ANOVA results for the new CCD, L model, R2 = 0.8

| Factor | Effect estimate | Sum of squares (SS) | Contribution (%) | p-value |
| D1(L) | 0.24 | 0.19 | 18.9 | 0.01 |
| DD1(L) | 0.01 | 0.00 | 0.02 | 0.92 |
| T1(L) | -0.37 | 0.49 | 48.1 | 0.00 |
| D1·DD1 | 0.12 | 0.03 | 2.74 | 0.28 |
| D1·T1 | -0.22 | 0.10 | 9.82 | 0.06 |
| DD1·T1 | 0.09 | 0.02 | 1.70 | 0.39 |
| Error | - | 0.19 | 2.07 | - |
| Total | - | 1.01 | - | - |

Table 4.35 presents the ANOVA results for the case in which only quadratic and interaction terms are considered. This model is used to estimate the regression coefficients of the Q fitted model. As seen in this table, none of the quadratic terms is significant, yet D1 has the highest percentage contribution. The main difference between these results and those obtained from the initial CCD with quadratic terms (Table 4.18) is the percentage contribution of the error, which is significantly lower in the new CCD (8% vs. 52%). Table 4.36 summarizes the ANOVA results for a CCD in which the linear terms and interactions are considered as the sources of variability. Similarly, the estimated error contribution is remarkably lower than for the corresponding model constructed in the R-CCD (2.07% vs. 42%).
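The percentage contributions in the ANOVA tables are each term's sum of squares divided by the total sum of squares. A minimal sketch, using the (rounded) sums of squares reported in Table 4.34 for the Q/L model, so the computed percentages only approximate the tabulated ones:

```python
# Percentage contribution of each ANOVA term: SS(term) / SS(total) * 100.
# The sums of squares below are the rounded values from Table 4.34 (Q/L model),
# so the percentages differ slightly from the tabulated ones.
sum_of_squares = {
    "D1(L)": 0.19, "D1(Q)": 0.13, "DD1(L)": 0.00, "DD1(Q)": 0.00,
    "T1(L)": 0.49, "T1(Q)": 0.05, "D1*DD1": 0.03, "D1*T1": 0.10,
    "DD1*T1": 0.02, "Error": 0.01,
}
total_ss = sum(sum_of_squares.values())  # ~1.02 from rounded SS; thesis reports 1.01

contribution = {term: 100.0 * ss / total_ss for term, ss in sum_of_squares.items()}

# The linear trace-length term dominates (~48%), matching the thesis
# conclusion that trace length is the strongest influence on SRF.
dominant = max(contribution, key=contribution.get)
print(dominant, round(contribution[dominant], 1))
```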
The significant difference between the estimated error values in the new CCD and the R-CCD results in a remarkable improvement in the R2 values of all three fitted models in the new CCD. This impact can be readily observed in the low dispersion of the design points around the response surfaces of each fitted model in the new design. Figure 4.29 demonstrates this for the response surfaces obtained from the Q/L, Q, and L models, with dip direction as the fixed parameter.

Figure 4.29. Response surfaces (SRF vs. dip and trace length) for the Q/L, Q, and L fitted models in the new CCD, with dip direction fixed at DD1 = 231°

Using the new CCD, the regression coefficients of the Q/L, Q, and L models are estimated. Table 4.37 summarizes the regression coefficients for the terms included in the Q/L, Q, and L fitted models. Using these values, the prediction models are constructed as described in Equations 4.8 to 4.10. As expected, when the linear term of trace length (T1(L)) is included in the fitted model, it has the highest regression coefficient, due to its dominance on the values of SRF. The linear and quadratic terms of dip have the second and third highest coefficients in their corresponding fitted models.

Table 4.37. Regression coefficients for the three prediction models, new CCD

| Source of variability | Q/L coefficient | Q/L std. error | Q coefficient | Q std. error | L coefficient | L std. error |
| D1(L) | -0.2235 | 0.0352 | - | - | -0.0699 | 0.1380 |
| D1(Q) | 0.0018 | 0.0002 | 0.0014 | 0.0004 | - | - |
| DD1(L) | -0.0328 | 0.0432 | - | - | -0.0487 | 0.0339 |
| DD1(Q) | 0.0000 | 0.0001 | 0.0000 | 0.0000 | - | - |
| T1(L) | -0.6410 | 0.1388 | - | - | -0.2862 | 0.5534 |
| T1(Q) | 0.0177 | 0.0026 | 0.0129 | 0.0061 | - | - |
| D1·DD1 | 0.0007 | 0.0001 | -0.0001 | 0.0002 | 0.0007 | 0.0006 |
| D1·T1 | -0.0070 | 0.0007 | -0.0087 | 0.0019 | -0.0070 | 0.0032 |
| DD1·T1 | 0.0021 | 0.0005 | 0.0001 | 0.0006 | 0.0021 | 0.0023 |
| Intercept | 12.5039 | 5.4752 | 0.8531 | 0.2784 | 9.5060 | 7.9426 |

SRF(Q/L) = 12.5039 − 0.2235·D1 + 0.0018·D1² − 0.0328·DD1 + 0.0000·DD1² − 0.6410·T1 + 0.0177·T1² + 0.0007·D1·DD1 − 0.0070·D1·T1 + 0.0021·DD1·T1   (4.8)

SRF(Q) = 0.8531 + 0.0014·D1² + 0.0000·DD1² + 0.0129·T1² − 0.0001·D1·DD1 − 0.0087·D1·T1 + 0.0001·DD1·T1   (4.9)

SRF(L) = 9.5060 − 0.0699·D1 − 0.0487·DD1 − 0.2862·T1 + 0.0007·D1·DD1 − 0.0070·D1·T1 + 0.0021·DD1·T1   (4.10)

4.8.3 Analyzing the prediction capability of the fitted models

The variance of prediction, PRESS/R2prediction, and MSE are evaluated for the three fitted models in the new CCD. These values are later used to compare the new and initial fitted models and thereby evaluate the significance of the curse of dimensionality. The procedure used at this stage is similar to that in Section 4.7.

4.8.3.1 Variance of the prediction

The shape and symmetry of the variance of prediction contours are the first parameters to be investigated. Although these parameters do not describe the prediction capability of the fitted models, they evaluate the consistency of the obtained variances of prediction with respect to the realization distance from the design centre. Since the axial points in the new CCD are selected to satisfy rotatability, it is anticipated that the variance of prediction contours form concentric, symmetrical circles. The realizations used in Section 4.7.2.1 to estimate the variance of prediction are used here as well. The corresponding values of the variance of prediction versus the realization combination are plotted in Figure 4.30. The symmetrical circles confirm that the design holds a similar variance of prediction for realizations located at equal distances from the mean realization. Therefore, the rotatability condition is successfully achieved in the new design. Figure 4.30.
Variance of prediction contours for the new CCD (dip vs. trace length)

In Figure 4.30, the variance of prediction of the Q/L fitted model varies from 0.0005 to 0.0012. This parameter ranges from 0.04 to 0.13 and from 0.011 to 0.04 for the Q and L fitted models, respectively (results not shown here). The values of the variance of prediction differ between the three fitted models because of differences in the model variance values (σ) (Eq. 3.16). However, the shapes of these contours remain symmetrical circles for all three fitted models.

4.8.3.2 PRESS and R2prediction

Table 4.38 summarizes the results of the analysis to obtain PRESS and R2prediction for each fitted model. The residuals obtained for each realization (the difference between the predicted and actual SRF) are used in Equations 3.14 and 3.15 to calculate the PRESS and R2prediction values of each fitted model. A comparison between Table 4.27 and Table 4.38 shows that the squared prediction errors for the realizations generated in the new CCD are noticeably smaller than those obtained in the initial CCD. In the new design, realization #8 and realization #12 have the maximum percentages of error (10%) when predicted by the Q and L fitted models, respectively, whereas in the initial design realization #2 had the maximum percentage of error, 70%, when predicted by the L fitted model. This difference is clearly reflected in the estimated values of PRESS and R2prediction of the new CCD (Table 4.39). As in the initial CCD, the Q/L fitted model has the highest R2prediction and the L fitted model the lowest. However, as anticipated from the prediction errors, the R2prediction values for the three fitted models in the new CCD are considerably higher than their corresponding values in the initial CCD. Table 4.38.
Prediction errors for the fitted models in the new design

| Eliminated realization | Simulated SRF | Predicted SRF Q/L | Q | L | Prediction error Q/L | Q | L | Error (%) Q/L | Q | L |
| #1 | 1.10 | 1.08 | 0.93 | 0.74 | 0.03 | 0.03 | 0 | 3 | 4 | 0 |
| #2 | 1.40 | 1.25 | 1.18 | 1.35 | 0 | 0 | 0.02 | 0.3 | 0.1 | 1 |
| #3 | 1.06 | 0.72 | 0.65 | 0.90 | 0 | 0.02 | 0.01 | 0.2 | 3 | 2 |
| #4 | 1.44 | 1.33 | 1.25 | 1.33 | 0 | 0 | 0 | 0 | 0 | 0.4 |
| #5 | 0.85 | 0.66 | 0.59 | 0.69 | 0 | 0 | 0.02 | 0.7 | 0.2 | 3 |
| #6 | 0.55 | 0.59 | 0.52 | 0.78 | 0 | 0.03 | 0.01 | 0.1 | 5 | 1 |
| #7 | 0.79 | 0.69 | 0.62 | 0.76 | 0 | 0 | 0 | 0 | 0.7 | 0.7 |
| #8 | 0.97 | 0.73 | 0.66 | 0.58 | 0.02 | 0.08 | 0.04 | 2 | 10 | 5 |
| #9 | 1.12 | 0.83 | 0.60 | 0.73 | 0 | 0.01 | 0.06 | 0 | 2 | 7 |
| #10 | 1.82 | 1.28 | 1.01 | 1.19 | 0 | 0 | 0.04 | 0.4 | 0 | 3 |
| #11 | 0.66 | 0.69 | 0.92 | 0.77 | 0 | 0.01 | 0.05 | 0 | 0.9 | 8 |
| #12 | 0.67 | 0.73 | 0.94 | 0.79 | 0 | 0.01 | 0.07 | 0.4 | 2 | 10 |
| #13 | 1.56 | 1.23 | 1.16 | 1.17 | 0 | 0.01 | 0.01 | 0.1 | 0.6 | 0.8 |
| #14 | 0.55 | 0.65 | 0.53 | 0.59 | 0.01 | 0 | 0 | 1 | 0.1 | 0.3 |
| #15 | 0.71 | 0.70 | 0.87 | 0.75 | 0 | 0 | 0.03 | 0 | 0.2 | 4 |
| #16 | 0.71 | 0.70 | 0.87 | 0.75 | 0 | 0 | 0.03 | 0 | 0.2 | 4 |

Table 4.39. PRESS and R2prediction for the fitted models in the new CCD

| | Quadratic and linear (Q/L) | Quadratic (Q) | Linear (L) |
| PRESS | 0.07 | 0.21 | 0.37 |
| R2prediction | 0.9 | 0.8 | 0.6 |

4.8.3.3 Mean Squares of Errors (MSE)

Although the R2prediction of the Q/L fitted model in the new CCD is larger than the corresponding value in the initial design (0.9 vs. 0.6), it cannot be concluded that the prediction capability of the former is higher than that of the latter. R2prediction represents the ability of the fitted models to predict the design points, and the total sum of squares of the design (SST) has a significant impact on its calculation. Hence, to conclude whether or not the curse of dimensionality significantly affects the prediction capability of the fitted models, new realizations that differ from the design points and are randomly generated should be predicted by the new fitted models. The new fitted models are used to predict the values of the 30 randomly generated realizations from Section 4.7.1.3. The predicted values and the corresponding residuals are listed in Table 4.40.
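The PRESS and R2prediction statistics above come from deleted (leave-one-out) residuals: each design point is removed, the model is refit, and the removed point is predicted. A minimal sketch using the first six simulated/leave-one-out Q/L pairs from Table 4.38 as an illustrative subset (the resulting numbers therefore differ from the full-design values in Table 4.39):

```python
# PRESS and R2_prediction from leave-one-out (deleted) residuals.
# The deleted predictions are taken as given here; refitting the regression
# with each point removed is done upstream.
def press_statistic(actual, predicted_deleted):
    """PRESS = sum of squared deleted residuals."""
    return sum((a - p) ** 2 for a, p in zip(actual, predicted_deleted))

def r2_prediction(actual, predicted_deleted):
    """R2_prediction = 1 - PRESS / SST."""
    mean_y = sum(actual) / len(actual)
    sst = sum((a - mean_y) ** 2 for a in actual)  # total sum of squares
    return 1.0 - press_statistic(actual, predicted_deleted) / sst

# First six realizations of Table 4.38 (Q/L column), as an illustrative subset.
srf_actual = [1.10, 1.40, 1.06, 1.44, 0.85, 0.55]
srf_loo = [1.08, 1.25, 0.72, 1.33, 0.66, 0.59]

print(round(press_statistic(srf_actual, srf_loo), 3))   # 0.188
print(round(r2_prediction(srf_actual, srf_loo), 2))     # 0.67
```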
Moreover, the actual and predicted values for the three fitted models are plotted in Figure 4.31.

Table 4.40. Simulated and predicted SRF values, realizations selected with Monte-Carlo

| ID | D1 | DD1 | T1 | Simulated SRF | Q/L predicted | Q/L SE | Q predicted | Q SE | L predicted | L SE |
| 1 | 46 | 233 | 8 | 0.92 | 1.05 | 0.02 | 1.08 | 0.03 | 1.13 | 0.04 |
| 2 | 35 | 223 | 8 | 0.77 | 0.92 | 0.02 | 0.82 | 0.00 | 0.92 | 0.02 |
| 3 | 48 | 236 | 9 | 1.00 | 1.01 | 0.00 | 1.03 | 0.00 | 1.09 | 0.01 |
| 4 | 48 | 224 | 9 | 1.07 | 1.04 | 0.00 | 1.09 | 0.00 | 1.10 | 0.00 |
| 5 | 30 | 222 | 10 | 0.83 | 0.86 | 0.00 | 0.75 | 0.01 | 0.78 | 0.00 |
| 6 | 46 | 223 | 12 | 0.61 | 0.57 | 0.00 | 0.62 | 0.00 | 0.63 | 0.00 |
| 7 | 49 | 235 | 9 | 1.05 | 1.08 | 0.00 | 1.09 | 0.00 | 1.13 | 0.01 |
| 8 | 31 | 224 | 9 | 0.87 | 0.88 | 0.00 | 0.76 | 0.01 | 0.80 | 0.01 |
| 9 | 45 | 238 | 9 | 1.06 | 0.84 | 0.05 | 0.88 | 0.03 | 0.99 | 0.01 |
| 10 | 55 | 218 | 9 | 1.18 | 1.30 | 0.01 | 1.39 | 0.04 | 1.14 | 0.00 |
| 11 | 47 | 234 | 9 | 1.05 | 1.04 | 0.00 | 1.06 | 0.00 | 1.12 | 0.00 |
| 12 | 50 | 233 | 9 | 1.10 | 1.09 | 0.00 | 1.10 | 0.00 | 1.12 | 0.00 |
| 13 | 34 | 232 | 10 | 0.76 | 0.71 | 0.00 | 0.72 | 0.00 | 0.76 | 0.00 |
| 14 | 45 | 236 | 10 | 0.75 | 0.77 | 0.00 | 0.78 | 0.00 | 0.91 | 0.02 |
| 15 | 36 | 220 | 8 | 0.77 | 0.88 | 0.01 | 0.79 | 0.00 | 0.93 | 0.03 |
| 16 | 47 | 224 | 11 | 0.81 | 0.63 | 0.03 | 0.69 | 0.02 | 0.72 | 0.01 |
| 17 | 40 | 224 | 9 | 0.91 | 0.81 | 0.01 | 0.81 | 0.01 | 0.94 | 0.00 |
| 18 | 46 | 221 | 9 | 0.89 | 0.87 | 0.00 | 0.92 | 0.00 | 1.00 | 0.01 |
| 19 | 51 | 241 | 10 | 1.09 | 0.97 | 0.01 | 0.91 | 0.03 | 1.00 | 0.01 |
| 20 | 32 | 240 | 12 | 0.79 | 0.73 | 0.00 | 0.78 | 0.00 | 0.65 | 0.02 |
| 21 | 29 | 238 | 12 | 0.82 | 0.91 | 0.01 | 0.94 | 0.01 | 0.65 | 0.03 |
| 22 | 34 | 222 | 9 | 0.81 | 0.83 | 0.00 | 0.75 | 0.00 | 0.86 | 0.00 |
| 23 | 22 | 237 | 12 | 1.05 | 1.29 | 0.06 | 1.28 | 0.05 | 0.63 | 0.18 |
| 24 | 38 | 225 | 10 | 0.76 | 0.69 | 0.01 | 0.70 | 0.00 | 0.81 | 0.00 |
| 25 | 48 | 216 | 11 | 0.92 | 0.63 | 0.08 | 0.75 | 0.03 | 0.71 | 0.04 |
| 26 | 49 | 242 | 12 | 0.75 | 0.78 | 0.00 | 0.63 | 0.01 | 0.75 | 0.00 |
| 27 | 33 | 232 | 6 | 1.08 | 1.07 | 0.00 | 0.95 | 0.02 | 0.82 | 0.07 |
| 28 | 36 | 239 | 7 | 1.09 | 0.92 | 0.03 | 0.94 | 0.02 | 0.86 | 0.05 |
| 29 | 45 | 232 | 10 | 0.79 | 0.74 | 0.00 | 0.76 | 0.00 | 0.88 | 0.01 |
| 30 | 42 | 224 | 11 | 0.67 | 0.64 | 0.00 | 0.69 | 0.00 | 0.79 | 0.01 |
| | | | | | MSE = 0.11 | | MSE = 0.11 | | MSE = 0.14 | |

Figure 4.31. Simulated vs.
the predicted SRF values, new CCD (scatter plots of predicted vs. simulated SRF for the Q/L, Q, and L fitted models; the labelled outliers are (0.92, 0.63), (1.05, 1.29), (1.05, 0.63), and (1.08, 0.82))

A comparison between Table 4.28 and Table 4.40 shows that the Mean Squares of Errors (MSE) for the corresponding fitted models of the new and initial CCDs are almost equal. In both designs, the lowest sums of errors (residuals) are estimated for the Q/L and Q models. This indicates that the stability behaviour of the slope is better described when quadratic terms are included in the regression model. Comparing Figure 4.31 with Figure 4.24 also confirms that the capability of the fitted models to predict the stability of new realizations is not significantly influenced by the curse of dimensionality. Although changing the design points had a significant impact on the values of R2prediction, it neither enhanced nor noticeably deteriorated the prediction capability of the fitted models.

The discussions in Sections 4.7 and 4.8 suggest that the prediction capability of the fitted models, in which SRF is the response variable and the geometric parameters are the predictors, depends on the type of design. A rotatable design generated more representative fitted models than a face-centre design. However, for the same type of design, the choice of initial point estimates has negligible influence on the prediction capability, even though it may affect the design's estimated errors and regression quality.

4.9 Probability of Failure

In Sections 4.2 to 4.8 of the current chapter, the methodology introduced in Chapter 3 was applied to a synthetic slope configuration.
Six fitted models were obtained from the rotatable and face-centre CCDs based on the initial point estimates, and three fitted models were obtained from another rotatable CCD in which the curse of dimensionality was considered. The prediction capability of each of the nine models was investigated considering their variance of prediction, PRESS, R2prediction, and Mean Squares of Errors (MSE). In this section, the best fitted models are selected and used to estimate the probability of failure.

4.9.1 Selecting the candidate fitted model

Table 4.41 summarizes the R2prediction and MSE results for the nine fitted models. To choose the candidate fitted models, both the R2prediction and MSE values should be in a satisfactory range.

Table 4.41. Comparison of the prediction capability of the nine fitted models

| | Initial rotatable CCD: Q/L | Q | L | Initial face-centre CCD: Q/L | Q | L | New rotatable CCD: Q/L | Q | L |
| R2prediction | 0.6 | 0.4 | 0.11 | 0.14 | 0.1 | 0.0007 | 0.9 | 0.8 | 0.6 |
| MSE | 0.1 | 0.09 | 0.18 | 0.15 | 0.09 | 0.15 | 0.11 | 0.11 | 0.14 |

As seen in Table 4.41, the R2prediction values obtained for the fitted models of the F-CCD are markedly low; therefore, they are not considered in further analysis. The quadratic (Q) and linear (L) fitted models of the initial R-CCD suffer from a similar shortcoming and are likewise eliminated. The linear fitted model of the R-CCD with the new design points has a high value of R2prediction; however, the MSE results show that this model is not particularly capable of predicting the behaviour of the randomly generated realizations. Since this deficiency affects the reliability of the estimated probability of failure, the linear fitted model of the new CCD is also disregarded. The three models that show an acceptable prediction capability are the Q/L fitted model of the initial R-CCD and the Q/L and Q fitted models of the new R-CCD. It should be noted that 'acceptable' is a subjective criterion here and is based on expert judgment.
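The candidate Q/L model of the new R-CCD (Equation 4.8) can be evaluated directly from the regression coefficients of Table 4.37. Because the tabulated coefficients are rounded, this sketch reproduces the trends rather than the exact SRF values, and should be read as an illustration only:

```python
# The Q/L prediction model of the new rotatable CCD (Equation 4.8), assembled
# from the rounded regression coefficients in Table 4.37. Rounding means the
# absolute predictions are only approximate; the parameter trends are the point.
def srf_ql(d1, dd1, t1):
    """Predicted SRF from Set 1 dip (d1), dip direction (dd1), trace length (t1)."""
    return (12.5039
            - 0.2235 * d1 + 0.0018 * d1**2
            - 0.0328 * dd1 + 0.0000 * dd1**2
            - 0.6410 * t1 + 0.0177 * t1**2
            + 0.0007 * d1 * dd1
            - 0.0070 * d1 * t1
            + 0.0021 * dd1 * t1)

# Longer traces reduce the predicted SRF near the design centre, consistent
# with the negative dominance of trace length identified by ANOVA.
assert srf_ql(42, 231, 12) < srf_ql(42, 231, 8)
```

A response surface expressed this way can score any arbitrary geometric realization without further finite element runs, which is the practical payoff of the methodology.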
The mutual characteristics of the candidate fitted models imply that, for the studied configuration, the region of interest is better described as a sphere than as a cuboid; hence, rotatable designs are more applicable. Moreover, quadratic terms have a dominant influence in defining a relation between the geometric parameters and the SRF values. Since in the new design the Q/L and Q models are equally adequate in terms of prediction ability, only the Q/L model is selected from the new CCD for further discussion of the probability of failure.

4.9.2 Statistical analysis of SRF

To evaluate the probability of failure, a representative distribution of the SRF values is required. Using the Monte-Carlo sampling technique, 1000 realizations are randomly selected from the distributions of the significant parameters (D1, DD1, and T1). The stability of each generated realization is then estimated using the two candidate fitted models (Eqs. 4.2 and 4.8). The 1000 obtained SRF values represent the possible stability scenarios for the synthetic configuration. To fit a distribution to the 1000 SRF values, the Kolmogorov-Smirnov goodness of fit test is employed: fifteen different distributions are fitted to the predicted values and the one with the highest p-value is selected. Figure 4.32 and Figure 4.33 show the best distributions fitted to the predictions. As labelled, the best fitted distribution to the SRF values predicted by the Q/L fitted model of the initial CCD is a lognormal distribution with a mean of 0.78 and a standard deviation of 0.08; the p-value of the goodness of fit test is 0.8. The best fit to the SRF values obtained from the Q/L fitted model of the new CCD is a Johnson SU distribution with a mean of 0.78 and a standard deviation of 0.12; the corresponding p-value is 0.7.
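The Monte-Carlo step above reduces to: sample the significant parameters, evaluate the fitted response surface for each sample, and count the fraction of samples with SRF below one. A minimal sketch, in which both the linear surrogate model and the normal input distributions are illustrative stand-ins (the thesis uses the fitted Q/L response surface and skewed parameter distributions):

```python
# Monte-Carlo estimate of the probability of failure P(SRF < 1).
# ASSUMPTIONS: a hypothetical linear surrogate for the response surface and
# normal input distributions with assumed spreads, for illustration only.
import random

random.seed(0)

def surrogate_srf(dip, dip_dir, trace):
    # Hypothetical surrogate: SRF rises with dip and falls with trace length,
    # mimicking the trends identified by ANOVA; centred on the mean SRF of 0.71.
    return 0.71 + 0.02 * (dip - 42) - 0.05 * (trace - 10)

def probability_of_failure(model, n=1000):
    failures = 0
    for _ in range(n):
        dip = random.gauss(42, 5)        # assumed spread, for illustration
        dip_dir = random.gauss(231, 8)   # assumed spread, for illustration
        trace = random.gauss(10, 2)      # assumed spread, for illustration
        if model(dip, dip_dir, trace) < 1.0:
            failures += 1
    return failures / n

pf = probability_of_failure(surrogate_srf)
print(f"Estimated probability of failure: {pf:.2f}")
```

With these stand-in assumptions, the estimated probability of failure comes out near the 98% the thesis reports, but that agreement is illustrative, not a reproduction of the thesis calculation.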
The mean values of both distributions are almost equal to the mean SRF value (0.71) obtained from the Phase2 simulations (Section 4.5.1). Although the general shapes of the two distributions are very similar, the lognormal distribution is more symmetrical and less skewed. This can be explained by the fact that the number of observations with SRF values greater than 1 is larger for the Johnson distribution. In addition, the tail of the Johnson distribution extends further, since the maximum SRF predicted by the fitted model of the new CCD is higher than that predicted by the initial design. These differences are negligible, and it can be concluded that the fitted models obtained from the initial and new CCDs are almost equivalent in terms of the results they provide for the probability of failure.

Figure 4.32. Log-normal distribution fitted to the SRF values, Q/L model of the initial CCD

Figure 4.33. Johnson SU distribution fitted to the SRF values, Q/L model of the new CCD

Finally, to estimate the probability of failure, the cumulative distribution functions of the lognormal and Johnson distributions are plotted in Figure 4.34. The cumulative probability corresponding to an SRF value of one represents the probability of failure. This value is estimated to be 98% and 93% for the lognormal and Johnson distributions, respectively.

Figure 4.34. CDFs of the lognormal and Johnson distributions

As can be seen in the above figures, the 5% and 95% confidence bands are also determined for the CDFs describing the SRF values.
The upper and lower bounds for the lognormal distribution imply that, at the significance level of 0.05 selected in this study, there is a 95% probability that the probability of slope failure is greater than 90% and smaller than 99%. For the Johnson distribution, the probability of failure is at least 90% and at most 98% at the same confidence level.

4.10 Summary of the Results

In this chapter, the methodology integrating statistical and numerical techniques in rock slope stability analysis was applied to a simple, synthetically generated slope configuration. The main focus was to efficiently implement the variability of the three selected geometric parameters of the two discontinuity sets and, accordingly, to estimate the slope's probability of failure. These parameters were dip, dip direction, and trace length. Using the Point Estimate Method and a half-factorial design of experiments (DOE) framework, 16 realizations of the slope were generated based on the variability of the six geometric parameters. A finite element model of each realization was constructed in Phase2, and the SRF value representing the stability of each realization was evaluated. ANOVA was then used for screening to identify the significant parameters affecting the values of SRF. Based on this analysis, the dip, dip direction, and trace length of set #1 were identified as dominant parameters, while all three geometric parameters associated with set #2 were found to be insignificant for the stability of the slope. To establish a prediction model for SRF, five values from the distributions of the three identified significant parameters were selected and implemented in a Central Composite Design (CCD).
Two types of CCD (rotatable and face-centre) were analyzed in this study, and the prediction models obtained from each design were evaluated to identify the best one. The main difference between the two designs was the procedure by which the axial points for each variable were selected. It was observed in both designs that prediction models with only linear or only quadratic components are not sufficient to evaluate the SRF values. From the ANOVA results, it was concluded that the dominant components that must be included in the prediction model of SRF are the linear term of trace length (set #1), with an inverse relation to SRF, and the quadratic and linear terms of dip (set #1), with inverse and direct relations to SRF, respectively. For this geometry, the interaction terms and the dip direction parameter were identified as unimportant components of the prediction function. Although the significant parameters identified from the rotatable and face-centre designs were the same, the estimated regression coefficients corresponding to each significant parameter were not equal. This difference was due to the different axial points initially selected for each design. The variance of prediction, R2prediction, and Mean Squares of Errors were estimated for each of the obtained prediction models to evaluate their prediction capability. The last of these was calculated by comparing the simulated SRF values with those predicted by the prediction model for 30 randomly generated realizations. According to these three measures, the prediction model with linear and quadratic components obtained from the rotatable CCD had the highest R2prediction and the lowest Mean Squares of Errors. It was concluded that this fitted model could predict the SRF value for any arbitrary realization of the slope within an acceptable confidence level.
In addition, it was shown that the choice of the axial points affected the prediction capability of the fitted models, and that the rotatable CCD better described the region of interest for this slope configuration. Since six probabilistic parameters with skewed distributions were initially selected in this analysis, the effect of the "curse of dimensionality" was also assessed by recalculating the point estimates of the variables (new point estimates) and redesigning the CCD. It was shown that for a rotatable CCD, the fitted models and their prediction capabilities are insensitive to the choice of the point estimates and, hence, the "curse of dimensionality" could be ignored. To estimate the probability of failure, 1000 realizations of the slope were randomly generated using the Monte-Carlo technique and their SRF values were estimated with the two best prediction models, obtained from the rotatable CCDs based on the initial and new point estimates. The best distributions representing the SRF results were lognormal and Johnson SU for the former and the latter, respectively. The probability of failure was estimated as 98% and 93%, with fitted mean SRF values of 0.78 (compared with the mean SRF of 0.71 from the Phase2 simulations).

Chapter 5: Little Tunnel Case History

5.1 Problem description

Little Tunnel is located in British Columbia (B.C.), Canada, near the town of Naramata (Figure 5.1). The unsupported tunnel was excavated in 1913 as part of the Kettle Valley Railway (KVR), extending from Hope to Midway. The Little Tunnel railway grade was decommissioned and the track was removed in 1978. In 1990, the Province of British Columbia purchased the rail corridor from Canadian Pacific Railway. It is now part of the provincial Rails to Trails network and is widely used due to its scenic location. Figure 5.1.
Little Tunnel (modified after Regional District Okanagan-Similkameen, 2014)

In this chapter, the integrated statistical and numerical methodology introduced in Chapter 3 is used for a probabilistic stability assessment of the ridge containing Little Tunnel (Figure 5.2). Aerial and ground-based stereo photographs were taken to provide information about the tunnel and ridge geometries and the geometric parameters of the dominant geological structures. Laboratory tests on rock specimens were also performed to measure the strength parameters of the intact rock.

Figure 5.2. Aerial photo of the ridge (Penticton Museum & Archives)

Little Tunnel is located in a metamorphic gneiss known as the Okanagan gneiss, which can be described as a foliated, hornblende-biotite granodiorite orthogneiss. The intact rock is strong and resistant to weathering. The behaviour of the rock mass is largely governed by weaker geological structures, such as the foliation surfaces and joints, rather than by the strength of the intact rock. The tunnel is approximately 6.5 m wide, 8 m high, and 40 m long, and has a grade of 2.2%. Little Tunnel contains continuous, steeply dipping, open fractures that extend upwards from the tunnel roof. The fractures run along the tunnel roof and dip toward Okanagan Lake. These fractures can combine with other well-developed, persistent joints that dip at a shallower angle toward Okanagan Lake to create a stepped planar failure mechanism. Figure 5.3 shows the north portal of Little Tunnel; the shallow-dipping joint set and the steep fractures are indicated with red and yellow dashed lines, respectively.

Figure 5.3. North portal of Little Tunnel

5.2 Field Investigations and Laboratory Tests

5.2.1 Aerial Photogrammetry of the Ridge

Photogrammetry was used to obtain the geometries of the ridge containing Little Tunnel and of the geological structures.
Due to the inaccessibility and steep slope of the ridge, ground-based photogrammetry was not applicable. Therefore, aerial images of the ridge were taken from a small private plane flown by Mike Bidden. The aerial photographs were taken using a Canon EOS 5D Mark II digital camera with an 85 mm lens. Before taking the images, fourteen photogrammetry survey targets were mounted on the ridge and their coordinates were measured using a Leica TS06 total station. The coordinates of the target points were later used to determine the locations and orientations of the camera for each aerial photograph, and the coordinates of the random points generated by the photogrammetry software in the 3D model of the ridge. Figure 5.4 shows a plan view of the randomly generated points representing the ridge geometry, along with the locations of the target points and the camera stations. Fifty aerial photographs of the ridge were taken from the private plane. However, due to the challenging flying conditions, air speed, and turbulence, many of the photographs were blurry and could not be used to create the 3D model. The model shown in Figure 5.4 was constructed using 12 photographs taken from four different camera locations with the best overlap.

Figure 5.4. Location of the camera stations and generated points of the ridge

Finally, photographs taken from two selected camera stations that provided the best coverage of the ridge were processed in ADAM Technology software (Adam Technology 2014) to build a Digital Terrain Model (DTM) of the ridge (Figure 5.5). The generated DTM contains 644,958 points and 1,289,873 triangles in a triangular irregular network. The yellow dotted line on the DTMs depicts the location of the cross section that represents the 2D geometry of the ridge. This geometry was transferred to Phase2 for further numerical simulations.
Figure 5.5. Digital Terrain Model of the ridge showing a) TIN and b) texture

5.2.2 Ground-based Photogrammetry of the North and South Portals

The DTM obtained from aerial photogrammetry was used to extract the geometry of the slope for the numerical simulations. To identify the dominant geological structures of the ridge and to measure their geometric parameters (such as orientation), ground-based photogrammetry was performed at the north and south portals of Little Tunnel. Photographs of the north and south portals were taken from three and four camera stations, respectively. Due to limited photography access at the south portal, the desired overlap between the camera stations could not be achieved. Hence, the constructed DTM was blurry, which made digital mapping of the geological structures inaccurate. Moreover, the geological structures are more apparent at the north portal. Therefore, the study of the geological structures focused on the DTMs obtained from the north portal images. To determine the locations of the camera stations and the randomly generated points, six photogrammetry survey targets were mounted on the rock surface near the north portal and their coordinates were surveyed using the Leica TS06 total station. The photographs were taken with a Canon EOS 5D Mark II camera with a fixed-focus 24 mm EF series lens. Figure 5.6 shows the locations of the camera stations and the randomly generated points of the north portal. Photographs taken from two selected camera stations that provided the best coverage of the north portal were used to build the Digital Terrain Model (DTM) of the north portal. The DTM contains 370,122 points and 740,127 triangles in a triangular irregular network (Figure 5.7).
Figure 5.6 Location of the camera stations and generated points of the north portal

Figure 5.7 Digital Terrain Model of the north portal; a) TIN, b) texture

The geological structures identified on the DTMs were mapped digitally using ADAM Technology software. The process involves selecting digital points along the apparent intersection of each identified discontinuity and the rock face. Based on the selected digital points, a disk is fitted to each discontinuity. The orientation of the discontinuity (dip and dip direction) is defined by the orientation of the disk, and its length is represented by the disk diameter. Figure 5.8 shows the digitally mapped structures. In this figure, the discontinuities that belong to a common set are mapped with similar disk colors. Accordingly, three dominant discontinuity sets were identified, and their orientations were projected on an equal angle, lower hemisphere stereonet for further kinematic and statistical analyses.

Figure 5.8. Digital mapping of the geological structures - north portal

5.2.3 Laboratory Measurements

5.2.3.1 Rock mass strength properties

To measure the compressive strength of the intact rock, point load tests were performed on six rock specimens that were collected from inside and outside of the tunnel. The tests were performed using a RocTest model PIL-7 point load device. Since core specimens were not available, diamond saw cut cuboid-shaped specimens were used. The equivalent diameter (De) of the specimens varied from 40.9 mm to 55 mm and was calculated using Equation 5.1, in which W is the smallest specimen width and D is the distance between the platen contact points (Brook 1985).
De² = 4WD/π   (5.1)

The equivalent diameter for each specimen was used to calculate the point load index (Is), which was later corrected to a 50 mm reference diameter (Is(50)) using the “size correction factor”. The corrected point load index was then converted to the compressive strength of the gneiss using the empirical linear correlation of Kahraman & Gunaydin (2009):

σc = K Is(50)   (5.2)

in which K is the empirical factor from the cited correlation. Table 5.1 summarizes the results of the point load tests performed on the collected specimens. According to this table, the average compressive strength of the intact rock (gneiss) is 226 MPa (the fourth test result is ignored). This value, along with other parameters (GSI, D, mi, MR) estimated mainly from field observations, is used to estimate the rock mass properties and Mohr-Coulomb shear strength parameters (Table 5.2).

Table 5.1. Point load test results

  Load (kN)   Equivalent diameter (mm)   Compressive strength (MPa)
  31.46       40.9                       285
  42.84       52.8                       260
  40.94       55.0                       230
  15.42       52.4                       80
  22.58       51.6                       131
  Average                                226

Table 5.2. Summary of the rock mass properties

  Parameter                                 Deterministic value
  Measured/observed parameters
    Intact compressive strength (σc)        226 MPa
    GSI                                     70
    Hoek-Brown mi                           30
    Disturbance factor (D)                  0
    Modulus ratio (MR)                      500
  Estimated rock mass properties
    Rock mass compressive strength (σc')    98 MPa
    Rock mass tensile strength (σt')        -0.74 MPa
    Deformation modulus (Erm)               77 GPa
  Linear Mohr-Coulomb fit
    Cohesion (c)                            3.4 MPa
    Friction angle (φ)                      70°

5.2.3.2 Joint shear strength properties

The Mohr-Coulomb failure criterion is used to describe the strength of joint sets #1 and #2. Since an aperture of less than 1 mm is recorded for the joints and no infilling material was observed, the joints are considered cohesionless with no tensile strength. The friction angles of joint sets #1 and #2 are assumed to be 25° and 20°, respectively. No actual field measurements were available to estimate these values.
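Returning to the point load testing of Section 5.2.3.1, the index calculations can be sketched in Python. This is a minimal sketch, assuming the loads in Table 5.1 are in kN, the common (De/50)^0.45 size correction, and an illustrative linear factor K of about 16.5 back-calculated from the tabulated strengths (the thesis takes its factor from the cited correlation):

```python
# Minimal sketch of the point load index calculation (Eqs. 5.1-5.2).
# Assumptions: loads in kN, (De/50)**0.45 size correction, K = 16.5.

def point_load_index(P_kN, De_mm):
    """Uncorrected point load index Is in MPa (P in kN, De in mm)."""
    return 1000.0 * P_kN / De_mm**2

def size_corrected(Is, De_mm):
    """Is(50): size correction to the 50 mm reference diameter."""
    return Is * (De_mm / 50.0) ** 0.45

# Specimen data transcribed from Table 5.1
tests = [(31.46, 40.9), (42.84, 52.8), (40.94, 55.0),
         (15.42, 52.4), (22.58, 51.6)]
K = 16.5  # illustrative factor; the thesis uses the cited correlation

for P, De in tests:
    Is50 = size_corrected(point_load_index(P, De), De)
    print(f"De={De} mm  Is(50)={Is50:.1f} MPa  sigma_c~{K * Is50:.0f} MPa")
```

With these assumptions, the first three specimens reproduce the tabulated strengths to within a few MPa, which is what motivates the illustrative K.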
However, to achieve the anticipated failure mechanism of the ridge in the numerical models, these values were selected using a trial and error procedure. Moreover, according to the field observations, joint set #1 was found to be almost dry, whereas the presence of water, and accordingly noticeable weathering, could be observed in discontinuities belonging to joint set #2. Hence, a lower friction angle was assigned to this joint set. Since the configuration of the ridge and discontinuities in the current study is very similar to the synthetic problem discussed in Chapter 4, it is anticipated that the overall stability of the ridge is significantly governed by joint set #1. Hence, in the probabilistic stability analysis of the ridge, the friction angle of joint set #1 is considered probabilistically to take into account the potential uncertainty in the value assigned to this parameter. More details are provided in Section 5.3.2. Other strength parameters associated with the joints are considered deterministic in further analyses.

5.2.3.3 Rock bridge strength properties

In the numerical simulations of the ridge and Little Tunnel, the strength parameters measured/calculated for the rock mass are assigned to the regions with few or no discontinuities (Section 5.5). However, to model the bridging phenomenon, which is likely to occur in the rock mass located between two adjacent joints (Figure 5.9), different material properties are assigned. The rock mass in such zones is labelled as bridging rock mass in this chapter. The properties assigned to the bridging rock mass are based on field observations and engineering judgment. The Mohr-Coulomb failure criterion is used to describe its behaviour, with the cohesion selected as zero and the mean friction angle as 30°. These values were selected to allow yielding of the bridging rock elements and, potentially, sliding on joint set #1.
To consider the uncertainty existing in the selected shear strength parameters, the friction angle was allowed to vary in different realizations of the ridge (discussed in Section 5.3.2).

Figure 5.9. Potential zone for rock mass bridging and its relation with persistence

5.3 Statistical Analysis on the Acquired Data

5.3.1 Geometric parameters

The orientations of the identified geological structures are projected on an equal angle lower hemisphere stereonet (Figure 5.10). Joint set #1 included 64 discontinuities, while 10 and 20 discontinuities are identified for sets #2 and #3, respectively. A preliminary kinematic analysis shows the potential for a planar/stepped-planar failure caused by the presence of joint sets #1 and #2. As a result, joint set #3 is not considered in further analyses.

Figure 5.10 Stereographic projection of discontinuities mapped on the DTM

5.3.1.1 Set #1 geometric parameters

For joint set #1, 64 measurements are used to study the variability of the orientation and size parameters. Based on the outcome of the synthetic configuration study in Chapter 4, the variability of dip direction has no significant impact on the output results. Therefore, the computations of the numerical models are conducted in 2D space without considering the dip direction value. However, it is expected that the variability of the joint set #1 dip has a dominant influence on the evaluated values for SRF. As a result, this parameter is considered probabilistically. Fifteen different distributions were fitted to the 64 dip measurements, among which the Weibull distribution is identified as the best fit (Figure 5.11). The Kolmogorov-Smirnov test p-value, mean, standard deviation, and skewness of the fitted Weibull distribution are 0.81, 18, 7.6, and 0.31, respectively. Figure 5.11.
Best-fit Weibull distribution to dip measurements of joint set #1

The other geometric parameter considered probabilistically in this study is the persistence of joint set #1. Since it is very challenging to measure the persistence of the discontinuities on DTMs, actual measurements of the persistence are not available. However, a mean value of 0.5 is selected based on field investigations. Here the joint persistence is defined as the ratio of the discontinuity length to the total length of the discontinuity plane (Rocscience 2013). A normal distribution with mean and standard deviation values equal to 0.5 and 0.3 is selected to represent the variability of persistence in joint set #1 (Figure 5.12). It should be noted that this distribution is truncated at zero and one, since persistence values outside this range are not acceptable. The length of the rock mass along which bridging may occur is a direct function of the selected value for the joint persistence (Figure 5.9). Accordingly, as persistence tends to one, the length of the rock mass between two adjacent joints tends to zero. Since in this study the trace length of joint set #1 is selected deterministically, the variability of persistence is in fact equivalent to the variability of the rock mass length between two adjacent joints.

Figure 5.12. Normal distribution representing persistence values for Set #1

5.3.1.2 Set #2 geometric parameters

Field observations and the DTM of the north portal show the presence of a single dominant discontinuity (joint #2). This joint is a steep and continuous discontinuity cutting through the roof of the tunnel.
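The best-fit distribution selection described in Section 5.3.1.1 can be sketched with SciPy. This is an illustrative sketch only: the 64 field measurements are not reproduced here, so a synthetic Weibull sample with roughly the reported moments (mean 18, standard deviation 7.6) stands in for the data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic stand-in for the 64 dip measurements (field data not reproduced):
# a Weibull sample whose moments are roughly the reported mean 18 and sd 7.6
dips = stats.weibull_min.rvs(2.5, scale=20.3, size=64, random_state=rng)

# Fit a Weibull distribution (location fixed at zero) and test the fit
shape, loc, scale = stats.weibull_min.fit(dips, floc=0)
ks = stats.kstest(dips, "weibull_min", args=(shape, loc, scale))

m, v = stats.weibull_min.stats(shape, loc=loc, scale=scale, moments="mv")
print(f"shape={shape:.2f} scale={scale:.1f} "
      f"mean={float(m):.1f} sd={float(np.sqrt(v)):.1f} KS p={ks.pvalue:.2f}")
```

In the thesis the same kind of Kolmogorov-Smirnov comparison was repeated over fifteen candidate distributions, with the Weibull fit giving the highest p-value (0.81).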
In the numerical simulations, joint #2 is modelled as an individual continuous joint cutting through the roof and the floor of the tunnel, with its orientation equal to the mean orientation of joint set #2. According to the outcome of the synthetic configuration analysis (Chapter 4), the variability of the orientation and size parameters of joint #2 has no significant influence on the values of SRF. Therefore, to conduct a more efficient probabilistic assessment, these parameters were considered deterministically. Table 5.3 summarizes the results of the statistical analysis on the geometric parameters of joint set #1 and joint #2. The spacing value for joint set #1 is independent of the actual field measurements and was selected for computational simplicity.

Table 5.3. Summary of the statistical analysis on geometric parameters

                               Type           Mean  Standard deviation  Skewness  Kolmogorov-Smirnov p-value
  Set #1    Dip                Probabilistic  18°   8°                  0.31      0.81
            Persistence        Probabilistic  0.5   0.3                 0         N/A
            Trace length (m)   Deterministic  8     -                   -         -
            Spacing (m)        Deterministic  0.6   -                   -         -
  Joint #2  Dip                Deterministic  76°   -                   -         -
            Trace length (m)   Deterministic  27    -                   -         -

5.3.2 Strength parameters

As discussed earlier, all parameters associated with joint #2 are considered deterministically, while the friction angles assigned to joint set #1 and the bridging rock mass are treated probabilistically. No actual measurements are available for either of the two parameters. Hence, based on field observations and engineering judgment, a normal distribution is used to represent their variability. For the joint set #1 friction angle, the mean value is selected equal to 20° with a standard deviation of 5° (Figure 5.13). For the bridging rock mass friction angle, a mean value of 30° and a standard deviation of 5° are selected (Figure 5.14).

Figure 5.13. Normal distribution representing friction angle for Set #1

Figure 5.14.
Normal distribution representing friction angle for bridging rock

5.4 Generating an Efficient Number of Realizations

According to the methodology proposed in Chapter 3, the Point Estimate Method is used as the first step to substitute the distribution of each probabilistic variable with two representative point estimates. These values are later used in the Design of Experiment framework (factorial design) to generate an efficient number of realizations of the ridge. These realizations are numerically modelled for screening purposes to identify the most significant parameters affecting the stability of the ridge. More details on the theory are provided in Chapters 3 and 4. Table 5.4 summarizes the geometric and strength parameters that are defined probabilistically in the current study, along with their statistical moments and point estimates. The curse of dimensionality is negligible here, based on the results obtained from the synthetic configuration and the fact that the number of probabilistic variables is only four, with three of them represented by a normal distribution with zero skewness.

Table 5.4. Statistical moments and point estimates of the probabilistic variables

                                  Distribution  Mean  Standard deviation  Skewness  Point estimate- (xi-)  Point estimate+ (xi+)
  Set #1         Dip              Weibull       18°   8°                  0.31      12°                    27°
                 Persistence      Normal        0.5   0.3                 0         0.2                    0.8
                 Friction angle   Normal        20°   5°                  0         15°                    25°
  Bridging rock  Friction angle   Normal        30°   5°                  0         25°                    35°

In a full factorial design framework, 2⁴ = 16 realizations of the ridge are generated. Here, reducing the number of realizations using a half-factorial design is not necessary.
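A minimal sketch of the two-point estimates is given below, using Rosenblueth's formulas for a possibly skewed variable as they are commonly stated (assumed here to be the form behind Table 5.4; for a symmetric variable they reduce to the mean plus or minus one standard deviation):

```python
import math

def rosenblueth_points(mean, sd, skew=0.0):
    """Two point estimates and their weights for a (possibly skewed)
    variable, per Rosenblueth's Point Estimate Method."""
    z_plus = skew / 2.0 + math.sqrt(1.0 + (skew / 2.0) ** 2)
    z_minus = z_plus - skew
    x_minus = mean - sd * z_minus
    x_plus = mean + sd * z_plus
    p_plus = z_minus / (z_plus + z_minus)
    return (x_minus, x_plus), (1.0 - p_plus, p_plus)

# Dip of set #1: Weibull with mean 18, sd 7.6, skewness 0.31
points, weights = rosenblueth_points(18.0, 7.6, 0.31)
print(points)   # approximately (11.5, 26.9), i.e. the 12 and 27 levels

# A symmetric variable, e.g. the bridging rock friction angle
print(rosenblueth_points(30.0, 5.0)[0])   # (25.0, 35.0)
```

The slight asymmetry of the dip point estimates about the mean reflects the positive skewness of the fitted Weibull distribution.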
Although 16 realizations of the ridge are generated, only four numerical simulations are required to model them. Phase2, the software used for the modelling in this study, handles the distributions associated with the strength parameters. It is capable of substituting each distribution with its two point estimates and, accordingly, generating and simulating the realizations based on the obtained point estimates. If only the strength parameters are defined probabilistically, 2^n SRF values will be calculated for each numerical model, where n is the number of probabilistic strength parameters. In the current study, the four realizations that are generated based on the variability of the geometric parameters are separately modelled, while the influence of the variability of the two strength parameters in each model is automatically assessed by Phase2. For each numerical model, four values of SRF were estimated and ultimately, the 16 required SRF values were obtained. Table 5.5 shows the generated realizations along with the numerical models representing each of them. A mean realization, in which all variables are fixed to their distribution mean values, is also simulated for further analysis.

Table 5.5. Generated realizations and constructed numerical models for screening

  Realization  Numerical  Dip-    Friction angle-  Friction angle-  Persistence-
  number       model      Set #1  Bridging rock    Set #1           Set #1
  1            1          -1      -1               -1               -1
  2            1          -1      -1               +1               -1
  3            1          -1      -1               -1               +1
  4            1          -1      -1               +1               +1
  5            2          -1      +1               -1               -1
  6            2          -1      +1               +1               -1
  7            2          -1      +1               -1               +1
  8            2          -1      +1               +1               +1
  9            3          +1      -1               -1               -1
  10           3          +1      -1               +1               -1
  11           3          +1      -1               -1               +1
  12           3          +1      -1               +1               +1
  13           4          +1      +1               -1               -1
  14           4          +1      +1               +1               -1
  15           4          +1      +1               -1               +1
  16           4          +1      +1               +1               +1
  Mean         5          0       0                0                0

5.5 Constructing Numerical Models of the Realizations in Phase2

5.5.1 Initial boundary conditions

All numerical models are constructed in two stages. In the first stage, the pre-tunnel excavation condition of the ridge is modelled, while in the second stage, the tunnel excavation is added to the geometry.
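Stepping back to Table 5.5, the mapping of the 16 factorial runs onto only four meshed models can be sketched as follows. This is a minimal sketch with illustrative variable names; per the layout of Table 5.5, runs that share the first two coded levels (dip of set #1 and bridging-rock friction angle) reuse one numerical model, while the remaining two variables are swept within it:

```python
# Coded factorial runs ordered as in Table 5.5:
# (dip_set1, phi_bridge, phi_set1, persistence_set1)
runs = [(d, b, j, p) for d in (-1, 1) for b in (-1, 1)
        for p in (-1, 1) for j in (-1, 1)]

# Group runs by the two levels that define a distinct meshed model
models = {}
for run in runs:
    models.setdefault(run[:2], []).append(run)

print(len(runs), "realizations ->", len(models), "numerical models")
for key, group in models.items():
    print("model", key, "sweeps", len(group), "combinations internally")
```

Each of the four groups contains the 2^2 = 4 combinations of the two swept variables, which is why each Phase2 model yields four SRF values.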
In the first stage, plastic calculations are performed according to the defined boundary and in situ stress conditions, and stresses are redistributed. Subsequently, the tunnel is excavated in the second stage and shear strength reduction calculations are applied. Only the results of the last stage (stage two) are interpreted in Phase2. Figure 5.15 represents the geometry of the mean realization in stage two. As can be seen in this figure, to avoid boundary effects, the left and bottom boundaries of the ridge are extended. However, to achieve computational efficiency, the SRF area is fixed to the actual boundaries of the ridge that were extracted from the DTM cross section (yellow dotted line). The purple line in this figure represents the region with material properties measured for the rock mass, while the light green zone represents the material properties defining the bridging rock.

In all numerical simulations, only one joint from set #1 was modelled. This joint was located such that it intersected the lower right corner of the tunnel (Figure 5.15), the most critical location in terms of overall stability for the rock ridge above the tunnel. In Phase2, this joint was modelled as a joint set with a very wide spacing such that more than one individual discontinuity along the joint plane was modelled. This enables the user to assign geometric and strength values (such as persistence, trace length, etc.) to all the joints on the same joint plane. Therefore, in the current study, the term “set” for joint #1 mainly refers to the type of the external boundary defining the joints in the numerical models, rather than the geological definition of a joint set. Other parameters involved in the finite element calculations, such as the in-situ stresses, stiffness of the joints, and convergence criterion, were set to their default values, similar to those used in the synthetic configuration simulations (Section 4.4).

Figure 5.15.
Geometry of the mean realization in Phase2

5.5.2 Discretization and mesh sensitivity

The mesh sensitivity problem and its influence on the developed methodology were discussed in Section 4.4.2. It was concluded that, to avoid undesirable results related to mesh sensitivity, an optimum mesh size (number of elements) should be estimated for the studied geometry. Subsequently, all simulated realizations have to be discretized with mesh sizes equal to or smaller than the estimated optimum value. To estimate the optimum number of elements for the Little Tunnel simulations, mesh sensitivity was studied for three different realizations (#1, #9, and mean). The realizations were selected to have different geometric parameters and different SRF values. For each realization, five simulations with a different number of elements were conducted. The first simulation of each realization had the coarsest meshing (about 4,000 elements) and the last simulation had the finest meshing (about 160,000 elements). The results are shown in Figure 5.16.

Figure 5.16. Mesh sensitivity analysis for Little Tunnel configuration

From Figure 5.16, it can be interpreted that the results obtained from geometries discretized with more than 70,000 elements are insensitive to the mesh size (less than 10% difference). Hence, in the current analysis, all realizations of the ridge are discretized with at least 70,000 elements.

5.5.3 Output results

The SRF values evaluated for each realization of the Little Tunnel configuration are listed in Table 5.6. The mean SRF estimated from the mean realization represents the SRF value that would have been obtained if a deterministic approach using mean values had been adopted. Among all numerical simulations, realization #11 has the lowest SRF, equal to 0.71, while realization #6 has the highest SRF, equal to 4.67.
In the former, the dip of joint set #1 and the friction angle of the bridging rock have their highest and lowest point estimate values, respectively, while in the latter, the dip of joint set #1 has its lowest and the bridging rock friction angle has its highest point estimate value.

Table 5.6. Numerical simulation results - full factorial design

  Realization  Numerical  Number of        Dip-    Friction angle-  Friction angle-  Persistence-  SRF
  number       model      elements (10^3)  Set #1  Bridging rock    Set #1           Set #1
  1            1          80               -1      -1               -1               -1            2.4
  2            1          80               -1      -1               +1               -1            2.2
  3            1          80               -1      -1               -1               +1            1.9
  4            1          80               -1      -1               +1               +1            2.61
  5            2          85               -1      +1               -1               -1            3.59
  6            2          85               -1      +1               +1               -1            4.67
  7            2          85               -1      +1               -1               +1            2.73
  8            2          85               -1      +1               +1               +1            3.25
  9            3          75               +1      -1               -1               -1            0.81
  10           3          75               +1      -1               +1               -1            0.84
  11           3          75               +1      -1               -1               +1            0.71
  12           3          75               +1      -1               +1               +1            0.85
  13           4          80               +1      +1               -1               -1            1.14
  14           4          80               +1      +1               +1               -1            1.23
  15           4          80               +1      +1               -1               +1            1.44
  16           4          80               +1      +1               +1               +1            1.13
  Mean         5          70               0       0                0                0             1.54

Figure 5.17 shows the displacement and deformation contours of the mean, #11, and #6 realizations. As can be seen in this figure, the failure mechanism remains the same for different realizations. Joint #2 initially fails in tension and, as a result, opening of the joint plane is observed as the simulation progresses. However, joint set #1 and the bridging rock mainly fail in shear and, hence, an offset along the plane of the Set #1 discontinuity occurs. This mechanism mostly remains unchanged for all simulated realizations, while the magnitude of the joint aperture or offset changes according to the selected input values. The total displacement and SRF values change noticeably as functions of the geometric and strength parameters of the joints and rock mass.
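The main-effect estimates used in the screening analysis of Section 5.6 can be reproduced directly from the coded runs and SRF values of Table 5.6. This is a minimal sketch of the standard two-level factorial contrasts (small differences from Table 5.7 are rounding):

```python
import numpy as np

# Coded runs (D1, phiB, phiJ1, P1) and SRF values transcribed from Table 5.6
runs = np.array([
    [-1, -1, -1, -1], [-1, -1,  1, -1], [-1, -1, -1,  1], [-1, -1,  1,  1],
    [-1,  1, -1, -1], [-1,  1,  1, -1], [-1,  1, -1,  1], [-1,  1,  1,  1],
    [ 1, -1, -1, -1], [ 1, -1,  1, -1], [ 1, -1, -1,  1], [ 1, -1,  1,  1],
    [ 1,  1, -1, -1], [ 1,  1,  1, -1], [ 1,  1, -1,  1], [ 1,  1,  1,  1]])
srf = np.array([2.4, 2.2, 1.9, 2.61, 3.59, 4.67, 2.73, 3.25,
                0.81, 0.84, 0.71, 0.85, 1.14, 1.23, 1.44, 1.13])

n = len(srf)
effects, sums_sq = {}, {}
for i, name in enumerate(["D1", "phiB", "phiJ1", "P1"]):
    effect = float(srf @ runs[:, i]) / (n / 2)   # contrast divided by N/2
    effects[name] = effect
    sums_sq[name] = n * effect**2 / 4            # sum of squares of the effect
    print(f"{name}: effect = {effect:+.2f}, SS = {sums_sq[name]:.2f}")
```

The dip of set #1 (D1) dominates, with an effect of about -1.90 and a sum of squares of about 14.4 out of a total near 20.6, consistent with its reported 70% contribution.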
Figure 5.17. Total displacement contours for different realizations of the slope (mean realization: dip Set #1 = 18°, persistence = 0.5, friction angle Set #1 = 20°, friction angle rock bridging = 30°, SRF = 1.54; realization #11: dip Set #1 = 27°, persistence = 0.8, friction angle Set #1 = 15°, friction angle rock bridging = 25°, SRF = 0.71; realization #6: dip Set #1 = 12°, persistence = 0.2, friction angle Set #1 = 25°, friction angle rock bridging = 35°, SRF = 4.67)

Figure 5.18 shows the displacement contours of realizations #2 and #5, in which the geometric parameters are the same and the strength parameters differ. Considering the SRF values and the displacement contours, it can be concluded that the variability of the strength parameters alone has a significant influence on the stability of the tunnel.

Figure 5.18. Total displacement for two realizations of the slope (realization #2: dip Set #1 = 12°, persistence = 0.2, friction angle Set #1 = 25°, friction angle rock bridging = 25°, SRF = 2.2; realization #5: dip Set #1 = 12°, persistence = 0.2, friction angle Set #1 = 15°, friction angle rock bridging = 35°, SRF = 3.59)

5.6 ANOVA and Sensitivity Analysis

To identify the parameters that significantly contribute to the stability of the ridge and the tunnel, ‘analysis of variance’ was performed on the SRF values obtained from the factorial design realizations. Table 5.7 and Table 5.8 summarize the results of this analysis. In this study, the main effects of the four selected variables (dip of set #1, persistence of set #1, friction angle of set #1, and friction angle of the bridging rock) along with their two-way interaction terms were selected as the sources of the variability. The higher-order interaction terms were considered negligible and were included in the error term. In this analysis, the engineering significance level was selected to be 0.05.
For the purpose of simplicity, the dip of set #1, the persistence of set #1, the friction angle of set #1, and the friction angle of the bridging rock are labelled as D1, P1, φJ1, and φB, respectively. As can be seen in these tables, D1 has the highest percentage contribution (70%) and the lowest p-value (0.000098). This implies that the stability of the ridge is dominantly governed by the dip of set #1. Moreover, the friction angle of the bridging rock has the second highest percentage contribution (14%), with its p-value being less than the selected significance level (0.05). It can be concluded that this parameter is also influential on the values of SRF and has to be considered probabilistically in the analysis. The contributions of the other two parameters (P1, φJ1) are negligible, as their estimated p-values are greater than the selected significance level. The two-way interactions were identified to be insignificant. Although the D1 φB interaction term has a low p-value (0.09), it is still greater than the selected significance level and its influence can be considered negligible.

Table 5.7. Effect estimate summary

  Factor   Effect estimate  Sum of squares (SS)  Percentage contribution %
  D1       -1.89            14.40                70
  φB       0.85             2.87                 14
  φJ1      0.25             0.26                 1.28
  P1       -0.29            0.33                 1.6
  D1 φB    -0.41            0.68                 3.31
  D1 φJ1   -0.27            0.28                 1.36
  D1 P1    0.31             0.39                 1.89
  φB φJ1   0.09             0.03                 0.14
  φB P1    -0.24            0.24                 1.16
  φJ1 P1   0.007            0.00023              0.0011
  Error    -                1.04                 -
  Total    -                20.55                -

Table 5.8. ANOVA for the critical SRF

  Source of variability  SS       df  MS       F0      P
  D1                     14.40    1   14.40    83.04   0.000098
  φB                     2.87     1   2.87     16.56   0.006575
  φJ1                    0.26     1   0.26     1.49    0.26
  P1                     0.33     1   0.33     1.90    0.21
  D1 φB                  0.68     1   0.68     3.97    0.09
  D1 φJ1                 0.28     1   0.28     1.65    0.24
  D1 P1                  0.39     1   0.39     2.28    0.18
  φB φJ1                 0.03     1   0.03     0.19    0.67
  φB P1                  0.24     1   0.24     1.38    0.28
  φJ1 P1                 0.00023  1   0.00023  0.0013  0.97
  Error                  1.04     6   0.17     -       -
  Total                  20.55    16  -        -       -

To validate the results obtained from ANOVA, the model adequacy tests are investigated.
To check the normality assumption, the sorted residuals (eij) are plotted versus the normal probability values. The normal probability plot is shown in Figure 5.19. As seen in this figure, the residuals are located on a straight dotted line with negligible dispersion, indicating that the normality assumption is satisfied. The plots of residuals versus predicted values and residuals versus the run sequence are shown in Figure 5.20 and Figure 5.21. An almost constant variance is observed in Figure 5.20 and randomness is observed in Figure 5.21. Accordingly, the model adequacy tests are acceptable and the ANOVA results are validated.

Figure 5.19. Normal probability plot of SRF residuals

Figure 5.20. Residuals versus fitted values

Figure 5.21. Residuals versus realization running sequence

Based on the results obtained from the ‘analysis of variance’, D1 and φB are identified as the most significant input parameters. These parameters are considered probabilistically and are included in the CCD design to obtain a prediction model for SRF. However, P1 and φJ1 will be considered deterministically and their distributions will be replaced by their mean values, equal to 0.5 and 20°, respectively.

5.7 Generating Prediction Models

5.7.1 Rotatable Central Composite Design (CCD)

To obtain a prediction model that describes the stability of the ridge as a function of the identified significant parameters, Central Composite Design (CCD) is used. As described in Section 3.5, different types of CCD can be constructed according to the desired region of interest. The main difference among the CCD types is the calculation process for the axial points assigned to each variable.
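The coded points of a rotatable CCD, with axial distance α = 2^(k/4) for k factors, can be sketched as follows. This is a minimal sketch; the natural-value mapping shown for φB assumes the mean ± (coded level × standard deviation) convention that reproduces the 23 and 37 axial levels of Table 5.10:

```python
from itertools import product

def rotatable_ccd(k):
    """Coded points of a rotatable central composite design:
    2**k factorial points, 2*k axial points at +/-alpha, one centre point."""
    alpha = 2 ** (k / 4.0)  # rotatability condition; ~1.41 for k = 2
    factorial = [tuple(map(float, pt)) for pt in product((-1, 1), repeat=k)]
    axial = [tuple(s * alpha if i == j else 0.0 for j in range(k))
             for i in range(k) for s in (-1, 1)]
    return factorial + axial + [(0.0,) * k]

design = rotatable_ccd(2)   # 4 factorial + 4 axial + 1 centre = 9 points

# Natural levels for the bridging rock friction angle (mean 30, sd 5)
phi_b = sorted(round(30.0 + c[1] * 5.0) for c in design)
print(len(design), phi_b)   # -> 9 [23, 25, 25, 30, 30, 30, 35, 35, 37]
```

For two factors this yields exactly the nine realizations of the rotatable design used below.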
According to the statistical analysis on the synthetic configuration (Section 4.7), it was concluded that a rotatable CCD could better represent the region of interest. Hence, in the current analysis of the Little Tunnel configuration, a rotatable CCD was used to generate the new realizations required to establish the prediction model. Here, the face-centred design was not investigated. Each significant parameter in the rotatable CCD is substituted by five levels (point estimates); two of them are equal to the highest and lowest point estimates used in the factorial design (factorial points), one is the mean value (centre point), and the other two levels are the highest and lowest axial points calculated using Equation 3.13. Table 5.9 and Table 5.10 summarize the coded and natural values for the five point estimates of each of the significant parameters.

Table 5.9. Coded variables of the five point estimates - rotatable design

  Variable  Factorial point-  Factorial point+  Axial point-  Axial point+  Centre point
  D1        -1                +1                -1.41         +1.41         0
  φB        -1                +1                -1.41         +1.41         0

Table 5.10. Natural variables of the five point estimates - rotatable design

  Variable  Factorial point-  Factorial point+  Axial point-  Axial point+  Centre point
  D1        12                27                8             29            18
  φB        25                35                23            37            30

Based on the obtained five point estimates for each of the two significant parameters, nine realizations are generated in the rotatable CCD framework. Since one of the significant parameters is a strength parameter, as described in Section 5.4, only six numerical simulations are required to model the nine generated realizations. Table 5.11 summarizes the generated realizations along with the numerical models that represent each of them, and their corresponding SRF values. It should be noted that all realizations were discretized with more than 70,000 elements to avoid mesh sensitivity effects. As can be seen in this table, realizations #5 and #3 have the highest and lowest SRF values, respectively.
In realization #5, the lowest axial point is assigned to the dip of set #1 and the mean value of the friction angle is assigned to the bridging rock. In realization #3, the highest and lowest factorial points are assigned to the dip of set #1 and the friction angle of the bridging rock, respectively. Figure 5.22 and Figure 5.23 show the displacement and deformation contours of these two realizations. As also discussed in Section 5.6, the failure mechanism remains unchanged as the input parameters vary. Joint #2 mainly fails in tension, while joint set #1 and the bridging rock fail in shear. However, the SRF values and the magnitude of the slope displacement change noticeably as functions of the two input variables.

Table 5.11. Rotatable Central Composite Design

  Realization  Numerical  Number of        Dip     Friction angle  SRF
  number       model      elements (10^3)  Set #1  Bridging rock
  1            1          70               -1      -1              2.5
  2            1          70               -1      +1              3.22
  3            2          70               +1      -1              0.84
  4            2          70               +1      +1              1.04
  5            3          71               -1.41   0               4.01
  6            4          70               +1.41   0               0.87
  7            5          70               0       -1.41           1.29
  8            5          70               0       +1.41           1.81
  Mean         6          70               0       0               1.54

Figure 5.22. CCD total displacement and deformation contours for realization #5 (dip Set #1 = 8°, persistence = 0.5, friction angle Set #1 = 20°, friction angle rock bridging = 30°, SRF = 4.01)

Figure 5.23. CCD total displacement and deformation contours for realization #3 (dip Set #1 = 27°, persistence = 0.5, friction angle Set #1 = 20°, friction angle rock bridging = 25°, SRF = 0.84)

A statistical analysis of the CCD table and the corresponding SRF values was performed using the ‘analysis of variance’ technique. Accordingly, three different prediction models (response surfaces) are investigated. In the first prediction model, the quadratic and linear terms along with the two-way interaction terms are considered (Q/L). In the second prediction model, only the quadratic terms and the two-way interaction terms are included (Q). The third prediction model consists of the linear terms and the two-way interaction terms (L). The results of the ANOVA analysis for each of the prediction models are summarized in Table 5.12 to Table 5.14.
As can be seen in Table 5.12, the linear and quadratic terms of D1 have the highest percentage contributions and the lowest p-values, which implies the significance of these two terms in the SRF prediction model. It should be noted that the contribution of the linear term of D1 is more than four times that of its quadratic term. The p-values estimated for φB (L) and the interaction term D1 φB are also found to be less than the selected significance level (0.05) for this problem. Although these two terms are less influential than the D1 components, they are also identified as significant sources of variability. In contrast to the results obtained for the Q/L prediction model of the synthetic configuration, in the current study the linear terms are more dominant than the quadratic terms. The effect estimates corresponding to the regression coefficients of the components involved in the prediction model also show a direct relation between D1(Q) and SRF and between φB (L) and SRF, yet an inverse relation between D1(L) and the SRF values. An R² value of 0.99 is also estimated for the Q/L fitted model. Although this parameter does not represent the prediction capability of the model, it shows the goodness of fit of the obtained response surface (fitted model) to the measured (simulated) data.

Table 5.12. ANOVA results for rotatable CCD, quadratic and linear, R²=0.99

  Factor  Effect estimate  Sum of squares (SS)  Percentage contribution %  p-value
  D1(L)   -1.93            7.45                 72                         0.000000
  D1(Q)   1.03             1.79                 17                         0.000000
  φB (L)  0.39             0.29                 2.8                        0.000022
  φB (Q)  0.04             0.004                0.03                       0.287233
  D1 φB   -0.24            0.06                 0.5                        0.002694
  Error   -                0.02                 0.19                       -
  Total   -                10.30                -                          -

As for the quadratic prediction model (Q), none of the involved sources of variability is identified as significant.
Moreover, the low R2 value (0.24) indicates that the obtained response surface cannot adequately represent the SRF values obtained from the simulation of the CCD realizations. This confirms that the linear components contribute dominantly to the values of SRF; ignoring these components therefore degrades the representativeness of the quadratic response surface.

Table 5.13. ANOVA results for rotatable CCD, quadratic, R2 = 0.24

Factor   Effect estimate  Sum of squares (SS)  Percentage contribution %  p-value
D1(Q)     1.18            2.39                 23                         0.130594
φB (Q)    0.002           0.00001              0                          0.996888
D1 φB    -0.32            0.10                 0.9                        0.733477
Error     -               7.77                 74
Total     -               10.30

The ANOVA analysis of the linear model shows that only the linear term of D1 is dominant; the other sources of variability have negligible contributions. Since the most dominant parameter (D1(L)) is included in this prediction model, the R2 of the obtained response surface is in an acceptable range, yet smaller than the one estimated for the Q/L model. According to the effect estimates, an inverse relation between D1(L) and SRF and a direct relation between φB (L) and SRF are observed.

Table 5.14. ANOVA results for rotatable CCD, linear, R2 = 0.82

Factor   Effect estimate  Sum of squares (SS)  Percentage contribution %  p-value
D1(L)    -2.00            8.07                 78                         0.000140
φB (L)    0.39            0.29                 2.8                        0.256511
D1 φB    -0.24            0.06                 0.58                       0.594416
Error     -               1.82                 18.54
Total     -               10.30

To obtain mathematical relations describing the three fitted models, the regression coefficients of the components in each model were estimated from the ANOVA effect estimates. These values, along with their standard errors, are summarized in Table 5.15. The regression models for each fitted model are given in Equations 5.3 to 5.5.
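The relative goodness of fit of the three candidate models can be reproduced with ordinary least squares on the coded CCD data of Table 5.11. The sketch below (an illustrative reimplementation, not the thesis code) fits the Q/L, Q and L term sets and compares their R² values:

```python
import numpy as np

# Coded CCD runs from Table 5.11: (dip set #1, friction angle of bridging rock, SRF)
runs = np.array([(-1, -1, 2.50), (-1, 1, 3.22), (1, -1, 0.84), (1, 1, 1.04),
                 (-1.41, 0, 4.01), (1.41, 0, 0.87), (0, -1.41, 1.29),
                 (0, 1.41, 1.81), (0, 0, 1.54)])
d, f, y = runs[:, 0], runs[:, 1], runs[:, 2]
one = np.ones_like(y)

def r_squared(X, y):
    """Fit y = X b by least squares; return the coefficient of determination."""
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ b
    return 1 - (resid @ resid) / (((y - y.mean()) ** 2).sum())

r2_ql = r_squared(np.column_stack([one, d, f, d**2, f**2, d*f]), y)  # Q/L terms
r2_q = r_squared(np.column_stack([one, d**2, f**2, d*f]), y)         # Q terms
r2_l = r_squared(np.column_stack([one, d, f, d*f]), y)               # L terms
```

The ordering r2_ql > r2_l > r2_q mirrors the R² values of 0.99, 0.82 and 0.24 reported in Tables 5.12 to 5.14; the exact figures differ slightly because the thesis values come from the ANOVA decomposition.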
According to the obtained regression coefficients, it can be concluded that, regardless of the type of fitted model, the linear term of D1 and the interaction term (D1 φB) have a negative effect on the SRF values, while D1(Q), φB (L) and φB (Q) act positively.

Table 5.15. Regression coefficients for the three prediction models, rotatable CCD

Sources of     Quadratic and linear       Quadratic                 Linear
variability    Coefficient  Std. error    Coefficient  Std. error   Coefficient  Std. error
D1(L)          -0.38        0.02          -            -            -0.035       0.1
D1(Q)           0.009       0.0003         0.006       0.001        -            -
φB (L)          0.044       0.05          -            -             0.103       0.1
φB (Q)          0.0009      0.0008         0.004       0.0007       -            -
D1 φB          -0.003       0.0007        -0.012       0.002        -0.003       0.005
Intercept       5.1         0.8            2.1         0.3           1.1         3.5

SRF = 5.1 - 0.38 D1 + 0.009 D1^2 + 0.044 φB + 0.0009 φB^2 - 0.003 D1 φB    (5.3)

SRF = 2.1 + 0.006 D1^2 + 0.004 φB^2 - 0.012 D1 φB    (5.4)

SRF = 1.1 - 0.035 D1 + 0.103 φB - 0.003 D1 φB    (5.5)

5.7.2 Response surfaces of the fitted (prediction) models

To better depict the mathematical relation between the output parameter (SRF) and the two significant input parameters (D1 and φB), response surfaces representing each of the fitted models are shown in Figure 5.24. In this figure, the x- and y-axes represent the input variables, while the z-axis and the contours represent the output variable. The small blue points on each response surface represent the simulated (measured) SRF values for the nine CCD realizations. Hence, the model that better fits these points is a more representative regression model. It should be emphasized that this characteristic cannot, by itself, give a true judgment of the prediction capability of the fitted model.

From Figure 5.24, it can be seen that the simulated data best fit the Q/L response surface. For the given range of the input parameters, SRF varies between 1 and 5 in this prediction model. As for the Q model, the representativeness of the response surface decreases as the dip and friction angle values increase. Since the linear component of D1 is eliminated in this prediction model, SRF is overestimated as D1 increases.
Hence, this model is more accurate for realizations with lower ranges of D1 and φB. For the given range of the input parameters, SRF varies between 0.25 and 6 according to this prediction model. The L model underestimates the SRF values for almost six of the nine simulated realizations. This could be anticipated, because the quadratic terms (which have a direct relation with the SRF values) are ignored in this model, and only the linear and interaction terms, with inverse effects, are included. According to this prediction model, SRF varies from 0.25 to 4 for the given range of the input values. Although the representativeness of the obtained prediction models is important in the process of selecting the best prediction model, it is not sufficient. The prediction capability of each of the obtained models should be evaluated separately, and accordingly the best prediction model should be selected for estimating the probability of failure.

Figure 5.24. Response surfaces for the Q/L, Q and L prediction models

5.8 Evaluating the Prediction Capability of the Prediction Models

As discussed in Section 3.5.2, three parameters should be analysed to investigate the prediction capability of the obtained fitted models: the variance of prediction, the PRESS and R2prediction, and the mean sum of squares of errors (MSE). Among the three prediction models, the model with the best prediction capability will be selected for evaluating the probability of failure.

5.8.1 Variance of prediction

The variance of prediction represents the rotatability of the design. This parameter is independent of the studied case and is only a function of the design points in the CCD. It was shown in Section 4.7.1 that selecting the axial points using Equation 3.13 satisfies the rotatability condition, so equal variances of prediction are obtained for realizations that are equally distant from the centre of the region of interest.
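The rotatability property can also be verified numerically: for a rotatable design, the scaled prediction variance xᵀ(XᵀX)⁻¹x of the Q/L model depends only on the distance from the design centre. A small check, assuming the coded design points of Table 5.11 (illustrative, not the thesis code):

```python
import numpy as np

a = 2 ** 0.5  # axial distance for k = 2 factors (Equation 3.13)
points = [(-1, -1), (-1, 1), (1, -1), (1, 1),
          (-a, 0), (a, 0), (0, -a), (0, a), (0, 0)]
X = np.array([[1, d, f, d * d, f * f, d * f] for d, f in points])
XtX_inv = np.linalg.inv(X.T @ X)

def prediction_variance(d, f):
    """Scaled variance of the predicted response at the coded point (d, f)."""
    x = np.array([1, d, f, d * d, f * f, d * f])
    return x @ XtX_inv @ x

# Three points at the same radius (1.0) but in different directions:
v = [prediction_variance(*p) for p in [(1, 0), (0, 1), (a / 2, a / 2)]]
```

All three values coincide, confirming that realizations equally distant from the centre have equal variances of prediction.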
Since similar calculations are used to construct the CCD for the Little Tunnel configuration, the rotatability condition is anticipated to hold, and the variance of prediction is not investigated further here.

5.8.2 PRESS and R2prediction

To calculate the PRESS value for each of the fitted models, one realization in the rotatable CCD is eliminated at a time, and the ANOVA is repeated to re-estimate the regression coefficients of the involved parameters. The SRF value corresponding to the eliminated realization is predicted from the resulting model, and the difference between the predicted and simulated SRF values is recorded. This procedure is repeated for all eight realizations in the design. The PRESS value and R2prediction for each fitted model are calculated using Equations 3.14 and 3.15. More details on the theory and calculations are provided in Section 3.5.2. Table 5.16 summarizes the predicted SRFs and the prediction errors of each prediction model after elimination of each realization from the CCD design. As anticipated, the Q/L model is least affected by the elimination of one realization at a time. The highest prediction error for this model corresponds to the elimination of realization #4, in which both input variables are at their highest point estimate level. This error is an underestimation of the actual SRF by 0.3. The Q model shows a high level of sensitivity to the elimination of realizations from the CCD design. As can be seen in Table 5.16, the highest error (equal to 0.8) corresponds to realizations #4 and #5, in which D1 is fixed at its highest point estimate and lowest axial point level, respectively. The minimum prediction error of this model corresponds to the elimination of realizations #2 and #6. It should be noted that the minimum prediction error of the Q model in this analysis equals the maximum prediction error of the Q/L model, which implies a higher prediction capability for the Q/L model.
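The leave-one-out procedure above can be sketched in a few lines. The example below computes PRESS and R²prediction for the Q/L model on the coded CCD data of Table 5.11, assuming Equation 3.15 takes the usual form R²prediction = 1 − PRESS/SStotal and that the centre run is always retained (an illustration, not the thesis code):

```python
import numpy as np

runs = np.array([(-1, -1, 2.50), (-1, 1, 3.22), (1, -1, 0.84), (1, 1, 1.04),
                 (-1.41, 0, 4.01), (1.41, 0, 0.87), (0, -1.41, 1.29),
                 (0, 1.41, 1.81), (0, 0, 1.54)])
d, f, y = runs[:, 0], runs[:, 1], runs[:, 2]
X = np.column_stack([np.ones_like(y), d, f, d**2, f**2, d*f])  # Q/L terms

press = 0.0
for i in range(8):  # eliminate each non-centre realization in turn
    keep = [j for j in range(len(y)) if j != i]
    b, *_ = np.linalg.lstsq(X[keep], y[keep], rcond=None)
    press += (y[i] - X[i] @ b) ** 2  # squared error at the held-out run

ss_total = ((y - y.mean()) ** 2).sum()
r2_prediction = 1 - press / ss_total
```

With the Q/L columns, PRESS should come out close to the 0.15 reported in Table 5.17; swapping in the Q or L term sets reproduces their larger PRESS values.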
The L model is also very sensitive to the elimination of realizations from the CCD design. As observed from the L response surface (Figure 5.24), all SRF values predicted by the L model are noticeably underestimated. This is because the dominant regression coefficients in this model have an inverse relation with SRF. The highest prediction error for the L model (1.2) corresponds to realization #5, in which D1 and φB are set to their lowest axial point and mean value, respectively. The prediction errors for the L model are significantly high and sensitive to which realization is eliminated.

Table 5.16. Prediction errors for the fitted models in the rotatable design (Q/L, Q, L)

Eliminated    Simulated    Predicted SRF          Prediction error
realization   SRF          Q/L    Q      L        Q/L    Q      L
1             2.50         2.52   2.05   2.19     0.0    0.5    0.3
2             3.22         3.07   3.5    2.75     0.2    0.3    0.5
3             0.84         0.78   1.58   0.09     0.1    0.7    0.7
4             1.04         0.79   0.24   0.13     0.3    0.8    0.9
5             4.00         4.07   3.25   2.81     0.1    0.8    1.2
6             0.87         0.99   0.61   0.16     0.1    0.3    0.7
7             1.29         1.25   1.86   1.03     0.0    0.6    0.3
8             1.81         2.03   2.35   2.34     0.2    0.5    0.5

The PRESS and R2prediction of each fitted model are summarized in Table 5.17. The Q/L model has the lowest PRESS value and the highest prediction capability, while the L model has the highest PRESS and the lowest R2prediction. These results agree with those obtained for the synthetic problem, although higher R2prediction values are obtained for the Little Tunnel prediction models.

Table 5.17. PRESS and R2prediction for the fitted models in rotatable CCD

               Quadratic and linear   Quadratic   Linear
PRESS          0.15                   2.7         3.96
R2prediction   0.98                   0.73        0.61

5.8.3 Mean Sum of Squares of Errors (MSE)

The last parameter investigated to evaluate the prediction capability of the fitted models was the MSE. Using Monte-Carlo simulation and based on the variability of the two significant parameters, 30 realizations of the Little Tunnel configuration were generated randomly.
The 30 models were simulated in Phase2 with the same initial conditions used for the realizations in the CCD design. The SRF values of the 30 realizations were also predicted using the Q/L, Q and L models. The errors between the predicted and simulated SRFs were recorded, and the MSE corresponding to each model was calculated using Equation 3.17. A lower MSE indicates a higher prediction capability of the fitted model. Table 5.18 summarizes the 30 realizations, their simulated and predicted SRF values, and their prediction errors. As can be seen in this table, the Q/L fitted model has the lowest MSE, implying a better capability of predicting SRF values for any arbitrary realization of the Little Tunnel configuration. Both the Q and L models noticeably underestimate the values of SRF, specifically when higher ranges of D1 and φB are considered in the geometry of the slope (realizations #1 or #16). Hence, the prediction capability of these two models is acceptable only when the probabilistic geometric parameters are set within a range close to their mean values, e.g. realization #6.

Table 5.18.
Simulated and predicted SRF values, selected with Monte-Carlo

Realization  D1  φB  Simulated  Q/L predicted  SE    Q predicted  SE     L predicted  SE
ID                   SRF        SRF                  SRF                 SRF
1            30  42  1.01       1.19           0.03  0.26         0.56   0.38         0.40
2            14  25  1.9        2.02           0.01  1.87         0.00   2.10         0.03
3            22  31  1.3        1.09           0.04  1.08         0.04   1.32         0.00
4            19  22  1.2        1.11           0.01  1.42         0.04   1.38         0.03
5            21  33  1.4        1.24           0.03  1.23         0.03   1.51         0.01
6            18  32  1.7        1.67           0.00  1.73         0.00   1.97         0.10
7            23  29  1.1        0.98           0.02  1.02         0.01   1.16         0.00
8            12  34  3.1        2.99           0.01  3.10         0.00   2.85         0.06
9            26  33  0.9        0.97           0.00  0.73         0.04   0.88         0.00
10           17  33  1.9        1.87           0.00  1.96         0.01   2.13         0.06
11           12  24  2.4        2.54           0.02  2.16         0.06   2.32         0.01
12           20  34  1.6        1.42           0.03  1.46         0.02   1.71         0.01
13           12  29  2.7        2.80           0.01  2.59         0.01   2.62         0.01
14           22  27  1.1        0.98           0.02  1.12         0.00   1.21         0.01
15           18  30  1.5        1.54           0.00  1.57         0.00   1.85         0.10
16           29  33  0.9        0.98           0.01  0.51         0.16   0.45         0.21
17           24  25  1          0.84           0.01  1.20         0.06   0.92         0.00
18           14  32  2.6        2.53           0.01  2.56         0.00   2.56         0.00
19           16  29  1.8        1.81           0.00  1.80         0.00   2.05         0.07
20           12  28  2.7        2.61           0.01  2.39         0.10   2.50         0.04
21           19  34  1.7        1.65           0.01  1.74         0.00   1.95         0.05
22           12  18  1.8        2.13           0.10  1.84         0.00   1.85         0.00
23           22  29  1.2        1.05           0.03  1.13         0.01   1.30         0.01
24           11  30  3.5        3.06           0.18  2.86         0.38   2.78         0.49
25           21  28  1.2        1.10           0.02  1.18         0.00   1.39         0.02
26           27  36  1.1        1.03           0.00  0.52         0.31   0.68         0.16
27           17  36  2.1        1.96           0.01  2.17         0.01   2.22         0.02
28           23  31  1.2        1.04           0.02  1.02         0.03   1.24         0.00
29           14  21  1.7        1.88           0.04  1.75         0.00   1.88         0.014
30           34  33  0.9        0.96           0.00  0.57         0.012  0.56         0.013
                                MSE = 0.15           MSE = 0.25          MSE = 0.26

The predicted and simulated SRF values are also plotted in Figure 5.25. The prediction errors are shown as error bars; the downward and upward error bars represent underestimation and overestimation by the prediction models, respectively. As anticipated, all three prediction models can adequately predict the SRF values of realizations in which the D1 and φB variables are set in a range close to their mean values.
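The SE columns of Table 5.18 are squared prediction errors, and the MSE of Equation 3.17 is simply their mean. As a quick check on the first five Monte-Carlo realizations and the Q/L predictions (values copied from Table 5.18):

```python
# (simulated SRF, Q/L predicted SRF) for realizations 1-5 of Table 5.18
pairs = [(1.01, 1.19), (1.90, 2.02), (1.30, 1.09), (1.20, 1.11), (1.40, 1.24)]
squared_errors = [(pred - sim) ** 2 for sim, pred in pairs]
mse = sum(squared_errors) / len(squared_errors)  # mean sum of squares of errors
```

Rounded to two decimals, the squared errors reproduce the tabulated SE values (0.03, 0.01, 0.04, 0.01, 0.03); the same calculation over all 30 realizations gives the MSE of 0.15 reported for the Q/L model.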
However, as these two variables tend toward the tails of their distributions (giving low and high values of SRF), the Q and L models underestimate the actual SRF values, to the extent that the linear model predicts 0.38 for an actual SRF of 1.01 and the quadratic model predicts 0.52 for an actual value of 1.08. The Q/L prediction capability, however, remains at an acceptable level for all ranges of D1 and φB. From these investigations of the variance of prediction, the PRESS and R2prediction, and the MSE of the three prediction models, it can be concluded that the Q/L model best describes the stability of the slope for any arbitrary realization within the region of interest. A very high R2prediction and a low MSE imply that this model is the best candidate for evaluating the probability of failure for the Little Tunnel configuration. The same conclusion was drawn for the synthetic configuration. Therefore, for this specific failure mechanism and slope geometry, the mathematical relation between the SRF and the significant parameters must be defined using both the quadratic and linear components of the independent variables.

Figure 5.25. Predicted versus simulated SRF values for 30 random realizations (three panels: the Q/L, Q and L fitted models, with simulated and predicted SRF axes from 0 to 4)

5.9 Probability of Failure

To evaluate the probability of failure of the ridge containing the Little Tunnel, 200 realizations of the slope were randomly generated using the Monte-Carlo technique. No extra numerical simulations were necessary at this stage, since the SRF values corresponding to each of the 200 generated realizations can be predicted using the obtained Q/L response surface.
In the current analysis, considering the use of the Little Tunnel, an SRF value of 1 is selected as the acceptable factor of safety. Hence, any realization with an SRF value less than 1 is considered unstable. To find the distribution that best describes the variability of the obtained SRF values, 15 different distributions were investigated. According to the Kolmogorov-Smirnov goodness-of-fit test, the Johnson SB distribution is selected, with a p-value equal to 0.8 (Figure 5.26). Figure 5.27 shows the cumulative Johnson SB distribution of the SRF values. According to this distribution, the mean SRF is estimated to be 1.6 and the probability of failure (SRF < 1) is approximately 18%. The upper and lower confidence bands imply that there is a 95% possibility that the probability of failure is smaller than 25% and greater than 10%. Having a confidence range defined for the probability of failure decreases the level of uncertainty in the output results of the stability analysis.

Figure 5.26. Johnson SB distribution fitted to SRF values, Q/L model

Figure 5.27. CDF of the Johnson SB distribution

5.10 Summary of the Results

In this chapter, the developed statistical and numerical technique was shown to be capable of efficiently performing a probabilistic slope stability assessment of a real case history. In this study, the stability of the ridge containing the Little Tunnel was evaluated. Four of the input variables describing the strength and geometric parameters of the dominant geological structures were defined probabilistically. These parameters were the dip of joint set #1, the friction angle of the bridging rock, the friction angle of the joints (set #1), and the persistence of set #1. According to the 'design of experiment' framework, 16 realizations of the slope were generated. However, since two of the input variables were strength parameters, the actual number of numerical simulations of the 16 realizations in Phase2 was reduced to four.
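The probability-of-failure step reduces to sampling the two significant inputs and pushing each sample through the fitted Q/L response surface, with no further numerical modelling. The sketch below is illustrative only: the polynomial uses the rounded coefficients of Table 5.15, and the normal input distributions (means and spreads inferred roughly from the point-estimate levels: dip ≈ 19° ± 8°, φB ≈ 30° ± 5°) are assumptions, not the thesis statistics:

```python
import random

random.seed(42)

def srf_ql(d1, phi_b):
    """Q/L response surface with rounded coefficients from Table 5.15 (Equation 5.3)."""
    return (5.1 - 0.38 * d1 + 0.009 * d1 ** 2
            + 0.044 * phi_b + 0.0009 * phi_b ** 2 - 0.003 * d1 * phi_b)

# 200 Monte-Carlo realizations of the two significant parameters
samples = [(random.gauss(19, 8), random.gauss(30, 5)) for _ in range(200)]
srf_values = [srf_ql(d1, phi_b) for d1, phi_b in samples]
p_failure = sum(s < 1.0 for s in srf_values) / len(srf_values)
```

Each of the 200 SRF predictions is a single polynomial evaluation; with the thesis inputs, the resulting estimate of the probability of failure was approximately 18%.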
The 'analysis of variance' on the obtained SRF results shows the dominance of two input parameters, the dip of joint set #1 and the friction angle of the bridging rock, while the significance of the other two parameters, the friction angle of the joints and the persistence of set #1, was found to be negligible. At this stage, the two insignificant parameters were substituted with their mean values and considered deterministically. The two significant parameters were used in a rotatable CCD to generate prediction models of SRF. Three types of prediction models were investigated: Q/L, Q and L. It was shown that all components of the significant parameters, except φB (Q), contribute significantly to the stability of the tunnel. The linear term of D1 and the interaction term (D1 φB) have an inverse relation with the SRF values, while D1(Q), φB (L) and φB (Q) have direct relations. The L model noticeably underestimates the predicted values of SRF for an arbitrary realization. To evaluate the prediction capability of the three prediction models, the variance of prediction, the PRESS and R2prediction, and the mean sum of squares of errors were investigated. It was shown that the response surface that considers both the linear and quadratic terms of the two significant parameters has the best prediction capability. A similar result was obtained when the synthetic slope configuration was studied. The candidate response surface (Q/L) was used to predict the SRF for 200 realizations of the ridge. These realizations were randomly generated using the Monte-Carlo technique. The distribution that best fits the obtained SRF values was identified as Johnson SB. Considering an SRF less than 1 as undesirable (unstable slope), the probability of failure for the ridge was estimated to be approximately 18%.
Chapter 6: Conclusions and Recommendations

6.1 Conclusions

The current study shows that the developed methodology can fulfill the research objectives by performing a more reliable assessment of rock slope stability at an affordable computational cost. The main contribution of this research is the development of a methodology that provides sufficient knowledge about each parameter in the analysis, quantifies their contributions and effects on the output results (mainly SRF), and generates response surfaces that describe the behaviour of the slope for any arbitrary realization of the joint geometries. The methodology assists the user to develop and run an affordable number of numerical simulations for the generated realizations. The analysis steps for this methodology are described in Chapter 3 and are duplicated in Figure 6.1. It should be noted that some of the detailed statistical analyses, such as comparing the prediction models obtained from different types of CCDs or quantifying the influence of the 'curse of dimensionality', were performed merely to show the robustness of the developed methodology and are, therefore, not essential parts of the algorithm. According to these analyses, it was concluded that the type of CCD affects the prediction capability of the obtained response surfaces. A rotatable design that describes a spherical region of interest was found to better define the problems investigated in this study, compared to a face-centred design. The effect of the curse of dimensionality on the prediction capability of the fitted models was found to be negligible. The methodology is flexible with respect to the choice of the numerical method used in the analysis (finite element, finite difference, etc.) and the characteristics of the distributions describing the input variables. The latter enables the analyst to use non-normal distributions, as recommended in the literature, to define the inherent variability of the strength and geometric parameters.
Although in this study the geometric and strength parameters were selected as the sources of variability, other parameters in the numerical computations, such as the in-situ stresses, water conditions, and seismic forces, can also be included within the framework of the methodology.

Figure 6.1. Algorithm developed for the new methodology (duplication of Figure 3.2)

In Chapter 4, a simple slope configuration with varying joint geometries was selected to illustrate the efficiency and applicability of the developed methodology and to compare the 'learned knowledge' of the significant input factors with common sense or expert judgment. In real cases, however, the complexity of the slope geometries and the available initial geological knowledge vary, and the computational steps of the developed methodology can be modified accordingly. For instance, if the geometry is simple and only a few variables are involved, or if a similar geometry has been studied previously and some initial knowledge about the parameters is available, the screening phase (factorial design) and the associated numerical simulations can be skipped, and only the realizations and simulations required for a Central Composite Design are performed. Depending on the number of variables, the prediction model can be established from as few as eight numerical simulations. It should be noted that in many cases, identifying the significant parameters without performing a sensitivity analysis is not trivial, especially when interactions between the variables contribute to the output results. In Chapter 5, the developed methodology was applied to the Little Tunnel rock slope case history. The geometry of the discontinuities and the potential failure mechanism in this real case were similar to the synthetic configuration used in Chapter 4. In this analysis, both the strength and geometric parameters were defined probabilistically.
Although all the steps in the new methodology were performed for the Little Tunnel rock slope, the knowledge obtained from the synthetic configuration was used to limit the number of probabilistic geometric parameters considered. Moreover, it was shown that the developed methodology is flexible enough to be integrated with the existing probabilistic features of a numerical tool such as Phase2 to improve the efficiency of a probabilistic slope stability analysis. Accordingly, the numbers of numerical simulations required for a thorough sensitivity analysis and for obtaining the response surfaces that predict the behaviour of the slope were reduced by factors of four and two, respectively. The results obtained from the probabilistic slope stability analysis of two different slope configurations show that if the geological structures are defined in sets, or if only a few dominant discontinuities are present in the slope geometry, the developed methodology is an effective tool for probabilistic rock slope analysis.

6.2 Recommendations

Although the objectives of this study were mostly achieved, a few recommendations could be implemented to improve the overall effectiveness of the developed methodology.

1) In the current study, 2D numerical simulations of the slope realizations were used to estimate the factor of safety. Two-dimensional simulations limit the ability of the numerical tool to simulate more complicated failure mechanisms, such as wedge failures. If the developed methodology is applied to 3D numerical simulations, the failure mechanism can also be considered as an output parameter of interest. Accordingly, the effect of the variability of the input parameters on the failure mechanisms of the slope can be investigated.
Moreover, if the variation in the geometry of the slope and discontinuities creates different failure mechanisms, it is anticipated that the response surface for the factor of safety would show more than one local minimum, each representing an instability caused by a different failure mechanism. The response surface could be zoned by failure mechanism, and thus the probability of occurrence of a specific failure mechanism could be obtained. If the failure mechanism is known, the volume of failure can be evaluated as well. This information would be very useful for estimating the consequences of a probable failure and, therefore, its total risk.

2) In the current study, the 'design of experiment' framework was used to generate an efficient number of realizations and to quantify the contribution of each significant parameter to the output results. In addition, Central Composite Design was used to generate the prediction models. Several recent studies suggest the use of a 'design of numerical simulations' framework for analyses in which actual experiments are replaced by numerical simulations. The new methodology presented in this thesis could be modified to align more closely with the 'design of numerical simulations' framework.

3) In the current study, goodness-of-fit tests, such as the Kolmogorov-Smirnov and Chi-square tests, are used to find the distribution that best represents the variability of each probabilistic variable. However, other techniques, such as bootstrapping, can also be applied to capture such variability and identify the best distribution.

4) In the current study, some level of uncertainty in the stability assessment is identified and reduced by generating different geometric realizations of the slope. If the capability of an alternative software tool allows incorporation of the true spatial variability of the strength parameters in the simulations, a more reliable assessment can be achieved.
Hill, W. J., & Hunter, W. G. (1966). A Review of response surface methodology: a literature survey. Technometrics, 8(4), 571-590. Hoek, E. (2000). Practical Rock Engineering. www.rocscience.com. Hong, H. P. (1996). Point-estimate moment-based reliability analysis. Civil Engineering and Environmental Systems, 13(4), 281-194. Hong, H. P. (1998). Hong, H. P. (1998). An efficient point estimate method for probabilistic analysis. Reliability Engineering and System Safety, 59(3), 261-267. Hudson, J. A., & Harrison, J. P. (2000). Engineering Rock Mechanics. Oxford, England: Pergamon, Oxford. Jambayev, A. S. (2013). Discrete fracture network modeling for a carbonate reservoir. Master of Science dissertation, Colorado School of Mines. 242 Jing , L. (2003). A review of techniques, advances and outstanding issues in numerical modelling for rock mechanics and rock engineering International Journal of Rock Mechanics and Mining, 40, 283-353. Johnson, R. T., Montgomery, D. C., Jones, B., & Fowler, J. W. (2008). Comparing designs for computer simulation experiments. In: WSC '08: Proceedings of the 40th Conference on Winter Simulation, 463-470. Juang, C., Lee, D., & Sheu, C. (1992). Mapping slope failure potential using fuzzy sets. Journal of Geotechnical Engineering, 118(3), 475-494. Khuri, A. I., & Cornell, J. A. (1987). Response surfaces: Design and analysis. New York: Marcel Dekker. Kleijnen, J. P. C. (2005). An overview of the design and analysis of simulation experiments for sensitivity analysis. European Journal of Operational Research, 16(2), 287-300. Low, B. K. (1997). Reliability analysis of rock wedges. Journal of Geotechnical and Geoenvironmental Engineering, 123(6), 498-505. Mahtab , M. A., & Yegulalp, T. M. (1982). A rejection criterion for definition of clusters in orientation data. 23rd. Symposium on Rock Mechanics, 116-123. Regional District of Okanagan-Simikameen, URL: maps.rdos.bc.ca. Rocscience Inc. 2014. 
Phase2 v8.0 – Two-Dimensional Finite Element Slope Stability Analysis. User’s Manual. Mason, R. L., Gunst, R. F., & Hess, J. L. (1989). Statistical Design and Analysis of Experiments. New York-USA: John Wiley & Sons. Massey, F. J. (1951). The kolmogorov-smirnov test for goodness of fit. Americal Statistical Association Journal, 46, 68-78. 243 Mikhail, E., Bethel, J., & McGlone, J. C. (2001). Introduction to Modern Photogrammetry. John Wiley & Sons. Miller, S. M., Whyatt, J. K., & McHugh, E. L. (2004). Applications of the point estimation method for stochastic rock slope engineering. Rock Mechanics Across Borders and Disciplines, 6th North American Rock Mechanics Conference, Gulf Rocks, USA. Montgomery, D. (2001). Design and Analysis of Experiments. John Wiley and Sons. Myers, R. H. (1991). Response surface methodology in quality improvement. Communications in Statistics -Theory and Method, 20, 457-476. Myers, R. H., & Montgomery, D. C. (2002). Response Surface Methodology Process and Product Optimization Using Designed Experiments (2nd ed.). New York-USA: John Wiley & Sons. Nilsen, B. (1985). Shear strength of rock joints at low normal stresses – a key parameter for evaluating rock slope stability. International Symposium on Fundamentals of Rock Joints, Bjørkliden, Sweden. 487-494. Park, H. J., West, T. R., & Woo, I. K. (2005). Probabilistic analysis of rock slope stability and random properties of discontinuity parameters, interstate highway 40, western North Carolina, USA. Engineering Geology, 79(3-4), 230-250. Park, H. J., Um, J. G., Woo, I., & Kim, J. W. (2012). Application of fuzzy set theory to evaluate the probability of failure in rock slopes. Engineering Geology, 125, 92-101. Patton, F. D. (1966). Multiple mode of shear failure in rock and related materials. First Congress of International Society of Rock Mechanics, Lisbon, Portugal. Peschl, G. M., & Schweiger, H. F. (2003). 
Reliability analysis in geotechnics with finite elements - comparison of probabilistic, stochastic and fuzzy set methods. 3rd International Symposium on Imprecise Probabilities and their Applications (ISIPTA’03), Carleton Scientific, Canada, 437-451. 244 Phoon, K. K., & Kulhawy, F. H. (1999). Characterization of geotechnical variability. Canadian Geotechnical Journal, 36, 612-624. Price, N. J., & Cosgrove, J. W. (1990). Analysis of Geological Structures. Cambridge, England: Cambridge University Press. Priest, S. D. (1993). Discontinuity Analysis for Rock Engineering. Chapman and Hall. Public Safety Canada (2014). Available from http://www.publicsafety.gc.ca/index-eng.aspx. [Last accessed 18 July 2014]. Raiffa, H., & Schaifer, R. (1961). Applied Statistical Decision Theory. Cambridge, MA: MIT Press. Robertson, A. (1970). The interpretation of geological factors for use in slope stability. Symposium on the Theoretical Background to the Planning of Open Pit Mines with Special Reference to Slope Stability, 55-71. Rosenblueth, E. (1975). Point estimates for probability moments. National Academy of Sciences of the United States of America, 72, 3812-3814. Sacks, J., Welch, W. J., Mitchell, T. J., & Wynn, H. P. (1989). Design and analysis of computer experiments (with discussion). Statistical Science, 4, 409-435. Schweiger, H. F., & Thurner, R. (2007). Basic concepts and applications of point estimate methods in geotechnical engineering. In Probabilistic Methods in Geotechnical Engineering, D.V.Griffiths & G.A. Fenton (Eds), Springer-Verlag, New York. 97-110. Segal, P., & Pollard, D. D. (1983). Joint formation in granitic rock of the Sierra Nevada. Geological Society of America Bulletin, 94, 563-575. Shamekhi, E., & Tannant, D. D. (2010). Risk assessment of a road cut above Highway #1 near Chase, B.C. 63rd Canadian Geotechnical Conference, Calgary, Canada. Shanley, R. J., & Mahtab, M. A. (1976). Delineation and analysis of clusters in orientation data. 
Journal of the International Association for Mathematical Geology, 8(1), 9-23. 245 Shi, G. H. (1988). Discontinuous deformation analysis: A new numerical model for the statics and dynamics of block system. Engineering Computations, 9, 157-168. Shukra, R., & Baker, R. (2003). Mesh geometry effects on slope stability calculation by FLAC strength reduction method – linear and non-linear failure criteria. . In Proceedings of the 3rd International FLAC Symposium, Ontario, Canada. pp. 109-116. Singhal , B. B. S., & Gupta, R. P. (2010). Applied hydrogeology of fractured rocks (2end ed.) Springer. Slakter, M. (1965). A comparison of the Pearson chi-square and Kolmogorov goodness-of-fit tests with respect to validity. Journal of the American Statistical Association, 60(311), 854-858. Sturzenegger, M., & Stead, D. (2009). Quantifying discontinuity orientation and persistence on high mountain rock slopes and large landslides using terrestrial remote sensing techniques. Natural Hazards and Earth System Sciences, 9(2), 267-287. Sturzenegger, M., Stead, D., Beveridge, A., Lee, S., & van , A. (2009). A long-range terrestrial digital photogrammetry for discontinuity characterization at Palabora open-pit mine. 3rd Canada–U.S. Rock Mechanics Symposium, Toronto, Canada. Tabba, M. M. (1984). Deterministic versus risk analysis of slope stability. 4th International Symposium on Landslides. Toronto, Canada. 491-498. Weibull, W. (1939). A statistical theory of the strength of materials. Ingeniors Vetenskaps Akadenien, 151(3), 45-55. Wheel, M. A. (1996). A geometrically versatile finite volume formulation for plane elastostatic stress analysis. The Journal of Strain Analysis for Engineering Design, 31(2), 111-116. Wolff, T. F. (1996). Probabilistic slope stability in theory and practice. Conference on Uncertainty in the Geologic Environment, Madison, USA. 419-433. 246 Zhou,W.,& Maerz,N.H.(2002). 
Implementation of multivariate clustering methods for characterizing discontinuities data from scanlines and oriented boreholes. Journal of the Computers and Geosciences, 28, 827-839. Zienkiewicz, O. C., Best, B., Dullage, C., & Stagg, K. (1970). Analysis of non-linear problems in rock mechanics with particular reference to jointed rock systems. 2nd International Congress on Rock Mechanics, Belgrade, Serbia.
Item Metadata
Title | Probabilistic assessment of rock slope stability using response surfaces determined from finite element models of geometric realizations |
Creator | Shamekhi, Seyedeh Elham |
Publisher | University of British Columbia |
Date Issued | 2014 |
Genre | Thesis/Dissertation |
Type | Text |
Language | eng |
Date Available | 2014-07-29 |
Provider | Vancouver : University of British Columbia Library |
Rights | Attribution-NonCommercial-NoDerivs 2.5 Canada |
DOI | 10.14288/1.0074360 |
URI | http://hdl.handle.net/2429/48524 |
Degree | Doctor of Philosophy - PhD |
Program | Civil Engineering |
Affiliation | Applied Science, Faculty of Engineering, School of (Okanagan) |
Degree Grantor | University of British Columbia |
Graduation Date | 2014-09 |
Campus | UBCO |
Scholarly Level | Graduate |
Rights URI | http://creativecommons.org/licenses/by-nc-nd/2.5/ca/ |
Aggregated Source Repository | DSpace |