RELIABILITY-BASED DESIGN OPTIMIZATION USING DDM ENABLED FINITE ELEMENTS

by

Stevan Gavrilovic

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF APPLIED SCIENCE in THE FACULTY OF GRADUATE AND POSTDOCTORAL STUDIES (CIVIL ENGINEERING)

THE UNIVERSITY OF BRITISH COLUMBIA (VANCOUVER)

July 2015

© Stevan Gavrilovic, 2015

Abstract

Rts is a multiplatform computer program for risk-based structural optimization that incorporates uncertainty into structural analysis through random variable parameters. The major contribution of this thesis is that Rts now has the capability to perform reliability-based design optimization using Finite Element Method (FEM) analytical sensitivities. Analytical gradients are exact and more efficient than finite difference sensitivities, and gradient-based optimization methods converge more rapidly when they are used.

For this thesis, I derived and implemented both nodal and material analytical gradients throughout the Rts framework, from the finite element level up through the optimization level. The Reliability-Based Design Optimization (RBDO) model stream includes an FEM model, a COST model, a RISK model with a built-in First-Order Reliability Model (FORM), and the orchestrating RBDO model. A program-wide Direct Differentiation Method (DDM) framework was additionally established that provides efficient analytical gradient calculations throughout the model stream.

The FEM elements implemented are the bilinear Mindlin four-node and nine-node plate elements. An academic COST model was created to showcase the multi-model capabilities of Rts and the ability to calculate DDM dependencies of downstream models. Additionally, a RISK model was implemented that incorporates a built-in FORM model with gradient-history capabilities and in-model DDM dependency calculations; the risk measure used is the mean cost. The RBDO model was also extended to include DDM capabilities and downstream model integration.

Finally, two reliability-based design optimization examples were implemented using both nodal and material sensitivities. The thickness and width of a timber cantilever beam were optimized with respect to mean cost, taking into account deflection damage and construction cost.

Preface

Components of this thesis were implemented in the Rts computer program created at the University of British Columbia. Rts is a merger of St and Rt. Rt was developed during the PhD studies of Dr. Mojtaba Mahsuli at UBC Vancouver, while St was developed by Dr. Terje Haukaas. My contributions to the program were the implementation of:
- plate finite elements with analytical gradients
- analytical gradients throughout the model framework
- a cost model template with analytical gradients
- a RISK model with integrated FORM analysis; the original risk framework created by Alfred Larsen was extended to include analytical gradients and integrated FORM optimization algorithms
- reliability-based optimization examples incorporating finite element analysis, cost models, and reliability models, all utilizing analytical derivatives from the downstream models

Table of Contents

Abstract
Preface
Table of Contents
List of Tables
List of Figures
List of Symbols and Abbreviations
Acknowledgements
Dedication
Chapter 1: INTRODUCTION
Chapter 2: ORCHESTRATING MODELS
2.1 RBDO Optimization Model
2.1.1 Optimization Search Algorithms
2.1.1.1 Steepest Descent Method
2.1.1.2 BFGS Method
2.2 RISK Model
2.3 FORM Reliability Model
2.3.1 FORM Limit-State Function
2.3.2 FORM Design Point Optimization
2.3.3 FORM Convergence Criterion
2.3.4 FORM Probability Transformations
2.4 COST Model
2.5 FEM Model
2.5.1 Energy Formulation
2.5.2 Principle of Virtual Displacements
2.5.3 Plate Finite Elements
2.5.4 FEM Mindlin Plate Shape Functions
2.5.5 Parametric Finite Elements
Chapter 3: ANALYTICAL GRADIENTS
3.1 FEM Gradients
3.2 COST Model Gradients
3.3 FORM Reliability Model Gradients
3.4 RISK Model Gradients
3.5 RBDO Gradients
3.5.1 Direct Differentiation Method – DDM
3.5.2 Adjoint Method – ADM
Chapter 4: IMPLEMENTATION AND EXAMPLES IN RTS
4.1 FEM Implementation
4.2 COST Implementation
4.3 RISK Implementation
4.3.1 FORM and COST Gradients
4.3.2 RISK Sensitivity Calculations
4.3.3 SAMPLING Gradients
4.4 RBDO Implementation
Chapter 5: RBDO EXAMPLES IN RTS
5.1 Optimization of Beam Thickness Using Material DDM Sensitivities
5.2 Optimization of Beam Width Using Nodal DDM Sensitivities
5.3 Simultaneous Optimization of a Beam Width and Thickness
Chapter 6: CONCLUSION
Bibliography
APPENDICES
Appendix A
A.1 Rts Input File Thickness Optimization
A.2 Rts Input File Width Optimization
Appendix B
B.1 Material Optimization Output
B.2 Nodal Optimization Output

List of Tables

Table 1. Element material properties matrices
Table 2. Shape functions of a plane quadrilateral from 4 to 9 nodes (Cook, Malkus, Plesha, & Witt, 2002)
Table 3. Examples of typical FEM parameters used in Rts
Table 4. Rts and Excel optimization example results comparison
Table 5. Rts model list of parameters and values
Table 6. Optimization analysis parameters

List of Figures

Figure 1. Graphical outline of orchestrating model stream in Rts
Figure 2. Stresses on a plate element
Figure 3. Coordinate mapping from x,y to ξ,ɳ (Flaherty E, 2014)
Figure 4. Four node and nine node quadrilateral elements
Figure 5. Four node quadrilateral plate element
Figure 6. The gradient flowchart within Rts
Figure 7. Direct Differentiation Method (DDM) in Rts (Mahsuli & Haukaas, 2013)
Figure 8. Optimization software architecture in Rts
Figure 9. Cantilever beam model in Rts
Figure 10. Deflected beam Rts model
Figure 11. Thickness optimization Rts GUI output
Figure 12. Optimization plot; iteration number vs. objective function value
Figure 13. Thickness optimization CCDF
Figure 14. Thickness optimization full GUI
Figure 15. Thickness optimization plot
Figure 16. Thickness optimization CCDF evolution
Figure 17. Width optimization full Rts GUI output
Figure 18. Width optimization plot; iteration number vs. objective function value
Figure 19. Width optimization RISK CCDF evolution plot
Figure 20. Width optimization Rts GUI output
Figure 21. Width optimization; iteration number vs. objective function value
Figure 22. Width optimization CCDF curve
Figure 23. Iteration number vs. objective function value of simultaneous optimization
Figure 24. CCDF curve of simultaneous optimization
List of Symbols and Abbreviations

= the random variables or parameters that influence the probability of a failure event occurring
B = strain-displacement matrix
BFGS = Broyden-Fletcher-Goldfarb-Shanno algorithm
= cost response
D = material property matrix
DDM = Direct Differentiation Method
d = the vector containing element dofs
dof = Degree of Freedom
F = the vector of global nodal forces
FEM = Finite Element Method
FORM = First-Order Reliability Model
N = vector or matrix of spatially varying shape functions
NDA = Nondeterministic Approach
Ni = element shape function
PDF = Probability Density Function
RBDO = Reliability-Based Design Optimization
u = the vector of global nodal displacements
= displacement response
x = original-space vector consisting of all of the random variables or parameters
y = standard-normal vector of the normalized random variables

Acknowledgements

I would like to extend my gratitude to Dr. Terje Haukaas for inspiring me and for offering his help and invaluable advice whenever I encountered roadblocks along the way. I am also grateful for his guidance and mentoring, without which this would not have been possible. I would additionally like to acknowledge the civil faculty, fellow students, and staff at UBC for making it an enjoyable and worthwhile learning experience.

Special thanks are owed to my parents, who have compelled me to always give my best and never give up. I am forever grateful for your support and the many sacrifices that paved the way for us to have the life that we have. I also want to thank my brother and sister for their love, understanding, and assistance.

Last, but not least, I want to thank my loving wife for her unwavering support. I am lucky to have a person like you in my life. Thank you for putting up with the late nights and the many research hours that were spent on making this a reality.

Dedication

I dedicate this thesis to my wife and family. Without all of your support and care I would not know the meaning of love. Thank you for the encouragement and affection even when I didn't deserve it.

Chapter 1: INTRODUCTION

Traditional engineering problems often reduce and simplify a system so that it can be solved deterministically. Using this approach, engineers introduce safety factors, or scale the solutions, so that some degree of uncertainty is taken into account. Uncertainty is inherent in every methodology; it exists due to material variability, unpredictable loading, imperfect knowledge, and errors in the empirical understanding of physical systems, to name a few sources. An example of a system with insufficient knowledge is nuclear energy, where the frequency of failures is so low that the probabilities and uncertainties may be difficult to estimate. Though the failures are rare, their consequences are extremely severe.

As modeling processes become more sophisticated and new technologies emerge, the conventional deterministic approach is less able to provide the solutions needed to make the best decisions. Complex problems involve varying degrees of uncertainty that need to be considered and quantified so that the best decisions are made with the knowledge that is available. A practical definition of uncertainty is the knowledge gap between what is known and what needs to be known for making optimal decisions with minimal risk (Singhal, Ghiocel, & Nikolaidis, 2004).
Methods that incorporate risk and uncertainty are termed Nondeterministic Approaches or NDA.    Nondeterministic methods quantify and manage uncertainty, reducing and mitigating its effects.  The method of NDA that is used in this thesis is the Probabilistic Analysis Approach.  In this approach, the system parameters are assumed to be random variables, with  2the analysis incorporating their respective joint Probability Density Functions (PDFs).   The primary objective of the Probabilistic Analysis Approach is to determine the reliability of the system.   The reliability of a system is considered the ability of the system, or subsystem, to function under certain conditions for a specified period of time; in other words the probability of successful operation.    Successful operation consists of the structural system performing its function throughout its lifecycle.  The ability of a system to successfully perform a certain function can be quantified using performance measures.  Some examples of performance measures are cost, displacement, and stress responses.  Examples of cost measures include cost of repairs, cost of degradation over time, and future cost of replacement. All of the above measures potentially involve large variations in uncertainty.  When combined with inadequate knowledge of the problem, it is apparent why conventional deterministic methods fail to include all components involved in making sound decisions.    Reliability assessment addresses these issues with the selection of a suitable reliability model, analysis of the model, calculation of the reliability performance indices, and evaluation of the results with decisions on improvements (Singhal, Ghiocel, & Nikolaidis, 2004).  The end result of a reliability assessment is a risk measure.  Risk is defined as a measure of uncertainty; the quantitative and qualitative likelihood that a negative outcome may be realized.  The final goal is to minimize risk using optimization techniques after the risk is quantified using a probabilistic approach.  A typical optimization procedure uses algorithms to find the best possible combination of parameter properties to satisfy a set of  3performance requirements and design constraints.   The optimization model requires the input of a risk model that incorporates a reliability model. In this thesis, the First-Order Reliability Method (FORM) is used to calculate the failure probabilities.  A reliability model requires the input of a response model.  Typically, the response input involves a cost function or structural response; in the examples presented in this thesis, the COST model is implicitly related to the structural response of the system and construction cost.  The structural response is evaluated analytically using Finite Element Methods (FEM).  Propagation of the uncertainty throughout the system starts at the FEM level where the input parameters can be random variables that carry on through to the final orchestrating Reliability-Based Design Optimization (RBDO) model. The governing models responsible for each subsection of the optimization analysis are termed orchestrating models.  The order of orchestrating models in Rts from lowest to highest is: FEM, COST, FORM, RISK, and RBDO.  The RBDO model employs algorithms that quantify which parameters have the largest influence on minimizing the risk.  This is termed sensitivity analysis.  Parameter sensitivities involve gradient calculations that start at the FEM level and propagate throughout the model stream.  
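To make the idea of gradient propagation concrete, the following is a minimal sketch in Python, not the actual C++/Rts implementation, of how a downstream sensitivity is assembled from upstream DDM gradients by the chain rule. The toy functions and their functional forms are invented purely for illustration.

```python
# Toy model stream: parameter theta -> FEM response u -> cost c.
# Each "model" returns its response together with its analytical (DDM) gradient,
# so the downstream sensitivity dc/dtheta never requires finite differences.

def fem_model(theta):
    u = 5.0 / theta**3            # made-up response, e.g. a tip deflection
    du_dtheta = -15.0 / theta**4  # exact derivative of the expression above
    return u, du_dtheta

def cost_model(u, theta):
    c = 1000.0 * u + 20.0 * theta  # damage cost plus construction cost (illustrative)
    dc_du = 1000.0                 # explicit dependence on the response
    dc_dtheta_explicit = 20.0      # explicit dependence on the parameter
    return c, dc_du, dc_dtheta_explicit

def total_cost_sensitivity(theta):
    u, du_dtheta = fem_model(theta)
    c, dc_du, dc_dtheta_explicit = cost_model(u, theta)
    # Chain rule: implicit dependence through u plus explicit dependence on theta.
    return c, dc_du * du_dtheta + dc_dtheta_explicit

print(total_cost_sensitivity(0.2))  # cost and its exact sensitivity at theta = 0.2
```

The same pattern repeats at every level of the stream described in this thesis: each orchestrating model stores its response gradient so that the model downstream can combine it with its own partial derivatives.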
To calculate the various gradients used throughout the model stream, exact analytical Direct Differentiation Methods (DDM) are used.  Unlike approximate numerical methods, such as the Finite Difference Method, analytical methods are exact.  Having accurate gradients greatly increases the efficiency and accuracy of the RBDO and FORM procedures.   4This thesis will go over the orchestrating models implemented throughout Rts used to perform a reliability-based optimization.  In addition, it will highlight the theory, derivations, and progression of analytical gradients starting at the FEM level up to the final RBDO model.  After the background theory is established, this thesis will provide an overview of the Rts implementation and will conclude with working examples involving the optimization of a timber cantilever beam.  The examples will include both nodal and material sensitivities that involve optimizing the beam’s width and thickness.      5Chapter  2: ORCHESTRATING MODELS A typical structural optimization problem uses geometric state or response variables such as displacement, stress, or strain as constraints on the objective function.  The objective function is a constrained mathematical function that is the target of the RBDO optimization. Responses are calculated from structural equilibrium formulations such as analytically solving a partial differential equation or numerically from FEM techniques.  Since analytical solutions are feasible for only the simplest of cases, FEM is predominantly used to calculate the response.    Rts goes beyond geometric responses by optimizing a risk expectance that is derived from a reliability analysis that incorporates the results from a FEM model which has random variable parameters.  A graphical representation of the Rts model structure is given below:  Figure 1. Graphical outline of orchestrating model stream in Rts The FEM model is the starting point of every optimization iteration .  The output of the FEM model is used by the COST model which assigns a cost based on the response from  6FEM.  The COST model can assign additional costs based on construction or damage to name a few.  The final cost measure is then utilized by the FORM reliability model which is in turn governed by the RISK model; the RISK model specifies thresholds and calculates the mean cost using the probability of exceedance from the reliability model.  Finally, the all-encompassing RBDO model minimizes the risk measure, or mean cost, by adjusting the values of decision variables.  Sections 2.1 to 2.5 will explain the functions of each orchestrating model starting from governing RBDO model down to the low-level FEM model.    2.1 RBDO Optimization Model A structure is an assemblage of components and elements that can vary in material, spatial, and cross-sectional properties to name a few.  The assemblage must satisfy certain performance and design requirements based on its intended function.  A design requirement could be to minimize cost, as in a commercial building, or weight, as in an aerospace component.  It can also be a combination of design requirements such as to maximize the stiffness, strength, or stability while minimizing the cost or weight.  A performance requirement could be a maximum allowable deformation or stress.  Regardless of a structures prescriptive requirements, it must practically satisfy a set of constraints or bounds while being efficient.  Finding the optimum structural layout involves employing a suitable optimization methodology.  
The procedure for solving reliability-based structural optimization problems in Rts is given by the following algorithm: 1. Select an initial design . 2. The displacement of the system () is calculated using FEM for the current design  7iteration . 3. If applicable, using a COST model, estimate the cost of repair or damage (()) given the FEM response.  Include other cost measures such as construction and degradation over lifecycle if appropriate. 4. Calculate the probability of failure () for the current design iteration using FORM.  Failure can be exceedance of a cost (()) or displacement (), for example.   5. Calculate the RISK measure () using the FORM output; the mean value of the aggregate cost for the current design. 6. For the current design, formulate the objective function () and calculate its gradient ∇().  7. Utilizing a numerical optimization technique, calculate a suitable step size and direction to give a new design . 8. Repeat steps 2-7 until a convergence criterion is satisfied.    Note: At each iteration  of the optimization, the design state (the current values of decision variables) is represented by .   The objective function () is a function that is either maximized or minimized while using specified design criterion as constraints.  It is composed of decision variables and state variables.  Decision variables () include those variables which are adjusted or iterated-over during the optimization procedure.  A decision variable can be the thickness of a plate, the length of a beam, or the area of a cross-section.  State variables ( || ) represent the response of the system; examples include cost, displacement, stress, or strain.     8Therefore, to summarize, an optimization problem can be represented as: min (,||)  subject to                      response constraints on ||                   design constraints on                     equilibrium constraints 1 Response constraints are typically written as the inequality (||) ≤ 0 where  is termed the constraint function.  In structural optimization, the response || is dependent on the design variable .  Therefore, the two constraints can be combined to define a “nested” formulation in which all response and design constraints are written as the inequality (,||) ≤ 0.   The structural optimization () problem is repeated mathematically for completeness as: () min(,||). .(,||) ≤ 0 2 The solution of the  problem involves gradient-based numerical techniques that employ the derivatives of .  At both the FORM and RBDO levels of the procedure numerical algorithms are used that employ gradients.  Finding the gradients of the objective and constraint functions is a non-trivial task since () is implicitly carried through the model framework starting at the FEM level.   2.1.1 Optimization Search Algorithms In iterative optimization problems, an important step is selecting a suitable step size and direction when updating decision variables.  The sections below describe the two main search algorithms implemented in Rts that are used by the RBDO model; they are the Steepest Descent method and the Broyden-Fletcher-Goldfarb-Shanno (BFGS) method.  Both algorithms employ so called hill-climbing techniques that are used to find a local minimum.   9The minimum is called a stationary point; a point where the gradient is zero.    2.1.1.1 Steepest Descent Method The steepest descent method is a simple approach in where the search direction is equal to the gradient of the objective function.  
The step size is given by the user and remains fixed during the optimization procedure.  Using this approach, the function will always make a step towards the design point.  It can be described mathematically as:  =  ∙∇() 3 where  is the search direction vector,  is a fixed step size, and  ∇() is the gradient of the objective function evaluated at .  Care must be taken when selecting a step size, for if the step size is too small, the search can be very inefficient.  If the step size is too large, then the search may step over the design point thereby oscillating around the design point without converging; especially if the function exhibits large curvature around the design point.  Further issues arise when conducting multi-parameter optimization as some parameters require a smaller step size for convergence while others can converge faster with a larger step size.  This requirement can prove to be inefficient since multiple optimization iterations are required.     102.1.1.2 BFGS Method Another method for selecting the step size and direction is known as the BFGS algorithm.  BFGS attempts to solve the step size and convergence issues encountered with the steepest descent method by utilizing second-order approximations of the objective function.  The BFGS method is considered a quasi-newton method since the Hessian is approximated using first derivative values rather than being evaluated directly.  The BFGS analogue to Newton’s formulation is given as:  = −∇() 4 where   is the approximation to the Hessian matrix,   is the search direction vector, and ∇() is the gradient of the objective function evaluated at . The BFGS algorithm is summarized as follows: 1. Make an initial guess  and . 2. Obtain a search direction where    = −∇() 5  3. Perform a line-search to find the step size  (or use a fixed step size).  4. Find  =  + . 5. Set S = . 6. Set  = ∇() − ∇(). 7. Find  =  +−. 8. Repeat steps 2-6 until  converges to a solution.    112.2 RISK Model The RISK model takes the exceedance probability output from the FORM and sampling models to produce an expectance or risk measure.  The risk measure, in this instance, is the area under the Complementary Cumulative Distribution Function (CCDF) which represents the mean value of a total cost.  The total cost can be the output from a collection of models consisting of any number of random variables or decision variables.  Examples can include repair cost, construction cost, or replacement cost models to name a few.  Inherently, the total cost is uncertain as it depends on the values of the random and design-decision variables.  Hence, the risk measure is given as a mean value that is then used as an input into the RBDO model.    The procedure for calculating the risk measure is summarized as: 1. The sampling model does an initial sampling analysis of a specified number of samples to get a rough estimate of the mean and standard deviation.    2. A specified number of thresholds is then distributed equally on either side of the estimated mean.  A larger number of thresholds gives greater resolution in the CCDF curve but it comes with a higher computational cost.   3. An integrated FORM analysis is then run for each threshold value with the corresponding exceedance probability as its output.   4. If the probability exceedance values are not above or below 0.005 or 0.995 at the extreme thresholds, then new thresholds are added in either direction until the exceedance probability falls within the bracket.  
This ensures that the summation of the risk measure does not exclude large portions of the area in the “tail ends” of the CCDF.    125. The resulting exceedance values are the points on the CCDF curve; with each point being the probability that the cost value will exceed a certain cost threshold.    For the examples used in Chapter 5 of this thesis, 10 initial samples were sufficient to estimate the mean.  The number of cost thresholds used was 23.  Using a higher number of sampling points or thresholds greatly increases computational cost.  At every sample point the FEM model is run while the FORM model is run at every threshold value.  FORM typically runs multiple instances of the FEM model.  Therefore, if the structure is complicated, using an unnecessarily high number of initial samples or thresholds can be computationally detrimental.  Every optimization requires sound judgment when selecting the number of sampling points or thresholds.  The integral, or area under the CCDF, gives the risk measure which in this case is the mean cost.  Using the Trapezoidal Rule, the area under the CCDF curve is calculated numerically.  The mean cost  is given as: where  is the exceedance probability from FORM and  is the cost threshold. It is worth reiterating that incorporating a higher number of thresholds results in a more precise measure of the mean, but it comes with a high computation cost due to FORM calling repeated COST and FEM models downstream.      =121 +   +  +  ∙( − )  6  132.3 FORM Reliability Model The main goal of a structural reliability analysis is to evaluate the probability of a failure event occurring within a specified time period.  A failure event could be defined as cost overrun, excessive deformation, damage to the environment, or loss of life.  Mathematically, failure events can be described by a relation called the limit-state function:  =  =  {() ≤ 0} 7 The failure event  is realized when () is equal to or less than zero.  The components of the original-space vector  consist of all of the random variables, or parameters,  that influence the probability of the failure event occurring.   is given as:  =  {  …  } 8 The limit-state function can be either linear or non-linear.  In the first case, the limit-state function () is a linear function of continuous random variables .  The probability of the failure event F can be described by the function: () = (() ≤ 0) = (−)  9 where  can be described as the structural reliability index, given as:  =  10 In this context,  represents the number of standard deviations, or the spatial “distance,”  from the failure plane.  A larger  corresponds with an increase in safety and conversley a smaller value implies a greater failure probability.  Taking into account all of the random variables, the total probability of failure can be determined by integrating a joint probability distribution function () in original space as: () = (() ≤ 0) =  ()() 11  14 Analytically solving this integral is non-trivial, therefore numerical approximation techniques are used to evaluate the probability of failure.  Numerical integration methods are highly inefficient for increasing dimension of the vector .  Consequently, a first-order approximation approach is adopted.  Reliability analysis using FORM involves a transformation, or normalizing, of the limit-state function from the original space function () to the standard-normal space function ().  
This transformation is accomplished by normalizing the components of the standard-normal vector  from the original-space vector  where each component y is given by: y = −   12 By normalizing the limit-state function into standard-normal space, the normalized random variables y now have zero means and unit standard deviations.  In standard-normal space the structural reliability index  is the shortest distance from the hyper-plane (or line in two dimensions) that forms the boundary between the safe domain and the failure domain to the standard-normal space origin.  The point on the plane or line that is closest to the origin is denoted the design point, or the point where failure is most likely.  This is also the point in the failure domain with the highest probability-density.  In standard-normal space, the joint probability distribution function of uncorrelated standard-normal random variables is given by: () =1(2)  13 where  is the standard-normal vector of the normalized random variables .      152.3.1 FORM Limit-State Function  A cornerstone of FORM involves the process of linearizing the limit-state function described in equation 9 at the design point.  The design point ∗ is the point on the failure hyper-plane that is located at the closest distance to the standard-normal space origin.  As described above, the shortest distance spatially between the safe and failure domain is called the reliability index and it is defined as:  = ‖∗‖ 14 In FORM, the limit-state function is linearized by a first-order Taylor approximation to produce: () ≅ (∗) + ∇∗ ∙( − ∗)  15 The term (∗) is equal to zero since we are evaluating the limit state function at its design point.  It is common to normalize the gradient and negate it; this negative unit vector is referred to as the alpha-vector and is given as:  = −∇(∗)‖∇(∗)‖  16 where each component is: = −∂G∂(∗)∇(∗) ∙∇(∗)  17 () can now be expressed as: () ≅ −‖∇(∗)‖∙( − ∗) = ‖∇(∗)‖∙(∗ − )  18 Beta is now redefined as:  = ∗ = −∇(∗)‖∇(∗)‖∗  19   16which brings us to the final form of the linearized limit-state expression: () ≅ ‖∇(∗)‖∙( − )  20 To find the design point ∗, we must find the minimum which can be accomplished using an optimization routine.    2.3.2 FORM Design Point Optimization  The design point is found by iteratively solving the following optimization problem:  = min∈{()}= ‖∗‖  21 where ∗ is the design point or the distance closest to the failure surface.  The above expression infers that the design point is located at the minimum distance from the failure surface linearized on the point   by the line tangent to the surface ′() = 0.  For FORM to function, () must be continuously differentiable.  The search algorithm for finding the FORM design point can be described as the following optimization problem:  ∗ = argmin12‖‖ | () ≤ 0  22 The iterative algorithm to find ∗ is formulated as:   =   + ∙  23 where  is the step size and  is equal to a search direction.  The FORM Model incorporates both fixed and Armijo step sizes.  If the fixed step size fails to converge, then Armijo will take over.  The search direction is given using the state-of-the-art iHLRF algorithm:  =  −   = ()‖∇()‖+  ∙  −    24  17 The iHLRF algorithm requires the normalized gradient ()‖∇()‖ which we will solve for analytically.    2.3.3 FORM Convergence Criterion The design point is attained when the Karush-Kuhn-Tucker (KKT) optimality conditions have been satisfied.  
The two optimality conditions are: ()≤   25 and  1 −‖‖≤   26 The first condition ensures that the trial point  is close to the limit state surface while the second condition is an angle-based criterion that requires the trial point on the limit-state surface to be the minimum distance to the origin. The values of  and  are in the magnitude of 10-3.   To summarize, the FORM algorithm involves the following procedures: 1. Select a starting point in standard normal space . 2. Transform the parameters from original space  using equation 31. 3. Evaluate the limit-state function () =  (). 4. Calculate the gradient of the limit-state function ∇(). 5. Set the scaling factor for the first convergence criterion  =  (). 6. Check convergence using equations 25 and 26. 7. If convergence is not satisfied, iterate to find  using equation 24 and a suitable step size.  188. Repeat steps 2-7 while skipping 5 until convergence. The reliability index is only then computed at convergence, followed by the respective failure probability.   2.3.4 FORM Probability Transformations As previously mentioned, in FORM reliability analysis, we seek to transform each random variable parameter from original to standard-normal space.  The goal of probability transformations is to redefine a vector of random variables , with known probability distributions, to a vector , also with known probability distributions.  This process involves determining the functional relation between the random variables; the random variables can be correlated or independent.  In the optimization examples, the probability distributions used are normal and log-normal.  The examples use independence probability transformations.  For independent or uncorrelated random variables, the probability transformation is given as: y =  Φ() 27 from () = Φ (y) 28 and   = (Φ (y)) 29  To find Φ , we need the inverse of the Jacobian which we can derive by differentiation of equation 28: () = Φ (y) = () =Φ (y) =yyΦ (y) =y(y)  30  19Finally, we get the diagonal Jacobian matrix necessary to complete the transformation with the components: y=()(y)  31  2.4 COST Model The COST model is where the responses and geometry parameters from the FEM model are converted to a present-value cost.  The COST model may include construction cost, future repair cost, degradation cost, financing cost, and environmental cost to name a few.  The COST model can be extremely complex and is outside the scope of this thesis, only the model framework for DDM gradients was implemented along with simple examples for academic purposes.  For this thesis, the cost model formulation is: () = (()) + () 32 Where (()) is a function that assigns a monetary value to a given displacement () and () is a construction cost that can be considered a function of a component’s material and geometric properties.  (()) can be considered a damage model while () could be a construction model that depends on material used and component geometry such as thickness of a plate or volume of a beam.   2.5 FEM Model There are multiple approaches and derivations for implementing finite element solutions.  The main idea behind FEM involves solving a boundary value problem that satisfies all equilibrium, kinematics, and material law requirements.  The FEM model, as the name implies, involves discretizing a solid continuum into multiple ‘finite’ elements.  In other  20words, it involves meshing a continuous global domain into a collection of sub-domains.   
In the continuous domain, the responses are described by differential equations on an infinite number of points.  In the discrete domain, the responses are described by a finite number of points and with a set of algebraic equations.  In most practical cases, the governing partial differential equations are extremely difficult to solve for point-wise or exact solutions due to the complexity of the solids and domains involved.   As a result, the governing equations need to be numerically approximated, or averaged, over each discrete element within a discretized local domain.  As more elements are used, and as the mesh spacing becomes infinitesimal, the solution approaches the exact value.   In structural analysis solutions, FEM involves approximating either or both of the displacement and force fields.  The equilibrium, or compatibility approach, consists of approximating the displacement unknowns while the kinematics approach subjects the force unknowns to approximation.  There are also hybrid approaches that combine both force and displacement approximations to arrive at a solution.    In Rts, the displacements are approximated to satisfy equilibrium.  Hence, the remainder of this thesis will focus on the displacement-based FEM approach.  Regardless of the governing equation formulations used to derive the force or displacement approximations; they can both be linked with virtual work or energy principles.      212.5.1 Energy Formulation The energy quantity defined as work satisfies both the compatibility and kinematics conditions.  More generally, work or   can be defined as the energy required for the differential displacement  of a point p along the direction of an applied force component .  Mathematically, work can be expressed as:   =   or as    =  ∫ 33 This integral conveniently represents both the equilibrium and kinematics quantities in a single expression.  Since both forces and displacements appear in the definition of work, virtual work can be expressed as either the principle of virtual displacements or virtual forces.  The conditions of equilibrium are derived from virtual displacements while compatibility is defined through virtual forces.  In the case of a deformable body, such as a structural system, both the external and internal forces can do virtual work.  This thesis will employ the principle of virtual displacements since it is the method predominantly used in solving structural mechanics problems.    2.5.2 Principle of Virtual Displacements Using extremum formulations, structural problems can be solved by virtual displacements utilizing a variational calculus approach.  Variational calculus involves minimizing an integral, in this case the potential function:  Π = U − H  34 The potential function states that the strain energy of a system U is equal to the total potential of the loads H  plus a constant Π.  The values of  U and H  are obtained by summing (integrating) the energy contributions from each of the elements.  The internal strain energy  22of a deformable system is given by:    U =12  35 The potential energy due to an external load is given by the area under the force-displacement relationship as:  H =12F∙ 36 Integrating over the entire volume of a solid gives the following general expression:   {}{}  − {}{}  = Π 37 The core concepts of the virtual displacement principle can be summarized in the following theorems: I. If a particle is in equilibrium, the total virtual work done during any arbitrary virtual displacement of the particle is zero. II. 
A system of particles is in equilibrium if the total virtual work done is zero for every independent virtual displacement.   III. A deformable system is in equilibrium if and only if the total external virtual work is equal to the total internal virtual work for every virtual displacement consistent with the constraints. IV. Of all of the possible displacements which satisfy the boundary conditions of a structural system, those corresponding to equilibrium configurations make the total potential energy assume a stationary value.   V. The displacements corresponding to stable equilibrium configurations make the total potential energy a relative minimum (Eisley, 1989).                                                                     23The equilibrium equations are therefore obtained by minimizing the potential function:  Π = 0 38 To accomplish this, we impose a virtual displacement or strain relative to equilibrium and set the potential function to zero.  Integration by parts is then performed on equation 37 and 38 where the derivative is transferred from the minimizing function to the so-called variational .  The symbol  in this instance has the same meaning as the differential operator.  The final outcome is equation 39 below.   {}{}  − {}{}  = 0 39 This FEM formulation is considered a ‘weak’ formulation since we neglect the twice-differentiability condition.  Now the kinematic relationship between stress and strain can be formulated as: {}= []{} 40 where [] is a material properties matrix that depends on the element type.  For example, various elements and their corresponding material matrices are given in Table 1 below. Table 1.  Element material properties matrices Element Material Matrix  [] Bar  Beam  Plane Element 1 − 1  0 1 00 01 − 2    24Substituting equation 40 into 39 yields:  {}[]{}  − {}{}  = 0 41 One of the cornerstones of the FEM is the ability to interpolate the displacements using shape functions.  Let the displacements  be interpolated over an element using linear shape functions [] such as: {}= []{} 42 where {} is the vector of the nodal displacement at each Degree of Freedom (dof) of each element.  We also define [] as the strain displacement matrix: []= ∇[] 43 By definition of the strain-displacement relationship, strains are determined from displacements as:  {}= ∇{} 44 Hence, combining equations 42, 43, and 44 we define the strain as: {}= ∇[]{}= []{} 45 Substituting equations 42 and 45 into equation 41, we arrive at the final form of the FEM integral: {δ}  [][][][]  {}−  {δ}  []{}  = 0 46 Where the element stiffness matrix is: []=  [][][]   47 Equation 47 can be extended to any element, in any dimension. Rts uses this approach for line elements, bar elements, plane elements, and as of this thesis, plate elements.  The  25following sections will focus on the numerical solution techniques employed on plate elements since that was the element implemented in Rts for this thesis.   2.5.3 Plate Finite Elements In general, plate elements are based on either Kirchhoff or Mindlin plate theories.  Kirchhoff plate theory excludes transverse shear deformation while Mindlin plate theory accounts for it.  This is analogous to beam bending where Euler-Bernoulli beam theory neglects shear deformation while Timoshenko theory includes it.  Also, much like beam theory, as a beam gets deeper (or a plate becomes thicker) transverse shear plays a greater role in the accuracy and reliability of the computed results.  
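Equation 47 above applies to any element type. As a quick sanity check that is easy to verify by hand, the sketch below (Python, illustrative numbers only) evaluates it for the simplest entry of Table 1, the two-node bar, and recovers the familiar EA/L stiffness matrix; the same integral, evaluated numerically, produces the plate stiffness developed in the remainder of this section.

```python
import numpy as np

# Two-node bar: k_e = integral over the volume of B^T * E * B (equation 47).
# With constant strain, B = [-1/L, 1/L] and dV = A dx, so the integral is exact.
E = 200e9      # Young's modulus in Pa (illustrative value)
A = 0.002      # cross-sectional area in m^2 (illustrative value)
L = 3.0        # element length in m

B = np.array([[-1.0 / L, 1.0 / L]])   # 1 x 2 strain-displacement matrix
k_e = B.T @ (E * B) * (A * L)         # B^T E B integrated over the volume A*L

print(k_e)            # equals (E*A/L) * [[1, -1], [-1, 1]]
print(E * A / L)
```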
The stresses and stress resultants are highlighted on a plate element in Figure 2 below.    Figure 2. Stresses on a plate element  Plates develop bending moments in the two principle directions along with a twisting moment.  Positioned with the x-y plane at its midsurface, both rotations at the midsurface and surface slopes are used to describe plate elements.  The midsurface rotations  and  are positive when pointing in the x and y directions.  The rotation or slope on the surface of the plate element is given as w,x and w,y; where the comma represents differentiation with  26respect to the following subscript.  In Kirchhoff plate theory, a straight line normal to the midsurface is assumed to remain straight and normal to the deformed midsurface; transverse shear deformation is zero.  This requirement is relaxed in Mindlin theory.  In Rts, Mindlin plates were implemented because they provide more options such as the ability to analyze layered plates where transverse shear deformation is often important.  Additionally, they can be easily extended to become shell elements.  The drawback of using Mindlin plates is that they are susceptible to shear locking and require more expertise to implement (Cook, Malkus, Plesha, & Witt, 2002).    The plane-stress expression for plate elements is as follows: []=  =1 − 1  0 1 00 01 − 2  48 and for shear as: []=  =  00    49  where  is the Young’s modulus and the shear modulus is:  =2 + 2  50  The b and s subscripts in equations 48 and 49 denote bending and shear respectively.   Kirchhoff plates utilize only   while Mindlin plates incorporate both  and .  The full plane-stress expression for Mindlin plates is: ⎩⎪⎨⎪⎧⎭⎪⎬⎪⎫=⎣⎢⎢⎢⎡0 0[] 0 00 00 0 0  00 0 0 0 ⎦⎥⎥⎥⎤⎩⎪⎨⎪⎧⎭⎪⎬⎪⎫ 51  27Since we are dealing with plane stress,   and  are considered negligible.   Kirchhoff plates need only be represented by a single field w(x,y); the lateral deflection of the midsurface.  The stress and deformation throughout a Mindlin plate is expressed as three fields; w(x,y), ψx, and ψy.  The variables ψx and ψy being the rotations of the midsurface normal along x and y respectively.  With Kirchhoff elements, w,x = ψx and w,y = ψy  since the slope at the midsurface is assumed to equal the slope at the surface; hence only the field w(x,y) is sufficient to represent the total state of deformation.    The moment-curvature relationship for a homogenous, isotropic, and linearly elastic Mindlin plate is: ⎩⎪⎨⎪⎧    ⎭⎪⎬⎪⎫=⎣⎢⎢⎢⎡  0 0 0  0 0 00 0()0 00 0 0 ℎ 00 0 0 0 ℎ⎦⎥⎥⎥⎤⎩⎪⎨⎪⎧,,, + , − , − , ⎭⎪⎬⎪⎫     or     {}= []{}  52  where  =  the material property matrix  (),  =  is the shear modulus  ,  =  effective thickness 5 6 , ℎ = plate thickness, and  = material Poisson’s Ratio  D  is analogous to flexural rigidity EI  in beam theory while G denotes the shear modulus.  ℎ can be regarded as the “effective thickness” for transverse shear deformation.  Note that the  28lateral deflection field in Mindlin theory is coupled to the rotation fields only by transverse shear deformation.  The curvature deformations {}, the far-right vector of equation 52, are approximated through shape functions.  The following sections will provide an overview of the shape functions used for the Mindlin plate element implementation.   2.5.4 FEM Mindlin Plate Shape Functions In mathematics, any continuous function can be represented as a linear combination of discrete basis functions.  To discretize the continuum domain, an appropriate ‘basis’ function must be selected.  
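Before constructing the shape functions, it may help to see the constitutive quantities of equations 48-52 assembled numerically. The sketch below is an illustration only (the material values are made up); it builds the bending rigidity D = Et³/[12(1 − ν²)], the shear modulus G of equation 50, and the moment-curvature matrix of equation 52 for a homogeneous, isotropic Mindlin plate with the 5/6 effective-thickness factor.

```python
import numpy as np

# Moment-curvature relation for a homogeneous, isotropic Mindlin plate (equation 52).
# Material values below are illustrative only.
E  = 11.0e9    # Young's modulus, Pa (a timber-like value)
nu = 0.30      # Poisson's ratio
h  = 0.10      # plate thickness, m

D = E * h**3 / (12.0 * (1.0 - nu**2))   # flexural rigidity, analogous to EI
G = E / (2.0 + 2.0 * nu)                # shear modulus (equation 50)
h_eff = 5.0 / 6.0 * h                   # effective thickness for transverse shear

# Rows correspond to {Mx, My, Mxy, Qx, Qy}, columns to the curvature/shear vector.
C = np.array([
    [D,       nu * D,  0.0,                 0.0,       0.0],
    [nu * D,  D,       0.0,                 0.0,       0.0],
    [0.0,     0.0,     D * (1 - nu) / 2.0,  0.0,       0.0],
    [0.0,     0.0,     0.0,                 G * h_eff, 0.0],
    [0.0,     0.0,     0.0,                 0.0,       G * h_eff],
])
print(np.round(C, 1))
```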
A shape function is considered a restriction of a basis function to an individual element.  These unique basis functions are related to the geometry of the meshed element; either through a vertex, edge, face or the whole element in simple geometries.   Constructing a basis function in terms of a shape function eliminates the need to know specific geometric information about overall element layout and connectivity.  Basis functions are constructed by combining the shape functions of the neighboring elements; this ensures connectivity and continuity through the system.  2D elements preserve edge continuity while 3D elements require face continuity as well.    Placing a node at the element’s vertices ensures a continuous basis.  The neighboring edges share the same nodal values; the edges are not collinear and by definition the shape functions from both edges are determined by the same two nodal values at their shared vertex; hence, the basis is continuous.  When constructing shape functions, the objective is to construct a p-th degree polynomial approximation.  The resulting polynomial must be linear, unique, and complete.  In two-dimensions, Pascal’s triangle is used to construct shape functions.  These  29piecewise, linear polynomials are referred to as Langrangian Shape Functions.  Including all of the terms in the polynomial ensures isotropy in that the degree of the shape function is retained under coordinate translation and rotation.  The shape functions described above take the form: (,) = (,) = (,)  53 where  are unique coefficients and where (,) = [1,,,,,,… ,]  54 Quadratic shape functions have the form:  =  +  +  +  +  +   55 where the six coefficients  …  = 1. . .6 are determined uniquely by the values of the shape functions of the three nodes on a given edge.  The values of the shape functions on shared edges are determined by the same nodal values in neighboring elements.  Constructing Langrangian approximations, though straight forward to implement, can be algebraically complicated.  To reduce the complexity, an element can be transformed into a canonical or parametric element.  This coordinate transformation is achieved by mapping an arbitrary element from the (x,y) plane to a canonical element in the (,) plane.  The ability to solve any problem using arbitrary coordinate transformations facilitates the straightforward formulation and solution of differential equations using these parametric elements.    302.5.5 Parametric Finite Elements Isoparametric mapping permits quadrilateral elements to have non-rectangular shapes.  This technique uses reference coordinates to map an arbitrary physical element into a “reference element.”  The mapping of a 2-dimensional four node quadrilateral element to a canonical element is shown below in Figure 3.  Figure 3.  Coordinate mapping from x,y to ξ,ɳ  (Flaherty E, 2014) This is accomplished with coordinate transformation using shape functions that interpolate both the displacement field and the element’s geometry.  The displacement of a point within the element can be expressed in terms of the nodal dof and the shape functions [] which are also functions of the reference coordinates.  The global coordinates of a point within the element can be expressed as shape functions ; these are also functions of the reference coordinates.    An element is called isoparametric if  is identical to [].  If  is of lower degree than[], then the element is called subparametric.  If  is a higher degree than [], then the  31element is superparametric.  
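As a concrete illustration of the mapping in Figure 3 and the bilinear shape functions of Table 2, the sketch below (Python, with invented nodal coordinates) evaluates Ni(ξ,ɳ) for a four-node quadrilateral, maps a reference point to physical coordinates, and forms the Jacobian that the derivation in the next section relies on.

```python
import numpy as np

def shape_functions_q4(xi, eta):
    """Bilinear shape functions of the four-node quadrilateral (Table 2)."""
    N = 0.25 * np.array([(1 - xi) * (1 - eta),
                         (1 + xi) * (1 - eta),
                         (1 + xi) * (1 + eta),
                         (1 - xi) * (1 + eta)])
    # Derivatives with respect to the reference coordinates xi and eta.
    dN = 0.25 * np.array([[-(1 - eta), -(1 - xi)],
                          [ (1 - eta), -(1 + xi)],
                          [ (1 + eta),  (1 + xi)],
                          [-(1 + eta),  (1 - xi)]])
    return N, dN

# Illustrative nodal coordinates of a mildly distorted quadrilateral (counter-clockwise).
xy = np.array([[0.0, 0.0],
               [2.0, 0.1],
               [2.2, 1.0],
               [0.1, 1.2]])

N, dN = shape_functions_q4(0.3, -0.5)
x_point = N @ xy             # isoparametric mapping: x = sum of Ni * xi
J = dN.T @ xy                # 2x2 Jacobian of the mapping (equation 59)
print(x_point)
print(J, np.linalg.det(J))   # the determinant scales dV in the stiffness integral
```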
Isoparametric elements were incorporated in Rts for this thesis.  When using any form of parametric elements, the strain-displacement matrix [] is not straightforward to assemble.  In this case, it involves gradients and  is not simply a constant multiplied with a differential.  To accomplish this, a function  = (,) is defined and we use the chain rule to differentiate this with respect to x and y as follows: =+  56  =+  57 or as: ,, = [],,  58 where [] is defined as the Jacobian such that: []= , ,, , =⎣⎢⎢⎢⎢⎡ ⎦⎥⎥⎥⎥⎤  59 and where n is equal to the number of nodes on the element.  The Jacobian contains all first-order partial derivatives of the shape functions of an element.  Equation 58 is reformulated to:   ,, = Г ГГ Г ,,  60 where [Г] equals the Jacobian inverse: [Г]= [] =⋅⋅ − −   61  32The above transformation equations apply to plane elements that have any number of nodes. By definition, we define the displacement over a Mindlin element by shape function interpolation.  As previously noted, there are three fields necessary to completely describe the deformation.  The lateral displacements and rotation fields are given by:  = ∑  0 00  00 0     or   {u} = [N]{d} 62 The shape functions Ni = N i (ξ,ɳ)  depend on the number of nodes in the element, they are given in Table 2 below for a quadrilateral element of up to nine nodes.   Table 2.  Shape functions of a plane quadrilateral from 4 to 9 nodes (Cook, Malkus, Plesha, & Witt, 2002) Ni = N i (,) Include only if node i is present in the element i=5 i=6 i=7 i=8 i=9  =14(1 − )(1 − ) −12   −12 −14  =14(1 + )(1 − ) −12 −12   −14  =14(1 + )(1 + )  −12 −12  −14  =14(1 − )(1 + )   −12 −12 −14  =12(1 − )(1 − )     −14  =12(1 + )(1 − )     −12  =12(1 − )(1 + )     −12  =12(1 − )(1 − )     −12  = (1 − )(1 − )       33 Nodes are placed at an elements vertices and along the midpoints.  If present, node 9 is located at  =   = 0.  The two plate quadrilateral elements implemented in Rts are shown in Figure 4 Below.  Figure 4.  Four node and nine node quadrilateral elements.   The bilinear 4 node Mindlin (Mind4) is shown on the left while the 9 node (Mind9) Lagrange element is shown on the right. As introduced above in equation 43, the stiffness matrix is assembled using the strain-displacement interpolation matrix [].  Every element type has a unique strain-displacement interpolation matrix.  To assemble the strain-displacement matrix for a Mindlin plate element, we must account for all three displacement fields: w(x,y), ψx, and ψy.  Again, these are the out-of-plane displacements and the rotations at the plate’s midsurface and surface respectively.        34Here we define the displacement {}= [  ] and curvature {} as: {}=⎩⎪⎨⎪⎧,,, + , − , − , ⎭⎪⎬⎪⎫ = []{}= [] 63 In equation 63, the displacement vector {}  is approximated by the Mindlin shape functions  [] to give: []= [][]= ∇[]  64 where  is the derivative operator matrix known as the “nabla” operator ∇. 
$$[\partial] = \begin{bmatrix} 0 & \partial/\partial x & 0 \\ 0 & 0 & \partial/\partial y \\ 0 & \partial/\partial y & \partial/\partial x \\ \partial/\partial x & -1 & 0 \\ \partial/\partial y & 0 & -1 \end{bmatrix} \qquad (65)$$

For an element having n nodes, we obtain [B] as

$$[B] = \begin{bmatrix} 0 & N_{1,x} & 0 & \cdots & 0 & N_{n,x} & 0 \\ 0 & 0 & N_{1,y} & \cdots & 0 & 0 & N_{n,y} \\ 0 & N_{1,y} & N_{1,x} & \cdots & 0 & N_{n,y} & N_{n,x} \\ N_{1,x} & -N_1 & 0 & \cdots & N_{n,x} & -N_n & 0 \\ N_{1,y} & 0 & -N_1 & \cdots & N_{n,y} & 0 & -N_n \end{bmatrix} \qquad (66)$$

We can now rewrite the element stiffness matrix in terms of the isoparametric coordinates as

$$[k] = \int_{V}[B]^T[D][B]\,dV \qquad (67)$$

Using the definition of the Jacobian [J] from equation 59, its determinant |J| is used to formulate the change in volume of an element:

$$dV = |J|\,d\xi\,d\eta \qquad (68)$$

Now the integral in equation 67 can be restated as

$$[k] = \int_{-1}^{1}\int_{-1}^{1}[B]^T[D][B]\,|J|\,d\xi\,d\eta \qquad (69)$$

Using Gauss quadrature numerical integration techniques, we calculate the local element stiffness matrix by summing the values at each integration point (ξi, ηj):

$$[k] = \sum_{i}\sum_{j} w_i\, w_j\, [B]^T[D][B]\,|J| \qquad (70)$$

where wi and wj are the Gauss integration weight coefficients, [B] is the strain-displacement matrix, [D] is the material properties matrix, and |J| is the Jacobian determinant.

Using the direct-stiffness method, the local element stiffness matrices are compiled into the global stiffness matrix.  Incorporating the forces acting on the system, we are left with the final form of the global stiffness relation:

$$\{F\} = [K]\{D\} \qquad (71)$$

Finally, the system of equations is solved for the displacements:

$$\{D\} = [K]^{-1}\{F\} \qquad (72)$$

The quadrilateral four-node plate element used in Rts is shown below in Figure 5 with its 12 dof.

Figure 5.  Four node quadrilateral plate element

In Rts, every node has 6 dof, though plate elements only make use of 3 dof in their stiffness assembly as shown above.

Chapter 3: ANALYTICAL GRADIENTS

Gradients are important throughout the model stream, since almost all optimization algorithms throughout the orchestrating models involve the gradients in one form or another.  In addition, the gradients provide us with sensitivity measures of the design parameters.  This information is helpful in deciding which design variables to vary to achieve the lowest overall cost with all things considered.  The gradient flow through the models is:

∂u/∂θ (FEM)  →  ∂c/∂θ (COST)  →  ∂p/∂θ (FORM)  →  ∂μ/∂θ (RISK)  →  RBDO

Figure 6.  The gradient flowchart within Rts

The overall intent is to calculate the sensitivity of the mean cost in relation to a change in the design parameters θ; in other words, how much the mean cost will change when we vary the values of the design parameters.  This information is directly used by the optimization algorithm, since minima occur at stationary points where the gradient of the objective function is equal to zero (∇F(x) = 0).  Note that for constrained problems, the local minimum may not be located at a stationary point if the stationary point does not fall within the constraint boundaries; the local minima could potentially be located on the boundary itself.  For this thesis I will focus on unconstrained optimization.

Before this thesis, Rts had an initial framework in place for calculating the sensitivities, as shown in Figure 7.

Figure 7.  Direct Differentiation Method (DDM) in Rts (Mahsuli & Haukaas, 2013)

Figure 7 outlines the gradient computational stream for a response r with respect to parameters θ1, ..., θ4.  In this case, the response depends explicitly on parameters θ3 and θ4, and implicitly, through the input responses r1 and r2, on parameters θ1, θ2, and θ3.  The sensitivities for upstream models are calculated previously and stored in each model's RResponse class.  In the current orchestrating model, the analytical expressions for the derivatives are required for all parameters.  For this example, analytical expressions for ∂r/∂r1, ∂r/∂r2, ∂r/∂θ3, and ∂r/∂θ4 are computed within the orchestrating model's class, whose input responses are r1 and r2 and whose output response is r.
The result is then inserted into the RModel DDM map; RModel is the parent class for all orchestrating models, and the variable DDMMap is one of its members.  After the orchestrating model is evaluated, the gradient "dependency" calculations are performed within the RModel class.  Here, the implicit and explicit dependencies between the DDMMap of the current orchestrating model (∂r/∂r1, ∂r/∂r2, ∂r/∂θ3, ∂r/∂θ4) and the gradient maps of the input models (∂r1/∂θ1, ∂r1/∂θ2 and ∂r2/∂θ2, ∂r2/∂θ3) are evaluated.  The calculations performed are:

$$\frac{dr}{d\theta_1} = \frac{\partial r}{\partial r_1}\cdot\frac{\partial r_1}{\partial \theta_1} \qquad (73)$$

$$\frac{dr}{d\theta_2} = \frac{\partial r}{\partial r_1}\cdot\frac{\partial r_1}{\partial \theta_2} + \frac{\partial r}{\partial r_2}\cdot\frac{\partial r_2}{\partial \theta_2} \qquad (74)$$

$$\frac{dr}{d\theta_3} = \frac{\partial r}{\partial r_2}\cdot\frac{\partial r_2}{\partial \theta_3} + \frac{\partial r}{\partial \theta_3} \qquad (75)$$

$$\frac{dr}{d\theta_4} = \frac{\partial r}{\partial \theta_4} \qquad (76)$$

The final results are then saved as a gradient map to the RResponse class of the orchestrating model, for use in models downstream.

Gradients can be calculated both numerically and analytically.  The finite difference method is a numerical method used for approximating the solutions to differential equations by employing finite difference equations.  Finite difference equations are used to replace, or more generally approximate, the derivatives of a differential equation by central, forward, or backward difference(s).  By definition, the finite difference method is not exact; it is only an approximation.  Therefore, there will always be an error, which is the difference between the approximation and the exact analytical solution.  The two main sources of error are round-off and discretization error.  Round-off errors are the loss of precision due to computer rounding of decimals, while discretization errors, or truncation errors, inherently come from the discretization of the problem's domain and the continued application of the finite difference method.

A drawback of the finite difference method is that it requires additional simulations and computations each time a design parameter is perturbed.  Also, selecting the optimal perturbation size for a set of parameters, with the sensitivity of the numerical solution in mind, can be both difficult and important (Anderson, 2000).  Additionally, when performing nodal optimization, every perturbation requires re-meshing of the FEM domain.  The disadvantage of the finite difference method is therefore that it can be computationally expensive while still incorporating errors.

Conversely, analytical gradients are exact.  They are calculated from the direct differentiation of the functions used throughout the various models.  Exact gradients are useful because they ensure consistent convergence, especially when the analysis is near the stationary point of the optimization, where accurate gradients are required.  Implementation of DDM comes with the initial cost of calculating the analytical derivatives and implementing them in the code alongside the orchestrating models.  This is not a trivial task due to the complexity of the models and their interconnectivity.  The sections below go over the DDM gradient theory, the calculations, and their logistics throughout each model.

3.1 FEM Gradients

The FEM analysis is the starting point of the model analysis stream.  The inputs into the FEM models are the design parameters θ.  The design parameters could be decision variables, random variables, responses for nodal coordinates, and/or constants.  The outputs of the FEM are the displacement responses {u}.  Recall from equation 72 that the final formulation of the FEM system of equations is

$$\{u\} = [K]^{-1}\{F\} \qquad (77)$$

We start with the derivation of ∂u/∂θ, for which the gradients of the stiffness matrix, ∂K/∂θ, and of the load vector, ∂F/∂θ, are required, as shown in the following equation:

$$\frac{\partial u}{\partial \theta} = [K]^{-1}\left(\frac{\partial F}{\partial \theta} - \frac{\partial K}{\partial \theta}\cdot u\right) \qquad (78)$$

where ∂u/∂θ is the gradient of the displacements u with respect to the parameters θ.
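As a concrete illustration of equations 77 and 78, the following sketch computes the displacement sensitivity for a single parameter θ once K, ∂K/∂θ, ∂F/∂θ, and u are available.  It is illustrative only: the 2×2 system is a placeholder, and the Eigen library is used for the linear algebra purely for brevity, whereas Rts relies on its own assembler and in-house solver.

// Illustrative sketch of the DDM displacement sensitivity (equation 78):
//   du/dtheta = K^{-1} ( dF/dtheta - dK/dtheta * u )
#include <Eigen/Dense>
#include <iostream>

int main() {
    // Small placeholder system standing in for an assembled FE model
    Eigen::MatrixXd K(2, 2), dKdTheta(2, 2);
    Eigen::VectorXd F(2), dFdTheta(2);
    K        <<  4.0, -1.0,
                -1.0,  3.0;
    dKdTheta <<  0.8, -0.2,      // assembled dK/dtheta, e.g. for a thickness parameter
                -0.2,  0.6;
    F        << 10.0,  0.0;
    dFdTheta <<  0.0,  0.0;      // load independent of theta in this example

    // Ordinary solution {u} = [K]^{-1}{F}  (equation 77)
    Eigen::VectorXd u = K.ldlt().solve(F);

    // Bracketed term of equation 78, then the sensitivity solve with the same K
    Eigen::VectorXd pseudoLoad = dFdTheta - dKdTheta * u;
    Eigen::VectorXd dudTheta   = K.ldlt().solve(pseudoLoad);

    std::cout << "u =\n" << u << "\ndu/dtheta =\n" << dudTheta << std::endl;
    return 0;
}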
The expression within the brackets is termed the "pseudo-load."  Recall that the stiffness matrix from equation 70 is

$$[k] = \sum_{i}\sum_{j} c\,[B]^T[D][B] \qquad (79)$$

where $c = w_i\, w_j\, |J|$.  The factor c can be considered a constant for material sensitivity (E, ν, h, etc.), though it is NOT a constant for nodal or shape sensitivity; this is because c contains the Jacobian, which is a function of the nodal coordinates.  Therefore, without shape sensitivity (when not considering nodal coordinate variations), only material sensitivity is considered.  This simplifies the sensitivity expression to

$$\frac{\partial k}{\partial \theta} = \sum_{i}\sum_{j} c\,\frac{\partial}{\partial \theta}\Big([B]^T[D][B]\Big) \qquad (80)$$

Let

$$\{a\} = [D][B] \qquad (81)$$

and let

$$\{b\} = [B]^T[D] \qquad (82)$$

Now

$$\frac{\partial}{\partial \theta}\Big([B]^T[D][B]\Big) = \frac{\partial [B]^T}{\partial \theta}\{a\} + [B]^T\frac{\partial [D]}{\partial \theta}[B] + \{b\}\frac{\partial [B]}{\partial \theta} \qquad (83)$$

which gives

$$\frac{\partial k}{\partial \theta} = \sum_{i}\sum_{j} c\left(\frac{\partial [B]^T}{\partial \theta}[D][B] + [B]^T\frac{\partial [D]}{\partial \theta}[B] + [B]^T[D]\frac{\partial [B]}{\partial \theta}\right) \qquad (84)$$

For material sensitivities, the derivative of [B] equals zero, since the strain-displacement matrix is not a function of any material properties.  This simplifies to the final expression for material sensitivities:

$$\frac{\partial k}{\partial \theta} = \sum_{i}\sum_{j} c\,[B]^T\frac{\partial [D]}{\partial \theta}[B] \qquad (85)$$

Therefore, for material sensitivity, the gradient of the stiffness matrix is derived solely from the strain-displacement and material matrices.  For shape sensitivity, equation 84 is extended to

$$\frac{\partial k}{\partial \theta} = \sum_{i}\sum_{j} w_i w_j\left(|J|\,\mathrm{Tr}\!\left([J]^{-1}\frac{\partial [J]}{\partial \theta}\right)[B]^T[D][B] + |J|\,\frac{\partial [B]^T}{\partial \theta}[D][B] + |J|\,[B]^T[D]\frac{\partial [B]}{\partial \theta}\right) \qquad (86)$$

where

$$\frac{\partial |J|}{\partial \theta} = |J|\,\mathrm{Tr}\!\left([J]^{-1}\frac{\partial [J]}{\partial \theta}\right) \qquad (87)$$

The derivative of the Jacobian is incorporated, as well as the derivative of the strain-displacement matrix ∂[B]/∂θ.  Recall that the strain-displacement matrix from equation 66 is

$$[B] = \begin{bmatrix} 0 & N_{1,x} & 0 & \cdots & 0 & N_{n,x} & 0 \\ 0 & 0 & N_{1,y} & \cdots & 0 & 0 & N_{n,y} \\ 0 & N_{1,y} & N_{1,x} & \cdots & 0 & N_{n,y} & N_{n,x} \\ N_{1,x} & -N_1 & 0 & \cdots & N_{n,x} & -N_n & 0 \\ N_{1,y} & 0 & -N_1 & \cdots & N_{n,y} & 0 & -N_n \end{bmatrix}$$

The [B] matrix already involves the derivatives of the shape functions.  For nodal sensitivities, we require the second derivatives, which depend on the parameter that we are taking the derivative with respect to.  The formula for the derivative of the strain-displacement matrix is

$$\frac{\partial [B]}{\partial \theta} = \frac{\partial\,\nabla[N]}{\partial \theta} \qquad (88)$$

In the case of nodal sensitivities for the 2-D plane element, the parameters θ can be either the x or y nodal coordinates; that is, the nodal coordinates depend on a geometric parameter that we need sensitivities for, such as the plate width or length, for example.  Recall that the Jacobian is given as

$$[J] = \begin{bmatrix} x_{,\xi} & y_{,\xi} \\ x_{,\eta} & y_{,\eta} \end{bmatrix} = \begin{bmatrix} \sum_{i}\dfrac{\partial N_i}{\partial \xi} x_i & \sum_{i}\dfrac{\partial N_i}{\partial \xi} y_i \\ \sum_{i}\dfrac{\partial N_i}{\partial \eta} x_i & \sum_{i}\dfrac{\partial N_i}{\partial \eta} y_i \end{bmatrix} \qquad (89)$$

The derivative of the Jacobian is therefore

$$\frac{\partial [J]}{\partial \theta} = \begin{bmatrix} \sum_{i}\dfrac{\partial N_i}{\partial \xi}\dfrac{\partial x_i}{\partial \theta} & \sum_{i}\dfrac{\partial N_i}{\partial \xi}\dfrac{\partial y_i}{\partial \theta} \\ \sum_{i}\dfrac{\partial N_i}{\partial \eta}\dfrac{\partial x_i}{\partial \theta} & \sum_{i}\dfrac{\partial N_i}{\partial \eta}\dfrac{\partial y_i}{\partial \theta} \end{bmatrix} \qquad (90)$$

From theorems of matrix calculus, it can be shown that the derivative of the inverse is

$$\frac{\partial [J]^{-1}}{\partial \theta} = -[J]^{-1}\cdot\frac{\partial [J]}{\partial \theta}\cdot[J]^{-1} \qquad (91)$$

The derivative of [B] is made up of the components of the differentiated inverse Jacobian, to yield

$$\frac{\partial [B]}{\partial \theta} = \begin{bmatrix} 0 & \frac{\partial N_{1,x}}{\partial \theta} & 0 & \cdots & 0 & \frac{\partial N_{n,x}}{\partial \theta} & 0 \\ 0 & 0 & \frac{\partial N_{1,y}}{\partial \theta} & \cdots & 0 & 0 & \frac{\partial N_{n,y}}{\partial \theta} \\ 0 & \frac{\partial N_{1,y}}{\partial \theta} & \frac{\partial N_{1,x}}{\partial \theta} & \cdots & 0 & \frac{\partial N_{n,y}}{\partial \theta} & \frac{\partial N_{n,x}}{\partial \theta} \\ \frac{\partial N_{1,x}}{\partial \theta} & 0 & 0 & \cdots & \frac{\partial N_{n,x}}{\partial \theta} & 0 & 0 \\ \frac{\partial N_{1,y}}{\partial \theta} & 0 & 0 & \cdots & \frac{\partial N_{n,y}}{\partial \theta} & 0 & 0 \end{bmatrix} \qquad (92)$$

where

$$\begin{Bmatrix} \partial N_{i,x}/\partial \theta \\ \partial N_{i,y}/\partial \theta \end{Bmatrix} = \frac{\partial [J]^{-1}}{\partial \theta}\cdot\begin{Bmatrix} N_{i,\xi} \\ N_{i,\eta} \end{Bmatrix} \qquad (93)$$
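Before moving on to the cost model, the sketch below illustrates the material-sensitivity integrand of equation 85 at a single integration point.  The constitutive matrix used here is the standard Mindlin plate [D] (bending plus transverse shear with a shear correction factor of 5/6); this form, the placeholder [B] values for a single node, and the use of Eigen are assumptions made for the illustration and are not taken from the Rts implementation.

// Illustrative sketch of the material-sensitivity integrand in equation 85:
// B^T (dD/dtheta) B, with dD/dtheta formed analytically for E and h.
#include <Eigen/Dense>
#include <iostream>

Eigen::MatrixXd mindlinD(double E, double nu, double h) {
    Eigen::MatrixXd D = Eigen::MatrixXd::Zero(5, 5);
    double Db = E * h * h * h / (12.0 * (1.0 - nu * nu));   // bending stiffness
    double G  = E / (2.0 * (1.0 + nu));
    double Ds = (5.0 / 6.0) * G * h;                        // shear stiffness
    D(0,0) = Db;       D(0,1) = Db * nu;
    D(1,0) = Db * nu;  D(1,1) = Db;
    D(2,2) = Db * (1.0 - nu) / 2.0;
    D(3,3) = Ds;       D(4,4) = Ds;
    return D;
}

int main() {
    double E = 13.0e9, nu = 0.25, h = 0.2;
    Eigen::MatrixXd D = mindlinD(E, nu, h);

    // D is linear in E, so dD/dE = D / E exactly.
    Eigen::MatrixXd dDdE = D / E;

    // Thickness derivative: the bending block scales with h^3, the shear block with h.
    Eigen::MatrixXd dDdh = D;
    dDdh.topLeftCorner(3, 3)     *= 3.0 / h;
    dDdh.bottomRightCorner(2, 2) *= 1.0 / h;

    // Placeholder strain-displacement matrix for one node (5 x 3), standing in
    // for [B] of equation 66 evaluated at one integration point.
    Eigen::MatrixXd B(5, 3);
    B << 0.0,  0.1,  0.0,
         0.0,  0.0,  0.2,
         0.0,  0.2,  0.1,
         0.1, -0.3,  0.0,
         0.2,  0.0, -0.3;

    // Integrand of equation 85 for the two material parameters
    Eigen::MatrixXd dkdE = B.transpose() * dDdE * B;
    Eigen::MatrixXd dkdh = B.transpose() * dDdh * B;
    std::cout << "B^T dD/dE B =\n" << dkdE << "\n";
    std::cout << "B^T dD/dh B =\n" << dkdh << std::endl;
    return 0;
}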
3.2 COST Model Gradients

The next step in the model stream is the calculation of the cost model gradients.  The cost models may consist of construction, damage, and degradation models.  The input to the cost model consists of the displacement response output from FEM along with other random variables that represent cost measures.  The cost model can be a simple algebraic expression or a complex relationship that is a function of the displacement, design variables, and/or time.  In any case, the output of the cost model is related to the displacement output from the FEM model, which is in turn related to the design variables.  In this instance, the cost is given by

$$c = c_u\big(u(\theta)\big) + c_\theta(\theta) \qquad (94)$$

where $c_u(u(\theta))$ is a function that assigns a monetary value to a given displacement $u(\theta)$ and $c_\theta(\theta)$ is a function of a design variable (density or beam geometry, for example).  The gradient of the cost model is hence given by

$$\frac{dc}{d\theta} = \frac{\partial c}{\partial u}\cdot\frac{\partial u}{\partial \theta} + \frac{\partial c}{\partial \theta} \qquad (95)$$

The gradients $\partial c/\partial u$ and $\partial c/\partial \theta$ must be solved analytically within the cost model.  Cost and damage models can be highly complex; as such, care must be taken so that the gradients are smooth and continuous, i.e., there are no large jumps in the costs that cause "kinks" or discontinuities within the gradient curve.

3.3 FORM Reliability Model Gradients

The gradient of the limit-state function is necessary for the gradient-based optimization methods used in FORM.  The gradient of the limit-state function is given by

$$\nabla G(\mathbf{y}) = \frac{\partial G}{\partial \mathbf{y}} = \frac{\partial g}{\partial c}\cdot\frac{\partial c}{\partial \mathbf{x}}\cdot\frac{\partial \mathbf{x}}{\partial \mathbf{y}} \qquad (96)$$

where $\partial g/\partial c$ is the derivative of the limit-state algebraic expression, which is written in terms of the response $c(\mathbf{x})$; $\partial c/\partial \mathbf{x}$ is the response gradient vector calculated from the FEM and cost models; and $\partial \mathbf{x}/\partial \mathbf{y}$ is the inverse of the Jacobian matrix of the probability transformation, which is composed of the components

$$\frac{\partial y_i}{\partial x_i} = \frac{f(x_i)}{\varphi(y_i)} \qquad (97)$$

where $F$ and $f$ are the CDF and PDF in original space and $\Phi$ and $\varphi$ are the corresponding standard-normal CDF and PDF.  $\mathbf{x}$ is the vector of random variables in original space, while $\mathbf{y}$ is its transformation into standard-normal space.  The input into the reliability model is the cost response.  The output is the probability of exceeding a failure threshold, in this case the total cost threshold $C_t$.  The derivatives of $\beta$ and $p$ with respect to the parameters are valuable because they are used in the gradient-based optimization algorithms; analytical derivatives are particularly useful since they are exact and offer faster and more reliable convergence.  In addition, the gradients are useful in themselves as "importance measures."  In FORM, the alpha vector serves as an importance vector, where the absolute value of each individual component is a relative indicator of its contribution to the total variance.

Recall that the limit-state function in original space is given by

$$g(\mathbf{x}) = C_t - c(\mathbf{x}) \qquad (98)$$

and in standard-normal space by

$$G(\mathbf{y}) = C_t - c(\mathbf{y}) \qquad (99)$$

In FORM, the limit-state function is linearized by a first-order Taylor approximation to produce

$$G(\mathbf{y}) \approx G(\mathbf{y}^*) + \nabla G(\mathbf{y}^*)\cdot(\mathbf{y} - \mathbf{y}^*) \qquad (100)$$

where $\mathbf{y}$ is the random variable vector in standard-normal space and $\mathbf{y}^*$ contains the coordinates of the design point.  At the design point, the gradient of the limit-state function is found by using the chain rule:

$$\nabla G(\mathbf{y}^*) = \left.\frac{\partial G}{\partial \mathbf{y}}\right|_{\mathbf{y}^*} = \frac{\partial g}{\partial c}\cdot\left.\frac{\partial c}{\partial \mathbf{x}}\right|_{\mathbf{x}^*}\cdot\left.\frac{\partial \mathbf{x}}{\partial \mathbf{y}}\right|_{\mathbf{y}^*} \qquad (101)$$

Also, recall from equation 9 that the probability $p$ is

$$p = P\big(g(\mathbf{x}) \le 0\big) = \Phi(-\beta) \qquad (102)$$

In other words, $p$ is the probability that the cost $c(\mathbf{x})$ equals or exceeds a threshold $C_t$, i.e., that the limit-state function is less than or equal to zero.  The vector $\mathbf{x}$ consists of the random variables among the design parameters $\theta$ in original space.  It follows that the derivative of the exceedance probability with respect to the cost threshold is

$$\frac{\partial p}{\partial C_t} = \frac{\partial \Phi(-\beta)}{\partial C_t} = -\varphi(\beta)\cdot\frac{\partial \beta}{\partial C_t} \qquad (103)$$

$\partial \beta/\partial C_t$ can be calculated by realizing that the reliability index $\beta$ is directly related to the design point values $\mathbf{y}^*$, such that

$$\beta = \sqrt{\mathbf{y}^{*T}\mathbf{y}^*} \qquad (104)$$

The reliability index can be stated as $\beta = \hat{\boldsymbol{\alpha}}^T\mathbf{y}^*$, the derivative of which is

$$\frac{\partial \beta}{\partial \mathbf{y}^*} = \hat{\boldsymbol{\alpha}} = -\frac{\nabla G(\mathbf{y}^*)}{\lVert\nabla G(\mathbf{y}^*)\rVert} \qquad (105)$$

Hence, $\partial \beta/\partial C_t$ is given by the following expression:

$$\frac{\partial \beta}{\partial C_t} = -\frac{\nabla G(\mathbf{y}^*)^T}{\lVert\nabla G(\mathbf{y}^*)\rVert}\cdot\frac{\partial \mathbf{y}^*}{\partial C_t} \qquad (106)$$

We must now differentiate the limit-state function at the design point $\mathbf{y}^*$, where $G = 0$:

$$\frac{dG}{dC_t} = \frac{\partial G}{\partial \mathbf{y}^*}\cdot\frac{\partial \mathbf{y}^*}{\partial C_t} + \frac{\partial G}{\partial C_t} = \nabla G(\mathbf{y}^*)^T\frac{\partial \mathbf{y}^*}{\partial C_t} + \frac{\partial G}{\partial C_t} = 0 \qquad (107)$$

Solving at the design point gives the conditional derivative

$$\nabla G(\mathbf{y}^*)^T\frac{\partial \mathbf{y}^*}{\partial C_t} = -\left.\frac{\partial G}{\partial C_t}\right|_{\mathbf{y}^*} \qquad (108)$$

Substituting the conditional derivative into equation 106 gives

$$\frac{\partial \beta}{\partial C_t} = \frac{1}{\lVert\nabla G(\mathbf{y}^*)\rVert}\left.\frac{\partial G}{\partial C_t}\right|_{\mathbf{y}^*} \qquad (109)$$

It can be proven that the derivatives of the reliability index at the design point in standard-normal and in original space are related as follows:

$$\frac{\partial \beta}{\partial C_t} = \frac{1}{\lVert\nabla G(\mathbf{y}^*)\rVert}\left.\frac{\partial G}{\partial C_t}\right|_{\mathbf{y}^*} = \frac{1}{\lVert\nabla G(\mathbf{y}^*)\rVert}\left.\frac{\partial g}{\partial C_t}\right|_{\mathbf{x}^*} \qquad (110)$$

$\partial g/\partial C_t$ is the derivative of the original limit-state function ($g(\mathbf{x}) = C_t - c(\mathbf{x})$), which is equal to 1.  Substituting equation 110 into 103 gives the reliability gradient $\partial p/\partial C_t$ as

$$\frac{\partial p}{\partial C_t} = \frac{-\varphi(\beta)}{\lVert\nabla G(\mathbf{y}^*)\rVert} \qquad (111)$$

If we want the derivative of the reliability index with respect to the design parameters, the following expression is derived:

$$\frac{\partial \beta}{\partial \theta} = -\frac{\nabla G(\mathbf{y}^*)^T}{\lVert\nabla G(\mathbf{y}^*)\rVert}\cdot\frac{\partial \mathbf{y}^*}{\partial \theta} \qquad (112)$$

In other words, this relates to how much the distance from the failure plane to the origin changes in relation to changes in the design variables.
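The closed-form nature of this result is easy to verify numerically.  The short listing below evaluates equations 102 and 111 for assumed values of the reliability index and of the limit-state gradient norm at the design point; both numbers are placeholders.

// Numerical illustration of equations 102 and 111: given beta and
// ||grad G(y*)|| from a converged FORM run, compute p and dp/dC_t.
#include <cmath>
#include <cstdio>

int main() {
    const double pi       = 3.14159265358979323846;
    const double beta     = 1.8;     // placeholder reliability index
    const double gradNorm = 250.0;   // placeholder ||grad G(y*)||

    // p = Phi(-beta)   (equation 102)
    const double p = 0.5 * std::erfc(beta / std::sqrt(2.0));

    // phi(beta): standard-normal PDF
    const double phi = std::exp(-0.5 * beta * beta) / std::sqrt(2.0 * pi);

    // dp/dC_t = -phi(beta) / ||grad G(y*)||   (equation 111)
    const double dpdCt = -phi / gradNorm;

    std::printf("p = %.5f, dp/dC_t = %.3e per dollar\n", p, dpdCt);
    return 0;
}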
3.4 RISK Model Gradients

The risk model takes the FORM model probability of exceedance $p$ as an input and outputs the RISK measure, the mean cost, to the optimization model.  The FORM output $P\big(c(\mathbf{x}) \ge C_t\big)$ yields a single point on the Complementary Cumulative Distribution Function (CCDF) curve for the response $c(\mathbf{x})$.  A method of acquiring the mean of the cost $c(\mathbf{x})$ is to calculate the area under this curve.  In this case, the mean is calculated by numerically integrating the CCDF values using the trapezoidal rule, formulated as

$$\mu_c = \frac{1}{2}\left[(1 + p_1)\,C_1 + \sum_{i}\big(C_{i+1} - C_i\big)\big(p_i + p_{i+1}\big)\right] \qquad (113)$$

The gradient required for optimization is the sensitivity of the mean with regard to any change in the parameters.  Since $\mu_c$ is a function of both the thresholds $C_i$ and the probabilities $p_i$, the chain rule is used to derive the expression for the risk sensitivities:

$$\frac{\partial \mu_c}{\partial \theta} = \frac{1}{2}\left[(1 + p_1)\frac{\partial C_1}{\partial \theta} + C_1\frac{\partial p_1}{\partial \theta} + \sum_{i}\left(\Big(\frac{\partial C_{i+1}}{\partial \theta} - \frac{\partial C_i}{\partial \theta}\Big)\big(p_i + p_{i+1}\big) + \big(C_{i+1} - C_i\big)\Big(\frac{\partial p_i}{\partial \theta} + \frac{\partial p_{i+1}}{\partial \theta}\Big)\right)\right] \qquad (114)$$

where the $\partial C_i/\partial \theta$ are the gradients from the COST model evaluated at the design point, and the $\partial p_i/\partial \theta$ follow from the FORM gradient at the design point derived above in equation 111, combined with the cost gradient through the chain rule.  The final expression for the RISK gradient is therefore

$$\frac{\partial \mu_c}{\partial \theta} = \frac{1}{2}\left[(1 + p_1)\frac{\partial C_1}{\partial \theta} + C_1\frac{\partial p_1}{\partial \theta} + \sum_{i}\left(\Big(\frac{\partial C_{i+1}}{\partial \theta} - \frac{\partial C_i}{\partial \theta}\Big)\big(p_i + p_{i+1}\big) + \big(C_{i+1} - C_i\big)\Big(\frac{\partial p_i}{\partial \theta} + \frac{\partial p_{i+1}}{\partial \theta}\Big)\right)\right] \qquad (115)$$

3.5 RBDO Gradients

The analytical gradients used in the optimization analysis can be calculated using the Direct and Adjoint Differentiation Methods, DDM and ADM respectively.  Currently, only the DDM method is employed in Rts, but the related ADM is given for completeness.  The DDM and ADM methods are explained in the sections below.

3.5.1 Direct Differentiation Method – DDM

The sensitivity of an objective or constraint function $f = \hat{g}\big(\theta, u(\theta)\big)$ is given as

$$\frac{df(\theta)}{d\theta} = \frac{\partial \hat{g}(\theta, u)}{\partial \theta} + \frac{\partial \hat{g}(\theta, u)}{\partial u}\cdot[K(\theta)]^{-1}\cdot\left(\frac{\partial F(\theta)}{\partial \theta} - \frac{\partial K(\theta)}{\partial \theta}\cdot u(\theta)\right) \qquad (116)$$

Therefore, at each design iteration point, to find the gradients we need to solve the "pseudo-load" system for each parameter $\theta_i$ ($i = 1 \ldots n$), the result of which is then inserted into each constraint function $g_j$ ($j = 1 \ldots m$).  The total number of calculations is then $n \times m$, dominated by the $n$ pseudo-load solutions.

3.5.2 Adjoint Method – ADM

Using the adjoint method, we begin with

$$[K(\theta)]^T\boldsymbol{\lambda} = \frac{\partial \hat{g}}{\partial u} \qquad (117)$$

Solving for $\boldsymbol{\lambda}$ gives

$$\boldsymbol{\lambda} = [K(\theta)]^{-T}\frac{\partial \hat{g}}{\partial u} \qquad (118)$$

The sensitivity is then given as

$$\frac{df(\theta)}{d\theta} = \frac{\partial \hat{g}}{\partial \theta} + \boldsymbol{\lambda}^T\left(\frac{\partial F(\theta)}{\partial \theta} - \frac{\partial K(\theta)}{\partial \theta}\cdot u(\theta)\right) \qquad (119)$$

Using this approach, $\boldsymbol{\lambda}$ is solved for each constraint function $g_j$ ($j = 1 \ldots m$) and then inserted into the final sensitivity equation $n$ times; in this case, the total number of calculations performed is $m \times n$, dominated by the $m$ adjoint solutions.  To conclude, the adjoint method is more efficient if the number of design variables exceeds the number of constraints.  Conversely, if the constraints outnumber the design variables, the direct differentiation method is more effective (Christensen & Klarbring, 2009).

Chapter 4: IMPLEMENTATION AND EXAMPLES IN RTS

This chapter outlines the implementation of the orchestrating models and their respective DDM gradient calculations in Rts for performing reliability-based optimization.  The model architecture for a typical analysis is given in Figure 8.

Figure 8.  Optimization software architecture in Rts

The RBDO model is the high-level orchestrating model that calls the run-analysis method in RFunction, starting with the downstream FEM model first.

4.1 FEM Implementation

At the FEM level, parameters consist of material properties, section properties, coordinate properties, or loads and hazards.  Table 3 below provides typical parameters used in Rts.
Table 3.  Examples of typical FEM parameters used in Rts

Material Properties:    Young's modulus, Poisson's ratio, density
Section Properties:     member thickness, member length/width, moment of inertia
Points or Coordinates:  any point or node on the structure
Force Components:       parameters or force components that contribute to the response

Any parameter that contributes to the response can be analytically differentiated.  The response from the FEM model is always a displacement, which can be translated into a stress, force, or strain.  The software architecture is set up so that the analytical derivative equations are programmed in at the element level.  The stiffness matrix assembly routine then compiles the differentiated global sensitivity-stiffness matrix corresponding to each parameter.  The sensitivity values for all parameters are solved and then saved in the RNode class for each node.  When the user selects a response, the gradient values from that response's node are then utilized for calculating gradients downstream of FEM.  For the thesis examples, the parameters used in FEM were

$$\theta = \{E,\ h,\ w,\ P\}$$

where E, h, w, and P are the Young's modulus, thickness, width, and a point load, respectively.

4.2 COST Implementation

In this example implementation, the cost model involves only a simple algebraic expression:

$$c(\theta) = \theta_{CLT}\cdot w\cdot h\cdot L + \theta_{deflection}\cdot u(\theta) \qquad (120)$$

where $\theta_{CLT}$ is a random variable that represents the cost of purchasing material and constructing a timber panel in \$/m3, $w$ is the width of the panel in metres, $h$ is the thickness of the panel in metres, $L$ is the length of the panel in metres, $\theta_{deflection}$ is a random variable that represents the cost of damage due to the maximum deflection of the beam under a point load at its tip in \$/m, and $u(\theta)$ is the deflection of the tip of the beam.  The gradients inserted in the RCostModel DDMMap are

$$\frac{\partial c}{\partial \theta_{CLT}} = w\cdot h\cdot L \qquad (121)$$

$$\frac{\partial c}{\partial w} = \theta_{CLT}\cdot h\cdot L \qquad (122)$$

$$\frac{\partial c}{\partial h} = \theta_{CLT}\cdot w\cdot L \qquad (123)$$

$$\frac{\partial c}{\partial L} = \theta_{CLT}\cdot w\cdot h \qquad (124)$$

$$\frac{\partial c}{\partial \theta_{deflection}} = u \qquad (125)$$

$$\frac{\partial c}{\partial u} = \theta_{deflection} \qquad (126)$$

The COST DDMMap dependencies are then evaluated in RModel.  From FEM, the gradients are

$$\frac{\partial u}{\partial \theta} = \left\{\frac{\partial u}{\partial E},\ \frac{\partial u}{\partial h},\ \frac{\partial u}{\partial w},\ \frac{\partial u}{\partial P}\right\} \qquad (127)$$

Since the only parameters shared by the input and cost models are the thickness and the beam width, the dependency calculations are

$$\frac{dc}{dh} = \frac{\partial c}{\partial h} + \frac{\partial c}{\partial u}\frac{\partial u}{\partial h}, \qquad \frac{dc}{dw} = \frac{\partial c}{\partial w} + \frac{\partial c}{\partial u}\frac{\partial u}{\partial w} \qquad (128)$$

giving the final cost gradient vector as

$$\frac{dc}{d\theta} = \left\{\frac{dc}{dE},\ \frac{dc}{dh},\ \frac{dc}{dw},\ \frac{dc}{dP}\right\} \qquad (129)$$

4.3 RISK Implementation

The RISK model incorporates the FORM and sampling models.  For the gradient calculations, the DDM map of each FORM analysis at each cost threshold is saved at convergence and utilized in calculating the final RISK gradient.  Unlike the previous dependency calculations, the RISK model gradient calculations are done within the model itself; they are then directly saved into the RResponse class from within RISK.  The dependency calculation within RModel is avoided in this step, since RModel would only see the final COST response gradients from RResponse that are left over from the last FORM iteration at the final threshold.

4.3.1 FORM and COST Gradients

Within RISK, at each cost threshold, the COST model gradients are saved at FORM convergence along with the FORM gradient described below.  The FORM gradient utilized is the one derived in Section 3.3, equation 111:

$$\frac{\partial p}{\partial C_t} = \frac{-\varphi(\beta)}{\lVert\nabla G(\mathbf{y}^*)\rVert} \qquad (130)$$

The COST gradients are saved at the FORM design point, i.e., when FORM converges.  At that instant, the realization of the cost function equals the cost threshold.  Mathematically, this is simply

$$\frac{\partial C_t}{\partial \theta} = \left.\frac{\partial c}{\partial \theta}\right|_{\mathbf{x}^*} \qquad (131)$$

since $c = C_t$ there.  These gradients are used in RISK to calculate the sensitivity of the mean.
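The listing below is a schematic of this step and of the gradient bookkeeping formalized in the next subsection: the mean cost is accumulated trapezoid by trapezoid as in equation 113, and its sensitivity follows from the product rule applied to each term, as in equation 132.  The threshold, probability, and gradient values are placeholders, and the loop structure reflects the reconstruction of the equations given above rather than the actual Rts classes.

// Schematic of the RISK model's trapezoidal integration: mean cost from
// (threshold, exceedance probability) pairs, plus sensitivity accumulation
// for a single parameter theta from the stored per-threshold gradients.
#include <cstdio>
#include <vector>

int main() {
    // CCDF points from FORM, ordered by increasing cost threshold C_i
    const std::vector<double> C  = { 500.0, 1000.0, 1500.0, 2000.0 };
    const std::vector<double> p  = { 0.999, 0.80,   0.25,   0.02   };
    // Stored sensitivities at each threshold: dC_i/dtheta and dp_i/dtheta
    const std::vector<double> dC = { 40.0,  80.0,   120.0,  160.0  };
    const std::vector<double> dp = { -1e-4, -8e-4,  -6e-4,  -5e-5  };

    // First trapezoid runs from cost 0 (probability 1) to the first threshold.
    double mean  = 0.5 * (1.0 + p[0]) * C[0];
    double dmean = 0.5 * ((1.0 + p[0]) * dC[0] + C[0] * dp[0]);

    // Remaining trapezoids between consecutive thresholds, with the product
    // rule applied to (C_{i+1} - C_i)(p_i + p_{i+1}).
    for (std::size_t i = 0; i + 1 < C.size(); ++i) {
        mean  += 0.5 * (C[i+1] - C[i]) * (p[i] + p[i+1]);
        dmean += 0.5 * ((dC[i+1] - dC[i]) * (p[i] + p[i+1])
                      + (C[i+1] - C[i]) * (dp[i] + dp[i+1]));
    }
    std::printf("mean cost = %.1f, d(mean)/d(theta) = %.3f\n", mean, dmean);
    return 0;
}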
4.3.2 RISK Sensitivity Calculations

The RISK gradients involve the derivatives of the trapezoidal rule at each exceedance probability/cost threshold pair.  Since a summation is performed to calculate the mean, the gradients must be accounted for in the same way.  For example, we cannot solely use the gradients of the last value of the sum, as RModel would; all of them must be represented equally.  To accomplish this, the products of the gradients are summed within RISK to achieve the final RISK sensitivity.  For each parameter, the DDM map value from every COST threshold is multiplied through with the FORM gradient and summed up.  The final RISK gradients are then saved to the global DDM map.  This calculation is given as

$$\frac{\partial \mu_c}{\partial \theta} = \frac{1}{2}\left[(1 + p_1)\frac{\partial C_1}{\partial \theta} + C_1\frac{\partial p_1}{\partial \theta} + \sum_{i}\left(\Big(\frac{\partial C_{i+1}}{\partial \theta} - \frac{\partial C_i}{\partial \theta}\Big)\big(p_i + p_{i+1}\big) + \big(C_{i+1} - C_i\big)\Big(\frac{\partial p_i}{\partial \theta} + \frac{\partial p_{i+1}}{\partial \theta}\Big)\right)\right] \qquad (132)$$

4.3.3 SAMPLING Gradients

The sampling model is only used as a tool to establish the approximate thresholds on either side of the estimated mean.  Therefore, the sampling gradients are of no use.

4.4 RBDO Implementation

For this thesis, the optimization algorithm was implemented using the steepest descent method and a fixed step size, utilizing the RISK gradients mentioned above.  Note that for optimization routines involving nodal coordinates, the FEM model needs to be re-meshed at every perturbation of the decision variable.

Chapter 5: RBDO EXAMPLES IN RTS

The three examples implemented in Rts involve the optimization of the thickness and width of a cantilever timber beam.  The first example uses the material-based sensitivity implementation, while the second example incorporates nodal sensitivity.  Finally, a third example involves an unconstrained optimization on both the thickness and width parameters.  The analytical gradients were verified with finite difference approximations at the FORM level.  Additionally, an analytical model of a cantilever beam was optimized in Excel using Solver and compared to the values from Rts.  For each Rts example, the optimization was run twice: once with a starting point above the design value and once with the starting point below.  The values of the various parameters are given in Table 4.

Table 4.  Rts model list of parameters and values

Parameter                               Type of Parameter   Distribution Type   Coefficient of Variation   Initial Value
Modulus of Elasticity (GPa)             Random Variable     Lognormal           0.15                       13
Beam Width (m)                          Decision Variable   -                   -                          3
Beam Length (m)                         Constant            -                   -                          9
Poisson's Ratio                         Constant            -                   -                          0.25
Beam Thickness (m)                      Decision Variable   -                   -                          0.3
Cost of Beam Construction ($/m3)        Random Variable     Normal              0.20                       200
Damage Cost Due to Deflection ($/mm)    Random Variable     Normal              0.20                       1000
Mass Density (kg/m3)                    Random Variable     Lognormal           0.05                       500
Point Load (kN)                         Random Variable     Normal              0.30                       17

The model consists of random variables, constants, and decision variable parameters.  The undeformed cantilever beam FEM model is shown in Figure 9.

Figure 9.  Cantilever beam model in Rts

The loading on the model includes self-weight and a point load.  It can easily be extended to support distributed loading over an area, as this is similar to the body loading (or self-weight) already accounted for.  The model consists of 36 elements, six in either direction.  In this optimization, the four-node bilinear Mindlin plate element was used.  The deformed beam is shown below in Figure 10.

Figure 10.  Deflected beam Rts model

The original undeformed beam is shown as a light grey line.  The Rts input files are given in Appendix A.1: Rts Input File Thickness Optimization and Appendix A.2: Rts Input File Width Optimization.  Table 5 below summarizes the FEM parameters used in the structural analysis.
Table 5.  Optimization analysis parameters

Parameter                          Value
Number of FEM Integration Points   1
Element Type                       Bilinear Mindlin 4-Node
Integration Type                   Gaussian
Optimization Type                  Unconstrained
Search Direction                   Steepest Descent
Step Size                          Fixed

The results of the Rts examples and the Excel model are summarized in Table 6.

Table 6.  Rts and Excel optimization example results comparison

Example                 Initial Value (m)   Final Design Value (m)   Iterations at Convergence   Time to Complete (min)   Mean Cost ($)
Beam Thickness          0.3                 0.1769                   10                          20                       1311
Beam Thickness          0.05                0.1769                   10                          25.4                     1311
Beam Width              0.7                 1.1587                   6                           14.4                     929
Beam Width              3                   1.1627                   8                           25.3                     929
Excel Beam Thickness    0.3                 0.1754                   -                           -                        1303
Excel Beam Width        3                   1.1507                   -                           -                        921

The analytical Excel values match closely with those from Rts.  The number of iterations to convergence depends on the step size; all examples used a fixed step size, and if the step size is too small, the analysis takes longer to converge.  In both optimization examples, the material and geometric properties are equivalent, with the exception of the thickness being the decision variable in the first and the beam width in the other.  The following sections showcase the Rts examples along with their respective GUI output.  The input file for each optimization is given in the Appendices.

5.1 Optimization of Beam Thickness Using Material DDM Sensitivities

This section outlines an RBDO example using material sensitivities.  In this case, the decision variable is the beam thickness.  In the RBDO class, the steepest descent method is used with a step size of 0.0005.  The optimization was started at values higher and lower than the design thickness to illustrate convergence capabilities.  The first optimization was started with an initial thickness of 0.05 m.  The Rts GUI output of the optimization is shown below.

Figure 11.  Thickness optimization Rts GUI output

The Rts output to the user consists of both graphical and text components.  The text component is given in Appendix B.  Figure 11 illustrates the GUI output of the thickness optimization: the optimization iteration number and objective function value are plotted on the left, the RISK CCDF is plotted in the upper right-hand corner, and the FORM evolution plot is given in the bottom right.

Figure 12.  Optimization plot; iteration number vs. objective function value

Figure 12 shows the decreasing objective function value as the optimization progresses.  At around the sixth iteration the thickness approaches the optimum cost.  The beam is initially very thin; therefore deflection governs, and the high cost is due to a large deflection value.  Figure 13 below shows the evolution of the CCDF curve with each optimization iteration.

Figure 13.  Thickness optimization CCDF

As the thickness approaches the optimum, the spacing between the curves decreases while the slope increases.  The area under the leftmost curve represents the lowest mean cost; the optimization proceeds from right to left.  The plot shown is the mean probabilistic cost of the beam vs. the probability of exceedance.

The second optimization started with a beam that was 0.3 m thick.  In this case, the deflection was smaller while the material cost of manufacturing the beam governed.  The GUI output is shown below, while the text output can be seen in Appendix B.1: Material Optimization Output.
Figure 14.  Thickness optimization full GUI

In this case the optimization approaches the design thickness within four iterations, as can be seen in Figure 14 and Figure 15.

Figure 15.  Thickness optimization plot

The CCDF evolution plot is additionally given below for reference.

Figure 16.  Thickness optimization CCDF evolution

As can be seen in Figure 16, as the decision variable value approaches the design point, the CCDF curve spacing diminishes as it shifts leftward.  In the thickness case, the final mean cost was $1311.

5.2 Optimization of Beam Width Using Nodal DDM Sensitivities

In this section, the width of a timber beam is optimized.  The parameters and their respective values are given in Table 4.  In this optimization, the decision variable involves nodal coordinates: the beam width.  As in the beam thickness example, two starting values were chosen, one above and one below the design point.  The first analysis starts with a width value of 0.7 m.  The Rts GUI output of the optimization is given below in Figure 17.

Figure 17.  Width optimization full Rts GUI output

As can be seen in the optimization plot shown in Figure 18, the optimization approaches the design point after the third iteration.

Figure 18.  Width optimization plot; iteration number vs. objective function value

The CCDF evolution plot is also given below.

Figure 19.  Width optimization RISK CCDF evolution plot

As in the thickness optimization, the RISK CCDF curve spacing becomes smaller as the curves proceed to the left.  The optimization was repeated with an initial value of 3 m, well above the design value.  The GUI output is presented below.

Figure 20.  Width optimization Rts GUI output

The optimization completed within eight steps, with the decision variable approaching the optimum value at around the sixth step.

Figure 21.  Width optimization; iteration number vs. objective function value

The final decision variable value at convergence was 1.1627 m after 8 steps.

Figure 22.  Width optimization CCDF curve

The mean cost was $929 at the final iteration.  The console output is given in Appendix B.2: Nodal Optimization Output.

5.3 Simultaneous Optimization of Beam Width and Thickness

Since this is an example of unconstrained optimization, as expected, the thickness of the beam keeps increasing while the width keeps decreasing.  This is consistent with the fact that tall and slender beams are more efficient in bending.  Figure 23 and Figure 24 highlight the first 50 steps of such an optimization.

Figure 23.  Iteration number vs. objective function value of simultaneous optimization

The initial parameter values were 0.1 and 5 for the beam thickness and width respectively.

Figure 24.  CCDF curve of simultaneous optimization

After 50 iterations, the final values for the thickness and width were 2.756 m and 0.00728 m.  Eventually the optimization would break down since, unconstrained, the beam thickness would theoretically approach infinity while the width would approach zero.  The mean cost after 50 iterations was $64.

Chapter 6: CONCLUSION

The major contribution of this thesis is the implementation of nodal and material analytical gradients throughout the Rts framework, starting at the element level up through to the optimization level.  The elements implemented consist of the Bilinear-Mindlin four node and nine node plate elements.
A framework was also put into place whereby DDM capabilities can be added to any element, with the analytical gradients hardcoded in at the element level.  An academic cost model was created to showcase the multi-model capabilities of Rts and the ability to calculate DDM dependencies of downstream models.  A RISK model was implemented that incorporates a built-in FORM model with gradient-history capabilities and in-model DDM dependency calculations.  Additionally, an RBDO optimization model was built upon to include DDM capabilities and downstream model control.  Finally, two reliability-based design optimization examples were implemented that used both nodal and material sensitivities.

Future research should entail "piece-wise" optimization, where the objective function is not smooth throughout the entire optimization stream; this would result in a discontinuity of the gradients.  Such discontinuities would likely be an issue at the cost model level, since cost models have the potential to be very complex, incorporating subjective decision-making aspects.  Additionally, more components and elements should be added at the FEM level to increase the capabilities of Rts, along with implementation of the adjoint method and extension of the analytical sensitivity implementation to incorporate inelastic and dynamic analysis.

Using the Rts optimization framework involves engineering judgment, such as selecting the optimum number of FEM elements, the number of FEM integration points, the number of RISK thresholds, the initial sampling number, and the various step sizes throughout the models.  Application of this research provides the means for computer-aided reliability-based structural optimization.  In summary, engineers and researchers could use Rts to find the optimum values of structural components and make the best decisions under uncertainty.

Bibliography

Anderson, K. H. (2000). Analytic full-recursive sensitivity analysis for multibody systems design. Proceedings of the International Congress of Theoretical and Applied Mechanics, Paper 1874.

Christensen, P. W., & Klarbring, A. (2009). An Introduction to Structural Optimization. Sweden: Springer.

Cook, R., Malkus, D. S., Plesha, M. E., & Witt, R. J. (2002). Concepts and Applications of Finite Element Analysis, 4th ed. Hoboken: Wiley.

Eisley, J. (1989). Mechanics of Elastic Structures: Classical and Finite Element Methods. Michigan: Prentice Hall.

Flaherty, J. E. (2014). Finite Element Analysis: Mesh Generation and Assembly. New York: Rensselaer Polytechnic Institute.

Haukaas, T., & Der Kiureghian, A. (2014). Finite Element Reliability and Sensitivity Methods for Performance-Based Earthquake Engineering. University of California, Berkeley: Pacific Earthquake Engineering Research Center.

Mahsuli, M., & Haukaas, T. (2013). Computer program for multimodel reliability and optimization analysis. Journal of Computing in Civil Engineering, 87-98.

Singhal, S., Ghiocel, D. M., & Nikolaidis, E. (2004). Engineering Design Reliability Handbook. CRC Press.
73APPENDICES                        74Appendix A    Rts Input File  A.1 Rts Input File Thickness Optimization // By Stevan Gavrilovic // Developed  February 2015 // // Description: A thickness optimization of a cantilever with a point load using cost as a function of the cantilever's volume    // -1)**************TOOLS USED BY THE VARIOUS METHODS*******************************  //LINEAR SOLVER  RInHouseLinearSolver |ObjectName: myLinearSolver |OutputLevel: Minimum    //PROBABILITY DISTRIBUTIONS RInHouseProbabilityDistributions |ObjectName: theProbabilityDistributions |OutputLevel: Minimum    //PROBABILITY TRANSFORMATIONS RIndependenceProbabilityTransformation |ObjectName: myProbabilityTransformations |OutputLevel: Minimum |ProbabilityDistributions: theProbabilityDistributions  //RANDOM NUMBER GENERATOR RInHouseRandomNumberGenerator |ObjectName: myInHouseRandomNumberGenerator |OutputLevel: Minimum |Seed: 0  //MATRIX ASSEMBLER RConnectivityTableAssembler |ObjectName: myAssembler |OutputLevel: Minimum   // -2)**************DEFINE THE LOADING *******************************  //CONCENTRATED POINT LOAD RContinuousRandomVariable |ObjectName: pointLoad |ProbabilityDistributions: theProbabilityDistributions |DistributionType: Normal (mean, stdv) |Mean: -17000  |StandardDeviation: 500   |CurrentValue: -17000  // -3)**************DEFINE THE MODEL'S RANDOM VARIABLES*****************************  // MATERIAL PROPERTIES RANDOM VARIABLES ARE SET IN THE COMPONENT CODE  // COST RANDOM VARIABLES // Cost to supply and install materials at current rate - based on volume in this case RContinuousRandomVariable |ObjectName: thetaCLT |ProbabilityDistributions: theProbabilityDistributions |DistributionType: Normal (mean, stdv) |Mean: 200 |CoefficientOfVariation: 0.2           |CurrentValue: 200  //Cost of repair from damage models  RContinuousRandomVariable |ObjectName: thetaDeflection |ProbabilityDistributions: theProbabilityDistributions |DistributionType: Normal (mean, stdv) |Mean: -1000 |CoefficientOfVariation: 0.2  |CurrentValue: -1000    75// -4)**************DEFINE THE OPTIMIZATION DECISION VARIABLES*******************  // Beam thickness RContinuousDecisionVariable |ObjectName: beamThickness |CurrentValue: 0.05 |InitialValue: 0.05 |UpperBound: 0 |LowerBound: 0 |IncrementalCost: 1  // -5)**************DEFINE THE COMPONENT CONSTANTS*******************************  RConstant |ObjectName: beamLength |CurrentValue: 9 RConstant |ObjectName: beamWidth  |CurrentValue: 3  // -6)**************DEFINE THE COMPONENTS'S GEOMETRY*******************************  //FIXED WALL GEOMETRY  RPoint |ObjectName: pointA  |XCoordinate: 0   |YCoordinate: -1     |ZCoordinate: 2  RPoint |ObjectName: pointB  |XCoordinate: 0   |YCoordinate: 5     |ZCoordinate: 2  RPoint |ObjectName: pointC  |XCoordinate: 0   |YCoordinate: 5     |ZCoordinate: 4  RPoint |ObjectName: pointD  |XCoordinate: 0   |YCoordinate: -1     |ZCoordinate: 4    //CANTILEVER BEAM GEOMETRY  RPoint |ObjectName: horzPoint1   |XCoordinate: 0   |YCoordinate: 0      |ZCoordinate: 3  RPoint |ObjectName: horzPoint2   |XCoordinate: beamLength     |YCoordinate: 0       |ZCoordinate: 3    RPoint |ObjectName: horzPoint3   |XCoordinate: beamLength       |YCoordinate: beamWidth  |ZCoordinate: 3  |ZForce: pointLoad RPoint |ObjectName: horzPoint4   |XCoordinate: 0   |YCoordinate: beamWidth  |ZCoordinate: 3   // -7)**************DEFINE THE SYSTEM COMPONENTS*******************************  //CLT BEAM COMPONENT (HORIZONTAL PLATE BILINEAR MINDLIN 4-NODE) RPlateComponent  |ObjectName: 
cantileverBeam |Point1: horzPoint1 |Point2: horzPoint2 |Point3: horzPoint3 |Point4: horzPoint4  |MeshOption: 106 |Thickness: beamThickness  //FIXED WALL COMPONENT  RFixedPLaneComponent |ObjectName: ground |Point1: pointA |Point2: pointB  |Point3: pointC |Point4: pointD   // -8)**************DEFINE THE STRUCTURAL ANALYSIS TYPE*******************************  // LINEAR STRUCTURAL ANALYSIS RLinearStaticStructuralAnalysis |ObjectName: myStaticAnalysis |OutputLevel: Minimum |Assembler: myAssembler |LinearSolver: myLinearSolver   // -9)**************DEFINE THE ORCHESTRATING MODEL TOOLS AND INPUTS*****************  // OPTIMIZATION TOOLS RGradientNormConvergenceCheck |ObjectName: myOptimizationConvergenceCheck |OutputLevel: Maximum |Tolerance: 10  // Steepest Descent RFixedStepSize |ObjectName: myOptimizationFixedStepSize |OutputLevel: Minimum |StepSize: 0.000005 //|StepSize: 0.0001  76 RSteepestDescentSearchDirection |ObjectName: myOptimizationSearchDirection |OutputLevel: Minimum  // BFGS //RFixedStepSize |ObjectName: myOptimizationFixedStepSize |OutputLevel: Minimum |StepSize: 0.00001 //RBFGSSearchDirection |ObjectName: myOptimizationSearchDirection |OutputLevel: Minimum    // -10)**************DEFINE THE ORCHESTRATING MODELS*******************************  // RESPONSE MODEL RComponentResponseModel |ObjectName: myStaticBuilding |OutputLevel: Minimum  |StructuralAnalysis: myStaticAnalysis |Responses: horzPoint3.ZDisplacement   //**********COSTS**********  // COSTS - TOTAL RSteveTestCostModel |ObjectName: costTotal |OutputLevel: Maximum |Expression: thetaCLT*beamWidth*beamThickness*beamLength  + thetaDeflection*myStaticBuildinghorzPoint3ZDisplacement | GradientMethod:  Direct Differentiation  //*************************  // SAMPLING MODEL RSamplingModel |ObjectName: mySamplingCostModel |OutputLevel: Minimum |InputParameter: costTotalResponse |Threshold: 1300 |MaxSamples: 1 |TargetCov: 0.005 |SamplingCentre: Origin |RandomNumberGenerator: myInHouseRandomNumberGenerator |ProbabilityTransformation: myProbabilityTransformations    // RISK MODEL RRiskModel |ObjectName: myRiskCostModel |OutputLevel: Maximum |RiskMeasure: 1 |InputFromCost: costTotalResponse  // OPTIMIZATION MODEL ROptimizationModel |ObjectName: myRiskOptimizationCostModel |OutputLevel: Maximum |Objective: myRiskCostModelResponse |MaxSteps: 20 |SearchDirection: myOptimizationSearchDirection |StepSize: myOptimizationFixedStepSize |ConvergenceCheck: myOptimizationConvergenceCheck |GradientMethod: Direct Differentiation               77 A.2 Rts Input File Width Optimization // By Stevan Gavrilovic // Developed February 2015 // // Description: A width optimization of a cantilever with a point load using cost as a function of the cantilever's volume    // -1)**************TOOLS USED BY THE VARIOUS METHODS*******************************  //LINEAR SOLVER  RInHouseLinearSolver |ObjectName: myLinearSolver |OutputLevel: Minimum    //PROBABILITY DISTRIBUTIONS RInHouseProbabilityDistributions |ObjectName: theProbabilityDistributions |OutputLevel: Minimum    //PROBABILITY TRANSFORMATIONS RIndependenceProbabilityTransformation |ObjectName: myProbabilityTransformations |OutputLevel: Minimum |ProbabilityDistributions: theProbabilityDistributions  //RANDOM NUMBER GENERATOR RInHouseRandomNumberGenerator |ObjectName: myInHouseRandomNumberGenerator |OutputLevel: Minimum |Seed: 0  //MATRIX ASSEMBLER RConnectivityTableAssembler |ObjectName: myAssembler |OutputLevel: Minimum    // -2)**************DEFINE THE LOADING *******************************  
//CONCENTRATED POINT LOAD RContinuousRandomVariable |ObjectName: pointLoad |ProbabilityDistributions: theProbabilityDistributions |DistributionType: Normal (mean, stdv) |Mean: -17000  |StandardDeviation: 500   |CurrentValue: -17000   // -3)**************DEFINE THE MODEL'S RANDOM VARIABLES*******************************  // MATERIAL PROPERTIES RANDOM VARIABLES ARE SET IN THE COMPONENT CODE  // COST RANDOM VARIABLES // Cost to supply and install materials at current rate - based on volume in this case RContinuousRandomVariable |ObjectName: thetaCLT |ProbabilityDistributions: theProbabilityDistributions |DistributionType: Normal (mean, stdv) |Mean: 200 |CoefficientOfVariation: 0.1           |CurrentValue: 200 //Cost of repair from damage models  RContinuousRandomVariable |ObjectName: thetaDeflection |ProbabilityDistributions: theProbabilityDistributions |DistributionType: Normal (mean, stdv) |Mean: -1000 |CoefficientOfVariation: 0.1  |CurrentValue: -1000       78 // -4)**************DEFINE THE OPTIMIZATION DECISION VARIABLES**********************   // Beam width RContinuousDecisionVariable |ObjectName: beamWidth  |CurrentValue: 3 |InitialValue: 3 |UpperBound: 0 |LowerBound: 0 |IncrementalCost: 1    // -5)**************DEFINE THE COMPONENT CONSTANTS*******************************  RConstant |ObjectName: beamLength |CurrentValue: 9 RConstant |ObjectName: beamThickness |CurrentValue: 0.2  // -6)**************DEFINE THE COMPONENTS'S GEOMETRY*******************************   //FIXED WALL GEOMETRY  RPoint |ObjectName: pointA  |XCoordinate: 0   |YCoordinate: -1     |ZCoordinate: 2  RPoint |ObjectName: pointB  |XCoordinate: 0   |YCoordinate: 5     |ZCoordinate: 2  RPoint |ObjectName: pointC  |XCoordinate: 0   |YCoordinate: 5     |ZCoordinate: 4  RPoint |ObjectName: pointD  |XCoordinate: 0   |YCoordinate: -1     |ZCoordinate: 4     //CANTILEVER BEAM GEOMETRY  RPoint |ObjectName: horzPoint1   |XCoordinate: 0   |YCoordinate: 0      |ZCoordinate: 3  RPoint |ObjectName: horzPoint2   |XCoordinate: beamLength     |YCoordinate: 0       |ZCoordinate: 3    RPoint |ObjectName: horzPoint3   |XCoordinate: beamLength       |YCoordinate: beamWidth  |ZCoordinate: 3  |ZForce: pointLoad RPoint |ObjectName: horzPoint4   |XCoordinate: 0   |YCoordinate: beamWidth  |ZCoordinate: 3    // -7)**************DEFINE THE SYSTEM COMPONENTS*******************************  //CLT BEAM COMPONENT (HORIZONTAL PLATE BILINEAR MINDLIN 4-NODE) RPlateComponent  |ObjectName: cantileverBeam |Point1: horzPoint1 |Point2: horzPoint2 |Point3: horzPoint3 |Point4: horzPoint4  |MeshOption: 106 |Thickness: beamThickness  //FIXED WALL COMPONENT  RFixedPLaneComponent |ObjectName: ground |Point1: pointA |Point2: pointB  |Point3: pointC |Point4: pointD   // -8)**************DEFINE THE STRUCTURAL ANALYSIS TYPE*******************************  // LINEAR STRUCTURAL ANALYSIS RLinearStaticStructuralAnalysis |ObjectName: myStaticAnalysis |OutputLevel: Minimum |Assembler: myAssembler |LinearSolver: myLinearSolver       79 // -9)**************DEFINE THE ORCHESTRATING MODEL TOOLS AND INPUTS*************  // OPTIMIZATION TOOLS RGradientNormConvergenceCheck |ObjectName: myOptimizationConvergenceCheck |OutputLevel: Maximum |Tolerance: 10  // Steepest Descent RFixedStepSize |ObjectName: myOptimizationFixedStepSize |OutputLevel: Minimum |StepSize:  0.0008 RSteepestDescentSearchDirection |ObjectName: myOptimizationSearchDirection |OutputLevel: Minimum   // BFGS //RFixedStepSize |ObjectName: myOptimizationFixedStepSize |OutputLevel: Minimum |StepSize: 1 
//RBFGSSearchDirection |ObjectName: myOptimizationSearchDirection |OutputLevel: Minimum    // -10)**************DEFINE THE ORCHESTRATING MODELS*******************************  // RESPONSE MODEL RComponentResponseModel |ObjectName: myStaticBuilding |OutputLevel: Minimum  |StructuralAnalysis: myStaticAnalysis |Responses: horzPoint3.ZDisplacement   //**********COSTS**********  // COSTS - TOTAL RSteveTestCostModel |ObjectName: costTotal |OutputLevel: Maximum |Expression: thetaCLT*beamWidth*beamThickness*beamLength  + thetaDeflection*myStaticBuildinghorzPoint3ZDisplacement | GradientMethod:  Direct Differentiation   //*************************   // SAMPLING MODEL RSamplingModel |ObjectName: mySamplingCostModel |OutputLevel: Minimum |InputParameter: costTotalResponse |Threshold: 1300 |MaxSamples: 1 |TargetCov: 0.005 |SamplingCentre: Origin |RandomNumberGenerator: myInHouseRandomNumberGenerator |ProbabilityTransformation: myProbabilityTransformations  // RISK MODEL RRiskModel |ObjectName: myRiskCostModel |OutputLevel: Maximum |RiskMeasure: 1 |InputFromCost: costTotalResponse   // OPTIMIZATION MODEL ROptimizationModel |ObjectName: myRiskOptimizationCostModel |OutputLevel: Maximum |Objective: myRiskCostModelResponse |MaxSteps: 20 |SearchDirection: myOptimizationSearchDirection |StepSize: myOptimizationFixedStepSize |ConvergenceCheck: myOptimizationConvergenceCheck |GradientMethod: Direct Differentiation   80Appendix B    Rts Console Output B.1 Material Optimization Output The OPTIMIZATION analysis in "myRiskOptimizationCostModel" started...    The decision variable values initially are:  |        0.05 |  The RISK analysis in "myRiskCostModel" started...    Now running the intial sampling to determine thresholds...   The random variable named "cantileverBeamPlateDensity" is missing the probability distributions tool,  hence, it is creating a tool called "defaultInHouseProbabilityDistributions"  The random variable named "cantileverBeamPlateYoungsModulus" is missing the probability distributions tool,  hence, it is picking up the "defaultInHouseProbabilityDistributions" that is already available.  The sampling mean-value is:   13377.7  Reducing the std. dev. to mean to 4.75  Reducing the std. dev. to mean to 4.5  Reducing the std. dev. to mean to 4.25   Determining thresholds using points within 4.25 standard deviations from the mean.  The threshold 1 is 4426.55  The threshold 2 is 5240.3  The threshold 3 is 6054.04  The threshold 4 is 6867.79  The threshold 5 is 7681.53  The threshold 6 is 8495.27  The threshold 7 is 9309.02  The threshold 8 is 10122.8  The threshold 9 is 10936.5  The threshold 10 is 11750.2  The threshold 11 is 12564  The threshold 12 is 13377.7  The threshold 13 is 14191.5  The threshold 14 is 15005.2  The threshold 15 is 15819  The threshold 16 is 16632.7  The threshold 17 is 17446.5  The threshold 18 is 18260.2  The threshold 19 is 19073.9  The threshold 20 is 19887.7  The threshold 21 is 20701.4  The threshold 22 is 21515.2  The threshold 23 is 22328.9  The FORM analysis in RISK at threshold "4426.55" has started..        
81For exceedence probability curve (threshold number, threshold, probability):  1 , 4426.55 , 0.999156  2 , 5240.3 , 0.997142  3 , 6054.04 , 0.991911  4 , 6867.79 , 0.97968  5 , 7681.53 , 0.955488  6 , 8495.27 , 0.914106  7 , 9309.02 , 0.852095  8 , 10122.8 , 0.769566  9 , 10936.5 , 0.670758  10 , 11750.2 , 0.56303  11 , 12564 , 0.4549  12 , 13377.7 , 0.354049  13 , 14191.5 , 0.26585  14 , 15005.2 , 0.193012  15 , 15819 , 0.135802  16 , 16632.7 , 0.0928361  17 , 17446.5 , 0.0618074  18 , 18260.2 , 0.0401754  19 , 19073.9 , 0.025554  20 , 19887.7 , 0.015936  21 , 20701.4 , 0.00976417  22 , 21515.2 , 0.00588768  23 , 22328.9 , 0.00350003  The risk measure is: 12440.6  The analysis in "myRiskCostModel" completed in 126.855 seconds.    At optimization step 1 the objective function in "myRiskOptimizationCostModel" is 12440.6  The decision variable values (before updating) are:  |        0.05 |  The optimization gradient is currently:  |-1.38693e+06 |  The norm in "myOptimizationConvergenceCheck" is 1.38693e+06  Now running the intial sampling to determine thresholds...   The sampling mean-value is:   8115.12  The FORM analysis in RISK at threshold "1167.02" has started..     For exceedence probability curve (threshold number, threshold, probability):  1 , 1167.02 , 0.999994  2 , 1798.67 , 0.999952  3 , 2430.31 , 0.999699  4 , 3061.96 , 0.998346  5 , 3693.61 , 0.992898  6 , 4325.25 , 0.976226  7 , 4956.9 , 0.937162  8 , 5588.54 , 0.865241  9 , 6220.19 , 0.757978  10 , 6851.83 , 0.62449  11 , 7483.48 , 0.482125  12 , 8115.12 , 0.348988  13 , 8746.77 , 0.237641  14 , 9378.41 , 0.152967  15 , 10010.1 , 0.0935804  16 , 10641.7 , 0.0547176   8217 , 11273.4 , 0.0307372  18 , 11905 , 0.0166718  19 , 12536.6 , 0.00876941  20 , 13168.3 , 0.00449065  21 , 13799.9 , 0.00224713  22 , 14431.6 , 0.00110201  23 , 15063.2 , 0.000531233  The risk measure is: 7537.98  The analysis in "myRiskCostModel" completed in 134.855 seconds.    At optimization step 2 the objective function in "myRiskOptimizationCostModel" is 7537.98  The decision variable values (before updating) are:  |        0.06 |  The optimization gradient is currently:  |     -676328 |  The norm in "myOptimizationConvergenceCheck" is 676328  Now running the intial sampling to determine thresholds...   The sampling mean-value is:   4363.81  The FORM analysis in RISK at threshold "1986.6" has started..     For exceedence probability curve (threshold number, threshold, probability):  1 , 1986.6 , 0.998448  2 , 2202.71 , 0.996291  3 , 2418.82 , 0.991987  4 , 2634.93 , 0.983834  5 , 2851.04 , 0.969812  6 , 3067.15 , 0.947636  7 , 3283.26 , 0.915221  8 , 3499.37 , 0.871189  9 , 3715.48 , 0.815294  10 , 3931.59 , 0.748619  11 , 4147.7 , 0.673506  12 , 4363.81 , 0.593189  13 , 4579.92 , 0.511304  14 , 4796.03 , 0.431371  15 , 5012.14 , 0.356372  16 , 5228.25 , 0.288496  17 , 5444.36 , 0.229054  18 , 5660.47 , 0.178532  19 , 5876.58 , 0.13675  20 , 6092.69 , 0.103045  21 , 6308.8 , 0.0764659  22 , 6524.91 , 0.0559376  23 , 6741.02 , 0.0403797  24 , 6957.13 , 0.028791  25 , 7173.24 , 0.0202945  26 , 7389.35 , 0.0141546  27 , 7605.46 , 0.00977601  28 , 7821.57 , 0.00669111  29 , 8037.67 , 0.00454171  The risk measure is: 4685.46      83The analysis in "myRiskCostModel" completed in 160.809 seconds.    
At optimization step 3 the objective function in "myRiskOptimizationCostModel" is 4685.46
The decision variable values (before updating) are:
|       0.072 |
The optimization gradient is currently:
|     -326423 |
The norm in "myOptimizationConvergenceCheck" is 326423
Now running the initial sampling to determine thresholds...
The sampling mean-value is: 3201.2
The FORM analysis in RISK at threshold "583.445" has started...
For exceedance probability curve (threshold number, threshold, probability):
1 , 583.445 , 0.999999
2 , 821.423 , 0.999986
3 , 1059.4 , 0.999894
4 , 1297.38 , 0.999295
5 , 1535.36 , 0.996365
6 , 1773.33 , 0.985669
7 , 2011.31 , 0.956515
8 , 2249.29 , 0.895825
9 , 2487.27 , 0.796305
10 , 2725.25 , 0.663538
11 , 2963.22 , 0.51507
12 , 3201.2 , 0.37217
13 , 3439.18 , 0.251086
14 , 3677.16 , 0.15899
15 , 3915.13 , 0.0950777
16 , 4153.11 , 0.054036
17 , 4391.09 , 0.0293678
18 , 4629.07 , 0.0153495
19 , 4867.05 , 0.00775516
20 , 5105.02 , 0.00380511
21 , 5343 , 0.00182038
22 , 5580.98 , 0.000852197
23 , 5818.96 , 0.000391622
The risk measure is: 3034.37
The analysis in "myRiskCostModel" completed in 151.07 seconds.

At optimization step 4 the objective function in "myRiskOptimizationCostModel" is 3034.37
The decision variable values (before updating) are:
|      0.0864 |
The optimization gradient is currently:
|     -156205 |
The norm in "myOptimizationConvergenceCheck" is 156205
Now running the initial sampling to determine thresholds...
The sampling mean-value is: 2180.97
The FORM analysis in RISK at threshold "1241.83" has started...
For exceedance probability curve (threshold number, threshold, probability):
1 , 1156.45 , 0.996412
2 , 1241.83 , 0.992054
3 , 1327.2 , 0.983251
4 , 1412.58 , 0.969421
5 , 1497.96 , 0.946143
6 , 1583.33 , 0.911998
7 , 1668.71 , 0.86522
8 , 1754.09 , 0.805533
9 , 1839.46 , 0.73421
10 , 1924.84 , 0.653985
11 , 2010.22 , 0.568674
12 , 2095.6 , 0.48251
13 , 2180.97 , 0.399518
14 , 2266.35 , 0.322951
15 , 2351.73 , 0.255069
16 , 2437.1 , 0.197035
17 , 2522.48 , 0.149025
18 , 2607.86 , 0.110496
19 , 2693.23 , 0.0804189
20 , 2778.61 , 0.0575161
21 , 2863.99 , 0.0404777
22 , 2949.37 , 0.0280638
23 , 3034.74 , 0.0191895
24 , 3120.12 , 0.0129541
25 , 3205.5 , 0.00864244
26 , 3290.87 , 0.00570406
27 , 3376.25 , 0.00372727
The risk measure is: 2102.07
The analysis in "myRiskCostModel" completed in 170.443 seconds.

At optimization step 5 the objective function in "myRiskOptimizationCostModel" is 2102.07
The decision variable values (before updating) are:
|     0.10368 |
The optimization gradient is currently:
|    -72471.3 |
The norm in "myOptimizationConvergenceCheck" is 72471.3
Now running the initial sampling to determine thresholds...
The sampling mean-value is: 1642.72
The FORM analysis in RISK at threshold "958.301" has started...
For exceedance probability curve (threshold number, threshold, probability):
1 , 958.301 , 0.995644
2 , 1020.52 , 0.990432
3 , 1082.74 , 0.980777
4 , 1144.96 , 0.964031
5 , 1207.18 , 0.937402
6 , 1269.4 , 0.898241
7 , 1331.62 , 0.844844
8 , 1393.84 , 0.777098
9 , 1456.06 , 0.696833
10 , 1518.28 , 0.607699
11 , 1580.5 , 0.514576
12 , 1642.72 , 0.422701
13 , 1704.94 , 0.336783
14 , 1767.16 , 0.260348
15 , 1829.38 , 0.195427
16 , 1891.6 , 0.142601
17 , 1953.82 , 0.101288
18 , 2016.04 , 0.0701341
19 , 2078.26 , 0.0474155
20 , 2140.48 , 0.0313502
21 , 2202.7 , 0.0203044
22 , 2264.91 , 0.0129024
23 , 2327.13 , 0.00805538
24 , 2389.35 , 0.0049488
The risk measure is: 1600.91
The analysis in "myRiskCostModel" completed in 149.305 seconds.

At optimization step 6 the objective function in "myRiskOptimizationCostModel" is 1600.91
The decision variable values (before updating) are:
|    0.124416 |
The optimization gradient is currently:
|    -30620.5 |
The norm in "myOptimizationConvergenceCheck" is 30620.5
Now running the initial sampling to determine thresholds...
The sampling mean-value is: 1395.66
The FORM analysis in RISK at threshold "835.815" has started...
For exceedance probability curve (threshold number, threshold, probability):
1 , 835.815 , 0.995175
2 , 886.71 , 0.990117
3 , 937.605 , 0.981052
4 , 988.499 , 0.965794
5 , 1039.39 , 0.941821
6 , 1090.29 , 0.906558
7 , 1141.18 , 0.857983
8 , 1192.08 , 0.795202
9 , 1242.97 , 0.719032
10 , 1293.87 , 0.632093
11 , 1344.76 , 0.538682
12 , 1395.66 , 0.443984
13 , 1446.55 , 0.353325
14 , 1497.45 , 0.271159
15 , 1548.34 , 0.200587
16 , 1599.24 , 0.14299
17 , 1650.13 , 0.0982695
18 , 1701.03 , 0.0651413
19 , 1751.92 , 0.0416967
20 , 1802.82 , 0.0257984
21 , 1853.71 , 0.0154531
22 , 1904.61 , 0.00897344
23 , 1955.5 , 0.00506102
24 , 2006.39 , 0.00277696
The risk measure is: 1368.18
The analysis in "myRiskCostModel" completed in 152.31 seconds.

At optimization step 7 the objective function in "myRiskOptimizationCostModel" is 1368.18
The decision variable values (before updating) are:
|    0.149299 |
The optimization gradient is currently:
|    -9924.72 |
The norm in "myOptimizationConvergenceCheck" is 9924.72
Now running the initial sampling to determine thresholds...
The sampling mean-value is: 1272.44
The FORM analysis in RISK at threshold "799.887" has started...
For exceedance probability curve (threshold number, threshold, probability):
1 , 756.928 , 0.995887
2 , 799.887 , 0.99257
3 , 842.846 , 0.987083
4 , 885.805 , 0.978456
5 , 928.764 , 0.965373
6 , 971.723 , 0.94641
7 , 1014.68 , 0.920067
8 , 1057.64 , 0.884998
9 , 1100.6 , 0.840251
10 , 1143.56 , 0.785517
11 , 1186.52 , 0.721325
12 , 1229.48 , 0.649128
13 , 1272.44 , 0.571239
14 , 1315.4 , 0.49062
15 , 1358.35 , 0.410564
16 , 1401.31 , 0.33422
17 , 1444.27 , 0.264337
18 , 1487.23 , 0.202902
19 , 1530.19 , 0.15102
20 , 1573.15 , 0.108915
21 , 1616.11 , 0.0760691
22 , 1659.07 , 0.0514298
23 , 1702.03 , 0.0336498
24 , 1744.99 , 0.0213028
25 , 1787.95 , 0.0130482
26 , 1830.9 , 0.00773371
27 , 1873.86 , 0.00443503
The risk measure is: 1309.9
The analysis in "myRiskCostModel" completed in 165.379 seconds.

At optimization step 8 the objective function in "myRiskOptimizationCostModel" is 1309.9
The decision variable values (before updating) are:
|    0.179159 |
The optimization gradient is currently:
|     463.061 |
The norm in "myOptimizationConvergenceCheck" is 463.061
Now running the initial sampling to determine thresholds...
The sampling mean-value is: 1201.57
The FORM analysis in RISK at threshold "436.831" has started...
For exceedance probability curve (threshold number, threshold, probability):
1 , 436.831 , 0.999988
2 , 506.353 , 0.999947
3 , 575.874 , 0.999798
4 , 645.396 , 0.99931
5 , 714.918 , 0.997887
6 , 784.439 , 0.994188
7 , 853.961 , 0.985616
8 , 923.483 , 0.967894
9 , 993.004 , 0.935187
10 , 1062.53 , 0.881261
11 , 1132.05 , 0.801788
12 , 1201.57 , 0.696986
13 , 1271.09 , 0.573216
14 , 1340.61 , 0.442181
15 , 1410.13 , 0.317682
16 , 1479.66 , 0.211386
17 , 1549.18 , 0.129758
18 , 1618.7 , 0.0732669
19 , 1688.22 , 0.0379927
20 , 1757.74 , 0.0180762
21 , 1827.26 , 0.0078909
22 , 1896.79 , 0.00316212
23 , 1966.31 , 0.00116498
The risk measure is: 1311.07
The analysis in "myRiskCostModel" completed in 147.4 seconds.

At optimization step 9 the objective function in "myRiskOptimizationCostModel" is 1311.07
The decision variable values (before updating) are:
|    0.176844 |
The optimization gradient is currently:
|    -23.6451 |
The norm in "myOptimizationConvergenceCheck" is 23.6451
Now running the initial sampling to determine thresholds...
The sampling mean-value is: 1302.9
The FORM analysis in RISK at threshold "491.8" has started...
For exceedance probability curve (threshold number, threshold, probability):
1 , 491.8 , 0.99996
2 , 565.536 , 0.999833
3 , 639.272 , 0.999376
4 , 713.008 , 0.997942
5 , 786.744 , 0.993988
6 , 860.48 , 0.984399
7 , 934.216 , 0.963942
8 , 1007.95 , 0.925523
9 , 1081.69 , 0.861935
10 , 1155.42 , 0.769137
11 , 1229.16 , 0.649533
12 , 1302.9 , 0.513312
13 , 1376.63 , 0.376037
14 , 1450.37 , 0.253419
15 , 1524.1 , 0.156248
16 , 1597.84 , 0.0877811
17 , 1671.58 , 0.0448277
18 , 1745.31 , 0.0207849
19 , 1819.05 , 0.00874595
20 , 1892.78 , 0.00334271
21 , 1966.52 , 0.00116177
22 , 2040.26 , 0.000368003
23 , 2113.99 , 0.000106563
The risk measure is: 1311.12
The analysis in "myRiskCostModel" completed in 142.001 seconds.

At optimization step 10 the objective function in "myRiskOptimizationCostModel" is 1311.12
The decision variable values (before updating) are:
|    0.176962 |
The optimization gradient is currently:
|     4.40024 |
The norm in "myOptimizationConvergenceCheck" is 4.40024
The analysis in "myRiskOptimizationCostModel" completed in 1521.39 seconds after 10 iterations.
The decision variable values are:
|    0.176962 |

B.2 Nodal Optimization Output

The OPTIMIZATION analysis in "myRiskOptimizationCostModel" started...
The decision variable values initially are:
|         0.7 |
Remeshing...
The RISK analysis in "myRiskCostModel" started...
Now running the initial sampling to determine thresholds...
The random variable named "cantileverBeamPlateDensity" is missing the probability distributions tool, hence, it is creating a tool called "defaultInHouseProbabilityDistributions"
The random variable named "cantileverBeamPlateYoungsModulus" is missing the probability distributions tool, hence, it is picking up the "defaultInHouseProbabilityDistributions" that is already available.
The sampling mean-value is: 1004.92
Determining thresholds using points within 5 standard deviations from the mean.
The threshold 1 is 634.338
The threshold 2 is 668.028
The threshold 3 is 701.717
The threshold 4 is 735.406
The threshold 5 is 769.096
The threshold 6 is 802.785
The threshold 7 is 836.474
The threshold 8 is 870.163
The threshold 9 is 903.853
The threshold 10 is 937.542
The threshold 11 is 971.231
The threshold 12 is 1004.92
The threshold 13 is 1038.61
The threshold 14 is 1072.3
The threshold 15 is 1105.99
The threshold 16 is 1139.68
The threshold 17 is 1173.37
The threshold 18 is 1207.06
The threshold 19 is 1240.75
The threshold 20 is 1274.44
The threshold 21 is 1308.12
The threshold 22 is 1341.81
The threshold 23 is 1375.5
The FORM analysis in RISK at threshold "634.338" has started...
For exceedance probability curve (threshold number, threshold, probability):
1 , 634.338 , 0.999818
2 , 668.028 , 0.999231
3 , 701.717 , 0.997472
4 , 735.406 , 0.992944
5 , 769.096 , 0.983086
6 , 802.785 , 0.964475
7 , 836.474 , 0.933504
8 , 870.163 , 0.887399
9 , 903.853 , 0.825207
10 , 937.542 , 0.748344
11 , 971.231 , 0.660449
12 , 1004.92 , 0.566655
13 , 1038.61 , 0.472551
14 , 1072.3 , 0.383202
15 , 1105.99 , 0.302454
16 , 1139.68 , 0.232637
17 , 1173.37 , 0.174626
18 , 1207.06 , 0.128117
19 , 1240.75 , 0.0920114
20 , 1274.44 , 0.0647866
21 , 1308.12 , 0.0447947
22 , 1341.81 , 0.0304497
23 , 1375.5 , 0.0203791
24 , 1409.19 , 0.0134457
25 , 1442.88 , 0.00875553
26 , 1476.57 , 0.00563327
27 , 1510.26 , 0.00358474
The risk measure is: 1039.71
The analysis in "myRiskCostModel" completed in 150.531 seconds.

At optimization step 1 the objective function in "myRiskOptimizationCostModel" is 1039.71
The decision variable values (before updating) are:
|         0.7 |
The optimization gradient is currently:
|     -1262.8 |
The norm in "myOptimizationConvergenceCheck" is 1262.8
Remeshing...
Now running the initial sampling to determine thresholds...
The sampling mean-value is: 1092.86
The FORM analysis in RISK at threshold "622.284" has started...
For exceedance probability curve (threshold number, threshold, probability):
1 , 622.284 , 0.999784
2 , 665.064 , 0.998504
3 , 707.843 , 0.993338
4 , 750.622 , 0.977566
5 , 793.401 , 0.941252
6 , 836.181 , 0.874994
7 , 878.96 , 0.775951
8 , 921.739 , 0.651027
9 , 964.519 , 0.514741
10 , 1007.3 , 0.383419
11 , 1050.08 , 0.269669
12 , 1092.86 , 0.179752
13 , 1135.64 , 0.114059
14 , 1178.42 , 0.0692195
15 , 1221.19 , 0.040368
16 , 1263.97 , 0.022715
17 , 1306.75 , 0.0123839
18 , 1349.53 , 0.00656609
19 , 1392.31 , 0.00339668
20 , 1435.09 , 0.00171954
21 , 1477.87 , 0.00085415
22 , 1520.65 , 0.000417306
23 , 1563.43 , 0.000200952
The risk measure is: 978.65
The analysis in "myRiskCostModel" completed in 132.532 seconds.

At optimization step 2 the objective function in "myRiskOptimizationCostModel" is 978.65
The decision variable values (before updating) are:
|     0.82628 |
The optimization gradient is currently:
|    -707.401 |
The norm in "myOptimizationConvergenceCheck" is 707.401
Remeshing...
Now running the initial sampling to determine thresholds...
The sampling mean-value is: 884.863
The FORM analysis in RISK at threshold "543.2" has started...
For exceedance probability curve (threshold number, threshold, probability):
1 , 543.2 , 0.999998
2 , 574.261 , 0.999983
3 , 605.321 , 0.999896
4 , 636.381 , 0.999487
5 , 667.441 , 0.997976
6 , 698.501 , 0.993469
7 , 729.562 , 0.982399
8 , 760.622 , 0.959557
9 , 791.682 , 0.919249
10 , 822.742 , 0.857407
11 , 853.802 , 0.773656
12 , 884.863 , 0.672181
13 , 915.923 , 0.560819
14 , 946.983 , 0.448896
15 , 978.043 , 0.344871
16 , 1009.1 , 0.254682
17 , 1040.16 , 0.181158
18 , 1071.22 , 0.124426
19 , 1102.28 , 0.0827362
20 , 1133.34 , 0.0534026
21 , 1164.4 , 0.0335452
22 , 1195.46 , 0.0205575
23 , 1226.52 , 0.0123191
24 , 1257.58 , 0.00723492
25 , 1288.65 , 0.00417135
The risk measure is: 940.211
The analysis in "myRiskCostModel" completed in 140.086 seconds.

At optimization step 3 the objective function in "myRiskOptimizationCostModel" is 940.211
The decision variable values (before updating) are:
|    0.991536 |
The optimization gradient is currently:
|    -269.989 |
The norm in "myOptimizationConvergenceCheck" is 269.989
Remeshing...
Now running the initial sampling to determine thresholds...
The sampling mean-value is: 952.356
The FORM analysis in RISK at threshold "474.981" has started...
For exceedance probability curve (threshold number, threshold, probability):
1 , 474.981 , 1
2 , 518.379 , 1
3 , 561.777 , 0.999996
4 , 605.174 , 0.999929
5 , 648.572 , 0.999305
6 , 691.97 , 0.995564
7 , 735.367 , 0.980605
8 , 778.765 , 0.938733
9 , 822.163 , 0.853051
10 , 865.56 , 0.719508
11 , 908.958 , 0.554938
12 , 952.356 , 0.38913
13 , 995.753 , 0.24845
14 , 1039.15 , 0.145306
15 , 1082.55 , 0.0784869
16 , 1125.95 , 0.0395102
17 , 1169.34 , 0.018705
18 , 1212.74 , 0.008394
19 , 1256.14 , 0.00359881
20 , 1299.54 , 0.00148379
21 , 1342.93 , 0.000591961
22 , 1386.33 , 0.000229596
23 , 1429.73 , 8.69831e-05
The risk measure is: 929.596
The analysis in "myRiskCostModel" completed in 136.315 seconds.

At optimization step 4 the objective function in "myRiskOptimizationCostModel" is 929.596
The decision variable values (before updating) are:
|     1.12653 |
The optimization gradient is currently:
|    -49.2708 |
The norm in "myOptimizationConvergenceCheck" is 49.2708
Remeshing...
Now running the initial sampling to determine thresholds...
The sampling mean-value is: 950.429
The FORM analysis in RISK at threshold "645.637" has started...
For exceedance probability curve (threshold number, threshold, probability):
1 , 645.637 , 0.999466
2 , 673.345 , 0.998071
3 , 701.053 , 0.994249
4 , 728.762 , 0.98513
5 , 756.47 , 0.966506
6 , 784.179 , 0.933334
7 , 811.887 , 0.881292
8 , 839.595 , 0.808548
9 , 867.304 , 0.71705
10 , 895.012 , 0.612473
11 , 922.72 , 0.502873
12 , 950.429 , 0.396665
13 , 978.137 , 0.300778
14 , 1005.85 , 0.219533
15 , 1033.55 , 0.154572
16 , 1061.26 , 0.105192
17 , 1088.97 , 0.0693675
18 , 1116.68 , 0.0444355
19 , 1144.39 , 0.0277164
20 , 1172.1 , 0.0168729
21 , 1199.8 , 0.0100493
22 , 1227.51 , 0.00586534
23 , 1255.22 , 0.00336194
The risk measure is: 929.53
The analysis in "myRiskCostModel" completed in 132.779 seconds.

At optimization step 5 the objective function in "myRiskOptimizationCostModel" is 929.53
The decision variable values (before updating) are:
|     1.15117 |
The optimization gradient is currently:
|    -15.0959 |
The norm in "myOptimizationConvergenceCheck" is 15.0959
Remeshing...
Now running the initial sampling to determine thresholds...
The sampling mean-value is: 977.628
The FORM analysis in RISK at threshold "655.419" has started...
For exceedance probability curve (threshold number, threshold, probability):
1 , 655.419 , 0.999172
2 , 684.711 , 0.996975
3 , 714.002 , 0.991025
4 , 743.294 , 0.97715
5 , 772.586 , 0.949763
6 , 801.878 , 0.90307
7 , 831.169 , 0.833486
8 , 860.461 , 0.74169
9 , 889.753 , 0.633254
10 , 919.045 , 0.517304
11 , 948.336 , 0.403949
12 , 977.628 , 0.301682
13 , 1006.92 , 0.215835
14 , 1036.21 , 0.148271
15 , 1065.5 , 0.0980654
16 , 1094.8 , 0.0626141
17 , 1124.09 , 0.038723
18 , 1153.38 , 0.0232563
19 , 1182.67 , 0.0136014
20 , 1211.96 , 0.00776656
21 , 1241.25 , 0.00434014
22 , 1270.55 , 0.00237838
23 , 1299.84 , 0.00128124
The risk measure is: 929.448
The analysis in "myRiskCostModel" completed in 132.997 seconds.

At optimization step 6 the objective function in "myRiskOptimizationCostModel" is 929.448
The decision variable values (before updating) are:
|     1.15871 |
The optimization gradient is currently:
|    -5.76984 |
The norm in "myOptimizationConvergenceCheck" is 5.76984
The analysis in "myRiskOptimizationCostModel" completed in 866.808 seconds after 6 iterations.
The decision variable values are:
|     1.15871 |
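Reading the output above: at each optimization step the RISK model tabulates an exceedance probability curve (cost thresholds versus FORM exceedance probabilities) and then reports the mean cost as the risk measure. The following short Python sketch illustrates one plausible post-processing of such a curve; the function names, the even threshold grid, and the trapezoidal integration of the exceedance curve are illustrative assumptions and are not code taken from the Rts implementation.

import numpy as np

def thresholds_from_sampling(mean, std, n=23, k=5.0):
    # Hypothetical reconstruction of "Determining thresholds using points
    # within 5 standard deviations from the mean": n evenly spaced cost
    # thresholds centred on the sampling mean.  The values of n and k and
    # the spacing rule are assumptions, not read from the Rts source code.
    return np.linspace(mean - k * std, mean + k * std, n)

def trapezoid(y, x):
    # Plain trapezoidal rule, written out to avoid depending on a
    # particular NumPy version.
    y = np.asarray(y, dtype=float)
    x = np.asarray(x, dtype=float)
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * (x[1:] - x[:-1])))

def mean_cost_from_exceedance(thresholds, probabilities):
    # For a non-negative cost C, E[C] equals the integral of P(C > c) dc
    # from zero to infinity.  Below the first tabulated threshold the
    # exceedance probability is close to one, so that region contributes
    # roughly c[0]; the tabulated part of the curve is integrated with the
    # trapezoidal rule and the far tail is neglected.
    c = list(thresholds)
    p = list(probabilities)
    return c[0] * p[0] + trapezoid(p, c)

# Example: the final exceedance curve printed in the nodal output above.
c = [655.419, 684.711, 714.002, 743.294, 772.586, 801.878, 831.169, 860.461,
     889.753, 919.045, 948.336, 977.628, 1006.92, 1036.21, 1065.5, 1094.8,
     1124.09, 1153.38, 1182.67, 1211.96, 1241.25, 1270.55, 1299.84]
p = [0.999172, 0.996975, 0.991025, 0.97715, 0.949763, 0.90307, 0.833486,
     0.74169, 0.633254, 0.517304, 0.403949, 0.301682, 0.215835, 0.148271,
     0.0980654, 0.0626141, 0.038723, 0.0232563, 0.0136014, 0.00776656,
     0.00434014, 0.00237838, 0.00128124]
print(mean_cost_from_exceedance(c, p))

Applied to that final curve, the sketch returns roughly 929, consistent with the reported risk measure of 929.448; the small difference comes from the neglected tail and from the region below the first threshold, where the exceedance probability is slightly less than one.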
