A CONTRIBUTION TO INNOVATIVE METHODS FOR ENHANCING THE ECONOMY OF STEEL STRUCTURES ENGINEERING

by

VIGNESH RAMADHAS

Bachelor of Engineering, PSG College of Technology, 2003

A THESIS SUBMITTED IN PARTIAL FULFILMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF APPLIED SCIENCE in THE FACULTY OF GRADUATE STUDIES (Civil Engineering)

THE UNIVERSITY OF BRITISH COLUMBIA

August 2005

© Vignesh Ramadhas

ABSTRACT

An engineer in everyday practice, in addition to the tasks of analyzing, designing, fabricating and erecting structures, is faced with the exigent task of decision making. The process of making decisions becomes even more challenging in the presence of uncertainties. Nevertheless, uncertainties are inevitable in all forms of engineering, and making efficient decisions in their presence is a critical skill that engineers as decision makers must develop. It is well known that decisions made in the early stages of a project have greater economic implications than decisions made in the later stages, when changes become more expensive and difficult to make. However, a high level of uncertainty is associated with the early or conceptual stages of a project. In such stages, while making decisions, the engineer relies on experience when challenged with problems similar to those encountered previously and, when faced with uncharted situations, falls back on engineering judgment and first principles. Since liability is a major concern, it is extremely important that the engineer support his decisions with proven decision making theories and analyses rather than with the bare tag of "experience". Theories for decision making are already well recognized in fields such as business and economics, and several methods, tools and software packages are now available to apply decision making theory in everyday practice.
This thesis introduces several theories and criteria that can be used to make rational decisions in situations of risk, uncertainty and incomplete knowledge. The practical implementation of these theories with the help of tools such as decision trees, influence diagrams, Monte Carlo simulation, sensitivity analysis and expected monetary value (EMV) is also discussed. It is felt that a proper combination of theories and analytical techniques can greatly rationalize the decision making process and improve the quality of the decisions made. The focus of this report is on using the decision making tools mentioned above to enhance the economy of steel structures. Illustrative examples are provided, and two case studies demonstrate the application of the ideas discussed in real-world problems and projects.

TABLE OF CONTENTS

ABSTRACT ii
TABLE OF CONTENTS iv
LIST OF TABLES x
LIST OF FIGURES xi
ACKNOWLEDGEMENTS xiv
1. Introduction 1
1.1 Problem Overview 1
1.2 Research Objectives 4
1.3 Thesis Layout 5
2. The Science of Decision Making and Decision Theory 6
2.1 Introduction 6
2.2 Basic Assumptions 7
2.3 Formulation and Framework of the Decision Problem 9
2.4 Classification of Decision Theory 12
2.4.1 Normative and Descriptive Classification 12
2.4.2 Certainty, risk and uncertainty 14
2.4.3 Single Person Decision Making and Group Decision Making 14
2.5 Decision Making under Risk, Uncertainty and Incomplete Knowledge 16
2.6 Decision Making under Risk 18
2.6 Decision Making under Uncertainty 21
2.6.1 Introduction 21
2.6.2 The Maximin Criterion 21
2.6.3 The Maximax Criterion 22
2.6.4 The Hurwicz Criterion 23
2.6.5 The Minimax Regret Criterion 24
2.6.6 The Bayes-Laplace Criterion 25
2.7 Decision Making under Incomplete Knowledge 27
2.7.1 Introduction 27
2.7.2 Fishburn's Theorem 27
2.7.3 Cannon and Kmietowicz's Proof for Fishburn's Theorem 28
2.7.4 Drawbacks of Fishburn's Theorem 29
2.7.5 Maximizing and Minimizing Expected Values 29
2.7.6 Numerical Illustration 31
2.8 Concluding Remarks 33
3. Tools to Aid in the Decision Making Process 34
3.1 Introduction 34
3.2 Elementary Data Analysis 34
3.2.1 Introduction 34
3.2.2 Graphical Tools 36
3.2.3 Numeric Representation of Data 40
3.3 Probability 42
3.3.1 Introduction 42
3.3.2 Axioms of Probability 43
3.3.3 Elementary Rules of Probability 43
3.4 Representing Uncertainty using Probability Distributions 46
3.4.1 Uncertainty in Engineering 46
3.4.2 Probability Distributions 46
3.4.3 Illustrative Example 47
3.5 Monte Carlo Simulation 49
3.5.1 Introduction 49
3.5.2 Illustrative Example 50
3.6 Sensitivity Analysis 52
3.6.1 Introduction 52
3.6.2 Illustrative Example 52
3.7 Expected Value 55
3.7.1 Introduction 55
3.7.2 Illustrative Example 56
3.7.2 St. Petersburg Paradox 57
3.8 Utility Theory 59
3.8.1 Introduction 59
3.8.2 Deriving the Utility of a Payoff 60
3.8.3 Risk 60
3.9 Decision Trees 63
3.9.1 Introduction 63
3.9.2 Types of nodes 64
3.9.3 Building a Decision Tree 65
3.9.4 Limitations of decision trees 70
3.10 Influence Diagrams 71
3.10.1 Introduction 71
3.10.2 Illustrative Example 71
4. Decision Making in Business and Economics 74
4.1 Introduction 74
4.2 Real Estate Example 74
4.3 The Gerber-Phthalates Controversy 75
5. Application of Decision Science to Steel Structures Engineering - An Introduction to Case Studies 78
6. Case Study 1: The Atacama Cosmology Telescope (ACT) 80
6.1 Introduction 80
6.2 Cosmology and the Cosmic Microwave Background (CMB) 80
6.3 The Atacama Cosmology Telescope (ACT) 84
6.4 Major Components of the Atacama Cosmology Telescope 84
6.5 Important Considerations 86
6.6 Accuracy 86
6.6.1 Pointing Error 87
6.6.2 Surface Accuracy 89
6.7 Cost 92
6.7.1 Introduction 92
6.7.2 Model Construction 93
6.7.3 Component Model 94
6.7.4 Efficiency of Scenario 97
6.8 Conclusions 99
7. Case Study 2: Protection of Steel Bridges in British Columbia against Corrosion using Probabilistic Methods 100
7.1 Introduction 100
7.2 Corrosion and Corrosion Rating 102
7.3 Climatic Diversity of British Columbia 103
7.4 Maintenance Strategies 104
7.4.1 Introduction 104
7.4.2 Touch Up or Spot Repair 104
7.4.3 Overcoat 105
7.4.4 Recoat 105
7.5 Selection of Strategy 107
7.6 Problem Statement 108
7.7 Performance Criteria 109
7.7.1 Cost 109
7.7.2 Aesthetics 109
7.8 Input Parameters 110
7.9 Proposed Model 111
7.9.1 Cost Component 111
7.9.2 Corrosion Rating Component 111
7.10 Implementation 113
7.11 Demonstration of Capabilities of Model 114
7.11.1 Most Economic Strategy 114
7.11.2 Strategy that ensures Highest Corrosion Rating 115
7.12 Conclusions and Recommendations 117
8. Conclusions 119
9. Future Developments 121
BIBLIOGRAPHY 123
APPENDIX A 126
APPENDIX B 131
APPENDIX C 134
APPENDIX D 154
APPENDIX E 166

LIST OF TABLES

Table 1-1: Comparison of different options using performance and weight factors 2
Table 2-1: Payoff Matrix 11
Table 2-2: Comparison of normative, prescriptive and descriptive branches of decision theory 13
Table 2-3: Payoff matrix for three design options 16
Table 2-4: Decision making under risk 18
Table 2-5: Maximin Criterion 21
Table 2-6: Maximax Criterion 22
Table 2-7: Hurwicz criterion 23
Table 2-8: Minimax regret criterion 25
Table 2-9: Maximization of expected values - Numerical illustration 31
Table 2-10: Maximization of expected values - Numerical illustration (Weak Ranking) 31
Table 2-11: Maximization of expected values - Numerical illustration (Strong Ranking) 32
Table 3-1: Live loads in a steel building measured during different times 35
Table 3-2: St. Petersburg paradox 57
Table 3-3: Bernoulli's solution to the St. Petersburg Paradox 58
Table 6-1: Pointing Error 87
Table 6-2: Surface accuracy table 90
Table 7-1: Input Distributions 110

LIST OF FIGURES

Figure 1.1: Variation of uncertainty and economic implications of decisions made at various stages in a project 1
Figure 2.1: Decision making under risk 19
Figure 2.2: Bayes-Laplace criterion 26
Figure 3.1: Histogram from a set of live load measurements 36
Figure 3.2: Frequency diagram for a set of live load measurements 37
Figure 3.3: Cumulative frequency diagram for a set of live load measurements 38
Figure 3.4: Scatter diagram for a set of live load measurements 39
Figure 3.5: Probability Distributions - Illustrative Example 48
Figure 3.6: Monte Carlo simulation - Illustrative Example 50
Figure 3.7: CDF - Illustrative Example 51
Figure 3.8: Sensitivity Analysis - Illustrative Example - Part 1 52
Figure 3.9: Sensitivity Analysis - Illustrative Example - Part 2 53
Figure 3.10: Sensitivity Graph 53
Figure 3.11: Input Sensitivity Table 54
Figure 3.12: Expected Value - Illustrative Example 56
Figure 3.13: Defining Risk 61
Figure 3.14: Utility Functions [28] 61
Figure 3.15: Risk averse utility 62
Figure 3.16: Risk seeking utility 62
Figure 3.17: Decision Tree - Representation 1 63
Figure 3.18: Decision Tree - Representation 2 64
Figure 3.19: Decision Node 64
Figure 3.20: Chance node 65
Figure 3.21: Building a decision tree - 1 66
Figure 3.22: Building a decision tree - 2 67
Figure 3.23: Building a decision tree - 3 68
Figure 3.24: Building a decision tree - 4 69
Figure 3.25: Representation using Decision Tree 72
Figure 3.26: Representation using Influence Diagram 72
Figure 4.1: Real Estate Example 75
Figure 4.2: Gerber-Phthalates Controversy 76
Figure 6.1: Map of temperature anisotropy in the CMB by COBE 82
Figure 6.2: Map of temperature anisotropy in the CMB by WMAP 82
Figure 6.3: Components of the Atacama Cosmology Telescope 85
Figure 6.4: Distribution of pointing error 89
Figure 6.5: Distribution of surface accuracy 91
Figure 6.6: Model Construction - Steps 93
Figure 6.7: Panel Support: Decision Tree (Design) 95
Figure 6.8: Panel Support: Decision Tree (Material Procurement) 95
Figure 6.9: Panel Support: Decision Tree (Fabrication) 96
Figure 6.10: Panel Support: Decision Tree (Most favorable scenario) 97
Figure 6.11: Primary reflector panel: Decision Tree (Design) 98
Figure 7.1: Problem Statement 108
Figure 7.2: Comparison of Equivalent Annual Costs of strategies 115
Figure 7.3: Comparison of Average Corrosion rating of strategies

ACKNOWLEDGEMENTS

I would like to thank my supervisor, Dr. Siegfried F. Stiemer, for the technical advice and support that he extended to me during the course of this project. He gave me extensive freedom to pursue my research interest but was always available to provide his valuable suggestions when I needed help. My acknowledgements are due to the Steel Structures Education Foundation for the financial support they provided for this project. I would like to thank Kevin Baskin, Chief Bridge Engineer, and Sharlie Huffman, Bridge Seismic Engineer, of the Ministry of Transportation of British Columbia for their interest in this project. My thanks are also due to AMEC Dynamic Structures for sharing with me valuable information pertaining to the Atacama Cosmology Telescope project. I would also like to acknowledge my colleague, Phyllis Chan, for providing me with information that she collected during the course of her thesis. I would finally like to thank my parents for the unconditional love, support and encouragement they have provided me in all my endeavors.

1. Introduction

1.1 Problem Overview

Uncertainty and the risk associated with it are present in all phases of engineering, and making decisions while accounting for uncertainties is a key skill that an engineer in his role as a decision maker must hone.
Though uncertainty is present in all stages of engineering, its degree varies from one stage to another. The uncertainty associated with the earlier stages of a project is much higher than that associated with the later stages, by which time the concepts of the design have taken shape and become more deep-rooted (shown qualitatively in Figure 1.1). Conversely, the economic benefits of making decisions in the earlier stages of a project are far superior to those of making decisions in the later stages, when changes are more difficult and expensive to make (shown qualitatively in Figure 1.1).

[Figure 1.1: Variation of uncertainty and economic implications of decisions made at various stages in a project]

When faced with the task of making decisions in uncertain circumstances, an engineer relies on experience and refers to previous cases in which similar situations were encountered. He then sets out his reasons for choosing one option over another, very often in the form of a written argument in his project report. Some engineers make use of tables with weight factors (Table 1-1) to compare different options [13]. Though such tables are a quick and easy way for engineers to justify their choice of one option over another, the highly subjective nature of the "Weight Factors" and the "Performance Factors" used in these tables is an issue that has to be addressed.
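The weighted totals in such a table are obtained by multiplying each option's performance factor by the weight factor of the corresponding discriminator and summing. A minimal sketch in Python, using the scores and weights of Table 1-1 (variable and function names are my own):

```python
# Weighted-total comparison of design options, as in Table 1-1.
# Performance factors: +1 = Good, 0 = Satisfactory, -1 = Poor.
# Weight factors: 1 = Not important ... 4 = Absolutely Critical.

weights = {"Weight": 1, "Accuracy": 4, "Cost": 3}

performance = {
    "Option 1": {"Weight": 0, "Accuracy": +1, "Cost": 0},
    "Option 2": {"Weight": +1, "Accuracy": -1, "Cost": +1},
    "Option 3": {"Weight": +1, "Accuracy": 0, "Cost": -1},
}

def weighted_total(scores: dict, weights: dict) -> int:
    """Sum of performance factor x weight factor over all discriminators."""
    return sum(weights[d] * s for d, s in scores.items())

totals = {opt: weighted_total(scores, weights) for opt, scores in performance.items()}
print(totals)  # {'Option 1': 4, 'Option 2': 0, 'Option 3': -2}
```

The subjectivity concern raised above applies directly to the inputs of this computation: both dictionaries are judgment calls rather than measured quantities.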
Table 1-1: Comparison of different options using performance and weight factors

Discriminator     Option 1   Option 2   Option 3   Weight Factor on Discriminator
Weight                0         +1         +1         1
Accuracy             +1         -1          0         4
Cost                  0         +1         -1         3
Weighted Total       +4          0         -2

Weight Factors: 1 = Not important, 2 = Important, 3 = Very Important, 4 = Absolutely Critical
Performance Factors: +1 = Good, 0 = Satisfactory, -1 = Poor

With the introduction of stringent legislation and increasing emphasis on liability, it is only logical that engineers back up their decisions with proven decision making theories and analyses rather than with subjective information and past experience alone. There is therefore a need for methods, tools and software which will advance the decision making process and help the engineer support his decisions with sound theories and numbers.

1.2 Research Objectives

In order to address the issue discussed in the previous section, a research project funded by the Steel Structures Education Foundation was initiated at the University of British Columbia. The project aims at demonstrating, by explicit examples, a practical, rational and modern approach to decision making in the steel industry. Its main objectives may be summarized as follows:

• To conduct a comprehensive study of the fundamental principles of rational decision making and to provide an overview of common decision making theories applicable in situations of risk, uncertainty and incomplete knowledge.

• To complement the study of rational decision making by describing mathematical tools and techniques which can serve as aids in the decision making process.

• To demonstrate, with the help of key examples, the economic benefits of using decision science in steel structures engineering and the ease with which this can be done by using the analytical tools described.
• To develop case studies that will show the everyday application of decision making tools and serve as guidance for structural engineers when faced with the task of decision making.

1.3 Thesis Layout

An introduction to decision science and a description of common decision making theories is provided in Chapter 2 of this thesis. Although various classifications are discussed, the classification of theories based on decision making under risk, uncertainty and incomplete knowledge is explored in detail. Chapter 3 provides an overview of several mathematical tools such as Monte Carlo simulation, expected value, utility, decision trees and probability distributions. A brief account of decision making in economics and business is provided in Chapter 4. Chapter 5 introduces the two case studies that were carried out. A report on how decision tree models were used during the developmental stages of the Atacama Cosmology Telescope is provided in Chapter 6. Chapter 7 elaborates on the application of decision making tools in the maintenance of steel bridges in British Columbia. Finally, the findings of this project are presented in Chapter 8, and the scope for future study is outlined in Chapter 9.

2. The Science of Decision Making and Decision Theory

2.1 Introduction

The science of decision making, or decision science, is the study of how decision makers choose from a set of alternative courses of action. Though decision science has evolved into a specialized area of study in itself, it is basically an interdisciplinary field related to mathematics, statistics, economics, psychology, philosophy and management. Decision theory deals with the development of analytical techniques for guiding the choice of a single course of action from among a series of alternatives, in order to accomplish a designated goal [1].
The idea behind decision theory is to supplement intuition with the logic of mathematical methods and theories so that decision makers can assess the outcome of decisions and use the results from computations to back up their choices. Decision theory can by no means be a complete replacement for intuition, but it is a way to cut down on total empiricism by the use of mathematical logic [2]. In short, decision theory is a tool which can be used to put valuable intuition to better use, i.e. to make more informed decisions.

2.2 Basic Assumptions

Like all theories, decision theory is based on some basic assumptions, which must be satisfied for the theory to hold. The following are some of the prominent assumptions of decision theory as stated by Anatol Rapoport (1989) and by Z.W. Kmietowicz and A.D. Pearman (1981):

• The fact that the decision maker has the freedom to choose between various alternatives is the primary assumption of decision theory, for without freedom of choice the question of decision making would not arise.

• It is also assumed that there are consequences associated with each available alternative or decision and that the decision maker has a preference among these consequences.

• Even if there are an infinite number of possible alternatives, the decision maker can, by some process, eliminate those strategies that are not relevant to the decision situation or are inferior to the alternatives being retained.

• As in the case of alternatives, even if there are infinite possible states of nature, the decision maker can narrow them down to a finite number based on their relevance to the decision situation.

• Another important assumption is that the states of nature are not influenced by the decision maker's choice of alternatives, i.e. nature neither favors nor hinders the decision maker consciously.
2.3 Formulation and Framework of the Decision Problem

Before discussing the different decision theories and their classification, it is worthwhile to introduce the traditional idea of a decision problem. Z.W. Kmietowicz and A.D. Pearman (1981) describe the decision making process as a "sequential series of component problems". They further devise a framework which may be used to represent the major steps in decision making. This framework may be outlined as follows:

1. Identification of an array Ai (i = 1, 2, 3, ..., n), which consists of all mutually exclusive and collectively exhaustive alternative strategies that the decision maker may adopt in the given situation.

2. Identification of an array Sj (j = 1, 2, 3, ..., m), which consists of all mutually exclusive and collectively exhaustive states of nature within which each of the alternatives will operate.

3. Prediction and evaluation of outcomes Xij for all possible combinations of alternatives and states of nature, where Xij corresponds to the outcome of strategy Ai when the state of nature is Sj.

4. Allocation of a probability Pj to each of the states of nature, thereby forming a probability array.

5. Evaluation of each alternative Ai based on a selection criterion and arrival at the best possible alternative.

However, it should be noted that the steps described above may not be sufficient for all decision making problems. The framework may serve only as a reference describing the major components of the decision making process. The formulation of a decision problem is best illustrated with an example. Consider the following situation in a steel fabricating plant:

Example 2: A steel fabricator has accepted the task of manufacturing the railings for a roller coaster.
While studying the details of the project, the engineer in charge finds that one particular run of the railings has a unique arrangement. Such a component has not been manufactured by the company before, so he does not know how simple or complex the fabricating process will be. The engineer now has to decide whether he can achieve his task with regular fabrication machines (which would be economical) or whether he has to use CNC machines (which would cost the company more). For this example, let us formulate the decision problem by following the framework discussed above.

• Identification of all relevant alternatives constitutes the first part of a decision problem. In this example, the engineer has two alternatives: using regular machines or using CNC machines.

• Identification of all possible states of nature forms the next step. In this case the states of nature are success in fabricating the component and failure in fabricating the component.

• Formulation of a payoff matrix, attaching a payoff value to each outcome for all combinations of alternatives and states of nature, forms the next step. For now, let us assume that the payoffs are simple monetary values (the concept of utility is discussed later).

Table 2-1: Payoff Matrix

Available Alternatives    Success in Fabrication        Failure in Fabrication
Regular Machines          Payoff (Regular, Success)     Payoff (Regular, Failure)
CNC Machines              Payoff (CNC, Success)         Payoff (CNC, Failure)

• The probability of occurrence of these states of nature can be obtained either subjectively or from previously available data.

In this way the decision problem for any situation can be set up. The methods available to solve such problems are discussed in subsequent sections.

2.4 Classification of Decision Theory

As mentioned previously, decision theory essentially consists of analytical techniques and procedures which help choose the best alternative from a range of choices.
In the past, several such techniques and procedures have been proposed and developed, and they can be classified in a number of ways. In this chapter the following three classifications are explored:

1. Normative and descriptive classification: This classification is based on the issue the theory addresses, namely whether the theory deals with how decisions are made or how decisions should be made.

2. Certainty, risk and uncertainty: In this classification, theories are distinguished by the amount of information they assume is available about the probability of occurrence of the states of nature.

3. Single person decision making and multiple-person or group decision making: As the name suggests, this classification is based on whether the decision situation involves just one decision maker or whether a number of people interact in the decision making process.

These classifications are discussed in more detail in the following sections.

2.4.1 Normative and Descriptive Classification

The classification of theories into normative decision theories and descriptive decision theories is based on the primary issue or question that they address. The normative branch of decision theory is concerned with identifying the best decision to take, assuming an ideal decision maker who is fully informed, able to compute with perfect accuracy, and fully rational [23]. However, people are not completely logical when making decisions. The descriptive branch of decision theory starts with observations of how decision makers choose in given classes of decision situations and attempts to describe their behaviour as systematically as possible. Thus a descriptive decision theory is essentially inductive [3]. Simply put, descriptive theories attempt to explain how decisions are made, while normative theories explain how decisions ought to be made.
A comparison between the solutions suggested by the normative approach and the descriptive approach helps the decision maker understand and weed out inconsistencies in his decision making behaviour. David E. Bell, Howard Raiffa and Amos Tversky [4] discuss a third classification of decision theories, called the prescriptive branch, which consists of theories that help and train people to make good decisions. A distinction between the normative branch and the prescriptive branch is not obvious, and in the literature such a distinction is often not made. The main characteristics of the three branches of decision theory discussed above are summarized in Table 2-2.

Table 2-2: Comparison of normative, prescriptive and descriptive branches of decision theory

Normative Branch: analytical techniques that are consistent with logical decision making; how decisions should be made.
Descriptive Branch: decisions made by the "irrational" decision maker; how decisions are actually made.
Prescriptive Branch: helps the decision maker make better decisions; how one can make better decisions.

2.4.2 Certainty, risk and uncertainty

This classification of theories, based on the amount of information that is assumed to be available about the probability of occurrence of states of nature, is a major distinction between theories. If the decision maker is sure about the state of nature, i.e. the outcome corresponding to each of his alternatives, a choice between alternatives is essentially a choice between definite outcomes. Such problems of decision making in the presence of certainty are not strictly considered a part of decision theory, but such situations are encountered and therefore deserve a mention. Theories which assume that the decision maker can specify precisely the probability of occurrence of states of nature, whether subjectively, from experimental results or by some other means, fall under the category of decision making in the presence of risk.
These theories assume that the decision maker can fully specify the values of the probability array (discussed previously in section 2.3). Theories which assume that the decision maker has no information about the probability of occurrence of states of nature may be classified as decision making in the presence of uncertainty. Such theories assume that the decision maker is working under complete uncertainty; in other words, he cannot specify any of the components of the probability array. Theories whose assumptions lie between the extremes of decision making under risk and decision making under uncertainty constitute decision making under conditions of incomplete knowledge.

2.4.3 Single Person Decision Making and Group Decision Making

This classification distinguishes between decision situations that involve one decision maker and situations which involve more than one. In single person decision making, the components of the decision problem revolve around a single person: the possible choices are the alternatives available to one person, and the outcomes are consequences that affect the interests of one person. In group decision making or multi-person decision making, however, the attention shifts from a single person to the various members of the group. In such situations, each person has to consider not only the alternatives available to him but also the points of view of the other members or "players" in the group. The branch of decision theory which deals with decision situations involving more than one person is called the theory of games. The theory of games is a vast topic in itself; in this thesis the emphasis is on single person decision making, and hence game theory has not been explored.
2.5 Decision Making under Risk, Uncertainty and Incomplete Knowledge

Although a discussion of decision theories from the normative and descriptive perspective, or from the single and group decision making perspective, would be interesting, in this thesis an account of decision theories from a "risk-uncertainty-incomplete knowledge" standpoint is adopted, because it is felt that problems encountered in everyday engineering practice can be better understood in this way. In the following sections, some classical methods that may be used to deal with situations of risk, uncertainty and incomplete knowledge are discussed. Consider the following example:

Example 3: While designing the crane girder for a harbor storage facility, the chief engineer of a design firm has to choose between three design options. While choosing an alternative, the chief engineer has to bear in mind that the design he chooses should be feasible in the storage facility in question. Let us assume that by some means the chief engineer arrives at the payoff or outcome associated with each combination of design option and state of nature. The problem faced by the engineer can be represented by the following payoff matrix.

Table 2-3: Payoff matrix for three design options

Available Alternatives    Feasible                Not Feasible
Design Option 1           Payoff (1, Feasible)    Payoff (1, Not Feasible)
Design Option 2           Payoff (2, Feasible)    Payoff (2, Not Feasible)
Design Option 3           Payoff (3, Feasible)    Payoff (3, Not Feasible)

Let us explore how the chief engineer can choose between the three design options depending upon how much he knows about the probability of occurrence of the states of nature. While discussing this issue it will be easier if we assign a numeric value (the profit or loss incurred) to each of the payoffs.
Payoff (1, Feasible) = 100,000 $     Payoff (1, Not Feasible) = -50,000 $
Payoff (2, Feasible) = 150,000 $     Payoff (2, Not Feasible) = -100,000 $
Payoff (3, Feasible) = 200,000 $     Payoff (3, Not Feasible) = -150,000 $

2.6 Decision Making under Risk

Decision making under risk assumes that the decision maker can completely specify the probability of occurrence of states of nature. These probabilities may be the subjective information of the decision maker, or may be derived from experimental results, previous experience or some other means. Once these probabilities have been established, the expected payoff (the concepts of expected value and utility are explained in detail in Chapter 3) for each alternative can be calculated, which can serve as a common index for comparing the alternatives. Let us now apply the concept of decision making under risk to the example discussed in the previous section. After studying the three designs, the chief engineer comes up with subjective probabilities for the feasibility or non-feasibility of each of the design options. Using subjective probabilities is very often the case in engineering practice, since experimentation is not practical. The following are the probability values that the chief engineer thinks are reasonable:

Table 2-4: Decision making under risk

Available Alternatives    Feasible    Not Feasible
Design Option 1           50%         50%
Design Option 2           20%         80%
Design Option 3           70%         30%

Using the payoff values and the subjective probability values, the expected payoff for each of the alternatives can be determined as shown.
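As a minimal sketch (names are my own), the expected payoff of each option is its probability-weighted average payoff over the states of nature, using the payoff and probability values given above:

```python
# Expected monetary value (EMV) of each design option:
# EMV = P(Feasible) * Payoff(Feasible) + P(Not Feasible) * Payoff(Not Feasible)

payoffs = {  # profit (+) or loss (-) in $
    "Design Option 1": {"Feasible": 100_000, "Not Feasible": -50_000},
    "Design Option 2": {"Feasible": 150_000, "Not Feasible": -100_000},
    "Design Option 3": {"Feasible": 200_000, "Not Feasible": -150_000},
}

probabilities = {  # the chief engineer's subjective probabilities (Table 2-4)
    "Design Option 1": {"Feasible": 0.5, "Not Feasible": 0.5},
    "Design Option 2": {"Feasible": 0.2, "Not Feasible": 0.8},
    "Design Option 3": {"Feasible": 0.7, "Not Feasible": 0.3},
}

def emv(option: str) -> float:
    """Probability-weighted average payoff of an option."""
    return sum(probabilities[option][state] * payoff
               for state, payoff in payoffs[option].items())

for option in payoffs:
    print(option, round(emv(option)))  # 25000, -50000, 95000

best = max(payoffs, key=emv)  # "Design Option 3"
```

This is the same computation that Figure 2.1 presents as a decision tree; the rounding merely tidies floating-point noise and does not affect the ranking.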
[Figure 2.1: Decision making under risk. Decision tree showing expected payoffs of 25,000 $ for Design Option 1, -50,000 $ for Design Option 2 and 95,000 $ for Design Option 3.]

Since the payoffs are expressed in terms of profit obtained, the alternative with the maximum expected value is the most favorable one, and hence in this case Design Option 3 would be the best choice. This method of decision making does not exclude the possibility that in some circumstances the chosen decision may not turn out to be the best alternative and that some other alternative would have been a better choice. It simply implies that if the problem were repeated a large number of times, Design Option 3 would be the best strategy. A major advantage of this method is that it makes use of both the probabilities of occurrence of the states of nature and the estimated payoffs or outcomes. The selection of an alternative on the basis of expected payoff may not be suitable in cases where the decision being made is unique and of extreme importance, because in such situations an unfavorable state of nature would result in extremely bad consequences. Furthermore, the subjective nature of the probabilities used raises the question of how well informed the decision maker has to be to churn out such numbers. These subjective probabilities are also highly dependent on the decision maker's perception of risk. Because of these disadvantages, a decision maker may be better off assuming complete ignorance of the probability of occurrence of states of nature. This is further discussed in the next section.

2.6 Decision Making under Uncertainty

2.6.1 Introduction

Very often in engineering, projects are unique and previous experience is of little help when it comes to choosing between alternatives; the decision maker has only a vague idea of the probability of occurrence of states of nature.
Decision making under such circumstances is termed decision making under uncertainty. Several criteria have been proposed to aid decision making in such situations. Some of these criteria are discussed in the following sections.

2.6.2 The Maximin Criterion

As the word MAXIMIN suggests, this criterion involves choosing the maximum value from a set of minimum payoffs. In other words, the alternative with the "best possible worst outcome" is chosen. In the payoff table from the previous example, the "Not Feasible" state of nature yields the worst possible outcomes for each of the alternatives.

Table 2-5: Maximin Criterion

                         States of Nature
Available Alternatives   Feasible      Not Feasible
Design Option 1          100,000 $     -50,000 $
Design Option 2          150,000 $     -100,000 $
Design Option 3          200,000 $     -150,000 $

The maximum payoff value among these unfavorable outcomes corresponds to that of Design Option 1. Hence, as per the maximin criterion, Design Option 1 is the best alternative. The maximin criterion is based on the point of view of a pessimistic decision maker, since such a selection criterion ensures that even in unfavorable states of nature the chosen alternative provides a known minimum payoff.

2.6.3 The Maximax Criterion

The maximax criterion is very similar to the maximin criterion except that the maximum value from a set of maximum payoffs is chosen. As opposed to the maximin criterion, which is based on a pessimistic point of view, the maximax criterion is based on an optimistic viewpoint. The alternative with the "best possible best outcome" is selected. In the payoff table from the previous example, the "Feasible" state of nature yields the best possible outcomes for each of the alternatives.
Table 2-6: Maximax Criterion

                         States of Nature
Available Alternatives   Feasible      Not Feasible
Design Option 1          100,000 $     -50,000 $
Design Option 2          150,000 $     -100,000 $
Design Option 3          200,000 $     -150,000 $

The maximum payoff value among these favorable outcomes corresponds to that of Design Option 3. Hence, as per the maximax criterion, Design Option 3 is the best alternative. This way of selecting alternatives reflects the frame of mind of a gambler who is lured by the positive benefits of a game.

2.6.4 The Hurwicz Criterion

The Hurwicz criterion tries to strike a balance between the maximin and the maximax criteria. The maximum payoff of each strategy is multiplied by a factor "a" and the minimum payoff of each strategy is multiplied by a factor "1 - a", and the two values are added up. The factor "a" is called the coefficient of optimism. Hence, as per the Hurwicz criterion, the payoff for a strategy would be:

Payoff = MaximumPayoff x a + MinimumPayoff x (1 - a)

where a is the coefficient of optimism and 0 ≤ a ≤ 1. The strategy with the highest payoff (calculated using the above equation) would be the best alternative according to the Hurwicz criterion. When the value of a is 1 and 0, the Hurwicz criterion reduces to the maximax criterion and the maximin criterion respectively. Let us now apply the Hurwicz criterion to the example under discussion, assuming the coefficient of optimism to be 0.6.

Table 2-7: Hurwicz criterion

Available Alternatives   Maximum Payoff   Minimum Payoff   Hurwicz Criterion Payoff (a = 0.6)
Design Option 1          100,000 $        -50,000 $        40,000 $
Design Option 2          150,000 $        -100,000 $       50,000 $
Design Option 3          200,000 $        -150,000 $       60,000 $

For a coefficient of optimism of 0.6, the Hurwicz criterion suggests that the best alternative would be Design Option 3.

2.6.5 The Minimax Regret Criterion

This criterion aims at minimizing the opportunity loss or "regret".
Regret may be defined as the difference between the payoff of a particular strategy corresponding to a certain state of nature and the maximum payoff that could have been obtained in that state of nature had the "right" decision been made. Regret is mathematically defined as:

R(i, j) = MaxPayoff(j) - Payoff(i, j)

where R(i, j) is the regret associated with alternative i corresponding to state of nature j; MaxPayoff(j) is the maximum possible payoff that can be obtained corresponding to the j-th state of nature had the right decision been made; and Payoff(i, j) is the payoff obtained by choosing alternative i when the state of nature that occurred was j. A "loss table" is constructed with the calculated regret values corresponding to each payoff. The maximum regret for each of the alternatives is found, and the alternative with the least maximum regret is the best alternative according to the minimax regret criterion. Now let us solve the example using the minimax regret criterion. The loss table corresponding to the payoff values is shown in Table 2-8. It is worth mentioning that the regret corresponding to the maximum payoff in any state is zero; in other words, had that alternative been chosen there would have been no loss in opportunity cost and hence no regret.

Table 2-8: Minimax regret criterion

                         Regret Values
Available Alternatives   Feasible   Not Feasible
Design Option 1          100,000    0
Design Option 2          50,000     50,000
Design Option 3          0          100,000

The maximum regret values of the alternatives are 100,000 for Design Option 1, 50,000 for Design Option 2 and 100,000 for Design Option 3. The strategy with the least maximum regret is Design Option 2, and so according to the minimax regret criterion it is the best alternative.

2.6.6 The Bayes-Laplace Criterion

All the criteria for decision making under uncertainty discussed so far assume complete ignorance of the probability of occurrence of the states of nature. The Bayes-Laplace criterion, however, proposes a way to arrive at these probability values.
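Before moving on, the four complete-ignorance criteria discussed so far (maximin, maximax, Hurwicz and minimax regret) can be collected in a short script; a minimal Python sketch applied to the payoff table of the running example:

```python
# Decision criteria under uncertainty for the three design options.
# Each row holds the payoffs (Feasible, Not Feasible) in $.
options = {
    "Option 1": [100_000, -50_000],
    "Option 2": [150_000, -100_000],
    "Option 3": [200_000, -150_000],
}

maximin = max(options, key=lambda o: min(options[o]))  # best worst case
maximax = max(options, key=lambda o: max(options[o]))  # best best case

def hurwicz(o, a=0.6):  # a = coefficient of optimism
    return a * max(options[o]) + (1 - a) * min(options[o])

best_hurwicz = max(options, key=hurwicz)

# Minimax regret: regret(i, j) = column maximum - payoff(i, j)
col_max = [max(row[j] for row in options.values()) for j in range(2)]
regret = {o: [col_max[j] - row[j] for j in range(2)]
          for o, row in options.items()}
best_regret = min(options, key=lambda o: max(regret[o]))

print(maximin, maximax, best_hurwicz, best_regret)
```

With the example payoffs the script reproduces the selections found above: Option 1 (maximin), Option 3 (maximax and Hurwicz with a = 0.6) and Option 2 (minimax regret).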
The Bayes-Laplace criterion makes use of the principle of insufficient reason. The principle of insufficient reason states that if we are ignorant of the ways an event can occur (and therefore have no reason to believe that one way will occur preferentially compared to another), the event will occur equally likely in any way [24]. Therefore all our states of nature are equally likely to occur. Hence, if we have "n" states of nature, the probability of occurrence of each state of nature is "1/n". According to the Bayes-Laplace criterion, the alternative that has the maximum expected value with these probability values is the best alternative.

Figure 2.2: Bayes-Laplace criterion (decision tree with 50% probabilities for each state of nature; the expected payoff of each of the three design options is 25,000 $)

In the example that we have been discussing, there are only two possible states of nature. Hence the probability of occurrence of each state of nature is 50%. Using these probabilities, the expected value of each of the alternatives is calculated (shown in Figure 2.2). For the example chosen, it so happens that the expected values for all the options using the equal likelihood criterion are the same, and hence any of the options can be chosen. Decision making under uncertainty has some obvious shortcomings. The dependence of most of these criteria on the extreme payoff values of the alternatives is questionable, especially since the probability of occurrence of all the intermediate payoffs put together is very often greater than the probability of occurrence of the extreme payoffs. Another major disadvantage of the complete ignorance criteria is the assumption that the decision maker has no knowledge about the probability of occurrence of the states of nature.
However, in many cases the decision maker has considerable insight into the situation, and this valuable subjective information should not be ignored.

2.7 Decision Making under Incomplete Knowledge

2.7.1 Introduction

The assumptions about the probability of occurrence of the states of nature made by the theories for decision making under risk and under uncertainty are highly idealistic. The assumption of complete knowledge of the states of nature overestimates the actual situation, while the assumption of complete ignorance underestimates reality. Decision making under incomplete knowledge is a better description of the actual situation, since in most situations the decision maker has some idea about the probability of occurrence of the states but not complete knowledge.

2.7.2 Fishburn's Theorem

Fishburn was among the first to think about decision making under incomplete knowledge. He made the assumption that the decision maker has sufficient insight into the situation to rank the probabilities of occurrence of the states of nature. For example, if there were n states of nature, the decision maker possesses sufficient insight to say that

p1 ≥ p2 ≥ p3 ≥ ... ≥ pn

where pj is the probability of occurrence of the j-th state of nature. Fishburn's theorem [1] can be explained by considering two alternatives A1 and A2 which operate under n states of nature, with Xik denoting the payoff of alternative Ai under the k-th state of nature. According to Fishburn's theorem, if p1 ≥ p2 ≥ ... ≥ pn, then E(A1) ≥ E(A2) if

X11 + X12 + ... + X1j ≥ X21 + X22 + ... + X2j    for j = 1, 2, 3, ..., n

Fishburn proved this by using Abel's summation identity. However, an alternative proof using transform functions, provided by Cannon and Kmietowicz, is of more significance, since it can be used to derive expressions for maximizing or minimizing the expected values of strategies.

2.7.3 Cannon and Kmietowicz's Proof of Fishburn's Theorem

E(A1) - E(A2) = Σj pj X1j - Σj pj X2j    (sums over j = 1 to n)

Let Qj = pj - p(j+1)    (j = 1, 2, ..., n-1)
Qn = pn    (since p(n+1) = 0)

Let Yij = Xi1 + Xi2 + ... + Xij    (i = 1, 2 and j = 1, 2, ..., n)

Using the above transformations,

Σj pj Xij = p1 Xi1 + p2 Xi2 + ... + pn Xin
          = (Q1 + Q2 + ... + Qn) Xi1 + (Q2 + Q3 + ... + Qn) Xi2 + ... + Qn Xin
          = Q1 Yi1 + Q2 Yi2 + ... + Qn Yin

∴ E(A1) - E(A2) = Σj Qj Y1j - Σj Qj Y2j = Σj Qj (Y1j - Y2j)

All values Qj = pj - p(j+1) are non-negative because of the assumption that p1 ≥ p2 ≥ p3 ≥ ... ≥ pn. Y1j - Y2j is also non-negative, since X11 + ... + X1j ≥ X21 + ... + X2j. Thus E(A1) - E(A2) is non-negative, implying that E(A1) ≥ E(A2). This proves Fishburn's theorem.

2.7.4 Drawbacks of Fishburn's Theorem

Consider the assumption X11 + ... + X1j ≥ X21 + ... + X2j. Expanding for values of j = 1, 2 and 3:

X11 ≥ X21
X11 + X12 ≥ X21 + X22
X11 + X12 + X13 ≥ X21 + X22 + X23

This essentially means that the cumulative payoff values of alternative A1 should be greater than those of A2 for all states of nature. It is obvious that this assumption is rarely the case in reality. In spite of the practical limitations of Fishburn's theorem, it was instrumental in the development of procedures for the maximization or minimization of expected values.

2.7.5 Maximizing and Minimizing Expected Values

Without getting into the mathematical intricacies of deriving the conditions for maximizing or minimizing expected payoffs, let us examine the procedure to be followed to obtain the maximum and minimum expected payoffs.

1. Weak Ranking: When the decision maker can rank the probabilities of occurrence of the states of nature but cannot specify any strict relationship between the various probabilities, the ranking is said to be "weak". In other words, the decision maker can say that p1 ≥ p2 ≥ p3 ≥ ... ≥ pn and that pj ≥ p(j+1), but he cannot say anything about the difference in value between pj and p(j+1) other than the fact that pj - p(j+1) ≥ 0.

In Section 2.7.3 it was shown that the expected value of an alternative can be expressed in terms of the transform functions Q and Y as

E(A) = Σj Qj Yj    (sum over j = 1 to n)

It can be shown that the above expression is maximized or minimized when the expression Yj / j is maximized or minimized. The term Yj / j can be calculated as follows:

Yj / j = (1/j) Σk Xk    (sum over k = 1 to j)

2. Strict Ranking: When the decision maker is in a position not only to rank the probabilities of occurrence of the states of nature but also to specify a definite relationship between the various probabilities, the ranking is said to be "strict". In other words, the decision maker can not only say that p1 ≥ p2 ≥ p3 ≥ ... ≥ pn and pj ≥ p(j+1), but also that pj - p(j+1) ≥ kj, where kj is a positive constant. In such cases it can be shown that the expected value of a strategy can be maximized or minimized by maximizing or minimizing the following expression:

(Yj / j)(1 - Σi i·ki) + Σi ki Yi    (sums over i = 1 to j)

It may be worth mentioning that the same value of j optimizes the expression of E(A) for both weak and strict ranking. The derivation of the above results is presented in Appendix A.

2.7.6 Numerical Illustration

While designing a steel moment resisting frame, an engineer has to decide between two design options: DOption1 and DOption2. The payoff matrix (profit or loss) for this problem is shown below:

Table 2-9: Maximization of expected values - Numerical illustration

               States of Nature
Alternatives   Success in Third Attempt   Success in Second Attempt   Success in First Attempt   Failure
DOption1       40,000                     60,000                      100,000                    -10,000
DOption2       55,000                     70,000                      80,000                     -5,000

Weak Ranking: If the engineer can specify a weak ranking for the probabilities of occurrence of the states of nature, the design options can be evaluated using the expression Yj / j = (1/j) Σk Xk (sum over k = 1 to j).

Table 2-10: Maximization of expected values - Numerical illustration (Weak Ranking)

     Expected Payoff
j    DOption1    DOption2
1    40,000      55,000
2    50,000      62,500
3    66,667      68,333
4    47,500      50,000

Now, using any of the criteria mentioned in Section 2.6, the best strategy can be chosen.

Strict Ranking: If the engineer can specify a strict ranking for the probabilities of occurrence of the states of nature, the design options can be evaluated using the expression (Yj / j)(1 - Σi i·ki) + Σi ki Yi (sums over i = 1 to j). Let us say that the decision maker has sufficient information to specify the following:

p1 - p2 ≥ 0.1,  p2 - p3 ≥ 0.1,  p3 - p4 ≥ 0.1,  p4 - p5 ≥ 0    (p5 = 0)

This implies that

k1 = 0.1,  k2 = 0.1,  k3 = 0.1,  k4 = 0

The expected values of the strategies, calculated using the above expression and the strict ranking, are given below.

Table 2-11: Maximization of expected values - Numerical illustration (Strict Ranking)

     Expected Payoff
j    DOption1    DOption2
1    40,000      55,000
2    49,000      61,750
3    60,667      65,833
4    53,000      58,500

Once again, using any of the criteria mentioned in Section 2.6, the best strategy can be chosen.

2.8 Concluding Remarks

It is evident from the above discussion that several decision making theories and criteria exist that can be resourcefully used by engineers while making decisions. Depending upon how much information is available about the probability of occurrence of the states of nature, the engineer can make use of decision making under risk, under uncertainty or under incomplete knowledge. Since it is not practical to have objective values for probabilities in engineering, the engineer has to rely on expert opinion, past experience and other reasonable judgment to arrive at subjective values. Even if the initial value of probability that he arrives at is not very accurate, a more precise value can be obtained by the process of Bayesian updating as new information becomes available.

3. Tools to Aid in the Decision Making Process

3.1 Introduction

In addition to the various decision making criteria and theories discussed in the previous chapter, there are several tools and methods which, when used resourcefully, can be invaluable assets in the decision making process. These tools and methods not only help in making decisions but can also be used as means to convey one decision maker's perception of a problem to other decision makers. The models and data generated can serve as valuable engineering records.
This chapter deals with several such tools that can be used by the practicing engineer.

3.2 Elementary Data Analysis

3.2.1 Introduction

When presented with an unorganized set of numbers, it is very difficult for engineers to make any conclusive statements about the nature and extent of the uncertainty associated with the parameter under discussion. However, representing the raw data in certain forms can add new meaning to them. Data analysis is in itself a broad field of study and several methods exist for interpreting data. Without delving into extensive details, this section explores some basic methods for analyzing and representing data. It is convenient to explain the different ways of representing data with the help of an example. The live load in a steel building measured at different times is shown in Table 3-1.

Table 3-1: Live loads in a steel building measured at different times

Time (Minutes)   Live Load (KN/sq.m)   Time (Minutes)   Live Load (KN/sq.m)   Time (Minutes)   Live Load (KN/sq.m)
0                2.07                  255              1.88                  510              2.35
15               3.85                  270              2.87                  525              1.01
30               0.85                  285              1.97                  540              1.79
45               1.57                  300              1.56                  555              2.34
60               2.53                  315              2.04                  570              3.46
75               0.11                  330              2.59                  585              3.76
90               0.97                  345              3.61                  600              3.85
105              0.94                  360              1.88                  615              3.92
120              2.21                  375              2.29                  630              3.36
135              4.16                  390              2.40                  645              1.14
150              2.65                  405              3.99                  660              2.44
165              3.96                  420              3.06                  675              4.89
180              2.96                  435              1.36                  690              3.45
195              1.96                  450              2.06                  705              1.77
210              3.63                  465              3.10                  720              1.15
225              1.52                  480              3.45                  735              2.97
240              1.66                  495              1.06

3.2.2 Graphical Tools

3.2.2.1 Histograms

The values shown in Table 3-1 were randomly generated using the random number generator in Excel. It is very hard to draw any conclusive remarks about the variation of the live load from this set of random numbers. But if we plot these numbers in the form of a bar graph with an interval of 0.5 KN/m², we obtain a histogram. A histogram shows the frequency of occurrence of values in a particular interval.
From the histogram we can conclude that the live load mostly lies in the range of 1.5 KN/m² to 4 KN/m².

Figure 3.1: Histogram from a set of live load measurements

3.2.2.2 Frequency Diagram

If the ordinates of a histogram are normalized such that the area of the resulting graph is unity, we obtain the frequency diagram. In our example, we have 50 observations and the interval of our histogram is 0.5. In order to obtain the frequency diagram, we divide the ordinates of the histogram by 25 (i.e. 50 x 0.5). The resulting frequency diagram is shown in Figure 3.2. The frequency diagram is helpful in comparing histograms with different interval ranges and in choosing probability distributions.

Figure 3.2: Frequency diagram for a set of live load measurements

3.2.2.3 Cumulative Frequency Diagram

A cumulative frequency diagram is a plot which gives the relative frequency of occurrence of values less than or equal to a particular value. For example, there are 28 values less than or equal to 2.5 KN/m². Therefore the ordinate on the cumulative frequency diagram corresponding to 2.5 KN/m² is 56% (28 out of 50 values). A plot of the cumulative frequency diagram is shown in Figure 3.3.

Figure 3.3: Cumulative frequency diagram for a set of live load measurements

3.2.2.4 Scatter Diagram

In all the graphical representations discussed above, we represented only the live load data from Table 3-1; we did not take into account the time. If we were to plot both the live loads and the times, we would obtain a scatter diagram. If the points of a scatter diagram lie along a straight line, the two parameters being plotted are said to be linearly dependent.
On the other hand, if the points of a scatter diagram lie along a curve, the two parameters being plotted are said to be non-linearly dependent. Figure 3.4 shows the scatter diagram between time and live load.

Figure 3.4: Scatter diagram for a set of live load measurements

3.2.3 Numeric Representation of Data

In addition to the graphical representation of data, several numeric descriptors exist which are representative of the sample or population. These values can be used to derive objective conclusions about a set of data and to compare sets of data.

3.2.3.1 Mean

The mean is the average value of the sample and is mathematically represented as

x̄ = (1/n) Σi xi    (sum over i = 1 to n)

where n is the sample size. In our example, the mean live load is 2.45 KN/m².

3.2.3.2 Median

The median of a sample is the middle value when the values in the sample are arranged in either ascending or descending order. If the sample size is even, the median is the average of the two middle values. In our case the sample size is 50, and therefore the median is (2.34 + 2.35)/2 = 2.345 KN/m².

3.2.3.3 Range

The range of a sample is the difference between the highest and lowest values in the sample. In our example, the highest value is 4.89 KN/m² and the lowest value is 0.11 KN/m². Therefore our range is the difference between 4.89 and 0.11, which is equal to 4.78 KN/m².

3.2.3.4 Variance

The variance is a measure of the dispersion of the sample values about the mean. The mathematical expression for the variance is given by

s² = (1/n) Σi (xi - x̄)²    (sum over i = 1 to n)

However, the expression mentioned above is called a biased estimator, since its expected value is not equal to the true variance of the population. In order to eliminate this inconsistency, the following expression is often used for the sample variance:

s² = (1/(n - 1)) Σi (xi - x̄)²    (sum over i = 1 to n)

In our example the value of the variance is 1.15 (KN/m²)².
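The numeric descriptors above can be reproduced with a few lines of Python applied to the 50 live load values of Table 3-1:

```python
import statistics

# Live load measurements (KN/m^2) from Table 3-1, read column by column.
loads = [2.07, 3.85, 0.85, 1.57, 2.53, 0.11, 0.97, 0.94, 2.21, 4.16,
         2.65, 3.96, 2.96, 1.96, 3.63, 1.52, 1.66, 1.88, 2.87, 1.97,
         1.56, 2.04, 2.59, 3.61, 1.88, 2.29, 2.40, 3.99, 3.06, 1.36,
         2.06, 3.10, 3.45, 1.06, 2.35, 1.01, 1.79, 2.34, 3.46, 3.76,
         3.85, 3.92, 3.36, 1.14, 2.44, 4.89, 3.45, 1.77, 1.15, 2.97]

mean = statistics.fmean(loads)          # about 2.45
median = statistics.median(loads)       # (2.34 + 2.35) / 2 = 2.345
value_range = max(loads) - min(loads)   # 4.89 - 0.11 = 4.78
variance = statistics.variance(loads)   # unbiased (n - 1) estimator, about 1.15
std_dev = statistics.stdev(loads)       # about 1.07
print(mean, median, value_range, variance, std_dev)
```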
3.2.3.5 Standard Deviation

The standard deviation of a sample is the square root of the variance. In our case, the standard deviation is 1.07 KN/m².

3.3 Probability

3.3.1 Introduction

The theory of probability is a branch of mathematics that deals with uncertainty [5]. According to classical probability theory, the probability of occurrence of an event is defined as the ratio between the number of favorable outcomes and the total number of possible outcomes. Hence the probability of an even number turning up when an unbiased die is cast is given by:

P(EvenNumberTurningUp) = NumberOfEvenNumberOutcomes / TotalNumberOfOutcomes = 3/6 = 0.5

There is much debate among members of the scientific community about the precise definition of probability, but there are mainly two approaches that try to explain the idea of probability: the relative frequency approach and the Bayesian approach. According to the relative frequency approach, if a random experiment is repeated N times and an event O occurs N_O times, then as N approaches infinity, P(O) approaches the probability of occurrence of event O, where P(O) is defined as:

P(O) = N_O / N

The probability obtained using the classical approach and the relative frequency approach is called objective probability. The application of the relative frequency notion of probability is clearly limited, especially in a field such as engineering where repetition of events is impractical and where problems are unique. This is where subjective probability finds its use. Subjective probability, or the Bayesian approach to probability, is a person's degree of belief in the occurrence or non-occurrence of an event. One major argument against the Bayesian approach is that it is highly influenced by an individual's perception of risk, and even for the same problem different people may specify different probabilities.
Nevertheless, in several engineering situations, subjective probability is the best information available, as it is very often the view of experts who, over a period of time, have developed an understanding of the situation.

3.3.2 Axioms of Probability

Whether specified subjectively or objectively, probability has to conform to the following basic rules, known as the axioms of probability.

1. The probability of occurrence of an event is always greater than or equal to zero and less than or equal to unity. This is mathematically represented as 0 ≤ P(A) ≤ 1.
2. The probability of occurrence of a certain event is 1, i.e. P(CertainEvent) = 1.
3. If A and B are mutually exclusive events, then P(A ∪ B) = P(A) + P(B).

3.3.3 Elementary Rules of Probability

In addition to the above probability axioms, there are some elementary rules that are extremely useful when dealing with probability.

1. The union rule or the inclusion-exclusion rule: If A1, A2, A3, ..., An are n events, then

P(A1 ∪ A2 ∪ ... ∪ An) = Σi P(Ai) - Σ(i<j) P(Ai ∩ Aj) + Σ(i<j<k) P(Ai ∩ Aj ∩ Ak) - ... + (-1)^(n-1) P(A1 ∩ A2 ∩ ... ∩ An)

For two events A and B, P(A ∪ B) = P(A) + P(B) - P(A ∩ B). If A and B are mutually exclusive events, P(A ∩ B) would be zero and the above equation would revert to the third axiom of probability.

2. Conditional probability: The probability of an event A occurring, given that an event B has already occurred, is given by:

P(A | B) = P(A ∩ B) / P(B)

3. Multiplicative rule: For any two events A and B, the probability of their joint occurrence is given by:

P(A ∩ B) = P(A | B) · P(B)

If A and B are independent events, then P(A | B) = P(A), and the above equation reduces to:

P(A ∩ B) = P(A) · P(B)

4. Bayes' theorem: Consider the conditional probability of two events A and B:

P(A | B) = P(A ∩ B) / P(B)

But we know that P(A ∩ B) = P(B | A) · P(A). Hence,

P(A | B) = P(B | A) · P(A) / P(B)

This is known as Bayes' theorem and is of critical importance in probability theory.
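As a small numeric illustration of Bayes' theorem (the numbers here are hypothetical, not from the case studies), suppose 10% of fabricated members have a defect, and an inspection flags 90% of defective members but also 20% of sound ones; the theorem then gives the probability that a flagged member is actually defective:

```python
# Hypothetical illustration of Bayes' theorem: P(defect | flagged).
p_defect = 0.10              # prior probability P(A)
p_flag_given_defect = 0.90   # P(B | A)
p_flag_given_sound = 0.20    # P(B | not A)

# Total probability: P(B) = P(B|A) P(A) + P(B|not A) P(not A)
p_flag = p_flag_given_defect * p_defect + p_flag_given_sound * (1 - p_defect)

# Bayes' theorem: P(A|B) = P(B|A) P(A) / P(B)
p_defect_given_flag = p_flag_given_defect * p_defect / p_flag
print(round(p_defect_given_flag, 3))  # the prior of 0.10 is updated to 0.333
```

The inspection result thus raises the engineer's degree of belief in a defect from 10% to about 33%, which is exactly the kind of Bayesian updating referred to in the concluding remarks of Chapter 2.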
It represents how the notion of an event occurring, P(A) (also known as the prior probability), changes to P(A | B) (also known as the posterior probability) in the light of new information.

3.4 Representing Uncertainty using Probability Distributions

3.4.1 Uncertainty in Engineering

Engineering involves representing natural phenomena with the help of models. While modeling, an engineer tries to replicate to the best of his ability the behavior or response of a naturally occurring state. In order to do so, the engineer has to understand and account for a number of uncertainties. The nature of an uncertainty can be either aleatory or epistemic [6]. Epistemic uncertainty is the uncertainty that exists due to imperfect understanding or lack of knowledge. This type of uncertainty can be reduced by research and by improving our understanding of the phenomenon. Aleatory uncertainty is the inherent variation in a physical phenomenon; it is characteristic of the process itself, and no amount of research or knowledge acquisition can reduce it.

3.4.2 Probability Distributions

Irrespective of the source or nature of the uncertainties, a model must account for them to ensure that the physical phenomenon is accurately represented. Probability distributions are convenient means of representing uncertainty. Probability distributions may be either discrete or continuous.

Discrete Probability Distributions

A discrete probability distribution P(x) can take only discrete values and is used to represent discrete random variables. P(x) is said to be the probability distribution of a discrete random variable X if it satisfies the following properties:

• The probability that the random variable X takes a particular value x is given by P[X = x] = P(x) = px.
• P(x) is non-negative for all x.
• The sum of P(x) over all values of x is 1, i.e. Σx P(x) = 1.

A discrete probability distribution is also called a probability mass function.
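For instance, the outcome of an unbiased die is a discrete random variable whose probability mass function assigns 1/6 to each face; the defining properties above can be checked directly (a minimal sketch, not from the thesis):

```python
from fractions import Fraction

# Probability mass function of an unbiased die: P(X = x) = 1/6 for x = 1..6.
pmf = {x: Fraction(1, 6) for x in range(1, 7)}

assert all(p >= 0 for p in pmf.values())       # non-negativity
assert sum(pmf.values()) == 1                  # probabilities sum to one
expected = sum(x * p for x, p in pmf.items())  # weighted average of outcomes
print(expected)  # 7/2
```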
Continuous Probability Distributions

A function f(x) is said to be the probability distribution of a continuous random variable x if it satisfies the following properties:

• The probability that a value x lies within an interval (a, b) is given by P(a ≤ x ≤ b) = ∫ f(x) dx, integrated from a to b.
• It is non-negative for all real x.
• The integral of a continuous probability distribution from -∞ to +∞ is one: ∫ f(x) dx = 1.

A continuous probability distribution is also called a probability density function.

3.4.3 Illustrative Example

One of the elements of a steel truss is subjected to a compressive force which is normally distributed with a mean of 200 KN and a standard deviation of 30 KN. Calculate the change in length of the member if the following are the properties of the section:

Length = 0.5 m
Young's Modulus = 200 GPa
Cross Section = HSS 102 x 51 x 8

Solution: The area corresponding to HSS 102 x 51 x 8 as per CAN/CSA S16-01 is 2010 mm². The change in length of the member is given by

δ = PL / (AE)

But the load is not a constant; it is uncertain and has a distribution. With P normally distributed (mean 200 KN = 200,000 N, standard deviation 30 KN = 30,000 N), L = 500 mm, A = 2010 mm² and E = 200,000 MPa:

δ = (P x 500) / (2010 x 200,000) mm

Solving the above equation using Decision Pro, we obtain a distribution for δ with a mean of 0.24978 mm and a standard deviation of 0.03736 mm.

Figure 3.5: Probability distributions - Illustrative example (frequency distribution of δ)

3.5 Monte Carlo Simulation

3.5.1 Introduction

Monte Carlo simulations are excellent tools for modeling uncertainty. They allow uncertainty to be defined in a loose manner, such as in the form of distributions. Monte Carlo simulations may be used to generate values for uncertain quantities, which can then be used to decide a course of action [38]. Monte Carlo simulations can be used to generate the range of possible outcomes, the probability of occurrence of each outcome and also the most likely outcome.
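The deflection example of Section 3.4.3 can be reproduced without specialized software; a minimal Monte Carlo sketch in Python, sampling the axial load in newtons (the sample size and seed are arbitrary choices for illustration):

```python
import random
import statistics

random.seed(1)

L = 500.0       # member length, mm
A = 2010.0      # cross-sectional area, mm^2
E = 200_000.0   # Young's modulus, MPa

# Sample the compressive force P ~ Normal(200 kN, 30 kN), in newtons,
# and propagate each sample through delta = P L / (A E).
samples = [random.gauss(200_000.0, 30_000.0) * L / (A * E)
           for _ in range(100_000)]

print(statistics.fmean(samples))  # close to 0.2488 mm
print(statistics.stdev(samples))  # close to 0.0373 mm
```

The sample mean and standard deviation agree with the Decision Pro results quoted above to within sampling error.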
The origin of Monte Carlo simulations can be traced back to the 1940s, when scientists at Los Alamos National Laboratory developed computer programs to randomly generate the range of possible outcomes in nuclear reactions. They named these programs Monte Carlo simulations after the city of Monte Carlo in Monaco, famous for its casinos, where games of chance such as roulette wheels, dice and slot machines were extremely popular [36]. The nature of outcomes in these games of chance is very similar to that of the variables generated by a Monte Carlo simulation. When an unbiased die is thrown, we know that the outcome can be any number from 1 to 6, but in any particular throw we do not know what the "precise" outcome will be. In other words, we know the range of possible outcomes but we do not know the exact outcome of a specific trial. Monte Carlo simulations generate random numbers within a particular range; however, the exact value of the number that will be generated in any particular trial is uncertain.

3.5.2 Illustrative Example

A company manufactures bolts for connections in steel members. The diameter of the bolts produced is normally distributed with a mean of 20 mm and a standard deviation of 4 mm. The holes into which the bolts fit are drilled on site, and the diameter of a drilled hole is normally distributed with a mean of 24 mm and a standard deviation of 4 mm. What are the chances of a bolt not fitting into its hole? The frequency distribution of the clearance (hole diameter minus bolt diameter) in the above problem can be obtained using Monte Carlo simulation (shown in Figure 3.6).

Figure 3.6: Monte Carlo simulation - Illustrative example

But a better representation of the clearance, more relevant to this problem, may be the cumulative distribution (CDF, shown in Figure 3.7).
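The misfit probability in this example can be estimated by simulation and checked in closed form, since the clearance is itself normally distributed with mean 24 - 20 = 4 mm and standard deviation sqrt(4² + 4²) ≈ 5.66 mm; a short sketch (the trial count and seed are arbitrary):

```python
import math
import random

random.seed(2)

# Monte Carlo estimate of P(clearance < 0), clearance = hole - bolt.
trials = 200_000
misfits = sum(random.gauss(24, 4) - random.gauss(20, 4) < 0
              for _ in range(trials))
p_simulated = misfits / trials

# Closed-form check: clearance ~ Normal(4, sqrt(32)), so
# P(clearance < 0) = Phi(-4 / sqrt(32)) using the standard normal CDF.
p_exact = 0.5 * (1 + math.erf((0 - 4) / (math.sqrt(32) * math.sqrt(2))))

print(p_simulated, p_exact)  # both close to 0.24, i.e. about one bolt in four
```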
Figure 3.7: CDF - Illustrative example

From the CDF it can be inferred that in almost 25% of the cases the clearance is less than zero, which implies that about one in four bolts would not fit into its hole. This suggests that some changes, either to the bolt or hole diameters or to the quality control, have to be made to reduce the lack of fit.

3.6 Sensitivity Analysis

3.6.1 Introduction

Sensitivity analysis is a procedure which shows the effect of changes in the various input parameters on the model output. It is used to examine model behaviour. A common procedure for conducting a sensitivity analysis is to vary the input parameters and observe how the output changes. If the variation of an input parameter largely affects the output, the output is said to be highly sensitive to that input parameter. Likewise, if the variation of an input parameter does not have a significant effect on the output, the output is said to be less sensitive to that input parameter. Sensitivity analysis can also be used as a procedure to validate models.

3.6.2 Illustrative Example

Consider the error budget for a telescope, shown in Figures 3.8 and 3.9.

Figure 3.8: Sensitivity analysis - Illustrative example - Part 1 (error tree: Total Error 0.25219; Enclosure 0.06; alignment 0.1; aberration 0.17; optical 0.14526; Telescope = sqrt(alignment² + aberration² + optical²))

Figure 3.9: Sensitivity analysis - Illustrative example - Part 2 (optical = sqrt(Corrector² + Secondary² + Primary²); polishing 0.05)

The sensitivity of the total error to the component errors is shown in Figure 3.10. Another way of representing the sensitivity, in the form of a sensitivity table, is shown in Figure 3.11.
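A one-at-a-time sensitivity study of a root-sum-of-squares error budget like this one can be sketched as follows. The component values are those quoted in the figures, the assumption that the total combines the enclosure, alignment, aberration and optical errors in root-sum-of-squares is inferred from the quoted total of 0.25219, and the +10% perturbation size is an arbitrary choice for illustration:

```python
import math

# Simplified telescope error budget: total error as a root sum of squares.
inputs = {"enclosure": 0.06, "alignment": 0.10,
          "aberration": 0.17, "optical": 0.14526}

def total_error(values):
    return math.sqrt(sum(v ** 2 for v in values.values()))

base = total_error(inputs)  # about 0.2522

# Perturb each input by +10% in turn and record the change in the output.
sensitivity = {}
for name in inputs:
    perturbed = dict(inputs)
    perturbed[name] *= 1.10
    sensitivity[name] = total_error(perturbed) - base

# Ranking the inputs by their effect: the largest component (aberration)
# dominates, as the sensitivity table in Figure 3.11 also indicates.
ranking = sorted(sensitivity, key=sensitivity.get, reverse=True)
print(base, ranking)
```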
Figure 3.10: Sensitivity Graph (total error plotted against the percentage change in each input)

Figure 3.11: Input Sensitivity Table

Input       Relative   Absolute
Corrector   0.10063    0.31722
Enclosure   0.0566     0.23792
Secondary   0.02516    0.15861
aberration  0.4544     0.67409
alignment   0.15723    0.39653
polishing   0.03931    0.19826
support     0.03931    0.19826
thermal     0.12736    0.35687

3.7 Expected Value

3.7.1 Introduction

The expected value, or mathematical expectation, is the weighted average of the outcomes of a process. It represents the average value that may be expected should the process be repeated an infinite number of times. The expected value can be defined mathematically as follows:

1. Discrete Random Variable: If X is a discrete random variable with n possible outcomes x1, x2, x3, ..., xn occurring with probabilities p1, p2, p3, ..., pn, then the expected value of X is given by:

E(X) = x1p1 + x2p2 + x3p3 + ... + xnpn = Σ xipi

2. Continuous Random Variable: If X is a continuous random variable with probability density function f(x), then the expected value of X is given by:

E(X) = ∫ x f(x) dx, the integral being taken from −∞ to +∞

If the random variable X can be thought of as an alternative, then E(X) represents the expected payoff of alternative X. The expected value can be used to choose between alternatives, especially when the probabilities of the outcomes can be clearly defined, i.e. while making decisions under risk.

3.7.2 Illustrative Example

The following example illustrates how two alternatives can be compared using expected values.

Example: In the fabrication of a braced frame, an engineer has to choose between specifying either very stringent tolerances or loose tolerances. The possible outcomes could be a perfect fit, a slight lack of fit requiring rework, or a complete lack of fit which would result in fabricating the member again.
Solution: The expected value of the "Loose Tolerance" strategy is calculated as:

Expected Value = 0.2 (100000) + 0.6 (50000) + 0.2 (20000) = 54000

The expected value of the "Stringent Tolerance" strategy is calculated as:

Expected Value = 0.4 (80000) + 0.5 (60000) + 0.1 (20000) = 64000

Since specifying stringent tolerances has the greater expected value, it is the better alternative compared to specifying loose tolerances.

Figure 3.12: Expected Value - Illustrative Example (Loose Tolerance: 20% Perfect Fit at 100000, 60% Slight Lack of Fit at 50000, 20% Complete Lack of Fit at 20000; Stringent Tolerance: 40% Perfect Fit at 80000, 50% Slight Lack of Fit at 60000, 10% Complete Lack of Fit at 20000)

3.7.3 St. Petersburg Paradox

A major limitation of the expected value is that it does not account for the decision maker's view of risk. This can be explained by the St. Petersburg paradox [26]. The St. Petersburg game is played by tossing a coin until a tail appears. The payoff of the game is determined by the toss at which the tail appears: if a tail appears on the n-th toss, the payoff is $2^n. Table 3-2 summarizes the probability and expected payoff for the first 5 tosses.

Table 3-2: St. Petersburg paradox

Toss   Probability of Occurrence   Payoff ($2^n)   Expected Payoff ($)
1      1/2                         2               1
2      1/4                         4               1
3      1/8                         8               1
4      1/16                        16              1
5      1/32                        32              1

This implies that the expected payoff for the game is infinite, since there are infinitely many possible outcomes, each contributing $1. A rational player would pay any amount less than the expected payoff to play a game, which would mean that no matter how large the entry fee, the rational thing to do would be to play. However, simple intuition suggests otherwise. This paradox questions the validity of the expected value and begs the need for a better explanation. The solution to this paradox was provided by Daniel Bernoulli, who proposed that money has decreasing marginal utility and that what therefore has to be calculated is the expected utility, as opposed to the expected value.
According to him, a realistic representation of the utility of money would be the logarithm of the actual monetary value. This gives rise to the following table:

Table 3-3: Bernoulli's solution to the St. Petersburg Paradox

Toss   Probability   Payoff   Utility   Expected Utility
1      1/2           2        0.301     0.1505
2      1/4           4        0.602     0.1505
3      1/8           8        0.903     0.1129
4      1/16          16       1.204     0.0753
5      1/32          32       1.505     0.0470

In this case, the sum of the expected utilities converges to 0.60206, which corresponds to a monetary value of $4. Thus a reasonable amount to pay to play the St. Petersburg game (with the above payoff values) would be any value below $4.

3.8 Utility Theory

3.8.1 Introduction

When the values of outcomes in the payoff matrix are based on purely monetary values, they fail to take into account various other considerations which may be of critical importance in the decision situation. For example, the option of using conventional machines in a steel fabrication plant may have a better payoff than upgrading to advanced machines in terms of capital investment, but the latter option may ensure the survival of the fabricator in the long run. In such a case the decision maker may prefer the second alternative with the lower payoff, even though a simple expected value calculation suggests otherwise. Thus it is clear that the decision maker's perception of the value, or utility, of payoffs has to be taken into account while assessing alternative choices. The relative value of an outcome with respect to a set of other possible outcomes is called utility [37]. The concept of utility was first proposed by Daniel Bernoulli to solve the St. Petersburg paradox. He proposed that the utility of money decreases with increasing quantity, and that it is reasonable to take the utility to be a function of the logarithm of the actual monetary value.
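Bernoulli's resolution can be checked numerically. The sketch below sums the expected base-10 log-utility of the game term by term (the series Σ (n/2^n)·log10(2) converges to 2·log10(2) = 0.60206) and converts that utility back into an equivalent certain payoff.

```python
import math

def st_petersburg_expected_utility(n_terms=60):
    """Sum p_n * U(payoff_n) with p_n = 1/2**n, payoff_n = 2**n and
    U(x) = log10(x), i.e. sum (n / 2**n) * log10(2)."""
    return sum((n / 2 ** n) * math.log10(2) for n in range(1, n_terms + 1))

expected_utility = st_petersburg_expected_utility()  # converges to ~0.60206
certainty_equivalent = 10 ** expected_utility        # ~4 dollars
```

The certainty equivalent of $4 reproduces the thesis's conclusion that a rational (log-utility) player should pay no more than $4 to enter the game, even though the expected monetary payoff is infinite.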
However, credit for the mathematical formulation of utility theory goes to John von Neumann and Oskar Morgenstern (1944), who proposed axioms and suggested methods to measure the utilities of objective values. Unlike Bernoulli's definition of a utility function for a payoff, von Neumann and Morgenstern's utility theory defines a utility for a lottery or gamble. Their theory is based on acceptable axioms concerning the decision maker's consistency of preference between payoffs [1].

3.8.2 Deriving the Utility of a Payoff

According to this theory, the utility of a lottery that offers payoffs of A and B with probabilities of p and 1 − p respectively is given by:

U = p·U(A) + (1 − p)·U(B)

where U(A) and U(B) are the utilities of A and B. Furthermore, if the decision maker is indifferent between the lottery and another certain payoff C, then the utility of C is equal to the utility of the lottery. If C lies between A and B such that A > C > B, then the decision maker would prefer C when the value of p is 0 and prefer the lottery when the value of p is 1. But for some value of p, the decision maker is indifferent between the lottery and the certain payoff C. This value of p may be used to determine the utility of C using the equation:

U(C) = p·U(A) + (1 − p)·U(B)

Similarly, the utility of any other payoff value between A and B can be determined. The utilities of A and B can be set to arbitrary values; usually the utility of the minimum payoff, U(B), is taken as 0 and the utility of the maximum payoff, U(A), is taken as 1.

3.8.3 Risk

The utility of a value depends upon an individual's attitude towards risk. Let us consider a situation in which a person can choose between two options, one offering a huge gain but also a huge risk, and the other offering a small reward at a small risk (shown in Figure 3.13).
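The indifference-probability construction of Section 3.8.2 can be sketched as follows. The payoff values and the elicited probability here are hypothetical illustrations, not taken from the thesis.

```python
def utility_from_indifference(p, u_best=1.0, u_worst=0.0):
    """Utility of a certain payoff C, given the probability p at which the
    decision maker is indifferent between receiving C for certain and a
    lottery paying the best payoff with probability p and the worst with
    probability 1 - p.  By convention U(best) = 1 and U(worst) = 0."""
    return p * u_best + (1 - p) * u_worst

# Hypothetical elicitation: the decision maker is indifferent at p = 0.7
# between a certain payoff C and the best/worst lottery, so U(C) = 0.7.
u_c = utility_from_indifference(0.7)
```

Repeating this elicitation for several intermediate payoffs traces out the decision maker's utility curve between B and A.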
Figure 3.13: Defining Risk (alternative1: 50% gain of 100,010 / 50% loss of 100,000; alternative2: 50% gain of 1 / 50% loss of 1)

A simple comparison of the expected values of the two alternatives suggests that alternative1 is better than alternative2. But very few people would choose alternative1, even though it has the greater expected value, because we do not want to risk losing 100,000 dollars to gain just 10 dollars. The expected value calculation is based on the idea that every dollar is worth the same. However, this is clearly not the case. If a person is risk averse, then the utility of each additional dollar is less than that of the previous dollar. If a person is risk seeking, then the utility of each additional dollar is greater than that of the previous dollar. This can be represented by the following utility functions.

Figure 3.14: Utility Functions (risk averse, risk neutral and risk seeking) [28]

Now if we use a risk averse utility function in the above problem, we get the following decision tree.

Figure 3.15: Risk averse utility

It is evident from this decision tree that alternative2 has the better expected utility, so a person with a risk averse attitude will choose alternative2. On the other hand, if the calculations are done using a risk seeking utility function, we get the decision tree of Figure 3.16, and a person with a risk seeking attitude will choose alternative1.

Figure 3.16: Risk seeking utility

3.9 Decision Trees

3.9.1 Introduction

Decision trees are graphical tools that aid in the decision making process.
Their usefulness is best seen when there are several alternatives to choose from and each alternative has different consequences and risks associated with it, which is very often the case in engineering. They provide an effective framework within which a comprehensive analysis of the various available alternatives can be carried out [27]. Decision trees also outline the risk, uncertainty and utility associated with each alternative, and they clearly represent a decision maker's perception of a problem. Therefore, they can be used as an efficient means of communication between decision makers. There are several ways in which a decision tree can be drawn. Figure 3.17 and Figure 3.18 show two ways of representing decision trees. In the rest of this section the second method of representation (Figure 3.18) will be used.

Figure 3.17: Decision Tree - Representation 1 (Alternative1: 20% Consequence1 at 1000, 60% Consequence2 at 2000, 20% Consequence3 at 3000, expected value 2000; Alternative2: 5000)

Figure 3.18: Decision Tree - Representation 2 (the same problem drawn in the second style)

3.9.2 Types of Nodes

There are three types of nodes in a decision tree, namely the decision node, the chance node and the utility node. A decision node is a node which the decision maker can control. It represents those variables that are at the discretion of the decision maker, i.e. the alternatives to choose from. It is usually represented by a rectangle. In the above example, the first node from the left is a decision node, since the decision maker has complete control over whether he chooses "Alternative1" or "Alternative2".

Figure 3.19: Decision Node

A chance node represents those variables that are states of nature or consequences over which the decision maker has no control.
In the decision tree shown in the previous section, the outcome of choosing "Alternative1", i.e. the occurrence of either "Consequence1", "Consequence2" or "Consequence3", is not under the control of the decision maker. A chance node is usually represented by a circle.

Figure 3.20: Chance Node

A utility node, also known as a value node or terminal node, represents the value or utility of an outcome. It is usually the end node in a branch of the decision tree. In our problem, all three "consequences" can be viewed as utility nodes. Since the node "Alternative2" also has a value and does not branch out any further, it is also a utility node.

3.9.3 Building a Decision Tree

This section describes the process of building a decision tree with the help of an example. Consider the following decision problem: A company has been given the task of designing a structure which is predominantly steel. After careful consideration, the engineers of the company have decided to use steel piles for the foundation of the structure. The engineer in charge of the foundation now has to decide whether to buy the steel piles from a manufacturer or to manufacture the piles in-house. This is a classical example of a decision that an engineer has to make in everyday practice. Normally, an engineer would study previous cases where such a situation was encountered and see what decision was made then. But very often the problems encountered in engineering are unique, and several factors change over a period of time. For example, previously the company may have opted to buy the piles, but it may since have developed substantially and may now have the capacity to manufacture the piles on its own. In such cases, a more rational way of making decisions is by using a decision tree. The first step in building a decision tree is to establish all the available options. In this case the engineer can either buy the piles or manufacture them.
Figure 3.21: Building a decision tree - 1 (decision node with two branches: Manufacture, Buy)

The next step is to analyze the various components of each of the two alternatives. Let us analyze the "Buy" option first.

Figure 3.22: Building a decision tree - 2 (Buy branch, expected cost 2,575,000: cost of piles 2,500,000 = number of piles (25) x expected cost per pile, with bid prices of 25% 90,000, 50% 100,000, 25% 110,000; procurement cost 75,000 = transportation cost 50,000 + miscellaneous cost 25,000)

If the engineer decides to buy the piles, the cost incurred would be the sum of the cost of the piles and the procurement costs. The cost of the piles would be a function of the number of piles and the cost per pile. Since the engineer already knows the number of piles required for the structure, it is a fixed quantity. However, the cost per pile may be variable, since it is determined by the vendors. In this problem, it is assumed that there are three possible bid prices and a probability is associated with each of the costs. It is in determining these probability values that "intuition" or "experience" may be put to best use. The procurement cost is the sum of the transportation cost and miscellaneous costs. This is represented in the form of a decision tree in Figure 3.22. Now consider the "Manufacture" branch of the decision tree (shown in Figure 3.23). The costs incurred if the piles were to be manufactured by the company would include the material cost, labor cost, fabrication cost and transportation cost. The material cost would be a function of the number of piles, the unit cost of material and the amount of material required. The labor cost would depend on the number of labor days required and the cost of each labor day. However, the number of labor days may vary depending upon whether the manufacturing process is completed before schedule, as per schedule or later than schedule.
The fabrication cost would depend upon the type of machines that will be used.

Figure 3.23: Building a decision tree - 3 (Manufacture branch, expected cost 2,295,005: material cost 1,250,000 = 25 piles x 100 units of material x 500 per unit; labor cost 15,005 = expected number of days (33.3% early at 10 days, 33.3% on time at 15 days, 33.4% late at 20 days) x 1,000 per day; fabrication cost 975,000 = 40% simple machines at 750,000, 55% complex machines at 1,000,000, 5% cannot fabricate at 2,500,000; transportation cost 30,000; miscellaneous cost 25,000)

The fabrication cost will be least if the piles can be fabricated using simple machines. The use of complex machines in the fabrication process will result in a slightly higher fabrication cost. However, if the company is unable to fabricate the piles, it would have to buy them, and hence the fabrication cost would be at its maximum. The probabilities of the fabrication cost taking any of these three values may be assigned based on experience or intuition. For simplicity, the transportation and miscellaneous costs are assumed to be invariant.

Figure 3.24: Building a decision tree - 4 (the complete tree, with the Manufacture branch rolled back against the Buy branch, whose expected cost is 2,575,000)

Having analyzed each of the options in detail, the engineer is now in a position to make a rational decision.
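The rollback computation behind Figures 3.22 and 3.23 can be sketched as follows, using the branch values and probabilities given above (the Figure 3.23 value of 2,500,000 is assumed for the "cannot fabricate" outcome; Figure 3.24 shows a variant with a different value for that branch).

```python
def expected(branches):
    """Expected value of a chance node given (probability, value) pairs."""
    assert abs(sum(p for p, _ in branches) - 1.0) < 1e-9
    return sum(p * v for p, v in branches)

# Buy branch: 25 piles at an uncertain bid price, plus procurement
cost_per_pile = expected([(0.25, 90_000), (0.50, 100_000), (0.25, 110_000)])
buy = 25 * cost_per_pile + 50_000 + 25_000  # piles + transport + misc

# Manufacture branch: material + labor + fabrication + transport + misc
material = 25 * 100 * 500  # piles x units of material x cost per unit
labor = expected([(0.333, 10), (0.333, 15), (0.334, 20)]) * 1_000
fabrication = expected([(0.40, 750_000), (0.55, 1_000_000),
                        (0.05, 2_500_000)])
manufacture = material + labor + fabrication + 30_000 + 25_000

# Roll back to the decision node: choose the cheaper alternative
best = min(("Buy", buy), ("Manufacture", manufacture), key=lambda x: x[1])
```

The computation reproduces the figures' rolled-back values of 2,575,000 for buying and 2,295,005 for manufacturing, so the cheaper alternative at the decision node is to manufacture.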
If cost is the driving factor, the above decision tree clearly indicates to the engineer that manufacturing the piles is the better alternative. It is evident from the above illustration that decision trees can provide a better rationale for choosing between alternatives than mere intuition or experience. It should be noted that subjective information and intuition are still used in determining the probability values in the decision tree calculations.

3.9.4 Limitations of Decision Trees

Having seen the advantages of using decision trees, it is equally important to know their limitations. It is essential to keep in mind that decision trees, like all decision making tools and theories, are in no way a replacement for intuition or common sense. But using decision trees along with experience and subjective information can lead to better decision making. For example, in the above problem, probability values were used at a number of points. These probability values can be arrived at only on the basis of expert opinion or subjective information. If the subjective information used does not closely represent the actual situation, then the outcome of the decision tree will not be reliable either. Even though decision trees can serve as effective tools to comprehensively analyze alternatives, the size of a decision tree increases rapidly with the number of chance nodes and the number of consequences at each chance node. This may result in large decision trees for representing even fairly small problems. Another limitation of decision trees is that they can represent only those scenarios which have discrete outcomes. Problems in which the outcomes are continuous cannot be represented using decision trees.

3.10 Influence Diagrams

3.10.1 Introduction

An influence diagram is an intuitive graphical view of the structure of a model, consisting of nodes and arrows.
Influence diagrams provide a clear visual way to express uncertain knowledge about the state of the world, decisions, objectives, and their interrelationships [9]. Influence diagrams can be thought of as diagrammatic representations of processes as perceived by a decision maker. Influence diagrams show the "influence" or effect of each variable in the model on the other variables, hence the name. It was mentioned earlier that the size of a decision tree increases with the number of chance nodes. This limitation is overcome by influence diagrams. As in the case of decision trees, influence diagrams contain three types of nodes: chance nodes, decision nodes and utility nodes. In addition, influence diagrams have arcs. Arcs are essentially arrows connecting one node to another. There are two types of arcs: influence arcs and informational arcs. In an influence arc, the node at the tail of the arc influences the node at the head of the arc. An arc directed into a decision node is an informational arc; such arcs represent the information that will be available when the decision is made.

3.10.2 Illustrative Example

Note: The following example has been adapted from Probability, Statistics and Decision for Civil Engineers by Jack R. Benjamin and C. Allin Cornell. An engineer has to choose the length of a pile that has to be driven into a region where the bedrock may be either at a depth of 10 m or 15 m. If the engineer chooses a pile length equal to the depth of the bedrock, no loss will occur. However, if the engineer chooses a pile length of 10 m and the actual depth of the bedrock turns out to be 15 m, splicing and welding of an extra 5 m length of pile will be required, resulting in a loss of $40,000. If, on the other hand, the engineer chooses a 15 m pile and the actual depth of the bedrock turns out to be 10 m, a 5 m length of pile will have to be cut off, resulting in a loss of $20,000.
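The expected-loss comparison underlying this example can be sketched as follows, assuming each bedrock depth is equally likely, as the 50% branches of the decision tree in Figure 3.25 indicate.

```python
# Probability of each bedrock depth (50/50, as in Figure 3.25)
P_BEDROCK = {10: 0.5, 15: 0.5}

# (pile length, bedrock depth) -> loss in dollars
LOSS = {(10, 10): 0, (10, 15): 40_000,   # splice and weld an extra 5 m
        (15, 10): 20_000, (15, 15): 0}   # cut off the surplus 5 m

def expected_loss(pile_length):
    """Roll back the chance node for one choice of pile length."""
    return sum(p * LOSS[(pile_length, depth)]
               for depth, p in P_BEDROCK.items())

best_length = min((10, 15), key=expected_loss)
```

With these probabilities the 15 m pile has an expected loss of $10,000 against $20,000 for the 10 m pile, matching the rolled-back values shown at the decision node in Figure 3.25.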
Let us compare the decision tree and the influence diagram for this problem. The scenario can be represented by a decision tree as shown in Figure 3.25.

Figure 3.25: Representation using Decision Tree (10 m pile: 50% no loss, 50% loss of 40,000, expected loss 20,000; 15 m pile: 50% loss of 20,000, 50% no loss, expected loss 10,000)

The same problem can be represented using an influence diagram as shown in Figure 3.26.

Figure 3.26: Representation using Influence Diagram ("Choice of Pile Length" decision node with the associated chance and utility nodes)

From the above example it can be seen that an influence diagram can represent a problem in a much more compact way than a decision tree. This is extremely advantageous in the case of large problems with several chance nodes, where the use of a decision tree may be cumbersome. However, it must be pointed out that influence diagrams do not show as much detail of the decision problem as decision trees.

4. Decision Making in Business and Economics

4.1 Introduction

The use of decision making tools in business and economics is widespread. Having discussed some common decision making theories and analytical tools previously, this chapter discusses their application in the fields of business and economics. However, since this project is primarily about decision making in engineering, the issue of decision making in business and economics is not explored in much detail. Two examples are cited from business and economics where decision science has been used to make logical decisions.

4.2 Real Estate Example

The following example describes a situation where the choice of buying a property is made using a decision tree. Problem: A piece of property can be purchased for $100,000. In the event of the property being rezoned as business land, it can be sold for $150,000. It is estimated that there is a 60% chance of the rezone request succeeding. If the rezoning attempt fails, the property can either be dumped for $70,000 or an appeal can be made to the zoning board.
But the appeal has only a 10% chance of success. The cost of applying for the zoning change and of appealing an unfavorable ruling is $5,000 each. Should the property be bought?

Solution: Solving such a decision problem using a decision tree is common practice in business. The two alternatives are analyzed separately and the expected payoff in each case is determined.

Figure 4.1: Real Estate Example (Do Nothing: payoff 0. Buy Property for 100,000 and Apply for Rezoning at 5,000: 60% Accepted, sell for 150,000, net payoff 45,000; 40% Denied, then either Dump Property for 70,000, net payoff −35,000, or Appeal for a further 5,000, with a 10% chance of success selling at 250,000, net payoff 140,000, and a 90% chance of failure, dumping at 70,000, net payoff −40,000; expected value of the Appeal branch −22,000, and of Buy Property 18,200)

The expected payoff corresponding to the do-nothing option is $0. On the other hand, if the investor decides to buy the property, the expected value is $18,200. Hence the best alternative in this case, based on a risk neutral utility function, is to buy the property. As per the above decision tree, after buying the property the owner should apply for rezoning and, in the event of the request being rejected, should make an appeal.

Source: http://www.vanguardsw.com/dphelp4/dph00080.htm

4.3 The Gerber-Phthalates Controversy

This example shows how Gerber Products Ltd used decision trees to decide whether to continue using polyvinyl chloride (PVC) in its products. PVC is a common material used in several household products such as containers and toys. To soften PVC, a chemical plasticizer known as "phthalates" is added. In 1998, Greenpeace, an environmental group, announced that lab testing had shown that phthalates were carcinogenic in lab rats. Since Gerber had over 75 products that used phthalates, many of them used orally by children, it had to respond quickly. The Consumer Product Safety Commission (CPSC) informed Gerber that it was going to issue a press release telling parents about the potential threats of phthalates. The Gerber management decided to assess this problem using a decision tree.
This decision tree is shown in Figure 4.2.

Figure 4.2: Gerber-Phthalates Controversy (Proactive, expected value −168,750: 50% simple report, with an 80% sales increase of 1,000,000 and a 20% sales decrease of 1,000,000; 50% recall, with a 25% chance of unchanged sales and a 75% sales decrease of 1,250,000. Reactive, expected value −2,700,000: 50% simple report, with a 25% chance of unchanged sales and a 75% sales decrease of 2,000,000; 50% recall, with a 20% sales increase of 500,000 and an 80% sales decrease of 5,000,000)

Gerber basically had two alternatives: either to be reactive, i.e. wait for the press release, observe the consumer response and then act accordingly, or to be proactive and address the problem irrespective of what the public response to the CPSC report would be. In the press release, the CPSC would either issue a recall of all products that contained phthalates or issue a report simply expressing concern. From Gerber's point of view, the latter outcome would be more favorable than the former. Gerber predicted eight possible outcomes, which were represented by a decision tree (shown in Figure 4.2). If Gerber adopted a proactive approach (i.e. initiated steps to find a solution to the phthalates problem) and the CPSC issued a simple report, it was estimated that there was an 80% chance that sales would go up by a million dollars, since Gerber would have reacted faster than its competitors. It was also estimated that there was a 20% chance of sales declining by a million dollars owing to bad press coverage. If, however, the CPSC issued a recall, it was estimated that there was a 25% chance of maintaining current sales using the proactive approach and a 75% chance of sales decreasing by 1.25 million dollars. If Gerber adopted a reactive approach (i.e. waited for the CPSC report to be issued) and the CPSC issued a favorable report, it was estimated that there was a 25% chance that sales would remain unaffected and a 75% chance that sales would drop by 2 million dollars.
If, on the other hand, the CPSC issued a recall, it was estimated that there was a 20% chance of sales increasing by half a million dollars, because it was felt that some competitors would be less prepared than Gerber. It was also estimated that there was an 80% chance of sales decreasing by five million dollars. The decision tree indicated that being proactive would be the better option, and Gerber initiated efforts to find a solution to the phthalates problem irrespective of what the CPSC report would say. Following Gerber's decision, the CPSC issued a favorable report and the use of phthalates was approved.

Source: http://gbr.pepperdine.edu/993/tree.html

5. Application of Decision Science to Steel Structures Engineering - An Introduction to Case Studies

In Chapter 2, several existing theories that can be used for decision making under conditions of risk, uncertainty and incomplete knowledge were discussed. In Chapter 3, several mathematical tools and techniques that may be used as aids for the implementation of decision theories in practice were described. The theories and tools discussed in both these chapters can be applied in any field when a decision making situation is faced. Wherever possible, examples demonstrating their application in steel structures engineering have been presented. Most of these examples are based on hypothetical decision situations. However, the situations considered are very similar to those encountered in everyday practice. The need to demonstrate the application of decision tools in real problems led to the realization of two case studies: the Atacama Cosmology Telescope and Corrosion Prevention in Steel Bridges. In both case studies, some of the tools described in this thesis were used during the decision making phase of the projects. Owing to the specific needs of each of these projects, only a few of the decision making tools described were used.
However, it is felt that the essence of these case studies is to show that the decision making tools discussed in this thesis are not of mere academic interest but have practical applications. In Case Study 1: The Atacama Cosmology Telescope, the use of Monte Carlo simulation, expected values and decision trees for the error budgeting and cost efficiency of the telescope is demonstrated. In Case Study 2: Corrosion Prevention in Steel Bridges, the use of Monte Carlo simulation and probability distributions in the selection of bridge maintenance strategies is shown.

6. Case Study 1: The Atacama Cosmology Telescope (ACT)

6.1 Introduction

The application of tools such as Monte Carlo simulation, expected monetary value and decision trees, discussed in earlier chapters, is demonstrated in this case study. The advantages of combining mathematical tools with commercially available decision making software to make intelligent engineering decisions are illustrated in this chapter using the Atacama Cosmology Telescope (ACT) as an example. A brief account of the trends in cosmology is given to serve as an introduction to the ACT project. Man's curiosity to learn about the origin and composition of the universe has led to exponential growth in the field of cosmology. Precise mapping of the temperature anisotropy in the Cosmic Microwave Background promises valuable information that may allow scientists to answer several age-old questions about our universe. Several projects have been launched in this direction. The Atacama Cosmology Telescope is one such project, which aims at producing clearer and more refined images of the CMB by making use of new bolometer array technology.

6.2 Cosmology and the Cosmic Microwave Background (CMB)

Cosmology is the branch of science which deals with the study of the origin, present state and ultimate fate of the universe. It involves understanding the structure, composition and dynamics of the universe [18].
Like all branches of science, cosmology involves the proposition of theories and hypotheses. These theories and hypotheses are then accepted, modified or rejected based on their ability to explain observed phenomena. In the beginning, the universe was a hot, dense plasma composed of ionized particles (electrons and protons) and photons. Under the extreme conditions that prevailed, the ionized particles emitted radiation. However, this radiation was trapped within, because of the high density of the universe. But as the universe expanded and cooled, the electrons and protons combined to form neutral atoms, and the ability of the universe to prevent the emission of radiation was lost. Thus, radiation was emitted in the form of photons. These photons form what cosmologists call the Cosmic Microwave Background (CMB). It has been observed that the CMB is fairly uniform. This is a direct indication that it originates from a time when the structure of the universe was fairly simple, long before galaxies and stars were formed. It is by observing the CMB that cosmologists hope to answer questions about the origin, composition and other mysteries of our universe that continue to baffle man even today. In 1990 the Cosmic Background Explorer [30], more popularly known as COBE, launched by NASA, detected variations in temperature of the order of one in 100,000 in the CMB from one region to another (Figure 6.1). These variations give scientists valuable information for understanding how matter was distributed in the early universe and bring them closer to answering the question of how the primordial plasma evolved into what we see today.

Figure 6.1: Map of temperature anisotropy in the CMB by COBE

The Wilkinson Microwave Anisotropy Probe (WMAP) [31], launched by NASA in 2001, was one of the most prominent projects in this direction. It mapped the temperature variation in the CMB to a much finer resolution (Figure 6.2) than COBE.
The results obtained by WMAP allowed researchers to estimate the age of the universe more precisely and also gave them better insight into the geometry and composition of the universe. The results also validate some of the predictions of the Big Bang theory and the Inflation theory.

Figure 6.2: Map of temperature anisotropy in the CMB by WMAP

The success of WMAP has led to increasing interest and effort in mapping the temperature anisotropy in the CMB with higher precision. The Atacama Cosmology Telescope (ACT) is one among the several projects that have been launched in this direction.

6.3 The Atacama Cosmology Telescope (ACT)

The Atacama Cosmology Telescope is an experimental collaboration between Princeton, U. Pennsylvania, Rutgers, NASA Goddard, NIST and several other smaller partners [15]. It aims at producing arc-minute resolution and micro-Kelvin sensitivity maps of the microwave background temperature over 200 square degrees of the sky in three frequency bands. The ACT will combine thousand-element CCD-like bolometric arrays with a custom-designed six-meter telescope to achieve high sensitivity, angular resolution and control over systematic errors [16]. The ACT will be positioned on Cerro Toco, in the Atacama Desert in the Chilean Andes, at an elevation of 5200 meters. It is 35 km east of San Pedro de Atacama, 130 km southeast of the large mining town of Calama and 275 km northeast of the port of Antofagasta [32].
6.4 Major Components of the Atacama Cosmology Telescope

The major components of the Atacama Cosmology Telescope (some of which are shown in figure 6.3) are listed below:

• Primary Reflector Panel
• Secondary Reflector Panel
• Panel Support
• Guard Ring on Primary
• Primary Back Up Structure
• Elevation Frame
• Telescope Elevation Frame
• Secondary Actuators
• Secondary Reflector Support
• Receiver Cabin
• Receiver Cabin Support
• Exterior Shielding
• Connections: Ball Screw to Elevation and Azimuth Frames
• Connections to Elevation Bearings
• Telescope Azimuth Bearing
• Pedestal
• Telescope Foundation
• Surface Cladding
• Ground Screen Frame
• Ground Screen Foundation

Figure 6.3: Components of the Atacama Cosmology Telescope (picture obtained from AMEC Dynamic Structures). All the components listed were part of the preliminary design of the telescope; several changes have been made since.

6.5 Important Considerations

While manufacturing a telescope, several factors have to be considered. The accuracy of the telescope is a major consideration, since precise mapping of the temperature anisotropy in the CMB is of primary importance to cosmologists. Owing to the uniqueness of telescopes, it is not uncommon for engineers to be faced with new challenges while designing and manufacturing them. While working in such unfamiliar situations, it is advantageous to keep track of the initial cost estimate, based on the risk associated with the different components, and of how it changes as new information becomes available. In the following sections two decision tree models are described: one addressing the issue of accuracy and the other dealing with cost.

6.6 Accuracy

As mentioned previously, accuracy is of critical importance as it determines the precision of the telescope. The error present in a telescope is represented by an error budget.
An error budget defines the total error in a telescope, the various sources that contribute to the total error and how much error each source contributes. The total error is calculated by combining the errors from the various sources. The square root of the sum of the squares (RSS) of the errors from the various sources is taken as the total error of the telescope. This can be mathematically expressed as:

Total Error = √(Error₁² + Error₂² + … + Errorₙ²)

where Error₁, Error₂, …, Errorₙ are the errors from n sources. The RSS method magnifies the contribution of large errors while it reduces the effects of smaller errors. The RSS method is commonly used to estimate errors in telescopes.

6.6.1 Pointing Error

The pointing error of a telescope at any point in time may be defined as the difference between the direction in which the telescope has to point (the direction commanded) and the direction in which it is actually pointing. Pointing error is expressed in arc seconds. Several factors, such as wind, temperature gradients and the movement of the various components of the telescope, contribute to pointing error. For an existing telescope the pointing error may be determined by actual measurements, but during the developmental stages the pointing error is estimated using finite element models. Table 6-1 gives the values of pointing errors estimated using a finite element model by AMEC Dynamic Structures. The total pointing error (i.e. the calculated RSS value) is also given.

Table 6-1: Pointing Error

Source of Error                    Day Error (arcsec)    Night Error (arcsec)
Gust wind component                1.2                   2.4
Absolute temperature               1.0                   0.5
Temperature gradient               1.5                   0.0
Servo error                        0.8                   0.8
Dynamical servo                    2.7                   2.7
Other                              1.0                   1.0
Total RSS Error with Dyn. Servo    3.7                   3.9
Total RSS Error w/o Dyn. Servo     2.5                   2.8

Engineering involves representing physical phenomena by numerical models. While doing so, the engineer is limited by his assumptions, the unavailability of complete information and several other uncertainties.
It is therefore evident that the results that he obtains from his models are also subject to uncertainties. The values of error shown in table 6-1 were obtained using numerical models and hence have an element of uncertainty associated with them. Since the total error is calculated from these values, it too is uncertain. Therefore, a probabilistic treatment of these variables is a more realistic representation of the situation. Strictly speaking, an appropriate distribution for each of the values in the above table should be arrived at by collecting a large number of data points and fitting a curve to them. However, since such a method would be cumbersome and impractical, an alternative method using subjective values of probability was used. A normal distribution was used to represent the error from each source. The RMS value of the error was used as the mean value of the distribution. The standard deviation was arrived at based on a "confidence factor" obtained from engineers who worked on the project. It is worth mentioning that the perception of risk varies widely from one person to another, and even when the same person is presented with different situations. Although this discrepancy cannot be eliminated, it was weighted down using a fudge factor. Having allocated a distribution to the error from each source, the total error was calculated using Monte Carlo simulation. The resulting distribution represented the possible range of values of the total error.

Figure 6.4: Distribution of pointing error

The distribution obtained for the total pointing error during the night is shown in figure 6.4. The decision tree models that were used for obtaining the distributions for pointing error are shown in Appendix C.
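The RSS combination and the Monte Carlo treatment described above can be sketched in a few lines of Python. The means are the night-time entries of Table 6-1; the 10% coefficient of variation is an assumed stand-in for the subjective confidence-factor values elicited from the project engineers, which are not reproduced in this thesis:

```python
import math
import random

# Night-time pointing error sources from Table 6-1 (arcsec), used as means.
night_means = [2.4, 0.5, 0.0, 0.8, 2.7, 1.0]

def rss(values):
    """Root-sum-square combination of independent error sources."""
    return math.sqrt(sum(v * v for v in values))

# Deterministic check against the table: total night RSS error.
print(round(rss(night_means), 1))  # prints 3.9

# Probabilistic treatment: each source becomes a normal distribution.
# The 10% coefficient of variation below is an ASSUMED stand-in for the
# project's subjective confidence factors.
random.seed(0)
sigmas = [0.10 * m for m in night_means]
totals = [rss(random.gauss(m, s) for m, s in zip(night_means, sigmas))
          for _ in range(100_000)]

mean_total = sum(totals) / len(totals)
# The simulated distribution is centred near the deterministic 3.9 arcsec
# but exhibits the plausible range of the total error (cf. figure 6.4).
print(round(mean_total, 1))  # prints 3.9
```

The same pattern, with the entries of table 6-2 as means, reproduces a distribution of total surface accuracy like the one in figure 6.5.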
Using a distribution for the pointing error rather than a single value gives the engineer an idea of the range of values that he can expect from his design and the probability of occurrence of those values. In short, a distribution gives the engineer a better understanding of the performance of his model.

6.6.2 Surface Accuracy

The smoothness of the reflecting surface plays a critical role in determining the quality of the image that the telescope produces. A surface accuracy estimate is a measure of this smoothness. Several factors, such as gravity, temperature, wind and errors during the manufacturing process, affect surface accuracy. The overall surface accuracy estimate was determined using a finite element model by AMEC Dynamic Structures. This is shown in table 6-2.

Table 6-2: Overall surface accuracy estimate

Error Source                                       RMS Error
Panels
  Manufacturing (including measurement & aging)    8.6
  Gravity                                          3.0
  Wind                                             1.0
  Absolute Temperature                             6.0
  Temperature Gradients                            10.0
  Total Panel (RSS)                                14.8
Back-up Structure
  Gravity (Ideal)                                  10.0
  Gravity (Departure from Ideal)                   2.0
  Steady Component of Wind                         5.1
  Absolute Temperature                             5.0
  Temperature Gradients                            10.0
  Total Back-up Structure (RSS)                    16.0
Panel Mounting
  Absolute Temperature                             0.0 (included in panel entry)
  Temperature Gradients                            0.0
  Panel Location in Plane                          0.0 (included in panel entry)
  Panel Adjustment Perpendicular to Plane          5.0
  Gravity                                          0.0 (included in panel entry)
  Wind                                             0.0 (included in panel entry)
  Total Panel Mounting (RSS)                       5.0
Secondary Mirror
  Manufacturing (including measurement & aging)    10.0
  Gravity                                          6.0
  Wind                                             4.0
  Absolute Temperature                             8.0
  Temperature Gradients                            12.0
  Alignment                                        8.0
  Total Secondary Mirror (RSS)                     20.6
Holography
  Measurement                                      12.0
  Total Holography (RSS)                           12.0
Other Errors not Included Above                    5.0
Total RSS                                          33.0

As in the case of pointing errors, an element of uncertainty is associated with the surface errors shown in table 6-2. Hence a probabilistic approach is advantageous. Following the same procedure as for the pointing error estimates, a distribution may be obtained for the total surface accuracy. This is shown in figure 6.5. The model used to obtain this output is presented in Appendix C.

Figure 6.5: Distribution of surface accuracy

6.7 Cost

6.7.1 Introduction

In the corporate world, engineering firms are manufacturers and engineering services are goods or commodities. All businesses are governed by profits, and engineering is no exception. During the course of any project, an engineer, in addition to facing technical challenges, has to deal with the business aspects of engineering. While doing so, careful consideration of the cost associated with the different stages of a project is essential. Formulating such a cost estimate becomes even more challenging when the engineer is not sure what methods will be implemented to realize an objective. This vagueness is very often the case during the initial stages of a project, when several intricacies are still unknown, and is especially true for projects as unique as the ACT. In such situations it is extremely advantageous to have a model which not only outlines the various possible scenarios but also associates tentative costs with each of them. The model needs to be sufficiently flexible that changes can be incorporated as new and relevant information becomes available. Maintaining such a model has several advantages. First, it presents an opportunity for the engineer to consider all possible circumstances.
Second, using such a model, the engineer can find out how the success or failure of one event affects the total cost of his project. The model also gives the engineer an idea of where he should focus his efforts to attain maximum benefits. A decision tree model which incorporated these requirements was used during the ACT project. This is explained in the following sections.

6.7.2 Model Construction

The first step in constructing the model was listing the major components of the telescope. These components (mentioned in section 6.4) were identified based on the load path (load path chart in Appendix C). The various steps involved in the design, manufacture or procurement of each of these components were identified, and the possible alternatives in each step were worked out. A cost was attached to each of these alternatives. In situations where more than one outcome was possible, probabilities were attached to each outcome and the expected monetary value was used to find the most likely cost. The probability values used were subjective estimates obtained from the personnel concerned. Once a decision tree model was set up for each component, the model for the telescope was obtained by simply combining the component models. These steps are shown in figure 6.6:

1. Identify major telescope components
2. Identify the various steps needed to construct each component
3. Identify the various outcomes in each step
4. Attach costs and probabilities to each outcome
5. Construct the component model
6. Combine the component models to obtain the telescope model

Figure 6.6: Model Construction - Steps

6.7.3 Component Model

The example described in this section is purely for illustration purposes and is not necessarily the actual model used during the ACT project. The various steps involved in the construction of a component model are discussed in this section by illustrating how the decision tree model for the panel support was set up.
After careful consideration, it was decided that the process of making the panel support could be divided into three major steps: design, material procurement and fabrication.

Design: Three possible scenarios were foreseen in the design phase. It could be possible that the first design proposed is successful, in which case the cost incurred would be the least. A high probability of occurrence was placed on this outcome because of the experience of the design team. It is also possible that the first design proposed is unsuccessful but, with sufficient modification, meets the requirement. The cost incurred in this case would be a little greater than in the first outcome. In this example, the combined probability of occurrence of outcomes one and two was taken as 95%, since it was felt that successfully designing the panel support was a very likely scenario. A probability of 5% was placed on failure in design, since such an outcome would be very rare but could not be ruled out because of the highly unpredictable nature of the project. The cost incurred in the event of this outcome would be the greatest. The most likely cost incurred during design was calculated as the expected monetary value of the three outcomes discussed above, weighted by their respective probabilities of occurrence. This is represented in the form of a decision tree in figure 6.7.

Figure 6.7: Panel Support: Decision Tree (Design). The base design cost is 5040 (design hours × hourly engineering rate); a modified design costs 1.5 times the base (7560) and a failed design 3 times (15120). With outcome probabilities of 70%, 25% and 5%, the expected design cost is 6174.

Material Procurement: The configuration of the telescope was such that each panel had four supports.
Since the cost of each support and the number of panels in the telescope were known, the total cost of material could be determined. Though several issues, such as variability in unit cost, could be considered, for simplicity a deterministic approach was adopted.

Figure 6.8: Panel Support: Decision Tree (Material Procurement). The material cost is 5600, obtained as 70 panels × 4 supports per panel × a cost of 20 per support.

Fabrication: As in the design step, three possible outcomes were predicted in the fabrication stage. The first outcome would be successful fabrication using simple machines. This would have the least cost. Since fabricating the supports was a relatively simple task, considering the sophisticated fabrication facilities that were available, a high probability was placed on this outcome. The event of having to use complex machines to fabricate the supports was considered as the second outcome. Finally, the event of not being able to fabricate the support was taken as the third outcome. A low probability of occurrence was attached to the two latter outcomes. The branch of the decision tree corresponding to fabrication is shown in figure 6.9.

Figure 6.9: Panel Support: Decision Tree (Fabrication). The base fabrication cost is 20000; fabrication with complex machining costs 1.5 times the base (30000) and failure 3 times (60000). With outcome probabilities of 90%, 5% and 5%, the expected fabrication cost is 22500.

The total cost of the panel support would be the sum of the design, material and fabrication costs. The cost calculated using the above model would be the most likely cost incurred. The cost of each component of the telescope corresponding to the best possible scenario was also calculated.
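Pieced together, the panel support model above reduces to a short expected monetary value calculation. The sketch below reproduces it in Python, using the probabilities and costs shown in figures 6.7 to 6.9; it mirrors the illustrative example of the text, not the actual project model:

```python
def emv(outcomes):
    """Expected monetary value: sum of probability-weighted costs."""
    assert abs(sum(p for p, _ in outcomes) - 1.0) < 1e-9
    return sum(p * cost for p, cost in outcomes)

# Design branch: base cost 5040; modification costs 1.5x, failure 3x.
base_design = 5040
ps_design = emv([(0.70, base_design),
                 (0.25, 1.5 * base_design),
                 (0.05, 3.0 * base_design)])

# Material procurement branch: deterministic.
ps_materials = 70 * 4 * 20   # 70 panels x 4 supports per panel x 20 per support

# Fabrication branch: base cost 20000; complex machining 1.5x, failure 3x.
base_fab = 20000
ps_fabrication = emv([(0.90, base_fab),
                      (0.05, 1.5 * base_fab),
                      (0.05, 3.0 * base_fab)])

# Most likely (expected) cost of the panel support component.
total_panel_support = ps_design + ps_materials + ps_fabrication
print(round(ps_design), round(ps_fabrication), round(total_panel_support))
# prints 6174 22500 34274
```

Replacing each branch by its best outcome (5040 + 5600 + 20000) gives the most favorable cost of the component, the second quantity used in section 6.7.4.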
This cost corresponded to the cost that would be incurred if the most favorable outcomes occurred. The best possible scenario for the panel support is shown in figure 6.10.

Figure 6.10: Panel Support: Decision Tree (Most favorable scenario). In this scenario the design succeeds at a cost of 5040, materials cost 5600 (70 panels × 4 supports per panel × 20 per support) and fabrication succeeds at 20000.

Separate decision tree models were created for each component of the telescope, and the combination of all these component models constituted the model for the entire telescope.

6.7.4 Efficiency of Scenario

The models described in the previous section can be used to calculate two distinct values of cost for each telescope component: the cost corresponding to the most likely scenario and the cost corresponding to the most favorable scenario. With these values, the following formula can be used to understand how efficient the present scenario is:

Inefficiency (%) = (Cost of most likely scenario / Cost of most favorable scenario) × 100 − 100

The greater the value computed using the above formula, the lower the efficiency of the scenario. If a scenario is inefficient, i.e. its most likely cost is much greater than the most favorable cost, it is advisable to direct resources and effort toward improving it. On the other hand, if the most likely cost of a scenario is not very different from the most favorable cost, it is advisable to leave it as such. This is illustrated with the help of an example. Consider the design of the primary reflector panel.
The cost corresponding to the most likely scenario is given by the decision tree in figure 6.11.

Figure 6.11: Primary reflector panel: Decision Tree (Design). The base design cost is again 5040, with modification and failure costing 1.5 and 3 times the base; with outcome probabilities of 94%, 5% and 1%, the expected design cost is 5266.8 dollars.

The cost corresponding to the most favorable scenario is 5040 dollars. The inefficiency in this case is given by:

(5266.8 / 5040) × 100 − 100 = 4.5%

Since the value of inefficiency is quite low, it is not necessary to try to improve the most likely cost. However, in the case of designing the guard ring, it was observed that the most likely cost was 4042.5 dollars and the most favorable cost was 3300 dollars. The inefficiency of the scenario in this case was 22.5%. This indicated that effort had to be focused on improving the design of the guard ring. It is worth mentioning that the efficiency of a particular scenario can be calculated in different ways; the formula indicated in this section is simply the preference of the author.

6.8 Conclusions

1. Accounting for uncertainties while calculating the pointing error and surface accuracy gives the engineer a probability distribution for the total error as opposed to a single number. Rather than comparing a single number with the client's requirement, the engineer can compare a probability distribution with the client's requirement. This gives the engineer a better idea of how precise or imprecise his design is, so that he can change it accordingly.

2. The use of a decision tree cost model forces the engineer to think about all possible scenarios or outcomes.
This ensures that no event is overlooked and therefore the engineer is prepared for all types of situations.

3. Using the most likely cost and the most favorable cost, the efficiency of the present scenario may be determined. This was extremely useful in determining where effort had to be directed, and it resulted in the avoidance of unnecessary work that did not provide much benefit.

4. Since the decision tree model produced is essentially a visual representation of the engineer's description of the problem and the risks associated with each step, it serves as an extremely useful tool for conveying information from one engineer to another.

7. Case Study 2: Protection of Steel Bridges in British Columbia against Corrosion using Probabilistic Methods

7.1 Introduction

Corrosion prevention is of great importance and has to be appropriately addressed during the maintenance and repair phases of steel bridges. The development of innovative and effective coating systems has greatly helped in the protection of steel bridges against corrosion. While it is extremely vital to make use of these efficient coating systems, it is equally important to develop procedures and models that will help in making decisions such as which maintenance strategies to choose, when to apply them and when to reapply them. While making such decisions, it is essential to evaluate the performance of each strategy over the entire maintenance life of the bridge. This task is particularly challenging because of the uncertainty inherent in the various parameters which determine the performance of each strategy. The model proposed in this case study is intended to help the decision maker evaluate the maintenance strategies and choose the best strategy in order to achieve a desired performance level over the maintenance life of the bridge. The evaluation of the performance of each maintenance strategy can be done only in a probabilistic sense because of the uncertainty associated with the input parameters.
The proposed model predicts the equivalent annual cost and aesthetic performance of using a particular strategy, together with the uncertainty associated with the prediction. Based on the output from the model and the subjective information at hand, the user can choose the best maintenance strategy.

Monte Carlo simulation has been used to represent the uncertainty associated with the various strategies. A wide array of decision making software is available in the market to implement the ideas discussed above in a practical manner. @Risk 4.5, one such program, which is an add-in to Microsoft Excel, has been used in the implementation phase.

Since 1994, a research program has been carried out jointly by the Ministry of Transportation of British Columbia and the University of British Columbia to address the problem of corrosion in BC bridges. This project follows suit in the wide array of research projects that were part of the corrosion prevention research program. This project is also part of an investigation into innovative methods to enhance the economy of steel structures, funded by the Steel Structures Education Foundation.

7.2 Corrosion and Corrosion Rating

In the presence of an electrolyte, when there is a difference in electrochemical potential between two regions of a metal, oxidation takes place at the anode and reduction takes place at the cathode. Oxidation at the anode results in the expulsion of metal ions, which combine with oxygen to form their corresponding oxides. This results in depletion of metal at the anode while the cathode remains intact. Simply put, corrosion is a chemical or electrochemical reaction between a metal and its environment which causes deterioration in the properties of the metal. The rate of corrosion depends on the metal, the environmental conditions it is subjected to and the presence of electrolytes.
The American Society for Testing and Materials (ASTM D610) suggests a "Standard Test Method for Evaluating Degree of Rusting on Painted Steel Surfaces" which may be used to evaluate the extent of corrosion on steel surfaces. The corrosion rating system suggested by ASTM D610 is given in Appendix D.

7.3 Climatic Diversity of British Columbia

The province of British Columbia is well known for its climatic diversity. The northern and interior regions of BC experience extremely cold arctic weather, while the lower mainland, which is protected from the cold northern arctic winds by the coastal mountains, enjoys a mild climate. The coastal mountains are also responsible for the rainfall that the lower mainland receives. The weather in places like Vancouver and Victoria is also influenced by their proximity to the Pacific coast.

The Province of British Columbia owns over 700 structures that are classified as steel bridges. These bridges are distributed throughout the province. Given the climatic diversity of British Columbia, it is no surprise that these bridges are subjected to a variety of environmental onslaughts. This causes degradation of the coating material and results in corrosion, which may in turn lead to reduction in cross sectional area and premature structural failure. In order to ensure that the structural performance of these bridges is in accordance with design, maintenance procedures have to be carried out.

7.4 Maintenance Strategies

7.4.1 Introduction

Maintenance strategies are essentially ways in which coating systems are applied to the bridge. The maintenance strategy adopted depends upon the extent of corrosion, the performance level that is being targeted at the end of the application period and the amount of money available for the project. There are mainly three maintenance strategies for bridges.
They are as follows:

• Touch Up or Spot Repair
• Overcoat
• Recoat

7.4.2 Touch Up or Spot Repair

The touch up strategy may be used when the corrosion damage in the bridge is minor. The damage due to corrosion may be termed "minor" when the corroded area does not exceed 0.1% of the total surface area of the bridge, i.e. the corrosion rating of the bridge (as per ASTM D610) is not less than 8. The touch up strategy involves cleaning and surface preparation of only those areas of the bridge that are corroded. A fresh layer of coating is applied over these areas. Areas of the bridge that are not affected by corrosion are left untouched. The cost per unit area of the touch up strategy is usually higher than that of the overcoating and recoating strategies. However, since the maintenance area is only a fraction of the total surface area of the bridge, the total cost of the touch up strategy is considerably lower than that of the other two strategies. In addition to the advantage of low cost, the use of the touch up strategy results in a higher average corrosion rating over the maintenance period. It should also be noted that the durability of the touch up strategy is much less than that of the overcoat and recoat strategies, and it will therefore have to be repeated more often.

7.4.3 Overcoat

The overcoat strategy may be used when there is a fair amount of corrosion damage in the bridge. A bridge is said to have a "fair" amount of corrosion damage when the corroded area does not exceed 3% of the total surface area of the bridge, i.e. the corrosion rating of the bridge (as per ASTM D610) is not less than 5. In the overcoat strategy, the entire surface of the bridge is cleaned, with more emphasis on corroded areas. Areas of the bridge that are not corroded are also cleaned, to ensure adhesion between the bridge surface and the new coating system, but not as thoroughly as the corroded areas.
After surface preparation, the entire bridge is coated with a new coating system. The cost per unit area of the overcoat strategy is less than that of the recoat strategy; however, the durability of the overcoat strategy is also less than that of the recoat strategy.

7.4.4 Recoat

The recoat strategy may be used when there is severe corrosion damage in the bridge. A bridge is said to have a "severe" amount of corrosion damage when the corroded area exceeds 3% of the total surface area of the bridge, i.e. the corrosion rating of the bridge (as per ASTM D610) is less than 5. In the recoat strategy, the entire bridge is completely cleaned of all paint and stripped to bare steel. A new coating system is then applied to the entire bridge. The recoat strategy is the most expensive of the three strategies but also has the greatest durability. In the event of a bridge being severely corroded, recoat is the only maintenance strategy that can be used.

7.5 Selection of Strategy

While selecting a strategy, a few key points have to be considered. Each of the strategies mentioned above can be used only within certain ranges. For example, when a bridge has a corrosion rating less than 8, the touch up strategy cannot be used. When a bridge has a corrosion rating less than 5, neither the overcoat strategy nor the touch up strategy can be used. The recoat strategy can be used irrespective of the corrosion rating of the bridge. Sometimes, even if a strategy is applicable within a particular range, it may not be used because of other considerations. For example, even though it can be used, the recoat option would not be chosen when the corrosion rating of the bridge is 8, simply because it would be extremely expensive and another strategy such as touch up would be more efficient in such a situation.
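The applicability rules above (touch up only when the rating is at least 8, overcoat when it is at least 5, recoat at any rating) can be captured in a small helper. A minimal sketch:

```python
def applicable_strategies(corrosion_rating):
    """Strategies permitted by the ASTM D610 rating thresholds in the text."""
    strategies = ["recoat"]            # usable at any corrosion rating
    if corrosion_rating >= 5:
        strategies.append("overcoat")  # "fair" damage or better (rating >= 5)
    if corrosion_rating >= 8:
        strategies.append("touch up")  # "minor" damage only (rating >= 8)
    return strategies

print(applicable_strategies(9))  # prints ['recoat', 'overcoat', 'touch up']
print(applicable_strategies(6))  # prints ['recoat', 'overcoat']
print(applicable_strategies(3))  # prints ['recoat']
```

Which of the applicable strategies is actually chosen then depends on the cost and aesthetic considerations discussed in the following sections.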
7.6 Problem Statement

Once the extent of corrosion in a bridge is determined by site inspection, the engineer is faced with the task of choosing one sequence of strategies from a wide spectrum of possible alternatives. These alternatives can range from sequences that use only the touch up strategy, only the overcoat strategy or only the recoat strategy to sequences that use combinations of the three strategies. Figure 7.1 describes graphically the problem that has to be addressed while selecting a maintenance strategy: "Given the current corrosion rating of the bridge, its maintenance life and the target performance level, what would be the best maintenance strategy to adopt?" The task of choosing a strategy is challenging because many factors which influence the performance of the maintenance strategies, such as inflation, interest rate and unit cost of operation, are highly variable. It is therefore evident that predicting the outcome of a particular choice can be done only in a probabilistic sense.

Figure 7.1: Problem Statement (corrosion rating and performance level plotted against time, from the current time over the maintenance life to the end of the bridge life)

7.7 Performance Criteria

When choosing an option, the engineer has to ensure that his choice meets the performance levels being targeted at the end of the maintenance period. The following are the two major performance criteria addressed in this project.

7.7.1 Cost

The major reason for carrying out research on maintenance strategies is to obtain a tool that will help reduce the maintenance cost of bridges. It is therefore evident that cost is very often the governing performance criterion when it comes to choosing a strategy.

7.7.2 Aesthetics

In the case of bridges of historic value or bridges which are tourist attractions, aesthetics is of great importance.
In such cases, maintaining the bridge within a particular corrosion rating may be the governing performance criterion. 7.8 Input Parameters As mentioned previously, the evaluation of the performance of the different maintenance strategies can be done only in a probabilistic sense because of the uncertainty associated with the input parameters. In this model, the uncertainty associated with the input parameters is represented using probability distributions. The probability distributions that were assigned to the various input parameters are summarized in Table 7-1.

Table 7-1: Input Distributions
Surface Area of Bridge: Deterministic
Initial Corrosion Rating: Discrete
Unit Cost of Strategy: Uniform
Durability of Strategy: Lognormal
Maintenance Life of Bridge: Lognormal
Escalation Rate: Lognormal
Interest Rate: Lognormal

These distributions were not assigned randomly but were based on engineering judgment and discussions with experts. This is discussed in detail in Appendix D. 7.9 Proposed Model The proposed model is intended to give the engineer or decision maker an idea of what outcome he can expect when choosing a strategy and the uncertainty associated with it. The model primarily has two components: the cost component and the corrosion rating component. 7.9.1 Cost Component The cost component calculates the equivalent annual cost of using a particular sequence of strategies. The functioning of the cost component of the model takes place in the following three phases: • Phase 1: Calculation of Cost per Cycle and Number of Cycles • Phase 2: Calculation of Total Present Value • Phase 3: Calculation of Equivalent Annual Cost 7.9.2 Corrosion Rating Component The corrosion rating component calculates the expected average corrosion rating of the bridge over the maintenance period. Like the equivalent annual cost, the average corrosion rating of the bridge is not deterministic and has a probabilistic distribution.
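The thesis implements the cost component in @Risk spreadsheets; purely as a rough illustration, the three phases of the cost component (Section 7.9.1) can be sketched in Python. Every numerical parameter below is a made-up placeholder, not a thesis input, and the distribution parameters are assumptions chosen only to mirror the distribution types in Table 7-1:

```python
import random

def equivalent_annual_cost(area_m2, unit_cost, durability_yrs,
                           maintenance_life_yrs, escalation, interest):
    """Phases 1-3: cost per cycle and number of cycles, total present
    value, and equivalent annual cost of repeating one strategy."""
    # Phase 1: the strategy is reapplied every time its coating wears out
    n_cycles = int(maintenance_life_yrs // durability_yrs)
    # Phase 2: discount each (escalated) cycle cost back to the present
    pv = 0.0
    for k in range(n_cycles):
        t = k * durability_yrs
        cycle_cost = area_m2 * unit_cost * (1.0 + escalation) ** t
        pv += cycle_cost / (1.0 + interest) ** t
    # Phase 3: spread the present value over the maintenance life
    # using the standard capital recovery factor
    i, n = interest, maintenance_life_yrs
    return pv * i * (1.0 + i) ** n / ((1.0 + i) ** n - 1.0)

# Monte Carlo loop mirroring the distribution *types* of Table 7-1
random.seed(1)
samples = [
    equivalent_annual_cost(
        area_m2=3500,                                    # deterministic
        unit_cost=random.uniform(20, 30),                # uniform
        durability_yrs=random.lognormvariate(1.6, 0.2),  # lognormal
        maintenance_life_yrs=random.lognormvariate(3.4, 0.1),
        escalation=random.lognormvariate(-3.5, 0.3),     # lognormal
        interest=random.lognormvariate(-2.8, 0.2),       # lognormal
    )
    for _ in range(5000)
]
mean_eac = sum(samples) / len(samples)
```

The resulting list of samples plays the role of the output distribution from which histograms such as those in Section 7.11 would be drawn.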
However, while calculating the average corrosion rating of a strategy, the presence of uncertainties may not be as explicit as in the case of equivalent annual cost. Factors which contribute to the uncertainty in the average corrosion rating of the bridge are explained in Appendix D. The functioning of the corrosion rating component of the model takes place in the following two phases: • Phase 1: Calculation of Number of Cycles • Phase 2: Accounting for Uncertainty 7.10 Implementation For implementation of the proposed model, three programs, Decision Pro, Analytica and @Risk, were investigated. All three programs made use of Monte Carlo simulation to represent uncertainty. However, they had different ways of representing the problem: Analytica made use of influence diagrams, Decision Pro used decision trees and @Risk made use of spreadsheets. Although the capabilities of all the above mentioned programs were such that any one of them could have been used for this project, @Risk was chosen for the following reasons: • @Risk is an add-in program to Microsoft Excel, which is already widely available in design offices. • Using @Risk would be easier since Microsoft Excel is a familiar tool for engineers, who often use it for design. The model proposed in this project consists of two types of spreadsheets, one to deal with the cost aspect of the problem and the other to deal with the average corrosion rating. Each of the three maintenance strategies has an equivalent annual cost spreadsheet and an average corrosion rating spreadsheet. These spreadsheets are discussed in more detail in Appendix D. 7.11 Demonstration of Capabilities of Model The capabilities and possible applications of the model are demonstrated in this section using an example.
Example: Site inspection was carried out on a bridge with a total surface area of 3500 m² and the corrosion rating of the bridge as per ASTM D610 was adjudged as 8 by the site inspector. What would be the best maintenance strategy to adopt if: • economy of the maintenance strategy was the most important consideration; • the aesthetics of the bridge was of greatest importance? 7.11.1 Most Economic Strategy Based on the input distributions, the model generates a distribution for the equivalent annual cost of each of the three maintenance strategies. A comparison of the generated distributions is shown in Figure 7.2. From the diagram it can be concluded that, for the set of inputs used, the touch up strategy has the least equivalent annual cost. It can also be seen from the plots that the equivalent annual cost of the touch up strategy has a much narrower spread than that of the overcoat and recoat strategies. This means the uncertainty associated with the cost of the touch up strategy is less than that associated with the overcoat and recoat strategies. Therefore, for this example, the touch up strategy would be the most economic strategy. Figure 7.2: Comparison of Equivalent Annual Costs of Strategies (probability densities of the equivalent annual costs of the three strategies; values in thousands, approximately 87.5 to 350). 7.11.2 Strategy that Ensures Highest Corrosion Rating Based on the input distributions, the model generates a distribution for the average corrosion rating of each of the three maintenance strategies. A comparison of the output distributions is shown in Figure 7.3. From the diagram it can be concluded that, for the set of inputs used, the touch up strategy has the highest average corrosion rating. It can also be seen from the output histograms that the average corrosion rating for the touch up strategy exceeds 10.
This can be explained by the fact that the durability of the sequence of strategies exceeds the actual maintenance life of the bridge. Figure 7.3: Comparison of Average Corrosion Ratings of Strategies. 7.12 Conclusions and Recommendations • The touch up strategy was found to be economical in cases where it could be used, i.e. when there was mild corrosion in the bridge. • When the appearance of the bridge is the target performance level, the touch up strategy ensures the highest average corrosion rating over the maintenance period. • Since the touch up strategy is both economical (when the corrosion rating is greater than 8) and ensures a high average corrosion rating, maintaining bridges within this cut-off corrosion rating is extremely advantageous. • In the case of bridges that have already deteriorated to a state below the cut-off corrosion rating for touch up (i.e. a corrosion rating of 8), a good strategy would be to upgrade them using either the overcoat or recoat strategy and then maintain the bridge within the "touch up zone". • While using decision making tools is advantageous, it is necessary to have frequent site inspections to ensure that the actual corrosion in the bridge falls within the range assumed by the model. • While the decision model developed predicts the outcome of choosing a particular strategy, it is left to the engineer to decide which strategy is most appropriate for the problem at hand. This has to be done after taking into account factors such as social issues, availability of resources, etc. • The cost incurred if the bridge were closed during maintenance, i.e. down time, though important, has not been considered in this model. Further work has to be focused on such issues. It has to be kept in mind that the output given by the model is highly dependent on the input, and therefore a thorough investigation has to be done on the input distributions that are being used.
Ideally, each bridge should have its own set of input distributions. 8. Conclusions Dealing with uncertainty is an integral part of engineering, and decision making in the presence of uncertainty is an essential skill that an engineer has to hone. The essence of this project is to investigate methods and tools that can aid an engineer in making these decisions and to put valuable intuition or subjective knowledge to best use. The emphasis of this project has been the use of decision making tools to enhance the economy of steel structures. However, with certain skillful modifications, the ideas discussed can be applied to any field of engineering. The main findings of this investigation are outlined below. • It is well known that several proven theories and methods exist which can be used to make rational decisions. However, these theories have been thought to be of little practical relevance to practicing engineers. The usefulness of these tools in everyday engineering has been demonstrated in this thesis. • Depending upon the amount of information available about the probability of occurrence of the various states of nature, the engineer can make use of theories which pertain to decision making under risk, uncertainty or incomplete knowledge. Several such theories have been discussed and examples illustrating their relevance to steel structures engineering have been presented, thus showing their usefulness in everyday engineering. • The issue of implementing these theories in everyday practice was addressed by discussing various analytical tools and techniques that may be used to do the same with considerable ease. This has also been shown with the help of examples relevant to steel structures engineering.
• The application of decision making theories using proven mathematical techniques in real-world problems was demonstrated through two key case studies: the Atacama Cosmology Telescope and the maintenance of steel bridges in British Columbia. The fact that the theories discussed could be used to solve problems in these projects showed their scope in everyday engineering. • Several advantages of using decision making tools were observed during the course of the two case studies. In the ACT project, the use of probabilistic methods while preparing the error budget gave the engineer a better insight into the performance of his design. Using a decision tree model to assess the various scenarios and the costs associated with them enabled the engineer not only to think about all possibilities but also served as a yardstick to measure efficiency. In the Bridge Maintenance project, the use of decision tools enabled the representation of variability in input parameters and the selection of the best strategy based on the two performance levels: cost and aesthetics. 9. Future Developments The idea of using decision making tools in steel structures engineering has been investigated in this thesis. However, this thesis cannot claim to have completely covered all aspects of decision making. This investigation is only a small contribution to the application of decision making tools in steel structures engineering. Decision science is a vast field and the application of several decision making concepts has not been explored in this thesis. The following are some topics that are suggested for future work in this area: • In this thesis the discussion was based on decision making under risk, uncertainty and incomplete knowledge, and the majority of the arguments and examples were based on this perspective. The investigation of decision making from a normative, descriptive and prescriptive point of view could be of interest for future studies.
Such an investigation would provide an insight into how engineering decisions are being made and how they should actually be made. A comparison between these two studies could help identify inefficiencies and suggest what should be done to rationalize the decision making process. • Several examples involving a choice between two or more design or manufacturing options have been discussed in this thesis. The underlying concepts of decision making in such situations have been presented and several possibilities have been explored. The implementation of these ideas in a real-world design or manufacturing situation could yield interesting conclusions. • The probability of occurrence of the states of nature is a critical aspect of decision making, and hence being able to define these probabilities is extremely important. Although using a subjective approach is the best option in the case of engineering, an investigation of rational methods that could serve as guidance while arriving at subjective probability estimates would be extremely useful. • While representing uncertainty in decision models with the help of distributions, choosing a probability distribution that appropriately represents the variable under discussion is extremely important. Arriving at distributions for variables in engineering is particularly challenging because of the unavailability of data. Further studies in this area could prove to be an invaluable resource for representing uncertainty and for decision making. • The use of commercially available decision making software, though not explicitly addressed in this thesis, is evident in the various examples and case studies that used such software. An extensive investigation into the capabilities of each of these programs and a demonstration of their usefulness could prove to be a critical factor in rationalizing decision making in engineering. BIBLIOGRAPHY [1] Z.W. Kmietowicz and A.D. Pearman (1981). "Decision Theory and Incomplete Knowledge."
Gower Publishing Company Ltd., Hampshire, England. [2] Arnold Kaufmann (1968). "The Science of Decision Making." McGraw-Hill Book Company, New York, Toronto. [3] Anatol Rapoport (1989). "Decision Theory and Decision Behaviour: Normative and Descriptive Approaches." Kluwer Academic Publishers, Dordrecht, The Netherlands. [4] David E. Bell, Howard Raiffa and Amos Tversky (1991). "Decision Making: Descriptive, Normative and Prescriptive Interactions." The University Press, Cambridge, England. [5] Jack R. Benjamin and C. Allin Cornell (1970). "Probability, Statistics and Decision for Civil Engineers." McGraw-Hill Book Company, United States of America. [6] Haukaas, Terje (2003). "Lecture Notes in Reliability and Structural Safety." Department of Civil Engineering, University of British Columbia, Vancouver, Canada. [7] Dougherty, Edward R. (1990). "Probability and Statistics for the Engineering, Computing and Physical Sciences." Prentice Hall, Englewood Cliffs, New Jersey, United States of America. [8] Haugen, Edward B. (1968). "Probabilistic Approaches to Design." John Wiley and Sons, Inc., New York, United States of America. [9] Analytica User Guide. [10] Nowak, Andrzej S. and Collins, Kevin R. (2000). "Reliability of Structures." The McGraw-Hill Companies Inc., United States of America. [11] Gedig, Michael and Stiemer, Siegfried F. "Decision Making Tools for Steel Structures Engineering." [12] Gedig, Michael and Stiemer, Siegfried F. "Decision Making Tools for the Design and Fabrication of Steel Structures [Interim Report]." [13] Loewen, Nathan, meeting with Vignesh Ramadhas, May 12, 2005. [14] Charles L. Bennett et al. (2003). "First Year Wilkinson Microwave Anisotropy Probe (WMAP) Observations: Preliminary Maps and Basic Results." Astrophysical Journal Supplement Series, v. 148, p. 1. [15] Kosowsky, Arthur (2003). "The Atacama Cosmology Telescope." [16] Kosowsky, Arthur (2004). "The Future of Microwave Background Physics."
[17] Canadian VLOT Working Group (2003). "VLOT Project Book." [18] Wayne Hu and Martin White (2004). "The Cosmic Symphony." Scientific American, February 2004 issue. [19] American Society for Testing and Materials: ASTM D610-01. Standard Test Method for Evaluating Degree of Rusting on Painted Steel Surfaces. [20] Tam, Chun Kwok (1994). "A Study of Bridge Coating Maintenance." M.A.Sc. Thesis, Department of Civil Engineering, The University of British Columbia, Vancouver, Canada. [21] Chan, Phyllis (2005). "A Contribution to a Practical Approach to Corrosion Protection and Coating Maintenance of Steel Bridge Structures." M.A.Sc. Thesis, Department of Civil Engineering, The University of British Columbia, Vancouver, Canada. Web-Based References [22] The Web Dictionary of Cybernetics and Systems http://pespmc1.vub.ac.be/ASC/IndexASC.html#D [23] Wikipedia.org - Decision Theory http://en.wikipedia.org/wiki/Decision_theory [24] Mathworld.com - Principle of Insufficient Reason http://mathworld.wolfram.com/PrincipleofInsufficientReason.html [25] Wikipedia.org - Expected Value http://en.wikipedia.org/wiki/Expected_value [26] Stanford Encyclopedia of Philosophy - St Petersburg Paradox http://plato.stanford.edu/entries/paradox-stpetersburg/ [27] Mind Tools - Decision Trees http://www.mindtools.com/dectree.html [28] Vanguard Software Corporation - Decision Trees http://www.vanguardsw.com/decisionpro/jDtree.htm [29] Gerber-Phthalates Controversy http://gbr.pepperdine.edu/993/tree.html [30] Cosmic Background Explorer (COBE) http://lambda.gsfc.nasa.gov/product/cobe/ [31] The Wilkinson Microwave Anisotropy Probe (WMAP) http://map.gsfc.nasa.gov/ [32] Atacama Cosmology Telescope (ACT) http://www.hep.upenn.edu/act/ [33] Corrosion Mechanism: http://www.counteractrust.com/corrosion.htm [34] Climate in British Columbia: http://www.thecanadianencyclopedia.com/ [35] @RISK 4.5 for Excel http://www.palisade.com/risk/default.asp [36] Monte Carlo Simulation http://planning.yahoo.com/mc1.html [37] Utility Theory http://www.questionmark.com/uk/glossary.htm#U [38] Vanguard Software Corporation - Monte Carlo Simulation http://www.vanguardsw.com/decisionpro/jMC.htm

APPENDIX A Derivation of Expressions for the Maximization and Minimization of Expected Values. Source: Decision Theory and Incomplete Knowledge, Z.W. Kmietowicz and A.D. Pearman, Pages 16 to 21. Derivation of Expression for Maximizing or Minimizing Expected Value given Weak Ranking of Probabilities. Derivation of Expression for Maximizing or Minimizing Expected Value given Weak Ranking of Probabilities (Cont'd). Derivation of Expression for Maximizing or Minimizing Expected Value given Strong Ranking of Probabilities. Derivation of Expression for Maximizing or Minimizing Expected Value given Strong Ranking of Probabilities (Cont'd). (The derivations themselves are reproduced as scanned pages from the reference and are not transcribed here.)
APPENDIX B Probability Distributions. Source: Probability and Statistics for the Engineering, Computing and Physical Sciences, Dougherty, Edward R., Pages 143 and 163.

LIST OF COMMON DISCRETE PROBABILITY DISTRIBUTIONS: a table reproduced from the reference giving the density, mean and variance of the Binomial, Hypergeometric, Negative Binomial, Geometric and Poisson distributions. (The scanned formulas are not transcribed here.)

LIST OF COMMON CONTINUOUS PROBABILITY DISTRIBUTIONS: a table reproduced from the reference giving the density, mean and variance of the Normal, Uniform, Gamma, Erlang, Exponential, Chi-square, Beta, Weibull, Rayleigh, Lognormal, Laplace, Pareto and Cauchy distributions; the mean and variance of the Cauchy distribution are non-existent. (The scanned formulas are not transcribed here.)

APPENDIX C ATACAMA COSMOLOGY TELESCOPE

ATACAMA COSMOLOGY TELESCOPE - LOAD PATH: diagram tracing the load path through the following components: primary reflector panels, guard ring on primary, panel supports, primary backup structure, secondary reflector panels, secondary actuators, secondary reflector support, receiver cabin and receiver cabin supports, exterior shielding connections, ball screw to elevation and azimuth frames, telescope elevation frame, connections to elevation bearings, telescope azimuth frame including yoke, azimuth bearing surface, cladding/shielding, ground screen frame, telescope foundation and ground screen foundation.

ATACAMA COSMOLOGY TELESCOPE - MODELS The decision tree models that were created during the course of this case study are presented in the subsequent pages. Owing to their large size, not all the models could be presented. However, the models most relevant to the concepts discussed have been included.
These models have been presented in the following order: • Pointing Error Budget During Day • Frequency Distribution of Pointing Error During Day • Pointing Error Budget During Night • Decision Tree Model for Primary Reflector Panel (Whole Tree) • Decision Tree Model for Primary Reflector Panel (Design Branch) • Decision Tree Model for Primary Reflector Panel (Supply Branch) • Decision Tree Model for Panel Support (Whole Tree) • Decision Tree Model for Panel Support (Design Branch) • Decision Tree Model for Panel Support (Materials Branch) • Decision Tree Model for Panel Support (Fabrication Branch) • Decision Tree Model for Elevation Frame (Whole Tree) • Decision Tree Model for Elevation Frame (Design Branch) • Decision Tree Model for Elevation Frame (Materials Branch) • Decision Tree Model for Elevation Frame (Fabrication Branch) • Decision Tree Model for Guard Ring (Whole Tree) • Decision Tree Model for Guard Ring (Design Branch) • Decision Tree Model for Guard Ring (Fabrication Branch) 136 Pointing Day Error Estimate - RSS Valuel P d i := ddeiink( -excel", "[error_budget.xlslpomtDay", "R1C1") 0.5 Pointing Day_RSS := sum^pointrng_D_l RMSTJ Pointing Error RMS - day pointing_D_RMS := [ Gust Wind_D, Abs. Temperatun [0.5,1,1.5,0.8,2.7.0] ^1 Gust Wind_D := nrand( pd1, pel ) 0.5 i-r^ry p d := pd1 * (1 - pcnl ) * conf 0.01 If pcnl := ddelink( "excel", n[error_budget.xls]pointDay", "R1C2") M 0 . 8 conf:= 0.1 .0.1 Abs. Tempera ture_D := nrand( pd2, pc2 )l I pd2 := ddelink( "excel", "[error_budget.xls]pointDay", rlTdTV pc2 := pd2" (1 - pcn2 ) 0.02 • conf j ^ pcn2 := ddelink( "excel", "[error_budget.xls]pointDay",| 0.8 I Grad. Temperature_D := nrand( pd3, pc3 1.5 Servo_D := nrand( pd4. pc4 ) 0.8 Dyn Servo_D := nrand( pd5, pc5 ) 2.7 Other. 
D := nrand{ pd6, pc6 ) 0 pd3 := ddeiink( "excel", "[error_budget.xls]pointDay", 1.5 pc3 := pd3 ' ( 1 - pcn3 )" conf I _ 0.03 I" ~| 0.8 pcn3 := ddelink( "excel", "[error_budget.xls]pointDay",| pd4 := ddelink( "excel", "[error_budget.xls]pointDay", 0.8 rfp^"V pc4 := pd4 " ( 1 - pcn4 ) 0.04 •con f | J _ | pcn4 := ddelink( "excel", "[error_budget.xls]pointDay",| 0.5 I pd5 := ddelink( "excel", "[error_budget.xls] pointDay" 2.7 3 [Hpd5 r-* pc5 := pd5 * ( 1 - pcn5 ) 0.27 * conf| ^ pcn5 := ddelink( "excel", "[error_budget.xls]pointDay",| Unevaluated I pd6 := ddelink( "excel", "[error_budget.xls]pointDay", 0 pc6 := pd6" {1 - pcn6 ) 0 • confI [p —Irtii cn6 := ddelink( "excel", "[error_budget.xls]pointDay",| FREQUENCY DISTRIBUTION FOR POINTING ERROR DURING DAY 20,000 Frequency Distribution 2.9 3 3.1 3.2 3.3 3.4 3.5 3.6 3.7 3.8 3.9 4 4.1 4.2 4.3 4.4 4.5 4.B Point ing D a y _ R S S 138 Pointing Night Error Estimate - RSS Value J P®1 : = d d e l i n k < " e x c e l " . "[errorjudget.xislpointNight", Pointing Night_RSS := -\] sum(pointing_N_RMS[]2^ Pointing Error RMS - night pointing_N_RMS := [ Gust Wmd_N, Abs. Temperatun [2.4,0.5,0.0.8,2.7,11 Gust Wmd_N := nrand( pel, pf1 ) 2.4 pf1 := pel * ( 1 - pdnt ) * conf 0.048 pdnl := ddeiinkf "excel", "[error_budget.xls]pointNightl 0.8 I conf ~ 0.1 0.1 Abs. Temperature^N := nrand( pe2, pf2 ) 0.5 pe2 := ddelink{ "excel", "[error_budget.xls]pointNight", 0.5 0.01 |J pf2 := pe2 * ( 1 - pdn2 ) * conf I N pdn2 := ddelink( "excel", "[error_budget.xls]pointNight' Grad. Temperature_N := nrand( pe3, pf3 0 pe3 := ddelink( "excel", "[error_budget.xls]pointNightH 0 pf3 := pe3 • ( 1 - pdn3 ) ' conf 0 pdn3 := ddelink( "excel", "[error_budget.xls]potntNightl 0.8 I Servo^N := nrand( pe4, pf4 ) 0.8 I pe4 := ddelink( "excel", "[eiror_budget.xls]pointNight" 1 - 1 0.8 pf4 := pe4 * ( 1 - pdn4 ) * conf j pdn4 := ddeiink( "excel", "{error_budget.xls]pointNight1 Dyn Servo_N := nrand( pe5, pf5 ) 2.7 I pe5 := ddelink( "excel", "[error_budget.xls]pointNight" ^ 2 . 
7 pf5 := pe5' 0.135 ( 1 - pdn5 ) * conf |_ n pdn5 := ddeiinkf "excel", "[error_budget.xfs]pointNight| 0.5 M Other_N := nrand( pe6, pf6 ) pe6 := ddelink( "excel", "[error_budget.xls]potntNight" pf6 := pe6 ' ( 1 - pdn6 ) 0 * conf |_ _ pdn6 :• I Tj = ddeiinkf "excel", "[enor_budget.xls]pointNight 4^ O Inputs: "prp_Delivery 300000 fair factor 3 rhod_factor 1.5 rpd hours 80. hourly engineering 63: :prp:delay, 20 : prp^delayed 450000! iprp^fail 15! prp_failure 900000! prp. on time 65 rpd^fail 1 rpdimodify 5 rpd-success 94 !rpdvDebiqn 5040! Outputs: PRP Design 5266.8 Primary Reflector Panel 425266.8 PRP Supply 420000 Primary Reflector Panel := PRP Design ^PRP!Sup'ply!|("!!:: 425266.8 l -PRP Design := emv{ rpd_success' 5266.8 —^ rpd. success := 94' rpd success% rpd-Design := rpd_hours * hourly engineering 5040 rpd_modify := 5 | rpdjhours : 1 rpd_modify% rpd_mod_design := rpd_Design * mod_factor ; 7560 rpd_Design 5040 hourly engineering := 63 | 3* . mod_factor := 1.5 rpd^fail: 1 100 - rpd_successrpdimodify } rpdiisuccessv 94 rpd-modify 5 V rpd_fail% rpd^failure •:= rpd_Design * fail, factor: 15120 L12! J L-^ fal rpd_Design 5040 iil_factor := 3 PRP Supply := emv( prp_on time%, prp. 420000 ^ prp_on time := 65^ prp on time% / , ~ „ • - — ( prp_Delivery := 300000 | prp> delay :=i20^ prp_ delay % prp.delayed :=;prp_ Delivery* mod_factor 450000 } prp^ Delivery 300000 mod^factor 1.5 prp_fail := 100 - prp_on time - prp_delay 15 } prp on time 65 prp_delay 20 prp_fail% iprp^failure := prp^Delivery * fail^factori; 900000 } prp_ Delivery 300000 faillfactor 3 V P R P Design := emv( r p d _ s u c c e s s % , rpt\ 5266.8 : := 94 | rpd_Design := rpd_hours * hourly engineer ing 5040 n r—(_ rpd_hours := 8(T 1 ' — < ^ o u r i ^ n g i n e e r i n ^ ^ 6 ^ i _ r p d ^ T i o d r f y j = 5 _ _ _ J rpd m o d . 
d e s i g n := rpd D e s i g n * mod_factor 7560 rpd_Design 5040 ^ m o c M a c t o r ^ = ^ 5 j rpd_fail := 100 - r p d _ s u c c e s s - rpd_modify 1 r p d _ s u c c e s s 94 rpd_modify 5 P rpd_failure := rpd .Des ign * fail, factor |J 15120 r p d . D e s i g n 5040 ' fail_factor := 3 | P R P Supply := emv( prp_on time%, prp_ 420000 ^Jjrjj^priJimej=>6!Q ' prp_Delivery := 300000^ ^jrpjJejay^=20j prp_delayed := prp_Dclivery * m o d factor 450000 p r p . Delivery 300000 ' mod_factor := 1.5^ prp fail := 100 - prp..on time - prp_delay 15 } prp_on time 65 prp_delay 20 l_| prp_failure := prp_Delivery * fail_factor 900000 prp_Delivery 300000 ^a^l_factorj=3j Inputs: psd_hours - 80 p s d s u c c e s s 70 psd_modify 25 psd_Design . 5040 psd_modifyJidesign 7560 psd_ failure 15120 psfiFabrication 20000> psf-change_machining ! 30000 psf_failure 60000 psf_success • . 90 psf_modify < 5 number of panels • 70 cost per support 1 20 number of supports per panel 4 mod_factor •; •, ] < 1.5 fail factor 3 hourly engineering 63 Outputs: Panel Support PS Design PS Fabrication Panel Support := PS Design + PS Materials + PS Fabrication 34274 34274 PS Design := emv( psd_success%, psd_ 6174 < psd success := 70 70 psd_success% psd_Design := psd_hours * hourly engineering 5040 psd_hours := 80 psd_modify := 25 25 psd_modify% K> psd_modify_design := psd_Design * mod_factor 7560 hourly engineering := 63 | IP psd_ Design 5040 : ^ mod_ factor := 1.5~] psd.fail := 100 - psd_success - psd_modify |_j 5 psd_success 70 psd_modify 25 psd_fail% psd.failure : - psd_Design * fail_factor 15120 H psd Design 5040 "1 n 504 PS Materials := number of panels * numb|_ 5600 number of panels := 70 Lfactor := 3~j | ~J~~^ number of supports per panel := 4 | cost per support; := 20^| PS Fabrication := emv( psf_success%, 22500 psPsuccess :=90J, J psf success% / ... . . . 
\ • r — ^ <;psf^ Fabrication := 20000 | —^ psfmodify := 5~j psf_modify% psf_change_machining : - psf jFabricatu 30000 1-psf_Fabrication 20000 ¥ 3r mod factor 1.5 psfifail:=::100 • psf .success - psf modi! 5 I psf_success 90 psi modify 5 psf_fail% psf_failure := psf_Fabncation ' f n l facli 60000 psf. Fabrication 20000 ¥ ¥ ¥ faihfactor; 3 ¥ ¥ 4 ^ 4 ^ P S Design : 6174 = emv( p s d _ s u c c e s s % , p s d _ D | p s d _ s u c c e s s := 70: 70 p s d _ D e s i g n := psd_hours * hourly engineering 5040 [—<<=psd_hours := 80 h o u r l y e n g j n e e n n g ^ i ^ e ^ psd_modi fy := 25 > 25 psd_modi fy_des ign := psd_Des ign * mod_factor 7560 J p s d _ D e s i g n 5040 W n o j J £ f a c t o r j = ^ ^ psd . fa i l := 100 - p s d . . s u c c e s s - psd_modi fy 5 p s d s u c c e s s 70 psd_modify< 25 y psd_fai lure := psd_Des ign * fail factor 15120 p s d _ D e s i g n 5040 ' fail_factor := 3 | 4^ PS Materials := number of panels 5600 number bf,pahSs;;l7fc numbaj ( number of supports; *pel| pane I;": =3-' I cost per suppor t := 20 1 ( p s f . s u c c e s s := 90 $.psf_Fabrication := 20000 I—(:psf_modify := 5~| 4 ^ <3\ P S Fabrication := emv( p s f _ s u c c e s s % , p: 22500 psf_change_machin ing := psf_FabricatirJ 30000 psf iFabr icat ion-20000 j j r j o d ^ f a c t o r j ^ l ^ s j |_ | psf_fail := 100 - p s f . s u c c e s s - psf . modify 5 p s f . s u c c e s s 90 psf_modify 5 (_[ psf_failure := psf_Fabricat ion * fail facto} 60000 psf_ Fabrication 20000 ' fail_factor := 3 | y Inputs: efa hours 480 ;efa^ Analysis 30240: efa success 85 efa_modlfy 13 efd_hours 374 efd_Design 18700 efd.succcss 901 efd_modify 5 eff_ hours 1275 effJUIacfiinmg 76500 eff_ success 9o;; mod_factor 1.5 fail factor ;^3; hourly engineering 63 hourly drafting, . 
Inputs:
  hourly engineering  63 [$/hr]
  hourly drafting     50 [$/hr]
  hourly fabrication  60 [$/hr]
  mod_factor          1.5
  fail_factor         3

Outputs:
  Elevation Frame  134777.7
  EF Analysis       33415.2
  EF Design         21037.5
  EF Fabrication    80325

Elevation Frame := EF Analysis + EF Design + EF Fabrication = 134777.7

EF Analysis := emv(efa_success%, efa_Analysis, efa_modify%, efa_modified, efa_fail%, efa_failure) = 33415.2
  efa_success  := 85
  efa_Analysis := hourly engineering * efa_hours = 63 * 480 = 30240
  efa_modify   := 13
  efa_modified := efa_Analysis * mod_factor = 45360
  efa_fail     := 100 - efa_success - efa_modify = 2
  efa_failure  := efa_Analysis * fail_factor = 90720

EF Design := emv(efd_success%, efd_Design, efd_modify%, efd_modified, efd_fail%, efd_failure) = 21037.5
  efd_success  := 90
  efd_Design   := hourly drafting * efd_hours = 50 * 374 = 18700
  efd_modify   := 5
  efd_modified := efd_Design * mod_factor = 28050
  efd_fail     := 100 - efd_success - efd_modify = 5
  efd_failure  := efd_Design * fail_factor = 56100

EF Fabrication := emv(eff_success%, eff_Machining, eff_modify%, eff_modified) = 80325
  eff_success   := 90
  eff_Machining := hourly fabrication * eff_hours = 60 * 1275 = 76500
  eff_modify    := 100 - eff_success = 10
  eff_modified  := eff_Machining * mod_factor = 114750

Inputs:
  grd_hours 66, grd_success 70, grd_modify 25
  grf_hours 225, grf_success 90, grf_modify 10
  hourly drafting  50 [$/hr]
  hourly fabrication  60 [$/hr]
  mod_factor 1.5, fail_factor 3

Outputs:
  Guard Ring      18217.5
  GR Design        4042.5
  GR Fabrication  14175

Guard Ring := GR Design + GR Fabrication = 18217.5

GR Design := emv(grd_success%, grd_Design, grd_modify%, grd_modify_design, grd_fail%, grd_failure_to_design) = 4042.5
  grd_success           := 70
  grd_Design            := hourly drafting * grd_hours = 50 * 66 = 3300
  grd_modify            := 25
  grd_modify_design     := grd_Design * mod_factor = 4950
  grd_fail              := 100 - grd_success - grd_modify = 5
  grd_failure_to_design := grd_Design * fail_factor = 9900

GR Fabrication := emv(grf_success%, grf_Machining, grf_modify%, grf_modified_machining) = 14175
  grf_success            := 90
  grf_Machining          := hourly fabrication * grf_hours = 60 * 225 = 13500
  grf_modify             := 100 - grf_success = 10
  grf_modified_machining := grf_Machining * mod_factor = 20250
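The models above all follow the same pattern: a phase's expected monetary value is a probability-weighted sum of its base cost (success), the base cost times mod_factor (rework), and the base cost times fail_factor (outright failure), with the failure probability taken as the remainder. The following sketch reproduces the Elevation Frame figures; the emv function merely mirrors the models and is not DecisionPro's actual API.

```python
def emv(success_pct, base_cost, modify_pct, mod_factor=1.5, fail_factor=3.0):
    """Expected cost of a phase whose outcome is success, modify, or fail.

    Percentages are given as in the models (0-100); whatever probability is
    left over after success and modify is assigned to outright failure.
    """
    fail_pct = 100.0 - success_pct - modify_pct
    return (success_pct * base_cost
            + modify_pct * base_cost * mod_factor
            + fail_pct * base_cost * fail_factor) / 100.0

# Elevation Frame phases, using the hourly rates and hours from the model.
ef_analysis = emv(85, 63 * 480, 13)      # 33415.2
ef_design = emv(90, 50 * 374, 5)         # 21037.5
ef_fabrication = emv(90, 60 * 1275, 10)  # 80325.0 (modify takes the remaining
                                         # 10%, so the failure term vanishes)
elevation_frame = ef_analysis + ef_design + ef_fabrication  # 134777.7
```

The same function reproduces the Guard Ring model, e.g. emv(70, 50 * 66, 25) gives the GR Design value of 4042.5.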
APPENDIX D

PROTECTION OF STEEL BRIDGES AGAINST CORROSION USING PROBABILISTIC METHODS

ASTM-D610 CORROSION RATING SYSTEM

ASTM-D610 suggests the following "Rust Grade" based on the percentage of area corroded:

Rust Grade | Percentage of Surface Area Rusted
10 | less than 0.01%
 9 | greater than 0.01% but less than 0.03%
 8 | greater than 0.03% but less than 0.1%
 7 | greater than 0.1% but less than 0.3%
 6 | greater than 0.3% but less than 1%
 5 | greater than 1% but less than 3%
 4 | greater than 3% but less than 10%
 3 | greater than 10% but less than 16%
 2 | greater than 16% but less than 33%
 1 | greater than 33% but less than 50%
 0 | greater than 50%

Corrosion Rating Based on Area Rusted

In addition to the above rating system, which is based on the percentage of area corroded, ASTM-D610 suggests the use of the letters "S", "G", "P" and "H" to describe the pattern in which rusting has occurred, where S stands for Spot Rusting, G for General Rusting, P for Pinpoint Rusting, and H for Hybrid Rusting, a combination of the S, G and P patterns. Visual examples are provided in ASTM-D610 to represent these rusting patterns. Thus an ASTM-D610 rating of 4-S would represent a surface with a corroded area between 3% and 10% of the total surface area and a corrosion pattern of spot rusting.

[Figure: Patterns of Rusting - Spot Rusting, General Rusting and Pinpoint Rusting]

CLIMATIC DIVERSITY OF BRITISH COLUMBIA

Place | Mean Jan (deg C) | Mean July (deg C) | Fog Days | Thunder Days | Freezing Days | Sunshine (hours) | Precip. Days | Snowfall (cm) | Total Precip. (mm)
Vancouver | 2.5 | 17.3 | -- | -- | 55 | 1919.6 | 161 | 60.4 | 1112.6
Penticton | -2.1 | 20.3 | 1 | 12 | 129 | 2032.2 | 101 | -- | 282.5
Prince Rupert | -- | 12.8 | 37 | 2 | 107 | 1224.1 | 253 | 151.1 | 2523.3
(illegible) | -6.7 | 20.6 | 35 | 27 | -- | 2045.4 | 134 | -- | --
Edmonton | -14.7 | 17.4 | 20 | 21 | 190 | 2237 | 121 | 132.1 | 446.1

Superlatives: Highest Temperature: 44.4 deg C at Lytton, July 16, 1941. Greatest Precipitation: 8123 mm at Henderson Lake, 1931. Lowest Temperature: -58.9 deg C at Smith River, Jan 31, 1947. Greatest Snowfall: 2447 cm at Revelstoke, 1971-72.

Climatic Diversity in British Columbia (http://www.thecanadianencyclopedia.com/)

DISTRIBUTIONS FOR INPUT PARAMETERS

The quality of the output of any model depends on how good the input values are ("Garbage In, Garbage Out"), and therefore the assumptions and approximations made while defining the input parameters have to be properly reasoned and justified. The reasoning behind the inputs for this model is discussed in this section.

Surface Area of Bridge

Strictly speaking, the surface area of the bridge has to be considered probabilistic, since there may be differences between the actual dimensions of the bridge and the dimensions shown in drawings. However, since these variations are usually very small, they were neglected and the surface area of the bridge was treated as deterministic.

Initial Corrosion Rating of Bridge

[Figure: discrete probability distribution of the current corrosion rating, ASTM D610 scale, ratings 6.5 to 9.5]

Site inspection and determination of the corrosion rating of bridges is done by experienced engineers, and therefore a high level of confidence can be attached to their opinion. However, because of the subjective nature of the rating system, there is a possibility that the true corrosion rating of the bridge may differ from the rating suggested by the engineer. Hence the initial corrosion rating of the bridge may be best represented by a discrete distribution.

Unit Cost of Using a Strategy

[Figure: uniform probability distribution of the unit cost of touch up, 300 to 600 $/m2]

The unit cost of using a strategy depends on factors like the location of the bridge, the availability of labour, the cost of material, the size of the bridge, etc.
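The area-based rust grades quoted above amount to a simple threshold lookup. A minimal sketch follows; the boundaries are taken exactly as quoted, and ASTM-D610 itself settles the boundary cases that the quoted table leaves open.

```python
# Upper grade boundaries from the ASTM-D610 table: (max percent rusted, grade).
RUST_GRADE_BOUNDS = [
    (0.01, 10), (0.03, 9), (0.1, 8), (0.3, 7), (1.0, 6),
    (3.0, 5), (10.0, 4), (16.0, 3), (33.0, 2), (50.0, 1),
]

def rust_grade(percent_rusted):
    """ASTM-D610 rust grade for a given percentage of surface area rusted."""
    for bound, grade in RUST_GRADE_BOUNDS:
        if percent_rusted < bound:
            return grade
    return 0  # more than 50% of the surface is rusted

# A surface with 5% of its area spot-rusted is rated 4-S.
rating = "%d-S" % rust_grade(5.0)  # "4-S"
```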
Since it is equally likely that the unit cost of a strategy can lie anywhere within the range specified in the table below, a uniform distribution was used to represent it. Based on the information provided by Mr. Russ Raine, an expert in bridge maintenance in British Columbia, the following data was used for the unit cost of each strategy:

Strategy | Unit Cost (Range) | Unit Cost ($/m2)
Recoat | 150 to 250 $/m2 | 150 to 250
Overcoat | 50 to 80% of mean recoat unit cost | 100 to 160
Touch Up | 150 to 300% of mean recoat unit cost | 300 to 600

Durability of Strategy

The durability of a strategy depends upon the environmental conditions that the bridge is subjected to. The durability of a strategy may be lower in regions with extreme climatic conditions than in regions with mild climatic conditions. The values suggested by Mr. Russ Raine are shown below:

Strategy | Durability (years)
Recoat | 25, with a deviation of 5
Overcoat | 20, with a deviation of 5
Touch Up | 12.5, with a deviation of 2.5

The durability of a strategy is dependent upon the coating system used. Since the level of quality control implemented by coating system manufacturers and the warranty they extend towards their products is quite high, it is very likely that the coating system will meet the prescribed durability, and also likely that it will exceed it. It is less likely that the actual durability will be lower than that guaranteed by the manufacturer. These arguments are best accommodated by a lognormal distribution, and hence it was chosen.

[Figure: lognormal probability distribution of the durability of touch up, 10 to 25 years]

Maintenance Life of Bridge

[Figure: lognormal probability distribution of the maintenance life, roughly 110 to 260 years]

The maintenance life of a bridge is highly bridge specific. It depends upon factors such as how well the bridge was maintained in the past and the design life of the bridge.
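Under these assumptions, one Monte Carlo realization of the inputs can be drawn as sketched below. Matching the lognormal's parameters to the quoted mean and deviation by moment matching is my assumption about how the distributions were parameterized; the thesis does not state it.

```python
import math
import random

def sample_uniform(lo, hi):
    """One draw of a unit cost that is equally likely anywhere in [lo, hi]."""
    return random.uniform(lo, hi)

def sample_lognormal(mean, sd):
    """One draw from a lognormal with the given arithmetic mean and deviation.

    random.lognormvariate takes the mu/sigma of the underlying normal, so the
    quoted mean and deviation are converted by moment matching.
    """
    sigma2 = math.log(1.0 + (sd / mean) ** 2)
    mu = math.log(mean) - sigma2 / 2.0
    return random.lognormvariate(mu, math.sqrt(sigma2))

# One realization of the inputs, using the ranges and values quoted above.
recoat_unit_cost = sample_uniform(150, 250)       # $/m2, uniform
touchup_durability = sample_lognormal(12.5, 2.5)  # years, lognormal
```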
Since all structures are designed such that the probability of failure before the design life is very low, and because most structures exceed their design life, the lognormal distribution was chosen to represent the maintenance life of the bridge. While using the lognormal distribution, the mean maintenance life of the bridge was shifted to the right to accommodate the fact that the actual life of the bridge is typically greater than its design life.

Interest and Escalation Rates

[Figure: probability distribution of the interest rate (%)]

The interest and escalation rates are governed by the economy of a nation and the wide range of social and political factors that are intertwined with it. However, they can be estimated by studying past trends. After discussion with Dr. A. D. Russell, Professor, Computer Integrated Design and Construction, Department of Civil Engineering, University of British Columbia, distributions were arrived at for the interest and escalation rates.

[Figure: probability distribution of the escalation rate (%)]

EQUIVALENT ANNUAL COST - SPREADSHEET

This spreadsheet calculates the equivalent annual cost of using a strategy. It also gives the variability associated with the equivalent annual cost depending upon the variability of the input parameters. In the EAC spreadsheet, the following are the input parameters.
• Surface Area of Bridge
• Cost of Strategy
• Initial/Current Corrosion Rating of the Bridge
• Maintenance Life
• Durability of Strategy
• Escalation Rate
• Interest Rate

INPUT
Total Area of Bridge | T_AB = 3500 [m2]
Unit cost of touch up | UC_TU = 450 [$/m2]
Current corrosion rating | C_CR = (value illegible in source)
Life of bridge | L_B = 110.0 [years]
Durability of touch up | D_TU = 12.5 [years]
Escalation Rate | E_R = 2.5 [%]
Interest Rate | I_R = 3.0 [%]

The equivalent annual cost of the strategy is calculated using the following user-defined functions:

• total_present_value
• equivalent_annual_cost

Cost Aspects
Cost of first cycle | C_FC = UC_TU * C_A = 39375 [$]
Present Value of all touch-up cycles | PV_TU = total_present_value(R_NC, E_R, I_R, D_TU, C_FC) = 281279.4 [$]
Equivalent Annual Cost of Touch Up | EAC_TU = equivalent_annual_cost(PV_TU, I_R, L_B) = 8778.3 [$/year]

AVERAGE CORROSION RATING - SPREADSHEET

This spreadsheet calculates the average corrosion rating at which the bridge will be maintained when a particular strategy is applied. The following are the input parameters for this spreadsheet:

• Maintenance Life
• Durability of Strategy
• Highest Possible Corrosion Rating
• Lowest Possible Corrosion Rating

INPUT
Life of bridge | L_B = 110.0 [years]
Durability of touch up | D_TU = 12.5 [years]
Highest Corrosion Rating | H_R = 9.0
Lowest Corrosion Rating | L_R = 8.0

The average corrosion rating of the bridge is calculated using the following user-defined functions:

• area_one_cycle_touchup
• area_one_cycle_overcoat
• area_one_cycle_recoat

Corrosion State of Bridge
Area under one cycle of the corrosion deterioration curve | Ar_One = area_one_cycle_touchup(T_Start, T_End, H_R, L_R) = 108.8
Area under all cycles | Ar_Tot = Ar_One * R_NC
Average corrosion rating over the maintenance period | Avg_CR = Ar_Tot / L_B = 8.7

APPENDIX E

COMMERCIALLY AVAILABLE SOFTWARE INVESTIGATED

COMMERCIALLY AVAILABLE SOFTWARE USED

The following commercially available decision making software packages were used during the course of this project. While DecisionPro was used in several illustrations in this thesis and also in the ACT study, @RISK was used in the decision making process for the protection of steel bridges against corrosion. Analytica was instrumental in the study of influence diagrams and their functioning. The following are some of the salient features of these packages.

Details of Software | Functions of Software
Name: DecisionPro; Vendor: Vanguard; Website: http://www.vanguardsw.com | Decision Tree Analysis; Monte Carlo Simulation; Sensitivity Analysis; Forecasting
Name: @RISK; Vendor: Palisade; Website: http://www.palisade.com/ | Add-in for Microsoft Excel; Monte Carlo Simulation; Sensitivity Analysis; Statistical Analysis
Name: Analytica; Vendor: Lumina; Website: http://www.lumina.com/ | Influence Diagrams; Monte Carlo Simulation; Sensitivity Analysis
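The equivalent_annual_cost user-defined function described in the EAC spreadsheet of Appendix D behaves like a standard capital-recovery (annuity) conversion; the reported numbers are consistent with that reading, though the thesis does not give the formula. A sketch under that assumption:

```python
def equivalent_annual_cost(present_value, interest_rate, years):
    """Uniform annual payment equivalent to a present value over `years`.

    Standard capital-recovery formula; interest_rate is a fraction (0.03 = 3%).
    """
    i = interest_rate
    return present_value * i / (1.0 - (1.0 + i) ** -years)

# Inputs from the EAC spreadsheet example: PV of all touch-up cycles,
# 3% interest, 110-year maintenance life.
eac_touch_up = equivalent_annual_cost(281279.4, 0.03, 110)  # about 8778 $/year
```

A one-year sanity check: spreading a present value of $1000 over a single year at 5% gives a single payment of $1050, as expected.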
Title | A contribution to innovative methods for enhancing the economy of steel structures engineering
Creator | Ramadhas, Vignesh
Date Issued | 2005
Genre | Thesis/Dissertation
Type | Text
Language | eng
Date Available | 2009-12-15
Provider | Vancouver : University of British Columbia Library
Rights | For non-commercial purposes only, such as research, private study and education. Additional conditions apply, see Terms of Use https://open.library.ubc.ca/terms_of_use.
IsShownAt | 10.14288/1.0063315
URI | http://hdl.handle.net/2429/16675
Degree | Master of Applied Science - MASc
Program | Civil Engineering
Affiliation | Applied Science, Faculty of; Civil Engineering, Department of
Degree Grantor | University of British Columbia
Graduation Date | 2005-11
Campus | UBCV
Scholarly Level | Graduate
Aggregated Source Repository | DSpace