ANALYTICAL METHOD FOR QUANTIFICATION OF ECONOMIC RISKS DURING FEASIBILITY ANALYSIS FOR LARGE ENGINEERING PROJECTS

By

KULATILAKA ARTHANAYAKE MALIK KUMAR RANASINGHE

B.Sc. (Engineering) Honors, University of Moratuwa, Sri Lanka.
M.A.Sc., The University of British Columbia, Canada.

A THESIS SUBMITTED IN PARTIAL FULFILMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY in THE FACULTY OF GRADUATE STUDIES, DEPARTMENT OF CIVIL ENGINEERING

We accept this thesis as conforming to the required standard

THE UNIVERSITY OF BRITISH COLUMBIA

August 1990

© KULATILAKA ARTHANAYAKE MALIK KUMAR RANASINGHE, 1990

In presenting this thesis in partial fulfilment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the head of my department or by his or her representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission.

Department of Civil Engineering, The University of British Columbia, Vancouver, Canada.

Abstract

The objectives of this thesis are to develop an analytical method for economic risk quantification during feasibility analysis for large engineering projects, and to computerize the method in order to explore its behavior, to validate it, and to test its practicality for the measurement of uncertainty of decision variables such as project duration, cost, revenue, net present value and internal rate of return. Based on the probability of project success, the method can be used to assist with strategic feasibility analysis issues such as contingency provision, "go/no-go" decisions and the adoption of phased or fast-track construction.

The method is developed by applying a risk measurement framework to the project economic structure. The risk measurement framework is developed for any function Y = g(X) between a derived variable and its correlated primary variables. Using a variable transformation, it transforms the correlated primary variables and the function to the uncorrelated space. Then, using the truncated Taylor series expansion of the transformed function and the first four moments of the transformed uncorrelated variables, it approximates the first four moments of the derived variable. Using these first four moments and the Pearson family of distributions, the uncertainty of the derived variable is quantified as a cumulative distribution function. The first four moments of the primary variables are evaluated from the Pearson family of distributions using accurate, calibrated and coherent subjective percentile estimates elicited from experts. The correlations between the primary variables are elicited as positive definite correlation matrices. The project economic structure describes an engineering project in three hierarchical levels, namely work package/revenue stream, project performance and project decision. Each of these levels can be described by Y = g(X), with the derived variables of the lower levels serving as the primary variables for the level above. Therefore, input in the form of expert judgements is required only at the work package/revenue stream level.
Project duration is estimated by combining the generalized PNET algorithm with the project economic structure. This permits the evaluation of the multiple paths in the project network. Also, the limiting values of the PNET transitional correlation (0, 1) permit the estimation of bounds on all of the derived variables. Project cost and revenue are evaluated in terms of current, total and discounted dollars, thereby emphasizing the economic effects of time, inflation and interest on net present value and internal rate of return. The internal rate of return is evaluated from a variation of Hillier's method.

The analytical method is validated using Monte Carlo simulation. The validations show that the analytical method is a comprehensive and extremely economical alternative to Monte Carlo simulation for economic risk quantification of large engineering projects. In addition, they highlight the ability of the analytical method to go beyond the capabilities of simulation in the treatment of correlation, which is seen to be significant in the application problems. From these applications, a technique is developed to provide contingencies based on the probability of project success and to distribute the contingency to individual work packages.

Table of Contents

Abstract
List of Tables
List of Figures
Acknowledgement

1 Introduction
1.1 General
1.2 Background for the Research
1.3 Problem Statement and Structure
1.3.1 Work Package/Revenue Stream Level
1.3.2 Project Performance Level
1.3.3 Project Decision Level
1.3.4 Observation
1.4 Objectives of the Research
1.5 Previous Research and Motivation
1.6 Structure of the Thesis

2 Risk Measurement Framework
2.1 General
2.2 The Pearson Family of Distributions
2.3 The Moments of a Primary Variable
2.4 Moments of the Derived Variable
2.4.1 Truncated Taylor Series
2.4.2 Variable Transformation Method
2.4.3 Moments of the Uncorrelated Variables
2.4.4 The Function
2.4.5 The First Four Moments
2.5 Cumulative Distribution Function
2.6 Application of the Framework
2.6.1 Example 1: Activity Duration
2.6.2 Example 2: Stochastic Breakeven Analysis
2.6.3 Example 3: Linear Function
2.7 Summary

3 Elicitation of Subjective Probabilities
3.1 General
3.2 Subjective Probabilities
3.3 Definitions and Assumptions
3.4 Pre-Elicitation Stage
3.4.1 Motivating Phase
3.4.2 Structuring Phase
3.4.3 Conditioning Phase
3.5 Elicitation Stage
3.6 Feedback and Consensus Estimates
3.7 Analysis Stage
3.8 Verification
3.9 Summary

4 Correlations Between Variables
4.1 General
4.2 Correlation between Primary Variables
4.2.1 Positive Definite Correlation Matrix
4.2.2 Elicitation of a Correlation Matrix
4.3 Correlation between Derived Variables
4.4 Multicollinearity
4.5 Numerical Study
4.5.1 Variable Transformation Method
4.5.2 The Standard Approach
4.5.3 The Comparison
4.5.4 Transformation under Multicollinearity
4.6 Summary

5 Decomposition of a Derived Variable
5.1 General
5.2 Decomposition
5.3 Hypotheses
5.4 Test Statistics
5.5 Experiment
5.5.1 The Activity
5.5.2 Procedure
5.6 Analysis
5.6.1 Moments from Decomposition
5.6.2 Experimental Results
5.6.3 Hypotheses Testing
5.7 Summary

6 The Analytical Method
6.1 General
6.2 Work Package/Revenue Stream Level
6.2.1 Work Package Duration
6.2.2 Work Package Start Time
6.2.3 Work Package Cost
6.2.4 Net Revenue Stream
6.3 Project Performance Level
6.3.1 Project Duration
6.3.2 Project Cost
6.3.3 Project Revenue
6.4 Project Decision Level
6.4.1 Project Net Present Value
6.4.2 Project Internal Rate of Return
6.5 Discussion
6.5.1 Computational Accuracy
6.5.2 Standard Approach
6.6 Summary

7 Validations and Applications
7.1 General
7.2 Monte Carlo Simulation
7.2.1 Treatment of Correlations
7.2.2 The Number of Iterations
7.3 Modified PNET Algorithm
7.3.1 Road Pavement Project
7.3.2 Industrial Building Project
7.4 Parallel Network
7.5 First Example
7.5.1 Second Limiting Case
7.5.2 First Validation
7.5.3 Second Validation
7.5.4 Discussion
7.6 Second Example
7.6.1 Third Validation
7.6.2 Fourth Validation
7.6.3 Correlations at All Levels of the Project
7.6.4 Discussion
7.7 Sensitivity Analysis and Contingency
7.7.1 Sensitivity Analysis
7.7.2 Distribution of Contingency
7.8 Summary

8 Conclusions and Recommendations
8.1 Conclusions
8.2 Recommendations for Future Work
8.2.1 Analytical Method
8.2.2 Computer Programs
8.2.3 Risk Management Process

Bibliography

Appendices
A The First Four Moments
A.1 General
A.2 Expected Value
A.3 Second Central Moment
A.4 Third Central Moment
A.5 Fourth Central Moment
A.6 Note: Higher Order Moments
B Investigation of R0
C Bounds for a Correlation Coefficient
C.1 The Proof
C.2 The Bounds
D The Computer Programs
D.1 General
D.2 ELICIT - Program to Obtain Input Data
D.3 TIERA - Program for Risk Quantification
E The Correction Factor a
F Input Data for Numerical Examples

List of Tables

2.1 Subjective Percentile Estimates for A, P and L
2.2 Statistics for the Random Variables
2.3 First Four Moments and Partial Derivatives of Transformed Variables
2.4 Comparison of Moments and Shape Characteristics
2.5 Comparison of Estimation Approaches
2.6 Comparison of Moments and Shape Characteristics
2.7 Comparison of the First Partial Derivatives of Y
2.8 Comparison of Moments and Shape Characteristics
4.1 Quantity Descriptors (Q) (ft3)
4.2 Labour Productivity Rates, PLi (ft3/m.d)
4.3 Labour Usage, Li (m.d/year)
4.4 Condition Number (φ) and Correlation Coefficients
4.5 First Four Moments of the Work Package Durations
4.6 Moments of the Duration with an Unstable Correlation Matrix
5.1 Actual and Estimated Statistics for the Activity Duration (minutes)
5.2 Test Statistics for Expected Values and Standard Deviations
5.3 Significance Tests for Hypotheses (5.2) to (5.9) at 95% confidence level
6.1 Statistics for Work Package Costs
6.2 First Four Moments and Shape Characteristics for Project Cost
7.1 Ordered Paths and Duration Statistics - Table 2, Ang et al. (1975)
7.2 Ordered Paths and Duration Statistics from Modified PNET
7.3 Ordered Paths and Duration Statistics for the Industrial Building
7.4 Statistics for Project Duration for First Limiting Case
7.5 Statistics for Project Duration for Second Limiting Case
7.6 Statistics for Project Duration from First Validation - Ex #1
7.7 Statistics for Discounted Project Cost from First Validation - Ex #1
7.8 Statistics for Discounted Project Revenue from First Validation - Ex #1
7.9 Statistics for Project NPV from First Validation - Ex #1
7.10 Statistics for Project IRR from First Validation - Ex #1
7.11 Comparison of CPU times from First Validation - Ex #1
7.12 Statistics for Project Duration from Second Validation - Ex #1
7.13 Statistics for Discounted Project Cost from Second Validation - Ex #1
7.14 Statistics for Discounted Project Revenue from Second Validation - Ex #1
7.15 Statistics for Project NPV from Second Validation - Ex #1
7.16 Statistics for Project IRR from Second Validation - Ex #1
7.17 Comparison of CPU times from Second Validation - Ex #1
7.18 Deterministic and Probabilistic Analyses of Project Cost
7.19 Statistics for Project Duration from Third Validation - Ex #2
7.20 Statistics for Discounted Project Cost from Third Validation - Ex #2
7.21 Statistics for Discounted Project Revenue from Third Validation - Ex #2
7.22 Statistics for Project NPV from Third Validation - Ex #2
7.23 Statistics for Project IRR from Third Validation - Ex #2
7.24 Comparison of CPU times from Third Validation - Ex #2
7.25 Statistics for Project Duration from Fourth Validation - Ex #2
7.26 Statistics for Project Variables
7.27 Statistics for Discounted Project Cost from Fourth Validation - Ex #2
7.28 Statistics for Discounted Project Revenue from Fourth Validation - Ex #2
7.29 Statistics for Project NPV from Fourth Validation - Ex #2
7.30 Statistics for Project IRR from Fourth Validation - Ex #2
7.31 Comparison of CPU times from Fourth Validation - Ex #2
7.32 Statistics for Project Variables
7.33 Comparison of the Statistics for Project Duration
7.34 Comparison of the Statistics for Current Dollar Project Cost
7.35 Xp, C and E[...] for Different Probabilities of Success
7.36 Statistics for Current Dollar Work Package Cost - Ex #2
7.37 Distributed Contingency and Probability of Success
F.1 Activities and Estimated Durations (Pavement Project)
F.2 Activities and Estimated Durations (Industrial Building Project)
F.3 Deterministic Values for Work Package Durations and Costs
F.4 Statistics for Work Package Durations and Costs
F.5 Statistics for Revised Work Package Durations
F.6 Statistics for Annual Revenue and Operating Costs
F.7 Statistics for Quantity Descriptor Qi (ft3)
F.8 Statistics for Labour Productivity Rate PLi (ft3/m.d)
F.9 Statistics for Labour Usage Li (m.d/year)
F.10 Statistics for Equipment Usage Ei (e.d/year)
F.11 Statistics for Subcontractor Cost Si ($)
F.12 Statistics for Common Primary Variables
F.13 Statistics for Annual Revenue and Operating Costs

List of Figures

1.1 Overall Assessment of Project Results
1.2 Average Economic Rates of Return for Evaluated Projects
1.3 Project Completion Time Overruns/Underruns
1.4 Average Project Cost Overruns
1.5 Precedence Network for an Engineering Project
2.1 Moment Ratio Plane Showing Pearson Types I-XII
2.2 The Steps of the Iterative Process
2.3 The "Best Fit" Distribution
2.4 Approximated Pearson Type Distributions for P(x)
3.1 Calibration Curve
3.2 Subjective Percentile Estimates
4.1 Feasible Regions for T for Rn to be Positive Definite
4.2 Correlation from Common (Shared) Primary Variables
4.3 Expected Values
4.4 Second Central Moment
4.5 Third Central Moment
4.6 Fourth Central Moment
5.1 t Distribution for Two Tailed Test
5.2 t Distribution for Upper Tailed Test
5.3 χ² Distribution for Two Tailed Test
5.4 χ² Distribution for Upper Tailed Test
6.1 Generalized Cash Flow Diagram for an Engineering Project
6.2 Cash Flow Diagram for the Analytical Method
6.3 Flowchart for the Analytical Method
6.4 Generalized Discounted Work Package Cost
6.5 Upper and Lower Bounds for Project Duration
6.6 Upper and Lower Bounds for Project Cost
6.7 Bounds for the Project Net Present Value
6.8 Bounds for the Project Internal Rate of Return
7.1 Random Variate Generation
7.2 The Correction Factor a for Different Values of p
7.3 The Precedence Network for the Road Pavement Project
7.4 The Precedence Network for the Industrial Building Project
7.5 The Parallel Network
7.6 CDFs for Project Duration for the Parallel Network
7.7 The Project Network for the First Example
7.8 CDFs for Project Duration for the Single Dominant Path
7.9 CDFs for Project Duration - First Validation - Ex #1
7.10 CDFs for Project Duration - First Validation - Ex #1
7.11 CDFs for Discounted Project Cost - First Validation - Ex #1
7.12 CDFs for Discounted Project Revenue - First Validation - Ex #1
7.13 CDFs for Project Net Present Value - First Validation - Ex #1
7.14 CDFs for Project Internal Rate of Return - First Validation - Ex #1
7.15 CDFs for Project Duration - Second Validation - Ex #1
7.16 CDFs for Project Duration - Second Validation - Ex #1
7.17 CDFs for Discounted Project Cost - Second Validation - Ex #1
7.18 CDFs for Discounted Project Revenue - Second Validation - Ex #1
7.19 CDFs for Project Net Present Value - Second Validation - Ex #1
7.20 CDFs for Project Internal Rate of Return - Second Validation - Ex #1
7.21 CDFs for Current Dollar Project Cost - Second Validation - Ex #1
7.22 CDFs for Total Dollar Project Cost - Second Validation - Ex #1
7.23 The Project Network for the Second Example
7.24 CDFs for Project Duration - Third Validation - Ex #2
7.25 CDFs for Project Duration - Third Validation - Ex #2
7.26 CDFs for Discounted Project Cost - Third Validation - Ex #2
7.27 CDFs for Discounted Project Revenue - Third Validation - Ex #2
7.28 CDFs for Project Net Present Value - Third Validation - Ex #2
7.29 CDFs for Project Internal Rate of Return - Third Validation - Ex #2
7.30 CDFs for Project Duration - Fourth Validation - Ex #2
7.31 CDFs for Project Duration - Fourth Validation - Ex #2
7.32 CDFs for Discounted Project Cost - Fourth Validation - Ex #2
7.33 CDFs for Discounted Project Revenue - Fourth Validation - Ex #2
7.34 CDFs for Project Net Present Value - Fourth Validation - Ex #2
7.35 CDFs for Project Internal Rate of Return - Fourth Validation - Ex #2
7.36 Correlation Matrix for the Complete System
7.37 CDFs for Project Duration
7.38 CDFs for Total Dollar Project Cost
7.39 CDFs for Project Net Present Value
7.40 CDFs for Project Internal Rate of Return
7.41 CDF for Current Dollar Project Cost
7.42 CDF for Current Dollar Cost for Work Package #4
D.1 Flowchart for ELICIT
D.2 Flowchart to Ensure Coherence of Subjective Estimates
D.3 Flowchart of the Modified PNET Algorithm
D.4 Flowchart to Trace all the Paths to a Work Package
D.5 Typical Output from TIERA

Acknowledgement

I wish to express my most sincere gratitude to Dr. Alan Russell, my teacher and supervisor, who has had a profound and positive impact on my academic and professional attitudes. I greatly appreciate his advice, guidance and support throughout my graduate studies. This thesis would not exist without his patient efforts and valuable suggestions.

My special thanks to Dr. Bill Caselton and Dr. Ricardo Foschi for the numerous stimulating discussions, especially on the areas of correlations and simulations. Their efforts in reviewing this thesis are greatly appreciated. I thank Dr. Frank Navin and Dr. Karl Bury for serving on my supervisory committee.

Acknowledgement is most gratefully extended to the Canadian Commonwealth Scholarship and Fellowship Plan, which provided the scholarship that enabled me to pursue graduate studies in Canada. My special thanks to Miss Deirdre Roeser of the above plan for making my stay in Canada an enjoyable experience.

To Ashley Herath, Arif Rahemtulla, Gerard Canisius, Damika Wickremesinghe, Chanaka Edirisinghe, Ron Yaworsky, Ibrahim Al-Hammad, fellow colleagues and friends, many thanks for your moral support and encouragement. You made the bad times more tolerable and the good times more enjoyable.

Finally, to Deepthi, your patience, support and encouragement is most gratefully acknowledged.

To my parents and to my grandmother, for their support and encouragement in all my endeavors.

Chapter 1

Introduction

"Far better an approximate answer to the right question, which is often vague, than an exact answer to the wrong question, which can always be made precise." (John W. Tukey, Annals of Mathematical Statistics, vol. 33, 1962, p. 13.)

1.1 General

This thesis describes the development, validation and application of an analytical method for time and economic risk quantification during feasibility analysis for large engineering projects. The method has the ability to quantify the uncertainty in, and estimate the bounds on, the decision variables of a project implemented in traditional, fast-track or phased construction. The pragmatic convention of treating risk and uncertainty as synonyms is adopted in this thesis. The precision in the computations presented is to facilitate comparisons with other risk quantification techniques. This precision, however, belies the accuracy of estimation which can be achieved for real-life projects.
This chapter describes the background for the research problem, the economic structure adopted to represent an engineering project, the research objectives, a brief state of the art, and an overview of the thesis.

1.2 Background for the Research

Large, complex engineering projects will continue to be undertaken in both the developed and developing worlds to meet the increase in demand for infrastructure, energy, raw materials and employment. Typically these projects have long durations, high costs and multiple investors, and are undertaken in uncertain environments. The generation of benefits at the earliest possible date to pay back or justify the large investments required for such projects has necessitated the adoption of concepts such as fast-track and phased construction. The very nature of these concepts, coupled with the increasing size and complexity of such projects, necessitates explicit treatment of risk and uncertainty, especially in the early stages.

The World Bank reports that about 20% of the projects evaluated between 1974 and 1986 were determined to be unsatisfactory (see figure 1.1). The "satisfactory" projects between 1974 and 1984 were classified on the basis of the achievement of at least a 10% economic rate of return, or other significant benefits if the economic rate of return was lower, or an evaluator's qualitative judgement about the performance if no economic rate of return was calculated. The classifications for 1985 and 1986 were based on achievement of one of three states:

1. wholly satisfactory: the project achieves or exceeds all its major objectives, and achieves substantial results in almost all respects;
2. satisfactory: the project achieves most of its objectives and has satisfactory results with no major shortcomings;
3. marginally satisfactory: the project reveals major shortcomings in meeting objectives and/or achievements but is still considered worthwhile (Project Performance Results for 1986 (1988)).

Figure (1.2) depicts the average economic rates of return at appraisal and the average re-estimated economic rates of return calculated shortly after final disbursement of Bank funds. Both rates are based on future flows of economic benefits. The first is calculated from project costs and economic events predicted in the appraisal phase, while the second is based on the actual project cost, relative price changes and current economic events. The figures clearly display the risks and uncertainty associated with the predictions that are made during feasibility analysis.

The critical reasons for project failure, besides an adverse economic environment, were deficiencies in project design. These include the lack of clarity and acceptance of objectives in terms of technical, economic and administrative criteria, and/or a lack of thoroughness in the preparation and appraisal of the project design. Over one third of the projects reviewed by the World Bank in 1985 were judged to have been adversely affected by deficiencies in preparation or appraisal (The Twelfth Annual Review of Project Performance Results, 1987).

A profile of the project completion time overruns/underruns for 1513 projects reviewed by the World Bank between 1974 and 1986 is shown in figure (1.3). Time overrun/underrun refers to the difference between the actual and appraised project execution times. The execution time runs from the signing date of the loan/credit to the actual completion date. The average project execution time for the projects reviewed in 1986 was 6.1 years.
The principal reasons for completion delays were inadequate project preparation, changes in project scope, administrative constraints within the country, the unfamiliarity of the borrower with Bank procurement procedures, delays in the appointment of staff or consultants, and lack of financial support for the project by the borrower (The Twelfth Annual Review of Project Performance Results, 1987).

The average cost overruns for 1269 projects reviewed by the World Bank between 1974 and 1986 are depicted in figure (1.4). The average cost overrun is the unweighted mean of the percent cost overrun for individual projects.

[Figure 1.1: Overall Assessment of Project Results. Percentage of evaluated projects rated wholly satisfactory, satisfactory, marginally satisfactory and unsatisfactory, by evaluation year 1974-1985. Source: Appendix Tables 8 and 9, Project Performance Results for 1986 (1988); Appendix Table 1.14, 12th Annual Review of Project Performance Results (1987).]

[Figure 1.2: Average Economic Rates of Return for Evaluated Projects. Appraised versus re-estimated economic rates of return, by evaluation year 1974-1986. Source: Appendix Table 10, Project Performance Results for 1986 (1988).]

The World Bank states that, while Bank forecasting methods deserve continual scrutiny to enhance their effectiveness in identifying development opportunities, the resulting investments will continue to face considerable risk and uncertainty. The array of difficulties now confronting borrowers, such as foreign debt, domestic inflation, exchange rates and the continued volatility of external factors, implies that risk will remain an important issue, calling for broader risk analysis and more deliberate efforts at risk management. It is suggested that the way to address risks directly at the feasibility stage is to present the probability of project success (Project Performance Results for 1986 (1988)).

After an extensive study on risk management in engineering construction, Hayes et al. (1986) concluded that: all too often, risks are either ignored or dealt with in a completely arbitrary way (simply adding a 10% contingency onto the estimated cost of a project is typical); the greatest uncertainty is present in the earliest stages in the life of a project, which is also when decisions of greatest impact are made, so risks must be treated at this phase; and, since all parties involved in construction projects and contracts would benefit from a reduction in uncertainty prior to financial commitment, more effort should be devoted to risk management.

While risk and uncertainty are distinguished in the context of decision analysis (Siddall, 1972), Perry and Hayes (1985b) state that risk and uncertainty are inherently present in all construction projects, and that in the practice of construction risk management such distinctions are unnecessary and may even be unhelpful.

The objective of the feasibility analysis is to develop and evaluate alternatives so that the most desirable ones are selected and implemented. Generally speaking, the selected alternatives should be, in the decision maker's view, the best in terms of technical, economic and socio-political feasibility. However, in practice technical feasibility is considered the dominant criterion (Jaafari, 1988a, 1988b).
Youker (1989) states that the economic analysis should be treated as the decision criterion and done before, rather than after, detailed engineering design.

[Figure 1.3: Project Completion Time Overruns/Underruns. Distribution of completion delays (from delay < 0% to delay > 200%) for evaluation years 1974-79, 1980-84, 1985 and 1986. Source: Appendix Table 18, Project Performance Results for 1986 (1988).]

[Figure 1.4: Average Project Cost Overruns, by evaluation year 1974-1986. Source: Appendix Table 17, Project Performance Results for 1986 (1988).]

A project is economically feasible if the net present value of the benefits generated from it exceeds the net present value of its cost at the minimum attractive rate of return (marr). This thesis treats the net present value of a project at marr and its internal (economic) rate of return as the two basic measures that guide decisions on the economic feasibility of an engineering project (Au, 1988; Bonini, 1975; Cooper and Chapman, 1987; Thompson and Wilmer, 1985). Other measures, such as the ratio of net present value over total initial capital investment (Jaafari, 1988b) to complement total life cycle cost (Jaafari, 1988a, 1988c), and the risk adjusted discount rate (Farid et al., 1989), have been suggested for construction projects. Taylor (1988) argues that the most reliable approach for appraising projects is to use the criterion of net present value alone, and not net present value divided by the initial cost.

The greatest degree of uncertainty about future estimates is encountered at the feasibility stage. Consequently, decisions taken during this stage of a project can have a large impact on its final cost and duration. However, it is also in this stage that decision makers have the greatest leeway to make changes in the scope of the project, to restructure a marginally unfeasible project into a feasible one, or even to cancel the project with minimum loss (Youker, 1989). The limited information available at this stage increases the uncertainty of such decisions. The ability to identify, measure and respond to potential risks and uncertainties will significantly improve the quality of decisions made during feasibility analysis. This process of risk identification, risk quantification and risk response is considered the most suitable approach for risk management in engineering projects (Flanagan et al., 1987; Perry and Hayes, 1985b). More comprehensive discussions on risk management in engineering projects are found in Ashley (1980a, 1980b); Ashley and Bonner (1987); Chicken and Hayns (1989); Cooper and Chapman (1987); Hayes et al. (1986); Jaafari (1986, 1987, 1988b); Perry (1986); Perry and Hayes (1985a, 1985b); Thompson (1981); Youker (1989).

1.3 Problem Statement and Structure

Economic risk quantification is a vital step for risk management in large engineering projects because it develops the basis for the decision maker to respond to identified risks. While economic risk quantification techniques for engineering projects are available, in their current form many lack the ability to model large engineering projects realistically enough for a comprehensive feasibility analysis.
Some of the considerations for realistic modeling are: the limitation of data and the need for judgements; the interaction of time with cost and revenue; correlation among variables; the existence of multiple paths to complete a project; the number of variables that can be used in the analysis; and, most importantly, the need to evaluate a range of alternatives economically to select the best strategy to develop a project. These issues are dealt with explicitly in this thesis in the formulation of the analytical method for risk quantification.

Central to this method is the description of the project economic structure as a hierarchy containing all of the derived time and economic variables of an engineering project. The one presented is an extension of the structure developed by Ranasinghe (1987) to represent an engineering project. In this thesis, three levels of description, namely project decision, project performance and work package/revenue stream, describe the project economic structure. Work packages and revenue streams of a project are linked together by way of a precedence network (see figure 1.5). The work package/revenue stream level is considered the lowest level at which meaningful information can be obtained during feasibility analysis. However, if necessary, a work package can be further decomposed into a sub-network of activities, with each activity having a duration and cost function. Definitions relevant to this structure are as follows.

[Figure 1.5: Precedence Network for an Engineering Project. A start work package is followed by work packages #1-#7 in precedence order, feeding revenue streams #1-#3.]

1.3.1 Work Package/Revenue Stream Level

The variables at the work package/revenue stream level are as follows.

Work Package Duration: Work package duration can be estimated directly by the analyst (a holistic value) or derived using a functional relationship which treats work scope, anticipated job conditions, likely construction methods, productivity and resource levels, or a sub-network of activities.

Work Package Cost: A generalized expression for work package cost, which can be used to estimate constant, current and total dollar cost as well as discounted value, is as follows:

$$WPC_i = f\,e^{(\theta_{Ci}-y)T_{SCi}} \int_0^{T_{Ci}} C_{0i}(\tau)\,e^{(\theta_{Ci}-y)\tau}\,d\tau \;+\; (1-f)\,e^{(r-y)T_F}\,e^{(\theta_{Ci}-r)T_{SCi}} \int_0^{T_{Ci}} C_{0i}(\tau)\,e^{(\theta_{Ci}-r)\tau}\,d\tau \qquad (1.1)$$

where WPC_i is the discounted i-th work package cost, C_{0i}(τ) is the function for the constant dollar cash flow of the i-th work package, T_{SCi} and T_{Ci} are the work package start time and duration, T_F is the time at which the repayment of interim financing is due for all work packages, f is the equity fraction, and θ_{Ci}, r and y are the inflation, interest and discount rates respectively. The time τ is measured from the start of the i-th work package. C_{0i}(τ) can be either holistic or a decomposed function of work scope, resources applied, and productivity.

The work package cost is expressed in discounted dollars for generality. When required, the work package cost can be expressed: in total dollars (constant + inflation + financing) by setting the discount rate to zero (WPC_{TDi}); in current dollars by setting the discount rate to zero and the equity fraction to one (WPC_{CDi}); or in constant dollars by setting the discount and inflation rates to zero and the equity fraction to one (WPC_{C0i}).
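As a rough illustration of how equation (1.1) can be evaluated numerically, the sketch below integrates a constant dollar cash flow under each dollar basis. It is a minimal sketch, not the thesis's TIERA implementation: the uniform cash flow and all parameter values are hypothetical, and the second (debt-financed) term follows the reconstruction of equation (1.1) given above.

```python
import numpy as np

def work_package_cost(c0, T_sc, T_c, T_f, f, theta, r, y, n=2000):
    """Numerically evaluate the two integrals of equation (1.1).

    c0   : function tau -> constant dollar cash flow rate
    T_sc : work package start time; T_c : work package duration
    T_f  : repayment time for interim financing
    f    : equity fraction; theta, r, y : inflation, interest, discount rates
    """
    tau = np.linspace(0.0, T_c, n)
    flow = np.array([c0(t) for t in tau])
    # Equity-financed portion, discounted at y from the time of expenditure.
    equity = f * np.exp((theta - y) * T_sc) * np.trapz(flow * np.exp((theta - y) * tau), tau)
    # Debt-financed portion: accrues interest at r until T_F, then is discounted at y.
    debt = (1.0 - f) * np.exp((r - y) * T_f) * np.exp((theta - r) * T_sc) * \
           np.trapz(flow * np.exp((theta - r) * tau), tau)
    return equity + debt

uniform = lambda tau: 100.0   # hypothetical $100/period constant dollar flow
args = dict(c0=uniform, T_sc=2.0, T_c=3.0, T_f=6.0, f=0.3, theta=0.05, r=0.12)
discounted = work_package_cost(**args, y=0.10)   # discounted dollars
total = work_package_cost(**args, y=0.0)         # total dollars (y = 0)
current = work_package_cost(c0=uniform, T_sc=2.0, T_c=3.0, T_f=6.0,
                            f=1.0, theta=0.05, r=0.12, y=0.0)  # current dollars
```

Setting the rates and the equity fraction as described in the text reproduces the total, current and constant dollar special cases of the same expression.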
Net Revenue Stream: The present value of a net revenue stream can be expressed as follows:

$$NRS_i = \int_{T_{SRi}}^{T_{SRi}+T_{Ri}} \left[ R_{0i}(t)\,e^{\theta_{Ri}(t-T_{SRi})} - M_{0i}(t)\,e^{\theta_{Mi}(t-T_{SRi})} \right] e^{-yt}\,dt \qquad (1.2)$$

where NRS_i is the discounted i-th net revenue stream, R_{0i}(t) and M_{0i}(t) are the functions for the constant dollar cash flows of the i-th gross revenue and of operation and maintenance cost, T_{SRi} and T_{Ri} are the early start time and duration of the revenue stream, and θ_{Ri}, θ_{Mi} and y are the respective inflation rates and the discount rate. R_{0i}(t) and M_{0i}(t) can be either holistic or decomposed functional forms.

1.3.2 Project Performance Level

The variables at the project performance level are as follows.

Project Duration:

$$T_j = \sum_{i=1}^{m_j} WPD_{ij} \qquad (1.3)$$

where T_j is the duration of the j-th path, WPD_{ij} is the duration of the i-th work package on the j-th path, and m_j is the number of work packages on that path. For the deterministic case, project duration is given by

$$T = \max_{\forall j} (T_j), \qquad j = 1, \ldots, n \qquad (1.4)$$

When time is uncertain, the probability of completing the project in time t, denoted by p(t) (Ang et al., 1975), is given by

$$p(t) = 1 - \left[ P(T_1 > t) + P(T_1 < t,\, T_2 > t) + \cdots + P(T_1 < t,\, T_2 < t,\, \ldots,\, T_{n-1} < t,\, T_n > t) \right] \qquad (1.5)$$

where T_1, T_2, ..., T_n are the durations of the possible paths to complete the project. Times and probabilities for intermediate milestones can be determined in a similar manner.

Project Cost:

$$\text{Discounted Project Cost} = \sum_{i=1}^{n} WPC_i \qquad (1.6)$$

$$\text{Project Cost in Total Dollars} = \sum_{i=1}^{n} WPC_{TDi} \qquad (1.7)$$

$$\text{Project Cost in Current Dollars} = \sum_{i=1}^{n} WPC_{CDi} \qquad (1.8)$$

$$\text{Project Cost in Constant Dollars} = \sum_{i=1}^{n} WPC_{C0i} \qquad (1.9)$$

Project Revenue:

$$\text{Discounted Project Revenue} = \sum_{i=1}^{r} NRS_i \qquad (1.10)$$

1.3.3 Project Decision Level

The variables at the project decision level are as follows.

Net Present Value:

$$NPV = \text{Discounted Project Revenue} - \text{Discounted Project Cost} \qquad (1.11)$$

Internal Rate of Return:

$$IRR = \text{the discount rate at which } NPV = 0 \qquad (1.12)$$

1.3.4 Observation

The variables at every level of the project economic structure can be described by Y = g(X), where Y is defined as the derived variable and X is the vector of its primary variables. The derived variables of the lower levels of the project economic structure are the primary variables for the higher levels. At the work package/revenue stream level the derived variables are work package duration, start time, cost and net revenue streams. At the project performance level the derived variables are the project duration, cost, revenue and cash flow profile, while the primary variables are the derived variables at the work package/revenue stream level. At the project decision level the derived variables are the project net present value and internal rate of return, while the primary variables are the discounted project cost and revenue.

At the work package/revenue stream level, time, cost and revenue may be predicted using a variety of functional forms, from growth and decay functions for revenue and production functions for time and cost through to network models. These production functions are generally multiplicative and/or additive in nature. The functions for derived variables at the project performance and decision levels are always predetermined and linear.

1.4 Objectives of the Research

The primary objectives of this research are:

1. to develop an analytical method for economic risk quantification during feasibility analysis for large engineering projects, and
2. to computerize the method to explore its behavior, to validate it and to test its practicality in the measurement of uncertainty of performance and decision variables.
The secondary objective of this research is to lay the foundation for obtaining the input data necessary to make the analytical method a practical tool for the construction industry. The input data are in the form of subjective probabilities and correlation matrices for the primary variables.

The desired features of the analytical method are to:

- model the interaction of time, cost and revenue throughout the life cycle of the project;
- provide the freedom to model a project to any level of detail using any number of variables;
- recognize the constraints that exist during the feasibility stage, such as data limitations and the need for subjective probabilities;
- quantify the uncertainty in, and estimate bounds on, performance variables such as the duration, cost, revenue, net present value and internal rate of return of a project;
- perform sensitivity and probabilistic analyses;
- consider multiple paths (shorter paths with higher variance or skewness) when evaluating project duration;
- treat correlations at all levels of the project;
- estimate individual contributions to overall uncertainty;
- provide intermediate milestone information to set realistic targets for performance; and
- have the flexibility to model and evaluate a range of alternatives economically, to select the best strategy to develop a project.

1.5 Previous Research and Motivation

A review of the literature shows that estimates for project decision and performance variables are still treated deterministically by most authors. A number of authors have recognized the random nature of estimates and adopted probabilistic concepts in developing their methods. These methods are classified below according to their individual applications.

Probabilistic Time Methods: those which evaluate the duration of activities and the project as the decision criterion (Ahuja and Nandakumar, 1985; Ang et al., 1975; Carr et al., 1974; Crandall, 1976, 1977; Crandall and Woolery, 1982; Elmaghraby, 1977; Hall, 1986; Jaafari, 1984; Kennedy and Thrall, 1976; King and Wilson, 1967; King et al., 1967; King and Lukas, 1973; McGough, 1982; Mirchandani, 1976; Pritsker and Happ, 1966; Pritsker and Whitehouse, 1966; Woolery and Crandall, 1983).
Of these, C A S P A R (Computer Aided Simulation for Project Appraisal and Review) developed by Thomp-son and Wilmer (1985) is the widely applied model for large engineering projects -Severn Tida l Power (Thompson et al., 1980), Mersey Barrage (Perry et al, 1983). C A S P A R (Thompson and Wilmer, 1985) is a project management tool designed to model the interaction of time, cost and revenue throughout the entire life of a project. It differs from the normal economic appraisal model as it is network based and is designed to simulate the realistic interaction of time and money. C A S P A R models a project in four stages. The first is a definitive model of the project con-structed from a network of inter-related activities using a precedence diagram, to Chapter 1. Introduction 16 which costs and revenues are attached. The second stage identifies and investigates major uncertainties by performing a sensitivity analysis. During the third stage the definitive model is reviewed in light of the sensitivity analysis. At the fourth stage a probabilistic risk analysis is performed using the revised definitive model in a Monte Carlo simulation. A suitable probability distribution is assumed for the uncertain variables - a generalized triangular distribution has been assumed for variables in the applications. The decision criteria are the project net present value and internal rate of return. PROJECT (Thompson and Whitman, 1974) is an older version of CASPAR. When CASPAR and other simulation based methods (Bjornsson, 1977; Flana-gan et al., 1987; Hull, 1980; Jaafari, 1988a; 1988c; Moeller, 1972; Pouliquen, 1970; Thompson and Whitman, 1974; Van Tetterode, 1971) are considered in the context of the desired features of the analytical method, issues such as modelling interaction of time, cost and revenue throughout the life cycle of the project; quantifying uncertainty of decision variables by developing cumulative distribution functions; performing sen-sitivity and probabilistic analysis; treating correlations at the level of variable input; the effect of multiple paths in the project network when evaluating project duration; and obtaining milestone information are resolved. However, when the number of variables in the analysis is large and the variables are correlated, Monte Carlo simulation can be both time consuming and computa-tionally expensive, precluding exploration of a wide range of alternative strategies. Hence, the motivation for an analytical method that can handle a realistic formula-tion of the problem, a large number of correlated variables in the analysis and yet is computationally economical, thereby permitting the evaluation of several strategies. Chapter 1. Introduction 17 1.6 Structure of the Thesis Chapter two develops the risk measurement framework which is the foundation for the analytical method. The framework, based on four assumptions, quantifies the uncertainty of a derived variable that is functionally related to a set of primary vari-ables (Y = £f(X)). The uncertainty of the derived variable is quantified by developing a cumulative distribution function for it. This development is based on the first four moments of a derived variable obtained from moment analysis using the truncated Taylor series expansion of the transformed function for g(X), and the first four mo-ments of transformed variables. The first four moments are the expected value and second to fourth central moments. 
The correlations between primary variables are treated using a variable transformation approach. A numerical example is used to demonstrate the framework, while the stochastic breakeven problem is used for comparison with some published results (Kottas and Lau, 1978).

Chapter three develops an approach to elicit accurate and calibrated subjective probabilities as percentile estimates of an expert's subjective prior probability distribution for a primary variable. The analysis and verification method ensures that the measured belief is coherent and useful for the quantification of uncertainty of a derived variable.

Chapter four discusses the correlations between variables. The discussion highlights the positive definite correlation matrix. A method to elicit a positive definite correlation matrix for primary variables, and a method to obtain a positive definite correlation matrix for the derived variables when only linear correlations between primary variables are available, are developed. Numerical comparisons under general conditions and under multicollinearity are performed to show that the variable transformation approach is more robust than the standard method used to treat correlations in moment analysis.

Chapter five describes a study on the decomposition of a derived variable that is sometimes estimated holistically in the elicitation of subjective probabilities. The study contains hypotheses, an experiment and test statistics to compare the two estimation approaches used in engineering risk quantification. The duration of an activity is used as the example of a derived variable to compare holistic versus decomposed methods of estimation.

Chapter six combines all of the developments and studies of chapters two to five with the project economic structure to develop the analytical method for time and economic risk quantification for large engineering projects. The method computes the moments for derived variables at the work package/revenue stream level (work package duration, cost, and net revenue), the project performance level (project duration, cost and revenue) and the project decision level (net present value and internal rate of return) using the moments and correlation matrices for the primary variables in their functional forms. The shape characteristics of the derived variables are used to approximate Pearson type distributions to quantify their uncertainty. The computed moments for derived variables at the project decision and performance levels are exact; approximations for moments occur only for the derived variables at the work package/revenue stream level. The expected value, standard deviation and cumulative distribution function for project duration are obtained from the modified PNET approach (Ang et al., 1975), while those for the project internal rate of return are derived from a variation of Hillier's method (Hillier, 1963).

Chapter seven describes the validations and applications of the analytical method. The validations are done using Monte Carlo simulation. The modified PNET algorithm is validated by solving two numerical examples that were presented by Ang et al. (1975). The Monte Carlo simulation process is first validated using two limiting cases. The first limiting case is a parallel network, while the second is a single dominant path in a highly interrelated network.
The data for the first example is obtained from an actual deterministic feasibility analysis. The second is a hypothet-ical engineering project developed to demonstrate the full potential of the analytical method. The types of sensitivity analyses that can be performed by the analytical method are explored. A detailed method to distribute the contingency for a derived variable to its primary variables using one of the sensitivity analyses is presented. Chapter eight contains the conclusions and recommendations for future work. Appendices A, B, C and E contain proofs and derivations used for the develop-ments described by this thesis. Appendix D describes the two computer programs, "ELICIT" and "TIERA", developed to obtain input data and perform time and economic risk quantification. Appendix F contains the input values used for the numerical examples presented in chapter seven. C h a p t e r 2 R i s k M e a s u r e m e n t F r a m e w o r k 2.1 General The framework to quantify the uncertainty of a derived variable is developed in this chapter. The inspiration for this development is the "unified statistical framework for probabilistic planning models" suggested by Kottas and Lau (1980), (1982). Their framework is a computational alternative to simulation for models involving additive and multiplicative functions of random variables. The proposed framework is for any arbitrary function, #(X), between the derived variable and its primary variables. This development is based on four assumptions which are explicitly identified and discussed, moment analysis and a function #(X). The use of a truncated Taylor series expansion of the function, <7(X), for moment analysis generalizes the type of functional relationship between the derived variable and its primary variables. In addition to <?(X), moments of the primary variables are required to evaluate the moments of the derived variable. The Pearson family of distributions and subjective percentile estimates are used to approximate the moments of the primary variables. The cumulative distribution function for the derived variable is approximated from the Pearson family of distributions using its first four moments. The next section describes the Pearson family of distributions. In the third section an iterative process for approximating the first four moments of a primary variable is developed. The fourth section describes the approach to approximate the first four 20 Chapter 2. Risk Measurement Framework 21 moments of the derived variable. The approximation of the cumulative distribution function for the derived variable is described in the fifth section. The application of the risk measurement framework to three examples: duration of a construction activity; the breakeven analysis problem; and a linear function is presented in the sixth section. The seventh section highlights the contributions of this chapter. 2 . 2 The Pearson Family of Distributions The Pearson family of distributions are obtained as solutions of the differential equa-tion which, when the origin of x is at the mean has the form, dy - y (x + b) T = —T"T 7 2 Ll < x < L2 (2.1) dx a -+- 6 x + c xl where the coefficients a, fc, and c are functions of the moment ratios (V/Si, 02)> a n d may be expressed as (Amos and Daniel, 1971), a = " ' < 4 f t - 3 f t ' (2.2) 10 & - 12 ft - 18 K 1 h = f ^ ' V 3> (2.3) 10 32 - 12 By - 18 K 1 2/32 - 3 / 3 , - 6 10 & - 12 0 i - 18 where A = —5 and /32 =—5. P2, p3, and p4 are the second, third, and fourth A*! 
For each pair (√β₁, β₂), the solution of equation (2.1) defines a density function with mean zero on an interval L₁ < x < L₂. The solutions assume a variety of different mathematical forms according to the values of the moment ratios (Johnson et al., 1963). These forms, or "types of distributions", may be associated with different regions in a plane having rectangular co-ordinate axes √β₁ and β₂ (see figure 2.1).

Since the shapes of the distributions change continuously across the boundaries of the regions, Johnson et al. (1963) compiled tables for the Pearson family of distributions by treating the problem as a single unity. These tables tabulate the standardized deviate for fifteen percentage points based on the values of √β₁ and β₂. The fifteen percentage points are the median and the upper and lower 0.25, 0.5, 1.0, 2.5, 5.0, 10.0, and 25.0 percentage points. Amos and Daniel (1971) extended the Pearson tables to cover a much larger area of the (√β₁, β₂) plane.

Assumption 2.1: The derived and the primary variables are continuous, and their probability distributions are approximated by the Pearson family of distributions.

The variables of the project economic structure, such as time, cost, revenue, inflation and interest rates, are all continuous in nature. The continuous random variable is a convenience for probabilistic applications. Most of the probability distributions used for applications in engineering, such as the Normal, Beta (Type I), Gamma (Type III), Exponential (Type X), Uniform, Lognormal (Type V), Student's t (Type VII), Chi-square (Type III) and F (Type VI) distributions, are members of the Pearson family (Harr, 1987; Ord, 1985).

[Figure 2.1: Moment Ratio Plane Showing Pearson Types I-XII. Source: Amos and Daniel (1971).]

While there is no guarantee that a Pearson type distribution will always provide a good fit for a variable, the theoretical developments of the Pearson family (Kendall and Stuart, 1969; Ord, 1972) and the widespread applications of the Pearson system indicate that it will provide a good fit to most "real life" distributions (Kottas and Lau, 1982). However, it must be noted that there are theoretical examples for which a Pearson type provides a poor fit for a variable with the same first four moments (Pearson, 1963). The assumption permits the development of the framework as a "distribution free" method because of the high flexibility of the Pearson system (Siddall, 1972).

2.3 The Moments of a Primary Variable

A continuous random primary variable approximated by a Pearson type distribution can be expressed by its first four moments (Kendall and Stuart, 1969; Ord, 1972). The first four moments of a primary variable are its expected value and second to fourth central moments.

Assumption 2.2: An expert can provide estimates for percentiles of his subjective prior probability distribution for a primary variable at the input level.

The use of subjective probabilities to quantify the uncertainty about the primary variables stems from the observation that actuarial or relative frequency based data are unavailable, or not meaningful for direct input as probability forecasts, for estimating future events (Wright and Ayton, 1987). To use subjective probabilities as input to risk analyses, they have to be accurate, calibrated and coherent (Lindley et al., 1979). In chapter three, a method is developed to elicit accurate, calibrated and coherent subjective probabilities as percentile estimates of an expert's subjective prior probability distributions for primary variables.
In chapter three a method to elicit accurate, calibrated and coherent sub-jective probabilities as percentile estimates of an expert's subjective prior probability distributions for primary variables is developed. A step by step iterative process for approximating the first four moments of a primary variable (see figure 2.2) is set out in this section. The sole purpose of ap-proximating third and fourth central moments of primary variables is to approximate third and fourth central moments of the derived variable. This information is required Chapter 2. Risk Measurement Framework 25 to fit a Pearson distribution and to make probabilistic statements about the derived variable. The starting point follows from the first two assumptions. The first assumption permits the use of the table for percentage points of standardized Pearson distribu-tions (Amos and Daniel, 1971; Johnson et al., 1963). From the second assumption estimates for the 5, 25, 50, 75, and 95 percentiles of an expert's subjective prior prob-ability distribution for a primary variable are elicited. The process of approximating the first four moments of a primary variable stops when either of the following con-ditions are met. Condition 1 : When a„Q5 (equation 2.8) is greater than <TQ 025 (equation 2.9), Condition 2 : When "best fit" distribution is the same as that of the previous cycle. The step by step process for generating the first four moments for a primary variable is as follows. Step 1 : Subjective Estimates Obtain the estimates for the 5, 25, 50, 75, and 95 percentiles of the expert's subjective prior probability distribution for the primary variable (assumption 2.2). Step 2 : Expected Value and Standard Deviation Since the time Malcolm et al. (1959) suggested the well known approximations for PERT, a number of different studies have been done on the approximations for the expected value and standard deviation of a continuous random variable (Brit-ney, 1976; Davidson and Cooper, 1976; Hull, 1978; Keefer and Bodily, 1983; Moder and Rodgers, 1968; Pearson and Tukey, 1965; Perry and Greig, 1975). From an ex-tensive empirical study, Pearson and Tukey (1965) developed approximations to the expected value and standard deviation for the Pearson family of distributions. This development was based on the constancy of the ratio of the distances between suitable symmetrical percentage points to the standard deviation. Chapter 2. Risk Measurement Framework Step 1 : Elicit Subjective Estimates for the Uncertain Primary Variable 1 * Step 2 : Approximate Expected Value and Standard Deviation (o = o0X)5) I  Step 3 : Standardize Subjective Estimates ~ I Step 4: Select "Best Fit" Distribution Yes (Condition 2) Step 5 : Obtain 2.5% and 97.5% Estimates Step 6 : Check the Standard Deviation Yes (Condition 1) Step 7 : The Iterative Cycle (o* = a 0.025) Re-estimate Percentile Values (see section 3.7) Step 8 : VB, and B 2 for the Primary Variable Step 9 : The Central Moments for the Primary Variable Figure 2.2: The Steps of the Iterative Process Chapter 2. Risk Measurement Framework 27 The approximation for the expected value from percentile values ([P%]) is, E[X] = [50%] + 0.185 A (2.5) w here A = [95%] + [5%] - 2 [50%]. 
The approximation for the standard deviation using percentile values and the iteration scheme suggested by Pearson and Tukey (1965) is,

    σ = max{σ0.05, σ0.025}    (2.7)

where

    σ0.05 = ([95%] - [5%]) / max{3.29 - 0.1 (Δ/σ̂0.05)², 3.08}    (2.8)

    σ0.025 = ([97.5%] - [2.5%]) / max{3.98 - 0.1 (Δ/σ̂0.025)², 3.66}    (2.9)

σ̂0.05 and σ̂0.025 are the approximations for the standard deviation from the previous iteration. For the first iteration σ0.05 and σ0.025 are defined on the basis of figure (3) of Pearson and Tukey (1965) as,

    σ0.05 = ([95%] - [5%]) / 3.25    (2.10)

    σ0.025 = ([97.5%] - [2.5%]) / 3.92    (2.11)

Pearson and Tukey (1965) state that the error in the approximation of the expected value is not more than 0.1% for a large area of the (√β1, β2) plane and not more than 0.5% for the rest. The error for the standard deviation is less than 0.5% for a very large area of the (√β1, β2) plane.

After comparing most of the approximations available to estimate the expected value and standard deviation of continuous random variables from judgmental (subjective) estimates, Keefer and Bodily (1983) concluded that the approximations suggested by Pearson and Tukey (1965) are more accurate, often by a wide margin, than their competitors. For their study Keefer and Bodily (1983) used only the approximation given by equation (2.8) for the standard deviation because of the difficulty of assessing the 2.5 and 97.5 percentiles subjectively. However, the standard deviation approximated from equation (2.8) alone is an underestimate for a large part of the Pearson family (Pearson and Tukey, 1965). In developing the framework both approximations for the standard deviation (equations 2.8 and 2.9) are included in the iterative approach, thereby ensuring that the approximated standard deviation for the primary variable is the maximum. The 2.5 and 97.5 percentiles for equations (2.9) and (2.11) are obtained as described in steps 3 through 7.

The five subjective estimates from step 1 are used in equations (2.5), (2.6), (2.8) and (2.10) to approximate the expected value and the standard deviation for the primary variable. The process of determining the standard deviation using equation (2.7) starts with σ equal to σ0.05.

Step 3 : Standardize the Subjective Estimates
Using the approximations for the expected value and the standard deviation of the primary variable from step 2, the five subjective estimates from step 1 are standardized by,

    X_p = (x_p - E[X]) / σ    (2.12)

where x_p is a subjective percentile estimate and X_p is its standardized value.

Step 4 : The "Best Fit" Distribution
The standardized estimates from step 3 are then compared with the 5.0, 25.0, 50.0, 75.0, and 95.0 percentage points for the standardized Pearson variable tabulated by Amos and Daniel (1971), by minimizing the sum of squared deviations as suggested by Ord (1972) to approximate the "best fit" distribution. The acceptable error of the approximation (the square root of the sum of squared deviations) for the "best fit" distribution should be specified by the user. A maximum cumulative error of 10% of the standard deviation is used as a default value in the computer program. The computations of steps 2 and 3 are illustrated in the sketch below.
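The following Python sketch implements equations (2.5) to (2.12); it is an illustration by way of example, not the thesis program, and the function names are the author's:

    def pearson_tukey_moments(p5, p25, p50, p75, p95, p025=None, p975=None):
        # Expected value, equations (2.5) and (2.6)
        delta = p95 + p5 - 2.0 * p50
        mean = p50 + 0.185 * delta
        # Standard deviation: first pass from equation (2.10), then one
        # application of equation (2.8); steps 3 to 7 repeat the refinement
        s05 = (p95 - p5) / 3.25
        s05 = (p95 - p5) / max(3.29 - 0.1 * (delta / s05) ** 2, 3.08)
        sigma = s05
        if p025 is not None and p975 is not None:
            # once the 2.5 and 97.5 percentiles are generated in step 5,
            # equations (2.11) and (2.9) supply the second estimate
            s025 = (p975 - p025) / 3.92
            s025 = (p975 - p025) / max(3.98 - 0.1 * (delta / s025) ** 2, 3.66)
            sigma = max(s05, s025)           # equation (2.7)
        return mean, sigma

    def standardize(percentiles, mean, sigma):
        # Step 3, equation (2.12)
        return [(x - mean) / sigma for x in percentiles]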
Step 5 : 2.5% and 97.5% Estimates
For the "best fit" distribution from step 4, obtain the standardized Pearson variable values for the 2.5% and 97.5% points (see figure 2.3). From these standardized values generate the actual values for the two percentiles from,

    x_p = X_p σ + E[X]    (2.13)

[Figure 2.3: The "Best Fit" Distribution (the standardized percentile values with the fitted density and its 0.025 and 0.975 points).]

Step 6 : Check the Standard Deviation
From the generated values for 2.5% and 97.5% in step 5 and equations (2.9) and (2.11), evaluate σ0.025.
If σ0.05 > σ0.025 : go to step 8, as Condition 1 is satisfied.
If σ0.05 < σ0.025 : go to step 7 for the iterative cycle. The standard deviation for the primary variable σ is now equal to σ0.025.

Step 7 : The Iterative Cycle
Go back to step 3 to start the iterative cycle. If the "best fit" distribution from step 4 is the same as for the previous cycle, then go to step 8, as Condition 2 is satisfied. If not, continue until either of the conditions is met, for a specified number of iterative cycles.

Step 8 : √β1 and β2 for the Primary Variable
Obtain √β1 and β2 from the Pearson table (Amos and Daniel, 1971) for the "best fit" distribution selected in step 4. When Condition 1 is satisfied, from equation (2.7) the standard deviation for the variable is σ0.05 (the condition used by Keefer and Bodily, 1983). When Condition 2 is satisfied, the standard deviation for the variable is σ0.025. Then the requirement specified by Pearson and Tukey (1965) in equation (2.7) is fulfilled.

Step 9 : The Central Moments
From the standard deviation approximated at step 8 and the √β1 and β2 for the "best fit" distribution, the second, third and fourth central moments of the primary variable are evaluated from,

    μ2(X) = σ²    (2.14)
    μ3(X) = √β1 σ³    (2.15)
    μ4(X) = β2 σ⁴    (2.16)

2.4 Moments of the Derived Variable

The method to approximate the moments of the derived variable is based on moment analysis. The moment analysis uses the moments of the transformed variables and a truncated Taylor series expansion of the transformed function for g(X) to approximate the first four moments of the derived variable.

Assumption 2.3 : A derived variable can be more accurately estimated from a set of primary variables that are functionally related to it than by direct estimation.

When a functional form between a derived variable and primary variables is used in stochastic applications, it is based on the premise that it is more accurate to estimate the primary variables individually than to estimate the derived variable directly (Kottas and Lau, 1982). It reflects the engineering penchant to seek more detail as a way of seeking greater precision. The analytical method developed in chapter six does not require this assumption at all levels but allows for elaboration of time and cost estimating relationships to achieve more precision. However, when variables which are sometimes assessed holistically are used in decomposed estimation (duration, productivity), the assumption becomes debatable. Chapter five examines the validity of assumption (2.3) for such variables.

2.4.1 Truncated Taylor Series

For a system of n primary variables described by the function Y = g(X), which has continuous partial derivatives, the Taylor series expansion of the function g(X) about the mean values X̄ is given by,

    g(X) = g(X̄) + Σ_{i=1}^{n} (X_i - X̄_i) ∂g/∂X_i + (1/2) Σ_{i=1}^{n} Σ_{k=1}^{n} (X_i - X̄_i)(X_k - X̄_k) ∂²g/(∂X_i ∂X_k) + higher order terms    (2.17)

The Taylor series is then truncated at the second order such that the truncation error of the approximation is,

    |ε| = | (1/2) Σ_{i=1}^{n} Σ_{k≠i} (X_i - X̄_i)(X_k - X̄_k) ∂²g/(∂X_i ∂X_k) + higher order terms |    (2.18)
The truncated second order approximation of the expansion is

    Y ≈ g(X̄) + Σ_{i=1}^{n} (X_i - X̄_i) ∂g/∂X_i + (1/2) Σ_{i=1}^{n} (X_i - X̄_i)² ∂²g/∂X_i²    (2.19)

where the partial derivatives are evaluated at X̄. The partial derivatives constitute sensitivity coefficients and either increase or decrease the contribution of each term, depending on the importance of each variable to the derived variable, thereby acting as an in-built sensitivity analysis.

The second order approximation provides reasonable mathematical ease for moment analysis. A third or higher order approximation would give more accurate results (Tukey, 1954), but the mathematical complexities that are involved when treating statistical dependencies prohibit their use.

The moments of a derived variable can be evaluated using the truncated Taylor series expansion with the definition of moments (Siddall, 1972). Then, the first four moments of the derived variable are,

    E[Y] = E[g(X)]    (2.20)
    μ2(Y) = E[(Y - E[Y])²]    (2.21)
    μ3(Y) = E[(Y - E[Y])³]    (2.22)
    μ4(Y) = E[(Y - E[Y])⁴]    (2.23)

To evaluate accurate first four moments, correlations between primary variables have to be treated. The standard approach to treat correlations in moment analysis is by expanding the above equations (Ang and Tang, 1975). This approach can include the linear correlations easily only in the approximation for the first two moments. A variable transformation approach that can include the linear correlations in the higher order moments of the derived variable is used in the development of this framework.

Assumption 2.4 : The correlations between primary variables are linear.

Generally, when the correlations among primary variables are treated it is the linear correlations. If all the variables in the system are normally distributed then the linear correlations between variables are the true correlations. In general, the primary variables which describe a work package are not normally distributed. Consequently, one is faced with the prospect of non-linear correlations. Obtaining non-linear correlations or treating non-linear correlation in a multivariate situation are still complex and largely unresolved theoretical issues. Most four moment methods (Jackson, 1982; Siddall, 1972) avoid the treatment of correlations; their treatment is important, however, if one wishes to establish an accurate measurement of risk (Perry and Hayes, 1985b; Cooper and Chapman, 1987) and a realistic estimate of bounds.

2.4.2 Variable Transformation Method

A set of correlated variables is transformed to a set of uncorrelated variables having mean values and unit variances by,

    Z = L⁻¹ D⁻¹ X    (2.24)

where X is the vector of correlated random variables, X = [X1, ..., Xn]^T; Z is the vector of transformed variables with unit covariance matrix; L⁻¹ is the inverse of the lower triangular matrix obtained from the Cholesky decomposition of the correlation matrix R (= L L^T); and D⁻¹ is the inverse of the diagonal matrix of standard deviations of the X vector (D = diag{σ1, ..., σn}).
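A numerical illustration of the transformation, assuming numpy and using, for concreteness, the correlation matrix and standard deviations of Example 1 in section 2.6.1 (a sketch, not the thesis code):

    import numpy as np

    def decorrelate(X_samples, R, sigmas):
        # Z = L^-1 D^-1 X, equation (2.24); X_samples has one variable per row
        D_inv = np.diag(1.0 / np.asarray(sigmas))
        L = np.linalg.cholesky(R)                    # R = L L^T
        return np.linalg.inv(L) @ D_inv @ X_samples

    # empirical check that the transformed variables have unit covariance
    rng = np.random.default_rng(0)
    R = np.array([[1.0, -0.5, 0.0], [-0.5, 1.0, -0.4], [0.0, -0.4, 1.0]])
    sig = np.array([0.08, 1.54, 6.5])
    C_X = np.outer(sig, sig) * R                     # covariance of X
    X = rng.multivariate_normal([0.375, 19.815, 87.075], C_X, 100000).T
    print(np.cov(decorrelate(X, R, sig)).round(2))   # approximately the identity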
Proof of the Transformation
Let X be a vector of correlated random variables with covariance matrix C_X and correlation matrix R. Let Z be the vector of transformed variables from equation (2.24) with covariance matrix C_Z. Then,

    Var[Z] = Var[L⁻¹ D⁻¹ X]    (2.25)

    C_Z = L⁻¹ D⁻¹ C_X [L⁻¹ D⁻¹]^T    (2.26)

Using the relationship R = D⁻¹ C_X D⁻¹ and the Cholesky decomposition of the correlation matrix R = L L^T,

    L⁻¹ D⁻¹ C_X D⁻¹ = L⁻¹ L L^T = L^T    (2.27)

Similarly,

    L⁻¹ D⁻¹ C_X D⁻¹ [L^T]⁻¹ = L^T [L^T]⁻¹ = I    (2.28)

Since [L⁻¹]^T = [L^T]⁻¹ and [D⁻¹]^T = D⁻¹ because D⁻¹ is symmetric,

    L⁻¹ D⁻¹ C_X [L⁻¹ D⁻¹]^T = I    (2.29)

From equations (2.26) and (2.29),

    C_Z = I    (2.30)

Therefore, the transformed variables are uncorrelated with unit variances.

Even with assumption (2.4) it is not possible to prove that the variable transformation precludes the existence of non-linear correlations amongst the transformed variables. This has implications for the terms treated in approximating the fourth central moment (see section 2.4.5, chapter four and Appendix A). A similar transformation to obtain a set of standard variates with zero means and unit covariance matrix from a set of correlated variables was used by Der Kiureghian and Liu (1986) for applications in structural reliability.

2.4.3 Moments of the Uncorrelated Variables

Since the transformation given by equation (2.24) is linear, the first four moments of the transformed uncorrelated variables can be evaluated directly from the moments of the correlated primary variables. Then, the first four moments of a transformed uncorrelated primary variable are,

    E[Z_i] = Σ_{j=1}^{n} A_ij E[X_j]    (2.31)

    μ2(Z_i) = Σ_{j=1}^{n} A_ij² μ2(X_j) + 2 Σ_{j=1}^{n} Σ_{k=j+1}^{n} A_ij A_ik cov(X_j, X_k)    (2.32)

    μ3(Z_i) ≈ Σ_{j=1}^{n} A_ij³ μ3(X_j)    (2.33)

    μ4(Z_i) ≈ Σ_{j=1}^{n} A_ij⁴ μ4(X_j)    (2.34)

where A = L⁻¹ D⁻¹ and E[X_j], μ2(X_j), μ3(X_j), μ4(X_j) are the first four moments of the jth correlated primary variable in the X vector.

2.4.4 The Function

To use the moments of the transformed uncorrelated primary variables to evaluate the first four moments of the derived variable Y, the function g(X) has to be transformed to the uncorrelated space. This transformation is done from,

    X = D L Z    (2.35)

Then the transformed function is Y = G(Z). If the original function g(X) was complicated, this transformation increases the complexity, as each variable in the X vector is replaced by a linear combination of variables from the Z vector. However, since this transformation is linear and in practice the replacement will be done by the computer, the increased complexity of the transformed function is not apparent to the user.
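Equations (2.31) to (2.35) translate directly into code. The sketch below (the author's illustration, assuming numpy; not the thesis program) evaluates the moments of the transformed variables and wraps an arbitrary function g as its transformed counterpart G:

    import numpy as np

    def moments_of_Z(A, EX, mu2, mu3, mu4, C_X):
        # A = L^-1 D^-1; EX, mu2, mu3, mu4 are arrays of moments of X
        EZ = A @ EX                                # equation (2.31)
        m2 = np.array([a @ C_X @ a for a in A])    # equation (2.32); equals 1
        m3 = (A ** 3) @ mu3                        # equation (2.33)
        m4 = (A ** 4) @ mu4                        # equation (2.34)
        return EZ, m2, m3, m4

    def transformed_function(g, D, L):
        # G(Z) = g(D L Z), equation (2.35); the substitution is numerical
        DL = D @ L
        return lambda Z: g(DL @ Z)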
2.4.5 The First Four Moments

The first four moments of the derived variable are now evaluated using the transformed function, G(Z), as the function for the derived variable. The moment analysis considers the terms involving up to the fourth order because moment information is available up to the fourth order. The cross moment terms for which information is not available are neglected (see Appendix A for derivations). The approximation for the expected value is,

    E[Y] ≈ G(Z̄) + (1/2) Σ_{i=1}^{n} (∂²G/∂Z_i²) μ2(Z_i)    (2.36)

the approximation for the second central moment is,

    μ2(Y) ≈ Σ_{i=1}^{n} (∂G/∂Z_i)² μ2(Z_i) + Σ_{i=1}^{n} (∂G/∂Z_i)(∂²G/∂Z_i²) μ3(Z_i) + (1/4) Σ_{i=1}^{n} (∂²G/∂Z_i²)² [μ4(Z_i) - (μ2(Z_i))²]    (2.37)

the approximation for the third central moment is,

    μ3(Y) ≈ Σ_{i=1}^{n} (∂G/∂Z_i)³ μ3(Z_i) + (3/2) Σ_{i=1}^{n} (∂G/∂Z_i)² (∂²G/∂Z_i²) [μ4(Z_i) - (μ2(Z_i))²]    (2.38)

and the approximation for the fourth central moment is,

    μ4(Y) ≈ Σ_{i=1}^{n} (∂G/∂Z_i)⁴ μ4(Z_i)    (2.39)

where Z is the vector of transformed uncorrelated variables and G(Z) is the transformed function for the derived variable. The first (∂G/∂Z_i) and second (∂²G/∂Z_i²) partial derivatives with respect to the transformed variables are evaluated numerically (Howard, 1971).

The sole purpose of approximating the third and fourth central moments of the derived variable is to approximate a cumulative distribution function for it. In the approximation for the fourth central moment it is evident that only the first term of the expansion is used (see Appendix A.5). A second fourth order term, which cannot be evaluated except for the case of statistical independence and the special case when there are no non-linear correlations between transformed variables, has been ignored. The underestimation of the fourth central moment can create a problem, however, because one may not be able to fit a valid Pearson distribution to the derived variable. A valid distribution requires that the relation β2 - β1 - 1 > 0 be satisfied (Johnson et al., 1963).

2.5 Cumulative Distribution Function

The approximated central moments are then used to evaluate the shape characteristics, skewness (√β1) and kurtosis (β2), for the derived variable from,

    √β1 = μ3(Y) / [μ2(Y)]^(3/2)    (2.40)

    β2 = μ4(Y) / [μ2(Y)]²    (2.41)

where μ2(Y), μ3(Y) and μ4(Y) are the approximated central moments for the derived variable. A cumulative distribution function for the derived variable is approximated from the Pearson family of distributions (assumption 2.1) using the method suggested by Johnson et al. (1963). The approximated Pearson distribution is the one which corresponds most closely to the shape characteristics of the derived variable. The cumulative distribution function is the quantification of the uncertainty associated with the derived variable. The computation from equations (2.36) to (2.41) is sketched below.
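The sketch (the author's illustration, assuming numpy; not the thesis program) implements equations (2.36) to (2.41), with central finite differences standing in for the numerical differentiation of Howard (1971):

    import numpy as np

    def derived_moments(G, EZ, m2, m3, m4, h=1e-4):
        EZ = np.asarray(EZ, dtype=float)
        n = len(EZ)
        g0 = G(EZ)
        d1, d2 = np.empty(n), np.empty(n)
        for i in range(n):
            e = np.zeros(n); e[i] = h
            gp, gm = G(EZ + e), G(EZ - e)
            d1[i] = (gp - gm) / (2 * h)              # dG/dZ_i
            d2[i] = (gp - 2 * g0 + gm) / h ** 2      # d2G/dZ_i^2
        EY = g0 + 0.5 * np.sum(d2 * m2)                                 # (2.36)
        mu2Y = (np.sum(d1 ** 2 * m2) + np.sum(d1 * d2 * m3)
                + 0.25 * np.sum(d2 ** 2 * (m4 - m2 ** 2)))              # (2.37)
        mu3Y = (np.sum(d1 ** 3 * m3)
                + 1.5 * np.sum(d1 ** 2 * d2 * (m4 - m2 ** 2)))          # (2.38)
        mu4Y = np.sum(d1 ** 4 * m4)                                     # (2.39)
        skew = mu3Y / mu2Y ** 1.5                                       # (2.40)
        kurt = mu4Y / mu2Y ** 2                                         # (2.41)
        return EY, mu2Y, mu3Y, mu4Y, skew, kurt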
2.6 Application of the Framework

Three examples are presented to demonstrate the application of the framework. The first is a numerical example for a real construction activity. For the second example, results for the breakeven analysis problem from the framework are compared to those reported by Kottas and Lau (1978). The third is a linear function of the primary variables in the breakeven problem. It is used to highlight some of the reasons for the differences between exact moments and those approximated by the framework.

2.6.1 Example 1 : Activity Duration

The duration to fly form a typical slab of 3000 ft² in a single suite per floor high-rise is considered as the derived variable for the numerical example to demonstrate the risk measurement framework. The duration can be estimated from the decomposed relationship given by,

    T = A + Q / (P L)    (2.42)

where T is the duration to fly form a typical slab in days, Q is the estimated quantity in ft², P is the estimated labour productivity in ft²/manhour once the fly forms are placed, L is the estimated labour usage in manhours/day and A is the time required to fly the forms in days. Other authors (Jaafari, 1984; Hendrickson, 1987) have used decomposed relations to compute the activity duration. While the quantity is deterministic, the other three variables are considered as random. Then, equation (2.42) can be re-written as,

    T = X1 + Q / (X2 X3) = g(X)    (2.43)

The subjective percentile estimates for the random variables and the positive definite correlation matrix (R) elicited from an experienced engineer are given below.

Table 2.1: Subjective Percentile Estimates for A, P and L

    Variable      5%     25%    50%     75%    95%
    A (days)      0.25   0.33   0.375   0.42   0.5
    P (ft²/mh)    17.0   19.0   20.0    21.0   22.0
    L (mh/day)    75.0   83.0   88.0    92.0   96.0

        [  1.0   -0.5    0.0 ]
    R = [ -0.5    1.0   -0.4 ]
        [  0.0   -0.4    1.0 ]

The negative correlation between X1 and X2 suggests that the greater the productivity, probably the greater the efficiency of flying the forms, and vice versa. The negative correlation between X2 and X3 implies that the smaller the crew the greater the productivity (minimum congestion, all crew members visible and not able to hide).

The expected values, standard deviations, and shape characteristics of the approximated Pearson type distributions for the random variables from equations (2.5) to (2.16) are given in Table 2.2.

Table 2.2: Statistics for the Random Variables

    Variable    Expected Value    Standard Deviation    √β1     β2
    A (X1)      0.375             0.08                   0.0    9.0
    P (X2)      19.815            1.54                  -0.8    4.1
    L (X3)      87.075            6.5                   -0.7    3.3

The diagonal matrix of standard deviations (D) and the lower triangular matrix from the Cholesky decomposition of the correlation matrix (R = L L^T) are,

        [ 0.08   0.0    0.0 ]        [  1.0     0.0        0.0     ]
    D = [ 0.0    1.54   0.0 ]    L = [ -0.5     0.866      0.0     ]
        [ 0.0    0.0    6.5 ]        [  0.0    -0.46188    0.88694 ]

The transformed function G(Z) for the duration to fly form a typical slab is obtained using the above in equation (2.35). Z is the vector of transformed uncorrelated variables. The first four moments of the transformed uncorrelated variables, and the first (∂G/∂Z_i) and second (∂²G/∂Z_i²) partial derivatives with respect to the transformed variables, which are evaluated numerically (Howard, 1971), are given in Table 2.3.

Table 2.3: First Four Moments and Partial Derivatives of Transformed Variables

    Variable    E[Z_i]      μ2(Z_i)    μ3(Z_i)     μ4(Z_i)    ∂G/∂Z_i     ∂²G/∂Z_i²
    Z1          4.68177     1.0         0.0        9.0         0.14764    0.00525
    Z2          17.56525    1.0        -1.23168    8.28889    -0.05705    0.01181
    Z3          24.25121    1.0        -1.17720    5.94210    -0.11515    0.01525

The expected value, standard deviation, skewness and kurtosis respectively for the duration to fly form a typical slab from equations (2.36) to (2.41) are 2.13 days, 0.2045 days, 0.622 and 3.095.
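For Example 1 the whole chain can be assembled from the sketches introduced earlier in this chapter (again an illustration, not the thesis software):

    import numpy as np

    # data from Tables 2.1 and 2.2; Q = 3000 ft2 is deterministic
    EX = np.array([0.375, 19.815, 87.075])
    sig = np.array([0.08, 1.54, 6.5])
    sb1 = np.array([0.0, -0.8, -0.7])
    b2 = np.array([9.0, 4.1, 3.3])
    R = np.array([[1.0, -0.5, 0.0], [-0.5, 1.0, -0.4], [0.0, -0.4, 1.0]])

    mu2, mu3, mu4 = sig**2, sb1 * sig**3, b2 * sig**4   # equations (2.14)-(2.16)
    C_X = np.outer(sig, sig) * R
    D, L = np.diag(sig), np.linalg.cholesky(R)
    A = np.linalg.inv(L) @ np.linalg.inv(D)

    def g(X):                                           # equation (2.43)
        return X[0] + 3000.0 / (X[1] * X[2])

    G = transformed_function(g, D, L)
    EZ, m2, m3, m4 = moments_of_Z(A, EX, mu2, mu3, mu4, C_X)
    print(derived_moments(G, EZ, m2, m3, m4))
    # expected value about 2.13 days, standard deviation about 0.20 days,
    # skewness about 0.62 and kurtosis about 3.1, as in the text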
2.6.2 Example 2 : Stochastic Breakeven Analysis

The problem of breakeven (or cost-volume-profit) analysis under uncertainty has had considerable discussion in the management literature (Jaedicke and Robichek, 1964; Hilliard and Leitch, 1975; Starr and Tapiero, 1975; Kottas and Lau, 1978; Cooper and Chapman, 1987). Kottas and Lau (1978) used the breakeven analysis problem reported by Starr and Tapiero (1975) to present an "exact" four moment solution. Their solution to the breakeven equation given by,

    P(x) = (p - c) x - K    (2.44)

where p is the unit sale price; c is the unit variable cost; x is the sales volume; K is the fixed cost; and P(x) is the profit realized; was shown to be superior to that given by Starr and Tapiero (1975) using Chebyshev's Inequality.

The framework is applied to the same numerical example used by Kottas and Lau (1978). In the numerical example p, c, x and K were assumed to be normally distributed with expected values, standard deviations and correlation coefficients as shown below.

    μp = 1000;    μc = 600;     μx = 1000;    μK = 250000;
    σp = 100;     σc = 60;      σx = 200;     σK = 20000;
    ρpc = 0.3;    ρpx = -0.4;   ρcx = -0.2;   ρKx = 0.2;

Since all the primary variables are normally distributed, there are no non-linear correlations between the transformed variables (see equations A.9, A.15 and A.20). Table 2.4 shows the comparison of the moments for P(x) approximated by the framework with those computed by Kottas and Lau (1978). Figure (2.4) shows the Pearson distributions approximated by Kottas and Lau (1978) and by the framework.

Table 2.4: Comparison of Moments and Shape Characteristics

               Kottas and Lau     Framework         Difference
    E[P(x)]    144,400            144,400.68        0%
    μ2(P)      1.2335 × 10^10     1.2111 × 10^10    -1.81%
    μ3(P)      4.3964 × 10^14     4.5454 × 10^14    3.39%
    μ4(P)      4.5895 × 10^20     4.5135 × 10^20    -1.66%
    σP         111,063            110,048           -0.91%
    √β1        0.3209             0.3411            6.29%
    β2         3.0164             3.0775            2.02%

[Figure 2.4: Approximated Pearson Type Distributions for P(x).]

Kottas and Lau (1978) compared estimates from their approach to (1) what is the probability of at least breaking even? and (2) what is the probability of realizing more than the expected profit of $144,400? with those from Chebyshev's Inequality and a simulation with a sample size of 50,000. Table 2.5 shows the comparison of those results with that from the framework.

Table 2.5: Comparison of Estimation Approaches

                             Starr and Tapiero    Kottas and Lau    Simulation (n = 50,000)    Framework
    Prob.[P(x) > 0]          ≥ 41%                90.9%             91.2%                      91.2%
    Prob.[P(x) > 144,400]    ≥ 0%                 47.9%             47.7%                      47.9%

The comparison of the approximated moments to the exact moments, and of the estimation approaches, shows that the proposed framework is robust.
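The simulation column of Table 2.5 is easy to reproduce approximately. A direct Monte Carlo cross-check of equation (2.44), assuming numpy and sampling the four normal variables with the stated correlations (the unstated correlations ρpK and ρcK are taken here as zero), is sketched below:

    import numpy as np

    rng = np.random.default_rng(1)
    mu = np.array([1000.0, 600.0, 1000.0, 250000.0])     # p, c, x, K
    sig = np.array([100.0, 60.0, 200.0, 20000.0])
    R = np.array([[ 1.0,  0.3, -0.4, 0.0],
                  [ 0.3,  1.0, -0.2, 0.0],
                  [-0.4, -0.2,  1.0, 0.2],
                  [ 0.0,  0.0,  0.2, 1.0]])
    C = np.outer(sig, sig) * R
    p, c, x, K = rng.multivariate_normal(mu, C, 50000).T
    P = (p - c) * x - K                                  # equation (2.44)
    print((P > 0).mean(), (P > 144400).mean())           # about 0.91 and 0.48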
The next example is presented to highlight some of the reasons for the underestimation of the moments.

2.6.3 Example 3 : Linear Function

Assume a linear functional form given by,

    Y = p + c + x + K    (2.45)

where p, c, x and K are the same variables as in the previous example. In addition, assume that all of the variables are uncorrelated. Since all of the primary variables are normally distributed, they are now statistically independent. Hence, Y is also normally distributed and its exact moments can be computed. Table 2.6 shows the exact moments of Y and those approximated by the framework.

Table 2.6: Comparison of Moments and Shape Characteristics

             Exact             Framework         Difference
    E[Y]     252,600           252,600           0%
    μ2(Y)    400053600         400004963         -0.012%
    μ3(Y)    0                 0                 0%
    μ4(Y)    4.8012 × 10^17    4.7988 × 10^17    -0.050%
    √β1      0.0               0.0               0%
    β2       3.0               2.9992            -0.027%

The variance of Y is underestimated due to numerical differentiation. The first and second partial derivatives in equations (2.36) to (2.39) are computed numerically to provide for generality of the function. Table 2.7 gives the first partial derivatives of Y with respect to the transformed variables. The second partial derivatives of Y with respect to the transformed variables are zero because the transformed functional form is also linear.

Table 2.7: Comparison of the First Partial Derivatives of Y

               Exact    Framework      Difference
    ∂Y/∂Z1     100      99.99392       0.006%
    ∂Y/∂Z2     60       59.99635       0.006%
    ∂Y/∂Z3     200      199.98784      0.006%
    ∂Y/∂Z4     20000    19998.79419    0.006%

Hence, from equation (2.37),

    μ2(Y)_exact = 100² × 1 + 60² × 1 + 200² × 1 + 20000² × 1 = 400053600

    μ2(Y)_framework = 99.99392² × 1 + 59.99635² × 1 + 199.98784² × 1 + 19998.79419² × 1 = 400004963

When the primary variables are statistically independent, equation (2.34) should be,

    μ4(Z_i) = Σ_{j=1}^{n} A_ij⁴ μ4(X_j) + 6 Σ_{j=1}^{n} Σ_{k=j+1}^{n} A_ij² A_ik² μ2(X_j) μ2(X_k)    (2.46)

and equation (2.39) should be,

    μ4(Y) = Σ_{i=1}^{n} (∂Y/∂Z_i)⁴ μ4(Z_i) + 6 Σ_{i=1}^{n} Σ_{l=i+1}^{n} (∂Y/∂Z_i)² (∂Y/∂Z_l)² μ2(Z_i) μ2(Z_l)    (2.47)

For generality, the approximation for μ4(Y) is based on the assumption that the transformed variables will only be uncorrelated. This assumption is reasonable because statistical independence will occur only when all the primary variables are normally distributed. Hence, it is evident that the fourth central moment for the derived variable from the framework will always be an approximation. Table 2.8 gives the first four moments and shape characteristics for Y when exact, those approximated by the framework in general, and those corrected for this example by using equations (2.46) and (2.47) instead of (2.34) and (2.39).

Table 2.8: Comparison of Moments and Shape Characteristics

             Exact             Framework         Corrected
    E[Y]     252,600           252,600           252,600
    μ2(Y)    400053600         400004963         400004963
    μ3(Y)    0                 0                 0
    μ4(Y)    4.8012 × 10^17    4.7988 × 10^17    4.8001 × 10^17
    √β1      0.0               0.0               0.0
    β2       3.0               2.9992            3.0

2.7 Summary

The proposed framework requires: a functional relationship, g(X), between the derived variable and its primary variables; approximation of the first four moments of a primary variable from subjective estimates; approximation of the first four moments of the derived variable from moment analysis using a truncated second order Taylor series expansion of the transformed function and the moments of the transformed variables; evaluation of the shape characteristics of the derived variable; and approximation of the derived variable to a Pearson type distribution using its shape characteristics. The framework is suitable for systems where pre-determined functions are available, data limitations exist and the decisions are not based on extreme probabilities. The results from the application to the stochastic breakeven problem show that the framework is accurate.

The use of a truncated Taylor series expansion of the system function for moment analysis (Ang and Tang, 1975; Benjamin and Cornell, 1970; Jackson, 1982; Siddall, 1972; Smith, 1971) or the four moment approach for the quantification of uncertainty of a derived variable (Kottas and Lau, 1980, 1982; Siddall, 1972; Jackson, 1982) are not unique. The method to approximate the first four moments of a primary variable from subjective probabilities and the variable transformation method to treat correlations between primary variables in the approximation of the first four moments of the derived variable are unique to this framework. The use of subjective probabilities recognizes the lack of input data for most risk analyses performed during the feasibility stage. The variable transformation method permits the inclusion of correlation information in the approximations for the higher order moments of the derived variable, which is neglected by the standard approach for moment analysis (see section 4.2 and Appendix A).

In the context of the time and economic feasibility of an engineering project, all of the decision and performance parameters have well defined functional forms (even though the functions for derived variables at the work package/revenue stream level can change from analyst to analyst) and significant data limitations exist.
In addition, strategic decisions such as contingencies and tolerances for those parameters rarely require probability values beyond the 90th percentile. Therefore, the framework becomes the foundation for the proposed method.

The practical advantages of the framework are the rigor it imparts on the analysis process and the formalized procedure it imparts upon the participants. The analysts and the experts are forced to consider that the inputs are random and to structure their thinking in terms of range estimates. Hence, it quickly becomes apparent which primary variables are the major contributors to the uncertainty of the derived variable.

Chapter 3

Elicitation of Subjective Probabilities

3.1 General

The framework developed in the previous chapter to quantify the uncertainty of a derived variable is based on the assumption that experts can provide estimates for percentile values of their subjective prior probability distributions for primary variables in construction estimation. This is the measurement of the experts' belief about the uncertainty of primary variables. For the measured belief to be useful in the quantification of uncertainty of the derived variable it has to be accurate, calibrated and coherent, and it must also be converted to moments.

While the work described in this chapter is not conclusive, it provides a foundation for obtaining the input data necessary to make the analytical method a practical tool for engineering construction. Also, it should be seen as a vital step towards standardizing and computerizing, to the extent possible, the elicitation of expert input dealing with uncertainty. Consequently, this chapter achieves the secondary research objective identified in chapter one of this thesis. Developed in this chapter are an approach to elicit the desired percentile values of an expert's subjective prior probability distributions for variables in engineering construction and a method to ensure the elicited subjective probabilities are coherent and useful in the quantification of uncertainty of the derived variable.

3.2 Subjective Probabilities

After the detailed work of DeFinetti (1970) and Savage (1954), the use of subjective probabilities - the degree of belief in the occurrence of an event attributed by a given person at a given instant and a given set of information - is considered a quantification of uncertainty, because it represents the extent to which the person believes a statement is true, based on the information available to him at that time (Hampton et al., 1973). Subjective probabilities are generally elicited for use in Bayesian decision analysis. Lindley et al. (1979) state that to use subjective assessments in decision analysis they have to be accurate, calibrated and coherent. A person is calibrated if, for all events assigned a probability q, the proportion that actually occur is in fact equal (or close) to q (Budescu and Wallsten, 1987). A set of subjective probabilities is coherent if they are compatible with the probability axioms. Coherence is essential if the assessments are to be manipulated according to probabilistic laws (Lindley et al., 1979).

Wallsten and Budescu (1983) argue that it is not necessary for encodings to obey the axioms of additive probability theory in order to be valid measures of belief. Such conformity is necessary only if the user of the judgements wants to treat them as additive probability measures.
Wright and Ayton (1987) were surprised by the lack of significant relations between coherence and forecasting performance (i.e. calibration), because the two ways of assessing the adequacy of a forecaster are logically interrelated. They state that if a forecaster is incoherent he cannot be well calibrated, but it does not follow that coherence necessarily produces good calibration.

A review of the subjective probability literature shows that it can be classified into three broad categories, namely, theoretical, review and empirical. The theoretical literature can be further divided into axioms on subjective probabilities (DeFinetti, 1970; DeGroot, 1970, 1975, 1979; French, 1980, 1982; Lindley, 1982; Lindley et al., 1979; Pratt et al., 1964; Savage, 1954, 1971; Suppes, 1975), assessment and consensus of subjective probabilities (Ashton and Ashton, 1985; Bacharach, 1975; Bordley, 1982; Bordley and Wolff, 1981; Diaconis and Ylvisaker, 1985; Dickey, 1979; Dickey and Chen, 1985; Dickey et al., 1986; French, 1985; Holt, 1986; Press, 1979; Winkler, 1986b) and expert resolution (Ashton, 1986; Clemen, 1986; Einhorn, 1972; French, 1980, 1986; Lindley, 1986; Lock, 1987; Morris, 1974, 1977, 1983, 1986; Schervish, 1986; Winkler, 1981, 1986a). The review literature on subjective probabilities includes (Beach, 1975; Beach et al., 1987; Budescu and Wallsten, 1987; Bunn, 1979a, 1979b; Chesley, 1975; Christensen-Szalanski and Beach, 1984; Cooper and Chapman, 1987; Green, 1967; Hampton et al., 1973; Hogarth, 1975; Huber, 1974; Ludke et al., 1977; Moore, 1977; Morrison, 1967; Phillips, 1987; Wallsten and Budescu, 1983; Winkler, 1983; Wright and Ayton, 1987), while the empirical studies include (Bunn, 1975; Gustafson et al., 1973; Hull, 1978; Milkovich et al., 1972; Murphy and Winkler, 1971a, 1971b, 1975, 1984; Murphy and Daan, 1984; Murphy et al., 1985; Pratt and Schlaifer, 1985; Press, 1985; Seaver, 1977; Seaver et al., 1978; Smith, 1967; Spetzler and Stael von Holstein, 1975; Stael von Holstein, 1971, 1972; Tversky and Kahneman, 1984; Winkler, 1967a, 1967b, 1968, 1971).

The literature review shows that theoretical investigations on the topic of subjective probabilities have currently far outstripped the empirical studies. Christensen-Szalanski and Beach (1984), after reviewing over 3500 abstracts of articles on probability judgements and decision making, found only 84 (2.4%) empirical studies. This is unfortunate because the available guidance for the elicitation of subjective probabilities is not on a par with the theoretical analyses. Nevertheless, proven techniques from other fields are used for the development of the elicitation approach described herein.

3.3 Definitions and Assumptions

In this thesis, the terms analyst and expert are used throughout. This section defines these terms and states the assumptions that are central to the development of the elicitation approach.

Analyst
Analyst refers to the individual (or group of individuals) within the firm responsible for conducting economic and financial feasibility, scheduling and cost analyses. He is the key person in the elicitation approach because he must elicit from the expert his belief about the uncertainty of variables as subjective probabilities. To achieve this, the analyst must know the problem and the concepts in subjective probabilities, and be able to build a rapport with the expert.
Expert
Expert refers to individuals both within and external to the firm who provide key input dealing with economic, revenue, cost, financial, productivity and schedule information; an expert is a person who in his area has some degree of training, experience and/or knowledge significantly greater than that in the general population (Wallsten and Budescu, 1983). These experts are drawn from the fields of economics, finance, design, construction and so forth, when the analyst believes that they possess the most relevant knowledge and information regarding the uncertainty of a primary variable. In general, they are substantive experts, who in a given domain assess events in their field of expertise (Wallsten and Budescu, 1983).

Based on the literature reviewed, three assumptions that are central for the elicitation of subjective probabilities are stated.

Assumption 3.1 : The experts involved with engineering projects are calibrated.

Budescu and Wallsten (1987) state that calibration is the most important criterion for an expert because it directly compares his performance with empirical reality, and while experienced experts are highly calibrated, calibration can be further improved with training. The calibration curve as shown in figure (3.1) is a bivariate plot of the proportion of events occurring versus the expert's probability assigned to the events. It is linear with unit slope and zero intercept for a "perfectly" calibrated expert (Murphy and Winkler, 1984). Phillips (1987) states that the calibration of assessments is usually better for future events made by experts in a group when training and feedback are available. Past studies show that when experts are required to encode subjective probabilities within their area of competence, they can be exceedingly well calibrated (Wallsten and Budescu, 1983).

[Figure 3.1: Calibration Curve (proportion of events occurring plotted against assessed probability).]

Assumption 3.2 : Interaction between the analyst and the expert is an essential part of the process.

The main reason for the interaction between the analyst and the expert is to avoid serious misunderstandings and biases. Spetzler and Stael von Holstein (1975) state that even subjects who are well trained in probability or statistics, when having to assign a probability distribution without the help of an analyst, often provide poor assignments. Past studies show that interaction is useful (Chesley, 1975; Cooper and Chapman, 1987; Huber, 1974; Hull, 1980; Spetzler and Stael von Holstein, 1975), especially when experts lack experience in providing subjective probabilities.

However, the interaction hinders the practicality of the framework. Firstly, it makes the implicit assumption of an additional person. Secondly, it discourages self-elicitation. Thirdly, every problem may not justify the time and cost associated with the interaction during the elicitation. Spetzler and Stael von Holstein (1975) state
Due to the need to obtain assessments for a large number of variables required for engineering risk analysis, it is necessary to standardize and computerize the elicitation approach, to the extent possible. While Spetzler and Stael von Holstein (1975) and Chapman and Cooper (1987) assert that the role of the computer should be minimized in the elicitation process, Wallsten and Budescu (1983) recommend the study of unaided judgements, because of their applied interest, and because only through studying unaided judgements can the benefits of interaction be determined. Those stages of the approach that can be standardized and computerized to increase the efficiency of the process are explicitly identified in this chapter. Assumption 3.3 : Questions based on those from previous non-construction related applications and studies will elicit accurate subjective percentiles for the construction context. The developments in this research are restricted to the measurement of an ex-pert's belief as percentiles of subjective prior probability distributions. In developing the elicitation approach, many proven techniques are used to compensate for the lack of experience in formal elicitation of subjective probabilities in engineering construc-tion. The elicitation of accurate and calibrated subjective probabilities involves three phases - pre-elicitation, elicitation and feedback. Chapter 3. Elicitation of Subjective Probabilities 57 3 . 4 Pre-Elicitation Stage The objective of pre-elicitation is for the analyst to train the expert in the task of quantifying his belief as subjective probabilities. It is done in the three phase approach of motivating, structuring and conditioning developed by Spetzler and Stael von Holstein (1975). 3.4.1 Motivating Phase The motivating phase has two purposes. The first is to build a rapport with the expert by introducing him to the elicitation task. The second is to explore whether any motivational biases might operate (Spetzler and Stael von Holstein, 1975). Introduce the expert to the elicitation task The analyst attempts to build a rapport with the expert by giving an explanation on the importance and purpose of probabihty encoding. This is useful in motivating the expert to become fully involved in the eh citation task (Spetzler and Stael von Holstein, 1975). The need for subjective probabilities in engineering construction, because the variables represent predictions of future events, is emphasized. As most engineers prefer to make deterministic predictions (and then add safety factors for the uncertainty), the difference between deterministic and probabilistic prediction is explained. This discussion is helpful when the expert is asked to respond to proba-bilistic questions during the elicitation stage. Explore whether any motivational biases are operating Motivational biases are defined as either conscious or subconscious adjustments in expert's responses motivated by his perceived system of personal rewards for various responses (Spetzler and Stael von Holstein, 1975). The analyst points out to the Chapter 3. Elicitation of Subjective Probabilities 58 expert that there is no commitment (firm projection) inherent in a probability as-sessment and that the only aim is to elicit a probability distribution that represents belief of the expert about the uncertain variable. 3.4.2 Structuring Phase The structuring phase concerns the uncertain variable. It also has two purposes. 
Define the uncertain variable The uncertain variable is clearly denned in terms of the structure of the problem. The definition includes relevant units for the variable. The importance of the variable to the decision problem is explained to demonstrate the relevance of the elicitation process to gain the expert's full cooperation. Such cooperation is essential for a successful elicitation (Cooper and Chapman, 1987; Huber, 1974; Hull, 1980; Spetzler and Stael von Holstein, 1975). Expert is asked to think the variable through Having denned the uncertain variable the expert is then asked to think the variable through carefully. This enables the analyst to find out what background information is relevant to the elicitation process. If relevant historical data are available it is used in the discussion. The meanings of any descriptive terms (such as highest and lowest or shortest and longest) used in the questionnaire are explained. Winkler (1967a) observed that when subjects are asked for a shape of their subjective distribution many try to associate it to a normal distribution. The author's experience is similar, some experts believing that percentiles should be symmetric to be consistent. The expert is made aware of this common mistake, so that features like skewness will be considered during the eh citation. Chapter 3. Elicitation of Subjective Probabilities 59 3.4.3 Conditioning Phase The aim of this phase is to condition the expert to think fundamentally about his judgements and to avoid cognitive biases. Cognitive biases are defined as either con-scious or subconscious adjustments in the expert's responses that are systematically introduced by the way the expert intellectually processes his perceptions (Spetzler and Stael von Holstein, 1975). For example, a response may be biased towards the most recent piece of information simply because the information is the easiest to re-call. Spetzler and Stael von Holstein (1975) state that cognitive biases depend on the expert's "modes of judgement". F i n d out h o w the expert makes p r o b a b i l i t y assignments The analyst tries to discover what "mode of judgement" the expert might be using to make probabihty assessments and then adapts the interview to minimize possible biases. Spetzler and Stael von Holstein (1975) define five "modes of judgement" and how each might operate in producing bias, based on the work by Tversky and Kah-neman (1984). 1. Availability: Probabihty is based on the ease with which relevant information is recalled or visualized. This occurs when recent information or information that made a strong impression at the time it was first presented is given more weight than old information. While availability as a mode of judgement can produce biases due to retrievability of instances or imaginability, it can also be introduced deliberately by the analyst to help compensate for an expert's bias. If the analyst believes that the expert has a central bias, he asks the expert to make up scenarios for extreme out-comes, which become more available and help counteract the central bias. 2. Adjustment and Anchoring : The initial response in an interview often serves as a basis for later responses, especially when the first question concerns a .likely value Chapter 3. Elicitation of Subjective Probabilities 60 for the uncertain variable. Most often experts' adjustment from such a basis is in-sufficient. 
Thus, anchoring occurs from a failure to process information about other points on the distribution independently from the point under consideration.

3. Representativeness : The probability of an event is evaluated according to the degree to which it is considered representative of, or similar to, some specific major characteristics of the process from which it originated (i.e. probability judgements are reduced to judgements of similarity). When this mode is operating there is a strong tendency to place more confidence in a single piece of information that is considered representative than in a larger body of more generalized information.

4. Unstated Assumptions : The expert's responses are conditional on various unstated assumptions. Since the expert cannot be held responsible for taking into account all possible eventualities that may affect the variable, the analyst states the assumptions he is making about the uncertain variable. Once these are identified the experts can assign their probabilities.

5. Coherence : People tend to assign probabilities based on the ease with which they can fabricate a plausible scenario that would lead to an outcome. Therefore any discussion of scenarios leading to possible outcomes for an uncertain variable should be well balanced, since the relative coherence of various arguments can have an effect on the probability assignment.

Be alert for biases symptomatic of modes of judgements
Asking the expert to specify the most important bases for his judgement, and what information he is taking into account in making his estimates, will indicate possible biases symptomatic of the modes. The first often acts as an anchor and possibly leads to central bias, while the second will indicate what information is easily available. These observations are also used as checks when obtaining responses for subjective estimates.

3.5 Elicitation Stage

With the completion of the pre-elicitation stage the expert is ready to quantify subjectively his belief about the uncertain variable. The elicitation session is based on a questionnaire that elicits the desired percentiles of the subjective prior probability distribution for each uncertain variable. This section develops the questionnaire and describes how the elicitation session is conducted.

In developing the questionnaire, central bias (Bunn, 1975, 1979; Chesley, 1975; Hampton et al., 1973; Huber, 1974; Hull, 1978, 1980; Seaver et al., 1978; Spetzler and Stael von Holstein, 1975; Tversky and Kahneman, 1984; Wallsten and Budescu, 1983; Winkler, 1967a) and its effect on the elicitation of tail probabilities (the 5th and 95th percentiles) must be treated. Therefore, the questionnaire begins by establishing the extremes of the distribution (Budescu and Wallsten, 1987; Cooper and Chapman, 1987; Hull, 1980; Selvidge, 1975; Spetzler and Stael von Holstein, 1975). This prepares the expert to respond to questions on tail probabilities. The deliberate use of scenarios for extreme outcomes counteracts the effect of central bias that is otherwise likely to occur. Also, this has the overall effect of increasing the range of the assigned distribution for the uncertain variable (Hull, 1980).

Estimation of the time required to construct a floor slab is used as the example of an uncertain variable to demonstrate a sample questionnaire. Each question is followed by an explanatory comment. Duration assignments for the different percentiles are depicted in figure (3.2).

Question 1 : What in your opinion is the shortest possible duration to construct the floor slab, for which the probability is so small as to equal zero for practical purposes ? (Say the value is A)
Question 1 : What in your opinion is the shortest possible duration to construct the floor slab for which the probability is so small as to equal zero for practical purposes ? (say the value is vl) Chapter 3. Elicitation of Subjective Probabilities 62 Comment : The pre-elicitation stage would have clarified the terms used in the ques-tion and explained the range of scenarios the experts should consider in their quan-tification of judgements. Question 2 : So, A is in your opinion the shortest possible duration, is that correct ? Comment : A check to clarify the expert's thinking about the lower tail value of the distribution. Question 3 : If A in your opinion has a zero probability of not exceeding the actual duration, what is the duration which would not exceed a probability of 0.05 ? (Say the value is C ) Comment : Having established the point for zero probability the expert should be able to give a value for the 5th percentile. This value would be anchored to that of zero probability. However, the anchoring is the result of forcing the expert to think of extreme outcomes to counteract central bias. Question 4 : So, you associate a 1 in 20 chance that the actual duration will be less than C. Is that correct ? Comment : Here, odds are used to check the consistency of the elicited 5th percentile. This is helpful to verify the expert's thinking. If the expert confirms his estimate, go to Question 6, if not, ask Question 5. Question 5 : If not, what is the value for the actual duration that you consider to have a 1 in 20 chance of not being exceeded ? Comment : A follow up question to the consistency check attempted in Question 4. Question 6 : What in your opinion is the longest possible duration to construct the floor slab for which the probability is so large as to be equal to one for practical purposes ? (Say the value is Z) Chapter 3. Ehcitation of Subjective Probabilities 63 Comment : Going from one extreme to the other increases the range and would re-duce even more the possible effects of the central bias that may occur when the 25th and 75th percentiles are elicited after the median value. Question 7 : So, Z is in your opinion the longest possible duration, is that correct ? Comment : A check to clarify the expert's thinking about the upper value of the distribution. Question 8 : If Z in your opinion has a unit probabihty of not exceeding the actual duration, what is the duration which would not exceed a probabihty of 0.95 ? (Say the value is X) Comment : Same as for Question 3 Question 9 : So, you associate a 1 in 20 chance that the actual duration will be above X. Is that correct ? Comment : Again, odds are used to check the consistency of the elicited 95"1 per-centile. If the expert confirms his estimate, go to Question 11, if not, ask Question 10. Question 10 : If not, what is the value for the actual duration that you consider to have a 1 in 20 chance of being exceeded ? Comment : A follow up question to 9. Question 11 : What in your opinion is the value for actual duration such that it is equally likely to be above as it is to be below ? (Say the value is M) Comment : This question would elicit the median value of the expert's subjective probabihty distribution for duration to construct a floor slab. Question 12 : So, you are willing to bet equal odds that the actual duration is either Chapter 3. Elicitation of Subjective Probabilities 64 above or below M , is that correct ? Comment : A check to clarify the expert's response to the median. 
Question 13 : What is the value for duration that you feel will divide the region below M equally, thus it is just as likely that duration will fall below this value as it will be between this value and M ? (Say the value is L) Comment : The expert is asked to bisect the area below the median to give an esti-mate for his 25"1 percentile value. Question 14 : So, you associate a 1 in 4 chance that the actual duration will be below L, is that correct ? Comment : A consistency check to clarify that the expert is thinking about the 25th percentile with the bisected value. If the expert confirms his estimate, go to Question 16, if not ask Question 15. Question 15 : If not, what is the value for the actual duration that you consider to have a 1 in 4 chance of not being exceeded ? Comment : A follow up question to 14. Question 16 : Now, concentrate on the case where the duration could be above M, which you felt would be 50% of the time. What is the value that you feel will divide the region above M equally, thus it is just as likely that duration will be above this value as it will be between this value and M ? (Say the value is N) Comment : The expert is asked to bisect the area above the median to give, an esti-mate for his 75th percentile value. In addition the expert is reminded of his estimate for the median. This gives him a further opportunity to change or confirm his esti-mate for the median, now that he has given an estimate for the 25th percentile. Question 17 : So, you associate a 1 in 4 chance that the actual duration will be Chapter 3. Elicitation of Subjective Probabilities 65 above N, is that correct ? Comment : A check to clarify that the expert is thinking about the 75th percentile with the bisected value. If the expert confirms his estimate, stop the interview, if not ask Question 18. Question 18 : If not, what is the value for the actual duration that you consider to have a 1 in 4 chance of being exceeded ? Comment : A follow up question to 17. The questionnaire combines direct probability responses and chance responses to provide cross checking for consistency. The basis for direct probability responses is the variable interval method (Huber, 1974), because it elicits the percentiles required by the framework. Hull (1978) and Seaver et al. (1978) have reported that the fixed interval method performed better than the variable interval method because the vari-able interval method gave distributions that were "too-tight". It must be noted that both studies assessed the median first, giving rise to possible central bias. Murphy and Winkler (1975) studying experienced weather forecasters conclude that the vari-able interval method performed better than the fixed interval method in probabilistic weather forecasting. Since the questionnaire starts with the tails of the distribu-tion, the elicited percentiles should overcome the effects of central bias and display sufficient spread. The elicitation session is based on the questionnaire and the analyst is expected to follow the general format of the questionnaire when conducting the session. How-ever, at his discretion the analyst can adapt the interview to suit different situations. Since the questionnaire is based primarily on the variable interval technique, it is easily standardized and automated to use for those variables selected for interactive computer interviews. Chapter 3. Elicitation of Subjective Probabilities Figure 3.2: Subjective Percentile Estimates Chapter 3. 
3.6 Feedback and Consensus Estimates

Whenever possible a group of experts is used for the elicitation, because consensus judgements from a group have been found to be better than individual judgements (Ashton and Ashton, 1985; Ashton, 1986; Bacharach, 1975; Beach, 1975; Bordley, 1982; Bordley and Wolff, 1981; Cooper and Chapman, 1987; French, 1985; Hampton et al., 1973; Huber, 1974; Stael von Holstein, 1971; Winkler, 1968, 1971). Huber (1974) states that the aggregation of responses from several experts improves subjective judgements because aggregation from a statistical viewpoint tends to reduce random error as well as reduce the impact of biases. This view is confirmed by Beach (1975), who in addition states that combining the opinions of several experts would aid in eliminating conservatism and/or extremism and promote more nearly optimal decisions.

Once the initial subjective estimates are made by the expert, he is provided feedback on his assessments in the form of a discussion (graphically if necessary) between the analyst and expert, and between expert and expert. The expert can revise his prior judgements after the discussion. This process is based on the nominal group technique (Delbecq et al., 1975). While there is no consensus in the literature as to which is the best method to provide feedback, there is agreement that feedback improves the original estimates (Beach, 1975; Chesley, 1975; Gustafson et al., 1973; Lock, 1987; Stael von Holstein, 1971; Winkler, 1971). However, Gustafson et al. (1973) have shown assessments from the nominal group technique (estimate-talk-estimate) to be more accurate than those from the Delphi technique (estimate-feedback-estimate), the conventional group technique or individual estimates. Lock (1987), in proposing a general approach to group judgmental forecasting, concludes that there are benefits to communication and discussion between group members, so long as these are structured as in nominal group approaches.

Winkler (1968) used several mathematical and behavioral approaches for arriving at consensus subjective probability distributions. The mathematical approaches used either a weighted average or Bayes' theorem. The behavioral approaches, Delphi and nominal group, led the group to arrive at the final probability distribution. He could not determine which method was most accurate, because there was no "correct" opinion, but he did find that different methods produced different results. Makridakis and Winkler (1983) used ten different forecasting methods to combine forecasts. Performance was compared in terms of the mean average percentage error (MAPE). They found that the accuracy increased when additional methods were added to the forecast. However, the gains tailed off after about four or five were combined.

Ashton and Ashton (1985) compared equal weighting with four differential weighting methods to examine the impact of aggregation in the forecasting of annual advertising sales at TIME magazine. They found: aggregates of subjective forecasts to be more accurate than the individual forecasts that comprised the aggregates; the incremental accuracy of differential weighting methods over equal weighting to be small; and, regardless of the weighting method, the accuracy attributable to aggregation to be achieved by combining a small number of individual forecasts. They concluded that equal weighting appears to be the solution to the problem of choosing a weighting method for subjective forecasting. Lock (1987) states that for most purposes linear models are adequate for aggregation and differential weighting does not offer any real advantages in practical terms.
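On one simple reading of that equal-weighting conclusion, consensus percentile estimates reduce to an average across experts; the sketch below is the author's illustration of that reading, not the thesis procedure:

    def consensus_percentiles(expert_estimates):
        # expert_estimates: one dict of percentile estimates per expert,
        # e.g. {"5%": 17.0, "50%": 20.0, "95%": 22.0}; equal weights
        n = len(expert_estimates)
        keys = expert_estimates[0].keys()
        return {k: sum(e[k] for e in expert_estimates) / n for k in keys}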
Since the experts are provided feedback, their subjective estimates should be close to consensus. Also, because of the difficulty of measuring the variability in expert accuracy, consensus subjective estimates are obtained by assigning equal weights to all the experts. The routine aspects of feedback, such as exchanging individual estimates, obtaining revised estimates, and combining subjective estimates, can be readily automated.

3.7 Analysis Stage

The requirement for eliciting coherent subjective probabilities has been discussed previously. An automated approach to ensure the coherence of subjective probabilities has been developed as part of this research effort. It is documented in the form of an interactive computer program called "ELICIT" (see Appendix D). This program, based on the method to convert subjective estimates to moments (section 2.3.2), enables the analyst to approximate the subjective estimates for a variable by a Pearson type distribution. The high flexibility of the Pearson family (Amos and Daniel, 1971) allows most sets of subjective estimates to be approximated by Pearson type distributions. However, in some instances the subjective estimates may not approximate to a Pearson distribution within the specified maximum cumulative error. In these situations the expert is made aware of the necessity for subjective probabilities to be coherent and is asked to modify the 25th and 75th percentile estimates. These two estimates are elicited only to approximate a Pearson type distribution. The expected value and standard deviation for the uncertain variable are derived from the 5th, 50th and 95th percentile values and are initially independent of the approximated distribution. In addition, the conversion of subjective estimates to moments ensures that the measured belief is useful in the quantification of uncertainty of the derived variable.

For example, assume that the five percentile values elicited for a variable are 1.0, 2.5, 5.0, 7.5 and 9.0. The expected value and standard deviation from step 2 of section (2.3.2) are 5.0 and 2.432. However, the five subjective estimates do not approximate to a Pearson type distribution. It is obvious from the 5th and 95th estimates that the expert is thinking of a symmetric distribution. If a value for the 25th percentile between 2.9 and 3.5, with a symmetrical value for the 75th percentile between 6.5 and 7.1, is acceptable to the expert, a Pearson distribution with $E[X] = 5.0$, $\sigma = 2.432$, $\sqrt{\beta_1} = 0$ and $\beta_2$ between 2.0 and 4.8 can be approximated. In the author's limited experience in eliciting subjective estimates, the consensus among experts is that, if necessary, they are willing to change the 25th and 75th percentile estimates within reason, because those two are the least important to them. Ideally, the interactive program should give guidance for the change, as in the case of correlation coefficients for a positive definite correlation matrix. However, at present it is the responsibility of the analyst to guide the expert.
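The exact percentile-to-moment conversion is defined in section (2.3.2) and is not reproduced here. The sketch below uses common three-point approximations (Pearson-Tukey weights for the mean and a normal-based spread of $(x_{95} - x_{05})/3.29$), which happen to reproduce the 5.0 and 2.432 of the worked example above; treating these as the thesis's formulas is an assumption.

```python
# A minimal sketch of converting 5th/50th/95th percentile estimates to an
# expected value and standard deviation. The exact conversion lives in
# section 2.3.2; the three-point formulas here are an assumption that
# reproduces the worked example (E[X] = 5.0, sigma = 2.432).

def moments_from_percentiles(x05, x50, x95):
    mean = 0.185 * x05 + 0.630 * x50 + 0.185 * x95   # Pearson-Tukey weights
    std = (x95 - x05) / 3.29    # 3.29 = 2 * 1.645, the normal 90% span
    return mean, std

mean, std = moments_from_percentiles(1.0, 5.0, 9.0)
print(round(mean, 3), round(std, 3))   # 5.0 2.432
```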
The analyst can recognize the shape of the distribution (symmetric, positively or negatively skewed) by observing the 5th, 50th and 95th percentile estimates, and can guide the expert to acceptable estimates for the 25th and 75th percentiles. It is planned that this facility be added to "ELICIT" as a future improvement.

3.8 Verification

As the final stage of the elicitation, the subjective prior probability distribution is verified to see whether the expert is in total agreement with it (i.e. it reflects his belief). Cooper and Chapman (1987) state that verification can be conducted by: cross checking values for consistency; using different elicitation methods, especially when indirect methods have been used; and having the expert examine and confirm the final result.

Since the questionnaire has already cross checked all the percentile values for consistency, as verification the computer program "ELICIT" informs the user of the expected value, standard deviation and shape characteristics of the approximated Pearson type distribution. While the skewness and the kurtosis give an indication of the shape of the distribution, a better verification method is to provide a graphical display of the approximated density function. The next step in the development of "ELICIT" is to display the probability density function of the approximated Pearson type for the expert to view. Incorporating such a verification process into the elicitation technique would give the analyst and expert greater confidence that the approximated distribution represents the expert's belief. In addition, it would prevent variables with bell-shaped probability distributions from being approximated by Pearson distributions that are U or J shaped.

3.9 Summary

An approach to elicit an expert's belief about uncertainty as subjective probabilities has been described in this chapter. The approach combines the theoretical requirements of subjective probabilities with a practical process. The process is developed by transferring proven techniques from other fields of study to the requirements of risk quantification in engineering construction. The role of the computer and the use of a standard approach to expedite the process are identified at every stage.

The pre-elicitation stage, based on the developments by Spetzler and Stael von Holstein (1975), trains and prepares the expert to quantify his belief as subjective probabilities. This stage requires a high level of person-to-person interaction; hence there is little use of the computer during pre-elicitation. The elicitation stage elicits the percentile values of an expert's subjective prior probability distributions for uncertain variables using the questionnaire developed above. These subjective probabilities are accurate and calibrated. Since the questionnaire is based primarily on the variable interval technique, the elicitation stage can be standardized and automated for interactive computer interviews. If more than one expert participates in the elicitation, consensus subjective estimates are obtained by assigning equal weights to all the experts. The routine aspects of feedback and of obtaining consensus subjective estimates can be done by the computer. The coherence of subjective probabilities is ensured by the interactive computer program "ELICIT" (see Appendix D), using the Pearson family of distributions.
The moments for the uncertain primary variable are used to verify whether the shape of the approximated distribution is similar to that which the expert has in mind.

Distinct roles for computerization and standardization exist to expedite the process of elicitation and verification of an expert's belief about uncertainty. At present, experience in using these approaches in field applications is limited. The next stage of this research will concentrate on building up experience from field applications and on refining and validating the elicitation approach. This is essential for the proposed method to become a practical tool in risk quantification for large engineering projects.

Chapter 4

Correlations Between Variables

4.1 General

The risk measurement framework developed in chapter two was based on the assumption that the correlations between variables were linear. From that assumption a variable transformation method was developed to treat linear correlations among the primary variables when evaluating the moments of the derived variable. This transformation was based on the correlation matrix for the primary variables.

A correlation matrix is defined by Graybill (1983) as follows. Let X be an n×1 random vector with positive definite covariance matrix denoted by $\mathbf{C} = [\sigma_{ij}]$. The correlation matrix of X is $\mathbf{R} = [\rho_{ij}]$, where $\rho_{ij}$ is defined by

$$\rho_{ij} = \frac{\sigma_{ij}}{\sqrt{\sigma_{ii}\,\sigma_{jj}}} \qquad (4.1)$$

for all i and j.

This chapter addresses some of the issues that arise in treating analytically the linear correlations between variables (these are equally relevant when treating correlations for Monte Carlo simulation). In the next section the correlations between primary variables are discussed. The discussion highlights an often ignored theoretical requirement of the correlation matrix and thereby develops a method to elicit a positive definite correlation matrix for primary variables.

Developed in the third section is a method to obtain a positive definite correlation matrix for derived variables. The method is developed by extending the approximation for the covariance between two functions suggested by Kendall and Stuart (1969) to the multivariate case. The fourth section addresses the issue of multicollinearity in the correlation matrix and suggests a mathematical manipulation to overcome its effect in practical applications.

A numerical study is presented in the fifth section. The first part of the study compares the variable transformation method to the standard approach used in moment analysis to treat correlation among primary variables (Ang and Tang, 1975; Benjamin and Cornell, 1970) under general conditions. The second part explores the behavior of the two methods in the presence of multicollinearity; it demonstrates that while the variable transformation method is stable in the presence of multicollinearity, the standard approach can fail. The intention of the third part is to study the susceptibility of the transformation itself to the effect of multicollinearity.

4.2 Correlation between Primary Variables

The correlation information between primary variables, required by the framework, will have to be obtained subjectively from experts because of data limitations. A number of authors in both simulation and approximate applications have recognized this necessity (Eilon and Fowkes, 1973; Inyang, 1983; Howard, 1971; Hull, 1977, 1980; Kadane et al., 1980; Keefer and Bodily, 1983; Kryzanowski et al., 1972; Wagle, 1967).
Other than Kadane et al. (1980), who develop an approach to elicit a positive definite correlation matrix, all of the others obtain only the correlation coefficients between variables.

4.2.1 Positive Definite Correlation Matrix

A positive definite correlation matrix ensures theoretical consistency of a system. A correlation matrix is positive definite if there are no linear dependencies among the primary variables. If an elicited correlation matrix is not positive definite, then it has to be positive semi-definite, because the variance of a linear combination of random variables is always greater than or equal to zero.

Proof that a Correlation Matrix is Positive Definite

Let X be the vector of n random variables with covariance matrix $\mathbf{C}_X$ and correlation matrix R. Let a be a vector of n scalars. By definition,

$$\mathrm{Var}[\mathbf{a}^T\mathbf{X}] \geq 0 \qquad (4.2)$$

$$\mathbf{a}^T\,\mathbf{C}_X\,\mathbf{a} \geq 0 \qquad (4.3)$$

Therefore the covariance matrix $\mathbf{C}_X$ is always positive definite (i.e. > 0) or positive semi-definite (i.e. = 0). Equation (4.3) can be rewritten as

$$\mathbf{a}^T\,\mathbf{C}_X\,\mathbf{a} = \mathbf{a}^T\,E\big[(\mathbf{X}-\bar{\mathbf{X}})(\mathbf{X}-\bar{\mathbf{X}})^T\big]\,\mathbf{a} \qquad (4.4)$$

Let $b = (\mathbf{X}-\bar{\mathbf{X}})^T\mathbf{a}$, where b is a scalar, so that $b = b^T$. When b = 0 (the positive semi-definite condition),

$$(\mathbf{X}-\bar{\mathbf{X}})^T\,\mathbf{a} = 0 \qquad (4.5)$$

$$(X_1-\bar{X}_1)a_1 + (X_2-\bar{X}_2)a_2 + \cdots + (X_n-\bar{X}_n)a_n = 0 \qquad (4.6)$$

i.e. variable $X_n$ is a linear combination of the others. If there are no linear dependencies (combinations), then the covariance matrix is always positive definite.

For the correlation matrix, start from the relationship $\mathbf{R} = \mathbf{D}^{-1}\mathbf{C}_X\mathbf{D}^{-1}$, where $\mathbf{D}^{-1}$ is the inverse of the diagonal matrix of standard deviations of the X vector. It follows that

$$\mathbf{a}^T\,\mathbf{R}\,\mathbf{a} = \mathbf{a}^T\,\mathbf{D}^{-1}\,\mathbf{C}_X\,\mathbf{D}^{-1}\,\mathbf{a} \qquad (4.7)$$

Since $\mathbf{D}^{-1}$ is symmetric, $[\mathbf{D}^{-1}]^T = \mathbf{D}^{-1}$, and

$$\mathbf{a}^T\,[\mathbf{D}^{-1}]^T\,\mathbf{C}_X\,\mathbf{D}^{-1}\,\mathbf{a} = \mathbf{b}^T\,\mathbf{C}_X\,\mathbf{b} \qquad (4.8)$$

where $\mathbf{b} = \mathbf{D}^{-1}\mathbf{a}$. Since $\mathbf{D}^{-1}$ is non-singular and symmetric, when $\mathbf{C}_X$ is positive definite,

$$\mathbf{b}^T\,\mathbf{C}_X\,\mathbf{b} > 0 \qquad (4.9)$$

Thus if the covariance matrix is positive definite, the correlation matrix is always positive definite.

A correlation matrix can be positive semi-definite even when no pair of variables is perfectly correlated. For example, consider the correlation matrix for a three-variable system given by

$$\mathbf{R}_0 = \begin{bmatrix} 1.0 & 0.5 & 0.5 \\ 0.5 & 1.0 & -0.5 \\ 0.5 & -0.5 & 1.0 \end{bmatrix} \qquad (4.10)$$

At first glance the correlation coefficients between the variables seem reasonable. However, the determinant of $\mathbf{R}_0$ is equal to zero (i.e. it is positive semi-definite). Further investigation shows that a linear combination of variables 2 and 3 is perfectly correlated with variable 1 (see Appendix B).
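This check is easy to perform numerically. A minimal sketch using NumPy follows (the thesis's own checks live in "ELICIT"; this is not that code). A Cholesky factorization exists only for positive definite matrices, so its failure flags a semi-definite or indefinite elicitation.

```python
import numpy as np

# A minimal sketch of the positive-definiteness check on the example
# matrix R0 of equation (4.10).
R0 = np.array([[ 1.0,  0.5,  0.5],
               [ 0.5,  1.0, -0.5],
               [ 0.5, -0.5,  1.0]])

print(np.linalg.det(R0))        # approximately 0 -> positive semi-definite
print(np.linalg.eigvalsh(R0))   # smallest eigenvalue is 0

try:
    np.linalg.cholesky(R0)
except np.linalg.LinAlgError:
    print("R0 is not positive definite")
```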
4.2.2 Elicitation of a Correlation Matrix

The proposed method of elicitation is a two-stage process. The first stage is the elicitation of the linear correlation coefficients between the primary variables; the second is ensuring the positive definiteness of the correlation matrix.

Linear Correlation Coefficients

The linear correlation coefficient between two primary variables $X_i$ and $X_j$ can be approximated from the conditional expected value of $X_j$ given $X_i = Q$. The conditional expected value of $X_j \mid X_i = Q$, from Bury (1975), is

$$E[X_j \mid X_i = Q] = E[X_j] + \rho_{ij}\,\frac{\sigma_j}{\sigma_i}\,(Q - E[X_i]) \qquad (4.11)$$

where $E[X_i]$ and $E[X_j]$ are the expected values and $\sigma_i$ and $\sigma_j$ the standard deviations of $X_i$ and $X_j$, Q is the conditional value for $X_i$, and $\rho_{ij}$ is the linear correlation coefficient between $X_i$ and $X_j$. Hence,

$$\rho_{ij} = \frac{\big(E[X_j \mid X_i = Q] - E[X_j]\big)\,\sigma_i}{\big(Q - E[X_i]\big)\,\sigma_j} \qquad (4.12)$$

Then, similarly to the method suggested by Hull (1977) for risk simulation, the correlation coefficient between $X_i$ and $X_j$ is approximated by averaging three or four values of $\rho_{ij}$ evaluated from equation (4.12). The conditional expected value $E[X_j \mid X_i = Q]$ is elicited by asking the experts "What is the expected value for $X_j$ when $X_i = Q$?". Different percentile values of $X_i$ can be used for the conditional value Q.

The Elicitation Procedure

Let $\mathbf{R}_n$ be a subjectively elicited n×n correlation matrix partitioned as

$$\mathbf{R}_n = \begin{bmatrix} \mathbf{R}_{n-1} & \mathbf{b} \\ \mathbf{b}^T & 1 \end{bmatrix} \qquad (4.13)$$

where $\mathbf{R}_{n-1}$ is a (n−1)×(n−1) correlation matrix for n = 2, 3, ... and $\mathbf{b}^T = [\rho_{1n}\ \rho_{2n}\ \cdots\ \rho_{n-1,n}]$. Then $\mathbf{R}_n$ is positive definite if $\mathbf{R}_{n-1}$ is positive definite and

$$\mathbf{b}^T\,\mathbf{R}_{n-1}^{-1}\,\mathbf{b} < 1 \qquad (4.14)$$

(Kadane et al., 1980; for proof see Appendix C).

First, the primary variables in the function for the derived variable are ordered according to the expert's confidence in them and in their relationships with the other variables. The variable selected as the "best" is numbered one and the "worst" is numbered n (when there are n variables in the functional form). Then $\rho_{12}$ is elicited as suggested in the previous section. This value is assumed to be consistent with the expert's belief because a 2×2 correlation matrix is always positive definite. Thereafter, $\rho_{13}$ and $\rho_{23}$ are elicited. If the condition given by equation (4.14) is satisfied, the correlation values are accepted because the 3×3 matrix is positive definite. If the condition is violated, the expert is made aware of the inconsistency and given the option to change one of the correlation coefficients in the b vector. When a value is selected (say $\rho_{23}$) the expert is informed of the real bounds on $\rho_{23}$ within which the 3×3 matrix will be positive definite. The bounds, $r_1$ and $r_2$ (if they exist; see figure 4.1), are the real roots of the quadratic inequality (see Appendix C for the derivation)

$$r_j\,\Gamma^2 + \left[C_{1j} + C_{2j} + \sum_{i=1}^{j-1} r_i\,B^{(1)}_i + \sum_{i=j+1}^{n-1} r_i\,B^{(2)}_i\right]\Gamma + \sum_{i=1}^{j-1}\big(C_{1i}+C_{2i}\big)B^{(1)}_i + \sum_{i=j+1}^{n-1}\big(C_{1i}+C_{2i}\big)B^{(2)}_i - 1 < 0 \qquad (4.15)$$

where

$$\mathbf{R}_{n-1}^{-1} = \begin{bmatrix} \mathbf{S}_1 \\ \mathbf{r} \\ \mathbf{S}_2 \end{bmatrix}; \qquad \mathbf{C}_1 = \mathbf{B}^{(1)}\mathbf{S}_1; \qquad \mathbf{C}_2 = \mathbf{B}^{(2)}\mathbf{S}_2.$$

Here $\Gamma$ is the correlation coefficient ($\rho_{jn}$) for which bounds are required (for $\rho_{23}$, j = 2 and n = 3); $\mathbf{S}_1$ is a (j−1)×(n−1) matrix and $\mathbf{S}_2$ is a (n−1−j)×(n−1) matrix; $\mathbf{B}^{(1)}$ and $\mathbf{B}^{(2)}$ are the 1×(j−1) and 1×(n−1−j) row matrices formed by the elements of $\mathbf{b}^T$ before and after $\Gamma$; and $\mathbf{r}$, $\mathbf{C}_1$ and $\mathbf{C}_2$ are 1×(n−1) row matrices.

This procedure, of introducing the next ordered variable, eliciting correlation coefficients between it and the previous variables, and ensuring that the correlation matrix remains positive definite, is continued until $\mathbf{R}_n$ is positive definite. Once accepted, the elicitation procedure does not permit the positive definite $\mathbf{R}_{n-1}$ to be changed. If at any stage the expert refuses to change a value in the b vector when $\mathbf{R}_n$ is not positive definite, he is implying that the function for the derived variable is not consistent with his belief, and it should be changed by removing one or more of the already used ordered variables from the function.

[Figure 4.1: Feasible Regions for Γ for Rn to be Positive Definite. The panels show the feasible interval within [−1, +1] defined by the roots r1 and r2, including the degenerate cases of a single feasible point and an empty feasible region.]
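A minimal sketch of the acceptance test in this procedure, using condition (4.14) directly, follows; the interactive guidance toward the bounds $r_1$ and $r_2$ provided by "ELICIT" is not reproduced.

```python
import numpy as np

# A minimal sketch of the sequential acceptance test of equation (4.14):
# given an accepted positive definite R_{n-1} and a newly elicited column
# b = [rho_1n, ..., rho_(n-1)n], R_n is positive definite iff
# b' R_{n-1}^{-1} b < 1.

def accept_new_variable(R_prev, b):
    b = np.asarray(b, dtype=float)
    return b @ np.linalg.solve(R_prev, b) < 1.0

R2 = np.array([[1.0, 0.5],
               [0.5, 1.0]])                    # accepted 2x2 matrix
print(accept_new_variable(R2, [0.5, -0.5]))    # False: yields the singular R0
print(accept_new_variable(R2, [0.5, -0.2]))    # True: a consistent choice
```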
4.3 Correlation between Derived Variables

Assumption 4.1 : Correlation between two derived variables arises only from common (shared) variables in their functional forms.

The common (shared) primary variables are defined as those of the same type having the same first four moments in the functional forms of two or more derived variables (see figure 4.2). Hence, correlation between two derived variables arises only from those primary variables that are quantified in the functions. Correlation arising from unquantifiable variables in construction, such as management, methods or weather, is ignored by assumption (4.1).

The correlation coefficient between two derived variables $Y_a$ and $Y_b$ can be evaluated from

$$\rho_{ab} = \frac{\mathrm{cov}(Y_a, Y_b)}{\sqrt{\mu_2(Y_a)\,\mu_2(Y_b)}} \qquad (4.16)$$

where $\mathrm{cov}(Y_a, Y_b)$ is the covariance between $Y_a$ and $Y_b$, $\mu_2(Y_a)$ and $\mu_2(Y_b)$ are the variances of $Y_a$ and $Y_b$, and $\rho_{ab}$ is the correlation coefficient between $Y_a$ and $Y_b$. The covariance between two derived variables can be approximated from the approximation given by Kendall and Stuart (1969), using only the linear correlation information between the primary variables:

$$\mathrm{cov}(Y_a, Y_b) \approx \sum_{i=1}^{l} \frac{\partial g_a}{\partial X_i}\frac{\partial g_b}{\partial X_i}\,\mu_2(X_i) + \sum_{i=1}^{l}\sum_{\substack{j=1 \\ j \neq i}}^{l} \frac{\partial g_a}{\partial X_i}\frac{\partial g_b}{\partial X_j}\,\mathrm{cov}(X_i, X_j) \qquad (4.17)$$

where $Y_a = g_a(\mathbf{X})$ has m random variables, $Y_b = g_b(\mathbf{X})$ has n random variables, l is the number of common (shared) primary variables in the functions $g_a(\mathbf{X})$ and $g_b(\mathbf{X})$ (i.e. $X_1, X_2, \ldots, X_l$), and $\mathrm{cov}(X_i, X_j)$ is the covariance between the two primary variables $X_i$ and $X_j$. First order Taylor series expansions of the functions for the derived variables are used in equations (4.16) and (4.17). For a vector of derived variables Y in a system, the correlation matrix must also be positive definite.

[Figure 4.2: Correlation from Common (Shared) Primary Variables. The figure contrasts the correlation information required for the system with the correlation information available from the common (shared) primary variables.]
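A minimal sketch of the approximation (4.17) for two derived variables that share primary variables follows, using numerical gradients; the example functions and input moments are illustrative assumptions, not values from the study.

```python
import numpy as np

# A minimal sketch of equations (4.16)-(4.17): first-order covariance and
# correlation between two derived variables arising from their shared
# primary variables.

def grad(g, x, h=1e-6):
    """Central-difference gradient of g at x."""
    x = np.asarray(x, dtype=float)
    out = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = h
        out[i] = (g(x + e) - g(x - e)) / (2 * h)
    return out

g_a = lambda x: x[0] / (x[1] * x[2])       # e.g. a duration-type function
g_b = lambda x: x[0] * x[1]                # shares X1 and X2 with g_a

mean = np.array([100.0, 5.0, 4.0])         # E[X]
C = np.array([[25.0, -1.0,  0.5],          # covariance matrix of X
              [-1.0,  0.6, -0.1],
              [ 0.5, -0.1,  0.4]])

da, db = grad(g_a, mean), grad(g_b, mean)
# g_b has zero derivative in X3, so summing over all indices reduces to
# the sum over the shared variables X1 and X2.
cov_ab = da @ C @ db
rho_ab = cov_ab / np.sqrt((da @ C @ da) * (db @ C @ db))
print(cov_ab, rho_ab)
```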
Proof that the Correlation Matrix for Y is Positive Definite

Let Y be a vector of derived variables, $\mathbf{Y} = [Y_1 \ldots Y_a\ Y_b \ldots Y_z]^T$, where $Y_1 = g_1(\mathbf{X})$, $Y_a = g_a(\mathbf{X})$, $Y_b = g_b(\mathbf{X})$, $Y_z = g_z(\mathbf{X})$. Let $\mathbf{C}_Y$ be the covariance matrix, $\mathbf{R}_Y$ the correlation matrix and $\mathbf{D}_Y$ the diagonal matrix of standard deviations of the vector Y.

The covariance matrix of Y is positive definite if there are no linear dependencies among the derived variables. Linear dependencies can occur if the functional forms of two or more derived variables are identical and all their primary variables are shared. However, since the models are not perfect and unquantifiable variables exist in all systems, the true models are

$$Y_1^* = g_1(\mathbf{X}) + e_1,\quad Y_a^* = g_a(\mathbf{X}) + e_a,\quad Y_b^* = g_b(\mathbf{X}) + e_b,\ \ldots,\ Y_z^* = g_z(\mathbf{X}) + e_z$$

where e is a vector of independent error variables representing the unquantifiable variables in the systems. For simplicity, assume all error variables have the same variance $\sigma^2$. Then

$$\mathbf{Y}^* = \mathbf{Y} + \mathbf{e} \qquad (4.18)$$

Let a be a vector of scalars. From the definition of variance,

$$\mathrm{Var}[\mathbf{a}^T\mathbf{Y}^*] \geq 0 \qquad (4.19)$$

$$\mathbf{a}^T\,E\big[(\mathbf{Y}^*-\bar{\mathbf{Y}}^*)(\mathbf{Y}^*-\bar{\mathbf{Y}}^*)^T\big]\,\mathbf{a} \geq 0 \qquad (4.20)$$

Since $E[\mathbf{e}] = \mathbf{0}$, $(\mathbf{Y}^*-\bar{\mathbf{Y}}^*) = (\mathbf{Y}-\bar{\mathbf{Y}}+\mathbf{e})$. Then

$$\mathbf{a}^T\,E\big[(\mathbf{Y}-\bar{\mathbf{Y}}+\mathbf{e})(\mathbf{Y}-\bar{\mathbf{Y}}+\mathbf{e})^T\big]\,\mathbf{a} \geq 0 \qquad (4.21)$$

Since the error variables are statistically independent of Y and of each other, the expectations of the cross terms vanish, and

$$E\big[(\mathbf{Y}-\bar{\mathbf{Y}}+\mathbf{e})(\mathbf{Y}-\bar{\mathbf{Y}}+\mathbf{e})^T\big] = E\big[(\mathbf{Y}-\bar{\mathbf{Y}})(\mathbf{Y}-\bar{\mathbf{Y}})^T\big] + \sigma^2\,\mathbf{I} \qquad (4.22)$$

Substituting (4.22) in (4.21),

$$\mathbf{a}^T\,\Big(E\big[(\mathbf{Y}-\bar{\mathbf{Y}})(\mathbf{Y}-\bar{\mathbf{Y}})^T\big] + \sigma^2\,\mathbf{I}\Big)\,\mathbf{a} \geq 0 \qquad (4.23)$$

Expanding equation (4.23),

$$\mathbf{a}^T\,E\big[(\mathbf{Y}-\bar{\mathbf{Y}})(\mathbf{Y}-\bar{\mathbf{Y}})^T\big]\,\mathbf{a} + \sigma^2\,\mathbf{a}^T\mathbf{a} \geq 0 \qquad (4.24)$$

For the covariance matrix to be positive semi-definite, both terms in (4.24) would have to be zero at the same time. But $\sigma^2\,\mathbf{a}^T\mathbf{a} > 0$. Therefore

$$\mathbf{a}^T\,E\big[(\mathbf{Y}^*-\bar{\mathbf{Y}}^*)(\mathbf{Y}^*-\bar{\mathbf{Y}}^*)^T\big]\,\mathbf{a} > 0 \qquad (4.25)$$

Since no linear dependencies exist in the vector of derived variables,

$$\mathbf{a}^T\,E\big[(\mathbf{Y}-\bar{\mathbf{Y}})(\mathbf{Y}-\bar{\mathbf{Y}})^T\big]\,\mathbf{a} > 0 \qquad (4.26)$$

Hence

$$\mathbf{a}^T\,\mathbf{C}_Y\,\mathbf{a} > 0 \qquad (4.27)$$

The covariance matrix for derived variables is always positive definite.

In a correlation matrix the following requirements must hold (Graybill, 1983): (1) $\rho_{ii} = 1$ for every i; (2) $-1 \leq \rho_{ij} \leq 1$ for all $i \neq j$. Since only the first order Taylor series expansions of the vector Y are used to evaluate correlation and covariance, the first requirement is obtained by evaluating the ith diagonal element of $\mathbf{D}_Y^{-1}\,\mathbf{C}_Y\,\mathbf{D}_Y^{-1}$ ($= \mathbf{R}_Y$). The second requirement is obtained by setting the ith and jth elements of a vector of scalars b equal to +1 and the other elements equal to zero: from $\mathbf{b}^T\mathbf{R}_Y\mathbf{b} > 0$, $\rho_{ii} + \rho_{ij} + \rho_{ji} + \rho_{jj} > 0$, or $\rho_{ij} > -1$. Similarly, by changing the jth element of b to −1, $\rho_{ii} - \rho_{ij} - \rho_{ji} + \rho_{jj} > 0$, or $\rho_{ij} < 1$. Since the covariance matrix is positive definite and these requirements hold, the correlation matrix for derived variables is always positive definite.

4.4 Multicollinearity

For a successful transformation, the elicited correlation matrix, in addition to being positive definite, should also be stable, because it must be inverted. Instability can occur when the determinant of the correlation matrix is close to zero. This problem is called multicollinearity. The term defines itself: "multi" implying many and "collinear" implying linear dependencies (Myers, 1986). Multicollinearity occurs when there are near linear dependencies among the columns of a correlation matrix. That is, there is a vector of constants c (not all zero) for which

$$\sum_{j=1}^{n} c_j\,\mathbf{x}_j \approx \mathbf{0} \qquad (4.28)$$

where $\mathbf{x}_j$ are the columns of the n×n correlation matrix. If the left hand side of equation (4.28) is identically zero, then the correlation matrix is positive semi-definite; the linear dependencies are exact, and the inverse of the correlation matrix, and hence $\mathbf{L}^{-1}$, does not exist.

Myers (1986) states that if multicollinearity is present, then there exists at least one $\lambda_i \approx 0$, where the $\lambda_i$ are the eigenvalues of the correlation matrix. While the nearness to zero of the smallest eigenvalue is a measure of the strength of a linear dependency, the ratio

$$\phi = \frac{\lambda_{\max}}{\lambda_{\min}} \qquad (4.29)$$

called the condition number of the correlation matrix, is the true measure of multicollinearity. As a rule of thumb, a correlation matrix with $\phi < 100$ is considered stable. However, when $\phi$ exceeds 1000, one should be concerned about the effect of multicollinearity, i.e. instability in the correlation matrix (Myers, 1986).

The instability in the correlation matrix can hamper a successful transformation. It is suggested that for practical applications the concept called the "k value" be utilized. The k value is used in ridge regression as a mathematical manipulation to stabilize an unstable correlation matrix (Myers, 1986). The stability is achieved by replacing the correlation matrix R by (R + kI), where k is a small positive quantity. Similarly, an elicited unstable correlation matrix can be stabilized by adding a kI matrix to it; the k value is the smallest value that makes the correlation matrix stable. Stability is defined in terms of the desired stabilizing condition number, given by

$$\phi_s = \frac{\lambda_{\max} + k}{\lambda_{\min} + k} \qquad (4.30)$$

Therefore

$$k = \frac{\lambda_{\max} - \phi_s\,\lambda_{\min}}{\phi_s - 1} \qquad (4.31)$$

would stabilize the correlation matrix to the desired condition number $\phi_s$. An upper bound on the stabilizing k value can be established in terms of the number of variables in the functional form (n) and the desired stabilizing condition number ($\phi_s$), from the fact that the largest eigenvalue of a correlation matrix is always less than n (Graybill, 1983). Hence the k value that stabilizes a correlation matrix to a desired $\phi_s$ can be bounded as

$$k < \frac{n}{\phi_s - 1} \qquad (4.32)$$

For example, if a condition number $\phi_s = 100$ is desired (the empirical limit at which regression analysis considers a correlation matrix to be stable), the k value for the three-variable function described by equation (4.33) is less than 0.030303.
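A minimal sketch of the condition number check of equation (4.29) and the stabilizing k value of equations (4.30) and (4.31) follows; the near-singular matrix is the one used later in the multicollinearity study of section 4.5.4.

```python
import numpy as np

# A minimal sketch of equations (4.29)-(4.31): condition number and
# ridge-style stabilization of a near-singular correlation matrix.
R_m = np.array([[ 1.000, -0.999,  0.999],
                [-0.999,  1.000, -0.999],
                [ 0.999, -0.999,  1.000]])

lam = np.linalg.eigvalsh(R_m)
lam_min, lam_max = lam[0], lam[-1]
print(lam_max / lam_min)                    # condition number, about 2998

phi_s = 100.0                               # desired condition number
k = (lam_max - phi_s * lam_min) / (phi_s - 1.0)   # equation (4.31)
lam2 = np.linalg.eigvalsh(R_m + k * np.eye(3))
print(k, lam2[-1] / lam2[0])                # k < 3/99, stabilized to ~100
```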
4.5 Numerical Study

The first part of the numerical study compares the variable transformation method to the standard approach used in moment analysis under general conditions (i.e. stable correlation matrices). The second part explores the behavior of the two methods in the presence of multicollinearity. The intention of the third part is to study the susceptibility of the transformation to the effect of multicollinearity.

The duration of a work package in a construction project is used as the derived variable for the study. The duration of a work package (T) can be evaluated from the simple relationship

$$T = \frac{Q}{P_L\,L} = g(\mathbf{X}) \qquad (4.33)$$

where Q is the quantity descriptor, $P_L$ is the labour productivity rate and L is the labour usage; these are the primary variables of the work package duration model.

4.5.1 Variable Transformation Method

For the work package duration model described by equation (4.33), $\mathbf{X}^T = [Q\ P_L\ L] = [X_1\ X_2\ X_3]$ and $\mathbf{D} = \mathrm{diag}[\sigma_i]$. If the correlation matrix for the work package duration model is

$$\mathbf{R} = \begin{bmatrix} 1.0 & \rho_{12} & \rho_{13} \\ \rho_{12} & 1.0 & \rho_{23} \\ \rho_{13} & \rho_{23} & 1.0 \end{bmatrix} \qquad (4.34)$$

then the lower triangular matrix obtained from the Cholesky decomposition of the correlation matrix ($\mathbf{R} = \mathbf{L}\mathbf{L}^T$) is

$$\mathbf{L} = \begin{bmatrix} L_{11} & 0.0 & 0.0 \\ L_{12} & L_{22} & 0.0 \\ L_{13} & L_{23} & L_{33} \end{bmatrix} \qquad (4.35)$$

where $L_{11} = 1.0$, $L_{12} = \rho_{12}$, $L_{13} = \rho_{13}$, $L_{22} = \sqrt{1-\rho_{12}^2}$, $L_{23} = (\rho_{23} - \rho_{12}\rho_{13})/L_{22}$ and $L_{33} = \sqrt{1-\rho_{13}^2-L_{23}^2}$.

The transformation to the uncorrelated variables is $\mathbf{Z} = \mathbf{L}^{-1}\mathbf{D}^{-1}\mathbf{X}$, and for the functional form, $\mathbf{X} = \mathbf{D}\mathbf{L}\mathbf{Z}$. Hence the first four moments of the work package duration are approximated from equations (2.36) to (2.39). The expected value of the work package duration is

$$E[T] \approx G(\bar{\mathbf{Z}}) + \frac{1}{2}\sum_{i=1}^{3}\frac{\partial^2 G}{\partial Z_i^2}\,\mu_2(Z_i) \qquad (4.36)$$

the second central moment is

$$\mu_2(T) \approx \sum_{i=1}^{3}\left(\frac{\partial G}{\partial Z_i}\right)^2 \mu_2(Z_i) + \sum_{i=1}^{3}\frac{\partial G}{\partial Z_i}\frac{\partial^2 G}{\partial Z_i^2}\,\mu_3(Z_i) + \frac{1}{4}\sum_{i=1}^{3}\left(\frac{\partial^2 G}{\partial Z_i^2}\right)^2\Big[\mu_4(Z_i) - \mu_2^2(Z_i)\Big] \qquad (4.37)$$

the third central moment is

$$\mu_3(T) \approx \sum_{i=1}^{3}\left(\frac{\partial G}{\partial Z_i}\right)^3 \mu_3(Z_i) + \frac{3}{2}\sum_{i=1}^{3}\left(\frac{\partial G}{\partial Z_i}\right)^2 \frac{\partial^2 G}{\partial Z_i^2}\Big[\mu_4(Z_i) - \mu_2^2(Z_i)\Big] \qquad (4.38)$$

and the fourth central moment is

$$\mu_4(T) \approx \sum_{i=1}^{3}\left(\frac{\partial G}{\partial Z_i}\right)^4 \mu_4(Z_i) \qquad (4.39)$$

where $Z_1, Z_2, Z_3$ are the transformed uncorrelated variables corresponding to $Q, P_L, L$, and $G(\mathbf{Z})$ is the transformed function for the work package duration.
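A minimal sketch of the transformation step itself follows, using NumPy's Cholesky factorization and the work package 01 values from Tables 4.1 to 4.4; the moment approximations (4.36) to (4.39) are not reproduced.

```python
import numpy as np

# A minimal sketch of the variable transformation Z = L^-1 D^-1 X for the
# work package duration model (4.33), using the WP 01 correlations of
# Table 4.4.
sigma = np.array([12186.1, 1.25, 692.7])           # sd of Q, PL, L (WP 01)
R = np.array([[ 1.00, -0.48,  0.42],
              [-0.48,  1.00, -0.69],
              [ 0.42, -0.69,  1.00]])

D = np.diag(sigma)
L = np.linalg.cholesky(R)                          # R = L L'
print(np.allclose(L @ L.T, R))                     # True

x = np.array([38397.3, 9.0, 6833.2])               # a realization (the means)
z = np.linalg.solve(L, np.linalg.solve(D, x))      # Z = L^-1 D^-1 X
print(np.allclose(D @ L @ z, x))                   # True: X = D L Z
```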
4.5.2 The Standard Approach

The first four moments of the work package duration from the standard approach are derived by expanding equations (2.20) to (2.23) (see Appendix A for the general derivation). The terms involving $\mathrm{cov}(X_i, X_j)$ are those due to the linear correlations between the primary variables. The expected value of the work package duration is

$$E[T] \approx g(\bar{\mathbf{X}}) + \frac{1}{2}\sum_{i=1}^{3}\frac{\partial^2 g}{\partial X_i^2}\,\mu_2(X_i) + \sum_{i=1}^{3}\sum_{j=i+1}^{3}\frac{\partial^2 g}{\partial X_i\,\partial X_j}\,\mathrm{cov}(X_i, X_j) \qquad (4.40)$$

the second central moment is

$$\mu_2(T) \approx \sum_{i=1}^{3}\left(\frac{\partial g}{\partial X_i}\right)^2 \mu_2(X_i) + \sum_{i=1}^{3}\frac{\partial g}{\partial X_i}\frac{\partial^2 g}{\partial X_i^2}\,\mu_3(X_i) + \frac{1}{4}\sum_{i=1}^{3}\left(\frac{\partial^2 g}{\partial X_i^2}\right)^2\Big[\mu_4(X_i) - \mu_2^2(X_i)\Big] + 2\sum_{i=1}^{3}\sum_{j=i+1}^{3}\frac{\partial g}{\partial X_i}\frac{\partial g}{\partial X_j}\,\mathrm{cov}(X_i, X_j) - \sum_{i=1}^{3}\sum_{j=i+1}^{3}\left(\frac{\partial^2 g}{\partial X_i\,\partial X_j}\right)^2\big[\mathrm{cov}(X_i, X_j)\big]^2 \qquad (4.41)$$

the third central moment is

$$\mu_3(T) \approx \sum_{i=1}^{3}\left(\frac{\partial g}{\partial X_i}\right)^3 \mu_3(X_i) + \frac{3}{2}\sum_{i=1}^{3}\left(\frac{\partial g}{\partial X_i}\right)^2\frac{\partial^2 g}{\partial X_i^2}\Big[\mu_4(X_i) - \mu_2^2(X_i)\Big] - 6\sum_{i=1}^{3}\sum_{j=i+1}^{3}\frac{\partial g}{\partial X_i}\frac{\partial g}{\partial X_j}\frac{\partial^2 g}{\partial X_i\,\partial X_j}\big[\mathrm{cov}(X_i, X_j)\big]^2 \qquad (4.42)$$

and the fourth central moment is

$$\mu_4(T) \approx \sum_{i=1}^{3}\left(\frac{\partial g}{\partial X_i}\right)^4 \mu_4(X_i) \qquad (4.43)$$

where $X_1$ is Q, $X_2$ is $P_L$ and $X_3$ is L.

4.5.3 The Comparison

The moment analyses for both approaches consider terms up to the fourth order. While both methods treat the same correlations, the variable transformation method simplifies the approximations (compare the covariance terms in equations 4.40 to 4.42, which the transformation eliminates). The two approaches are compared for ten hypothetical work package durations. The values for the primary variables and the correlation coefficients used for the numerical study are given in Tables 4.1 to 4.4.

Table 4.1: Quantity Descriptors, Q (ft³)

W.P     E[Q]      σ_Q    √β1   β2
01    38397.3  12186.1   0.5   3.3
02    60555.0   8829.3   0.9   9.0
03    76850.0  24440.5   0.5   3.2
04    16185.0   3527.4   0.8   7.8
05    32429.2   7030.8   0.8   7.8
06    38397.3  12186.1   0.5   3.3
07    21998.0   2621.4   0.2   2.4
08    76850.0  24440.5   0.5   3.2
09    20413.0   5782.4   0.7   8.5
10    76850.0  24440.5   0.5   3.2

Table 4.2: Labour Productivity Rates, P_L (ft³/m.d)

W.P   E[P_L]   σ_PL   √β1   β2
01     9.0     1.25   0.0   5.6
02     9.0     1.25   0.0   5.6
03     9.0     1.25   0.0   5.6
04    10.1     2.28   0.1   2.2
05     8.4     1.28   0.1   8.8
06     9.0     1.25   0.0   5.6
07    10.1     2.28   0.1   2.2
08     9.0     1.25   0.0   5.6
09     9.9     2.22   0.9   9.0
10    10.2     2.23   0.8   8.0

Table 4.3: Labour Usage, L (m.d/year)

W.P     E[L]     σ_L    √β1   β2
01     6833.2   692.7   0.4   2.4
02    15185.0  1539.5   0.4   2.3
03    15185.0  1539.5   0.4   2.3
04     6074.0   615.8   0.4   2.4
05     7777.5  2339.8   1.1   5.7
06     9055.5   832.9   0.4   4.3
07     6074.0   615.8   0.4   2.4
08    15092.5  1388.1   0.4   4.3
09     3850.8   393.4   0.4   2.3
10    15092.5  1388.1   0.4   4.3
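A minimal sketch of the second-order mean of equation (4.40) for work package 01 follows, with means, standard deviations and correlations taken from Tables 4.1 to 4.4; the small differences from Table 4.5 (0.64294 uncorrelated, 0.64168 correlated) are attributable to the rounded inputs.

```python
# A minimal sketch of equation (4.40) for WP 01, T = Q / (PL * L).
mu = dict(Q=38397.3, P=9.0, L=6833.2)
sd = dict(Q=12186.1, P=1.25, L=692.7)
rho = {('Q', 'P'): -0.48, ('Q', 'L'): 0.42, ('P', 'L'): -0.69}

g = mu['Q'] / (mu['P'] * mu['L'])                    # first-order mean

# second derivatives of g = Q / (P * L) at the means; d2g/dQ2 = 0
d2 = {('P', 'P'): 2 * mu['Q'] / (mu['P']**3 * mu['L']),
      ('L', 'L'): 2 * mu['Q'] / (mu['P'] * mu['L']**3),
      ('Q', 'P'): -1 / (mu['P']**2 * mu['L']),
      ('Q', 'L'): -1 / (mu['P'] * mu['L']**2),
      ('P', 'L'):  mu['Q'] / (mu['P']**2 * mu['L']**2)}

mean_uncor = g + 0.5 * (d2[('P', 'P')] * sd['P']**2 +
                        d2[('L', 'L')] * sd['L']**2)
mean_cor = mean_uncor + sum(d2[p] * rho[p] * sd[p[0]] * sd[p[1]]
                            for p in rho)
print(round(mean_uncor, 5), round(mean_cor, 5))      # about 0.64282 0.64153
```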
Table 4.4: Condition Number (φ) and Correlation Coefficients

W.P    φ      ρ_QPL   ρ_QL   ρ_PLL
01    6.77   -0.48   0.42   -0.69
02    9.05   -0.55   0.62   -0.74
03    8.60   -0.53   0.56   -0.74
04    6.77   -0.48   0.42   -0.69
05    8.60   -0.53   0.56   -0.74
06    8.22   -0.48   0.62   -0.70
07    6.77   -0.48   0.42   -0.69
08    11.4   -0.53   0.56   -0.80
09    7.94   -0.48   0.62   -0.69
10    8.60   -0.53   0.56   -0.74

Table 4.5 shows the moments for the work package durations evaluated from the two approaches when the correlation matrices are stable. The time unit is years. When the primary variables are assumed to be uncorrelated, both methods give identical moments, indicating that they are comparable (see Table 4.5). When there is correlation between the primary variables, the expected values from both methods are identical. The second and third central moments from the variable transformation method are larger for all work packages. The fourth central moments from the standard approach are the same whether the variables are uncorrelated or highly correlated (see Table 4.6), because there are no covariance terms in equation (4.43). If the moment analysis considered only terms up to the third order (Bury, 1975; Siddall, 1972), there would be no covariance term in equation (4.42); the third central moments from the standard approach would then also be the same as when uncorrelated or highly correlated.

Table 4.5: First Four Moments of the Work Package Durations

           E[T]                      μ2(T)                     μ3(T)                     μ4(T)
WP   Uncor   Trans   Stdrd    Uncor   Trans   Stdrd    Uncor   Trans   Stdrd    Uncor   Trans   Stdrd
01  .64294  .64168  .64168   .05127  .05109  .04942   .00517  .00750  .00410   .00545  .00692  .00545
02  .45628  .45254  .45254   .01022  .00727  .00684   .00069  .00088  .00053   .00025  .00025  .00025
03  .57906  .57625  .57625   .04172  .03951  .03825   .00379  .00500  .00273   .00351  .00409  .00351
04  .28024  .27986  .27986   .00762  .00842  .00736   .00031  .00116  .00022   .00012  .00026  .00012
05  .55286  .52657  .52657   .03521  .01447  .00842   .01296  .01721  .01092   .00417  .00361  .00417
06  .48430  .48156  .48156   .02886  .02669  .02609   .00226  .00266  .00172   .00177  .00190  .00177
07  .38089  .37804  .37804   .00979  .00801  .00769   .00043  .00043  .00031   .00011  .00008  .00011
08  .58158  .57981  .57981   .04175  .04091  .03946   .00393  .00556  .00293   .00360  .00447  .00360
09  .56646  .56472  .56472   .03997  .04224  .03768   .01011  .01584  .00874   .00632  .00866  .00632
10  .52807  .53086  .53086   .03906  .04559  .04139   .00701  .00974  .00562   .00320  .00465  .00321

4.5.4 Transformation under Multicollinearity

Two numerical studies are done to demonstrate the behavior of the transformation in the presence of multicollinearity. The first compares the variable transformation and the standard method using the same correlation matrix for all the work package durations. The correlation matrix used is

$$\mathbf{R}_m = \begin{bmatrix} 1.0 & -0.999 & 0.999 \\ -0.999 & 1.0 & -0.999 \\ 0.999 & -0.999 & 1.0 \end{bmatrix}$$

which has a condition number φ equal to 2998.04.

Table 4.6: Moments of the Duration with an Unstable Correlation Matrix

          E[T]             μ2(T)              μ3(T)              μ4(T)
WP    Trans   Stdrd    Trans    Stdrd     Trans   Stdrd     Trans   Stdrd
01   .6417   .6417    .05392   .04845    .01217  .00051    .00814  .00545
02   .4525   .4525    .00885   .00669    .00244  .00026    .00045  .00025
03   .5779   .5779    .04386   .03945    .00874  .00037    .00524  .00351
04   .2814   .2814    .01304   .00798    .00413  -.00006   .00057  .00012
05   .5142   .5142    .00642   -.00505   .01732  .00751    .00336  .00417
06   .4854   .4854    .03258   .02921    .00585  .00039    .00293  .00177
07   .3780   .3780    .00929   .00754    .00154  .00005    .00022  .00011
08   .5829   .5829    .04711   .04227    .00995  .00069    .00596  .00360
09   .5727   .5727    .07155   .04508    .05648  .00519    .01994  .00632
10   .5381   .5381    .06099   .04761    .02216  .00245    .00845  .00320

Table 4.6 shows the moments from the two methods. Again, the expected values are identical, but some of the central moments differ. While some of the variances are comparable, others show considerable differences. The most startling observation is the negative variance for the fifth work package duration, indicating that in the presence of multicollinearity the standard approach can fail. When the moments from the variable transformation method in Tables 4.5 and 4.6 are compared, the expected values compare well and the central moments are reasonably close, considering that they are based on different correlation values.
This study indicates that the transformation is not too susceptible to instability in the correlation matrix for this example.

The intention of the second study is to see how susceptible the transformation is to the effect of multicollinearity. The "k value" concept discussed in section (4.4) is used to study the percentage change in the moments of a work package duration. If the percentage changes from the base value moments (i.e. at k = 0), for stable (small φ) and unstable (large φ) correlation matrices, are similar with increasing k values (i.e. as the matrices become more and more stable), then the transformation is not susceptible to instability in the correlation matrix.

Figures (4.3) to (4.6) show the absolute percentage changes in the first four moments from the base values for increasing k values. The percentage changes in the moments are similar as the condition number (φ) of the correlation matrices varies from 50 to 2998, indicating that the transformation is not susceptible to the instability (i.e. the effect of multicollinearity) in the correlation matrix for this study. However, it must be noted that in another situation it is possible for multicollinearity to affect the transformation. It is suggested that in practical applications of the variable transformation method (or the standard method) the condition number (φ) of the correlation matrix be checked for multicollinearity. If unstable correlation matrices have been elicited, they can be stabilized using a small k value at the discretion of the analyst. This check is equally valid for the treatment of correlations in Monte Carlo simulation.

4.6 Summary

The correlations between the primary variables and between the derived variables are addressed in this chapter.

The second section highlighted the often ignored requirement for the correlation matrix to be positive definite and developed a subjective elicitation method to obtain a positive definite correlation matrix for primary variables; a positive definite correlation matrix ensures the theoretical consistency of the multivariate system. The third section suggested a method to obtain a positive definite correlation matrix for derived variables when only the linear correlations between the primary variables are available. The theoretical developments in these two sections are the basis for the part of the computer program "ELICIT" (see Appendix D) that obtains the correlations between variables interactively. The fourth section highlighted the concept of multicollinearity and its possible effects on the variable transformation, and suggested a mathematical manipulation that can stabilize the correlation matrix for practical applications.

The final section, using the example of the work package duration, showed numerically that the variable transformation method is comparable to the standard approach in treating correlation between primary variables under general conditions (stable correlation matrices), and that the transformation simplifies the approximations for the first four moments and treats the linear correlations consistently. The other two numerical studies explored the behavior of the transformation in the presence of multicollinearity. The first showed that the standard approach can fail in the presence of multicollinearity while the variable transformation method was more stable.
The second study showed that the transformation was not susceptible to instabilities in the correlation matrix.

[Figure 4.3: Expected Values]
[Figure 4.4: Second Central Moment]
[Figure 4.5: Third Central Moment]
[Figure 4.6: Fourth Central Moment]
(Each figure plots the absolute percentage change in the moment against k values from 0.00 to 0.05, for correlation matrices with condition numbers φ from 50 to 2998.)

Chapter 5

Decomposition of a Derived Variable

5.1 General

In developing the risk measurement framework to quantify the uncertainty of a derived variable, it was assumed that a derived variable can be more accurately estimated from a set of primary variables that are functionally related to it than by direct estimation (assumption 2.3). This reflects the engineering penchant for seeking more detail as a way of seeking greater precision. For most derived variables in engineering construction, assumption (2.3) is reasonable.

However, this assumption becomes debatable when variables that are sometimes estimated holistically in the elicitation of subjective judgments (probabilities), e.g. duration or productivity, are decomposed. This chapter describes a study on the decomposition of such a derived variable. The duration of an activity is used as the example of a derived variable to compare holistic versus decomposed methods of estimation. The objective of this chapter is to make a small step towards exploring an issue that is largely ignored in the estimation literature, and to provide the motivation for a more extensive study on the elicitation of subjective probabilities for continuous random primary variables.

5.2 Decomposition

Ravinder et al. (1988) state that decomposition is often regarded as a useful technique for reducing the complexity of difficult judgment problems. They studied the application of decomposition to the elicitation of subjective probabilities for discrete events. A target event for which probability judgments were required was decomposed into background events in the form of conditional probabilities of the target event. Once the individual conditional distributions for the background events were elicited, the law of total probability was used for the aggregation. The probability of the target event, Pr(A), was defined as

$$\Pr(A) = \sum_{i=1}^{n} \Pr(A \mid B_i)\,\Pr(B_i) \qquad (5.1)$$

where the background events $B_1, \ldots, B_n$ form a mutually exclusive and exhaustive partition of the relevant event space (i.e. $\sum_{i=1}^{n}\Pr(B_i) = 1$).

They concluded that if the component probabilities can be assessed with no greater precision than a holistic assessment, decomposition reduces the random errors associated with probability encoding, but that as the number of events increases, error reduction occurs only up to a point (i.e. a limit for decomposition exists). While it is not possible to generalize their conclusions to the decomposition of a derived variable into a functionally related set of primary variables, they are used as guidance for this study.
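A minimal sketch of the aggregation in equation (5.1) follows; the numbers are illustrative, not values from the Ravinder et al. study.

```python
# A minimal sketch of equation (5.1): the law of total probability over a
# mutually exclusive and exhaustive partition of background events.
p_B = [0.2, 0.5, 0.3]            # Pr(B_i), must sum to 1
p_A_given_B = [0.9, 0.4, 0.1]    # elicited Pr(A | B_i)

assert abs(sum(p_B) - 1.0) < 1e-9
p_A = sum(pa * pb for pa, pb in zip(p_A_given_B, p_B))
print(p_A)                       # 0.41
```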
In the context of this research, the main reason for decomposing work package variables into their primary variables is to develop a link between the cost and time of the work package for economic analysis. It is incorrect to assume that cost is independent of time, because when a work package duration is either reduced (more resources) or increased (fewer resources), the net result is a change in the cost. The link between cost and time permits the use of net present value and internal rate of return as decision variables.

The second reason is the basis for assumption (2.3): to reduce the complexity of holistic estimation, because experts in construction (engineers) find it easier to quantify the decomposed primary variables. The same reasoning has been used by other authors for decomposing the activity duration into its primary variables (Jaafari, 1984; Hendrickson et al., 1987). This raises another question: would it not be more accurate to decompose the primary variables further?

The main disadvantage of decomposition is the loss of the mental awareness of interdependencies between primary variables that exists when estimating from a holistic approach. While it may be possible to relate the primary variables functionally to the derived variable, it is difficult to model all the interdependencies (Inyang, 1983). Secondly, even if accurate estimates for the primary variables are obtained, as the decomposition is continued a model that can link them to provide a reliable estimate of the derived variable may be lacking; as Ravinder et al. (1988) have shown, a definite limit exists for decomposition. Thirdly, unless decomposition improves the system significantly, it would be hard to convince an expert that decomposition is necessary for the elicitation of subjective probabilities. It was stated in chapter three that convincing experts of the relevance of the primary variables is essential to gain their full cooperation during the elicitation (Cooper and Chapman, 1987; Huber, 1974; Hull, 1980; Spetzler and Stael von Holstein, 1975).

The next four sections propose hypotheses, test statistics, an experiment and the analysis to study a derived variable that is sometimes estimated holistically. In addition, some of the beliefs that exist in engineering construction regarding decomposed versus holistic estimation of judgments are explored.

5.3 Hypotheses

Nine hypotheses are suggested to study decomposed versus holistic estimation of an activity duration. The first hypothesis concerns the precision of assessments for an activity duration from the two approaches. The other eight concern the assessed expected values and standard deviations for an activity duration, to compare holistic versus decomposed estimation.

Hypothesis 5.1 : The precision of assessments for an activity duration from holistic or decomposed estimation is similar.

The precision of assessments from the two approaches is measured using the coefficient of variation for the duration of an activity, a non-dimensional measure of variation.

H0 : E[Vi] = 0 ; H1 : E[Vi] ≠ 0

where Vi = V_Di − V_Wi, the difference between a pair of coefficients of variation of the assessed activity duration from decomposed (V_Di) and holistic (V_Wi) estimation.

Hypothesis 5.2 : When experts are asked to assess the expected value for the duration of an activity from the holistic approach, that assessment will be the true value.

H0 : E[T] = E[T_W] ; H1 : E[T] ≠ E[T_W]

Hypothesis 5.3 : When experts are asked to assess the standard deviation for the duration of an activity from the holistic approach, that assessment will be the true value.

H0 : σ_T = σ_TW ; H1 : σ_T ≠ σ_TW
Hypothesis 5.4 : When experts are asked to assess the expected value for the duration of an activity from the decomposed approach, that assessment will be the true value.

H0 : E[T] = E[T_D] ; H1 : E[T] ≠ E[T_D]

Hypothesis 5.5 : When experts are asked to assess the standard deviation for the duration of an activity from the decomposed approach, that assessment will be the true value.

H0 : σ_T = σ_TD ; H1 : σ_T ≠ σ_TD

Hypothesis 5.6 : When experts are asked to assess the expected value for the duration of an activity from the holistic approach, that assessment will be an underestimation of the true value.

H0 : E[T] = E[T_W] ; H1 : E[T] > E[T_W]

Hypothesis 5.7 : When experts are asked to assess the standard deviation for the duration of an activity from the holistic approach, that assessment will be an underestimation of the true value.

H0 : σ_T = σ_TW ; H1 : σ_T > σ_TW
Then the test statistic for the difference between paired coefficients of variation of the assessed activity duration is (Devore, 1982), V ^paired Sv/y/n (5.2) where V and Sv are sample mean and standard deviation, respectively for V{'s and n is the sample size. The rejection regions for level a tests are (see figure 5.1), Hypothesis Rejection Region 5.1 tpaired > * f , n - l °r ^ p a i r e d < — £ | ) T l _ i Chapter 5. Decomposition of a Derived Variable 109 Test statistic for the assessed expected value for activity duration is (Devore, 1982) f - E [To] t = 5 / V n (5.3) where T and S are the sample mean and standard deviation, E [T0] is either the assessed E [TV] or E [Try]. The rejection regions for level a tests are (see figures 5.1 and 5.2), Hypothesis Rejection Region 5.2 and 5.4 * > * f , n - i or t < - * ! , „ - ! 5.6 and 5.8 t > £ a , n - l Test statistic for the assessed standard deviation for activity duration is (Devore, 1982), *> = ^ («.4) where S is the sample standard deviation and cry0 is the assessed O~TW or <TTD • The rejection regions for level ct tests are (see figures 5.3 and 5.4), Hypothesis Rejection Region 5.3 and 5.5 X2 > x|,„_i or x2 < X i - f . n - 1 5.7 and 5.9 X — Xa,n — 1 Chapter 5. Decomposition of a Derived Variable 110 Figure 5.1: t Distribution for Two Tailed Test Figure 5.2: t Distribution for Upper Tailed Test Chapter 5. Decomposition of a Derived Variable 0.06 i f ( x ; v ) 0.05 0.04 h 0.03 0.02 0.01 Rejection Regio Rejection egion 10 20 30 40 50 60 Figure 5.3: x2 Distribution for Two Tailed Test 0.06 i f ( x ; v ) Figure 5.4: x2 Distribution for Upper Tailed Test Chapter 5. Decomposition of a Derived Variable 112 5.5 Experiment 5.5.1 The Activity The activity to obtain a sample of durations to test the hypotheses should: permit the assessment of duration from hohstic and decomposed subjective estimation; permit the measurement of actual duration; be repetitive; utihze the expertise of construction engineers - read, interpret and visualize construction drawings. While an activity such as the repetitive construction of a column, a beam or a footing is ideal, the inherent difficulties of field experiments such as: free access to a construction site; measurement of actual duration; and time constraints; makes the selection infeasible. Instead, the assembly of a LEGOLAND wheel loader (model #6658) was selected as the activity for the experiment. In addition to satisfying the requirements of an activity for the experiment, the LEGOLAND model permitted the experiment to be conducted in a laboratory setting. 5.5.2 Procedure First, the objectives of the experiment were explained to the participants. This expla-nation was based on the procedure of pre-elicitation discussed in section (3.3). There-after, using two questionnaires the desired subjective percentile values for the activity duration were ehcited from all the participants. The first questionnaire ehcited the duration in minutes to assemble the complete model in accordance with the drawings (i.e hohstic estimation). The second ehcited the duration in seconds to identify and attach one component to the model in accordance with the drawings (i.e decomposed estimation). Finally, the actual duration to assemble the model by each participant was measured (see Table 5.1). Chapter 5. Decomposition of a Derived Variabie 113 The participants were graduate students and final year undergraduates in civil engineering who had followed courses in engineering economics and risk analysis. 
The subjective elicitation based on the drawings for the LEGOLAND model and its assembly in accordance with drawings utilized their expertise as civil engineers. Each participant is considered as an independent source for hypotheses testing. That is, evaluated expected values, standard deviations and coefficients of variation from the two approaches for each participant are the basis for a set of hypotheses. The measured actual durations constitute the sample to obtain statistics to test each set of hypotheses. 5.6 Analysis The expected values, standard deviations and coefficients of variation for duration (holistic - duration to assemble the complete model; decomposed - duration to iden-tify and attach one component to the model) are evaluated from equations (2.5) to (2.11) using the elicited subjective percentile values. However, for hypotheses on de-composed estimation, the moments for duration to assemble the complete model have to be evaluated. 5.6.1 Moments from Decomposition For a LEGOLAND model consisting of I components, the duration to assemble the complete model is, TD = £ t (5.5) i = l where t is the duration to identify and attach one component to the model. It is assumed that t is identical for all the components. Chapter 5. Decomposition of a Derived Variable 114 The expected value for duration to assemble the complete model from decomposed estimation is, E[TD] = lE[t] (5.6) where E[t] is the evaluated expected value for duration to identify and attach one component to the model and assumed to be identical for all the components. Assumption 5.2 : The estimated coefficients of variation for duration for an ac-tivity from hohstic and decomposed estimation are similar (i.e VD = Vw = V ) . Assumption (5.2) is tested by hypothesis (5.1). Therefore, from the definition for coefficient of variation, crTw _ aTD _ at = y ^ E[TW] ~ E[TD] E[t] The standard deviation for duration to assemble the complete model depends on the assumption regarding the correlation between duration to assemble individual components. The variance for duration to assemble the complete model is, MI(TJD) = £ £ covltittj) (5.8) i=i j=i From definition, P2(TD) ^ 0. Hence, p2(TD) = la2 + pl(l-l)a2 > 0 (5.9) where o~t is the evaluated standard deviation for duration to identify and attach one Chapter 5. Decomposition of a Derived Variable 115 component to the model and p is the correlation coefficient between two component durations. Rewriting equation (5.9), p2(TD) = a2t [l + p(l2-l)} > 0 (5.10) Since a\ > 0, for P2(TD) to exist I + p(l2-l)} > 0 (5.11) Therefore, Hence, hm p = 0. From definition p < 1. I—too A t the extremes, component durations are either uncorrelated or perfect positive correlated. When the component durations are assumed to be uncorrelated, the variance for duration to assemble the complete model from equation (5.9) is, p2{TD) = I af (5.13) Hence, the standard deviation is, aTD = Vlat (5.14) When the component durations are assumed to be perfect positive correlated, the variance for duration to assemble the complete model from equation (5.9) is, p2(TD) = I2 a2 (5.15) Chapter 5. Decomposition of a Derived Variable 116 Hence, the standard deviation is, °~Tr> = 1 ° t (5.16) where crt is the evaluated standard deviation for duration to identify and attach one component to the model and assumed to be identical for-all the components. 
Comparing equations (5.6), (5.7), (5.14) and (5.16) it is evident that relationship given by equation (5.16) evaluates the standard deviation for duration to assemble the complete model from decomposed estimation. 5.6.2 Experimental Results The actual duration to assemble the LEGOLAND model and the expected values, standard deviations and coefficients of variation for duration from holistic and de-composed estimation are given in Table 5.1. All of the participant are given an identification The subjective estimates of participant # 15 were rejected. The sample mean and standard deviation for the sample of actual measured dura-tions to assemble the LEGOLAND model in minutes are, f — 15.95 and S = 7.99. The sample mean and standard deviation for K's, the difference between paired co-efficients of variation are, V = 0.0518 and Sv = 0.1714. The test statistic ipai P ed for 27 participants from equation (5.2) is equal to 1.5698. The t values for assessed expected values and %2 values for assessed standard deviations for holistic and decomposed estimation evaluated from equations (5.3) and (5.4) are given in Table 5.2. Table 5.1, Table 5.2, sample means and standard deviations are obtained from a computer program called "LEGO". Chapter 5. Decomposition of a Derived Variable 117 Table 5.1: Actual and Estimated Statistics for the Activity Duration (minutes) # Actual Duration Hohstic Estimation Decom D o s e d Estimation E[TW) °~TW Vw E[TD] VD 01 10.0 12.00 4.02 0.335 15.01 6.32 0.421 02 13.0 3.18 1.41 0.442 10.67 3.75 0.352 03 16.0 28.70 13.94 0.485 24.29 15.84 0.652 04 16.0 15.55 6.33 0.407 24.49 15.65 0.639 05 24.0 17.44 7.03 0.403 5.29 3.03 0.572 06 18.0 30.00 12.56 0.418 16.00 7.50 0.469 07 21.0 11.81 5.91 0.500 11.26 5.57 0.495 08 7.75 5.37 2.57 0.478 6.99 3.58 0.512 09 16.83 29.07 8.89 0.306 52.43 34.00 0.636 10 13.75 20.92 8.89 0.425 11.54 4.03 0.349 11 25.5 6.81 2.53 0.371 15.80 5.37 0.340 12 13.42 6.28 2.05 0.326 5.23 0.95 0.181 13 47.0 30.00 10.05 0.335 16.00 8.43 0.527 14 24.5 14.18 3.96 0.279 10.47 2.96 0.283 15 9.25 - - - - - -16 9.5 10.18 1.78 0.175 39.86 1.96 0.049 17 18.92 10.00 3.77 0.377 10.86 7.91 0.728 18 16.5 25.00 4.27 0.171 32.99 5.54 0.168 19 9.92 19.07 9.56 0.502 5.48 0.96 0.175 20 10.05 10.55 3.12 0.295 6.30 0.68 0.109 21 10.82 3.00 0.75 0.251 4.27 1.07 0.251 22 21.3 30.00 7.54 0.251 32.00 8.04 0.251 23 9.08 9.18 3.53 0.384 7.57 3.75 0.496 24 16.88 18.52 6.14 0.332 9.88 2.90 0.293 25 17.0 5.99 3.65 0.610 13.13 8.31 0.632 26 11.9 12.74 3.19 0.250 26.86 18.01 0.671 27 10.78 10.00 3.02 0.302 5.33 1.61 0.302 28 8.09 9.63 2.81 0.292 3.40 2.03 0.597 Chapter 5. 
Table 5.2: Test Statistics for Expected Values and Standard Deviations

  #    Holistic Estimation      Decomposed Estimation
       t          χ²            t           χ²
  01    2.6204     106.55         0.6238      43.11
  02    8.4610     870.43         3.5038     122.32
  03   -8.4479       8.86        -5.5249       6.87
  04    0.2649      42.97        -5.6556       7.03
  05   -0.9873      34.85         7.0641     187.63
  06   -9.3060      10.91        -0.0299      30.58
  07    2.7430      49.29         3.1116      55.55
  08    7.0133     261.42         5.9386     134.12
  09   -8.6931      21.78       -24.8280       1.49
  10   -3.2931      21.78         2.9278     106.04
  11    6.0559     269.82         0.1008      59.77
  12    6.4120     410.52         7.1030    1914.45
  13   -9.3060      17.05        -0.0299      24.23
  14    1.1727     109.87         3.6346     196.36
  16    3.8230     544.55       -15.8399     449.77
  17    3.9456     121.24         3.3731      27.51
  18   -5.9931      94.39       -11.2849      56.14
  19   -2.0640      18.83         6.9395    1864.32
  20    3.5778     177.44         6.3962    3676.43
  21    8.5836    3030.90         7.7443    1498.43
  22   -9.3060      30.31       -10.6311      26.64
  23    4.4856     138.40         5.5587     122.15
  24   -1.6996      45.67         4.0268     205.07
  25    6.6033     129.07         1.8695      24.96
  26    2.1301     169.33        -7.2246       5.31
  27    3.9456     189.43         7.0376     665.97
  28    4.1907     217.61         8.3203     418.09

5.6.3 Hypotheses Testing

All of the hypotheses are tested at the 95% confidence level (α = 0.05). Then, t_{α/2,n−1} = 2.052, t_{α,n−1} = 1.703, χ²_{α/2,n−1} = 43.194, χ²_{1−α/2,n−1} = 14.573, and χ²_{α,n−1} = 40.113 for n = 28. Since t_paired is within the acceptance region, hypothesis (5.1) is accepted at the 95% confidence level. Hence, the assumption that coefficients of variation for activity duration from holistic and decomposed estimation are similar is verified.

The results of the significance tests for hypotheses (5.2) to (5.9), together with the total number and percentage of times an individual hypothesis is rejected at the 95% confidence level, are given in Table 5.3. 'R' indicates that the hypothesis is rejected, while 'NR' indicates that it is not rejected at the 95% confidence level. The significance tests show high rejection rates and similar percentage values for both methods of estimation.

The classical approaches for hypotheses testing and the generally accepted confidence levels used in this study may not be the most suitable for testing the human ability to predict future events, because of the high variability in predictions from individual to individual. While broader confidence levels reduce the rejection rates, they may not be acceptable from a statistical viewpoint. This highlights the inherent difficulties in developing experiments to measure the human ability to predict future events.

Similar rejection percentages for individual hypotheses confirm the view that neither is the "better" method. Those for hypotheses (5.6) to (5.9) contradict the traditional belief that holistic estimation underestimates duration more regularly than decomposed estimation. If decomposition is not critical to the decision problem - when only work package and project duration estimates are desired - the approach preferred by the analyst and experts can be used for subjective elicitation. However, if decomposition is important to the decision problem (Ravinder et al., 1988), decomposed estimation alone can be used with confidence that the precision of the assessments is similar to that from holistic estimation and that it can reduce random errors (Ravinder et al., 1988).
Table 5.3: Significance Tests for Hypotheses (5.2) to (5.9) at the 95% Confidence Level

  #      5.2     5.3     5.4     5.5     5.6     5.7     5.8     5.9
  01     R       R       NR      NR      R       R       NR      R
  02     R       R       R       R       R       R       R       R
  03     R       R       R       R       NR      NR      NR      NR
  04     NR      NR      R       R       NR      R       NR      NR
  05     NR      NR      R       R       NR      NR      R       R
  06     R       R       NR      NR      NR      NR      NR      NR
  07     R       R       R       R       R       R       R       R
  08     R       R       R       R       R       R       R       R
  09     R       NR      R       R       NR      NR      NR      NR
  10     R       NR      R       R       NR      NR      R       R
  11     R       R       NR      R       R       R       NR      R
  12     R       R       R       R       R       R       R       R
  13     R       NR      NR      NR      NR      NR      NR      NR
  14     NR      R       R       R       NR      R       R       R
  16     R       R       R       R       R       R       NR      R
  17     R       R       R       NR      R       R       R       NR
  18     R       R       R       R       NR      R       NR      R
  19     R       NR      R       R       NR      NR      R       R
  20     R       R       R       R       R       R       R       R
  21     R       R       R       R       R       R       R       R
  22     R       NR      R       NR      NR      NR      NR      NR
  23     R       R       R       R       R       R       R       R
  24     R       R       R       R       R       R       R       R
  25     R       R       NR      NR      R       R       R       NR
  26     R       R       R       R       R       R       NR      NR
  27     R       R       R       R       R       R       R       R
  28     R       R       R       R       R       R       R       R
  Total  24R     20R     22R     21R     16R     19R     16R     18R
         88.89%  74.07%  81.48%  77.77%  59.26%  70.37%  59.26%  66.66%

5.7 Summary

This chapter described a study on the decomposition of a derived variable that is sometimes estimated holistically. The study consists of a set of hypotheses, test statistics, an experiment and an analysis to compare holistic versus decomposed methods for estimating duration when experts participate in subjective elicitation.

While the classical approaches and confidence levels used in this study may be too restrictive for testing the human ability to predict future events, they provide a statistically accepted framework. The interpretation of the results as percentage rejection rates reduces some of the restrictions. The first hypothesis verified the assumption that coefficients of variation in subjective assessments for duration from holistic and decomposed estimation are similar. The next four support the view that neither is the "better" estimation approach for eliciting subjective assessments of duration. The last four, while contradicting the traditional belief in construction about holistic estimation of duration, confirm the view regarding the "better" approach. It must be stressed that these are observations based on this study, and in no way can they be generalized to the decomposition of derived variables that are sometimes estimated holistically.

The recognition that some of the implicit assumptions and beliefs in engineering construction (assumption 2.3; holistic estimation underestimates duration more regularly) should be explored when dealing with the human ability to predict future events, together with the inherent difficulties in developing experiments and methods to test such beliefs, are some of the benefits of this study. It is recommended that this topic be explored further.

Chapter 6

The Analytical Method

6.1 General

The generation of economic benefits is one of the fundamental objectives of an investment in a project. Hence, the initial decision to invest is governed by the ability of the project to generate a return that would justify the investment. Figure (6.1) shows the generalized cash flow diagram for an engineering project. However, a more simplified cash flow diagram, as shown in figure (6.2), is used for the development of the analytical method. In this simplified scenario, the expenditure for design and construction comes from a combination of equity and interim financing. It is assumed that repayment of interim financing is due at the end of the construction period. No attempt is made to include permanent financing in the analytical method because of the numerous financing alternatives available in the market.

[Figure 6.1: Generalized Cash Flow Diagram for an Engineering Project - current dollar expenditure, interim and permanent financing, operating revenue, repayment of interim financing, amortization of permanent financing, discharge of loan balance and salvage value, plotted against time in years]

[Figure 6.2: Cash Flow Diagram for the Analytical Method - current dollar expenditure, interim financing due, operating revenue and salvage value]
The analytical method described herein is developed by applying the framework for quantifying the uncertainty of a derived variable to the three levels of the project economic structure, as shown in figure (6.3). At the work package/revenue stream level the derived variables are work package duration, start time, cost and net revenue streams. The primary variables at the work package/revenue stream level are those variables in the functions specified by the analyst. At the project performance level the derived variables are the project duration, cost and revenue, while the primary variables are the derived variables at the work package/revenue stream level. At the project decision level the derived variables are the project net present value and internal rate of return, while the primary variables are the discounted project cost and revenue. This application combines all of the developments and studies from chapters two to five. For generality, the analytical method treats cost and revenue as continuous cash flows under continuous discounting (Tanchoco et al., 1981; Buck, 1989).

6.2 Work Package/Revenue Stream Level

The work package/revenue stream is the first level of application. At this level, the framework is applied as developed, permitting the analyst to use general functional forms for work package durations, costs and revenue streams.

6.2.1 Work Package Duration

Work package duration can be estimated directly as a holistic value, or derived using a functional relationship which treats work scope, anticipated job conditions, likely construction methods, productivity and resource levels, or a sub-network of activities.

When the estimation is holistic, the analyst/experts provide percentile values for their subjective prior probability distributions and the correlation matrix for work package durations. The first four moments for a work package duration are evaluated from the method described in section (2.3.2) using these percentile values.

When the estimation is decomposed, the analyst must specify the functional forms for work package durations. The analyst/experts provide percentile values for their subjective prior probability distributions and the correlation coefficients for primary variables in the functions for work package durations, and identify common (shared) primary variables among the functions.
[Figure 6.3: Flowchart for the Analytical Method - analyst/expert input (precedence relations among work packages and revenue streams; functions for work package duration, cost and revenue streams; subjective estimates for percentiles of primary variables and correlation matrices; and shared variables in the functions for work package durations, costs and revenue streams) feeds the work package/revenue stream level, which yields the first four moments for work package and revenue stream start times, work package durations, costs and net revenue streams; the project performance level yields the first four moments for project duration, cost and revenue; and the project decision level yields the first four moments for project net present value and the cumulative distribution function for project internal rate of return]

The correlation matrix for work package durations is evaluated from this information. The specified function for a work package duration is treated as g(X) in equation (2.19). The first four moments for a work package duration are evaluated from equations (2.36) to (2.39), using the elicited positive definite correlation matrix for the variable transformation.

6.2.2 Work Package Start Time

Since start time positions the work package with respect to time, it is the variable that links time and cost. Consequently, it is important to have accurate estimates of the moments for start times. In most analytical methods, the start time of a work package is determined by the longest path to that work package. Let T_i^S be the start time of the ith work package, and let T_h^S be the start time and T_h the duration of the preceding hth work package. The start time of the ith work package from the longest path is defined as,

T_i^S = max_h [T_h^S + T_h]   (6.1)

where max_h implies that the maximization is over all the links "h to i" terminating at the ith work package. While equation (6.1) gives the maximum expected value for the ith work package start time, it does not necessarily evaluate the maximum uncertainty, because it ignores shorter but more uncertain (higher variance or skewed) paths. This is the main drawback in using the longest path approach in stochastic network analysis. In theory, an accurate estimate of the moments for start times would involve the analysis of all paths leading to the work packages.

Ang et al. (1975) proposed an analytical technique called the "Probabilistic Network Evaluation Technique (PNET)" to evaluate the completion time probability of project duration by considering multiple paths to complete the project. Since project duration is the start time of the finish work package of the precedence network, the developments in PNET can be generalized to the work package start time.

PNET

For a project network with a specified number of activities and a set of n possible paths from the start node to the end node, Ang et al. (1975) state that the probability of completing the project in time t, denoted p(t), is,

p(t) = 1 − [P(T₁ > t) + P(T₁ ≤ t, T₂ > t) + ... + P(T₁ ≤ t, T₂ ≤ t, ..., T_{n−1} ≤ t, T_n > t)]   (6.2)

where T₁, T₂, ..., T_n are the durations of the respective n paths.
The bounds on the completion time probability p(t) are (Ang et al., 1975),

Π_{i=1}^{n} P(T_i ≤ t) ≤ p(t) ≤ min_i P(T_i ≤ t)   (6.3)

When all the paths are assumed to be statistically independent (i.e., all possible paths to a node or work package are used to evaluate p(t)), the value for p(t) is the most pessimistic (lower bound), and when all the paths are assumed to be perfectly correlated (so that one path is representative of all paths), the value for p(t) is the most optimistic (upper bound). The lower bound of p(t) is the upper bound for duration (see figure 6.5).

When all the paths are perfectly correlated, duration is represented by the longest path. The longest path duration always gives an optimistic estimate for the completion time probability (Ang et al., 1975). In other words, the longest path always yields the most optimistic mean duration for work package start time. Since work package cost and revenue stream calculations are linked to start time, longest path based analytical solutions do not adequately estimate the statistics of the derived variables. This is the rationale for a "better" solution from Monte Carlo simulation. On the other hand, if the work package start times are based on the lower bound of p(t), the result is the most pessimistic mean duration. Therefore, when an alternative is evaluated at the bounds of equation (6.3), the resulting solutions are the bounds for the derived variables in the project economic structure.

The start times on which the derived variables should be estimated can be obtained from equation (6.2) if the joint probabilities between the path durations are evaluated. However, the evaluation of joint probabilities for equation (6.2) is complex (Ang et al., 1975). Instead, PNET works around this problem by considering all major paths for estimating p(t) while avoiding the evaluation of joint probabilities.

PNET assumes that the activity durations are statistically independent. Also, it is limited to treating finish to start = 0 relationships between activities in evaluating the expected values and variances of individual path durations. Although individual activities are considered to be statistically independent, two different paths are considered to be correlated as a result of common activities. The correlation between two paths i and j having m common activities is defined as (Ang et al., 1975),

ρ_ij = (Σ_{k=1}^{m} σ²_ijk) / (σ_i σ_j)   (6.4)

where σ²_ijk is the variance of the kth common activity on paths i and j, σ_i and σ_j are the standard deviations for duration of paths i and j, and ρ_ij is the correlation coefficient between paths i and j.

An approximation for computing p(t) was derived by Ang et al. (1975) from the following observations: (1) paths with long mean durations and high coefficients of variation have the greatest impact on p(t) (defined as major paths); (2) if several paths are each highly correlated with a major path, then those paths can be represented by that major path (upper bound of p(t)); (3) if representative paths have low correlations, p(t) can be approximated by the product of the respective path probabilities (lower bound). Consequently, PNET approximates the project completion time probability p(t) by,

p(t) ≈ P(T₁ ≤ t) P(T₂ ≤ t) ... P(T_r ≤ t)   (6.5)

where P(T₁ ≤ t), P(T₂ ≤ t), ..., P(T_r ≤ t) are the probabilities of each representative path completing the project in time t, for r representative paths.

Those paths with ρ_ij > ρ are represented by path i (the longer path, because it has a lower p(t)), from the assumption that ρ represents the transition between high and low correlation. When ρ = 1, the estimate for p(t) is the lowest (upper bound on duration), whereas when ρ = 0, p(t) is the highest (lower bound on duration). If all the major paths are correlated with the longest path, PNET reduces to PERT. In applying PNET, Ang et al. (1975) estimate p(t) from equation (6.5) using a transitional correlation value of ρ = 0.5 and assuming the representative path durations to be normally distributed.
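A minimal sketch of the PNET approximation follows, assuming, as Ang et al. (1975) do, normally distributed path durations and a transitional correlation of 0.5. The three-path network and its statistics are hypothetical, and the sketch is illustrative only.

```python
# A sketch of the PNET approximation (equations 6.4 and 6.5). Paths are given
# in order of decreasing mean duration; rho[i][j] is assumed to have been
# computed from the common-activity variances of equation (6.4).
from math import erf, sqrt

def normal_cdf(x, mean, std):
    return 0.5 * (1.0 + erf((x - mean) / (std * sqrt(2.0))))

def pnet_completion_prob(t, means, stds, rho, rho_trans=0.5):
    """Approximate p(t) by the product over representative paths."""
    n = len(means)
    representatives, represented = [], [False] * n
    for i in range(n):                    # paths already ordered: long means first
        if represented[i]:
            continue
        representatives.append(i)
        for j in range(i + 1, n):         # paths highly correlated with path i
            if rho[i][j] > rho_trans:     # are represented by path i
                represented[j] = True
    p = 1.0
    for i in representatives:             # equation (6.5)
        p *= normal_cdf(t, means[i], stds[i])
    return p

# Hypothetical three-path network: path 2 is highly correlated with path 1.
means, stds = [61.0, 58.0, 57.0], [5.0, 4.5, 9.0]
rho = [[1.0, 0.7, 0.3], [0.7, 1.0, 0.2], [0.3, 0.2, 1.0]]
print(f"p(65) = {pnet_completion_prob(65.0, means, stds, rho):.3f}")
```

Setting rho_trans = 1 reproduces the independent-path lower bound of equation (6.3), while rho_trans = 0 leaves only the longest path, the upper bound.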
Some of the shortcomings of PNET are: (1) by assuming the individual activities (or work packages) to be statistically independent, it ignores the correlation brought about by the use of shared resources such as manpower, equipment, management, etc.; (2) p(t) is dependent upon the level of interdependence between the various paths, i.e., the selection of the most suitable transitional correlation ρ (Crandall, 1977); (3) a representative path duration may not be normally distributed if a few skewed work package durations dominate the path to a work package, or if the work packages appear early in the network.

Modified PNET

The PNET algorithm developed by Ang et al. (1975) is modified to overcome some of these shortcomings in applying it to work package start time. The modifications are: (1) include the linear correlations between work package durations in evaluating the first four moments of path durations; (2) include the shape characteristics (skewness and kurtosis) of representative paths in evaluating the first four moments of the work package start time.

The modified PNET approach to compute the first four moments of a work package start time is as follows. First, the first four moments for the duration of each path to a work package are evaluated using equations (6.11) to (6.14), thereby including the linear correlations between work package durations. To facilitate the treatment of correlations between work package durations on a path, only finish to start = 0 relationships are permitted. Then, all of the individual paths are ordered sequentially in order of decreasing mean path duration. Second, representative paths to a work package are identified as in PNET. As in PNET, the transitional correlation ρ must be specified by the analyst. Third, the first four moments of the representative path durations are used to approximate cumulative distribution functions from the Pearson family of distributions. This ensures that the shape characteristics of a representative path are not ignored. However, as discussed in section (2.4.5), it may not always be possible to approximate a Pearson type distribution. In such a situation, modified PNET defaults to PNET. Fourth, the cumulative distribution function for the start time of a work package is developed by evaluating p(t) from equation (6.5) for a range of durations. The starting duration for the distribution range is obtained from,

t_start = E[T_i]_max − 3 σ_i_max   (6.6)

where E[T_i]_max is the largest expected value of all the path durations (i.e., the expected value of the longest path) and σ_i_max is the largest standard deviation of all the path durations. If p(t) > 0 for the starting duration, then t_start in equation (6.6) is reduced until the starting p(t) = 0. The duration range for the cumulative distribution function is complete when p(t) = 1 is obtained.
Finally, given the tableau of values for p(t) versus t, the first four moments for work package start times are evaluated as in section (2.3.2). In the author's experience, the developed cumulative distribution functions for start times have always approximated to Pearson type distributions. However, the default is the PNET algorithm.

The improvements to the work package start time from applying modified PNET instead of PNET are: (1) since the work package start time is always a primary variable in the functional form for work package cost, the treatment of correlation at two levels - between work package durations on an individual path, and between paths due to common work packages - makes the evaluation of the first four moments for work package start times, costs and their bounds more precise; (2) considering the skewness and kurtosis of the individual paths makes the first four moments for the start time of work packages at the beginning of a project more precise, because the number of predecessor work packages on an individual path is too small to invoke the central limit theorem. When there are sufficient predecessor work packages on a path to invoke the central limit theorem (as done by PNET), the approximation of the path durations to the Pearson family of distributions will reflect it, because the normal distribution is a member of the Pearson family.

The drawbacks of modified PNET are: (1) it is also dependent upon the level of interdependence between the various paths (Crandall, 1977); however, the estimation of upper and lower bounds provides a sensitivity analysis on the transitional correlation specified by the analyst; (2) it allows only single (finish to start = 0) logic relationships to sequence work packages. The ability to sequence work packages in overlapping and/or compound relationships would enhance the practicality of the application. However, the treatment of correlation between work packages in overlapping and/or compound relationships on a path or between paths is still theoretically complex. Harris (1978) has shown that overlapping relationships can be transformed to single relationships (finish to start = 0) using one or two additional work packages, and compound relationships using two additional work packages with the time-discontinuous assumption.

6.2.3 Work Package Cost

The estimate for the expenditure to design and construct a work package is defined as the work package cost. External economic variables have a strong influence on the work package cost estimate. Escalation, primarily due to inflation, and interest payments for the construction loan (interim financing) are a significant portion of the cost estimate. In estimating the escalation during construction in work package cost, the analytical method allows different rates for different categories of cost. To simplify the derivation, it is assumed that the inflation rates and the interest rate for financing of work package cost are constant over the construction period. However, if necessary, both of these quantities can be expressed as functions of time. The generalized discounted work package cost is represented by (see figure 6.4),

WP_Ci = f e^{(θ_Ci − y) T_SCi} ∫₀^{T_Ci} C_0i(τ) e^{(θ_Ci − y)τ} dτ + (1 − f) e^{(r − y) T_P} e^{(θ_Ci − r) T_SCi} ∫₀^{T_Ci} C_0i(τ) e^{(θ_Ci − r)τ} dτ   (6.7)
where WP_Ci is the discounted ith work package cost, C_0i(τ) is the function for the constant dollar cash flow of the ith work package, T_SCi and T_Ci are the work package start time and duration, T_P is the time at which the repayment of interim financing is due for all work packages, f is the equity fraction, and θ_Ci, r and y are the inflation, interest and discount rates respectively. The time τ is measured from the start of the ith work package. C_0i(τ) can be either holistic or a decomposed function of work scope, resources applied, and productivity.

[Figure 6.4: Generalized Discounted Work Package Cost]

The estimation of discounted work package cost is always decomposed. The analyst specifies the functional form C_0i(τ) for equation (6.7). The analyst/experts provide percentile values for their subjective prior probability distributions and the correlation coefficients for primary variables in the functions for discounted work package costs, and identify common primary variables among the functions. The correlation matrix for work package costs is evaluated from this information. The system function g(X) used to approximate the first four moments for a discounted work package cost is equation (6.7). The first four moments for a discounted work package cost are evaluated from equations (2.36) to (2.39), using the elicited positive definite correlation matrix for the variable transformation. The bounds for work package costs are obtained when the transitional correlation ρ = 1 and ρ = 0.
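The evaluation of equation (6.7) at fixed (for example, mean) values of the primary variables can be sketched numerically as follows. This is illustrative only: in the analytical method the expression serves as g(X) for the moment approximations, whereas here it is simply integrated by the composite trapezoidal rule, and all input values are hypothetical.

```python
# A sketch of equation (6.7) for a constant-dollar cash flow C0(tau) supplied
# as a Python function; tau is measured from the work package start T_S.
from math import exp

def discounted_wp_cost(C0, T_S, T_C, T_P, f, theta, r, y, steps=1000):
    """Discounted work package cost per equation (6.7)."""
    def trapz(g, a, b, n):
        h = (b - a) / n
        return h * (0.5 * g(a) + sum(g(a + k * h) for k in range(1, n)) + 0.5 * g(b))
    equity = f * exp((theta - y) * T_S) * trapz(
        lambda tau: C0(tau) * exp((theta - y) * tau), 0.0, T_C, steps)
    financed = (1 - f) * exp((r - y) * T_P) * exp((theta - r) * T_S) * trapz(
        lambda tau: C0(tau) * exp((theta - r) * tau), 0.0, T_C, steps)
    return equity + financed

# Hypothetical inputs: a uniform $2M/yr flow for 1.5 yr starting at T_S = 0.5 yr,
# repayment due at T_P = 2 yr, 30% equity, 6% inflation, 12% interest, 10% discount.
wp = discounted_wp_cost(lambda tau: 2.0, 0.5, 1.5, 2.0, 0.30, 0.06, 0.12, 0.10)
print(f"discounted work package cost = {wp:.3f} $M")
```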
6.2.4 Net Revenue Stream

The possibility of generating a number of revenue streams at different points in time is typical of large engineering projects. Therefore, the ability to study the economic effects of projected revenue with respect to time is essential. The start time of a revenue stream is its link to the precedence network describing the development and operation phases. The analyst must specify the work package, and the fraction of that work package duration, after which the revenue stream is projected to begin. The start time of the revenue stream is then evaluated from network analysis. To link revenue streams beginning after construction, the operation period is specified as the duration for the finish work package. The duration of an individual revenue stream is a primary variable of the function for discounted net revenue.

The net revenue stream is defined as the difference between gross revenue and its operation and maintenance cost. Both the gross revenue and the operation and maintenance cost are inflated with different rates, and revenues are assumed to inflate once operation starts. The discounted net revenue stream is represented by,

NRS_i = ∫_{T_SRi}^{T_SRi + T_Ri} [R_0i(t) e^{θ_Ri (t − T_SRi)} − M_0i(t) e^{θ_Mi t}] e^{−yt} dt   (6.8)

where NRS_i is the discounted ith net revenue stream, R_0i(t) and M_0i(t) are the functions for the constant dollar cash flows of the ith gross revenue and operation and maintenance cost, T_SRi and T_Ri are the early start time and duration of the revenue stream, and θ_Ri, θ_Mi and y are the inflation and discount rates respectively.

The estimation for a discounted net revenue stream is also decomposed. The analyst specifies R_0i(t) and M_0i(t) for equation (6.8) as functional forms or holistic constant dollar values. The analyst/experts provide percentile values for their subjective prior probability distributions and the correlation coefficients for primary variables in the functions for discounted net revenue streams, and identify common primary variables among the functions. The correlation matrix for net revenue streams is evaluated from this information. The system function g(X) used to approximate the first four moments for a discounted net revenue stream is equation (6.8). The first four moments for a discounted net revenue stream are evaluated from equations (2.36) to (2.39), using the elicited positive definite correlation matrix for the variable transformation. The bounds for net revenue streams are obtained when the transitional correlation ρ = 1 and ρ = 0.

6.3 Project Performance Level

The functions for all the derived variables at the project performance level are linear additive. The derived variables at this level are project duration, project cost and project revenue, while the primary variables are the derived variables at the work package/revenue stream level.

Assumption 6.1: There are no non-linear correlations between the transformed variables at the project performance level.

Let Y be a derived variable at the project performance level. Then,

Y = g(X) = Σ_{i=1}^{n} X_i   (6.9)

where X is the vector of derived variables from the work package/revenue stream level. Let Z be the vector of transformed variables at the project performance level (from equation 2.24). Since g(X) is always linear, the transformed functional form G(Z) at the project performance level from equation (2.35) is,

Y = G(Z) = Σ_{i=1}^{n} Σ_{j=1}^{n} B_ij Z_j   (6.10)

where B = D L. The expected value of the derived variable Y is,

E[Y] = Σ_{i=1}^{n} Σ_{j=1}^{n} B_ji E[Z_i]   (6.11)

the second central moment of Y is,

μ₂(Y) = Σ_{i=1}^{n} (∂G/∂Z_i)² μ₂(Z_i)   (6.12)

the third central moment of Y is,

μ₃(Y) = Σ_{i=1}^{n} (∂G/∂Z_i)³ μ₃(Z_i)   (6.13)

and the fourth central moment of Y is,

μ₄(Y) = Σ_{i=1}^{n} (∂G/∂Z_i)⁴ μ₄(Z_i) + 6 Σ_{i=1}^{n} Σ_{j=i+1}^{n} (∂G/∂Z_i)² (∂G/∂Z_j)² μ₂(Z_i) μ₂(Z_j)   (6.14)

where ∂G/∂Z_i = Σ_{j=1}^{n} B_ji, and E[Z_i], μ₂(Z_i), μ₃(Z_i), μ₄(Z_i) are the first four moments of the ith transformed uncorrelated variable.

The first two moments of the derived variable are exact with or without assumption (6.1), because the transformed function G(Z) is linear. With assumption (6.1), the third and fourth moments are also exact. The correlations between primary variables at the project performance level are linear, because the correlations for derived variables approximated from section (4.3) are always linear. These correlations are included in the moments for Z, and therefore in the first four moments for the derived variable. Even if there are no non-linear correlations among the primary variables, it is not possible to conclude that the transformed variables are free of non-linear correlations. Hence, the third and fourth moments will be in error only if non-linear correlations develop between the transformed variables. Since the measurement and treatment of non-linear correlations are still theoretically complex, this assumption is reasonable. In addition, it permits the computation of exact first four moments for a derived variable at the project performance level.
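A sketch of equations (6.11) to (6.14) follows. It assumes that the matrix B = D L from the variable transformation and the first four moments of the transformed uncorrelated variables Z_i are already available; all numerical values are hypothetical.

```python
# Moments of a linear additive derived variable Y = sum_i X_i expressed
# through the transformed function G(Z), per equations (6.11) to (6.14).

def linear_derived_moments(B, EZ, mu2, mu3, mu4):
    n = len(EZ)
    w = [sum(B[j][i] for j in range(n)) for i in range(n)]  # dG/dZ_i = sum_j B_ji
    m1 = sum(w[i] * EZ[i] for i in range(n))                          # (6.11)
    m2 = sum(w[i] ** 2 * mu2[i] for i in range(n))                    # (6.12)
    m3 = sum(w[i] ** 3 * mu3[i] for i in range(n))                    # (6.13)
    m4 = sum(w[i] ** 4 * mu4[i] for i in range(n)) + 6.0 * sum(       # (6.14)
        w[i] ** 2 * w[j] ** 2 * mu2[i] * mu2[j]
        for i in range(n) for j in range(i + 1, n))
    return m1, m2, m3, m4

# Two transformed variables with purely illustrative values.
B = [[1.0, 0.0], [0.4, 0.9]]
print(linear_derived_moments(B, EZ=[10.0, 5.0], mu2=[1.0, 2.0],
                             mu3=[0.2, -0.1], mu4=[3.0, 9.0]))
```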
The upper bound for project duration is computed when the transitional correlation p = 1 while the lower bound is when p = 0 (see figure 6.5). 6.3.2 Project Cost The project cost is the summation of all the work package costs. When there are n work packages in the construction project, the discounted project cost is given by, n DPC = WPd (6.15) where WPCi 1 S the discounted ith work package cost from equation (6.7). The func-tion g(X) for discounted project cost is equation (6.15). The first four moments for discounted project cost are computed from equations (6.11) to (6.14). The project cost is expressed in discounted dollars for generality. When required the project cost can be expressed in total, current or constant dollars. The bounds for project cost in total, current or constant dollars are obtained when the transitional correlation p = 1 and p = 0. A typical example for upper and lower bounds of project cost in current dollars is depicted in figure (6.6). Chapter 6. The Analytical Method Figure 6.5: Upper and Lower Bounds for Project Duration Longest Path (p =0.0) All Paths (p =1-0) / / / / / / / / / / / / ' s V- -r.,7?---r' Cost (current dollars) Figure 6.6: Upper and Lower Bounds for Project Cost Chapter 6. The Analytical Method 140 6.3.3 Project Revenue The project revenue is the summation of all the revenue streams. When there are m revenue streams in the construction project, the discounted project revenue is given by, m DPR = £ NRSi (6.16) where NRSi is the discounted ith net revenue stream from equation (6.8). The func-tion g(X) for discounted project revenue is equation (6.16). The first four moments for discounted project revenue are computed from equations (6.11) to (6.14). 6.4 Project Decision Level The project decision level is the top of the hierarchy of the project economic structure. The derived variables at this level, project net present value and internal rate of return are the decision criteria for an investment. To quantify their uncertainty, the analytical method exploits the fact that the functions for these derived variables are the same for all engineering projects. 6.4.1 Project Net Present Value The net present value of a project is the difference between the project revenue and the project cost discounted at minimum attractive rate of return. The first four moments for project net present value are computed by assuming discounted project cost and discounted project revenue to be independent. Then the first four moments of project net present value are, Chapter 6. The Analytical Method 141 E[NPV] = E[DPR] - E[DPC] (6.17) u2{NPV) = u2{DPR) + p2{DPC) (6.18) u3(NPV) = u3(DPR) - p3{DPC) (6.19) PA(NPV) PA(DPR) + p4{DPC) + 6 p2(DPR) p2{DPC) (6.20) where DPR and DPC are discounted project revenue and discounted project cost 6.4.2 Project Internal Rate of Return The internal rate of return of a project is the discount rate at which the discounted project revenue is equal to the discounted project cost. In other words, the discount rate at which the project net present value is zero. The internal rate of return is an imphcit function of the net present value and therefore does not provide a direct functional form to apply the framework. Hillier (1963) proposed a method to develop the cumulative distribution function for internal rate of return utihzing its definition. A number of authors have since discussed this method for applications (Bonini, 1975; Davidson and Cooper, 1976; Wagle, 1967; Zinn et al., 1977). 
The analytical method develops the expected value, standard deviation and cumulative distribution function for internal rate of return using a variation of the method suggested by Hillier (1963).

Initially, the first four moments for net present value at a discount rate r = 0.01 are evaluated. Using these first four moments, a Pearson type distribution is approximated for net present value. (The author's experience is that it is always possible to approximate a Pearson distribution for net present value, because the first four moments for net present value, discounted project revenue and cost are exact. However, the default is Hillier's approach.) From this distribution the probability P(NPV ≤ 0 | r) is obtained. This is the probability that IRR ≤ r. Summarizing in equation form (equation 9 from Hillier, 1963),

P(IRR ≤ r) = P(NPV ≤ 0 | r)   (6.21)

The cumulative distribution function for internal rate of return is developed by repeating the above process while incrementing the discount rate by 0.01, until the range 0 ≤ P(IRR ≤ r) ≤ 1 is obtained from equation (6.21). Then, using the 2.5%, 5%, 50%, 95% and 97.5% values of the developed cumulative distribution function, the expected value and standard deviation for internal rate of return are computed from equations (2.5) to (2.11).

Hillier (1963) approximated the cumulative distribution functions for net present value by the normal distribution to develop the cumulative distribution function for internal rate of return. The cumulative distribution function for internal rate of return was also approximated by the normal distribution to obtain the expected value and the standard deviation for internal rate of return. Inyang (1983) showed that the assumption of normality made by Hillier (1963, 1969) and Wagle (1967) is in error, because skewness develops where input variables are skewed, where the response of the decision criterion to changes in input variables is non-linear, where input variables are insufficient, or where discontinuity in cash flow occurs (staged construction).

The analytical method utilizes the first four moments for net present value at different discount rates to approximate Pearson type distributions in developing the cumulative distribution function for internal rate of return, thus allowing for the treatment of skewness. Also, since equations (2.5) to (2.11) are used to compute the expected value and standard deviation for internal rate of return, there is no need to approximate the developed cumulative distribution function by a normal distribution.

The upper and lower bounds for the project net present value at the minimum attractive rate of return and for the project internal rate of return are computed when the transitional correlation ρ = 1 and ρ = 0 respectively. Typical examples of bounds for the project net present value and the project internal rate of return are depicted in figures (6.7) and (6.8).
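The discount-rate scan behind equation (6.21) can be sketched as follows. Fitting Pearson type distributions is beyond a short example, so this sketch falls back on the normal approximation, i.e., Hillier's original assumption, and the moment functions for discounted revenue and cost are hypothetical.

```python
# A sketch of the IRR scan of equation (6.21), using the NPV moments of
# equations (6.17) and (6.18); only the first two moments enter under the
# normal fallback used here. dpr(r) and dpc(r) are hypothetical stand-ins
# returning (mean, variance) of discounted revenue and cost at rate r.
from math import erf, sqrt, exp

def dpr(r):
    m = 600.0 * exp(-8.0 * r)            # long-lived revenue, rate-sensitive
    return m, (0.20 * m) ** 2

def dpc(r):
    m = 300.0 * exp(-2.0 * r)            # near-term cost, less rate-sensitive
    return m, (0.10 * m) ** 2

def irr_cdf(rates):
    cdf = []
    for r in rates:
        (mR, vR), (mC, vC) = dpr(r), dpc(r)
        mean, var = mR - mC, vR + vC     # equations (6.17) and (6.18)
        p = 0.5 * (1.0 + erf((0.0 - mean) / sqrt(2.0 * var)))
        cdf.append((r, p))               # P(IRR <= r) = P(NPV <= 0 | r)
    return cdf

for r, p in irr_cdf([0.05 + 0.01 * k for k in range(8)]):
    print(f"r = {r:.2f}: P(IRR <= r) = {p:.3f}")
```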
However, they question the computational accuracy of the four moment methods because: calculations for the first four moments of a derived variable require the central moments of primary vari-ables higher than the fourth order, which can be numerically significant; and the restrictions generally imposed on the possible forms of interdependence relationships between primary variables. Chapter 6. The Analytical Method F 0.8 z V t 0.6 g' 0.4 Qu q rt 0.2 Longest Path (P =0.0) All Paths ' / ' / s / ' / (P-1.0) / / / / / / / / / / / / / • / / / / Net Present Value ($) Figure 6.7: Bounds for the Project Net Present Value o 13 CC 0.8 V r  £ 0.4 t5 o '2 fc. 0.2 ri p Longest Path (P =0.0) All Paths (p=1.0) / / / / / / / / ./ s • y i _ Discount Rate (%) Figure 6.8: Bounds for the Project Internal Rate of Return Chapter 6. The Analytical Method 145 This section will discuss how some of the issues raised by Cooper and Chapman (1987) affect the analytical method, and what can be done to increase computa-tional accuracy where possible. In addition, this analytical solution is compared to that which can be obtained from the currently available moment analysis approach (standard approach) to show the improvement of the derivation. 6.5.1 Computational Accuracy The first four moments for derived variables at project performance and decision levels computed by the analytical method are exact because of the linear functional forms. Therefore, at these two levels only the first four moments of primary vari-ables are required. However, at work package/revenue stream level the issue raised by Cooper and Chapman (1987) regarding higher order moments are valid because general functional forms are permitted for derived variables. The second, third and fourth central moments for the derived variables require up to fourth, sixth and eighth order moments of primary variables. Since the framework considers moments up to the fourth order, the approximation for the second central moment has considered the necessary central moments of primary variables. As all the primary variables are approximated to Pearson type distributions it is possible to generate moments up to the eighth order from the recurrence property of the Pearson family (Kendall and Stuart, 1969 - see Appendix A.6). Then the approximations for third and fourth central moments for a derived variable can consider the necessary central moments of primary variables. However, until more practical experience is gained in the elicitation of subjective probabilities from experts, it is prudent to use only the first four moments for primary variables. With experience, higher order moments of primary variables can be included in the approximations for third and fourth central moments of a derived variable. Chapter 6. The Analytical Method 146 The question whether the fifth and higher order central moments of primary vari-ables are numerically significant in the approximations for the third and fourth central moments is neither proved nor disproved in the literature, possibly because of the dif-ficulty of the exercise. After a rigorous theoretical study, Tukey (1954) concluded that the approximations for first four moments of a derived variable are much better than seems to be usually realized. His study used terms up to the fifth order. When generalized four moment methods are suggested for risk analysis, primary variables are assumed to be statistically independent (Siddall, 1972; Jackson, 1982). 
The variable transformation approach used by the analytical method treats linear correlations at all levels of the project economic structure in a consistent manner. The concern raised by Cooper and Chapman (1987) regarding the treatment of interdependencies between primary variables is overcome to the extent that the analytical method treats the correlation information that is generally available during feasibility analysis.

6.5.2 Standard Approach

In the fourth chapter, the variable transformation method was compared numerically to the standard approach to show that it treats correlation information more consistently at the work package/revenue stream level. Similarly, when the solution for derived variables at the project performance level using the standard approach is compared, it is evident that the analytical method using the variable transformation treats linear correlations accurately and consistently.

The correlations between primary variables at the project performance level (i.e., the derived variables at the work package/revenue stream level) are restricted to linear correlations from section (4.3). Assuming that there are no non-linear correlations between the primary variables at the project performance level, and using equation (6.9) as g(X), the first four moments for a derived variable from the standard approach are,

E[Y] = Σ_{i=1}^{n} E[X_i]   (6.22)

μ₂(Y) = Σ_{i=1}^{n} μ₂(X_i) + 2 Σ_{i=1}^{n} Σ_{j=i+1}^{n} cov(X_i, X_j)   (6.23)

μ₃(Y) ≈ Σ_{i=1}^{n} μ₃(X_i)   (6.24)

μ₄(Y) ≈ Σ_{i=1}^{n} μ₄(X_i) + 6 Σ_{i=1}^{n} Σ_{j=i+1}^{n} μ₂(X_i) μ₂(X_j)   (6.25)

where E[X_i], μ₂(X_i), μ₃(X_i), μ₄(X_i) are the first four moments of the ith primary variable at the project performance level.

Consider an engineering project consisting of five work packages, with expected values, standard deviations and shape characteristics for work package costs as shown in Table 6.1. The correlation matrix for work package costs is R_WPC. Table 6.2 shows the first four moments and shape characteristics for project cost computed by the analytical method (equations 6.11 to 6.14), by the standard approach (equations 6.22 to 6.25), and when the work package costs are assumed to be statistically independent (Siddall, 1972).

Table 6.1: Statistics for Work Package Costs

  W.P #   E[WPC]   σ_WPC    √β₁    β₂
  01      107.40    43.67    0.5   2.2
  02      194.82    22.92   -0.8   2.8
  03      305.55    28.32    0.6   2.4
  04      411.10    50.78    0.7   2.5
  05      492.60    37.76   -0.6   2.4
When primary variables are statistically independent and the number of variables is large, from the central limit theorem the derived variable should approach normal-ity. Even with five work package costs the shape characteristics for project cost for Chapter 6. The Analytical Method 149 the independent case are close to a normal distribution. When shape characteristics for project cost from the analytical method and standard approach are compared, those from the standard approach do not reflect the skewness of the work package costs, and the kurtosis is in the impossible range for a distribution. Those from the analytical method reflects skewness and kurtosis because it has included the hnear correlation between the work package costs. 6.6 Summary This chapter combined all of the developments and studies done in the previous chapters with the project economic structure to propose an analytical method for time and economic risk quantification during feasibility analysis for large engineering projects. The method computes the first four moments of derived variables at work package/revenue stream level (work package duration, cost and net revenue), project performance level (project duration, cost and revenue) and project decision level (net present value) using the moments of primary variables in their functional forms. The shape characteristics of the derived variables are used to approximate Pearson type distributions for them to quantify their uncertainty. The bounds for derived variables are obtained when transitional correlation p — 1 and p = 0. The computed moments for derived variables at project decision and project per-formance level are exact. The approximations for moments are only for the derived variables at the work package/revenue stream level. The expected value, standard deviation and cumulative distribution function for project duration are obtained from modified PNET while those for project internal rate of return are obtained from a variation of Hillier's method. The concerns raised by Cooper and Chapman (1987) regarding the computational accuracy of the four moment method and treatment of interdependence between primary variables have been discussed with suggestions for Chapter 6. The Analytical Method 150 further improvements. One of the objectives of this research is to computerize the analytical method to explore its behavior, to validate it and to test its practicality in the measurement of uncertainty of performance and decision parameters. The source code for the analytical method is available in a file called TIERA (Time and Economic Risk Analysis). It has been developed as a generalized numerical processor that has the flexibility to model general functional forms for work package durations, costs and revenue streams. See Appendix D for more details. The developed method, wThile providing a consistent analytical approximation to a problem that has long relied on Monte Carlo simulations for solutions, shows that it is more appropriate for time and economic risk quantification of large engineering projects. It includes the features of a good simulation model such as: interaction of time, cost, and revenue by using a precedence network; performing sensitivity and probability analysis; treating multiple paths in network analysis; treating correla-tion between variables at the input level; and the quantification of risks of decision variables by developing cumulative distribution functions. 
In addition, it overcomes most of the constraints that exist during feasibility stage for realistic modeling of an engineering project by: requiring expert judgements as input; treating correlation between primary variables and between derived variables at all levels; obtaining in-termediate milestone information necessary to set realistic targets for performance; permitting the use of unlimited number of variables to model a project; estimating bounds for decision variables; and above all having the capability to evaluate a range of alternatives economically to select the most suitable strategy to develop a project. C h a p t e r 7 V a l i d a t i o n s a n d A p p l i c a t i o n s 7.1 General The analytical method to estimate bounds on and to quantify the uncertainty in time and economic risks for large engineering projects was developed in the previous chap-ter. This chapter describes validation and applications of the analytical method. In most of the examples presented in this chapter it is difficult to separate the vahdation studies from the apphcations. Therefore, it will be helpful to the reader if the results from the analytical method are viewed as apphcations and those from Monte Carlo simulations are viewed as validations. Monte Carlo simulations are used to validate the analytical method because at present, simulation based models are considered to be the "state-of-the-art" for quan-tification of time and economic risks in large engineering projects (Cain, 1980; Diek-mann, 1985; Flanagan et al., 1987; Hayes et al., 1986; Jaafari, 1988a; Newendorp, 1976; Perry and Hayes, 1985b; Thompson and Wilmer, 1985). When the variables are uncorrelated, a successful vahdation should demonstrate that given the same problem structure, primary variables and probabihty distributions, the quantified uncertainty of time and economic variables from the simulation lie within the upper and lower bounds approximated from the analytical method. Since, the analytical method treats correlations efficiently, correlations must be 151 Chapter 7. Validations and Apphcations 152 treated in the simulation process to permit comparisons for validations. The treat-ment of correlations in Monte Carlo simulations is a non-trivial task (Johnson, 1987). Even though a number of methods have been suggested for treating correlations in simulations, no method has been validated rigorously (eg. compared to known analyt-ical solutions) to be considered as a bench mark for these validations. Nevertheless, a method which the author considers as the best approximation for treating correla-tions between variables in simulations is adopted. However, rigor in the validations similar to that of the uncorrelated situations cannot be achieved. The next section contains a brief description on Monte Carlo simulation, the theoretical basis for the method used to include correlations between primary variables, and the "acceptable" number of iterations for the simulation. In the third section, the modified PNET algorithm is applied to the two numerical examples presented by Ang et al. (1975). The first is a road pavement project, while the second is an industrial building project. The apphcations show that the modified PNET algorithm which is based on the precedence network reproduces the results obtained by Ang et al. (1975) using the arrow network, thereby validating the modified algorithm. The flowchart for the modified PNET algorithm is illustrated in Appendix D. 
Sections four to six describe the validation studies that were performed. In the fourth, a parallel network of identical work packages is used to validate the simulation process. This is the first of the two limiting cases that are used to validate the Monte Carlo simulation process. The fifth section uses data from an actual deterministic feasibility analysis as the first example to validate the analytical method. The first example contains the second limiting case for the Monte Carlo simulation and four simulations to validate the analytical method. The second limiting case is a single dominant path of a highly interrelated precedence network. In the first two simulations, low coefficients of variation for work package durations are used. In addition to the validation, this permits a realistic comparison with the deterministic study. The third and fourth simulations use the same numerical example with high coefficients of variation for work package durations. Since derived economic variables are dependent upon the start times, this increase permits the study of the effect of high variance on the quantification and bounding of their risks.

In the sixth section, the second example used for the validation is presented. It is a hypothetical engineering project developed to demonstrate the full potential of the analytical method. Two complete simulations were performed. The first assumed that all the primary variables are uncorrelated, while the second assumed that the primary variables at the input level are correlated. This is the correlation treatment that can be duplicated by simulation. The example is extended to a third level where correlations at all levels of the project economic structure are treated.

In the seventh section, the different ways in which the analytical method can perform sensitivity analysis are explored. This discussion outlines how one of the sensitivity analyses can be used to distribute the contingency allocated to a derived variable at a desired probability of success to its primary variables. The current dollar estimate for project cost is used as the derived variable.

7.2 Monte Carlo Simulation

Conceptually, performing a Monte Carlo simulation is simple. It requires a deterministic model, identification of the random variables, a probability distribution for each random variable, a random number generator, and then a sample value from each distribution for each iteration, using a random number from the uniform distribution on the interval [0,1] (i.e., U(0,1)) as the entry point in the cumulative distribution function
7.2.1 Treatment of Correlations The importance of treating correlations between variables in Monte Carlo simulations has been long recognized (Eilon and Fowkes, 1973; Inyang, 1983; Hertz, 1964; Hull, 1977, 1980; Kryzanowski et al., 1972; Newendorp, 1976; Thompson and Wilmer, 1986; Van Tetterode, 1971). None of the suggested methods however, has been rigorously validated. After an extensive review of the available techniques, Inyang (1983) pro-posed the following approach to model correlations in Monte Carlo simulations for risk analysis of engineering projects. 1. Random numbers are generated for each of the variables that make up the risk analysis model. A column of random numbers is thus generated. 2. The correlation factors between variables have to be input as a matrix. The ran-dom numbers are modified depending on the correlation with each other. Any type of correlation factor (total, partial or no correlation) can be handled. 3. The value of a variable is obtained depending on the value of its modified random number as a result of the correlation between the variables. The author agrees with Inyang (1983), that the above procedure is the most suit-able approach to model correlations in simulations. However, two shortcomings have to be highlighted. First, the algorithm used for the modification of random numbers Chapter 7. Validations and Apphcations 155 was not derived in the thesis by Inyang (1983). Second, the correlation matrices elicited for the simulation have to be positive definite (see section 4.2). The process that was used for the validation model is based on the above procedure (Inyang, 1983). However, since it is not known whether the method for treating correlations in the simulation overestimates or underestimates the effects of correlation, the sim-ulation results provide only an approximate bench mark for the analytical treatment of correlation. The possibility thus exists that the simulation results may not be contained within the upper and lower bounds predicted by the analytical method. The random numbers were modified by extending the algorithm developed by Van Tetterode (1971) to the multivariate situation. The random number correction is pairwise. Since the positive definite correlation matrix is used to modify the random numbers assigned to the primary variables, the multivariate situation is recognized. The random number correction is as follows (Van Tetterode, 1971). RNij = RNj + ay (RNi - RNj) (7.1) where RNi and RNj are the ith and jth random numbers in the column generated from U(0,1), (step 1, Inyang, 1983), RN^ is the ijth random number corrected for the correlation between variables i and j in the matrix of corrected random numbers, and is the correction factor given by, ±±^IZA (7.2) 2 Pij - 1 where is the correlation coefficient between variables i andj. The correction factor is lies in the interval, 0 < a,j < 1 (see figure 7.2 and Appendix E for proof) for all correlation values. The modification from equation (7.1) and (7.2) ensures Chapter 7. Validations and Apphcations 156 that the corrected random numbers are within the interval [0,1]. A small numerical example is presented to demonstrate the random number mod-ification process. Assume a three variable model having the following correlation matrix, R, where R 1.0 -0.48 0.42 -0.48 1.0 -0.69 L 0.42 -0.69 1.0 J and the column of random numbers generated from U(0, 1) as [ 0.32 0.75 0.14]r. 
7.2.2 The Number of Iterations

The literature is diverse on the number of iterations that should be performed for an "acceptable" simulation (Flanagan et al., 1987; Jaafari, 1988; Inyang, 1983; Perry and Hayes, 1985b). The recommended numbers range from 100 to 1000 iterations. However, most of these recommendations are not supported theoretically or empirically, and may not be applicable in all situations (Inyang, 1983).

Figure 7.1: Random Variate Generation
Figure 7.2: The Correction Factor α for Different Values of ρ

Bury (1975) has shown that a simulation of 1000 iterations has an error band of 4.3% at the 95% confidence level. The error band is the accuracy to which the cumulative distribution function generated from the simulation approximates the unknown cumulative distribution function of the derived variable. That is, the error band brackets the unknown cumulative distribution function in 95% (or (1 − α)100%) of all simulation samples. At the 95% confidence level, an error band of 2% requires at least 4600 iterations. Inyang (1983) states that at 95% probability, 1000 iterations will give a level of accuracy of 6% and 8.5% for the expected value (mean) and the standard deviation respectively.

Since simulation generates a random sample to represent the derived variable, irrespective of the number of primary variables, the larger the size of the sample the more accurate are the estimates for the expected value, standard deviation and the cumulative distribution function generated from simulation. In this thesis, when duration was the only derived variable, 15,000 to 20,000 iterations were used for the simulation. For complete time and economic risk quantification, 4,000 to 6,000 iterations were used. Larger simulations were used for duration because of its smaller problem structure and because of its importance as the linking variable in economic risk quantification. The comparatively large size of the simulations also permits the study of the stability of the expected value and standard deviation with increasing number of iterations.
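The quoted error bands are consistent with the 95% Kolmogorov–Smirnov confidence band for an empirical distribution function, whose half-width is approximately 1.36/√n. The check below is my reconstruction of those figures, not a calculation taken from Bury (1975).

```python
import math

def ks_band(n, c=1.36):
    """Approximate half-width of the 95% confidence band around an
    empirical CDF built from n samples (c = 1.36 at the 95% level)."""
    return c / math.sqrt(n)

print(round(ks_band(1000), 3))      # 0.043 -> the 4.3% band at 1000 iterations
print(round((1.36 / 0.02) ** 2))    # 4624  -> roughly the 4600 iterations for 2%
```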
7.3 Modified PNET Algorithm

The modified PNET algorithm is applied to the two numerical examples that were presented by Ang et al. (1975). The first example is a road pavement project, while the second is an industrial building project.

7.3.1 Road Pavement Project

This project involves the paving of 2.2 miles of roadway pavement and the construction of appurtenant drainage structures, excavation to grade, placement of macadam shoulders, erection of guardrails, and landscaping (Ang et al., 1975). The precedence network for the project used by the modified PNET, based on the logic of the arrow network given by Fig. 2 of Ang et al. (1975), is shown in figure (7.3). The various activities of the project and the respective mean durations and standard deviations for the activities from Table 1 of Ang et al. (1975) are given in Appendix F. Table 2 from Ang et al. (1975), containing all nine paths of the network arranged in order of decreasing mean path durations, with mean path durations (μ_T) and standard deviations (σ_T), is reproduced in Table 7.1. The nine paths, mean path durations and standard deviations from the modified PNET algorithm are given in Table 7.2.

Table 7.1: Ordered Paths and Duration Statistics - Table 2, Ang et al. (1975)

Path #   Activities in the Path               μ_T (days)   σ_T (days)
1        4, 7, 12, 13, 18, 20, 22, 25, 27         61          5.00
2        6, 10, 15, 19, 21, 23, 24, 26, 27        57          9.00
3        6, 11, 16, 19, 21, 23, 24, 26, 27        52          7.94
4        5, 9, 14, 19, 21, 23, 24, 26, 27         49          6.54
5        5, 8, 13, 18, 20, 22, 25, 27             42          4.00
6        3, 28, 20, 22, 25, 27                    29          3.24
7        3, 1, 23, 24, 26, 27                     29          5.19
8        2, 17, 28, 20, 22, 25, 27                28          3.16
9        2, 17, 1, 23, 24, 26, 27                 28          5.12

Figure 7.3: The Precedence Network for the Road Pavement Project

Table 7.2: Ordered Paths and Duration Statistics from Modified PNET

Path #   Ang et al   Activities in the Path               μ_T (days)   σ_T (days)
1        1           4, 7, 12, 13, 18, 20, 22, 25, 27         61          5.00
2        2           6, 10, 15, 19, 21, 23, 24, 26, 27        57          9.00
3        3           6, 11, 16, 19, 21, 23, 24, 26, 27        52          7.93
4        4           5, 9, 14, 19, 21, 23, 24, 26, 27         49          6.59
5        5           5, 8, 13, 18, 20, 22, 25, 27             42          4.00
6        7           3, 23, 24, 26, 27                        29          5.17
7        6           3, 20, 22, 25, 27                        29          3.24
8        9           2, 17, 23, 24, 26, 27                    28          5.12
9        8           2, 17, 20, 22, 25, 27                    28          3.16

The dummy activities required for the arrow network (activities 1 and 28) are not necessary for the precedence network used by the modified PNET (see paths 6, 7, 8 and 9 in Tables 7.1 and 7.2). In addition to ordering the paths in decreasing mean durations, the modified PNET orders the paths in decreasing standard deviations when mean path durations are equal. This ensures the selection of the path with the highest variance as the representative path from the paths having the same mean duration (see paths 6, 7, 8 and 9 in Tables 7.1 and 7.2). The representative paths for the transitional correlation ρ = 0.5 are paths 1 and 2 from both PNET (Ang et al., 1975) and the modified PNET. The comparison shows that the modified PNET identifies the paths correctly, evaluates the expected value (mean) and standard deviation for path durations accurately, and selects the representative paths correctly. The ordering of paths may differ because the modified PNET gives priority to the path with the higher variance when the mean durations are identical.
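The ordering and representative-path selection used above can be sketched as follows. This is my own illustrative reconstruction, not the thesis routine (which also carries third and fourth moments of the path durations); the activity data are hypothetical.

```python
def path_stats(path, act):
    """Mean and variance of a path duration: sums of the independent
    activity means and variances along the path."""
    return (sum(act[i][0] for i in path), sum(act[i][1] for i in path))

def order_and_select(paths, act, rho=0.5):
    """Order paths by decreasing mean (ties broken by decreasing variance);
    keep a path as representative unless its correlation with an already
    selected representative reaches the transitional correlation rho."""
    stats = {tuple(p): path_stats(p, act) for p in paths}
    ordered = sorted(paths, key=lambda p: (-stats[tuple(p)][0],
                                           -stats[tuple(p)][1]))
    reps = []
    for p in ordered:
        vp = stats[tuple(p)][1]
        def corr(q):          # path correlation from the shared-activity variance
            shared = sum(act[i][1] for i in set(p) & set(q))
            return shared / (vp * stats[tuple(q)][1]) ** 0.5
        if all(corr(q) < rho for q in reps):
            reps.append(p)
    return ordered, reps

# hypothetical network: activity id -> (mean, variance)
act = {1: (10.0, 4.0), 2: (8.0, 9.0), 3: (9.0, 1.0)}
ordered, reps = order_and_select([[1, 2], [1, 3]], act)
```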
7.3.2 Industrial Building Project

This project involves the construction of a single-story industrial building. The building is comprised of reinforced concrete piers, frost walls, structural steel columns, and a precast roof (Ang et al., 1975). The precedence network for the project used by the modified PNET, based on the logic of the arrow network given by Fig. 5 of Ang et al. (1975), is shown in figure (7.4). The various activities of the project and the respective mean durations and standard deviations for the activities from Table 3 of Ang et al. (1975) are given in Appendix F.

Figure 7.4: The Precedence Network for the Industrial Building Project

Ang et al. (1975) listed only the first ten paths arranged in decreasing mean path durations (Table 4, Ang et al., 1975). Table 7.3 lists all 33 paths in the project network as ordered by the modified PNET algorithm. The second column in Table 7.3 contains the path numbers of the ten paths listed in Table 4, Ang et al. (1975). Even though path 7 from the modified PNET had the largest variance of the paths with a mean duration of 66 days, PNET had not considered it as a major path.

Table 7.3: Ordered Paths and Duration Statistics for the Industrial Building

Path #   Ang et al   Activities in the Path*                          μ_T (days)   σ_T (days)
1        1           17, 18, 32, 33, 35                                   78         12.20
2        2           17, 18, 32, 34, 35                                   76         12.20
3        3           9, 13, 15, 20, ..., 28, 36                           69         12.12
4        4           9, 13, 15, 20, ..., 27, 31, 35                       68         12.14
5        5           1, 2, 5, 7, 8, 10, 13, 15, 20, 28, 36                67          3.85
6        6           1, 3, 4, 5, 7, 8, 10, ..., 13, 15, 20, 28, 36        67          3.85
7        -           9, 12, 14, 18, 32, 33, 35                            66         12.25
8        8           9, 13, 15, 20, ..., 24, 29, 30, 36                   66         12.09
9        9           1, 2, 5, 7, 8, 10, ..., 13, 15, 20, 27, 31, 35       66          3.87
10       10          1, 3, 4, 5, 7, 8, 10, ..., 13, 15, 20, 27, 31, 35    66          3.87
11       7           1, 3, 6, 7, 8, 10, 13, 15, 20, 28, 36                66          3.85
12       -           1, 3, 6, 7, 8, 10, 13, 15, 20, 27, 31, 35            65          3.87
13       -           9, 12, 14, 18, 32, 34, 35                            64         12.25
14       -           1, 3, 4, 5, 7, 8, 10, ..., 12, 14, 18, 32, 33, 35    64          4.22
15       -           1, 2, 5, 7, 8, 10, ..., 12, 14, 18, 32, 33, 35       64          4.22
16       -           1, 3, 4, 5, 7, 8, 10, ..., 13, 15, 20, 24, 29, 30, 36  64        3.79
17       -           1, 2, 5, 7, 8, 10, 13, 15, 20, 24, 29, 30, 36        64          3.79
18       -           1, 3, 6, 7, 8, 10, 12, 14, 18, 32, 33, 35            63          4.22
19       -           1, 3, 6, 7, 8, 10, 13, 15, 20, 24, 29, 30, 36        63          3.79
20       -           1, 3, 4, 5, 7, 8, 10, ..., 12, 14, 18, 32, 34, 35    62          4.22
21       -           1, 2, 5, 7, 8, 10, 12, 14, 18, 32, 34, 35            62          4.22
22       -           1, 3, 6, 7, 8, 10, ..., 12, 14, 18, 32, 34, 35       61          4.22
23       -           1, 3, 4, 5, 7, 16, 20, 28, 36                        59          3.73
24       -           1, 2, 5, 7, 16, 20, ..., 28, 36                      59          3.73
25       -           1, 3, 4, 5, 7, 16, 20, 27, 31, 35                    58          3.75
26       -           1, 2, 5, 7, 16, 20, ..., 27, 31, 35                  58          3.75
27       -           1, 3, 6, 7, 16, 20, ..., 28, 36                      58          3.73
28       -           1, 3, 6, 7, 16, 20, ..., 27, 31, 35                  57          3.75
29       -           1, 3, 4, 5, 7, 16, 20, 24, 29, 30, 36                56          3.67
30       -           1, 2, 5, 7, 16, 20, ..., 24, 29, 30, 36              56          3.67
31       -           1, 3, 6, 7, 16, 20, ..., 24, 29, 30, 36              55          3.67
32       -           19, 33, 35                                           38          6.14
33       -           19, 34, 35                                           36          6.14

* An ellipsis (...) includes all intervening activities.

The representative paths for the transitional correlation ρ = 0.5 are paths 1, 3 and 5 from PNET (Ang et al., 1975), while the modified PNET algorithm identifies paths 1, 3, 5 and 32. PNET considered only the first ten paths as the major paths. Even though path 32 is also a representative path by definition, it does not play a role in the completion time probability calculations because its mean path duration is insignificant when compared to the other representative paths. While PNET neglects those paths with low mean path durations, the modified PNET considers all the paths in the selection of representative paths. As shown later in the validations of the analytical method, the difference in execution time for the modified PNET routine to evaluate a single path (longest path approach) or all the paths in the project network is negligible. The two comparisons validate the modified PNET algorithm used in the analytical method for time and economic risk quantification.

7.4 Parallel Network

A parallel network consisting of thirty five identical work packages in five parallel paths, as shown in figure (7.5), is used as the first limiting case to validate the Monte Carlo simulation process. Since simulations are used to validate the analytical approach, it is essential to validate the simulation process first.

Figure 7.5: The Parallel Network

From the longest path (or PERT) approach every path in the parallel network is a critical path. Therefore, the cumulative distribution function for project duration is the cumulative distribution function of any path duration. This is the lower bound for completion time probability. From the modified PNET algorithm, the completion time probability for project duration for any transitional correlation ρ (i.e., 0 < ρ < 1) is the same as the upper bound (ρ = 1). Therefore, the cumulative distribution function for project duration for the parallel network from a valid Monte Carlo simulation process should be the same as that for the upper bound from the PNET algorithm.

The expected value, standard deviation, skewness and kurtosis for duration for all the work packages are E[WPD] = 3.644 months, σ_WPD = 0.67 months, √β₁ = 0.3 and β₂ = 3.5. The statistics for project duration from the longest path (lower bound, ρ = 0) and for any transitional correlation 0 < ρ < 1, together with the expected value and standard deviation from the simulation, are given in Table 7.4.

Table 7.4: Statistics for Project Duration for First Limiting Case

Project Duration (months)    Expected Value   Standard Deviation   √β₁    β₂
Longest Path (ρ = 0)             25.51             1.76            0.11   3.07
When 0 < ρ < 1                   27.63             1.24            0.30   3.2
Monte Carlo Simulation           27.60             1.30             -      -

The cumulative distribution functions for project duration from the longest path, for 0 < ρ < 1, and from a Monte Carlo simulation of 20,000 iterations are depicted in figure (7.6). This simple limiting case, while validating the Monte Carlo simulation process that is used to validate the analytical method, also confirms the theoretical postulations made by the modified PNET algorithm.

Figure 7.6: CDFs for Project Duration for the Parallel Network

7.5 First Example

This section demonstrates the second limiting case to validate the Monte Carlo simulation and the first two validations of the analytical method. The data for this example is obtained from an actual deterministic feasibility analysis conducted for a mineral project in South America. The starting point for the analysis is at the work package level. For study purposes herein, the original construction program is modified as shown in figure (7.7). The logic of the original program is maintained throughout. The work package durations are developed to correspond to the modified construction schedule. The deterministic estimates and statistics for work package durations are given in Appendix F.

7.5.1 Second Limiting Case

The precedence network depicted in figure (7.7) is highly interrelated. However, if there is one dominant path in the network then that path will dominate the completion time probability of the project. Therefore, the project duration from the longest path (lower bound), from all the paths (upper bound) and from the simulation should be similar.
Such a path can be created by changing the statistics for duration for work package #7 to E[WPD] = 20.01 months, σ_WPD = 1.609 months, √β₁ = 0.2 and β₂ = 2.6. Then the dominant path consists of work packages #2, #7, #20, #24, #30 and #31. The expected value, standard deviation, skewness and kurtosis for project duration for the dominant path (lower bound, ρ = 0) and from all paths (ρ = 1), together with the expected value and standard deviation from simulation, are given in Table 7.5.

Figure 7.7: The Project Network for the First Example

Table 7.5: Statistics for Project Duration for Second Limiting Case

Project Duration (months)    Expected Value   Standard Deviation   √β₁    β₂
Longest Path (ρ = 0)             45.01             1.87            0.15   2.85
All the Paths (ρ = 1)            45.01             1.88            0.1    2.8
Monte Carlo Simulation           44.96             1.53             -      -

The cumulative distribution functions for project duration from the lower and upper bounds, and from a Monte Carlo simulation of 15,000 iterations, are depicted in figure (7.8). This limiting case also validates the Monte Carlo simulation process. In addition, it confirms the accuracy of the modified PNET algorithm.

Figure 7.8: CDFs for Project Duration for the Single Dominant Path

7.5.2 First Validation

Two simulations were done as the first validation. The derived variable for the first simulation was only project duration. For the first validation, low coefficients of variation for work package durations are assumed. Table 7.6 contains the expected values and standard deviations for project duration from the simulation at 1000 iteration intervals, and the statistics evaluated from the analytical approach at different transitional correlations. Figure (7.9) illustrates the cumulative distribution functions for upper and lower bounds approximated from the analytical method and that generated from a simulation of 15,000 iterations. Figure (7.10) depicts, in addition to those in figure (7.9), the cumulative distribution functions for project duration at different transitional correlation (ρ) values.

The second simulation is a complete time and economic risk quantification. However, the statistics and cumulative distribution function generated for project duration are not considered because the first simulation is much larger. The work package costs of the project network depicted by figure (7.7) are estimated such that the sum of the work package costs is equivalent to the constant dollar cost estimate of the deterministic feasibility analysis. The deterministic estimates for work package costs are given in Appendix F.
Table 7.6: Statistics for Project Duration from First Validation - Ex #1

        Simulation                     Analytical Method
   #     E[PD]    σ_PD          ρ      E[PD]    σ_PD    √β₁    β₂
         (mths)   (mths)               (mths)   (mths)
  1000   37.29    1.11         0.0     36.08    1.30    0.13   2.95
  2000   37.30    1.15         0.1     36.11    1.27    0.2    2.8
  3000   37.28    1.40         0.2     36.11    1.27    0.2    2.8
  4000   37.27    1.51         0.3     36.92    1.03    0.3    3.2
  5000   37.27    1.56         0.4     36.92    1.03    0.3    3.2
  6000   37.26    1.59         0.5     36.92    1.03    0.3    3.2
  7000   37.24    1.61         0.6     37.31    1.02    0.3    3.3
  8000   37.25    1.63         0.7     37.31    1.02    0.3    3.3
  9000   37.25    1.65         0.8     37.74    0.91    0.4    3.4
 10000   37.25    1.66         0.9     37.95    0.82    0.5    3.5
 11000   37.25    1.66         1.0     38.34    0.68    0.5    3.4
 12000   37.25    1.67
 13000   37.25    1.51
 14000   37.26    1.34
 15000   37.26    1.19

The function for discounted work package cost (WPC_i) used for the analysis is as follows.

$$WPC_i = f\,e^{(\theta_c - y)T_{SC_i}} \int_0^{T_{C_i}} C_{0i}(\tau)\, e^{(\theta_c - y)\tau}\, d\tau
+ (1 - f)\,e^{(r - y)T_F}\, e^{\theta_c T_{SC_i}} \int_0^{T_{C_i}} C_{0i}(\tau)\, e^{(\theta_c - r)\tau}\, d\tau \tag{7.3}$$

where WPC_i is the discounted ith work package cost, C_0i(τ) is the constant dollar cash flow for the ith work package, T_SC_i and T_C_i are the work package start time and duration, T_F is the time at which the repayment of interim financing is due for all work packages (assumed as the end of the construction phase), f is the equity fraction, and θ_c, r and y are the inflation, interest and discount rates respectively. The time τ is measured from the start of the ith work package.

The function for revenue streams (NRS_i) is as follows.

$$NRS_i = \int_{T_{SR_i}}^{T_{SR_i} + T_{R_i}} \left[ R_{0i}(t)\, e^{\theta_{R_i}(t - T_{SR_i})} - M_{0i}(t)\, e^{\theta_{M_i}(t - T_{SR_i})} \right] e^{-yt}\, dt \tag{7.4}$$

where NRS_i is the discounted ith revenue stream, R_0i(t) and M_0i(t) are the constant dollar cash flows for the ith gross revenue and operation and maintenance cost, T_SR_i and T_R_i are the start time and duration of the revenue stream, and θ_R_i, θ_M_i and y are the inflation and discount rates respectively.
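Under the uniform constant-dollar profiles assumed for this example (stated below), the integrals in equations (7.3) and (7.4) have closed forms, so both functions reduce to a few exponentials. The sketch below is my illustration of that evaluation, following my reading of the two equations; names and input values are hypothetical, and the rates must be expressed per the same time unit as the durations.

```python
import math

def g(a, T):
    """Closed form of the integral of exp(a*tau) over [0, T]."""
    return T if abs(a) < 1e-12 else (math.exp(a * T) - 1.0) / a

def wpc(C, T_S, T_C, T_F, f, theta_c, r, y):
    """Discounted work package cost, equation (7.3), with a uniform
    constant-dollar expenditure profile C / T_C."""
    rate = C / T_C
    equity = f * math.exp((theta_c - y) * T_S) * rate * g(theta_c - y, T_C)
    debt = ((1.0 - f) * math.exp((r - y) * T_F) * math.exp(theta_c * T_S)
            * rate * g(theta_c - r, T_C))
    return equity + debt

def nrs(R0, M0, T_SR, T_R, theta_R, theta_M, y):
    """Discounted net revenue stream, equation (7.4), uniform profiles."""
    return math.exp(-y * T_SR) * (R0 * g(theta_R - y, T_R)
                                  - M0 * g(theta_M - y, T_R))
```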
The deterministic values for the respective primary variables (i.e., work package durations and costs, annual revenues and operating costs, inflation and financing rates) are assumed to be the median values of their frequency distributions. The expected value, standard deviation, skewness and kurtosis for work package durations, costs, annual gross revenues and operation and maintenance costs for the revenue streams are given in Appendix F.

For illustrative purposes herein, uniform constant dollar expenditure profiles for work package costs and annual operating costs were assumed. Similarly, uniform constant dollar revenue profiles were assumed for gross annual revenue streams. A common inflation rate with the following statistics, E[θ_c] = 5.837%, σ_θc = 0.395%, √β₁ = 0.1 and β₂ = 2.6, is assumed for all work package costs. A construction loan for 85% (f = 0.15) of the current dollar expenditure on construction is assumed. The statistics for the interest rate on the construction loan are E[r] = 8.631%, σ_r = 0.704%, √β₁ = 0.0 and β₂ = 3.6. The minimum attractive rate of return used for the analysis and validations is 20%. All the variables in the analysis are assumed to be uncorrelated.

Tables 7.7, 7.8 and 7.9 contain results from the second simulation at 500 iteration intervals and statistics from the analytical method at different transitional correlation values for discounted project cost, discounted project revenue and project net present value.

Table 7.7: Statistics for Discounted Project Cost from First Validation - Ex #1

       Simulation                       Analytical Method
   #    E[DPC] ($)   σ_DPC ($)     ρ     E[DPC] ($)   σ_DPC ($)   √β₁     β₂
  500   87054064     9588990      0.0    87792088     9658726     0.097   2.617
 1000   86791152     9586503      0.1    87767561     9655823     0.097   2.617
 1500   86764320     9739846      0.2    87766524     9655819     0.097   2.617
 2000   86775360     9789910      0.3    87168757     9591506     0.097   2.617
 2500   86727216     9819083      0.4    87168757     9591506     0.097   2.617
 3000   86785520     9834827      0.5    87166313     9591480     0.097   2.617
 3500   86804592     9757994      0.6    86901802     9562288     0.097   2.617
 4000   86805648     9705724      0.7    86900640     9562167     0.097   2.617
                                  0.8    86598834     9529379     0.097   2.617
                                  0.9    86445916     9512848     0.097   2.617
                                  1.0    86130819     9480193     0.097   2.617

Table 7.8: Statistics for Discounted Project Revenue from First Validation - Ex #1

       Simulation                        Analytical Method
   #    E[DPR] ($)    σ_DPR ($)     ρ     E[DPR] ($)    σ_DPR ($)    √β₁      β₂
  500   143957280     12268435     0.0    147761732     11912759    -0.051   2.744
 1000   143692784     12273304     0.1    147689326     11906346    -0.051   2.744
 1500   144041696     12045838     0.2    147689326     11906346    -0.051   2.744
 2000   144331664     12135314     0.3    146044842     11790591    -0.051   2.744
 2500   144558464     12128094     0.4    146056516     11790591    -0.051   2.744
 3000   144623264     11984054     0.5    146056516     11790591    -0.051   2.744
 3500   144699216     11889887     0.6    145282056     11740939    -0.051   2.744
 4000   144820096     11871214     0.7    145278674     11740788    -0.051   2.744
                                   0.8    144429474     11682179    -0.051   2.744
                                   0.9    144015454     11652676    -0.051   2.744
                                   1.0    143232846     11598742    -0.051   2.744

Table 7.9: Statistics for Project NPV from First Validation - Ex #1

       Simulation                       Analytical Method
   #    E[NPV] ($)   σ_NPV ($)     ρ     E[NPV] ($)   σ_NPV ($)    √β₁      β₂
  500   56903152     15845556     0.0    59969644     15336388    -0.048   2.847
 1000   56909520     15262957     0.1    59921765     15329579    -0.048   2.847
 1500   57275088     15202756     0.2    59922801     15329576    -0.048   2.847
 2000   57546720     15573732     0.3    58879177     15199179    -0.048   2.847
 2500   57817088     15527935     0.4    58887759     15199179    -0.048   2.847
 3000   57820480     15474244     0.5    58890203     15199162    -0.048   2.847
 3500   57875184     15318487     0.6    58380254     15142226    -0.048   2.847
 4000   57993648     15290463     0.7    58378034     15142032    -0.048   2.847
                                  0.8    57830640     15075887    -0.048   2.847
                                  0.9    57569539     15042578    -0.048   2.847
                                  1.0    57102027     14980150    -0.048   2.847

Table 7.10 contains the expected value and standard deviation for project internal rate of return from the simulation and from the analytical method at different transitional correlation values. The analytical method develops the cumulative distribution function for internal rate of return using cumulative distribution functions for net present value at incremental discount rates. The expected value and standard deviation for internal rate of return are approximated using percentiles from that cumulative distribution function.

Figures (7.11), (7.12), (7.13) and (7.14) illustrate the cumulative distribution functions for upper and lower bounds approximated from the analytical method and those generated from a simulation of 4000 iterations for discounted project cost, discounted project revenue, net present value and internal rate of return. The cumulative distribution functions and the estimates for expected values for derived time and economic variables demonstrate that the results generated from Monte Carlo simulation are within the upper and lower bounds predicted by the analytical approximations, thereby validating the analytical method.
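The IRR construction described above can be sketched directly: for a conventional project, NPV decreases with the discount rate, so P(IRR ≤ y) equals the probability that NPV at discount rate y is non-positive. The code below is my illustration; npv_cdf is an assumed callable returning the fitted (e.g., Pearson-family) CDF of NPV at a given discount rate.

```python
def irr_cdf(npv_cdf, rate_grid):
    """CDF of IRR on a grid of incremental discount rates:
    F_IRR(y) = P(NPV at discount rate y <= 0)."""
    return [(y, npv_cdf(y, 0.0)) for y in rate_grid]

def percentile(points, p):
    """Linearly interpolate the p-th percentile from (rate, CDF) points;
    such percentiles give the approximate mean and standard deviation."""
    for (y0, F0), (y1, F1) in zip(points, points[1:]):
        if F0 <= p <= F1:
            return y0 if F1 == F0 else y0 + (y1 - y0) * (p - F0) / (F1 - F0)
    raise ValueError("p outside the tabulated range")
```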
Table 7.10: Statistics for Project IRR from First Validation - Ex #1

       Simulation                  Analytical Method
   #    E[IRR] (%)  σ_IRR (%)     ρ     E[IRR] (%)  σ_IRR (%)
  500   32.71       4.14         0.0    33.241      4.094
 1000   32.69       3.99         0.1    33.231      4.091
 1500   32.76       4.01         0.2    33.231      4.091
 2000   32.81       4.11         0.3    33.019      4.034
 2500   32.87       4.13         0.4    33.019      4.034
 3000   32.88       4.11         0.5    33.020      4.034
 3500   32.88       4.07         0.6    32.930      4.037
 4000   32.89       4.03         0.7    32.930      4.037
                                 0.8    32.836      4.033
                                 0.9    32.791      4.029
                                 1.0    32.720      4.020

Table 7.11: Comparison of CPU times from First Validation - Ex #1

       Simulation           Analytical Method
   #     CPU Sec.          ρ      CPU Sec.
  500      511            0.0      34.45
 1000     1021            0.1      35.03
 1500     1531            0.2      35.09
 2000     2042            0.3      35.18
 2500     2550            0.4      35.56
 3000     3059            0.5      34.89
 3500     3567            0.6      35.22
 4000     4079            0.7      35.75
                          0.8      35.86
                          0.9      36.54
                          1.0      37.78

Table 7.11 contains a comparison of the execution times for the simulation and the analytical method. The computational economy of the analytical method is clearly highlighted. For this example, the analytical method is about thirty times faster when compared to the generally recommended number of iterations (1000) for risk quantification using Monte Carlo simulation (Inyang, 1983; Perry and Hayes, 1985b). Both analyses were done on an IBM 3081 mainframe computer.

There are seventy three possible paths to complete the project network depicted by figure (7.7). When ρ = 0 only the moments on the longest path are considered to evaluate the statistics for project duration. When ρ = 1 the moments of all 73 paths are considered. The comparison of the execution times, however, shows that the time difference to evaluate statistics and cumulative distribution functions for upper and lower bounds from the analytical method is negligible. While 73 paths is not a significant number for a large engineering project, it still demonstrates that evaluating the bounds for an alternative is not an excessive burden in terms of computational economy when compared to simulation.

7.5.3 Second Validation

The same numerical values as for the previous case are used for the second validation. The only difference is that the coefficients of variation for work package durations are approximately 40% instead of the 3% to 13% used in the previous case. Since the derived economic variables are dependent upon time, this increase permits us to study its effect on their risk quantification. The statistics for the revised work package durations are given in Appendix F. Two simulations were done. The derived variable for the first simulation was only project duration. Table 7.12 contains the expected values and standard deviations from the simulation and the statistics evaluated from the analytical method.

Figure 7.9: CDFs for Project Duration - First Validation - Ex #1
Figure 7.10: CDFs for Project Duration - First Validation - Ex #1
Figure 7.11: CDFs for Discounted Project Cost - First Validation - Ex #1
Figure 7.12: CDFs for Discounted Project Revenue - First Validation - Ex #1
Figure 7.13: CDFs for Project Net Present Value - First Validation - Ex #1
Figure 7.14: CDFs for Project Internal Rate of Return - First Validation - Ex #1
Table 7.12: Statistics for Project Duration from Second Validation - Ex #1

        Simulation                     Analytical Method
   #     E[PD]    σ_PD          ρ      E[PD]    σ_PD    √β₁    β₂
         (mths)   (mths)               (mths)   (mths)
  1000   46.01    5.19         0.0     36.31    6.56    0.12   3.09
  2000   45.89    5.00         0.1     42.38    5.91    0.7    3.9
  3000   45.93    5.08         0.2     42.38    5.91    0.7    3.9
  4000   45.91    5.09         0.3     44.12    5.17    0.8    4.6
  5000   45.88    5.10         0.4     44.45    5.02    0.8    4.5
  6000   45.87    5.11         0.5     45.55    4.50    0.8    4.5
  7000   45.83    5.12         0.6     46.01    4.28    0.9    5.0
  8000   45.84    5.10         0.7     46.45    4.11    0.8    4.5
  9000   45.83    5.02         0.8     46.84    3.91    0.8    4.4
 10000   45.84    4.95         0.9     48.61    3.19    0.7    4.2
 11000   45.87    4.92         1.0     49.12    2.97    0.8    5.2
 12000   45.89    4.90
 13000   45.89    4.88
 14000   45.91    4.86
 15000   45.91    4.85

Figure (7.15) illustrates the cumulative distribution functions for upper and lower bounds approximated from the analytical method and that generated from a simulation of 15,000 iterations. Figure (7.16) depicts, in addition to those in figure (7.15), the cumulative distribution functions for project duration at different transitional correlation values.

The second simulation is again a complete time and economic risk quantification. Tables 7.13, 7.14 and 7.15 contain results from the simulation at 500 iteration intervals and statistics from the analytical method at different transitional correlation values for discounted project cost, discounted project revenue and project net present value. Table 7.16 contains the expected value and standard deviation for project internal rate of return from the simulation and the analytical method at different transitional correlation values.

Table 7.13: Statistics for Discounted Project Cost from Second Validation - Ex #1

       Simulation                       Analytical Method
   #    E[DPC] ($)   σ_DPC ($)     ρ     E[DPC] ($)   σ_DPC ($)   √β₁     β₂
  500   81021792     9462825      0.0    87795608     9800603     0.092   2.557
 1000   80835632     9537916      0.1    83698131     9322582     0.092   2.564
 1500   80844544     9699357      0.2    83438139     9276978     0.094   2.574
 2000   80826096     9752933      0.3    82318999     9166055     0.093   2.569
 2500   80795552     9797531      0.4    82090258     9138546     0.093   2.570
 3000   80809072     9791703      0.5    81377056     9049674     0.094   2.574
 3500   80816496     9750850      0.6    80958126     9004202     0.094   2.574
 4000   80828448     9728706      0.7    80600925     8966904     0.094   2.574
                                  0.8    80348004     8935649     0.094   2.575
                                  0.9    79175607     8800934     0.095   2.579
                                  1.0    78815314     8760869     0.095   2.580

Table 7.14: Statistics for Discounted Project Revenue from Second Validation - Ex #1

       Simulation                        Analytical Method
   #    E[DPR] ($)    σ_DPR ($)     ρ     E[DPR] ($)    σ_DPR ($)    √β₁      β₂
  500   127760688     14264898     0.0    147861348     12781663    -0.038   2.720
 1000   127718064     13930838     0.1    135939654     11762603    -0.046   2.724
 1500   128100240     13516248     0.2    135668030     11502775    -0.045   2.728
 2000   128309872     13668063     0.3    132635839     11390905    -0.046   2.727
 2500   128541968     13602110     0.4    132007641     11322201    -0.046   2.727
 3000   128504048     13480204     0.5    129969465     11101722    -0.046   2.728
 3500   128558912     13500583     0.6    129129937     11010403    -0.046   2.729
 4000   128685376     13480615     0.7    128343894     10936793    -0.046   2.729
                                   0.8    127624844     10861321    -0.046   2.730
                                   0.9    124503245     10569469    -0.045   2.731
                                   1.0    123611279     10487196    -0.045   2.732
Table 7.15: Statistics for Project NPV from Second Validation - Ex #1

       Simulation                       Analytical Method
   #    E[NPV] ($)   σ_NPV ($)     ρ     E[NPV] ($)   σ_NPV ($)    √β₁      β₂
  500   46737200     15634686     0.0    60065740     16106605    -0.040   2.828
 1000   46889744     14909846     0.1    52241523     15008997    -0.044   2.831
 1500   47259008     14681378     0.2    52229892     14777556    -0.045   2.834
 2000   47478240     15058231     0.3    50316839     14620851    -0.045   2.833
 2500   47735504     15001127     0.4    49917383     14550094    -0.045   2.833
 3000   47680688     14979397     0.5    48592410     14322878    -0.045   2.834
 3500   47725680     14848319     0.6    48171811     14223384    -0.045   2.834
 4000   47838080     14796526     0.7    47742969     14142800    -0.045   2.834
                                  0.8    47276840     14064641    -0.045   2.835
                                  0.9    45327638     13753913    -0.045   2.836
                                  1.0    44795965     13665069    -0.045   2.836

Table 7.16: Statistics for Project IRR from Second Validation - Ex #1

       Simulation                  Analytical Method
   #    E[IRR] (%)  σ_IRR (%)     ρ     E[IRR] (%)  σ_IRR (%)
  500   30.83       3.94         0.0    33.457      4.756
 1000   30.86       3.77         0.1    31.887      4.233
 1500   30.93       3.76         0.2    31.886      4.073
 2000   30.97       3.87         0.3    31.551      4.087
 2500   31.03       3.89         0.4    31.472      4.056
 3000   31.02       3.88         0.5    31.176      3.908
 3500   31.02       3.82         0.6    31.075      3.971
 4000   31.03       3.78         0.7    31.019      3.957
                                 0.8    30.929      3.927
                                 0.9    30.571      3.708
                                 1.0    30.477      3.702

Figures (7.17), (7.18), (7.19) and (7.20) illustrate the cumulative distribution functions for the transitional correlation ρ = 0.5, the upper and lower bounds approximated from the analytical method, and those generated from a simulation of 4,000 iterations for discounted project cost, revenue, net present value and internal rate of return. The second validation also demonstrates that the cumulative distribution functions and the estimates for time and economic variables generated from the simulation are within the upper and lower bounds predicted by the analytical method.

Table 7.17 contains a comparison of the execution time for the simulation and the analytical method. Again, the computational economy of the analytical method is highlighted.

Table 7.17: Comparison of CPU times from Second Validation - Ex #1

       Simulation           Analytical Method
   #     CPU Sec.          ρ      CPU Sec.
  500      551            0.0      35.04
 1000     1100            0.1      35.19
 1500     1642            0.2      35.25
 2000     2185            0.3      35.98
 2500     2728            0.4      36.20
 3000     3272            0.5      36.31
 3500     3816            0.6      36.27
 4000     4361            0.7      36.34
                          0.8      36.32
                          0.9      36.89
                          1.0      39.65

The more significant observation is the wider bounds for derived variables when compared to the previous case. The only change from the first to the second validation is an increase in the coefficients of variation for work package durations. Therefore, the wider bounds are a direct result of the increase in the variance for work package durations and start times. This observation highlights the significance of work package duration and start time in economic risk quantification.

Figure 7.15: CDFs for Project Duration - Second Validation - Ex #1
Figure 7.16: CDFs for Project Duration - Second Validation - Ex #1
Figure 7.17: CDFs for Discounted Project Cost - Second Validation - Ex #1
Figure 7.18: CDFs for Discounted Project Revenue - Second Validation - Ex #1
Figure 7.19: CDFs for Project Net Present Value - Second Validation - Ex #1
Figure 7.20: CDFs for Project Internal Rate of Return - Second Validation - Ex #1
The analytical method permits the analyst to specify a transitional correlation (ρ) for decision making. The cumulative distribution functions for time and economic variables when ρ = 0.5 are included to demonstrate how, in addition to providing a risk quantification at the specified ρ, the analytical method can assess the sensitivity of that quantification by approximating the bounds. It must be stressed that ρ = 0.5 is used only as an example, because it is not possible to recommend a single value for ρ that can be used for all risk analyses of engineering projects. The analysis, however, can be conducted using the limiting values for ρ, (0, 1), as well as an intermediate value (say ρ = 0.5). This approach provides the analyst with additional insights and, as demonstrated by this example, it is still ten times faster than Monte Carlo simulation.

7.5.4 Discussion

The examples presented in this section validated the analytical method. In addition, the validations clearly demonstrated the computational economy of the analytical method when compared to Monte Carlo simulations. There were 164 random primary variables at the input level for both approaches. The two limiting cases that were used to validate the simulation process also confirmed the theoretical postulations made by the modified PNET algorithm.

The first validation demonstrated the ability of the analytical method to fit easily into the existing deterministic estimation approaches prevalent in the construction industry. This flexibility is important for a theoretical development to become a practical tool in the industry. By considering the deterministic estimates as the median values for the work packages, subjective probabilities can be elicited. This permits the analyst/experts in engineering construction to begin the risk quantification process from the familiar deterministic structure.

Table 7.18 contains a comparison of the deterministic and probabilistic estimates for constant, current and total dollar estimates for project cost. While the deterministic values and the expected values are comparable, it demonstrates that the deterministic values, on which most decisions are based at present, only have about a 50% probability of success. The quantification of the uncertainty associated with the estimates for project cost permits the contingency to be allocated on the probability of success of the project.

Table 7.18: Deterministic and Probabilistic Analyses of Project Cost

                       Deterministic    Probabilistic
                            ($)          E[PC] ($)    σ_PC ($)    √β₁     β₂
Constant Dollar Cost    124450100        126394711    14041896    0.095   2.61
Current Dollar Cost     137628834        139737616    15742136    0.093   2.59
Total Dollar Cost       151287416        153804634    17036142    0.096   2.61

In addition to validating the analytical method, the second validation demonstrated the significance of the variance of work package durations and start times to the derived economic variables. The bounds of the derived variables are wider when compared to the first validation. The cumulative distribution functions at the transitional correlation ρ = 0.5 illustrate the ability of the analytical method to quantify the economic variables for decision making.

Figures (7.21) and (7.22) depict the cumulative distribution functions for ρ = 0.5 and the upper and lower bounds for current and total project costs. Even though project duration has wide bounds (see figure 7.15), the bounds for current and total dollar project costs are relatively tight. The reason for this phenomenon is that, since the start time is one of the six variables in the function for work package cost, its significance (sensitivity) is reduced. This is further highlighted in the next example, where the start time is one of seventeen variables in the work package cost function.

Figure 7.21: CDFs for Current Dollar Project Cost - Second Validation - Ex #1
Figure 7.22: CDFs for Total Dollar Project Cost - Second Validation - Ex #1
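A rough first-order reading of equation (7.3) makes this dilution plausible (my illustration, not a derivation from the thesis). Holding the other variables fixed, the equity term of WPC_i scales with $e^{(\theta_c - y)T_{SC_i}}$, so

$$\frac{\partial\, WPC_i}{\partial T_{SC_i}} \approx (\theta_c - y)\, WPC_i
\quad\Rightarrow\quad
\frac{\sigma_{WPC_i \mid T_{SC_i}}}{E[WPC_i]} \approx |\theta_c - y|\;\sigma_{T_{SC_i}}.$$

With the rates expressed per month, |θ_c − y| is of the order of 0.01 per month here, so even a start-time standard deviation of several months contributes only a few percent to the coefficient of variation of cost, which is consistent with the tight cost bounds above.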
7.6 Second Example

The second example is a hypothetical engineering project of thirteen work packages and three revenue streams. The precedence network of the work packages is shown in figure (7.23). For illustrative purposes herein, the primary variables in the functions for the work package durations, costs and revenue streams are assumed to be stationary over the duration of the work package or revenue stream. In reality these primary variables (labor usage, productivity, inflation and interest rates, etc.) are time dependent. The assumption allows the development of simplified but realistic models.

Figure 7.23: The Project Network for the Second Example

The function for work package durations used in this example is as follows.

$$WPD_i = \frac{Q_i}{P_{L_i} L_i} \tag{7.5}$$

where Q_i is the quantity descriptor, P_L_i is the labour productivity rate and L_i is the labour usage.

The function for discounted work package cost (WPC_i) is as follows.

$$\begin{aligned}
WPC_i = f\Big[ & e^{(\theta_{L_i}-y)T_{SC_i}} \int_0^{T_{C_i}} C_{L_i} L_i\, e^{(\theta_{L_i}-y)\tau}\, d\tau
+ e^{(\theta_{M_i}-y)T_{SC_i}} \int_0^{T_{C_i}} C_{M_i} P_{L_i} L_i\, e^{(\theta_{M_i}-y)\tau}\, d\tau \\
+ & e^{(\theta_{E_i}-y)T_{SC_i}} \int_0^{T_{C_i}} C_{E_i} E_i\, e^{(\theta_{E_i}-y)\tau}\, d\tau
+ e^{(\theta_{I_i}-y)T_{SC_i}} \int_0^{T_{C_i}} I_{C_i}\, e^{(\theta_{I_i}-y)\tau}\, d\tau \\
+ & e^{(\theta_{S_i}-y)T_{SC_i}} \int_0^{T_{C_i}} \frac{S_i}{T_{C_i}}\, e^{(\theta_{S_i}-y)\tau}\, d\tau \Big] \\
+ (1-f)\,e^{(r-y)T_F}\Big[ & e^{\theta_{L_i}T_{SC_i}} \int_0^{T_{C_i}} C_{L_i} L_i\, e^{(\theta_{L_i}-r)\tau}\, d\tau
+ e^{\theta_{M_i}T_{SC_i}} \int_0^{T_{C_i}} C_{M_i} P_{L_i} L_i\, e^{(\theta_{M_i}-r)\tau}\, d\tau \\
+ & e^{\theta_{E_i}T_{SC_i}} \int_0^{T_{C_i}} C_{E_i} E_i\, e^{(\theta_{E_i}-r)\tau}\, d\tau
+ e^{\theta_{I_i}T_{SC_i}} \int_0^{T_{C_i}} I_{C_i}\, e^{(\theta_{I_i}-r)\tau}\, d\tau \\
+ & e^{\theta_{S_i}T_{SC_i}} \int_0^{T_{C_i}} \frac{S_i}{T_{C_i}}\, e^{(\theta_{S_i}-r)\tau}\, d\tau \Big]
\end{aligned} \tag{7.6}$$

where WPC_i is the discounted ith work package cost; C_L_i, C_M_i and C_E_i are the unit rates for labour, materials and equipment; L_i and E_i are the labour and equipment usage profiles; P_L_i is the labour productivity rate; I_C_i and S_i are the indirect and subcontractor costs, assumed as uniform constant dollar profiles; θ_L_i, θ_M_i, θ_E_i, θ_I_i and θ_S_i are the inflation rates for labour, materials, equipment, indirect cost and subcontractor cost; and T_SC_i and T_C_i are the work package start time and duration for the ith work package respectively. T_F is the time at which the repayment of interim financing is due for all work packages (assumed as the end of the construction phase), f is the equity fraction, and r and y are the interest and discount rates respectively. The time τ is measured from the start of the ith work package.

The function for revenue streams (NRS_i) is as follows.

$$NRS_i = \int_{T_{SR_i}}^{T_{SR_i} + T_{R_i}} \left[ R_{0i}\, e^{\theta_{R_i}(t - T_{SR_i})} - M_{0i}\, e^{\theta_{M_i}(t - T_{SR_i})} \right] e^{-yt}\, dt \tag{7.7}$$

where NRS_i is the discounted ith revenue stream, R_0i and M_0i are the constant dollar cash flows for the ith gross revenue and operation and maintenance cost, assumed as uniform profiles, T_SR_i and T_R_i are the start time and duration of the revenue stream, and θ_R_i, θ_M_i and y are the inflation and discount rates respectively.

The expected value, standard deviation, skewness and kurtosis for the individual primary variables in the functions for work package durations, costs, and revenue streams are given in Appendix F.
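The analytical method propagates moments through such functions with a truncated Taylor series expansion. Below is a minimal sketch of that idea applied to equation (7.5), assuming uncorrelated inputs and carrying only the first two moments (the thesis carries four, in the transformed uncorrelated space); the numbers are hypothetical.

```python
def wpd_moments(Q, P, L):
    """Approximate mean and variance of WPD = Q / (P * L).
    Q, P, L are (mean, variance) pairs for the quantity descriptor,
    labour productivity rate and labour usage."""
    (mQ, vQ), (mP, vP), (mL, vL) = Q, P, L
    g0 = mQ / (mP * mL)                        # function at the means
    # second-order mean correction: (1/2) * sum(d2g/dxi^2 * var_i);
    # d2g/dP2 = 2Q/(P^3 L) and d2g/dL2 = 2Q/(P L^3), d2g/dQ2 = 0
    mean = g0 + 0.5 * (2.0 * mQ / (mP**3 * mL) * vP
                       + 2.0 * mQ / (mP * mL**3) * vL)
    # first-order variance: sum((dg/dxi)^2 * var_i)
    var = ((1.0 / (mP * mL))**2 * vQ
           + (mQ / (mP**2 * mL))**2 * vP
           + (mQ / (mP * mL**2))**2 * vL)
    return mean, var

print(wpd_moments(Q=(1000.0, 2500.0), P=(10.0, 1.0), L=(12.0, 4.0)))
```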
A construction loan for 75% (f = 0.25) of the current dollar expenditure on construction is assumed. The statistics for the interest rate on the construction loan are E[r] = 7.537%, σ_r = 0.852%, √β₁ = 0.2 and β₂ = 2.5. A minimum attractive rate of return of 9% is used for the analysis and validation.

This section demonstrates the third and fourth validations of the analytical method. The third validation assumes all the primary variables to be uncorrelated. The fourth validation treats linear correlations between the primary variables. The example is extended to a third level where correlations at all levels of the project economic structure are treated.

7.6.1 Third Validation

A simulation for complete time and economic risk quantification was done for the third validation of the analytical method. For this simulation all of the variables were assumed to be uncorrelated. Tables 7.19, 7.20, 7.21 and 7.22 contain results from the simulation at 500 iteration intervals and statistics from the analytical method at different transitional correlation values for project duration, discounted project cost, discounted project revenue and project net present value. Table 7.23 contains the expected value and standard deviation for project internal rate of return from the simulation and the analytical method at different transitional correlation values.

Table 7.19: Statistics for Project Duration from Third Validation - Ex #2

        Simulation                     Analytical Method
   #     E[PD]    σ_PD          ρ      E[PD]    σ_PD    √β₁    β₂
         (mths)   (mths)               (mths)   (mths)
  500    30.73    4.24         0.0     29.44    4.69    0.34   2.77
 1000    30.85    4.40         0.1     29.44    4.69    0.34   2.77
 1500    30.89    4.37         0.2     29.44    4.69    0.34   2.77
 2000    30.92    4.42         0.3     29.44    4.69    0.34   2.77
 2500    30.94    4.46         0.4     29.78    4.53    0.4    2.9
 3000    30.98    4.50         0.5     31.84    4.06    0.3    2.9
 3500    30.98    4.55         0.6     31.84    4.06    0.3    2.9
 4000    31.01    4.62         0.7     31.94    3.98    0.3    2.8
                               0.8     32.04    3.90    0.4    3.1
                               0.9     32.42    3.69    0.4    3.2
                               1.0     32.42    3.69    0.4    3.2

Table 7.20: Statistics for Discounted Project Cost from Third Validation - Ex #2

       Simulation                       Analytical Method
   #    E[DPC] ($)   σ_DPC ($)     ρ     E[DPC] ($)   σ_DPC ($)   √β₁     β₂
  500   47747712     7272635      0.0    47656668     6800455     0.188   2.724
 1000   47549456     7255953      0.1    47656668     6800455     0.188   2.724
 1500   47539104     7097718      0.2    47656668     6800455     0.188   2.724
 2000   47484848     7290801      0.3    47656668     6800455     0.188   2.724
 2500   47574352     7313699      0.4    47642261     6798415     0.188   2.724
 3000   47586560     7287798      0.5    47548355     6784741     0.188   2.724
 3500   47592672     7249368      0.6    47548355     6784741     0.188   2.724
 4000   47623920     7317997      0.7    47544054     6784131     0.188   2.724
                                  0.8    47535878     6783307     0.188   2.724
                                  0.9    47519155     6780948     0.188   2.724
                                  1.0    47519155     6780948     0.188   2.724

Table 7.21: Statistics for Discounted Project Revenue from Third Validation - Ex #2

       Simulation                       Analytical Method
   #    E[DPR] ($)   σ_DPR ($)     ρ     E[DPR] ($)   σ_DPR ($)    √β₁      β₂
  500   69621328     13437693     0.0    70266290     13744103    -0.431   4.718
 1000   69777088     13114197     0.1    70266290     13744103    -0.431   4.718
 1500   69942976     13445692     0.2    70266290     13744103    -0.431   4.718
 2000   70071568     13397565     0.3    70266290     13744103    -0.431   4.718
 2500   70116800     13345129     0.4    70183320     13726601    -0.431   4.717
 3000   70091024     13409836     0.5    69672971     13624327    -0.429   4.706
 3500   69932000     13602733     0.6    69672971     13624327    -0.429   4.706
 4000   69941072     13626623     0.7    69648247     13618945    -0.429   4.706
                                  0.8    69624938     13613874    -0.429   4.706
                                  0.9    69529211     13593995    -0.428   4.705
                                  1.0    69529211     13593995    -0.428   4.705
Table 7.22: Statistics for Project NPV from Third Validation - Ex #2

       Simulation                       Analytical Method
   #    E[NPV] ($)   σ_NPV ($)     ρ     E[NPV] ($)   σ_NPV ($)    √β₁      β₂
  500   21872384     15545242     0.0    22609623     15334489    -0.327   4.098
 1000   22226560     15178882     0.1    22609623     15334489    -0.327   4.098
 1500   22411568     15381492     0.2    22609623     15334489    -0.327   4.098
 2000   22592032     15317318     0.3    22609623     15334489    -0.327   4.098
 2500   22546816     15415091     0.4    22541059     15317899    -0.327   4.097
 3000   22507776     15496218     0.5    22124616     15220217    -0.324   4.085
 3500   22338016     15561380     0.6    22124616     15220217    -0.324   4.085
 4000   22312432     15657844     0.7    22104194     15215127    -0.324   4.084
                                  0.8    22089060     15210220    -0.324   4.084
                                  0.9    22010056     15191377    -0.324   4.082
                                  1.0    22010056     15191377    -0.324   4.082

Table 7.23: Statistics for Project IRR from Third Validation - Ex #2

       Simulation                  Analytical Method
   #    E[IRR] (%)  σ_IRR (%)     ρ     E[IRR] (%)  σ_IRR (%)
  500   16.13       5.07         0.0    16.305      5.061
 1000   16.26       4.94         0.1    16.305      5.061
 1500   16.29       5.00         0.2    16.305      5.061
 2000   16.35       4.99         0.3    16.305      5.061
 2500   16.32       5.02         0.4    16.268      5.044
 3000   16.30       5.03         0.5    16.061      4.955
 3500   16.23       5.04         0.6    16.061      4.955
 4000   16.23       5.07         0.7    16.052      4.953
                                 0.8    16.046      4.952
                                 0.9    16.014      4.944
                                 1.0    16.014      4.944

Figure (7.24) illustrates the cumulative distribution functions for upper and lower bounds for project duration approximated from the analytical method and that generated from the simulation of 4,000 iterations. Figure (7.25) depicts, in addition to those in figure (7.24), the cumulative distribution functions for project duration at different transitional correlation values. Figures (7.26), (7.27), (7.28) and (7.29) illustrate the cumulative distribution functions for upper and lower bounds approximated from the analytical method and those generated from the simulation for discounted project cost, revenue, net present value and internal rate of return.

Figure 7.24: CDFs for Project Duration - Third Validation - Ex #2
Figure 7.25: CDFs for Project Duration - Third Validation - Ex #2
Figure 7.26: CDFs for Discounted Project Cost - Third Validation - Ex #2
Figure 7.27: CDFs for Discounted Project Revenue - Third Validation - Ex #2
Figure 7.28: CDFs for Project Net Present Value - Third Validation - Ex #2
Figure 7.29: CDFs for Project Internal Rate of Return - Third Validation - Ex #2

The third validation also demonstrates that the cumulative distribution functions and the estimates for expected values for time and economic variables generated from the simulation are within the upper and lower bounds predicted by the analytical method, thereby validating the analytical method. The bounds for the derived economic variables are extremely tight. These bounds reflect the sensitivity of the derived economic variables with respect to the start times of the work packages.

Table 7.24 contains the comparison of the execution time for the simulation and the analytical method. The computational economy of the analytical method is again highlighted. For this example the analytical method is about fifty times faster when compared to 1000 iterations from the Monte Carlo simulation.

Table 7.24: Comparison of CPU times from Third Validation - Ex #2

       Simulation           Analytical Method
   #     CPU Sec.          ρ      CPU Sec.
  500      861            0.0      28.64
 1000     1721            0.1      28.60
 1500     2577            0.2      28.67
 2000     3427            0.3      28.57
 2500     4275            0.4      27.97
 3000     5121            0.5      28.05
 3500     5969            0.6      28.11
 4000     6819            0.7      28.28
                          0.8      28.38
                          0.9      28.23
                          1.0      28.10

7.6.2 Fourth Validation

The fourth validation of the analytical method was also a complete time and economic risk quantification. The only change from the previous section is that the primary variables in the functions for work package durations and costs are considered to be correlated. The positive definite correlation matrices for the work package duration and cost functions were obtained using the process described in section 4.2.2. Since the function given by equation (7.5) is used to evaluate the work package durations, an identical positive definite correlation matrix was used for all the work package durations.
This simplification is also convenient when the correlations between the derived variables are approximated using the identified common (shared) primary variables. The positive definite correlation matrix for work package durations and the positive definite correlation matrix for work package costs used for all the work packages in this application are given in Appendix F. Even though the function for work package costs given by equation (7.6) has seventeen variables, the positive definite correlation matrix is only 14x14. The reason is that three variables - work package duration, start time, and project duration - are always pre-defined in the decomposed function for work package cost. Their moments are evaluated from the modified PNET algorithm. Therefore, the correlation matrix covers only the variables that are elicited as input primary variables. The computer program 'ELICIT' ensures that there is no confusion during the elicitation process by identifying the pre-defined variables as the first three variables of the decomposed function (see equation 6.7).
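Positive definiteness of an elicited correlation matrix can be verified mechanically; a Cholesky factorization succeeds exactly when the matrix is positive definite. A minimal check (my illustration, assuming numpy is available), applied here to the three-variable matrix from section 7.2.1:

```python
import numpy as np

def is_positive_definite(R):
    """True if the (symmetric) correlation matrix R is positive definite."""
    try:
        np.linalg.cholesky(np.asarray(R))
        return True
    except np.linalg.LinAlgError:
        return False

R = [[1.0, -0.48, 0.42],
     [-0.48, 1.0, -0.69],
     [0.42, -0.69, 1.0]]
print(is_positive_definite(R))   # True
```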
Table 7.25 contains results from the simulation at 1000 iteration intervals and statistics from the analytical method at different transitional correlation values for project duration. Figure (7.30) illustrates the cumulative distribution functions for upper and lower bounds for project duration and that generated from the simulation of 6,000 iterations. Figure (7.31) depicts the cumulative distribution functions for project duration at different transitional correlation values.

Table 7.25: Statistics for Project Duration from Fourth Validation - Ex #2

        Simulation                     Analytical Method
   #     E[PD]    σ_PD          ρ      E[PD]    σ_PD    √β₁    β₂
         (mths)   (mths)               (mths)   (mths)
  1000   29.34    3.05         0.0     29.31    4.59    0.37   2.81
  2000   29.41    3.00         0.1     29.31    4.59    0.37   2.81
  3000   29.44    3.04         0.2     29.31    4.59    0.37   2.81
  4000   29.48    3.17         0.3     29.31    4.59    0.37   2.81
  5000   29.51    3.22         0.4     29.57    4.46    0.4    2.9
  6000   29.47    3.20         0.5     31.65    4.03    0.3    2.9
                               0.6     31.65    4.03    0.3    2.9
                               0.7     31.77    3.94    0.4    3.1
                               0.8     31.83    3.88    0.4    3.0
                               0.9     32.21    3.68    0.4    3.1
                               1.0     32.21    3.68    0.4    3.1

Figure 7.30: CDFs for Project Duration - Fourth Validation - Ex #2
Figure 7.31: CDFs for Project Duration - Fourth Validation - Ex #2

The cumulative distribution function from the simulation is comparatively tight, with the upper part outside of the bounds predicted from the analytical method. When the standard deviations from Tables 7.19 and 7.25 are compared, the analytical method shows a small reduction while the simulation shows a significant dampening, which can be attributed to the approach used to treat correlations. The approach used does not distinguish between positive and negative correlations. This dampening caused the distribution to be outside the bounds. However, a study of the individual work package durations shows that there should not be a significant reduction in the variance for project duration.

Table 7.26 contains the expected values, standard deviations and the differences for individual work packages when the primary variables are uncorrelated and correlated. The differences in the expected values are all negligible. Except for work packages #6, 7, 8 and 10, the differences in the standard deviations are small. When these work packages are studied in the context of the paths in the project network (see next section) and their contributions to path variances, except for the third longest path, which has work packages 6 and 10, none of the other paths has more than one of the above four. In addition, their contributions to path variances are small. Hence, none of the paths can have a significant reduction in variance from the uncorrelated case to the correlated case.

Table 7.26: Statistics for Project Variables

        Expected Value (months)         Standard Deviation (months)
WP#     Uncor    Corr     Differ        Uncor    Corr     Differ
02      7.715    7.681    -0.44%        2.717    2.625    - 3.38%
03      4.971    4.952    -0.38%        1.311    1.267    - 3.35%
04      6.949    6.918    -0.45%        2.451    2.368    - 3.38%
05      3.363    3.356    -0.21%        1.046    1.047      0.1 %
06      3.44     3.363    -0.22%        1.118    0.90     -19.38%
07      1.732    1.687    -0.26%        0.667    0.56     -16.04%
08      6.634    6.442    -0.29%        2.251    1.702    -24.39%
09      5.812    5.795    -0.29%        2.039    1.999    - 1.96%
10      2.748    2.686    -0.23%        0.899    0.725    -19.35%
11      4.571    4.55     -0.46%        1.185    1.114    - 6.0 %
12      6.979    6.959    -0.29%        2.453    2.405    - 1.95%
13      6.797    6.794    -0.05%        2.399    2.442      1.79%
14      6.337    6.349     0.18%        2.372    2.442      2.95%

Tables 7.27, 7.28 and 7.29 contain results from the simulation at 1000 iteration intervals and statistics from the analytical method at different transitional correlation values for discounted project cost, discounted project revenue and project net present value. Table 7.30 contains the expected value and standard deviation for project internal rate of return from the simulation and the analytical method at different transitional correlation values.

Table 7.27: Statistics for Discounted Project Cost from Fourth Validation - Ex #2

       Simulation                       Analytical Method
   #    E[DPC] ($)   σ_DPC ($)     ρ     E[DPC] ($)   σ_DPC ($)   √β₁     β₂
 1000   46114784     5724537      0.0    46808547     6422711     0.214   2.742
 2000   46077568     5668257      0.1    46808547     6422711     0.214   2.742
 3000   46144304     5705700      0.2    46808547     6422711     0.214   2.742
 4000   46200736     5750056      0.3    46808547     6422711     0.214   2.742
 5000   46197600     5783185      0.4    46797312     6421195     0.214   2.742
 6000   46228352     5786760      0.5    46705300     6408483     0.214   2.742
                                  0.6    46705300     6408483     0.214   2.742
                                  0.7    46700307     6407809     0.214   2.742
                                  0.8    46693962     6407225     0.214   2.742
                                  0.9    46678295     6405120     0.214   2.742
                                  1.0    46678295     6405120     0.214   2.742
Table 7.28: Statistics for Discounted Project Revenue from Fourth Validation - Ex #2

       Simulation                       Analytical Method
   #    E[DPR] ($)   σ_DPR ($)     ρ     E[DPR] ($)   σ_DPR ($)    √β₁      β₂
 1000   70145200     13176377     0.0    70299596     13749690    -0.431   4.720
 2000   70439824     13459856     0.1    70299596     13749690    -0.431   4.720
 3000   70465632     13466430     0.2    70299596     13749690    -0.431   4.720
 4000   70314480     13685384     0.3    70299596     13749690    -0.431   4.720
 5000   70220368     13688517     0.4    70233575     13735779    -0.431   4.719
 6000   70297504     13691024     0.5    69720113     13633218    -0.429   4.708
                                  0.6    69720113     13633218    -0.429   4.708
                                  0.7    69690857     13626900    -0.429   4.708
                                  0.8    69674338     13623299    -0.429   4.708
                                  0.9    69582818     13604284    -0.428   4.707
                                  1.0    69582818     13604284    -0.428   4.707

Table 7.29: Statistics for Project NPV from Fourth Validation - Ex #2

       Simulation                       Analytical Method
   #    E[NPV] ($)   σ_NPV ($)     ρ     E[NPV] ($)   σ_NPV ($)    √β₁      β₂
 1000   24029376     14524955     0.0    23419049     15175809    -0.337   4.151
 2000   24368384     14635079     0.1    23419049     15175809    -0.337   4.151
 3000   24323088     14750314     0.2    23419049     15175809    -0.337   4.151
 4000   24106912     14934437     0.3    23419049     15175809    -0.337   4.151
 5000   24010976     14976197     0.4    23436263     15162564    -0.337   4.150
 6000   24053856     14972542     0.5    23014812     15064305    -0.334   4.137
                                  0.6    23014812     15064305    -0.334   4.137
                                  0.7    22990550     15058301    -0.334   4.137
                                  0.8    22980376     15054793    -0.334   4.137
                                  0.9    22904523     15036692    -0.334   4.135
                                  1.0    22904523     15036692    -0.334   4.135

Table 7.30: Statistics for Project IRR from Fourth Validation - Ex #2

       Simulation                  Analytical Method
   #    E[IRR] (%)  σ_IRR (%)     ρ     E[IRR] (%)  σ_IRR (%)
 1000   16.90       4.69         0.0    16.654      5.026
 2000   16.97       4.73         0.1    16.654      5.026
 3000   16.94       4.75         0.2    16.654      5.026
 4000   16.86       4.80         0.3    16.654      5.026
 5000   16.83       4.82         0.4    16.628      5.014
 6000   16.83       4.81         0.5    16.427      4.912
                                 0.6    16.427      4.912
                                 0.7    16.413      4.902
                                 0.8    16.408      4.898
                                 0.9    16.366      4.869
                                 1.0    16.366      4.869

Figures (7.32), (7.33), (7.34) and (7.35) illustrate the cumulative distribution functions for upper and lower bounds approximated from the analytical method and those generated from the simulation for discounted project cost, revenue, net present value and internal rate of return.

Figure 7.32: CDFs for Discounted Project Cost - Fourth Validation - Ex #2
Figure 7.33: CDFs for Discounted Project Revenue - Fourth Validation - Ex #2
Figure 7.34: CDFs for Project Net Present Value - Fourth Validation - Ex #2
Figure 7.35: CDFs for Project Internal Rate of Return - Fourth Validation - Ex #2

The estimates and cumulative distribution functions for time and economic variables are reasonably close to the predicted envelope of bounds. It must be noted that the bounds are again extremely tight. In addition to start time being one of the seventeen variables in the work package cost functions, the project network given by figure (7.23) is also small, with few interrelationships between work packages. The combination of these two factors increases the tightness of the bounds.

Table 7.31 contains the comparison of the execution time for the simulation and the analytical method. The computational economy of the analytical method is again highlighted.

Table 7.31: Comparison of CPU times from Fourth Validation - Ex #2

       Simulation           Analytical Method
   #     CPU Sec.          ρ      CPU Sec.
 1000     1702            0.0      29.00
 2000     3395            0.1      29.03
 3000     5085            0.2      29.12
 4000     6775            0.3      29.08
 5000     8465            0.4      29.11
 6000    10168            0.5      28.41
                          0.6      28.40
                          0.7      28.61
                          0.8      28.72
                          0.9      28.71
                          1.0      28.83

The execution times for the Monte Carlo simulation from the third and fourth validations are similar because the same computer program was used for both simulations. The only difference in the input was the use of identity matrices as the correlation matrices for the third validation and positive definite correlation matrices for the fourth.
One could argue that if the random number modification algorithm had not been included in the computer program for the third validation, the simulation could have been more efficient.

7.6.3 Correlations at All Levels of the Project

The analytical method can approximate the linear correlations between derived variables using the linear correlations between primary variables when the common (shared) variables are identified (see section 4.3). The common variables are defined as those variables of the same type having the same first four moments in the functional forms for two or more derived variables. Hence, correlation between work package durations, costs and revenue streams can be treated in the evaluation of the first four moments of project duration, cost and revenue. As demonstrated in this section, the contribution to the moments of the derived variables from these correlations can be significant.

It is not possible to duplicate this treatment for the Monte Carlo simulation from the available information. The simulation assumes that only the linear correlations between the primary variables in the functions are available (see figure 7.36). However, if it is possible to obtain the correlation matrix for the complete system, then simulation can treat all the correlations. For project cost in this example, one would have to elicit a (182 − l) × (182 − l) positive definite correlation matrix, where l is the total number of common variables in the functions for derived variables minus the number of sets of common variables.

[Figure 7.30: CDFs for Project Duration - Fourth Validation - Ex #2 (duration in months)]
[Figure 7.31: CDFs for Project Duration - Fourth Validation - Ex #2 (duration in months)]
[Figure 7.32: CDFs for Discounted Project Cost - Fourth Validation - Ex #2 (simulation n = 6000, all paths and longest path; cost in discounted $×10^6)]
[Figure 7.33: CDFs for Discounted Project Revenue - Fourth Validation - Ex #2 (simulation n = 6000, all paths and longest path; revenue in discounted $×10^6)]
[Figure 7.34: CDFs for Project Net Present Value - Fourth Validation - Ex #2]
[Figure 7.35: CDFs for Project Internal Rate of Return - Fourth Validation - Ex #2 (discount rate in %)]
[Figure 7.36: Correlation Matrix for the Complete System - correlation between work packages approximated by the analytical method utilizing the common (shared) variables between the functions for durations and costs; the figure contrasts the correlation information required for the Monte Carlo simulation with the correlation information available for it]

Even though linear correlation coefficients between all the work package durations are approximated, only the positive definite correlation matrices for the individual paths to a work package are necessary to evaluate the first four moments of its start time. For example, there are six paths to complete the project illustrated by figure (7.23). The positive definite correlation matrices for the six paths approximated by the analytical method are as follows. (The paths are ordered in decreasing mean path durations.)
Path #1 - Work Packages # 2, # 3, # 6, # 12, # 14:

R_Path#1 =
  [ 1.00   0.00   0.00   0.30   0.00 ]
  [ 0.00   1.00   0.53   0.00   0.59 ]
  [ 0.00   0.53   1.00   0.00   0.45 ]
  [ 0.30   0.00   0.00   1.00  -0.19 ]
  [ 0.00   0.59   0.45  -0.19   1.00 ]

Path #2 - Work Packages # 2, # 4, # 7, # 14:

R_Path#2 =
  [ 1.00   0.30   0.00   0.00 ]
  [ 0.30   1.00   0.00   0.79 ]
  [ 0.00   0.00   1.00   0.42 ]
  [ 0.00   0.79   0.42   1.00 ]

Path #3 - Work Packages # 2, # 3, # 6, # 10, # 13:

R_Path#3 =
  [ 1.00   0.00   0.00   0.00   0.00 ]
  [ 0.00   1.00   0.53   0.53   0.00 ]
  [ 0.00   0.53   1.00   0.36   0.00 ]
  [ 0.00   0.53   0.36   1.00   0.00 ]
  [ 0.00   0.00   0.00   0.00   1.00 ]

Path #4 - Work Packages # 2, # 3, # 5, # 9, # 13:

R_Path#4 =
  [ 1.00   0.00   0.00   0.92   0.00 ]
  [ 0.00   1.00   0.00   0.00   0.00 ]
  [ 0.00   0.00   1.00   0.00   0.00 ]
  [ 0.92   0.00   0.00   1.00   0.00 ]
  [ 0.00   0.00   0.00   0.00   1.00 ]

Path #5 - Work Packages # 2, # 3, # 6, # 11:

R_Path#5 =
  [ 1.00   0.00   0.00   0.00 ]
  [ 0.00   1.00   0.53   0.00 ]
  [ 0.00   0.53   1.00   0.00 ]
  [ 0.00   0.00   0.00   1.00 ]

Path #6 - Work Packages # 2, # 4, # 8:

R_Path#6 =
  [ 1.00   0.30   0.00 ]
  [ 0.30   1.00   0.00 ]
  [ 0.00   0.00   1.00 ]

The correlation matrix for the work package costs is, however, a 13 × 13 matrix. This is because all the work package costs are summed to evaluate the project cost. The positive definite correlation matrix for work package costs is as follows.

  [ 1.00  .29  .24  .26  .25  .23  .23  .25  .25  .29  .24  .23  .23 ]
  [  .29 1.00  .25  .30  .28  .26  .27  .29  .29  .35  .29  .27  .35 ]
  [  .24  .25 1.00  .26  .25  .23  .23  .25  .25  .29  .25  .23  .23 ]
  [  .26  .30  .26 1.00  .27  .24  .24  .26  .27  .47  .26  .24  .24 ]
  [  .25  .28  .25  .27 1.00  .15  .24  .25  .16  .30  .25  .24  .23 ]
  [  .23  .26  .23  .24  .15 1.00  .21  .23  .14  .28  .23  .22  .21 ]
  [  .23  .27  .23  .24  .24  .21 1.00  .23  .24  .28  .22  .22  .21 ]
  [  .25  .29  .25  .26  .25  .23  .23 1.00  .25  .30  .25  .23  .23 ]
  [  .25  .29  .25  .27  .16  .14  .24  .25 1.00  .31  .25  .24  .23 ]
  [  .29  .35  .29  .47  .30  .28  .28  .30  .31 1.00  .29  .28  .28 ]
  [  .24  .29  .25  .26  .25  .23  .22  .25  .25  .29 1.00  .23  .20 ]
  [  .23  .27  .23  .24  .24  .22  .22  .23  .24  .28  .23 1.00  .22 ]
  [  .23  .35  .23  .24  .23  .21  .21  .23  .23  .28  .20  .22 1.00 ]

Table 7.32 contains the expected values and standard deviations approximated by the analytical method for project duration, total dollar project cost, project net present value and internal rate of return at the transitional correlations ρ = 0, ρ = 0.5 and ρ = 1.0. The revenue streams were assumed to be uncorrelated because of the difficulty in identifying common variables. The correlation matrices given above were used in the evaluation of moments for project duration and costs.

Table 7.32: Statistics for Project Variables

                        ρ = 0                   ρ = 0.5                 ρ = 1.0
Project Variable    E[PV]       σ_PV        E[PV]       σ_PV        E[PV]       σ_PV
Duration            29.31       5.43        32.57       4.75        33.06       4.46
Cost (Tot $)        56955300    13650279    57850640    13869375    57984776    13902068
NPV                 23494481    17758202    22838442    17612327    22738804    17589760
IRR                 17.251      6.743       16.923      6.616       16.860      6.573

Tables 7.33 and 7.34 contain comparisons of the expected values and standard deviations approximated by the analytical method at the transitional correlations ρ = 0, ρ = 0.4 and ρ = 1.0 for project duration and current dollar project cost for three cases: when all the variables are uncorrelated (third validation case), when the primary variables in the functions for work package durations and costs are correlated (fourth validation case), and when the primary variables, work package durations and work package costs are correlated. The value ρ = 0.4 is used because the current dollar project cost at ρ ≥ 0.5 is the same as the upper bound (ρ = 1) estimate. There is only a marginal difference in the statistics for the project duration and the project cost from the first and second cases.
Even though the expected value for project duration is slightly larger when ρ = 0.4 and ρ = 1 for the third case, the expected values for project cost are similar for all three situations. Since project cost is the linear addition of work package costs, there is no effect from the correlation between work package costs; hence the identical expected values for project cost from the second and third cases. However, there are increases of nearly 18% and 75% in the standard deviations for project duration and cost due to the correlations between work package durations and costs, even though the correlations between work package costs are relatively small. This clearly illustrates that these correlations can be significant.

Table 7.33: Comparison of the Statistics for Project Duration

                           ρ = 0              ρ = 0.4            ρ = 1.0
Type of Correlations   E[PD]    σ_PD      E[PD]    σ_PD      E[PD]    σ_PD
Uncorrelated           29.44    4.69      29.78    4.53      32.42    3.69
Primary only           29.31    4.59      29.57    4.46      32.21    3.68
All Variables          29.31    5.43      32.22    4.82      33.06    4.46

Table 7.34: Comparison of the Statistics for Current Dollar Project Cost

                           ρ = 0                  ρ = 0.4                0.5 ≤ ρ ≤ 1.0
Type of Correlations   E[PC]       σ_PC       E[PC]       σ_PC       E[PC]       σ_PC
Uncorrelated           54129602    7584872    54145288    7588666    54158049    7589427
Primary only           53485125    7285335    53500807    7289218    53513534    7289989
All Variables          53485125    12776812   53500807    12781905   53513534    12784827

Figure (7.37) depicts the cumulative distribution functions for project duration at different transitional correlation values approximated from the analytical method. Figures (7.38), (7.39) and (7.40) depict the cumulative distribution functions for upper and lower bounds approximated from the analytical method for total dollar project cost, project net present value and internal rate of return. The linear correlations between the primary variables in the functions for work package durations and costs, and the linear correlations between work package durations and work package costs, respectively, are treated.

[Figure 7.37: CDFs for Project Duration (longest path and all paths at ρ = 0.3, 0.4, 0.5 and 0.6; duration in months)]
[Figure 7.38: CDFs for Total Dollar Project Cost (all paths and longest path; cost in total $×10^6)]
[Figure 7.39: CDFs for Project Net Present Value (all paths and longest path; NPV in $×10^6)]
[Figure 7.40: CDFs for Project Internal Rate of Return (all paths and longest path; discount rate in %)]

7.6.4 Discussion

The precedence network for the engineering project used as the example in this section was small (see figure 7.23), with few interrelationships between work packages. This feature had both advantages and disadvantages. The advantages were that, because of its size, it was possible to elaborate the work package durations and costs to detailed functions, thereby demonstrating the full potential of the analytical method. Also, it was possible to illustrate the treatment of correlations at all levels of the project economic structure.
For example, if the precedence network were highly interrelated with a large number of paths to complete the project, as in the first example, it would not have been feasible to illustrate the positive definite correlation matrices for work package durations on individual paths and for work package costs. The disadvantage is that the elaboration of work package cost functions and the few interrelationships between work packages combined to approximate extremely tight bounds for economic variables, thereby hampering the validation process.

There were 210 random primary variables at the input level. The comparisons of execution times for the simulation and the analytical method highlighted the computational economy of the analytical method. The treatment of correlations between work package durations and between work package costs clearly demonstrated their significance.

7.7 Sensitivity Analysis and Contingency

This section will briefly discuss the different ways in which the analytical method can perform sensitivity analysis, and use one of them to outline a method to distribute the contingency allocated to a derived variable to its primary variables. Current dollar project cost is used as the example for the derived variable.

7.7.1 Sensitivity Analysis

The concept of sensitivity analysis is simple. If a change in a primary variable has little effect on the derived variable, then the estimate for the derived variable is not likely to depend to any great extent on the accuracy of the estimate for that primary variable. On the other hand, if a change in a primary variable produces a large change in the estimate for the derived variable, then the uncertainty surrounding that primary variable may well be a significant consideration when evaluating the derived variable.

The sensitivity of a primary variable is measured by the total sensitivity coefficient for that variable. For a functional relationship given by Y = g(X), the sensitivity of the derived variable with respect to the primary variables is given by (Russell, 1985),

\frac{\Delta Y}{Y} = \sum_{i=1}^{n} S_i \frac{\Delta X_i}{X_i}    (7.8)

where ΔY/Y and ΔX_i/X_i are the percent changes in Y and X_i respectively, and S_i is the total sensitivity coefficient of X_i. For the sensitivity plot, S_i is the gradient of the sensitivity line relating percent change of X_i to percent change in Y. The total sensitivity coefficient S_i is defined as (Russell, 1985),

S_i = \frac{\partial Y}{\partial X_i} \cdot \frac{X_i}{Y}    (7.9)

where ∂Y/∂X_i is the sensitivity coefficient of Y with respect to X_i.

Since moment analysis is based on the truncated Taylor series expansion of g(X), the partial derivatives with respect to primary variables should be evaluated. However, the analytical method transforms the primary variables X to Z and g(X) to G(Z) prior to using the Taylor series expansion. Even though the sensitivity coefficients ∂Y/∂X_i are not evaluated by the analytical method, it still evaluates ∂G/∂Z_i, the sensitivity coefficients with respect to the transformed variables. Hence, the analytical method has an in-built sensitivity analysis process, whereby the sensitivity coefficients either increase or decrease the contribution of each term, depending on the importance of each transformed variable to the derived variable.

Nevertheless, the sensitivity plot of ΔY/Y versus ΔX_i/X_i can be developed by obtaining a range of outputs at different percent changes of X_i. This can be a rather long process.
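If the plot is developed by brute force in this way, each total sensitivity coefficient can be estimated directly from equations (7.8) and (7.9) by perturbing one primary variable at a time. A minimal sketch follows, with a purely hypothetical work package cost function; the function and its parameter values are illustrative assumptions, not taken from the example project.

```python
import numpy as np

def total_sensitivity(g, x0, rel_step=0.01):
    """Estimate S_i = (dY/dX_i)(X_i/Y), equation (7.9), by central
    differences: the percent change in Y per percent change in X_i."""
    x0 = np.asarray(x0, dtype=float)
    y0 = g(x0)
    S = np.empty(len(x0))
    for i in range(len(x0)):
        up, dn = x0.copy(), x0.copy()
        up[i] *= 1 + rel_step
        dn[i] *= 1 - rel_step
        S[i] = (g(up) - g(dn)) / (2 * rel_step * y0)  # (dY/Y)/(dX_i/X_i)
    return S

# Hypothetical cost function: quantity * unit rate * (1 + escalation)
g = lambda x: x[0] * x[1] * (1 + x[2])
print(total_sensitivity(g, [1000.0, 45.0, 0.08]))  # ~ [1.0, 1.0, 0.074]
```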
Since the analytical method is efficient and computationally economical, such a plot can nevertheless be developed if desired. A similar sensitivity analysis can be performed on the subjective estimates for primary variables. If the analyst requires a sensitivity analysis on the subjective estimates, again a sensitivity plot can be developed from a range of outputs at percent changes of the subjective estimates. Since the objectives of this thesis do not require the validation of the input primary variables, such a study is not presented.

The third sensitivity analysis performed by the analytical method is on the transitional correlation (ρ) specified by the analyst. The bounds for time and economic variables recognize the high degree of uncertainty associated with the decisions that have to be made during the feasibility analysis. By the definition of risk analysis, the quantification of risk for a specified transitional correlation should encompass the uncertainty of the assumed scenarios. However, the bounds add further reliability to the quantification because they are the true analytical bounds for those assumed scenarios.

The fourth way in which the analytical method can perform sensitivity analysis is the basis for the method to distribute the contingency allocated to a derived variable to its primary variables. The derived variables at the project performance level are all linear additive and the sensitivity coefficients with respect to primary variables are equal to one. When the uncertainty in the derived variable is due to the uncertainty of the primary variables alone (no effects due to correlation), the variance of Y is the summation of the variances of the primary variables. The allocated contingency can then be distributed to the primary variables on the basis of the individual percentage contributions to the total uncertainty of the derived variable.

7.7.2 Distribution of Contingency

The contingency is generally defined as the amount included in an estimate to cover the overruns due to unforeseen items and events in the defined project scope. Since this allocation is done for derived variables such as project duration or cost, its management is important. The ability to distribute the contingency to work packages provides a logical basis to manage it. The objective of this section is to demonstrate an analytical method to distribute the contingency.

Inyang (1983) derived the contingency (C) as,

C = X_C - E_B    (7.10)

where X_C is the target cost and E_B is the base estimate cost. He preferred the base estimate cost to the expected value used by Yeo (1982), because it was necessary to assume that the project cost was normally distributed to derive the contingency and because the base estimate cost is always smaller than the expected value. However, the target cost X_C was not related to any probability of success (or failure). As highlighted in the introduction, institutions such as the World Bank now recommend the use of probabilities of success (or failure) for performance variables. This thesis derives the contingency (C) as,

C = X_p - E[PC]    (7.11)

where X_p is the cost estimate to achieve a desired probability of success and E[PC] is the expected value of project cost.
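Equation (7.11) is read directly off the inverse of the cumulative distribution function for project cost. The thesis obtains that CDF from the Pearson family; in the sketch below a normal approximation is assumed purely to keep the illustration self-contained, so the resulting contingencies differ somewhat from the Pearson-based values reported in Table 7.35.

```python
from statistics import NormalDist

def contingency(expected_cost, std_cost, p_success):
    """C = X_p - E[PC], equation (7.11), with X_p taken from the inverse CDF.
    A normal CDF is assumed here for illustration; the thesis uses the
    Pearson family fitted to the first four moments."""
    x_p = NormalDist(expected_cost, std_cost).inv_cdf(p_success)
    return x_p - expected_cost

# 'All variables correlated' case at 0.5 <= rho <= 1.0 (Table 7.34)
E_PC, s_PC = 53_513_534, 12_784_827
for p in (0.75, 0.90, 0.95):
    print(f"p = {p:.2f}:  C = {contingency(E_PC, s_PC, p):12,.0f}")
```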
Consider the current dollar project cost at the transitional correlation of ρ = 0.5 for the second example. Table 7.35 contains the values for X_p, C and the percentage of the contingency to the expected value (C/E[PC] × 100) for Pr.[PC] < 0.75, Pr.[PC] < 0.9 and Pr.[PC] < 0.95 for two cases. The first case has considered only the correlation between primary variables (the correlation treatment that the Monte Carlo simulation can duplicate) and the second has treated the correlation between primary variables, work package durations and work package costs.

The expected values for project cost from both cases are identical (see Table 7.34). However, the variance for the second case has increased by about 200%. This is reflected in the values for X_p and C, where to achieve the same probability of success the contingencies have to be increased by about 80%. For example, if the contingency was set at a 90% probability of success using the results from the first case, in reality the project cost has only about a 75% probability of success. This example again highlights the significance of the correlation between work package costs. When the contingencies are compared as percentages of the expected value, the insufficiency of the traditional allocations of 10% to 15% is clearly demonstrated.

Table 7.35: X_p, C and C/E[PC] for Different Probabilities of Success

                      Primary Variables Only            Primary & Derived Variables
Scenario              X_p         C           %         X_p         C           %
Pr.[PC] < 0.75        58470505    4956971     9.26      62088867    8575333     16.0
Pr.[PC] < 0.9         63243785    9730251     18.2      70581087    17067553    31.9
Pr.[PC] < 0.95        66037795    12524261    23.4      75632830    22119296    41.3

The main advantage of this definition is that the contingency distributed to individual work packages can be used to predict their probabilities of success (or failure). This provides the benchmark to manage the contingency. Since the analytical method evaluates the expected value of project cost as the summation of all the expected values of work package costs, equation (7.11) can be re-written as,

X_p = \sum_{i=1}^{n} \left( E[WPC_{CD_i}] + CON_i \right)    (7.12)

where E[WPC_{CD_i}] is the expected value of the ith work package cost and CON_i is the contingency distributed to the ith work package on the basis of its percentage contribution to the variance of project cost. Then E[WPC_{CD_i}] + CON_i is the amount available for the cost of defined scope and unforeseen items and events of the ith work package. The probability of success of this amount can be measured from the cumulative distribution function for the ith work package cost. Not only does it provide a benchmark to manage the contingency, but it also allows the project manager to transfer contingency between work packages on a logical basis.

Consider the second case in Table 7.35. The derived variable is the current dollar project cost while the primary variables are work package costs. The expected values, standard deviations, coefficients of variation, skewness, kurtosis and the percentage contributions to the variance of the project cost from individual work package costs are given in Table 7.36. The percentage contribution to the variance of project cost is evaluated from the following function:

\%\ \text{Contribution from } WPC_{CD_i} = \frac{\mu_2(WPC_{CD_i})}{\sum_{i=1}^{n} \mu_2(WPC_{CD_i})} \times 100    (7.13)

where μ₂(WPC_{CD_i}) is the variance of the ith work package cost.

Figure (7.41) depicts the cumulative distribution function for the current dollar project cost at ρ = 0.5, and the X_p values given in Table 7.35.
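Equations (7.12) and (7.13) translate into a short computation. The sketch below uses the work package statistics of Table 7.36 and the Pr.[PC] < 0.9 contingency from Table 7.35; it reproduces, up to rounding, the percentage contributions of Table 7.36 and the distributed contingencies in the middle columns of Table 7.37.

```python
import numpy as np

# Expected values and standard deviations of work package costs (Table 7.36)
E_wp = np.array([3894677, 6137617, 7877801, 1723771, 917295, 2146194, 3667912,
                 4080786, 1728653, 2370725, 8220980, 2397196, 8349926], float)
s_wp = np.array([1575760, 2237672, 3215341, 701822, 353412, 959195, 1486228,
                 1677239, 697393, 852850, 3370799, 1037134, 3886408], float)

share = s_wp**2 / (s_wp**2).sum()   # fractional contributions, equation (7.13)
C = 17_067_553                      # contingency for Pr.[PC] < 0.9 (Table 7.35)
CON = C * share                     # distributed contingency CON_i
total = E_wp + CON                  # E[WPC_i] + CON_i, as in equation (7.12)

for wp, c, t in zip(range(2, 15), CON, total):
    print(f"WP {wp:02d}:  CON = {c:10,.0f}   E[WPC]+CON = {t:12,.0f}")
```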
Table 7.36: Statistics for Current Dollar Work Package Cost - Ex #2

WP#   E[WPC]     σ_WPC      C.O.V %   √β₁      β₂       % Cont.
02    3894677    1575760    40.46     0.381    2.174    4.67
03    6137617    2237672    36.45     0.818    2.802    9.42
04    7877801    3215341    40.81     0.387    2.179    19.45
05    1723771    701822     40.71     0.503    2.304    0.93
06    917295     353412     38.52     1.317    4.082    0.25
07    2146194    959195     44.69     1.001    3.204    1.73
08    3667912    1486228    40.52     1.572    4.967    4.15
09    4080786    1677239    41.10     0.406    2.198    5.29
10    1728653    697393     40.34     1.256    3.892    0.92
11    2370725    852850     35.97     0.344    2.142    1.37
12    8220980    3370799    41.00     0.404    2.196    21.38
13    2397196    1037134    43.26     0.859    2.885    2.02
14    8349926    3886408    46.54     0.578    2.401    28.42

The values for the contingencies distributed on the basis of percentage contributions, the total amount available for the defined scope and unforeseen items, and the probabilities of success for individual work packages based on those total amounts for the three scenarios of Pr.[PC] < 0.75, Pr.[PC] < 0.9 and Pr.[PC] < 0.95 are given in Table 7.37. Figure (7.42) depicts the values for work package # 4 from Table 7.37. This work package was selected because it is an early work package in the network that has a high contribution to the variance of project cost - a typical example of where things could go wrong.

The values in Table 7.37 show that allocating contingency on the probability of success (or failure) of a global criterion such as project cost may not necessarily reflect the true situation, because none of the work package costs achieved the probability of success desired for the project cost. Analytically, this can be explained by the fact that risks decrease when they are aggregated. In terms of practical situations, the importance of distributing the contingency becomes apparent.

Table 7.37: Distributed Contingency and Probability of Success

      Pr.[PC] < 0.75                Pr.[PC] < 0.9                 Pr.[PC] < 0.95
#     Contn.    Total      Pr.      Contn.    Total      Pr.      Contn.    Total      Pr.
02    400468    4295145    .61      797055    4691732    .68      1032971   4927648    .72
03    807796    6945413    .67      1607763   7745380    .76      2083637   8221254    .80
04    1667902   9545703    .68      3319639   11197440   .81      4302203   12180004   .87
05    79752     1803523    .58      158728    1882499    .61      205710    1929481    .63
06    21440     938735     .62      42672     959967     .64      55300     972595     .65
07    148353    2294547    .63      295269    2441463    .67      382664    2528858    .69
08    355876    4023788    .68      708303    4376215    .75      917951    4585863    .77
09    453635    4534421    .61      902873    4983659    .69      1170110   5250896    .74
10    78893     1807546    .63      157021    1885674    .66      203497    1932150    .68
11    117482    2488207    .57      233825    2604550    .61      303034    2673759    .63
12    1833406   10054386   .69      3649043   11870023   .82      4729105   12950085   .89
13    173221    2570417    .62      344764    2741960    .66      446810    2844006    .69
14    2437109   10787035   .73      4850598   13200524   .85      6286304   14636230   .91

The contingency was distributed on the assumption that those work package cost variances which contribute most to the variance of project cost cause most of the uncertainty in the project cost. Therefore, the distribution ensured that the work packages with higher contributions had the greater probability of success and vice versa. Having established the initial benchmarks to reflect the reasoning for the distribution, it is now possible to transfer some of the contingency from work package costs that have a greater probability of success to those which have a greater probability of failure.

Unlike project cost, the distribution of contingency for project duration is not straightforward, because the project duration is not a summation of all the work package durations.
However, the modified PNET algorithm does permit a basis for an approach. Since the variances of all the paths to complete the project are evaluated when the transitional correlation ρ = 1, a sensitivity analysis similar to that adopted for the project cost can be utilized.

[Figure 7.41: CDF for Current Dollar Project Cost (cost in current dollars)]
[Figure 7.42: CDF for Current Dollar Cost for Work Package #4 (cost in current $×10^6)]

Consider the jth path of the precedence network. The variance for the jth path, when the correlations between work package durations are not considered, is given by,

\mu_2(PD_j) = \sum_{i=1}^{n_j} \mu_2(WPD_{ij})    (7.14)

where μ₂(WPD_{ij}) is the variance of the ith work package duration on the jth path. The contingency allocated for project duration can then be distributed to the work package durations on the jth path based on the individual percent contributions to the variance of the jth path duration. Then, similar to work package costs, it is possible to measure the probability of success of individual work package durations for defined scope and unforeseen items and events. However, since a work package can be on more than one path, it can have a number of distributed contingency durations. In such situations the lowest distributed contingency should be assumed as the duration for unforeseen items and events. The measured probabilities of success will then be the lowest for every work package duration in the network, again providing a benchmark to manage the contingency allocated for project duration.

7.8 Summary

This chapter described the validations and the applications of the analytical method developed in the previous chapter. The validations of the analytical method were performed by using Monte Carlo simulations. The simulations were used because, at present, simulation-based models are considered to be the "state-of-the-art" for quantification of time and economic risks in large engineering projects.

The second section of this chapter derived the random number modification process used to treat correlations between variables in Monte Carlo simulation. The number of iterations for an "acceptable" simulation should be based on the standard error for the expected value and standard deviation, and the error band for the cumulative distribution function generated from the simulation.
The simulations for both cases behaved as predicted thereby validating the Monte Carlo simulation process. The first example for validating the analytical method was an actual feasibility study of a mineral project. This example had 164 random primary variables at the input level and four simulations were performed. Two were for the case when the coefficients of variations for work package durations were low (first validation), while the others were for the case when the coefficients of variations for work package durations were approximately 40% (second validation). Chapter 7. Validations and Applications 229 In both instances, the cumulative distribution functions and the estimates for ex-pected values for derived time and economic variables generated from simulations were within the upper and lower bounds predicted by the analytical method. Thereby, val-idating the analytical method. The computational economy of the analytical method was highlighted from the comparisons of execution times. A brief comparison with deterministic results and when p = 0.5 were done for the first and second validations respectively. The second example was an hypothetical engineering project developed to demon-strate the full potential of the analytical method. The project network was small (thirteen work packages) with few interrelationships between work packages. Elab-orate functions were used for work package durations, costs and revenue streams. There were 210 random primary variables at the input level. Two complete simula-tions for time and economic risk quantifications were done. The first assumed that all the primary variables were uncorrelated (third vahdation) while the second assumed that the primary variables in the functions for work package durations and costs were correlated (fourth vahdation). The bounds for derived economic variables predicted by the analytical method were extremely tight, because the work package start time was one of the seventeen variables in the function for work package cost and because of the few interrelation-ships between work packages. The simulations demonstrated the validity of the ana-lytical method by generating estimates and cumulative distribution functions similar to those from the analytical method. The third section demonstrated the treatment of hnear correlations between work package durations and between work package costs when evaluating the moments for project duration and costs. The analytical method approximates these correla-tions using the hnear correlations between the primary variables when the common Chapter 7. Validations and Apphcations 230 (shared) variables are identified. The positive definite correlation matrices developed by the analytical method were given. The Monte Carlo simulation does not have the capability to duplicate this treatment. A comparison of the statistics and the contin-gencies that should be allocated to achieve a desired probability of success highlighted the significance of the effect of these correlations. The standard deviations for project duration and current dollar cost increased nearly 18% and 75% respectively. The final section described the four ways in which the analytical method can and/or do perform sensitivity analyses, and used the fourth approach to develop an analytical basis to distribute the contingency allocated at a desired probability of success. 
The distribution of contingencies allocated for Pr.[PC] < 0.75, Pr.[PC] < 0.9 and Pr.[PC] < 0.95 of current dollar project cost showed that none of the work packages achieved the same probability of success. The distribution is biased towards the work package costs with variances that contribute most to the project variance, giving them the greatest probability of success.

Overall, this chapter demonstrated the validity and the computational economy of the analytical method in the quantification of time and economic variables in large engineering projects.

Chapter 8

Conclusions and Recommendations

8.1 Conclusions

The primary objectives of this thesis were to develop an analytical method for economic risk quantification during feasibility analysis for large engineering projects and to computerize the method to explore its behavior, to validate it and to test its practicality in the measurement of uncertainty of decision variables. The secondary objective was to lay the foundation for obtaining the input data necessary to make the analytical method a practical tool for the construction industry. The main conclusions from the developments of this thesis are as follows.

1. The analytical method is a comprehensive alternative to Monte Carlo simulation for the quantification of time and economic risks in large engineering projects.

2. The start times of work packages and revenue streams evaluated from the analysis of the precedence network provided the link to model the interaction of time, cost and revenue throughout the life cycle of a project.

3. The definition of the project economic structure and the freedom to use any type of functional form for work package durations, costs and revenue streams provided the freedom to model a project realistically to any level of detail using any number of variables.

4. The risk measurement framework is suitable for systems where pre-determined functional forms are available, data limitations exist and the decisions are not based on extreme probabilities.

5. The reliance on subjective probabilities to obtain data for the primary variables at the input level recognized the data limitations that exist during the feasibility analysis. The elicitation of accurate, calibrated and coherent subjective probabilities as the measurement of expert belief incorporated the theoretical requirements into the practical process.

6. It was concluded that when eliciting subjective estimates for duration, neither the holistic nor the decomposed estimation was the "better" approach.

7. The consideration of multiple paths of the project network provided a more realistic evaluation of the statistics for work package start time. In addition, the modified PNET algorithm provided the basis to evaluate the true analytical bounds for derived time and economic variables.

8. The uncertainty of the project performance and decision variables was quantified by consistently utilizing the moment analysis approach with the Pearson family of distributions.

9. The correlations between variables were identified as an important feature of this problem. The variable transformation approach developed to treat the correlations between primary variables and between derived variables was found to be accurate and robust.
10. The elicitation of positive definite correlation matrices for primary variables in the functional forms at the input level incorporated an important theoretical requirement into the practical application.

11. The approximation and the treatment of correlations between work package durations, between work package costs and between revenue streams demonstrated the ability of the analytical method to go beyond the capabilities of the Monte Carlo simulation process.

12. It was concluded that during the feasibility analysis the work package concept can be utilized as the approach to obtain intermediate milestone information to set realistic targets for performance.

13. The quantification of uncertainty of project time and economic variables provided the basis to answer such strategic questions as the setting up of the contingency for a probability of success (or failure) and the reliability of the "go - no go" decision. The individual contributions to the overall uncertainty were used to distribute the contingency for project variables to work packages. It was found that the probability of success predicted at the project level was not achieved at the work package level.

14. When the starting points were identical, the results from the Monte Carlo simulations were within the upper and lower bounds predicted by the analytical method. From the validations it was concluded that the analytical method had the flexibility to model and evaluate the derived time and economic variables of a project accurately and economically.

The analytical method and the computer programs developed from this research achieved the objectives of this thesis. However, in terms of the total process of risk management for large engineering projects, these developments are only the beginning. Unless there is an efficient approach to quantify time and economic risks, it is impossible to respond to the identified risks. On the other hand, until the area of risk response is developed, the quantifications are of little use. The recommendations for future research briefly highlight the scope of the work that is necessary to make this development a practical tool for engineering construction.

8.2 Recommendations for Future Work

Recommendations for future work are identified under three sections, namely, the analytical method, computer programs and the risk management process.

8.2.1 Analytical Method

The analytical method developed in this thesis consisted of a number of major building blocks - project economic structure; risk measurement framework; elicitation of subjective probabilities and positive definite correlation matrices; treatment of correlations between variables; and the modified PNET algorithm.

1. Project Economic Structure: It is recommended that a suite of time, cost and revenue estimating relationships at the work package/revenue stream level be developed. This would significantly increase the ability of the analytical method to model large engineering projects realistically. The publications by Tanchoco et al. (1981) and Buck (1989) are useful starting points.

2. Risk Measurement Framework: At present the approximations for the first four moments of the derived variable consider terms only up to the fourth order. However, since all the primary variables are approximated to Pearson type distributions, it is possible to obtain the higher order moments from the recurrence property of the Pearson family.
It is recommended that, with practical experience in the elicitation of subjective probabilities, terms up to the eighth order be included. This would ensure more accurate approximations for the first four moments of the derived variables at the work package/revenue stream level because all of the necessary terms are included.

3. Elicitation of Subjective Probabilities: The development in this thesis was the foundation for obtaining input data. The next stage should concentrate on building up experience from field applications, refining and validating the elicitation approach. To this end, a complete automation and pre-testing of the process is recommended. With experience from field applications, the process can be refined based on the performance of analysts and experts. Also, the development of calibration curves for validation is recommended. The publications by Budescu and Wallsten (1987), Phillips (1987), Wallsten and Budescu (1983), Murphy and Winkler (1984), and Wright and Ayton (1987) are useful starting points.

4. Elicitation of Correlation Matrices: A more consistent approach to elicit the correlation coefficients between variables is necessary. It is recommended that effort should be devoted towards developing questions that would better capture the expert's knowledge about correlated variables. The publications by Inyang (1983), Hull (1977), Kadane et al. (1980) and Keefer and Bodily (1983) are good starting points. The routine in the interactive computer program "ELICIT" that ensures the positive definiteness of the correlation matrix should be further refined with a better user interface.

5. Correlations between Variables: The variable transformation approach described in this thesis was found to be both accurate and robust in the treatment of correlations between variables. More studies to test and to further understand the transformation are recommended. The approximation and the treatment of correlations between derived variables require further studies to understand the benefits from the transformation.

6. Modified PNET Algorithm: At present the modified algorithm treats only finish to start = 0 relationships. The extension of the modified PNET algorithm to treat overlapping relationships will increase the versatility of the modelling capability. This extension, however, requires the development of an algorithm to treat the correlations between work package durations in overlapping relationships. It is strongly recommended that future efforts be devoted to developing such an algorithm. Ideally, a single value for the transitional correlation ρ that can be used for all economic risk analyses of engineering projects should be recommended. However, at present it is not possible to make this recommendation. Efforts should be devoted to deriving such a value. The study of the behavior of the modified PNET algorithm for highly interrelated networks versus those with few relationships (linear networks in pipeline or highway projects) is the logical starting point.

8.2.2 Computer Programs

One of the primary objectives of this thesis was to computerize the analytical method to explore its behavior, to validate it and to test its practicality in the measurement of uncertainty. The two computer programs developed by this research facilitated the achievement of this objective.
While the computer programs were not meant to be software development, they are a useful starting point for a software package. At present, both programs, "ELICIT" and "TIERA", lack sophistication, especially in the area of user friendliness. It is recommended that efforts be devoted to achieving a higher degree of sophistication, for this development to become a practical tool for decision makers in engineering construction.

8.2.3 Risk Management Process

In the introduction, the process of risk identification, risk quantification and risk response was identified as the most suitable approach for risk management in engineering construction. This thesis presented a computationally economical approach that can be used to develop the basis for decision makers to respond to the identified risks. Until the area of risk response is developed, the quantifications of derived time and economic risks are of little use. The next stage of this research should concentrate on developing strategies to respond to the quantified risks. It is strongly recommended that efforts be devoted towards this end. The extensive research done by the Project Management Group at the University of Manchester Institute of Science and Technology (UMIST) can be used as the starting point. The publications by Perry and Hayes (1985a), (1985b), Hayes et al. (1986), and Howard (1988) should help this process. It is strongly recommended that efforts should be devoted towards obtaining industry collaboration for applications of this development.

Bibliography

[1] Ahuja, H.N., & Arunachalam, V. (1984). "Risk Evaluation in Resource Allocation", Journal of Construction Engineering & Management, Vol.110, No.3, pp.324-336.

[2] Ahuja, H.N., & Nandakumar, V. (1985). "Simulation Model to Forecast Project Completion Time", Journal of Construction Engineering & Management, Vol.111, No.4, pp.325-342.

[3] Amos, D.E., & Daniel, S.L. (1971). "Tables of Percentage Points of Standardized Pearson Distributions", Research Report # SC-RR-710348, NTIS, Virginia, U.S.A.

[4] Ang, A.H-S., Abdelnour, J., & Chaker, A.A. (1975). "Analysis of Activity Networks under Uncertainty", Journal of Engineering Mechanics Division, ASCE, Vol.101, No.EM4, pp.373-387.

[5] Ang, A.H-S., & Tang, W.H. (1975). Probability Concepts in Engineering Planning and Design, John Wiley & Sons, New York.

[6] Ashley, D.B. (1980a). "Construction Joint Ventures", Journal of the Construction Division, ASCE, Vol.106, No.CO3, pp.267-280.

[7] Ashley, D.B. (1980b). "Coordinated Insurance for Major Construction Projects", Journal of the Construction Division, ASCE, Vol.106, No.CO3, pp.307-313.

[8] Ashley, D.B., & Bonner, J.J. (1987). "Political Risks in International Construction", Journal of Construction Engineering & Management, Vol.113, No.3, pp.447-467.

[9] Ashton, A.H., & Ashton, R.H. (1985). "Aggregating Subjective Forecasts: Some Empirical Results", Management Science, Vol.31, No.12, pp.1499-1508.

[10] Ashton, R.H. (1986). "Combining Judgements of Experts: How Many and Which Ones?", Organisational Behavior & Human Decision Processes, Vol.38, pp.405-414.

[11] Au, T. (1988). "Profit Measures and Methods of Economic Analysis for Capital Project Selection", Journal of Management in Engineering, Vol.4, No.3, pp.217-228.

[12] Baker, R.W. (1986). "Handling Uncertainty", International Journal of Project Management, Vol.4, No.4, pp.205-210.
(1975)., "Group Decisions in the Face of Differences of Opin-ion", Management Science, Vol.22, No.2, pp.182-191. 238 Bibliography 239 [14] Beach, B . H . (1975)., "Expert Judgements About Uncertainty: Bayesian De-cision Making in Realistic Settings", Organisational Behavior & Human Perfor-mance, Vol.14, pp.10-59. [15] Beach, L.R. , Christensen-Szalanski, J .J .J . , & Barnes, V . (1987)., "As-sessing Human Judgement: Has it Been Done, Can it Be Done, Should it Be Done?", Judgmental Forecasting, Edited by G. Wright and P. Ayton, John Wi-ley & Sons, N.Y.. pp.49-62. [16] Bellman, R. (1970)., Introduction to Matrix Analysis (2nd Edition), McGraw-Hill, New York. [17] Benjamin, J.R., & Cornell, C .A . (1970)., Probability, Statistics and Deci-sion for Civil Engineers, McGraw-Hill Book Co., New York. [18] Bjornsson, H . C . (1977)., "Risk Analysis of Construction Cost Estimates", Transactions, American Association of Cost Engineers, pp.182-189. [19] Bonini, C P . (1975)., "Risk Evaluation of Investment Projects", OMEGA, International Journal of Management Science, Vol.3, No.6 , pp.735-750. [20] Borcherding, J.D. (1977)., "Cost Control Simulation and Decision Making", Journal of the Construction Division, ASCE, Vol.103, No.C04, pp.577-591. [21] Bordley, R . F . (1982)., "A Multiplicative Formula for Aggregating Probabihty Assessments", Management Science, Vol.28, No.10, pp.1137-1148. [22] Bordley, R .F . , & Wolff, R .W. (1981)., "On the Aggregation of Individual Probabihty Estimates", Management Science, Vol.27, No.8, pp.959-964. [23] Britney, R.B. (1976)., "Bayesian Point Estimation and the PERT Scheduling of Stochastic Activities", Management Science, Vol.22, No.9, pp.938-948. [24] Buck, J.R. (1989)., Economic Risk Decisions in Engineering and Management, Iowa State University Press, Ames. [25] Budescu, L.R. , &: Wallsten, T.S. (1987)., "Subjective Estimation of Precise and Vague Uncertainties", Judgmental Forecasting, Edited by G. Wright and P. Ayton, John Wiley & Sons, N.Y., pp.63-82. [26] Bunn, D .W. (1975)., "Anchoring Bias in the Assessment of Subjective Prob-abihty", Operation Research Quarterly, Vol.26, No.2, ii, pp.449-454. [27] Bunn, D .W. (1979a)., "A Perspective on Subjective Probabihty for Prediction and Decision", Technical Forecasting & Social Change, Vol.14, pp.39-45. [28] Bunn, D.W. (1979b)., "Estimation of Subjective Probabihty Distributions in Forecasting and Decision Making", Technical Forecasting & Social Change, Vol.14, pp.205-216. [29] Bury, K . V . (1975)., Statistical Models in Applied Science, John Wiley & Sons, New York. [30] Cain, W . O . Jr (1980)., "Risk Analysis Enhances Exploration", Oil & Gas Journal, Nov 17, 1980, pp.119-132. Bibliography 240 [31] Carr, R . L , Brightman, T .O. , & Johnson, F .B. (1974). "Progress Model for Construction Activity", Journal of Construction Division, ASCE, Vol. 100, No. COl, pp.59-64. [32] Chapman, C B . (1979)., "Large Engineering Project Risk Analysis", IEEE Transactions on Engineering Management, Vol.EM 26, No.3, pp.78-86. [33] Chapman, C B . & Cooper, D.F. (1983)., "Risk Engineering: Basic Con-trolled Interval and Memory Models", Journal of the Operational Research So-ciety, Vol.34, No.l, pp.51-60. [34] Chapman, C B . , Phillips, E .D. , Cooper, D .F . & Lightfoot, L. (1985)., "Selecting an Approach to Project Time and Cost Planning", International Jour-nal of Project Management, Vol.3, No.l, pp.19-26. [35] Chesley, G.R. (1975)., "Ehcitation of Subjective Probabilities: A Review", The Accounting Review, Vol.50, pp.325-337. 
[36] Chicken, J . C , &: Hayns, M.R. (1989)., The Risk Ranking Techniques in Decision Making, Pergamon Press, U.K. [37] Christensen-Szalanski, J .J .J . & Beach, L.R. , (1984)., "The Citation Bias: Fad and Fashion in the Judgement and Decision Literature", American Psychol-ogist, Vol.39, pp.75-78. [38] Clemen, R .T . (1986)., "Calibration and the Aggregation of Probabilities", Management Science, Vol.32, No.3, pp.312-314. [39] Cooper, D .F . , MacDonald, D . H . , & Chapman, C B . (1985)., "Risk Anal-ysis of a Construction Cost Estimate", International Journal of Project Manage-ment, Vol.3, No.l, pp.19-26. [40] Cooper, D.F . , & Chapman, C B . (1987)., Risk Analysis for Large Projects, Models, Methods and Cases, John Wiley & Sons, N.Y. [41] Crandall, K . C (1976)., "Probabilistic Time Scheduling", Journal of the Con-struction Division, ASCE, Vol.102, No.C03, pp.415-423. [42] Crandall, K . C (1977)., "Analysis of Schedule Simulations", Journal of the Construction Division, ASCE, Vol.103, No.C03, pp.387-394. [43] Crandall, K . C , & Woolery, J . C . (1982)., "Schedule Development un-der Stochastic Scheduling", Journal of Construction Division, ASCE, Vol.108, No.C02, pp.321-329. [44] Davidson, L .B . , & Cooper, D.O. (1976)., "A Simple Way of Developing a Probabihty Distribution of Present Value", Journal of Petroleum Technology, pp.1069-1078. [45] DeCoster, D .T. (1970)., "PERT/COST : The Challenge", Information for Decision Making : Quantitative and Behavioral Dimensions, A. Rappaport, Prentice-Hall, N.J., pp.249-257. [46] DeFinetti, B. (1970)., Theory of Probability, John Wiley & Sons, New York. Bibliography 241 [47] DeGroot, M . H . (1970)., Optimal Statistical Decision, McGraw-Hill Book Co., New York. [48] DeGroot, M . H . (1975)., Probability and Statistics, Addison-Wesley, Reading, Mass. [49] DeGroot, M . H . (1979)., "Improving Predictive Distributions ", Bayesian Statistics , J. M. Bernardo et al. (Eds.), University Press, Valencia, Spain., Vol.1, pp.385-395. [50] Delbecq, A.L . , Van de Ven, A.H. , & Gustafson, D.H. (1975)., Group Techniques for Program Planning, Scott Foresman, Glenview, Illinois. [51] Der Kiureghian, A., & Liu, P-L. (1986)., "Structural Reliability under Incomplete Probability Information", Journal of Engineering Mechanics, ASCE, Vol.112, No.l, pp.85-104. [52] Deshmukh, S.S. (1976)., "Risk Analysis", Transactions American Association of Cost Engineers, pp.118-121. [53] Devore, J .L. (1982)., Probability & Statistics for Engineering and the Sciences , Brooks / Cole Publishing Company, Monterey, California. [54] Diaconis, P., & Ylvisaker, D. (1985)., "Quantifying Prior Opinion", Bayesian Statistics 2, J.M. Bernardo et al. (Eds.) Elsevier Science Publishing Co., N.Y., Vol.2, pp.133-156. 38. [55] Dickey, J . M . (1979)., "Beliefs about Beliefs, A Theory for Stochastic As-sessments of Subjective Probabilities", Bayesian Statistics, J.M. Bernardo et al. (Eds.), University Press, Valencia, Spain., Vol.1, pp.471-488. [56] Dickey, J . M . , & Chen, C-H. (1985)., "Direct Subjective Probability Mod-elling Using Ellipsoidal Distributions", Bayesian Statistics 2, J.M. Bernardo et al. (Eds.) Elsevier Science Publishing Co., N.Y., Vol.2, pp.157-182. [57] Dickey, J . M . , Dawid, A.P., & Kadane, J .B. , (1986). "Subjective Prob-ability Assessment Methods for Multivariate-t and Matrix-t Models", Chap 12, Bayesian Inference and Decision Techniques: Studies in Bayesian Econometrics and Statistics, Elsevier Science Publishing Co., New York, Vol.6, pp.177-195. [58] Diekmann, J .E . 
(1983)., "Probabilistic Estimating : Mathematics and Apph-cations " , Journal of Construction Engineering & Management, Vol.109, No.3, pp.297-307. [59] Eilon, S. & Fowkes, T.R. (1973)., "Sampling Procedures for Risk Simula-tion", Operation Research Quarterly, Vol.24, No.2, pp.241-252. [60] Einhorn, H.J. (1972)., "Expert Measurement and Mechanical Combination", Organisational Behavior & Human Performance, Vol.7, pp.86-106. [61] Elmaghraby, S.E. (1977)., "Probabilistic Activity Networks (PANs): A Crit-ical Evaluation and Extension of the PERT Model-Chap 4", Activity Networks: Project Planning and Control by Network Models, John Wiley & Sons., pp.228-320. Bibliography 242 [62] Farid, F . , Boyer, L . T . , & Kangari, R., (1989). "Required Return on Invest-ments in Construction" , Journal of Construction Engineering & Management, Vol.115, No.l, pp.109-125. [63] Flanagan, R. & Norman, G . (1980)., "Risk Analysis - an Extension of Price Prediction Techniques for Building Work", Construction Papers, Vol.1, No.3 pp.27-34. [64] Flanagan, R., Kendall, A., Norman, G . , & Robinson, G.D. (1987)., "Life Cycle Costing and Risk Management", Journal of Construction Manage-ment and Economics, Vol.5, pp.S53-S71. [65] French, S. (1980)., "Updating of Belief in the Light of Someone Else's Opin-ion", Journal of Royal Statistical Society A, Vol.143 Pt.l, pp.43-48. [66] French, S. (1982)., "On the Axiomatisation of Subjective Probabilities", The-ory & Decision, Vol.14, pp.19-33. [67] French, S. (1985)., " Group Consensus Probability Distributions : A Critical Survey " , Bayesian Statistics 2, J.M. Bernardo et al. (Eds.), Elsevier Science Publishing Co., N.Y., Vol.2, pp.183-202. [68] French, S. (1986)., "Calibration and the Expert Problem", Management Sci-ence, Vol. 32, No.3, pp.315-321. [69] Gates, M . (1971)., "Bidding Contingencies and Probabilities", Journal of the Construction Division, ASCE, Vol.97, No.C02, pp.277-303. [70] Graybill, F.A. (1983)., Matrices with Applications in Statistics, Wadsworth International Group, CA. [71] Green, P.E. (1967)., "Critique of: Ranking Procedures and Subjective Prob-ability Distributions", Management Science, Vol.14, No.4, pp.B250-B251. [72] Gustafson, D . H . , Shukla, R .K. , Delbecq, A., & Walster, G.W. (1973)., "A Comparative Study of Differences in Subjective Likelihood Estimates Made by Individuals, Interacting Groups, Delphi Groups, and Nominal Groups", Or-ganisational Behavior & Human Performance, Vol.9, pp.281-291. [73] Hall, J .N. (1986)., "Use of Risk Analysis in North Sea Projects", International Journal of Project Management, Vol.4, No.4, pp.217-222. [74] Hampton, J . M . , Moore, P.G. , & Thomas, H . (1973). "Subjective Prob-ability and its Measurement", Journal of Royal Statistical Society, A, Vol.136, Pt.l, pp.21-42. [75] Harr, M . E . (1987)., Reliability Based Design in Civil Engineering, McGraw-Hill Book Co., New York. [76] Harris, R.B. (1978)., Precedence and Arrow Networking Techniques for Con-struction, John Wiley & Sons, New York. [77] Hayes, R .W. , Perry, J .G . , Thompson, P.A., & Wilmer, G . (1986)., "Risk Management in Engineering Construction, Implications for Project Man-agers", Project Management Group, UMIST, Thomas Telford Ltd.,London., pp.5-36. Bibliography 243 Hemphill, R .B . (1968)., "A Method for Predicting the Accuracy of a Con-struction Cost Estimate " , Transactions of American Association of Cost Engi-neers , pp.20-1-20-18. Hendrickson, C , Martinelli, D. , & Rehak, D. (1987). 
"Hierarchical Rule-Based Activity Duration Estimation", Journal of Construction Engineering & Management, Vol.113, No.2, pp.288-301. Hertz, D.B. (1964)., "Risk Analysis in Capital Investment", Harvard Business Review, Vol.42, No.l, pp.95-106. Hilliard, J .E . , & Leitch, R .A . (1975)., "Cost Volume Profit Analysis under Uncertainty", Accounting Review, Vol.50, No.l, pp.69-80. Hillier, F.S. (1963)., "The Derivation of Probabilistic Information for the Evaluation of Risky Investments", Management Science, Vol.9 , pp.443-457. Hillier, F.S. (1969)., The Evaluation of Risky Interrelated Investments, North Holland, Amsterdam. Hogarth, R . M . (1975)., "Cognitive Processes and the Assessment of Subjec-tive Probabihty Distributions", Journal of the American Statistical Association, Applied Section, Vol.70, No.350 , pp.271-294. Holt, C . A . , Jr. (1986)., "Scoring-Rule Procedures for Ehciting Subjective Probabihty and Utility Functions", Chap. 18, Bayesian Inference and Decision Techniques: Studies in Bayesian Econometrics and Statistics, Elsevier Science Pubhshing Co., New York, Vol.6, pp.279-290. Howard, R .A. (1971)., "Proximal Decision Analysis", Management Science, Vol.17, No.9, pp.507-541. Howard, R .A . (1988)., "Decision Analysis: Practice and Promise", Manage-ment Science, Vol.34, No.6, pp.679-695. Huber, G.P. (1974)., " Methods for Quantifying Subjective Probabilities and Multi - Attribute Utilities " , Decision Science, Vol.5, No.3, pp.430-458. Hull, J . C . (1977)., "Dealing with Dependence in Risk Simulation", Operation Research Quarterly, Vol.28, No.l, ii, pp.201-213. Hull, J . C . (1978)., "The Accuracy of the Means and Standard Deviations of Subjective Probabihty Distributions", Journal of Royal Statistical Society, A, Vol.141, Pt.l, pp.79-85. Hull, J . C . (1980)., The Evaluation of Risk in Business Investment, Pergamon Press. Inyang, E .D. (1983)., Some Aspects of Risk Analysis for Decision Making in Engineering Project Management, Ph.D. Thesis, University of Manchester, UMIST, U.K. Jaafari, A . (1984)., "Criticism of CPM for Project Planning Analysis", Journal of Construction Engineering & Management, Vol.110, No.2, Jun 1984, pp.222-233. Bibliography 244 94] Jaafari, A . (1986)., "Strategic Issues in Formulation and Management of Megaprojects in Australia", International Journal of Project Management, Vol.4, No.2, pp.198-206. 95] Jaafari, A . (1987)., "Genesis of Management Confidence Technique", Journal of Management in Engineering, Vol.3, No.l, pp.60-80. 96] Jaafari, A . (1988a)., "Project Viability and Economic Risk Analysis", Journal of Management in Engineering, Vol.4, No.l, pp.29-45. [97] Jaafari, A . (1988b)., "New Vistas in Strategic Assessment of Projects", Jour-nal of Management in Engineering, Vol.4, No.2, pp.122-138. [98] Jaafari, A . (1988c)., "Probabilistic Unit Cost Estimation for Project Con-figuration Optimization", International Journal of Project Management, Vol.6, No.4, pp.226-234. [99] Jackson, P.S. (1982)., "A Second - Order Moments Method for Uncertainty Analysis", IEEE Transactions on Reliability, Vol. R-31, No.4, pp.382-384. [100] Jaedicke, R .K. , & Robichek, A . A . (1964)., "Cost Volume Profit Analysis under Uncertainty", Accounting Review Vol.39, No.4, pp.917-926. [101] Johnson, M . E . (1987)., Multivariate Statistical Simulation, John Wiley & Sons, New York. [102] Johnson, N.L. , Nixon, E . , & Amos, D . E . 
(1963). "Table of Percentage Points of Pearson Curves, for Given √β₁ and β₂, Expressed in Standard Measure", Biometrika, Vol.50, 3&4, pp.459-498.
[103] Kadane, J.B., Dickey, J.M., Winkler, R.L., Smith, W.S., & Peters, S.C. (1980). "Interactive Elicitation of Opinion for a Normal Linear Model", Journal of the American Statistical Association, T&M Section, Vol.75, No.372, pp.845-854.
[104] Kalos, M.H. & Whitlock, P.A. (1986). Monte Carlo Methods, John Wiley & Sons, New York.
[105] Keefer, D.L., & Bodily, S.E. (1983). "Three-Point Approximations for Continuous Random Variables", Management Science, Vol.29, No.5, pp.595-609.
[106] Kendall, M.G., and Stuart, A. (1969). The Advanced Theory of Statistics, Vol.1, Third Edition, Hafner Publishing Co., New York.
[107] Kennedy, K.W., & Thrall, R.M. (1976). "PLANET: A Simulation Approach to PERT", Computer & Operation Research, Vol.3, pp.313-325.
[108] King, W.R., & Wilson, T.A. (1967). "Subjective Time Estimates in Critical Path Planning - A Preliminary Analysis", Management Science, Vol.13, No.5, pp.307-320.
[109] King, W.R., Wittevrongel, D.M., & Hezel, K.D. (1967). "On the Analysis of Critical Path Time Estimating Behavior", Management Science, Vol.14, No.1, pp.79-84.
[110] King, W.R., & Lukas, P.A. (1973). "An Experimental Analysis of Network Planning", Management Science, Vol.19, No.12, pp.1423-1432.
[111] Kottas, J.F., & Lau, H-S. (1978). "Stochastic Breakeven Analysis", Journal of the Operational Research Society, Vol.29, No.3, pp.251-257.
[112] Kottas, J.F., & Lau, H-S. (1980). "A Review of the Statistical Foundation of a Class of Probabilistic Planning Models", Computer & Operation Research, Vol.7, pp.277-284.
[113] Kottas, J.F., & Lau, H-S. (1982). "A Four-Moments Alternative to Simulation for a Class of Stochastic Management Models", Management Science, Vol.28, No.7, pp.749-758.
[114] Kryzanowski, L., Lusztig, P., & Schwab, B. (1972). "Monte Carlo Simulation and Capital Expenditure Decisions - A Case Study", The Engineering Economist, Vol.18, No.1, pp.31-48.
[115] Lindley, D.V. (1982). "The Improvement of Probability Judgements", Journal of Royal Statistical Society A, Vol.145, Pt.1, pp.117-126.
[116] Lindley, D.V. (1986). "Another Look at an Axiomatic Approach to Expert Resolution", Management Science, Vol.32, No.3, pp.303-306.
[117] Lindley, D.V., Tversky, A., & Brown, R.V. (1979). "On the Reconciliation of Probability Assessments", Journal of Royal Statistical Society A, Vol.142, Pt.2, pp.146-180.
[118] Lock, A. (1987). "Integrating Group Judgements in Subjective Forecasts", Judgmental Forecasting, Edited by G. Wright and P. Ayton, John Wiley & Sons, N.Y., pp.109-127.
[119] Ludke, R.L., Stauss, F.F., & Gustafson, D.H. (1977). "Comparison of Five Methods for Estimating Subjective Probability Distributions", Organisational Behavior & Human Performance, Vol.19, pp.162-179.
[120] Malcolm, D.G., Roseboom, J.H., Clark, C.E., & Fazar, W. (1959). "Applications of a Technique for Research and Development Program Evaluation (PERT)", Operations Research, Vol.7, No.5, pp.646-669.
[121] Makridakis, S., & Winkler, R.L. (1983). "Averages of Forecasts: Some Empirical Results", Management Science, Vol.29, No.9, pp.987-996.
[122] McGough, E.H. (1982). "Scheduling: Effective Methods and Techniques", Journal of the Construction Division, ASCE, Vol.108, No.CO1, pp.75-84.
[123] Milkovich, G.T., Annoni, A.J.,
, & Mahoney, T . A . (1972). "The Use of the Delphi Procedures in Manpower Forecasting", Management Science, Vol.19, No.4, Pt.l, pp.381-388. [124] Mirchandani, P.B. (1976)., "Shortest Distance and Reliability of Probabilis-tic Networks", Computer & Operation Research, Vol.3, pp.347-355. [125] Moder, J .J . & Rodgers, E . G . (1968)., "Judgement Estimates of the Mo-ments of PERT Type Distributions", Management Science, Vol.15, No.2, pp.B77-B83. Bibliography 246 [126] Moeller, P.E. (1972)., "VERT - A Tool to Assess Risk", AIIE, Technical Papers, pp.211-222. [127] Moore, P.G. (1977)., "The Manager's Struggles with Uncertainty", Journal of Royal Statistical Society A, Vol.140 Pt.2, pp.129-165. [128] Morris, P.A. (1974)., "Decision Analysis Expert Use", Management Science, Vol.20, No.9, pp.1233-1241. [129] Morris, P.A. (1977)., "Combining Expert Judgements: A Bayesian Ap-proach", Management Science, Vol.23, No.7, pp.679-693. [130] Morris, P.A. (1983)., "An Axiomatic Approach to Expert Resolution", Man-agement Science, Vol.29, No.l, pp.24-32. [131] Morris, P.A. (1986)., "Observations on Expert Aggregation" , Management Science, Vol.32, No.3, pp.321-328. [132] Morrison, D.G. (1967)., "Critique of: Ranking Procedures and Subjective Probabihty Distributions", Management Science, Vol.14, No.4, pp.B253-B254. [133] Murphy, A.H., & Winkler, R.L. (1971a)., "Forecasters and Probabihty Forecasts: The Responses to a Questionnaire", Bulletin of the American Meteo-rological Society, Vol.52, No.3 , pp.158-165. [134] Murphy, A.H., & Winkler, R.L. (1971b)., "Forecasters and Probabihty Forecasts: Some Current Problems", Bulletin of the American Meteorological Society, Vol.52, No.4, pp.239-247. [135] Murphy, A.H., & Winkler, R.L. (1975)., "Subjective Probabihty Forecast-ing: Some Real World Experiments", Utility, Probability & Human Decision, D. Wendt & C.A.J. Vlek (Editors), Reidel, Dordrecht, pp.177-198. [136] Murphy, A.H., & Daan, H. (1984)., "Impacts of Feedback and Experience on the Quahty of Subjective Probabihty Forecasts : Comparison of Results from First and Second Years of the Zierikzee Experiment", Monthly Weather Review, Vol.112, No.3, pp.413-423. [137] Murphy, A.H., & Winkler, R.L. (1984)., "Probabihty Forecasting in Me-teorology", Journal of the American Statistical Association, T&M Sect., Vol.79, No.387 , pp.489-500. [138] Murphy, A.H., Hsu, W-R., Winkler, R.L., & Wilks, D.S. (1985)., "Use of Probabilities in Subjective Quantitative Precipitation Forecasts : Some Experimental Results", Monthly Weather Review, Vol.113, No.12, pp.2075-2089. [139] Myers, R.H. (1986)., Classical and Modern Regression with Applications, Duxbury Press, Boston, Mass. [140] Newendorp, P.D. (1976)., "A Method for Treating Dependencies Between Variables in Simulation Risk-Analysis Models", Journal of Petroleum Technol-ogy, Oct 1976, pp.1145-1150. [141] Ord, J.K. (1972)., Families of Frequency Distributions, Charles Griffin & Co., London. Bibliography 247 [1421 Ord, J . K . (1985)., "Pearson System of Distributions", Encyclopedia of Sta-tistical Sciences, Vol.6, pp.655-659. [1431 Pearson, E.S. (1963)., "Some Problems Arising in Approximating to Proba-bility Distribution, Using Moments", Biometrika, Vol.50, 1&2, pp.95-112. [1441 Pearson, E.S., & Tukey, J.W. (1965)., "Approximate Means and Standard Deviations Based on Distances Between Percentage Points of Frequency Curves", Biometrika, Vol.52, 3&4, pp.533-546. [1451 Perry, C , & Greig, I.D. 
(1975). "Estimating the Mean and Variance of Subjective Distributions in PERT and Decision Analysis", Management Science, Vol.21, No.12, pp.1477-1480.
[146] Perry, J.G. (1986). "Risk Management - an Approach for Project Managers", International Journal of Project Management, Vol.4, No.4, pp.211-216.
[147] Perry, J.G., et al. (1983). "Mersey Barrage Pre-Feasibility Study", Marintech North West, University of Manchester, Vols.1, 2 & 3.
[148] Perry, J.G., & Hayes, R.W. (1985a). "Construction Projects - Know the Risks", Chartered Mechanical Engineer, February, pp.42-45.
[149] Perry, J.G., & Hayes, R.W. (1985b). "Risk and its Management in Construction Projects", Proceedings, Institution of Civil Engineers, U.K., Part 1, Vol.78, pp.499-521.
[150] Phillips, L.D. (1987). "On the Adequacy of Judgmental Forecasts", Judgmental Forecasting, Edited by G. Wright and P. Ayton, John Wiley & Sons, N.Y., pp.11-30.
[151] Pouliquen, L.Y. (1970). Risk Analysis in Project Appraisal, The Johns Hopkins Press, Baltimore.
[152] Pratt, J.W., Raiffa, H., & Schlaifer, R. (1964). "The Foundations of Decision Under Uncertainty: An Elementary Exposition", Journal of the American Statistical Association, T&M Section, Vol.59, pp.353-375.
[153] Pratt, J.W., & Schlaifer, R. (1985). "Repetitive Assessment of Judgmental Probability Distributions: A Case Study", Bayesian Statistics 2, J.M. Bernardo et al. (Eds.), Elsevier Science Publishing Co., N.Y., Vol.2, pp.393-424.
[154] Press, S.J. (1979). "Bayesian Inference in Group Judgement Formulation and Decision Making using Qualitative Controlled Feedback", Bayesian Statistics, J.M. Bernardo et al. (Eds.), University Press, Valencia, Spain, Vol.1, pp.397-415.
[155] Press, S.J. (1985). "Multivariate Group Assessment of Probabilities of Nuclear War", Bayesian Statistics 2, J.M. Bernardo et al. (Eds.), Elsevier Science Publishing Co., N.Y., Vol.2, pp.425-462.
[156] Pritsker, A.A.B., & Happ, W.W. (1966). "GERT: Graphical Evaluation and Review Technique, Part I. Fundamentals", The Journal of Industrial Engineering, Vol.17, No.5, pp.267-274.
[157] Pritsker, A.A.B., & Whitehouse, G.E. (1966). "GERT: Graphical Evaluation and Review Technique, Part II. Probabilistic and Industrial Engineering Applications", The Journal of Industrial Engineering, Vol.17, No.6, pp.293-301.
[158] Project Performance Results for 1986 (1988). Operations Evaluation Department, The World Bank, Washington, D.C.
[159] Ranasinghe, K.A.M.K. (1987). Quantification of Risks During Feasibility Analysis for Capital Projects, M.A.Sc. Thesis, University of British Columbia, Vancouver, Canada.
[160] Ravinder, H.V., Kleinmuntz, D.W., & Dyer, J.S. (1988). "The Reliability of Subjective Probabilities Obtained Through Decomposition", Management Science, Vol.34, No.2, pp.306-312.
[161] Reinschmidt, K.F., & Frank, W.E. (1976). "Construction Cash Flow Management System", Journal of the Construction Division, ASCE, Vol.102, No.CO4, pp.615-627.
[162] Reutlinger, S. (1970). Techniques for Project Appraisal Under Uncertainty, The Johns Hopkins Press, Baltimore.
[163] Riggs, L.S. (1989). "Numerical Approach for Generating Beta Random Variates", Journal of Computing in Civil Engineering, Vol.3, No.2, pp.183-191.
[164] Russell, A.D. (1985). CIVL 520 - Course Notes, Department of Civil Engineering, University of British Columbia, Vancouver.
[165] Savage, L.J. (1954). The Foundations of Statistics, John Wiley & Sons, New York.
[166] Savage, L.J.
(1971). "Elicitation of Personal Probabilities and Expectations", Journal of the American Statistical Association, T&M Section, Vol.66, No.336, pp.783-801.
[167] Schervish, M.J. (1986). "Comments on Some Axioms for Combining Expert Judgements", Management Science, Vol.32, No.3, pp.306-312.
[168] Seaver, D.A. (1977). "How Groups Can Assess Uncertainty: Human Interaction Versus Mathematical Models", IEEE, International Conference on Cybernetics, pp.185-190.
[169] Seaver, D.A., Von Winterfeldt, D., & Edwards, W. (1978). "Eliciting Subjective Probability Distributions on Continuous Variables", Organisational Behavior & Human Performance, Vol.21, pp.379-391.
[170] Selvidge, J. (1975). "A Three Step Procedure for Assigning Probabilities to Rare Events", Utility, Probability & Human Decision, D. Wendt & C.A.J. Vlek (Editors), Reidel, Dordrecht, pp.199-216.
[171] Siddall, J.N. (1972). Analytical Decision-Making in Engineering Design, Prentice-Hall, Englewood Cliffs, N.J.
[172] Shafer, S.L. (1974). "Risk Analysis for Capital Projects Using Risk Elements", Transactions American Association of Cost Engineers, pp.218-223.
[173] Smith, D.E. (1971). "A Taylor's Theorem - Central Limit Theorem Approximation: Its use in Obtaining the Probability Distribution of Long-Range Profit", Management Science, Vol.18, No.4, Part I, pp.B214-B219.
[174] Smith, K.A., & Thoem, R.L. (1976). "Project Cost Evaluation Using Probability Concepts", Transactions American Association of Cost Engineers, pp.275-279.
[175] Smith, L.H. (1967). "Ranking Procedures and Subjective Probability Distributions", Management Science, Vol.14, No.4, pp.B236-B249.
[176] Spetzler, C.S. & Stael von Holstein, C-A.S. (1975). "Probability Encoding in Decision Analysis", Management Science, Vol.22, No.3, pp.340-358.
[177] Spooner, J.E. (1974). "Probabilistic Estimating", Journal of the Construction Division, ASCE, Vol.100, No.CO1, pp.65-77.
[178] Stael von Holstein, C-A.S. (1971). "An Experiment in Probabilistic Weather Forecasting", Journal of Applied Meteorology, Vol.10, pp.635-645.
[179] Stael von Holstein, C-A.S. (1972). "Probabilistic Forecasting: An Experiment Related to the Stock Market", Organisational Behavior & Human Performance, Vol.8, pp.139-158.
[180] Starr, M.K., & Tapiero, C.S. (1975). "Linear Breakeven Analysis Under Risk", Journal of the Operational Research Society, Vol.26, No.4, pp.847-856.
[181] Suppes, P. (1975). "The Measurement of Belief", Journal of Royal Statistical Society B, Vol.36, pp.160-191.
[182] Tanchoco, J.M.A., Buck, J.R., & Leung, L.C. (1981). "Modeling and Discounting of Continuous Cash Flow Under Risk", Engineering Costs and Production Economics, Vol.5, pp.205-216.
[183] Taylor, P. (1988). "Superiority of NPV over NPV/K, and Other Criteria in the Financial Appraisal of Projects: The Case of Energy Conservation", International Journal of Project Management, Vol.6, No.4, pp.223-225.
[184] The Twelfth Annual Review of Project Performance Results (1987). Operations Evaluation Department, The World Bank, Washington, D.C.
[185] Thompson, P.A. (1981). Organisation and Economics of Construction, McGraw-Hill Book Company, U.K.
[186] Thompson, P.A. & Whitman, J.D. (1973). "Project Appraisal by Cost Model", Management Decision, Vol.11, pp.301-308.
[187] Thompson, P.A. & Whitman, J.D. (1974). "A Network Based Financial Modelling System Linking Project Appraisal, Construction and Operation", 4th International INTERNET Congress, Paris, pp.329-336.
[188] Thompson, P.A. et al.
(1980). Severn Tidal Power, Sensitivity and Risk Analysis, Project Management Group, University of Manchester, UMIST.
[189] Thompson, P.A. & Wilmer, G. (1985). "CASPAR - A Program for Engineering Project Appraisal and Management", 2nd International Conference on Civil & Structural Engineering Computing, Vol.1, pp.75-81.
[190] Tukey, J.W. (1954). "The Propagation of Errors, Fluctuations and Tolerances, Basic Generalized Formulas", Technical Report No.10, Statistical Techniques Research Group, Princeton University, Princeton, N.J.
[191] Tversky, A. & Kahneman, D. (1984). Judgement under Uncertainty: Heuristics and Biases, Cambridge University Press.
[192] Van Tetterode, L.M. (1971). "Risk Analysis ... or Russian Roulette?", Transactions American Association of Cost Engineers, pp.185-191.
[193] Vergara, A.J. & Boyer, L.T. (1974). "Probabilistic Approach to Estimating and Cost Control", Journal of the Construction Division, ASCE, Vol.100, No.CO4, pp.543-552.
[194] Wagle, B. (1967). "A Statistical Analysis of Risk in Capital Investment Projects", Operation Research Quarterly, Vol.18, No.1, pp.13-33.
[195] Wallace, D.M. (1977). "Statistical Analysis of Cost Data", Transactions American Association of Cost Engineers, pp.286-296.
[196] Wallsten, T.S., & Budescu, D.V. (1983). "Encoding Subjective Probabilities: A Psychological and Psychometric Review", Management Science, Vol.29, No.2, pp.151-173.
[197] Wilkinson, J.H. (1965). The Algebraic Eigenvalue Problem, Clarendon Press, Oxford, U.K.
[198] Winkler, R.L. (1967a). "The Assessment of Prior Distributions in Bayesian Analysis", Journal of the American Statistical Association, T&M Section, Vol.62, pp.776-800.
[199] Winkler, R.L. (1967b). "The Quantification of Judgement: Some Methodological Suggestions", Journal of the American Statistical Association, T&M Section, Vol.62, pp.1105-1120.
[200] Winkler, R.L. (1968). "The Consensus of Subjective Probability Distributions", Management Science, Vol.15, No.2, pp.B61-B75.
[201] Winkler, R.L. (1971). "Probabilistic Prediction: Some Experimental Results", Journal of the American Statistical Association, T&M Section, Vol.66, No.336, pp.675-685.
[202] Winkler, R.L. (1981). "Combining Probability Distributions from Dependent Information Sources", Management Science, Vol.27, No.4, pp.479-488.
[203] Winkler, R.L. (1983). "Judgement Under Uncertainty", Encyclopedia of Statistical Sciences, Vol.4, pp.332-336.
[204] Winkler, R.L. (1986a). "Expert Resolution", Management Science, Vol.32, No.3, pp.298-303.
[205] Winkler, R.L. (1986b). "On 'Good Probability Appraisers'", Chap. 17, Bayesian Inference and Decision Techniques: Studies in Bayesian Econometrics and Statistics, Elsevier Science Publishing Co., New York, Vol.6, pp.265-278.
[206] Woolery, J.C., & Crandall, K.C. (1983). "Stochastic Network Model for Planning Scheduling", Journal of Construction Engineering & Management, Vol.109, No.3, pp.342-354.
[207] Wright, G. & Ayton, P. (1987). "The Psychology of Forecasting", Judgmental Forecasting, Edited by G. Wright and P. Ayton, John Wiley & Sons, N.Y., pp.83-105.
[208] Yeo, K.T. (1982). A Systems Approach to Cost Management of Large Scale Offshore Oil Projects, Ph.D. Thesis, University of Manchester, UMIST, U.K.
[209] Youker, R. (1989). "Managing the Project Cycle for Time, Cost and Quality: Lessons from World Bank Experience", International Journal of Project Management, Vol.7, No.1, pp.43-48.
[210] Zinn, C.D.
, Lesso, W.G., & Motazed, B. (1977). "A Probabilistic Approach to Risk Analysis in Capital Investment Projects", The Engineering Economist, Vol.22, No.4, pp.239-260.

Appendix A
The First Four Moments

A.1 General

This appendix derives the functions for the first four moments of the correlated primary variables and the transformed uncorrelated variables from their definitions. Let a derived variable be described by the function Y = g(X), where X is the vector of its correlated primary variables. The truncated second order Taylor series expansion of g(X) about the mean values \bar{X} is given by (equation 2.19),

g(\mathbf{X}) \approx g(\bar{\mathbf{X}}) + \sum_{i=1}^{n} \frac{\partial g}{\partial X_i}(X_i - \bar{X}_i) + \frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n} \frac{\partial^2 g}{\partial X_i\,\partial X_j}(X_i - \bar{X}_i)(X_j - \bar{X}_j) \quad (A.1)

The function is transformed to the uncorrelated space by equation (2.35). Then the function becomes Y = G(Z), where Z is the vector of transformed uncorrelated variables.

A.2 Expected Value

From equation (2.20) the expected value of Y is,

E[Y] \approx g(\bar{\mathbf{X}}) + \frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n} \frac{\partial^2 g}{\partial X_i\,\partial X_j}\,\mathrm{cov}(X_i, X_j) \quad (A.2)

In the transformed system the variables are uncorrelated. When using the transformed system function G(Z) the expected value of Y is (equation 2.36),

E[Y] \approx G(\bar{\mathbf{Z}}) + \frac{1}{2}\sum_{i=1}^{n} \frac{\partial^2 G}{\partial Z_i^2}\,\mu_2(Z_i) \quad (A.3)

A.3 Second Central Moment

From equation (2.21) the second central moment of Y is,

\mu_2(Y) = E\left[\left(\sum_{i=1}^{n} \frac{\partial g}{\partial X_i}(X_i - \bar{X}_i) + \frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n} \frac{\partial^2 g}{\partial X_i\,\partial X_j}\left[(X_i - \bar{X}_i)(X_j - \bar{X}_j) - \mathrm{cov}(X_i, X_j)\right]\right)^2\right] \quad (A.4)

Expanding the square,

\mu_2(Y) = E\Bigg[\sum_{i}\sum_{j} \frac{\partial g}{\partial X_i}\frac{\partial g}{\partial X_j}(X_i - \bar{X}_i)(X_j - \bar{X}_j) + \sum_{i}\sum_{j}\sum_{k} \frac{\partial g}{\partial X_i}\frac{\partial^2 g}{\partial X_j\,\partial X_k}(X_i - \bar{X}_i)(X_j - \bar{X}_j)(X_k - \bar{X}_k) - \sum_{i}\sum_{j}\sum_{k} \frac{\partial g}{\partial X_i}\frac{\partial^2 g}{\partial X_j\,\partial X_k}(X_i - \bar{X}_i)\,\mathrm{cov}(X_j, X_k) + \frac{1}{4}\sum_{i}\sum_{j}\sum_{k}\sum_{l} \frac{\partial^2 g}{\partial X_i\,\partial X_j}\frac{\partial^2 g}{\partial X_k\,\partial X_l}(X_i - \bar{X}_i)(X_j - \bar{X}_j)(X_k - \bar{X}_k)(X_l - \bar{X}_l) - \frac{1}{2}\sum_{i}\sum_{j}\sum_{k}\sum_{l} \frac{\partial^2 g}{\partial X_i\,\partial X_j}\frac{\partial^2 g}{\partial X_k\,\partial X_l}(X_i - \bar{X}_i)(X_j - \bar{X}_j)\,\mathrm{cov}(X_k, X_l) + \frac{1}{4}\sum_{i}\sum_{j}\sum_{k}\sum_{l} \frac{\partial^2 g}{\partial X_i\,\partial X_j}\frac{\partial^2 g}{\partial X_k\,\partial X_l}\,\mathrm{cov}(X_i, X_j)\,\mathrm{cov}(X_k, X_l)\Bigg] \quad (A.5)

Neglecting the cross moment terms in equation (A.5) that cannot be defined due to the lack of moment information,

\mu_2(Y) \approx \sum_{i=1}^{n}\left[\frac{\partial g}{\partial X_i}\right]^2 \mu_2(X_i) + \sum_{i=1}^{n} \frac{\partial g}{\partial X_i}\frac{\partial^2 g}{\partial X_i^2}\,\mu_3(X_i) + \frac{1}{4}\sum_{i=1}^{n}\left[\frac{\partial^2 g}{\partial X_i^2}\right]^2 \mu_4(X_i) - \frac{1}{4}\sum_{i=1}^{n}\left[\frac{\partial^2 g}{\partial X_i^2}\right]^2 \left[\mu_2(X_i)\right]^2 + \sum_{i=1}^{n}\sum_{j=i+1}^{n}\left[\frac{\partial^2 g}{\partial X_i\,\partial X_j}\right]^2 \left[\mathrm{cov}(X_i, X_j)\right]^2 \quad (A.6)

Collecting terms,

\mu_2(Y) \approx \sum_{i=1}^{n}\left[\frac{\partial g}{\partial X_i}\right]^2 \mu_2(X_i) + \sum_{i=1}^{n} \frac{\partial g}{\partial X_i}\frac{\partial^2 g}{\partial X_i^2}\,\mu_3(X_i) + \frac{1}{4}\sum_{i=1}^{n}\left[\frac{\partial^2 g}{\partial X_i^2}\right]^2 \left[\mu_4(X_i) - \mu_2^2(X_i)\right] + \sum_{i=1}^{n}\sum_{j=i+1}^{n}\left[\frac{\partial^2 g}{\partial X_i\,\partial X_j}\right]^2 \left[\mathrm{cov}(X_i, X_j)\right]^2 \quad (A.7)

In the transformed system the variables are uncorrelated. When using the transformed system function G(Z) the second central moment of Y is (equation 2.37),

\mu_2(Y) \approx \sum_{i=1}^{n}\left[\frac{\partial G}{\partial Z_i}\right]^2 \mu_2(Z_i) + \sum_{i=1}^{n} \frac{\partial G}{\partial Z_i}\frac{\partial^2 G}{\partial Z_i^2}\,\mu_3(Z_i) + \frac{1}{4}\sum_{i=1}^{n}\left[\frac{\partial^2 G}{\partial Z_i^2}\right]^2 \left[\mu_4(Z_i) - \mu_2^2(Z_i)\right] \quad (A.8)

If all the correlated primary variables are normally distributed or if it is assumed that there are no non-linear correlations between the transformed variables,

\mu_2(Y) \approx \sum_{i=1}^{n}\left[\frac{\partial G}{\partial Z_i}\right]^2 \mu_2(Z_i) + \sum_{i=1}^{n} \frac{\partial G}{\partial Z_i}\frac{\partial^2 G}{\partial Z_i^2}\,\mu_3(Z_i) + \frac{1}{4}\sum_{i=1}^{n}\left[\frac{\partial^2 G}{\partial Z_i^2}\right]^2 \left[\mu_4(Z_i) - \mu_2^2(Z_i)\right] + \sum_{i=1}^{n}\sum_{j=i+1}^{n}\left[\frac{\partial^2 G}{\partial Z_i\,\partial Z_j}\right]^2 \mu_2(Z_i)\,\mu_2(Z_j) \quad (A.9)

A.4 Third Central Moment

From equation (2.22) the third central moment of Y is,

\mu_3(Y) = E\left[\left(\sum_{i=1}^{n} \frac{\partial g}{\partial X_i}(X_i - \bar{X}_i) + \frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n} \frac{\partial^2 g}{\partial X_i\,\partial X_j}\left[(X_i - \bar{X}_i)(X_j - \bar{X}_j) - \mathrm{cov}(X_i, X_j)\right]\right)^3\right] \quad (A.10)

Neglecting fifth and higher order terms in equation (A.10),

\mu_3(Y) \approx E\Bigg[\sum_{i}\sum_{j}\sum_{k} \frac{\partial g}{\partial X_i}\frac{\partial g}{\partial X_j}\frac{\partial g}{\partial X_k}(X_i - \bar{X}_i)(X_j - \bar{X}_j)(X_k - \bar{X}_k) + \frac{3}{2}\sum_{i}\sum_{j}\sum_{k}\sum_{l} \frac{\partial g}{\partial X_i}\frac{\partial g}{\partial X_j}\frac{\partial^2 g}{\partial X_k\,\partial X_l}(X_i - \bar{X}_i)(X_j - \bar{X}_j)(X_k - \bar{X}_k)(X_l - \bar{X}_l) - \frac{3}{2}\sum_{i}\sum_{j}\sum_{k}\sum_{l} \frac{\partial g}{\partial X_i}\frac{\partial g}{\partial X_j}\frac{\partial^2 g}{\partial X_k\,\partial X_l}(X_i - \bar{X}_i)(X_j - \bar{X}_j)\,\mathrm{cov}(X_k, X_l)\Bigg] \quad (A.11)

Neglecting the cross moment terms in equation (A.11) that cannot be defined due to the lack of moment information,

\mu_3(Y) \approx \sum_{i=1}^{n}\left[\frac{\partial g}{\partial X_i}\right]^3 \mu_3(X_i) + \frac{3}{2}\sum_{i=1}^{n}\left[\frac{\partial g}{\partial X_i}\right]^2 \frac{\partial^2 g}{\partial X_i^2}\,\mu_4(X_i) - \frac{3}{2}\sum_{i=1}^{n}\left[\frac{\partial g}{\partial X_i}\right]^2 \frac{\partial^2 g}{\partial X_i^2}\left[\mu_2(X_i)\right]^2 \quad (A.12)

\mu_3(Y) \approx \sum_{i=1}^{n}\left[\frac{\partial g}{\partial X_i}\right]^3 \mu_3(X_i) + \frac{3}{2}\sum_{i=1}^{n}\left[\frac{\partial g}{\partial X_i}\right]^2 \frac{\partial^2 g}{\partial X_i^2}\left[\mu_4(X_i) - \mu_2^2(X_i)\right] + 6\sum_{i=1}^{n}\sum_{j=i+1}^{n} \frac{\partial g}{\partial X_i}\frac{\partial g}{\partial X_j}\frac{\partial^2 g}{\partial X_i\,\partial X_j}\left[\mathrm{cov}(X_i, X_j)\right]^2 \quad (A.13)
In the transformed system the variables are uncorrelated. When using the transformed system function G(Z) the third central moment of Y is (equation 2.38),

\mu_3(Y) \approx \sum_{i=1}^{n}\left[\frac{\partial G}{\partial Z_i}\right]^3 \mu_3(Z_i) + \frac{3}{2}\sum_{i=1}^{n}\left[\frac{\partial G}{\partial Z_i}\right]^2 \frac{\partial^2 G}{\partial Z_i^2}\left[\mu_4(Z_i) - \mu_2^2(Z_i)\right] \quad (A.14)

If all the correlated primary variables are normally distributed or if it is assumed that there are no non-linear correlations between the transformed variables,

\mu_3(Y) \approx \sum_{i=1}^{n}\left[\frac{\partial G}{\partial Z_i}\right]^3 \mu_3(Z_i) + \frac{3}{2}\sum_{i=1}^{n}\left[\frac{\partial G}{\partial Z_i}\right]^2 \frac{\partial^2 G}{\partial Z_i^2}\left[\mu_4(Z_i) - \mu_2^2(Z_i)\right] + 6\sum_{i=1}^{n}\sum_{j=i+1}^{n} \frac{\partial G}{\partial Z_i}\frac{\partial G}{\partial Z_j}\frac{\partial^2 G}{\partial Z_i\,\partial Z_j}\,\mu_2(Z_i)\,\mu_2(Z_j) \quad (A.15)

A.5 Fourth Central Moment

From equation (2.23) the fourth central moment of Y is,

\mu_4(Y) = E\left[\left(\sum_{i=1}^{n} \frac{\partial g}{\partial X_i}(X_i - \bar{X}_i) + \frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n} \frac{\partial^2 g}{\partial X_i\,\partial X_j}\left[(X_i - \bar{X}_i)(X_j - \bar{X}_j) - \mathrm{cov}(X_i, X_j)\right]\right)^4\right] \quad (A.16)

Neglecting fifth and higher order terms in equation (A.16),

\mu_4(Y) \approx E\left[\sum_{i}\sum_{j}\sum_{k}\sum_{l} \frac{\partial g}{\partial X_i}\frac{\partial g}{\partial X_j}\frac{\partial g}{\partial X_k}\frac{\partial g}{\partial X_l}(X_i - \bar{X}_i)(X_j - \bar{X}_j)(X_k - \bar{X}_k)(X_l - \bar{X}_l)\right] \quad (A.17)

Neglecting the cross moment terms in equation (A.17) that cannot be defined,

\mu_4(Y) \approx \sum_{i=1}^{n}\left[\frac{\partial g}{\partial X_i}\right]^4 \mu_4(X_i) \quad (A.18)

In the transformed system the variables are uncorrelated. When using the transformed system function G(Z) the fourth central moment of Y is (equation 2.39),

\mu_4(Y) \approx \sum_{i=1}^{n}\left[\frac{\partial G}{\partial Z_i}\right]^4 \mu_4(Z_i) \quad (A.19)

If all the correlated primary variables are normally distributed or if it is assumed that there are no non-linear correlations between the transformed variables,

\mu_4(Y) \approx \sum_{i=1}^{n}\left[\frac{\partial G}{\partial Z_i}\right]^4 \mu_4(Z_i) + 6\sum_{i=1}^{n}\sum_{j=i+1}^{n}\left[\frac{\partial G}{\partial Z_i}\right]^2\left[\frac{\partial G}{\partial Z_j}\right]^2 \mu_2(Z_i)\,\mu_2(Z_j) \quad (A.20)

A.6 Note: Higher Order Moments

The fifth and higher order terms are neglected in the approximations for the third and fourth central moments of a derived variable because moment information of the primary variables is not available beyond the fourth order (section 2.3.2). However, since all of the primary variables are approximated by Pearson type distributions, if required, it is possible to generate higher order central moments for the primary variables from the recurrence property of the Pearson family (Kendall and Stuart, 1969). The recurrence relationship for fifth and higher order central moments for primary variables approximated by Pearson type distributions is (Pearson, 1963; Kendall and Stuart, 1969),

\mu_{n+1} = \frac{\left[a - (n+1)\,b_1\right]\mu_n - n\,b_0\,\mu_{n-1}}{(n+2)\,b_2 + 1} \quad (A.21)

where

a = \frac{\mu_3\,(\mu_4 + 3\mu_2^2)}{10\mu_4\mu_2 - 18\mu_2^3 - 12\mu_3^2} \quad (A.22)

b_0 = \frac{\mu_2\,(4\mu_2\mu_4 - 3\mu_3^2)}{10\mu_4\mu_2 - 18\mu_2^3 - 12\mu_3^2} \quad (A.23)

b_1 = \frac{-\mu_3\,(\mu_4 + 3\mu_2^2)}{10\mu_4\mu_2 - 18\mu_2^3 - 12\mu_3^2} \quad (A.24)

b_2 = \frac{-(2\mu_2\mu_4 - 3\mu_3^2 - 6\mu_2^3)}{10\mu_4\mu_2 - 18\mu_2^3 - 12\mu_3^2} \quad (A.25)
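The transformed-space approximations (A.3), (A.9), (A.15) and (A.20) lend themselves to direct numerical evaluation once the first four moments of each uncorrelated Z_i and the derivatives of G are available. The following is a minimal sketch of that evaluation, not the thesis's FORTRAN implementation: the derivatives are taken by central finite differences, and the example function G and the moment values are assumptions chosen only for illustration.

    import numpy as np

    def derived_moments(G, z_bar, mu2, mu3, mu4, h=1e-4):
        """Approximate the first four central moments of Y = G(Z) for
        uncorrelated Z, following equations (A.3), (A.9), (A.15), (A.20).
        Derivatives of G are estimated by central finite differences."""
        n = len(z_bar)
        z_bar = np.asarray(z_bar, dtype=float)

        def d1(i):                      # dG/dZi at z_bar
            e = np.zeros(n); e[i] = h
            return (G(z_bar + e) - G(z_bar - e)) / (2 * h)

        def d2(i, j):                   # d2G/(dZi dZj) at z_bar
            ei = np.zeros(n); ei[i] = h
            ej = np.zeros(n); ej[j] = h
            return (G(z_bar + ei + ej) - G(z_bar + ei - ej)
                    - G(z_bar - ei + ej) + G(z_bar - ei - ej)) / (4 * h * h)

        g1 = np.array([d1(i) for i in range(n)])
        g2 = np.array([[d2(i, j) for j in range(n)] for i in range(n)])

        EY = G(z_bar) + 0.5 * sum(g2[i, i] * mu2[i] for i in range(n))      # (A.3)
        m2 = sum(g1[i]**2 * mu2[i] + g1[i] * g2[i, i] * mu3[i]
                 + 0.25 * g2[i, i]**2 * (mu4[i] - mu2[i]**2) for i in range(n))
        m2 += sum(g2[i, j]**2 * mu2[i] * mu2[j]
                  for i in range(n) for j in range(i + 1, n))               # (A.9)
        m3 = sum(g1[i]**3 * mu3[i]
                 + 1.5 * g1[i]**2 * g2[i, i] * (mu4[i] - mu2[i]**2)
                 for i in range(n))
        m3 += 6 * sum(g1[i] * g1[j] * g2[i, j] * mu2[i] * mu2[j]
                      for i in range(n) for j in range(i + 1, n))           # (A.15)
        m4 = sum(g1[i]**4 * mu4[i] for i in range(n))
        m4 += 6 * sum(g1[i]**2 * g1[j]**2 * mu2[i] * mu2[j]
                      for i in range(n) for j in range(i + 1, n))           # (A.20)
        return EY, m2, m3, m4

    # Hypothetical example: Y = Z1*Z2 + Z3, with assumed input moments.
    G = lambda z: z[0] * z[1] + z[2]
    print(derived_moments(G, z_bar=[10.0, 5.0, 2.0],
                          mu2=[1.0, 0.5, 0.2], mu3=[0.1, 0.0, 0.0],
                          mu4=[3.0, 0.8, 0.1]))

The four returned values can then be matched to a Pearson type distribution, exactly as the thesis does for each derived variable.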
Appendix B
Investigation of R_0

This appendix investigates the positive semi-definite correlation matrix R_0 discussed in section (4.2.1). The correlation matrix is as follows.

\mathbf{R}_0 = \begin{bmatrix} 1.0 & 0.5 & 0.5 \\ 0.5 & 1.0 & -0.5 \\ 0.5 & -0.5 & 1.0 \end{bmatrix} \quad (B.1)

Since the correlation matrix is in the standardized space, E[X_i] = 0 and \mu_2(X_i) = 1. The variances of the linear combinations of variables are,

\mu_2(X_1 + X_2) = \mu_2(X_1) + \mu_2(X_2) + 2\,\mathrm{cov}(X_1, X_2) = 3 \quad (B.2)

\mu_2(X_1 + X_3) = \mu_2(X_1) + \mu_2(X_3) + 2\,\mathrm{cov}(X_1, X_3) = 3 \quad (B.3)

\mu_2(X_2 + X_3) = \mu_2(X_2) + \mu_2(X_3) + 2\,\mathrm{cov}(X_2, X_3) = 1 \quad (B.4)

Therefore, only the linear combination of variables 2 and 3 remains standardized (unit variance). Hence,

\rho_{1,2+3} = \frac{\mathrm{cov}(X_1, X_2 + X_3)}{\sqrt{\mu_2(X_1)\,\mu_2(X_2 + X_3)}} \quad (B.5)

Since \mu_2(X_1) = 1 and \mu_2(X_2 + X_3) = 1,

\rho_{1,2+3} = \mathrm{cov}(X_1, X_2 + X_3) \quad (B.6)

From definition,

\mathrm{cov}(X_1, X_2 + X_3) = E[X_1(X_2 + X_3)] - E[X_1]\,E[X_2 + X_3] \quad (B.7)
= E[X_1 X_2] + E[X_1 X_3]
= E[X_1]E[X_2] + \mathrm{cov}(X_1, X_2) + E[X_1]E[X_3] + \mathrm{cov}(X_1, X_3)

Therefore,

\mathrm{cov}(X_1, X_2 + X_3) = \mathrm{cov}(X_1, X_2) + \mathrm{cov}(X_1, X_3) \quad (B.8)

From definition,

\mathrm{cov}(X_1, X_2) = \rho_{1,2}\sqrt{\mu_2(X_1)\,\mu_2(X_2)} = 0.5 \quad (B.9)

\mathrm{cov}(X_1, X_3) = \rho_{1,3}\sqrt{\mu_2(X_1)\,\mu_2(X_3)} = 0.5 \quad (B.10)

From equations (B.6), (B.8), (B.9) and (B.10),

\rho_{1,2+3} = \mathrm{cov}(X_1, X_2) + \mathrm{cov}(X_1, X_3) = 1 \quad (B.11)

Variable 1 is perfectly correlated with the linear combination of Variables 2 and 3.
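The semi-definiteness of R_0 can be confirmed numerically. A short check, assuming numpy: the smallest eigenvalue of R_0 is zero (a null vector is [1, -1, -1]), and the correlation of X_1 with the unit-variance combination X_2 + X_3 evaluates to exactly 1, as in equation (B.11).

    import numpy as np

    R0 = np.array([[1.0,  0.5,  0.5],
                   [0.5,  1.0, -0.5],
                   [0.5, -0.5,  1.0]])

    # Positive semi-definite: the smallest eigenvalue is 0 (to round-off).
    print(np.linalg.eigvalsh(R0))              # approx. [0.0, 1.5, 1.5]

    # Equation (B.11): cov(X1, X2 + X3) = 0.5 + 0.5 = 1, and since
    # mu2(X1) = mu2(X2 + X3) = 1 the correlation rho_{1,2+3} is exactly 1.
    w = np.array([1.0, 1.0])                   # weights of X2 + X3
    var_sum = w @ R0[1:, 1:] @ w               # = 1.0
    cov_1_sum = R0[0, 1] + R0[0, 2]            # = 1.0
    print(cov_1_sum / np.sqrt(1.0 * var_sum))  # 1.0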
Appendix C
Bounds for a Correlation Coefficient

This appendix derives the bounds for a correlation coefficient to exist in a positive definite correlation matrix. The bounds are derived from the necessary condition on the vector b (see equation C.17) given by Kadane et al. (1980), for an n x n correlation matrix R_n to be positive definite when R_{n-1} is positive definite. The proof for that condition was derived by Dr. Ricardo O. Foschi during the review period of this thesis. His contribution, which is described in the first section, is gratefully acknowledged.

C.1 The Proof

Let R_n be an n x n correlation matrix partitioned as,

\mathbf{R}_n = \begin{bmatrix} \mathbf{R}_{n-1} & \mathbf{b} \\ \mathbf{b}^T & 1 \end{bmatrix} \quad (C.1)

where R_{n-1} is the (n-1) x (n-1) correlation matrix for n-1 variables and b^T = [\rho_{1n}\ \rho_{2n}\ \ldots\ \rho_{n-1,n}].

Let R_{n-1} be positive definite. For any vector of n scalars,

\begin{bmatrix} \mathbf{x}_{n-1}^T & x_n \end{bmatrix} \begin{bmatrix} \mathbf{R}_{n-1} & \mathbf{b} \\ \mathbf{b}^T & 1 \end{bmatrix} \begin{bmatrix} \mathbf{x}_{n-1} \\ x_n \end{bmatrix} > 0 \quad (C.2)

where x_{n-1}^T = [x_1\ x_2\ \ldots\ x_{n-1}]. Expanding equation (C.2),

\mathbf{x}_{n-1}^T \mathbf{R}_{n-1} \mathbf{x}_{n-1} + 2\,x_n\,\mathbf{x}_{n-1}^T \mathbf{b} + x_n^2 > 0 \quad (C.3)

The roots for x_n from equation (C.3) are,

x_n = -\mathbf{x}_{n-1}^T \mathbf{b} \pm \sqrt{(\mathbf{x}_{n-1}^T \mathbf{b})^2 - \mathbf{x}_{n-1}^T \mathbf{R}_{n-1} \mathbf{x}_{n-1}} \quad (C.4)

For imaginary roots,

\mathbf{x}_{n-1}^T \mathbf{b}\,\mathbf{b}^T \mathbf{x}_{n-1} - \mathbf{x}_{n-1}^T \mathbf{R}_{n-1} \mathbf{x}_{n-1} < 0 \quad (C.5)

Rewriting equation (C.5),

\mathbf{x}_{n-1}^T \left[\mathbf{R}_{n-1} - \mathbf{b}\,\mathbf{b}^T\right] \mathbf{x}_{n-1} > 0 \quad (C.6)

Therefore, for R_n to be positive definite, (R_{n-1} - b b^T) must be positive definite. Then, for any vector of n-1 scalars y,

\mathbf{y}^T \left[\mathbf{R}_{n-1} - \mathbf{b}\,\mathbf{b}^T\right] \mathbf{y} > 0 \quad (C.7)

Since R_{n-1} is symmetric, R_{n-1}^{-1} is also symmetric. Choosing

\mathbf{y} = \mathbf{R}_{n-1}^{-1}\mathbf{b}, \qquad \mathbf{y}^T = \mathbf{b}^T (\mathbf{R}_{n-1}^{-1})^T = \mathbf{b}^T \mathbf{R}_{n-1}^{-1} \quad (C.8)

Substituting in equation (C.7),

\mathbf{b}^T \mathbf{R}_{n-1}^{-1} \left[\mathbf{R}_{n-1} - \mathbf{b}\,\mathbf{b}^T\right] \mathbf{R}_{n-1}^{-1} \mathbf{b} > 0 \quad (C.9)

Expanding equation (C.9),

\left[1 - \mathbf{b}^T \mathbf{R}_{n-1}^{-1} \mathbf{b}\right] \mathbf{b}^T \mathbf{R}_{n-1}^{-1} \mathbf{b} > 0 \quad (C.10)

Since R_{n-1} is positive definite, from Cholesky decomposition,

\mathbf{R}_{n-1} = \mathbf{L}\,\mathbf{L}^T \quad (C.11)

Hence,

\mathbf{R}_{n-1}^{-1} = (\mathbf{L}^{-1})^T \mathbf{L}^{-1} \quad (C.12)

Therefore,

\mathbf{b}^T \mathbf{R}_{n-1}^{-1} \mathbf{b} = \mathbf{b}^T (\mathbf{L}^{-1})^T \mathbf{L}^{-1} \mathbf{b} \quad (C.13)

Substituting z = L^{-1} b and z^T = b^T (L^{-1})^T in equation (C.13),

\mathbf{b}^T \mathbf{R}_{n-1}^{-1} \mathbf{b} = \mathbf{z}^T \mathbf{z} > 0 \quad (C.14)

Hence, R_{n-1}^{-1} is also positive definite. From equation (C.10),

1 - \mathbf{b}^T \mathbf{R}_{n-1}^{-1} \mathbf{b} > 0 \quad (C.15)

Therefore, the necessary condition on b when R_{n-1} is positive definite is

\mathbf{b}^T \mathbf{R}_{n-1}^{-1} \mathbf{b} < 1 \quad (C.16)

C.2 The Bounds

R_n is positive definite if R_{n-1} is positive definite and,

\mathbf{b}^T \mathbf{R}_{n-1}^{-1} \mathbf{b} < 1 \quad (C.17)

Partitioning the R_{n-1}^{-1} and b matrices in equation (C.17),

\begin{bmatrix} \mathbf{B}_1^T & \Gamma & \mathbf{B}_2^T \end{bmatrix} \begin{bmatrix} \mathbf{S}_1 \\ \mathbf{s}_j \\ \mathbf{S}_2 \end{bmatrix} \begin{bmatrix} \mathbf{B}_1 \\ \Gamma \\ \mathbf{B}_2 \end{bmatrix} < 1 \quad (C.18)

where \Gamma is the correlation coefficient (\rho_{jn}) for which bounds are required, S_1 is a (j-1) x (n-1) matrix and S_2 is an (n-1-j) x (n-1) matrix, B_1^T and B_2^T are 1 x (j-1) and 1 x (n-1-j) row matrices, and s_j is a 1 x (n-1) row matrix.

Multiplying b^T by R_{n-1}^{-1} in equation (C.18),

\left[\mathbf{C}_1 + \Gamma\,\mathbf{s}_j + \mathbf{C}_2\right] \begin{bmatrix} \mathbf{B}_1 \\ \Gamma \\ \mathbf{B}_2 \end{bmatrix} < 1 \quad (C.19)

where C_1 = B_1^T S_1 and C_2 = B_2^T S_2. Since s_j, C_1 and C_2 are 1 x (n-1) row matrices, the quadratic equation (4.13) for real bounds from equation (C.19) is,

s_{jj}\,\Gamma^2 + \left[C_{1j} + C_{2j} + \sum_{i=1}^{j-1} s_{ji}\,B_{1i} + \sum_{i=j+1}^{n-1} s_{ji}\,B_{2i}\right]\Gamma + \sum_{i=1}^{j-1}\left(C_{1i} + C_{2i}\right)B_{1i} + \sum_{i=j+1}^{n-1}\left(C_{1i} + C_{2i}\right)B_{2i} - 1 < 0 \quad (C.20)
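Since b^T R_{n-1}^{-1} b is quadratic in the unknown coefficient Γ, the admissible interval for Γ can also be obtained numerically without assembling (C.20) term by term: evaluate the condition at three trial values, recover the quadratic exactly, and take its roots. A minimal sketch of this approach follows; the matrix R3, the vector b and the index j are hypothetical values chosen only for illustration.

    import numpy as np

    def gamma_bounds(R_prev, b, j):
        """Bounds on the unknown correlation b[j] = rho_{jn} such that
        b' R_prev^{-1} b < 1 (equation C.16), hence R_n positive definite.
        f(gamma) = b' R^{-1} b - 1 is quadratic in gamma, so it is fitted
        exactly from three evaluations and its roots give the bounds."""
        Rinv = np.linalg.inv(R_prev)

        def f(gamma):
            v = np.array(b, dtype=float)
            v[j] = gamma
            return v @ Rinv @ v - 1.0

        coef = np.polyfit([-0.5, 0.0, 0.5], [f(-0.5), f(0.0), f(0.5)], 2)
        roots = np.sort(np.roots(coef).real)
        # f opens upward (its leading coefficient Rinv[j, j] > 0), so the
        # condition f < 0 holds between the roots; intersect with [-1, 1].
        return max(roots[0], -1.0), min(roots[1], 1.0)

    # Hypothetical example: three already-accepted variables, and a fourth
    # whose correlation with variable 2 (index j = 1) must be bounded.
    R3 = np.array([[1.0,  0.3,  0.2],
                   [0.3,  1.0, -0.4],
                   [0.2, -0.4,  1.0]])
    b = [0.5, 0.0, 0.1]        # b[1] is the coefficient being elicited
    lo, hi = gamma_bounds(R3, b, j=1)
    print(f"rho_2n must lie in ({lo:.3f}, {hi:.3f})")

This is the kind of guidance ELICIT (Appendix D) gives the user when an elicited correlation violates the positive definiteness requirement.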
Appendix D
The Computer Programs

D.1 General

One of the primary objectives of this research was to computerize the analytical method for economic risk analysis. The computer programs could then be used to explore its behavior, to verify it and to test its practicality in the measurement of uncertainty of performance and decision variables. This appendix describes the two computer programs, "ELICIT" and "TIERA", developed to achieve this objective. The two programs, written in FORTRAN 77, can be executed together or separately.

D.2 ELICIT - Program to Obtain Input Data

ELICIT is an interactive program that ensures that the subjective probabilities elicited for primary variables at the input level are coherent (section 3.7) and that their correlation matrices are positive definite (section 4.2.2), and that elicits the common (shared) variables in the functional forms. The flowchart for ELICIT is depicted in Figure (D.1). The objective of ELICIT is to obtain interactively all of the information necessary to set up the input files required to execute TIERA. ELICIT is developed in three sections: work package durations, work package costs and revenue streams. The program begins with work package durations and proceeds to the next module only if it is asked to. The output from the three sections is written to data files in Units 11, 12 and 13, which are the input files for TIERA.

[Figure D.1: Flowchart for ELICIT - for each work package (durations, then costs) and each revenue stream: elicit the number of items, the method of estimation or type of functional form, the subjective estimates for each primary variable, the common variables in the functions, and a positive definite correlation matrix.]

To ensure that the subjective percentile estimates for a primary variable are coherent, ELICIT approximates them to a Pearson type distribution. The flowchart of this process is depicted in Figure (D.2). When the subjective estimates are coherent, ELICIT displays the expected value, standard deviation, skewness and kurtosis of the approximated Pearson distribution as verification and proceeds to the next variable. When the estimates are not coherent, the analyst/expert is given an opportunity to re-estimate percentiles as suggested in section 3.7.

To ensure that a correlation matrix is positive definite, ELICIT follows the theoretical development described in section 4.2.2. When the vector of correlation coefficient values b is elicited, the program checks the condition given by equation (4.14). If the condition is satisfied, the program accepts the b vector as valid correlation coefficients between the primary variables. When the condition is not satisfied, the program informs the user that the theoretical requirement for a valid R_n has been violated, and requests the user to identify a previous variable in the ordered list whose correlation coefficient with the current variable should be changed. Once the user has identified a variable, ELICIT calculates the bounds for that correlation coefficient (if they exist) from equation (4.15), thereby giving guidance for the user to conform to the theoretical requirement.

Thirdly, the common (shared) variables between the functions for work package durations, the functions for work package costs and the functions for revenue streams are elicited. Utilizing this information, TIERA develops positive definite correlation matrices for work package durations, work package costs and revenue streams. These correlation matrices are then used in the evaluation of moments for path durations (hence project duration), project cost and project revenue respectively.

[Figure D.2: Flowchart to Ensure Coherence of Subjective Estimates - elicit the 5th, 25th, 50th, 75th and 95th (and if needed 2.5% and 97.5%) percentile estimates, approximate the expected value and standard deviation, and either display the expected value, standard deviation, skewness and kurtosis of the fitted Pearson distribution or request re-estimation.]
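The coherence screen of Figure (D.2) can be illustrated in miniature. The sketch below is not ELICIT itself: it only rejects percentile sets that fail to increase strictly and, for coherent sets, forms the Pearson-Tukey three-point approximations to the expected value and standard deviation (Pearson and Tukey, 1965); ELICIT goes further and fits a full Pearson type distribution to verify all four moments. The numerical estimates in the example are hypothetical.

    def check_and_approximate(p05, p25, p50, p75, p95):
        """Screen subjective percentile estimates for coherence and, if
        they pass, return the Pearson-Tukey approximations for the mean
        and standard deviation. Raises ValueError when incoherent."""
        estimates = [p05, p25, p50, p75, p95]
        if any(a >= b for a, b in zip(estimates, estimates[1:])):
            raise ValueError("incoherent: percentiles must strictly "
                             "increase; please re-estimate (section 3.7)")
        mean = 0.185 * p05 + 0.630 * p50 + 0.185 * p95
        std = (p95 - p05) / 3.25
        return mean, std

    # Hypothetical elicitation for a work package duration (months):
    print(check_and_approximate(3.0, 3.6, 4.0, 4.6, 5.5))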
D.3 TIERA - Program for Risk Quantification

TIERA is the computer program of the analytical method developed in this thesis for time and economic risk quantification. It is developed in two modules. The main module follows Figure (6.3) and consists of all the analytical derivations described in chapter 6. Except for the reverse arrow in Figure (6.3), where the first four moments for work package start times are evaluated from the modified PNET algorithm, all of the other arrows use the moments of the primary variables at the lower level to evaluate the first four moments of the derived variables at the higher level.

The flowchart for the modified PNET algorithm is depicted in Figure (D.3). When the transitional correlation ρ = 0, the modified PNET algorithm defaults to the longest path approach because there is only one representative path. Then the process will always stop at the third decision node. Figure (D.4) depicts the flowchart for the process to trace all the paths to a work package (or milestone) from the start work package of the precedence network. The algorithm to trace all the paths to a work package was based on the "stack" concept.

The second module for TIERA is an external subroutine consisting of functions for work package durations, work package costs and revenue streams that are specified by the analyst. At present, the analyst can specify five functions for work package durations, and ten each for work package costs and revenue streams. If more functions are needed for an analysis, the number can be increased with a small modification to the main program. The main program of TIERA should always be executed in combination with a compiled version of the external subroutine consisting of the functions.

To execute TIERA the main program looks for data from Units 1, 10, 11, 12 and 13. The data file containing the table of Pearson distributions should always be specified at Unit 1. At present the data file contains 2665 distributions. When the table developed by Amos and Daniel (1971) is included, the file will contain approximately 12,100 distributions. The data file containing the logical relationships between work packages should be specified at Unit 10. At present only finish to start = 0 relationships between work packages are permitted. The data files for work package durations, work package costs and revenue streams should be specified at Units 11, 12 and 13 respectively. These data files are the output files from ELICIT.

The output from TIERA is written to Unit 7. A typical output from TIERA is illustrated in Figure (D.5). The output is for the third case of the second example given in chapter 7 (i.e. correlations between primary variables and between derived variables are treated). Units 5 and 6, the reading and writing units for FORTRAN, are left free to permit the joint execution of ELICIT and TIERA. When the two programs are executed together, the reading and writing for user-computer interaction are from Units 5 and 6 respectively.

[Figure D.3: Flowchart of the Modified PNET Algorithm - read the logical relationships between work packages from Unit 10; obtain all paths to a work package (see Figure D.4); compute the expected value (EV) and standard deviation (SD) of each path duration; re-arrange paths in decreasing order of EV (and of SD when EVs are equal); calculate the correlation coefficient for each pair of paths; select representative paths using the transitional correlation ρ; evaluate the first four moments of each representative path; fit a Pearson distribution to each; and build the cumulative distribution function for the start time, from which its first four moments are evaluated.]

[Figure D.4: Flowchart to Trace all the Paths to a Work Package - a stack-based enumeration: the work package and its predecessors are pushed onto a stack with a temporary counting array of predecessor numbers, work packages are moved between the stack and the current list as predecessor counts are decremented, and each time the start work package is reached the accumulated list is saved as a complete path.]

[Figure D.5: Typical Output from TIERA - tabulated expected value, standard deviation, skewness and kurtosis for work package start times, the project duration for a transitional correlation of 0.50 (in calendar months), work package costs and the project cost discounted at a rate of return of 0.090, net revenue streams and the project revenue discounted at 0.090, and the project net present value at a discount rate of 0.090.]
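The representative-path selection at the heart of the modified PNET algorithm (Figure D.3) reduces to a short loop. The sketch below is a simplified illustration rather than the FORTRAN routine: it assumes the expected values, standard deviations and pairwise correlations of the path durations have already been computed, orders the paths by decreasing expected value (standard deviation breaking ties), and lets an earlier path represent a later one when their correlation reaches the transitional correlation ρ.

    def representative_paths(ev, sd, corr, rho):
        """Select representative paths PNET-style. ev[k], sd[k] describe
        path k's duration; corr[k][m] is the correlation between paths k
        and m. A path is represented by an earlier (longer) path when
        their correlation is >= the transitional correlation rho."""
        order = sorted(range(len(ev)), key=lambda k: (-ev[k], -sd[k]))
        reps = []
        for k in order:
            if all(corr[k][r] < rho for r in reps):
                reps.append(k)        # k is not yet represented: keep it
        return reps

    # Hypothetical three-path network: with rho = 0.5, path 1 is absorbed
    # by path 0, leaving two representative paths.
    ev = [32.0, 31.0, 24.0]
    sd = [4.7, 4.5, 3.0]
    corr = [[1.0, 0.8, 0.2],
            [0.8, 1.0, 0.3],
            [0.2, 0.3, 1.0]]
    print(representative_paths(ev, sd, corr, rho=0.5))   # [0, 2]
    # With rho = 0 every path is represented by the longest one alone,
    # which is the longest path default noted in section D.3:
    print(representative_paths(ev, sd, corr, rho=0.0))   # [0]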
Appendix E
The Correction Factor α

The proof given in this appendix is an extension to that from Van Tetterode (1971). Let Q and P be two independent variables from U(0,1). Then,

E[Q] = E[P] = \tfrac{1}{2}; \qquad \sigma_Q^2 = \sigma_P^2 = \tfrac{1}{12}; \qquad \mathrm{cov}(Q, P) = 0

Let R be a new random variable formed as,

R = P + \alpha\,(Q - P) \quad (E.1)

Then from equation (E.1),

\sigma_R^2 = (1 - \alpha)^2\,\sigma_P^2 + \alpha^2\,\sigma_Q^2 \quad (E.2)

From the definition of covariance of R and Q,

\mathrm{cov}(R, Q) = \mathrm{cov}(P + \alpha(Q - P),\, Q) \quad (E.3)

= \mathrm{cov}(P, Q) + \mathrm{cov}(\alpha(Q - P),\, Q) \quad (E.4)

Since cov(P,Q) = 0,

\mathrm{cov}(R, Q) = \alpha\,\mathrm{cov}(Q, Q) - \alpha\,\mathrm{cov}(P, Q) \quad (E.5)

\mathrm{cov}(R, Q) = \alpha\,\sigma_Q^2 \quad (E.6)

Let the correlation coefficient between R and Q be ρ. From the definition,

\rho = \frac{\mathrm{cov}(R, Q)}{\sigma_R\,\sigma_Q} \quad (E.7)

Substituting equations (E.2) and (E.6) in (E.7),

\rho = \frac{\alpha\,\sigma_Q^2}{\sqrt{(1 - \alpha)^2\,\sigma_P^2 + \alpha^2\,\sigma_Q^2}\;\sigma_Q} \quad (E.8)

Since \sigma_Q^2 = \sigma_P^2 = \tfrac{1}{12},

\rho = \frac{\alpha}{\sqrt{1 - 2\alpha + 2\alpha^2}} \quad (E.9)

Therefore, from equation (E.9),

(2\rho^2 - 1)\,\alpha^2 - 2\rho^2\,\alpha + \rho^2 = 0 \quad (E.10)

The correction factor α as the solution of equation (E.10) is given by,

\alpha = \frac{\rho^2 \pm \rho\sqrt{1 - \rho^2}}{2\rho^2 - 1} \quad (E.11)

Therefore, the random number correction is as follows (Van Tetterode, 1971).

RN_{ij} = RN_j + \alpha_{ij}\,(RN_i - RN_j) \quad (E.12)

where RN_i and RN_j are the random numbers generated for variables X_i and X_j respectively, and RN_ij is the random number corrected for the linear correlation ρ_ij between X_i and X_j relative to X_i. When

ρ_ij = 0:  α_ij = 0;  RN_ij = RN_j
ρ_ij = +1: α_ij = 1;  RN_ij = RN_i
ρ_ij = -1: α_ij = 1;  RN_ij = RN_i

When ρ_ij = 0 the corrected random number is given by the jth random number, demonstrating independence. When the correlation is either perfect positive or perfect negative the corrected random number is the same as the ith, demonstrating perfect correlation. Therefore, for all values of ρ_ij, 0 ≤ α_ij ≤ 1.
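A small simulation confirms the correction: solving equation (E.10) for the root of α in [0, 1] and applying equation (E.12) reproduces the target correlation between the corrected stream and RN_i. The sketch below assumes a positive target ρ; the seed and sample size are arbitrary.

    import numpy as np

    def alpha(rho):
        """Correction factor from equation (E.11): the root of
        (2 rho^2 - 1) a^2 - 2 rho^2 a + rho^2 = 0 lying in [0, 1]."""
        roots = np.roots([2 * rho**2 - 1.0, -2 * rho**2, rho**2])
        return next(a.real for a in roots if 0.0 <= a.real <= 1.0)

    rng = np.random.default_rng(1990)
    rho = 0.5
    a = alpha(rho)                         # about 0.366 for rho = 0.5
    rn_i = rng.uniform(size=200_000)       # RN_i for variable X_i
    rn_j = rng.uniform(size=200_000)       # RN_j for variable X_j
    rn_ij = rn_j + a * (rn_i - rn_j)       # equation (E.12)
    print(a, np.corrcoef(rn_i, rn_ij)[0, 1])   # correlation approx. 0.5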
Appendix F
Input Data for Numerical Examples

This appendix contains the input data used for the numerical examples in chapter seven for the validation studies and applications of the analytical method for time and economic risk quantification.

Road Pavement Project

Table F.1 is Table 1 from Ang et al. (1975). It describes the various activities of the project involving the paving of 2.2 miles of roadway pavement and the construction of appurtenant drainage structures, excavation to grade, placement of macadam shoulders, erection of guardrails, and landscaping. The respective mean durations and corresponding standard deviations are also listed.

Industrial Building Project

Table F.2 is Table 3 from Ang et al. (1975). It describes the various activities of the project involving the construction of a single-story industrial building. The building is comprised of reinforced concrete piers, frost walls, structural steel columns, and a precast roof deck. The respective mean durations and corresponding standard deviations are also listed.

Table F.1: Activities and Estimated Durations (Pavement Project)

#   Description of Activities                              E[D] (days)  σ (days)
01  Dummy                                                  0     0
02  Set-up batch plant                                     2     0.5
03  Order and deliver paving mesh                          5     1.0
04  Deliver rebars for double barrel culvert               6     1.5
05  Move in equipment                                      3     0.5
06  Deliver rebars for small box culvert                   7     4.0
07  Build double barrel culvert                            10    2.0
08  Clear and grub from station 42 - station 100           3     1.0
09  Clear and grub from station 100 - station 158          7     1.5
10  Build box culvert at station 127                       5     2.0
11  Build box culvert at station 138                       3     1.5
12  Cure double barrel culvert                             9     2.0
13  Move dirt between station 42 - station 100             5     1.5
14  Start moving dirt between station 100 - station 158    3     0.5
15  Cure box culvert at station 127                        9     4.5
16  Cure box culvert at station 138                        6     2.0
17  Order and stockpile paving material                    2     0.5
18  Place subbase from station 42 - station 100            7     1.73
19  Finish moving dirt between station 100 - station 158   5     2.0
20  Pave from station 42 - station 100                     10    2.0
21  Place subbase from station 100 - station 158           7     3.31
22  Cure pavement from station 42 - station 100            6     1.5
23  Pave from station 100 - station 158                    10    4.5
24  Cure pavement from station 100 - station 158           6     1.5
25  Place shoulders from station 42 - station 100          3     1.0
26  Place shoulders from station 100 - station 158         3     1.0
27  Place guardrail and landscape                          5     1.5
28  Dummy                                                  0     0

Table F.2: Activities and Estimated Durations (Industrial Building Project)

#   Description of Activities                              E[D] (days)  σ (days)
01  Mobilization                                           32    3.2
02  Move in                                                2     0.5
03  Initial layout                                         2     0.5
04  Dummy                                                  0     0
05  Site rough grading                                     2     0.5
06  Layout of piers                                        1     0.5
07  Excavate piers                                         2     1.0
08  Dummy                                                  0     0
09  Order and deliver rebars                               40    12.0
10  Form and rebars piers                                  2     0.5
11  Pour piers                                             2     0.5
12  Cure piers                                             4     0.8
13  Strip piers                                            1     0.1
14  Dummy                                                  0     0
15  Dummy                                                  0     0
16  Excavate frost walls                                   1     0.5
17  Order and deliver structural steel columns             60    12.0
18  Erect structural steel columns                         5     1.0
19  Order and deliver precast roof deck                    30    6.0
20  Form and mesh frost walls                              3     0.9
21  Pour frost walls                                       1     0.3
22  Cure frost walls                                       4     0.4
23  Strip frost walls                                      1     0.1
24  Backfill                                               2     0.5
25  Grade and compact gravel for floor                     2     0.2
26  Rebar floor and set screeds                            2     0.5
27  Pour and finish floor                                  2     0.5
28  Dummy                                                  0     0
29  Excavate and grade parking                             2     0.2
30  Stone base for parking                                 1     0.2
31  Dummy                                                  0     0
32  Set roof deck                                          5     1.5
33  Hang siding and waterproof roof                        6     1.2
34  Hang doors                                             4     1.2
35  Clean up                                               2     0.5
36  Bituminous surface in parking                          3     0.3

First Example

The first example is an actual deterministic feasibility analysis conducted for a mineral project in South America.
Deterministic Estimates

Table F.3 contains the description and deterministic estimates for duration and cost of work packages. The work package durations were developed to correspond to the modified construction schedule. The work package costs were estimated such that the sum of the work package costs is equivalent to the constant dollar cost estimate of the deterministic feasibility analysis.

The Statistics

The deterministic values are assumed as the median values of probability distributions for work package durations and costs. Table F.4 contains the expected value, standard deviation, skewness and kurtosis for work package durations and costs used in this example.

Revised Durations

Table F.5 contains the statistics for the revised work package durations. The coefficients of variation for work package durations are approximately 40% instead of the 3% to 13% used in the previous case.

Table F.3: Deterministic Values for Work Package Durations and Costs

WP#  Work Package Description                        Duration (months)  Cost ($)
01   Start Work Package                              -      -
02   Engineering & Mobilization                      4      2800000
03   Construction of a temporary fuel tank           3      200000
04   Road & Rail for equipment transfer              3      2520900
05   Camp expansion                                  3      2620000
06   Roads for construction requirements             8      2400000
07   Water supply scheme                             11     2501100
08   Mine auxiliary building                         11     4233800
09   Town-site - Phase 1                             8      3552200
10   Power house construction                        5      865800
11   Rainy season : Downtime                         3      -
12   Office, changehouse & lab for plant             10     2497200
13   Road/rail/port transfer facilities              10     4198000
14   Construction of process plant                   8      4996300
15   Tailings Dam                                    8      3980000
16   Town-site - Phase 2                             8      4000000
17   Power plant - supply & distribution             13     6958300
18   Roads for operational requirements - Phase 1    9      3500000
19   Construction of permanent fuel system           9      743900
20   Tailings Pipeline - Phase 1                     4      550000
21   Plant shop & warehouse                          13     1513600
22   Pre-production                                  21     33047700
23   Rainy season - Downtime                         3      -
24   Tailings thickener - Phase 1                    5      440000
25   Town-site - Phase 3                             6      2000000
26   Tailings pipeline - Phase 2                     5      682500
27   Tailings thickener - Phase 2                    4      346000
28   Equipment & installation of process plant       6      11853700
29   Roads for operational requirements - Phase 2    6      1475000
30   Reclaim water system                            9      1356100
31   Start up                                        3      600000
32   Project mgmt., org. expenses, import tax        33     18018000
33   Finish Work Package (Revenue Period)            180    -
     Total Base Estimate                             36     124450100
Table F.4: Statistics for Work Package Durations and Costs

      Duration (months)               Constant Dollar Cost ($)
WP#   E[WPD]  σ_WPD  √β₁   β₂        E[C₀]      σ_C₀      √β₁   β₂
02    3.98    0.51   0.4   9.0       2836999    913186    0.1   2.1
03    3.02    0.27   0.7   9.0       203700     67051     0.2   2.2
04    2.98    0.33   -0.3  3.5       2550166    912688    0.1   2.1
05    2.98    0.33   -0.3  3.5       2649599    912707    0.1   2.1
06    7.99    0.92   0.2   2.1       2418499    881804    0.1   2.1
07    11.04   0.84   0.1   2.1       2537692    913156    0.1   2.1
08    11.02   0.45   0.5   5.5       4258293    1337784   0.1   2.0
09    7.99    0.92   0.2   2.1       3579135    1140382   0.1   2.0
10    5.01    0.44   0.1   2.6       860250     273656    0.0   2.0
11    3.02    0.27   0.7   9.0       -          -         -     -
12    10.03   0.58   0.2   2.3       2535235    913261    0.1   2.1
13    10.05   0.55   0.3   2.4       4198739    1276596   0.0   2.0
14    7.93    0.27   0.4   3.7       5034668    1581374   0.1   2.0
15    7.94    0.33   0.3   3.5       4042899    1370345   0.1   2.0
16    7.93    0.27   0.4   3.7       4073999    1341012   0.1   2.0
17    13.01   1.06   0.0   2.0       7029228    2220857   0.1   1.9
18    9.02    0.33   0.3   3.5       3536999    1095334   0.1   2.0
19    9.02    0.33   0.3   3.5       746157     237101    0.0   1.9
20    3.98    0.51   0.4   9.0       555550     191632    0.1   2.0
21    13.01   0.55   0.0   2.3       1517817    471158    0.1   2.1
22    21.02   1.06   0.1   2.1       33400048   10952316  0.1   2.0
23    3.02    0.27   0.7   9.0       -          -         -     -
24    4.99    0.40   0.0   2.8       445550     149120    0.1   2.0
25    6.02    0.40   0.2   2.8       2018499    638774    0.2   2.2
26    5.01    0.30   0.4   5.6       688975     219015    0.1   1.9
27    3.96    0.51   0.0   2.4       349330     118624    0.1   2.0
28    6.02    0.55   0.1   2.2       12074330   3992590   0.2   2.0
29    6.02    0.48   0.2   2.6       1502749    518038    0.2   2.1
30    9.02    0.33   0.3   3.5       1390842    458266    0.3   2.3
31    3.02    0.33   0.3   3.5       607400     194779    0.1   1.9
32    33.45   0.91   0.7   6.4       18751328   6156604   0.5   2.6
33    180.00  -      -     -         -          -         -     -

Table F.5: Statistics for Revised Work Package Durations

WP#   E[WPD]  σ_WPD  √β₁   β₂
02    3.99    1.59   0.3   6.0
03    3.04    1.21   0.4   5.4
04    3.04    1.21   0.4   5.4
05    3.04    1.21   0.4   5.4
06    7.99    3.19   0.1   2.9
07    11.15   4.41   0.2   2.9
08    11.15   4.41   0.2   2.9
09    7.99    3.19   0.1   2.9
10    5.01    2.00   0.1   3.1
11    3.04    1.21   0.4   5.4
12    10.09   4.01   0.2   3.8
13    10.09   4.01   0.2   3.8
14    7.99    3.19   0.1   2.9
15    7.99    3.19   0.1   2.9
16    7.99    3.19   0.1   2.9
17    13.06   5.22   0.2   5.9
18    9.12    3.59   0.3   3.9
19    9.12    3.59   0.3   3.9
20    3.99    1.59   0.3   6.0
21    13.06   5.22   0.2   5.9
22    21.08   8.41   0.1   3.1
23    3.04    1.21   0.4   5.4
24    5.01    2.00   0.1   3.1
25    6.05    2.40   0.1   2.9
26    5.01    2.00   0.1   3.1
27    3.99    1.59   0.3   6.0
28    6.05    2.40   0.1   2.9
29    6.05    2.40   0.1   2.9
30    9.12    3.59   0.3   3.9
31    3.04    1.21   0.4   5.4
32    33.91   5.65   0.8   8.0

Revenue Streams

The expected value, standard deviation, skewness and kurtosis for annual revenue and operating cost for the revenue streams are given in Table F.6.

Table F.6: Statistics for Annual Revenue and Operating Costs

      Annual Revenue ($)                       Annual Operating Cost ($)
RS#   E[R]       σ_R       √β₁   β₂           E[O&M]     σ_O&M    √β₁   β₂
01    57771120   7925719   -0.2  2.5          18714624   2917630  0.5   2.9
02    66244656   5804341   -0.2  2.2          18951120   3503251  0.1   2.1
03    68975456   8889606   -0.3  2.1          21454432   4331357  0.7   2.5
04    77449584   6990992   0.0   2.1          21453872   4331492  0.7   2.5
05    61242768   7014973   -0.2  2.1          21638864   4058441  0.9   2.8
06    60687760   8011651   -0.3  2.1          19176192   3222784  0.4   2.5
07    61242768   4596042   -0.4  2.7          13533594   1982918  0.2   2.5
08    32325072   3656270   -0.2  2.6          10425628   2472484  0.7   3.5

Second Example

The second example is a hypothetical engineering project of thirteen work packages and three revenue streams.

Work Package Duration

The statistics for primary variables for the work package duration model are given in Tables F.7, F.8 and F.9.
The positive definite correlation matrix for primary variables that was used for all the work package durations is given by R_WPD:

R_WPD =
 1.00  -0.30   0.40
-0.30   1.00  -0.35
 0.40  -0.35   1.00

Table F.7: Statistics for Quantity Descriptor Q_i (ft³)

WP#   E[Q_i]    σ_Qi      √β₁   β₂    Common
02    38397.3   12186.1   0.5   3.3   *
03    60555.0   8829.3    0.9   9.0
04    76850.0   24440.5   0.5   3.2   **
05    16185.0   3527.4    0.8   7.8
06    8092.5    1373.3    0.4   3.6
07    20370.0   5802.8    0.8   9.0
08    32429.2   7030.8    0.8   7.8
09    38397.3   12186.1   0.5   3.3   *
10    16160.8   2820.8    0.3   2.6
11    21998.0   2621.4    0.2   2.4
12    76850.0   24440.5   0.5   3.2   **
13    20413.0   5782.4    0.7   8.5
14    76850.0   24440.5   0.5   3.2   **

Table F.8: Statistics for Labour Productivity Rate P_Li (ft³/m.d)

WP#   E[P_Li]   σ_PLi   √β₁   β₂    Common
02    9.0       1.25    0.0   5.6   *
03    10.2      2.23    0.8   8.0
04    9.0       1.25    0.0   5.6
05    10.1      2.28    0.1   2.2
06    10.2      2.23    0.8   8.0   **
07    10.2      2.23    0.8   8.0
08    8.4       1.28    0.1   8.8
09    9.0       1.25    0.0   5.6   *
10    10.2      2.23    0.8   8.0   **
11    10.1      2.28    0.1   2.2   ***
12    9.0       1.25    0.0   5.6
13    9.9       2.22    0.9   9.0
14    10.2      2.23    0.8   8.0   **

Table F.9: Statistics for Labour Usage L_i (m.d/year)

WP#   E[L_i]    σ_Li     √β₁   β₂    Common
02    6833.2    692.7    0.4   2.4
03    15185.0   1539.5   0.4   2.3   *
04    15185.0   1539.5   0.4   2.3   *
05    6074.0    615.8    0.4   2.4   **
06    3074.0    761.0    1.0   7.2
07    15370.0   3805.3   1.0   7.2
08    7777.5    2339.8   1.1   5.7
09    9055.5    832.9    0.4   4.3
10    7685.0    1902.6   1.0   7.2
11    6074.0    615.8    0.4   2.4   **
12    15092.5   1388.1   0.4   4.3   ***
13    3850.8    393.4    0.4   2.3
14    15092.5   1388.1   0.4   4.3

Work Package Cost

The statistics for primary variables for the work package cost model are given in Tables F.8, F.9, F.10, F.11 and F.12. The statistics for primary variables in Table F.12 are common for all the work package costs. Therefore, when the primary variables are assumed to be correlated, from the definition all of the work package costs are correlated. The positive definite correlation matrix for primary variables that was used for all the work package costs is given by R_WPC.

Table F.10: Statistics for Equipment Usage E_i (e.d/year)

WP#   E[E_i]   σ_Ei    √β₁   β₂    Common
02    512.0    126.8   0.8   5.0
03    600.0    60.8    0.0   3.2
04    851.0    136.6   0.7   9.0
05    300.0    30.4    0.0   3.2   *
06    256.1    63.4    0.8   5.0
07    303.7    30.8    0.4   2.4   **
08    425.5    68.3    0.7   9.0
09    305.5    31.8    0.5   2.3
10    230.5    57.1    1.0   7.2
11    303.7    30.8    0.4   2.4
12    1063.8   170.7   0.7   9.0
13    461.0    114.2   1.0   7.2
14    300.0    30.4    0.0   3.2   *

Table F.11: Statistics for Subcontractor Cost S_i ($)

WP#   E[S_i]    σ_Si      √β₁   β₂    Common
02    20370.0   5802.8    0.8   9.0   *
03    10185.0   2901.4    0.8   9.0   **
04    38425.0   12220.2   0.5   3.2
05    10185.0   2901.4    0.8   9.0   **
06    21966.5   2593.1    0.3   2.6
07    32370.0   7054.8    0.8   7.8
08    40462.5   6866.5    0.4   3.6   ***
09    20555.0   4918.8    0.6   3.4
10    8092.5    1373.3    0.4   3.6
11    6464.3    1128.3    0.3   2.6
12    40462.5   6866.5    0.4   3.6
13    20370.0   5802.8    0.8   9.0   *
14    10185.0   2901.4    0.8   9.0   **

Table F.12: Statistics for Common Primary Variables

Primary Variable   E[X]       σ_X        √β₁   β₂
C_Li ($/m.d)       141.85     22.76      0.7   9.0
C_Mi ($/ft³)       76.85      19.03      1.0   7.2
C_Ei ($/e.d)       301.85     27.76      0.4   4.3
I_Ci ($/year)      161850.0   35274.18   0.8   7.8
θ_Ii (%)           6.07       0.64       1.0   7.2
θ_Mi (%)           5.04       0.58       0.8   9.0
θ_Ei (%)           5.04       0.58       0.8   9.0
θ_Si (%)           6.07       0.64       1.0   7.2
θ_Li (%)           6.07       0.64       1.0   7.2
r (%)              7.54       0.85       0.2   2.5
R_WPC =
 1.00  -0.56   0.00   0.00   0.15   0.65   0.00   0.00   0.00   0.25   0.00   0.00   0.00   0.00
-0.56   1.00   0.00   0.00   0.34  -0.70   0.00   0.00   0.00  -0.40  -0.20   0.00   0.00   0.00
 0.00   0.00   1.00   0.00   0.20   0.00   0.00  -0.56   0.00   0.00   0.00  -0.20   0.00   0.00
 0.00   0.00   0.00   1.00   0.30   0.15   0.00   0.15   0.00   0.00   0.00   0.00   0.70   0.00
 0.15   0.34   0.20   0.30   1.00   0.20   0.15   0.20   0.00   0.00   0.00   0.00   0.00  -0.40
 0.65  -0.70   0.00   0.15   0.20   1.00   0.00   0.00   0.00   0.20   0.00   0.00   0.00   0.00
 0.00   0.00   0.00   0.00   0.15   0.00   1.00   0.00   0.00   0.00   0.50   0.00   0.00   0.00
 0.00   0.00  -0.56   0.15   0.20   0.00   0.00   1.00   0.00   0.00   0.00   0.30   0.00   0.00
 0.00   0.00   0.00   0.00   0.00   0.00   0.00   0.00   1.00   0.60   0.30   0.25   0.30   0.30
 0.25  -0.40   0.00   0.00   0.00   0.20   0.00   0.00   0.60   1.00   0.20   0.00   0.00   0.00
 0.00  -0.20   0.00   0.00   0.00   0.00   0.50   0.00   0.30   0.20   1.00   0.00   0.00   0.00
 0.00   0.00  -0.20   0.00   0.00   0.00   0.00   0.30   0.25   0.00   0.00   1.00   0.00   0.00
 0.00   0.00   0.00   0.70   0.00   0.00   0.00   0.00   0.30   0.00   0.00   0.00   1.00   0.00
 0.00   0.00   0.00   0.00  -0.40   0.00   0.00   0.00   0.30   0.00   0.00   0.00   0.00   1.00

Revenue Streams

The expected value, standard deviation, skewness and kurtosis for annual revenue and operating cost for the revenue streams are given in Table F.13.

Table F.13: Statistics for Annual Revenue and Operating Costs

      Annual Revenue ($)                    Annual Operating Cost ($)
RS#   E[R]      σ_R       √β₁   β₂         E[O&M]   σ_O&M    √β₁   β₂
01    5907500   1763709   -0.8  7.8        590750   176371   -0.8  7.8
02    3453750   694077    -0.3  3.5        323670   70548    0.8   7.8
03    3027749   441466    0.9   9.0        509249   145070   0.8   9.0
