UBC Theses and Dissertations
Incorporating geophysics in the hydrogeological decision-making process Clayton, Edward Andrew 2000


Incorporating Geophysics in the Hydrogeological Decision-Making Process

by

EDWARD (NED) ANDREW CLAYTON
B.S.E., Princeton University, 1990

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF APPLIED SCIENCE in THE FACULTY OF GRADUATE STUDIES, Department of Earth and Ocean Sciences

We accept this thesis as conforming to the required standard

THE UNIVERSITY OF BRITISH COLUMBIA
February 2000
© Edward Andrew Clayton, 2000

In presenting this thesis in partial fulfilment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the head of my department or by his or her representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission.

Department of Earth and Ocean Sciences
The University of British Columbia
Vancouver, Canada

ABSTRACT

A risk-based economic decision-making framework for hydrogeology-related engineering problems was developed, and tested in a real-world case study, for incorporating geophysics in a probabilistic site conceptual model that is directly linked to a decision analysis model. The framework employs a flexible, practical geostatistical methodology that quantitatively accounts for uncertainty associated with measurement inaccuracy and spatial variability.
Two types of decision analyses are performed: (1) all types of existing information are integrated in the site conceptual model and the most cost-effective engineering design is determined, accounting for risk costs associated with uncertainty, and (2) the expected future economic worth of different measurements, including geophysics, to the overall problem is evaluated.

Site characterization is often one of the most important and expensive activities in hydrogeology-related engineering problems, having a large impact on the engineering solution and total cost. Thus, this activity should be carried out in the most effective way possible—providing the most worthwhile information at the lowest possible cost. The worth of information is defined by how much the overall economic value of the engineering solution is improved, including reduction in costs associated with the risk of engineering failure.

Most conventional characterization techniques involve invasive sampling or testing of the subsurface, sampling only a small fraction of the overall site. Geophysics measurements are non-invasive, have a much larger spatial coverage, can be acquired at a much higher density, and are relatively inexpensive. However, geophysics measurements usually do not measure the properties of interest in these problems, and are often plagued by noise and other uncertainties that limit quantitative usage of the data. These measurements could be a very valuable site characterization technique if they could be quantitatively incorporated in the decision-making process, in a way that accounts for their inherent uncertainty.

The Markov-Bayes indicator geostatistics methodology [Alabert, 1987; Zhu and Journel, 1993] provides a versatile and straightforward approach for incorporating geophysics, and any other indirect or direct site characterization measurement of the property of interest, in a probabilistic spatial model of the property.
The uncertainty associated with the indirect measurements is determined through a simple calibration between collocated indirect and direct measurements, and accounted for in the probabilistic model using indicator cokriging. The output of the model is either (1) a set of probability of class membership maps for defined interval classes of the property or (2) a set of equally likely realizations of class membership, or actual values, of the property. The methodology was adapted and expanded into a comprehensive set of routines, optimized for incorporating geophysics data in the uncertainty model and easily linking the model to a decision analysis model.

A real-world case study involving soil contamination remediation was performed to test the integrity and practicality of the developed Markov-Bayes uncertainty framework, and its ability to incorporate actual geophysics measurements. The case study site requires excavation and selective treatment of soil contaminated above specified action levels, with a penalty cost for under-classifying or over-classifying soil contamination. A soil remediation design and data worth analysis decision model, closely linked to the uncertainty framework, was developed for the problem. The expected value optimal remediation design was determined based on the information provided by soil sample data. A sensitivity analysis was also performed—for a range of contamination under-classification unit costs and different real-time sampling alternatives. The optimal remediation designs require excavation of almost the entire site and range in total cost from $4.5 to $9 million Canadian dollars, the higher-cost designs corresponding to the scenarios where real-time sampling is not an option and the under-classification costs are substantially higher than when contamination is correctly classified.
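The collocated calibration step just described can be illustrated with a minimal sketch (hypothetical data, cutoff values, and function name; not the routines developed in this thesis). Each collocated pair is indicator-coded against a threshold, and the resulting two-by-two counts reduce to the probability that the direct (primary) variable exceeds its threshold given the indirect (secondary) indicator class.

```python
def calibrate(pairs, primary_cutoff, secondary_cutoff):
    """Estimate P(primary > cutoff | secondary indicator class) from
    collocated (primary, secondary) measurement pairs, i.e. a 2x2
    calibration scattergram reduced to conditional probabilities."""
    counts = {(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 0}
    for primary, secondary in pairs:
        i = 1 if primary > primary_cutoff else 0      # primary indicator
        j = 1 if secondary > secondary_cutoff else 0  # secondary indicator
        counts[(j, i)] += 1
    probs = {}
    for j in (0, 1):
        total = counts[(j, 0)] + counts[(j, 1)]
        probs[j] = counts[(j, 1)] / total if total else None
    return probs

# Hypothetical collocated pairs: (soil-sample contaminant level,
# geophysical reading at the same location).
pairs = [(3.2, 40.0), (0.5, 12.0), (2.8, 35.0), (0.9, 15.0),
         (1.1, 38.0), (2.5, 11.0), (3.0, 44.0), (0.4, 9.0)]
probs = calibrate(pairs, primary_cutoff=2.0, secondary_cutoff=20.0)
# probs[1]: P(contaminated | geophysics above cutoff); probs[0]: below.
```

The complementary off-diagonal fractions of such a table act as misclassification probabilities that temper how strongly the soft geophysics data are allowed to update the indicator cokriging estimate.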
When excavation and real-time batch sampling, followed by appropriate treatment, is an alternative, it is the optimal action for most of the site—since it eliminates risk costs.

A data worth analysis was also performed to evaluate the worth of additional soil sampling versus geophysics surveys of different data quality levels. The results show that soil sampling—from 20 boreholes evenly spaced across the site—provides negligible worth to the site remediation. The same is true for geophysics, except for the scenario where the geophysics is of very good quality for delineating contamination (greater than 80% probability of correctly identifying the highest contamination levels) and the surveys cover most of the site. Even this scenario provides little value unless the under-classification cost is greater than two times the correct classification cost.

Ground penetrating radar (GPR) and frequency domain electromagnetics (FDEM) surveys were acquired as an indirect measurement of hydrocarbon and metals contamination, and the processed data were incorporated in the decision model to evaluate the change in the optimized remediation design. Combining the results from both surveys in the Markov-Bayes uncertainty model produces a significant change in the probability of contamination maps in the region where the surveys were performed. However, the geophysics provides little improvement to the overall design and, consequently, little reduction in total cost. This result is anticipated from the data worth analysis, since the surveys covered only a small portion of the overall site.

The case study results illustrate that the developed Markov-Bayes uncertainty framework can be effectively employed in hydrogeology-related problems to (1) evaluate the economic worth of geophysics and (2) incorporate geophysics data in a risk-based decision model.
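The per-cell logic of the remediation decision model summarized above can be sketched roughly as follows (hypothetical unit costs and function name, not the thesis decision model). Each action's expected cost combines its direct cost with a risk cost from possible mis-classification; real-time batch sampling removes the risk cost at the price of a sampling fee.

```python
def optimal_action(p, treat_cost, underclass_penalty, overclass_penalty,
                   batch_cost=None):
    """Choose the minimum-expected-cost action for one excavated soil cell,
    given probability p that the cell exceeds the contamination action level."""
    expected = {
        # treat regardless: pay treatment, risk the over-classification penalty
        "treat": treat_cost + (1.0 - p) * overclass_penalty,
        # leave untreated: risk the under-classification penalty
        "no_treat": p * underclass_penalty,
    }
    if batch_cost is not None:
        # batch sampling classifies the soil correctly in real time, so only
        # truly contaminated soil is treated and no penalty is risked
        expected["batch"] = batch_cost + p * treat_cost
    action = min(expected, key=expected.get)
    return action, expected[action]

# With a 70% chance of contamination, treatment at $10/m3, and a $30/m3
# under-classification penalty, a $4/m3 batch-sampling option has the
# lowest expected cost here.
action, cost = optimal_action(p=0.7, treat_cost=10.0,
                              underclass_penalty=30.0,
                              overclass_penalty=5.0, batch_cost=4.0)
```

Summing the chosen minimum expected costs over all cells, and repeating the optimization with and without a candidate measurement, gives the data worth figure used throughout the case study: the reduction in total expected remediation cost attributable to the added information.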
While the geophysics acquired in this study provided little value to the decision making in this particular problem, the integration of the "soft" results from both the GPR and FDEM surveys with "hard" soil sample data had a significant impact on the contamination probability model in the region of the surveys. This suggests that using geophysics within the Markov-Bayes uncertainty framework could provide a promising technique for indirectly measuring hydrogeological properties—providing much greater spatial coverage than conventional sampling techniques.

TABLE OF CONTENTS

ABSTRACT
TABLE OF CONTENTS
LIST OF TABLES
LIST OF FIGURES
ACKNOWLEDGEMENTS

CHAPTER 1 INTRODUCTION
1.1 Motivation for research
1.2 Research goal
1.3 Research contributions
1.4 Thesis overview

CHAPTER 2 BACKGROUND
2.1 Example hydrogeology problem
2.1.1 Geology of site
2.1.2 Hydrogeology of site
2.1.3 Contamination of site and the resulting implications
2.2 The nature of uncertainty in hydrogeology problems
2.2.1 The relationship between uncertainty and scale
2.2.1.1 Measurement scale
2.2.1.2 Model scale
2.2.1.3 The relationship between model and measurement scales
2.2.1.4 Network scale
2.2.1.5 Spectral analysis of scale and uncertainty
2.2.1.6 Summary
2.2.2 Uncertainty associated with the measurement process
2.3 The decision-making process
2.3.1 Key components of decision-making process for engineering problems
2.3.2 Concept of risk in decision-making
2.3.3 Engineering versus health risk
2.3.4 Concept of data worth
2.4 The site characterization process
2.4.1 Objectives
2.4.2 Organization
2.4.2.1 Temporal perspective
2.4.2.2 Problematic perspective
2.4.2.3 Summary
2.5 Review of previous work
2.5.1 Geophysics applications in hydrology: general references and examples
2.5.2 Incorporating geophysics in probabilistic framework: measurement uncertainty not quantified
2.5.3 Incorporating geophysics in probabilistic framework: measurement uncertainty quantified
2.5.4 Incorporating geophysics in hydrogeological decision-making framework

CHAPTER 3 GEOPHYSICS IN HYDROGEOLOGY PROBLEMS
3.1 Introduction to geophysics
3.2 Applications of geophysics in hydrogeology-related studies
3.3 Advantages of geophysics
3.4 Difficulties associated with applying geophysics
3.4.1 The relationship between instrument observations and geophysical properties
3.4.1.1 Physical model - geophysical inversion
3.4.1.2 Empirical and statistical models
3.4.2 The relationship between geophysical properties and material properties
3.4.2.1 The rock physics problem - conceptual model approach
3.4.2.2 The rock physics problem - statistical approach
3.4.2.3 The rock physics problem - comparison of approaches
3.4.3 Noise in geophysical measurements
3.4.4 Summary

CHAPTER 4 ACCOUNTING FOR AND MODELING UNCERTAINTY
4.1 Introduction
4.1.1 Uncertainty analysis approaches
4.1.2 Probability: Philosophical schools of thought
4.2 Accounting for uncertainty in spatial random variables
4.2.1 Derivation of spatial RVs
4.2.2 Classical least squares regression analysis
4.2.3 Geostatistics
4.2.3.1 Kriging estimation methods
4.2.3.2 Geostatistical simulation
4.2.4 Scale
4.3 Probability analysis using the Markov-Bayes indicator estimation methodology
4.3.1 Model inference
4.3.1.1 Exploratory data analysis
4.3.1.2 Secondary variable calibration model
4.3.1.3 Prior expected value
4.3.1.4 Covariance models
4.3.2 Geostatistical estimation
4.3.3 Limitations
4.4 Summary

CHAPTER 5 CASE STUDY
5.1 Introduction
5.2 Background
5.2.1 Setting
5.2.2 History of site
5.3 Site Characterization
5.3.1 Soil and groundwater sampling and analyses
5.3.1.1 Borehole logging
5.3.1.2 Soil sampling for chemical analysis
5.3.1.3 Groundwater sampling for chemical analysis
5.3.1.4 Potentiometric head measurements and slug tests
5.3.1.5 Oil-interface probe measurements
5.3.2 Geophysics
5.3.2.1 Choice of geophysical techniques
5.3.2.2 GPR Surveys
5.3.2.3 FDEM Surveys
5.4 Analysis and Modeling
5.4.1 Risk-Cost-Benefit Decision Model
5.4.2 Geostatistical Model
5.4.2.1 Exploratory data analysis
5.4.2.2 Model inference and validation
5.4.3 Present Day (Prior) Decision Analysis
5.4.3.1 Contamination level probabilities
5.4.3.2 Remediation design
5.4.4 Future Site Characterization Data Worth (Preposterior) Analysis
5.4.4.1 Perfect information
5.4.4.2 Borehole sampling
5.4.4.3 Geophysics
5.4.4.4 Summary
5.4.5 Decision Analysis With Geophysics (Posterior Analysis)
5.4.5.1 FDEM processing
5.4.5.2 GPR processing
5.4.5.3 FDEM calibration
5.4.5.4 GPR calibration
5.4.5.5 Updating contamination probabilities
5.4.5.6 Updating remediation designs
5.4.5.7 Summary

CHAPTER 6 CONCLUSIONS AND RECOMMENDATIONS
6.1 Research results
6.1.1 Uncertainty model
6.1.2 Decision model
6.1.3 Case study
6.1.4 Summary
6.2 Recommendations for further research

BIBLIOGRAPHY

APPENDIX A THE GEOPHYSICAL INVERSE PROBLEM
A.1 Background
A.2 Well-posed inverse problems
A.3 Conceptualization of the inverse problem
A.4 Inverse solution methods
A.4.1 Global versus local methods
A.4.2 Direct versus indirect methods
A.4.3 Accounting for uncertainty in the inverse solution
A.4.3.1 Backus-Gilbert approach
A.4.3.2 Minimum relative entropy approach
A.4.3.3 Geostatistical approach

APPENDIX B THE MARKOV-BAYES GEOSTATISTICAL APPROACH
B.1 Introduction
B.2 Overview of indicator geostatistics
B.2.1 Definition of indicator spatial random variables
B.2.2 Properties of indicators
B.2.3 Indicator coding of data
B.2.4 Indicator kriging
B.2.5 Indicator simulation

LIST OF TABLES

Table 5.3.1: Case study site general stratigraphy
Table 5.3.2: Regulatory action levels for soil contamination
Table 5.3.3: Regulatory action levels for groundwater contamination
Table 5.4.1a: Case study decision model parameters—objective and decision questions
Table 5.4.1b: Case study decision model parameters—constraints
Table 5.4.1c: Case study decision model parameters—decision variables
Table 5.4.2: Declustering analysis results for all contaminant levels
Table 5.4.3: Variogram parameters for all contaminant levels
Table 5.4.4: Number of soil samples used as hard and soft data for cokriging estimation
Table 5.4.5: Comparison of site-wide contaminant level expected values of kriging and simulation grid results for 3 conditioning data scenarios: (1) hard sample data, (2) all sample data, and (3) all sample data and soft C prior probabilities
Table 5.4.6: Hypothetical soft data calibration data set where the soft measurement can delineate between hard values of 2 and less than 2 with 80% success, but has no ability to distinguish between values of 1 and 0
Table 5.4.7: FDEM conductivity cutoffs chosen for calibration of FDEM data to metals and MOG contaminant levels, measured in soil samples
Table 5.4.8: Summary parameters for results of calibration between FDEM conductivity and metals / MOG soil samples
Table 5.4.9: GPR amplitude cutoffs chosen for calibration of GPR data to metals and MOG contaminant levels, measured in soil samples
Table 5.4.10: Summary parameters for results of calibration between GPR relative amplitudes and metals / MOG soil samples
Table 6.1.1: Summarized case study decision analysis remediation design results—for a range of decision model parameters and three information usage scenarios: (1) direct soil sample measurements, (2) direct and co-contaminant soil sample measurements, (3) direct and co-contaminant soil sample measurements plus prior probabilities of contamination exceeding action levels
Table 6.1.2: Summarized case study future data worth decision analysis results—for several contamination under-classification costs and three future site characterization scenarios: (1) soil sample measurements for detecting all contaminants in 21 boreholes spread evenly across the site, (2) geophysics measurements for detecting metals contamination, (3) geophysics measurements for detecting pure phase hydrocarbons contamination. Data worth is represented by reduction in remediation design cost with additional information.
Table 6.1.3: Summarized case study decision analysis remediation design results after incorporating both the FDEM and GPR geophysics measurements—for a range of decision model parameters. Data worth is defined by the reduction in the remediation design cost with the additional information provided by geophysics
Table B.1: Indicator coding of data [Alabert, 1987; Deutsch and Journel, 1992]

LIST OF FIGURES

Figure 2.1: Schematic map view of example problem site
Figure 2.2: Schematic 3-dimensional view of example problem site
Figure 2.3: Filtering of hypothetical porosity depth section due to the measurement and upscaling processes. Geometric mean boxcar filter is used for measurement filtering—labeled "# m res." for # meters resolution, corresponding to filter width. Upscaling to 5 meter cell dimension grid performed on 1 meter resolution measurement
Figure 2.4: Conceptualization of the decision-making process for hydrogeology-related engineering problems
Figure 2.5: Pyramid approach for defining decision objectives. The apex of the pyramid represents the primary objective, underlain by intermediate objectives that are to be met from bottom to top—culminating with the primary objective
Figure 3.1: Spatial averaging associated with a single measurement of a typical surface geophysical method
Figure 4.1: Hypothetical example of a pdf for hydraulic conductivity (K), on a log scale. The pdf shows that the probability of K being 10⁻⁵ m/s is 0.18
Figure 4.2.2: Example of a hypothetical calibration scattergram. In this example there are two indicator classes. Each point represents a calibration pairing where both the primary and secondary variables were measured at the same location. A1 and A2 (B1 and B2) are the primary (secondary) indicator classes; the line separating the two is the quantity value demarcating the two classes. S1 through S4 are the sums of the calibration pairings within each sector of the scattergram
Figure 4.2.3: Effect of spatial filtering on indicator kriging of hypothetical hard soil chemical concentration measurements. Probability of concentrations above the lower concentration threshold level, assuming the measurements are (a) point measurements and (b) measurements having a circular areal support volume with a radius of 5 units
Figure 4.2.4: Effect of spatial filtering on indicator kriging of hypothetical hard soil chemical concentration measurements. Probability of concentrations above the upper concentration threshold level, assuming the measurements are (a) point measurements and (b) measurements having a circular areal support volume with a radius of 5 units
Figure 5.1: Example grid of probabilities of contaminant concentration greater than a regulatory threshold level
Figure 5.2: Example optimized soil remediation design
Figure 5.2.1: Schematic map of case study site
Figure 5.4.1: Schematic case study remediation design
Figure 5.4.2a: 3-d cube of case study soil sample metals analysis results (top view)
Figure 5.4.2b: 3-d cube of case study soil sample metals analysis results (side view)
Figure 5.4.3a: 3-d cube of case study soil sample PAH analysis results (top view)
Figure 5.4.3b: 3-d cube of case study soil sample PAH analysis results (side view)
Figure 5.4.4a: 3-d cube of case study soil sample MOG analysis results (top view)
Figure 5.4.4b: 3-d cube of case study soil sample MOG analysis results (side view)
Figure 5.4.5a: Histogram of soil samples analyzed for metals
Figure 5.4.5b: Ogive of soil samples analyzed for metals
Figure 5.4.6a: Histogram of soil samples analyzed for PAH
Figure 5.4.6b: Ogive of soil samples analyzed for PAH
Figure 5.4.7a: Histogram of soil samples analyzed for MOG
Figure 5.4.7b: Ogive of soil samples analyzed for MOG
Figure 5.4.8a: Declustering analysis results for metals threshold level B
Figure 5.4.8b: Declustering analysis results for metals threshold level C
Figure 5.4.9a: Declustering analysis results for PAH threshold level B
Figure 5.4.9b: Declustering analysis results for PAH threshold level C
Figure 5.4.10: Declustering analysis results for MOG special waste threshold level
Figure 5.4.11a: Horizontal variography for metals level B threshold
Figure 5.4.11b: Vertical variography for metals level B threshold
Figure 5.4.11c: Variography for metals level C threshold
Figure 5.4.12a: Horizontal variography for PAH level B threshold
Figure 5.4.12b: Vertical variography for PAH level B threshold
Figure 5.4.12c: Variography for PAH level C threshold
Figure 5.4.13: Variography for MOG special waste threshold
Figure 5.4.14: Calibration of PAH samples to metals - (a) cdf table, (b) misclassification probabilities  232  Figure 5.4.15: Calibration of MOG samples to metals - (a) cdf table, (b) misclassification probabilities  233  Figure 5.4.16: Calibration of metals samples to PAH - (a) cdf table, (b) misclassification probabilities  234  Figure 5.4.17: Calibration of MOG samples to PAH - (a) cdf table, (b) misclassification probabilities  235  Figure 5.4.18: Calibration of metals samples to PAH - (a) cdf table, (b) misclassification probabilities  236  Figure 5.4.18: Calibration of metals samples to PAH - (a) cdf table, (b) misclassification probabilities  237  Figure 5.4.19a: Probability of metals level B soil contamination—from kriging of all samples analyzed for metals. 240 Figure 5.4.19b: Probability of metals level C soil contamination—from kriging of all samples analyzed for metals. 240 Figure 5.4.20a: Probability of metals level B soil contamination—from cokriging of all samples  241  Figure 5.4.20b: Probability of metals level C soil contamination—from cokriging of all samples  241  LIST OF FIGURES  xiv  Figure 5.4.21a: Probability of metals level B soil contamination—from cokriging of all samples and soft type C prior probabilities 242 Figure 5.4.21b: Probability of metals level C soil contamination—from cokriging of all samples and soft type C prior probabilities 242 Figure 5.4.22a: Probability of PAH level B soil contamination—from kriging of all samples analyzed for PAH. .243 Figure 5.4.22b: Probability of PAH level C soil contamination—from kriging of all samples analyzed for PAH. 
.243 Figure 5.4.23a: Probability of PAH level B soil contamination—from cokriging of all samples  244  Figure 5.4.23b: Probability of PAH level C soil contamination—from cokriging of all samples  244  Figure 5.4.24a: Probability of PAH level B soil contamination—from cokriging of all samples and soft type C prior probabilities 245 Figure 5.4.24b: Probability of PAH level C soil contamination—from cokriging of all samples and soft type C prior probabilities 245 Figure 5.4.25: Probability of MOG special waste soil contamination—from kriging of all samples analyzed for MOG 246 Figure 5.4.26: Probability of MOG special waste soil contamination—from cokriging of all samples  246  Figure 5.4.27: Probability of MOG special waste soil contamination—from cokriging of all samples and soft type C prior probabilities 247 Figure 5.4.28a: Difference between probabilities estimated from all sample data versus only samples analyzed for contaminant being estimated—metals level B 248 Figure 5.4.28b: Difference between probabilities estimated from all sample data versus only samples analyzed for contaminant being estimated—metals level C 249 Figure 5.4.29a: Difference between probabilities estimated from all sample data versus only samples analyzed for contaminant being estimated—PAH level B 249 Figure 5.4.29b: Difference between probabilities estimated from all sample data versus only samples analyzed for contaminant being estimated—PAH level C 250 ;  Figure 5.4.30: Difference between probabilities estimated from all sample data versus only samples analyzed for contaminant being estimated—MOG SW 250 Figure 5.4.31: Assigned prior probabilities for soft type C data scenario—metals  251  Figure 5.4.32: Assigned prior probabilities for soft type C data scenario—PAH  252  Figure 5.4.33: Assigned prior probabilities for soft type C data scenario—MOG  252  Figure 5.4.34a: Differences greater than 0.1 between probabilities estimated from all sample data and soft C data versus only hard 
samples—metals level B 253 Figure 5.4.34b: Differences greater than 0.02 between probabilities estimated from all sample data and soft C data versus only hard samples—metals level C 254 Figure 5.4.35a: Differences greater than 0.1 between probabilities estimated from all sample data and soft C data versus only hard samples—PAH level B 254  LIST OF FIGURES  xv  Figure 5.4.35b: Differences greater than 0.02 between probabilities estimated from all sample data and soft C data versus only hard samples—PAH level C 255 Figure 5.4.36: Differences greater than 0.02 between probabilities estimated from all sample data and soft C data versus only hard samples—MOG special waste 255 Figure 5.4.37: Variance of PAH level B probabilities estimated from kriging of soil sample analyzed for PAH.. ..256 Figure 5.4.38: Variance of PAH level B probabilities estimated from cokriging of all soil sample data  257  Figure 5.4.39: Variance of PAH level B probabilities estimated from cokriging of all soil sample data and type C soft data prior probabilities 257 Figure 5.4.40a: Metals level B soil contamination expected value of 25 realizations—simulated using all sample data as conditioning data 258 Figure 5.4.40b: Metals level C soil contamination expected value of 25 realizations—simulated using all sample data as conditioning data 259 Figure 5.4.41a: PAH level B soil contamination expected value of 25 realizations—simulated using all sample data as conditioning data 259 Figure 5.4.41b: PAH level C soil contamination expected value of 25 realizations—simulated using all sample data as conditioning data 260 Figure 5.4.42: MOG special waste soil contamination expected value of 25 realizations—simulated using all sample data as conditioning data 260 Figure 5.4.43: PAH level B soil contamination expected value of 25 realizations— simulated using all sample data and soft C prior probabilities as conditioning data 261 Figure 5.4.44: Venn diagram of hypothetical metals and MOG contaminant level 
sets  263  Figure 5.4.45: Optimized total cost of remediation for data scenario where only hard sample data is used in geostatistical estimation 267 Figure 5.4.46: Optimized total cost of remediation for data scenario where all sample data is used in geostatistical estimation 268 Figure 5.4.47: Optimized total cost of remediation for data scenario where all sample data and soft C prior probabilities are used in geostatistical estimation 268 Figure 5.4.48: Value of information (cost savings in remediation design) provided by using soft sample data instead of just hard sample data 270 Figure 5.4.49: Value of information (cost savings in remediation design) provided by using soft sample data and soft C prior probabilities instead of just hard sample data 270 Figure 5.4.50a: Optimized remediation design based on hard sample data only—for scenario where (1) batch sampling is not an alternative, (2) ratio of future to present unit remediation cost is 2.0 273 Figure 5.4.50b: Portion of site not requiring contamination treatment for remediation design in Figure 5.4.50a....273 Figure 5.4.50c: Optimized remediation design based on all sample data—for same scenario in Figure 5.4.50a  274  Figure 5.4.50d: Portion of site not requiring contamination treatment for remediation design in Figure 5.4.50c....274  LIST OF FIGURES  xvi  Figure 5.4.50e: Optimized remediation design based on all sample data and soft C prior probabilities—for same scenario in Figure 5.4.50a 275 Figure 5.4.50f: Portion of site soil volume not requiring treatment for contamination—remediation design in Figure 5.4.50e 275 Figure 5.4.51a: Optimized remediation design based on all sample data—for scenario where (1) batch sampling is an alternative at $4 / m , (2) ratio of future to present unit remediation cost is 2.0 276 3  Figure 5.4.51b: Optimized remediation design based on all sample data and soft C prior probabilities—for same scenario in Figure 5.4.51a 276 Figure 5.4.52a: Optimized remediation design 
based on all sample data—for scenario where (1) batch sampling is an alternative at $8 / m , (2) ratio of future to present unit remediation cost is 2.0 277 3  Figure 5.4.52b: Optimized remediation design based on all sample data and soft C prior probabilities—for same scenario in Figure 5.4.52a 277 Figure 5.4.52c: Portion of site not designated for batch sampling—for remediation design in Figure 5.4.52a  278  Figure 5.4.53a: Optimized remediation design based on all sample data—for scenario where (1) batch sampling is not an alternative, (2) ratio of future to present unit remediation cost is 1.25 278 Figure 5.4.53b: Optimized remediation design based on all sample data and soft C prior probabilities—for same scenario in Figure 5.4.53a 279 Figure 5.4.54a: Optimized remediation design based on all sample data—for scenario where (1) batch sampling is not an alternative, (2) ratio of future to present unit remediation cost is 1.125 279 Figure 5.4.54b: Optimized remediation design based on all sample data and soft C prior probabilities—for same scenario in Figure 5.4.54a 280 . 
Figure 5.4.55: Optimized remediation design based on all sample data—for scenario where (1) batch sampling is an alternative at $8 / m , (2) ratio of future to present unit remediation cost is 3.0 280 3  Figure 5.4.56: Optimized remediation design based on all sample data—for scenario where (1) batch sampling is not an alternative, (2) ratio of future to present unit remediation cost is 1.0 281 Figure 5.4.57a: Expected value total remediation cost with additional hypothetical data of different types, including (1) EM geophysics of different data qualities for detecting metals level C, (2) 21 fully-sampled boreholes, and (3) perfect information—for scenario where batch sampling is not an alternative 283 Figure 5.4.57b: Economic value (cost savings in remediation design) of additional hypothetical data—for same scenario and data types as in Figure 5.4.57a 284 Figure 5.4.57c: Expected value total remediation cost with additional hypothetical data—for scenario where $4/m batch sampling is an alternative and same data types as Figure 5.4.57a 284 3  Figure 5.4.57d: Economic value (cost savings in remediation design) of additional hypothetical data—for same scenario and data types as in Figure 5.4.57c 285 Figure 5.4.58a: Expected value total remediation cost with additional hypothetical data of different types, including (1) EM geophysics of different data qualities for detecting MOG SW, (2) 21 fully-sampled boreholes, and (3) perfect information—for scenario where batch sampling is not an alternative 285 Figure 5.4.58b: Economic value (cost savings in remediation design) of additional hypothetical data—for same scenario and data types as in Figure 5.4.58a 286  LIST OF FIGURES  xvii  Figure 5.4.58c: Expected value total remediation cost with additional hypothetical data—for scenario where $4/m batch sampling is an alternative and same data types as Figure 5.4.58a 286 3  Figure 5.4.58d: Economic value (cost savings in remediation design) of additional hypothetical 
data—for same scenario and data types as in Figure 5.4.58c 287

Figure 5.4.59a: Remediation design based on scenario where there is perfect information about all contaminant levels—for a simulated realization conditioned on all sample data 288

Figure 5.4.59b: Remediation design based on scenario where there is perfect information about all contaminant levels—for a simulated realization conditioned on all sample data and soft C prior probabilities 288

Figure 5.4.60: Location of 21 hypothetical boreholes and sample depths used for data worth analysis of borehole sampling 290

Figure 5.4.61a: Probability of metals level B soil contamination—estimated from cokriging of hypothetical sample outcomes in 21 boreholes and existing hard and soft samples 291

Figure 5.4.61b: Probability of metals level C soil contamination—estimated from cokriging of hypothetical sample outcomes in 21 boreholes and existing hard and soft samples 292

Figure 5.4.62a: Probability of PAH level B soil contamination—estimated from cokriging of hypothetical sample outcomes in 21 boreholes and existing hard and soft samples 292

Figure 5.4.62b: Probability of PAH level C soil contamination—estimated from cokriging of hypothetical sample outcomes in 21 boreholes and existing hard and soft samples 293

Figure 5.4.63: Probability of MOG special waste soil contamination—estimated from cokriging of hypothetical sample outcomes in 21 boreholes and existing hard and soft samples 293

Figure 5.4.64a: Optimized remediation design based on simulated samples in 21 boreholes and all existing sample data—for scenario where (1) batch sampling is not an alternative, (2) ratio of future to present unit remediation cost is 2.0 294

Figure 5.4.64b: Optimized remediation design based on simulated samples in 21 boreholes and all existing sample data—for scenario where (1) $4/m³ batch sampling is an alternative, (2) ratio of future to present unit remediation cost is 2.0 294

Figure 5.4.65: Calibration of hypothetical soft
data to PAH levels using constructed data set in Table 5.4.6—(a) cdf table, (b) misclassification probabilities 297

Figure 5.4.66: Same calibration as Figure 5.4.65, except normalized by the inferred PAH contaminant level expected values—(a) cdf table, (b) misclassification probabilities 298

Figure 5.4.67a: Simulation of geophysics soft data outcome where the measurement can perfectly delineate PAH contamination above and below threshold level C, but provides no information about the level B threshold. One (red) corresponds to the geophysics measurement above the calibration soft cutoff, zero (green) below, -1 (blue) outside problem domain 299

Figure 5.4.67b: Simulation of geophysics soft data outcome where the measurement can perfectly delineate PAH contamination above and below threshold level B, but provides no information about the level C threshold. Grid values defined same as in Figure 5.4.67a 300

Figure 5.4.68a: Simulation of geophysics soft data outcome where the measurement can delineate PAH contamination above and below threshold level C with 90% success, but provides no information about the level B threshold. Grid values defined same as in Figure 5.4.67a 300

Figure 5.4.68b: Simulation of geophysics soft data outcome where the measurement can delineate PAH contamination above and below threshold level B with 90% success, but provides no information about the level C threshold. Grid values defined same as in Figure 5.4.67a 301

Figure 5.4.69a: Simulation of geophysics soft data outcome where the measurement can delineate PAH contamination above and below threshold level C with 80% success, but provides no information about the level B threshold.
Grid values defined same as in Figure 5.4.67a 301

Figure 5.4.69b: Simulation of geophysics soft data outcome where the measurement can delineate PAH contamination above and below threshold level B with 80% success, but provides no information about the level C threshold. Grid values defined same as in Figure 5.4.67a 302

Figure 5.4.70a: Simulation of geophysics soft data outcome where the measurement can delineate PAH contamination above and below threshold level C with 60% success, but provides no information about the level B threshold. Grid values defined same as in Figure 5.4.67a 302

Figure 5.4.70b: Simulation of geophysics soft data outcome where the measurement can delineate PAH contamination above and below threshold level B with 60% success, but provides no information about the level C threshold. Grid values defined same as in Figure 5.4.67a 303

Figure 5.4.71a: Probability of PAH level B soil contamination—estimated from cokriging of all sample data and a hypothetical geophysics realization, where the geophysics can perfectly delineate PAH contamination above and below threshold level C, but provides no information about the level B threshold 304

Figure 5.4.71b: Probability of PAH level C soil contamination—estimated the same way as in Figure 5.4.71a 305

Figure 5.4.72a: Probability of PAH level B soil contamination—estimated from cokriging of all sample data and a hypothetical geophysics realization, where the measurement can delineate PAH contamination above and below threshold level C with 90% success, but provides no information about the level B threshold 305

Figure 5.4.72b: Probability of PAH level C soil contamination—estimated the same way as in Figure 5.4.72a 306

Figure 5.4.73a: Probability of PAH level B soil contamination—estimated from cokriging of all sample data and a hypothetical geophysics realization, where the measurement can delineate PAH contamination above and below threshold level C with 70% success, but provides no information
about the level B threshold 306

Figure 5.4.73b: Probability of PAH level C soil contamination—estimated the same way as in Figure 5.4.73a 307

Figure 5.4.74a: Optimized remediation design based on all existing sample data and a simulated geophysics realization, where the geophysics can perfectly delineate metals contamination above and below threshold level C, but provides no information about the level B threshold—for scenario where (1) batch sampling is not an alternative, (2) ratio of future to present unit remediation cost is 2.0 308

Figure 5.4.74b: Optimized remediation design based on all existing sample data and a simulated geophysics realization, where the geophysics has the same applicability as in 5.4.74a—for scenario where (1) $4/m³ batch sampling is an alternative, (2) ratio of future to present unit remediation cost is 2.0 309

Figure 5.4.75a: Optimized remediation design based on all existing sample data and a simulated geophysics realization, where the geophysics can delineate metals contamination above and below threshold level C with 90% success, but provides no information about the level B threshold—for scenario where (1) batch sampling is not an alternative, (2) ratio of future to present unit remediation cost is 2.0 309

Figure 5.4.75b: Optimized remediation design based on all existing sample data and a simulated geophysics realization, where the geophysics can delineate metals contamination above and below threshold level C with 70% success, but provides no information about the level B threshold—for same scenario as in Figure 5.4.75a.
310

Figure 5.4.76a: Contour plot of FDEM horizontal dipole, NW-SE tool-oriented conductivity, rectified and gridded to decision model grid coordinates 314

Figure 5.4.76b: Contour plot of FDEM horizontal dipole, SW-NE tool-oriented conductivity, rectified and gridded to decision model grid coordinates 314

Figure 5.4.77a: Contour plot of FDEM vertical dipole, NW-SE tool-oriented conductivity, rectified and gridded to decision model grid coordinates 315

Figure 5.4.77b: Contour plot of FDEM vertical dipole, SW-NE tool-oriented conductivity, rectified and gridded to decision model grid coordinates 315

Figure 5.4.78a: Contour plot of difference between FDEM horizontal dipole, NW-SE and SW-NE tool-oriented conductivity 317

Figure 5.4.78b: Contour plot of difference between FDEM vertical dipole, NW-SE and SW-NE tool-oriented conductivity 317

Figure 5.4.78c: Contour plot of combined FDEM horizontal dipole conductivity grid—constructed by merging low-noise parts of NW-SE and SW-NE oriented grids 318

Figure 5.4.79a: Processed GPR relative amplitude section from case study site—line 2 of 15 in production survey 319

Figure 5.4.79b: Processed GPR relative amplitude section from case study site—line 9 of 15 in production survey 320

Figure 5.4.79c: Processed GPR relative amplitude section from case study site—line 14 of 15 in production survey
320

Figure 5.4.80: 3-d volumetric representation of processed GPR relative amplitude sections acquired at case study site 321

Figure 5.4.81a: 3-d, 200 MHz, GPR relative amplitude grid—from processed, transformed, and gridded case study production survey 2-d sections 322

Figure 5.4.81b: Coarser 3-d, 200 MHz, GPR relative amplitude grid—from processed, transformed, gridded and averaged case study production survey 2-d sections 322

Figure 5.4.82: Calibration scatterplots of collocated FDEM horizontal dipole conductivity grid values versus soil sample analysis results for: (a) metals contaminant levels (1 = less than level B threshold, 2 = between level B and C, 3 = greater than level C), (b) MOG contaminant levels (1 = less than level B threshold, 2 = between level B and C, 3 = between level C and SW, 4 = greater than special waste). Conductivity cutoff chosen for calibration is shown as green line 324

Figure 5.4.83: Calibration scatterplots of collocated FDEM vertical dipole conductivity grid values versus soil sample analysis results for: (a) metals contaminant levels (1 = less than level B threshold, 2 = between level B and C, 3 = greater than level C), (b) MOG contaminant levels (1 = less than level B threshold, 2 = between level B and C, 3 = between level C and SW, 4 = greater than special waste).
Conductivity cutoff chosen for calibration is shown as green line 325

Figure 5.4.84: Ogive plots of calibration results for FDEM horizontal dipole conductivity—calibrated to metals sample data for delineating metals contamination above and below level B threshold: (a) cdf table, normalized to hard data being equally proportioned in classes, (b) misclassification probabilities for same normalization; (c) cdf table, normalized to inferred contaminant level expected values, (d) misclassification probabilities for same normalization 329

Figure 5.4.85: Ogive plots of calibration results for FDEM vertical dipole conductivity—calibrated to metals sample data for delineating metals contamination above and below level B threshold: (a) cdf table, normalized to hard data being equally proportioned in classes, (b) misclassification probabilities for same normalization; (c) cdf table, normalized to inferred contaminant level expected values, (d) misclassification probabilities for same normalization 330

Figure 5.4.86: Ogive plots of calibration results for FDEM horizontal dipole conductivity—calibrated to MOG sample data for delineating MOG contamination above and below special waste threshold: (a) cdf table, normalized to hard data being equally proportioned in classes, (b) misclassification probabilities for same normalization; (c) cdf table, normalized to inferred contaminant level expected values, (d) misclassification probabilities for same normalization 331

Figure 5.4.87: Ogive plots of calibration results for FDEM vertical dipole conductivity—calibrated to MOG sample data for delineating MOG contamination above and below special waste threshold: (a) cdf table, normalized to hard data being equally proportioned in classes, (b) misclassification probabilities for same normalization; (c) cdf table, normalized to inferred contaminant level expected values, (d) misclassification probabilities for same normalization 332

Figure 5.4.88: Calibration
scatterplots of collocated GPR averaged relative amplitude grid values versus soil sample analysis results for: (a) metals contaminant levels (1 = less than level B threshold, 2 = between level B and C, 3 = greater than level C), (b) MOG contaminant levels (1 = less than level B threshold, 2 = between level B and C, 3 = between level C and SW, 4 = greater than special waste). GPR amplitude cutoff chosen for calibration is shown as green line. Cutoff is for absolute value of amplitude in (a) 334 Figure 5.4.89: Ogive plots of calibration results for GPR absolute value amplitudes—calibrated to metals sample data for delineating metals contamination above and below level B threshold: (a) cdf table, normalized to hard data being equally proportioned in classes, (b) misclassification probabilities for same normalization; (c) cdf table, normalized to inferred contaminant level expected values, (d) misclassification probabilities for same normalization 336 Figure 5.4.90: Ogive plots of calibration results for GPR amplitudes—calibrated to MOG sample data for delineating MOG contamination above and below special waste threshold: (a) cdf table, normalized to hard data being equally proportioned in classes, (b) misclassification probabilities for same normalization; (c) cdf table, normalized to inferred contaminant level expected values, (d) misclassification probabilities for same normalization 337 Figure 5.4.91a: Probability of metals level B soil contamination—estimated from cokriging of all sample data and FDEM horizontal and vertical dipole conductivity 339 Figure 5.4.91b: Probability of metals level C soil contamination—estimated from cokriging of all sample data and FDEM horizontal and vertical dipole conductivity 340 Figure 5.4.92: Probability of MOG special waste soil contamination—estimated from cokriging of all sample data and FDEM horizontal and vertical dipole conductivity 340 Figure 5.4.93a: Probability of metals level B soil contamination—estimated from 
cokriging of all sample data and GPR absolute value amplitudes 341

Figure 5.4.93b: Probability of metals level C soil contamination—estimated from cokriging of all sample data and GPR absolute value amplitudes 341

Figure 5.4.94: Probability of MOG special waste soil contamination—estimated from cokriging of all sample data and GPR amplitudes 342

Figure 5.4.95a: Probability of metals level B soil contamination—estimated from cokriging of all sample data, FDEM horizontal and vertical dipole conductivity, and GPR amplitudes 343

Figure 5.4.95b: Probability of metals level C soil contamination—estimated from cokriging of all sample data, FDEM horizontal and vertical dipole conductivity, and GPR amplitudes 344

Figure 5.4.96: Probability of MOG special waste soil contamination—estimated from cokriging of all sample data, FDEM horizontal and vertical dipole conductivity, and GPR amplitudes 344

Figure 5.4.97a: Differences greater than 0.1 between metals level B probabilities estimated after and before addition of FDEM and GPR geophysics data to all existing sample data 345

Figure 5.4.97b: Differences greater than 0.02 between metals level C probabilities estimated after and before addition of FDEM and GPR geophysics data to all existing sample data 345

Figure 5.4.98: Differences greater than 0.1 between MOG special waste probabilities estimated after and before addition of FDEM and GPR geophysics data to all existing sample data 346

Figure 5.4.99a: Probability of metals level B soil contamination across geophysics survey region—estimated from cokriging of all sample data, FDEM horizontal and vertical dipole conductivity, and GPR amplitudes 346

Figure 5.4.99b: Probability of metals level C soil contamination across geophysics survey region—estimated from cokriging of all sample data, FDEM horizontal and vertical dipole conductivity, and GPR amplitudes 347

Figure 5.4.100: Probability of MOG special waste soil contamination
across geophysics survey region—estimated from cokriging of all sample data, FDEM horizontal and vertical dipole conductivity, and GPR amplitudes 347

Figure 5.4.101a: Variance of metals level B soil contamination probabilities—estimated from cokriging of all sample data, FDEM horizontal and vertical dipole conductivity, and GPR amplitudes 348

Figure 5.4.101b: Variance of metals level C soil contamination probabilities—estimated from cokriging of all sample data, FDEM horizontal and vertical dipole conductivity, and GPR amplitudes 348

Figure 5.4.102: Variance of MOG special waste soil contamination probabilities—estimated from cokriging of all sample data, FDEM horizontal and vertical dipole conductivity, and GPR amplitudes 349

Figure 5.4.103a: Probability of contamination above any of the metals, PAH, or MOG regulatory threshold levels—estimated from cokriging of all sample data, FDEM horizontal and vertical dipole conductivity, and GPR amplitudes 349

Figure 5.4.103b: Probability of contamination above any of the metals, PAH, or MOG regulatory threshold levels—estimated from cokriging of all sample data (no geophysics) 350

Figure 5.4.103a: Optimized remediation design based on all existing sample data, FDEM horizontal and vertical dipole conductivity, and GPR amplitudes—for scenario where (1) batch sampling is not an alternative, (2) ratio of future to present unit remediation cost is 2.0 351

Figure 5.4.103b: Portion of site soil volume not requiring treatment for contamination—remediation design in Figure 5.4.103a 352

Figure 5.4.104a: Optimized remediation design based on all existing sample data, FDEM horizontal and vertical dipole conductivity, and GPR amplitudes—for scenario where (1) $4/m³ batch sampling is an alternative, (2) ratio of future to present unit remediation cost is 2.0 352

Figure 5.4.104b: Optimized remediation design for same scenario as Figure 5.4.104a, except unit cost of batch sampling is $8/m³ 353

Figure 5.4.105: Optimized remediation design based on all existing sample data, FDEM horizontal and vertical dipole conductivity, and GPR amplitudes—for scenario where (1) batch sampling is not an alternative, (2) ratio of future to present unit remediation cost is 1.125 353

Figure 5.4.106a: Total cost of remediation for different batch sampling scenarios and across a range of future-to-present remediation cost ratios—based on all existing sample data, FDEM horizontal and vertical dipole conductivity, and GPR amplitudes 355

Figure 5.4.106b: Economic value (cost savings in remediation design) of FDEM and GPR geophysics surveys—for same scenario and data types as in Figure 5.4.106a 355

Figure 6.1.1: Schematic diagram of different approaches—statistical and physical model—for estimating a subsurface property of interest (hydrocarbon saturation) from indirect geophysics measurements (GPR) that account for uncertainty in the measurement and estimation 360

Figure 6.1.2a: Flow diagram of adapted M-B geostatistics approach—see caption next page 365

Figure 6.1.2b: Flow diagram of the M-B geostatistics approach adapted and modified for this work, illustrating the process for implementing the methodology—the different modules, required and optional steps, and their sequence. Continued from previous page 366

Figure 6.1.3: Flow diagram of decision-making framework developed for the case study, illustrating the process for implementing the different modules of the risk-based decision model. The linkages with the M-B uncertainty model and overall site characterization process are highlighted 372

Figure 6.1.4: Total optimized soil remediation design cost derived from decision model, for data scenario where all soil sample data—both direct and soft co-contaminant concentration measurements—are used in geostatistical estimation.
The results for three batch sampling scenarios are plotted: (blue) batch sampling is not an alternative, (magenta) batch sampling is an available alternative at a unit cost of $4 / cubic meter, and (cyan) batch sampling is an alternative at a unit cost of $8 / cubic meter. In addition, the total cost of remediation for the scenario when there is no uncertainty about soil contamination, or Expected Value of Perfect Information ("EVPI"), is plotted (red) 377

Figure A.1: Spatial averaging associated with a typical surface geophysical method 399

Figure A.2: Relationship between property distribution and measurement spaces for the forward and inverse problems (after McLaughlin and Townley [1996]) 403

Figure A.3: Relationship between spatial resolution and measurement spacing in the global inverse problems 407

ACKNOWLEDGEMENTS

I am very grateful to Dames and Moore for their approval and assistance in performing the case study; Klohn-Crippen Consultants, Ltd. for gratis use of their EM-31; and the UBC rock physics group for use of their GPR equipment. Special thanks to all colleagues, friends, and family who provided the extra motivation and encouragement to complete this long overdue thesis. In particular, I am very obliged to my advisor and current and previous manager for their incredible understanding and extra incentives to finish.

CHAPTER 1 INTRODUCTION

The primary goal of this overall study is to assess how well geophysics, used for site characterization, can be rigorously incorporated in the decision-making process for hydrogeological problems. Geophysical measurements, with their large volume of investigation (spatial coverage of measurements), high sampling density, fast acquisition time, and noninvasive nature, potentially provide significant value to hydrogeological problems, but they are inherently indirect or "soft" measurements of the subsurface properties of importance.
Thus, there is intrinsic uncertainty associated with the information they provide. This makes these measurements difficult to rigorously incorporate in a risk-based decision-making process and underlies the need for developing a methodology to effectively and efficiently do so. Indeed, this represents the impetus for this work—to develop a comprehensive, yet hopefully practical, decision-making framework that incorporates geophysics-type "soft" measurements into such a process, accounting for the uncertainty in the measurements, in addition to the natural spatial variability in subsurface property distributions. It was considered essential as part of this development to apply the framework to a comprehensive, non-ideal case study involving a real-world problem—where actual geophysics measurements are acquired, analyzed, and incorporated. Such a case study is considered to be, by far, the best way to assess and refine the integrity, applicability, and practicality of the approach. All too often, new hydrogeological methodologies are assessed only by applying the methodology to a synthetic problem. While such an exercise is useful for initial algorithmic verification, it omits the all-important step of ensuring the approach is applicable and practical in a real-world setting. The case study undertaken for this research is the assessment, acquisition, and incorporation of geophysical measurements into a risk-based decision-making framework for soil environmental remediation at a real estate development. This case study is the focus of the overall research study.

1.1 Motivation for research

Recently it has become increasingly important to formally account for uncertainty in hydrogeology-related problems in order to make more informed, robust decisions.
These problems are inherently plagued by a high degree of uncertainty due to their strong dependence on hydrogeological conditions and processes that vary significantly in space and time, and at many different scales. Uncertainty in existing conditions entails even greater uncertainty in future predictions of how the hydrogeological system will behave and, thus, predictions of how proposed engineering systems influenced by hydrogeology will perform. Performance uncertainty translates into the risk of unintended outcomes. Despite these complexities, there is an increasing need to efficiently allocate limited resources due to the often non-revenue-generating nature of engineering projects dealing with hydrogeology and, in the environmental remediation arena, the large number of contaminated sites. Efficient allocation of resources requires a cost-effective engineering design process, which can only be accomplished if the many sources and forms of uncertainty are characterized and explicitly accounted for in the analysis. In addition, decision-makers are increasingly being held accountable for the decisions they make in the often adversarial atmosphere surrounding hydrogeology-related problems. This adversarial environment exists because of the regulatory and social context of these problems and the multiple, differing stakeholders involved, as well as the disillusionment of the public with environmental restoration/protection projects. Suspicion accompanying these projects stems from their recent history of large expenditures and unrealized goals. This accountability necessitates documentation of the reasoning behind decisions made in the face of so much uncertainty. In this regard a careful elucidation of uncertainty and risk provides a very useful communication tool for rationalizing decisions. Finally, progressively more environmental regulations are using health risk to receptors (humans or other forms of life) as an indicator for environmental compliance.
In order to reliably assess health risk it is necessary to characterize and propagate uncertainty in the linked chain of processes along the path—from contaminant release into the environment to contact with potential receptors—that ultimately or potentially influences health risk; this includes hydrogeological processes. While characterizing uncertainty in hydrogeological engineering problems, and including it in a systematic risk-based decision-making process, helps to increase the reliability of decisions, the only way to make more cost-effective, but still robust, engineering decisions is to reduce the uncertainty in conditions that ultimately influence the decisions and their outcomes. In this respect, the site investigation process plays a very important role by characterizing critical site conditions and, thus, reducing uncertainty in the system. The ultimate goal of site investigations in a decision-making context is to reduce the risk, and associated costs, of unintended outcomes—engineering failure resulting from not accounting for all possibilities, or an overly conservative and, thus, unnecessarily costly decision. Recently there has been major interest and effort in using geophysics for site characterization in hydrogeology-related problems. Geophysics provides the advantages of high-density, non-invasive, and quick measurements covering large areas, compared to invasive methods such as drilling and direct sampling of subsurface material. The downside of geophysics is that the measurement can have a significant amount of uncertainty associated with it. There has been little effort, however, to assess the overall economic value of geophysics to an engineering project from the perspective of the decision-maker. Does geophysics provide enough new information to a problem to be cost-effective?
In other words, does the risk reduction (due to the reduction in uncertainty) resulting from the new information outweigh the cost of performing geophysics? Is it best (most cost-effective) to collect geophysics, take borehole samples, or neither? These are the types of questions a decision-maker must confront. To date these issues and ways to answer them have not been addressed in a rigorous risk-based decision-making context.

1.2 Research goal

The overall objective of this research is to investigate the application of geophysics to hydrogeology-related problems from the perspective of the decision-maker. To accomplish this goal a general framework for incorporating geophysics in the decision-making process has to be developed, so that questions like those posed above can be properly answered. Of utmost importance is to apply the overall process to a real-world case study where a specific hydrogeological problem is addressed and actual geophysical field measurements are collected and incorporated in the decision process. The applicability of the method cannot be effectively demonstrated without carrying out a real-world example due to the unpredictable reliability, site-specific nature, and inherent uncertainties associated with geophysics and other aspects of hydrogeology-related problems. Also, the particular approach is largely untested and, therefore, by taking a deductive approach, the case study can be used as a learning tool to identify problems with the framework and make inferences about what aspects can be applied to hydrogeology-related problems in general. It should be emphasized that the purpose of the case study is to illustrate the methodology; the actual success of geophysics in providing value to the problem is not necessary for meeting this objective.

1.3 Research contributions

The contributions of this research can be grouped into two broad categories—quantitative
The quantitative contribution is the development of a rigorous, robust, and generalized methodology for incorporating geophysical measurement data, along with other types of precise and imprecise information, into a risk-based decision-making framework. This methodology accounts for, quantifies, and propagates uncertainty, including measurement uncertainty. Within this framework, a decision-maker can estimate the required accuracy level for geophysics to be worthwhile, compare the economic worth of different  Chapter 1. INTRODUCTION  5  measurement types and strategies, and determine the most cost-effective engineering design based on all existing information. The qualitative contribution is a recommended approach for applying geophysics to hydrogeology-related problems in a decision-making context—a sequential, iterative, and systematic procedure for determining if, when, why, and how geophysics should be applied for site characterization. Largely inspired and refined as a result of lessons learned from the case study, these general guidelines address the realistic limitations of geophysics. They focus on determining if geophysics has the possibility of providing useful information for a particular problem. If so, which geophysical technique(s) should be used? At what stage of the project should it be performed? How should it be run? (e.g., What type and amount of ground truthing is required?; Should complementary methods be run?; How should the results be interpreted?) Will geophysics be economically viable and worthwhile? How should the results be incorporated in the decision-making process? The recommended approach does not necessarily provide specific answers to these questions for a particular problem, but provides a general, systematic, and realistic way to approach the use of geophysics in hydrogeology-related problems from a riskbased decision-making perspective.  Chapter I. 
1.4 Thesis overview

The guiding principle of this research was to develop and test a practical and usable site characterization and decision-making approach—applicable to real-world hydrogeology-related problems. As a result, the breadth of the research, and the attendant real-world case study, is quite comprehensive, covering a broad spectrum of disciplines and topics—from geophysics to geostatistics and decision analysis. Consequently, in addition to recording all the results, this thesis attempts to address and explain all the major areas that have important bearing on the work; this has resulted in a very long write-up. The purpose of this section is to provide a "road map" to the thesis, so that readers of different backgrounds and interests can read what they want.

CHAPTER 1: INTRODUCTION (this chapter) summarizes what the thesis is about, what motivated the research, and what the goals are. This chapter should be read by all.

CHAPTER 2: BACKGROUND provides an introduction to and explanation of the important underlying concepts of this work:

1. Uncertainty and how it affects hydrogeology-related problems (section 2.2). This section contains a fairly lengthy and in-depth discussion of uncertainty and its relationship to scale that is not directly applicable to the case study and the results of this research (section 2.2.1). However, the introduction of this section and the discussion on measurement uncertainty (section 2.2.2) should be read.

2. Decision making, how it is impacted by risk, and how it is accomplished in a formal framework—all in the context of hydrogeology-related problems (section 2.3). This entire section is important to read if the reader is unfamiliar with the risk-based decision making process and the concepts behind it.
Section 2.3.3, a discussion comparing economic risk and health risk, is not needed to understand the results of this work, but is applicable to the philosophical foundation of the case study problem.

3. The site characterization process, how it can be formally organized, and how it relates to decision making (section 2.4). This section delves quite heavily into site characterization theory, basically summarizing, and putting in the context of this research, the work of Baecher [1972]. The introduction to this section, subsection 2.4.1 outlining the objectives of site characterization, and the introduction and summary to section 2.4.2, discussing the organization of site characterization, should probably be read.

In addition, the hypothetical example hydrogeology problem, which is referred to throughout the thesis to explain different concepts, is introduced (section 2.1), and a review of previous research that is closely related to the subject matter is provided—organized by topic (section 2.5).

CHAPTER 3: GEOPHYSICS IN HYDROGEOLOGY PROBLEMS provides an introduction to geophysics and its application to site characterization in hydrogeology-related problems. The chapter is organized to start simple and general, then to become progressively more in-depth and specific to particular aspects related to this work. Section 3.1 is a short overview of geophysics for readers unfamiliar with this type of measurement. Section 3.2 further discusses applications of geophysics in hydrogeology-related problems, and section 3.3 outlines the advantages that geophysics provides such problems, using hypothetical examples for illustration. Section 3.4 provides a long discussion on the difficulties associated with applying geophysics to the types of problems of interest and how these can be addressed.
It is recommended to read at least the introduction to this section, the introductions to subsections 3.4.1 and 3.4.2, and the summary (subsection 3.4.4); they describe the need and impetus for the uncertainty framework used in this work. The internal subsections of 3.4.1 and 3.4.2 discuss in considerable detail the many different techniques used for dealing with the indirect, non-unique nature of geophysics measurements, including the geostatistical technique used in this work; these are not necessary for understanding the results of this work, since the specific technique used is discussed in detail in CHAPTER 4.

CHAPTER 4: ACCOUNTING FOR AND MODELING UNCERTAINTY provides a comprehensive summary of the different methods of accounting for uncertainty in spatial variables, as well as a detailed discussion of the geostatistics methodology applied in this work—from a practitioner's perspective. Section 4.1 introduces the concept of uncertainty and how it can be represented and dealt with—recommended for readers unfamiliar with probability theory concepts. Section 4.2 goes into more detail about the different methods for forming and updating spatial random variables to represent empirical quantity uncertainty. While this section is not necessary for understanding the results of this work, it does conceptually discuss the general geostatistics approaches used—geostatistics is introduced in section 4.2.3, including the concept behind, and advantages of, the kriging and simulation methods used. Section 4.3 details the workings of the specific geostatistics approach used in this work—the Markov-Bayes indicator random variable approach—and should be read to understand the results.
CHAPTER 5: CASE STUDY describes the soil contamination remediation case study performed in this work—the background of the case study problem, the different tasks undertaken as part of it, and representative results of all the key analyses performed. It is the core of this thesis. While it contains a large amount of information, due to the broad scope of the case study, it should be read in its entirety to understand the full scope of the work accomplished; but, if only certain aspects of the work are of interest, the background and the section(s) covering those aspects can be read.

CHAPTER 6: CONCLUSIONS AND RECOMMENDATIONS provides a comprehensive summary of (1) the key findings and contributions of this work (sections 6.1.1 and 6.1.2), including flow diagrams of the uncertainty and decision models that were developed, and (2) the results of the case study (section 6.1.3). In addition, recommendations for further research that builds on this research are presented (section 6.2).

APPENDICES A and B provide detailed, mathematical discussions on the geophysical inverse problem and the Markov-Bayes indicator geostatistics estimation methodology, respectively.

CHAPTER 2
BACKGROUND

This chapter provides a background to the important concepts underlying this research. These concepts include:
•  the nature and types of uncertainty inherent to hydrogeology problems,
•  the hydrogeological decision-making process and the concept of risk, and
•  the site characterization process.

In addition, a comprehensive review of important, closely related previous work is presented at the end of the chapter. To help clarify the explanation of these concepts, and others throughout the thesis, an example hydrogeology problem—akin to the case study problem, but more general—is introduced and referred to repeatedly in this chapter, and throughout the document.
2.1  Example hydrogeology problem

In order to illustrate and provide real-world applicability to the concepts presented throughout this work, a generic, but realistic, example hydrogeology problem is presented. The hypothetical example is made to closely resemble the general components of the case study problem, thus providing a smooth transition into the case study section of the thesis.

The example problem is a waterfront real estate development on a site which has a long history of industrial activity, including operating as a major railyard, fuel storage depot, and commercial ferry terminal. A schematic map view of the site is shown in Figure 2.1. The site contained a nest of underground and above-ground fuel storage tanks of different sizes connected by a network of pipelines leading to pump houses and dispensing stations. All of the fuel tanks and most of the pipelines have since been removed. The fuel depot serviced a large railyard with multiple tracks built on cinder and ballast roadbed, a trailer truck yard, and a commercial ferry terminal—all of which are being decommissioned. The site borders a tidal marine inlet. One of the main concerns facing the developer is the potential for subsurface chemical contamination resulting from the extensive industrial activity.

Figure 2.1: Schematic map view of example problem site, showing the wharf, fuel pipelines, tank car loaders, truck loading rack, and the areas where contamination was measured (scale bar: 0 to 50 meters).

2.1.1  Geology of site

The hypothetical site is largely built on manmade fill (up to ten meters thick) overlying natural sediments, and has a generally flat surface topography just above mean sea level (MSL). A schematic three-dimensional view of the site is shown in Figure 2.2.
The stratigraphy consists of:
•  one to two meters of heterogeneous, mostly gravel-sized surface fill, underlain by
•  four to eight meters of mostly sandy intermediate fill, interspersed with areas of cobbles, concrete refuse, and wood fragments, which, in turn, is underlain by
•  native clay-rich till with interbedded marine sand lenses.

The thickness of the fill layers generally decreases inland from the shoreline, and the layers eventually pinch out at the base of a small five-meter-high bluff and manmade retaining wall.

Figure 2.2: Schematic 3-dimensional view of example problem site.

2.1.2  Hydrogeology of site

The hydrogeology of the hypothetical site is characterized by an unconfined aquifer in the high-permeability fill material, underlain by a leaky aquitard associated with the till sediments, which is interspersed with higher-permeability zones associated with the sand lenses. The size and interconnectedness of the higher-permeability zones in the till is highly variable. The water table averages three meters below the surface. The hydrogeology of the site is greatly complicated by a strong tidal influence, which causes large fluctuations in the water table and in groundwater flow, in both the magnitude and direction of flow. The area receives a large amount of rainfall, especially in the winter, which acts as a large source of recharge.

2.1.3  Contamination of site and the resulting implications

As mentioned earlier, the hypothetical site has a long history of industrial activity which has led to subsurface contamination. As a result of leaking fuel tanks, spills, and the release of used mechanical fluid and burned coal (or cinder), there are many scattered ancestral source zones of hydrocarbon and metal contaminants, both above and below the water table. The contaminants in these source zones can migrate in their initial form and/or migrate with groundwater in other forms.
The different types of contamination are schematically depicted in Figure 2.2, where the black plume represents free-phase contaminant and the hatched gray plume represents aqueous-phase contaminant derived from the free phase. In order to build on the site, the developer must meet regulatory requirements for the intended land/water use to ensure that the health of future inhabitants is not jeopardized. Thus, the hydrogeology engineering problem at this hypothetical site is to determine:
1.  if and where contamination occurs,
2.  the future fate and detrimental effects of any contamination, and
3.  if, when, and how the contamination should be remediated or contained, if it indeed exists.

2.2  The nature of uncertainty in hydrogeology problems

Due to the complexity of the hydrogeological environment, uncertainty is pervasive and multi-faceted in problems which are influenced by this environment. It is pervasive in that uncertainty exists in one form or another in every real-world hydrogeology problem, and it is multi-faceted in that it comes in many different, often overlapping, forms. There are numerous ways to characterize and classify uncertainty. From a practitioner's standpoint, the semantics of how uncertainty is classified is not particularly important, as long as it is recognized. What is important is to identify and understand:
•  the different manifestations of uncertainty plaguing a particular problem,
•  how these manifestations affect the practitioner's comprehension of the problem, and
•  which types of uncertainty can potentially be reduced and which are systemic to the problem.

Once these aspects of the uncertainty are recognized and understood, a robust approach for coping with it in the engineering problem can be developed. The types and characteristics of uncertainty in hydrogeology problems are discussed in this section, while approaches for dealing with uncertainty are discussed in the following sections.
2.2.1  The relationship between uncertainty and scale

Uncertainty and scale are inextricably linked. Consequently, to properly understand uncertainty, it is crucial to analyze it from the perspective of scale. In an ideal world, reality would be perfectly known at all scales and, thus, uncertainty would not exist. For the hypothetical problem described earlier, this scenario would entail a perfect understanding, down to the smallest scale (i.e., the molecular level), of the current partitioning and spatial distribution of contaminants, as well as the spatial distribution of factors controlling groundwater transport. In addition, it would entail a perfect understanding of all the processes controlling mass transport in the subsurface (e.g., multiphase flow, tidal forcing of groundwater, contaminant mass partitioning) at all scales, so that the future distribution of contaminant mass across the site could be perfectly predicted. In fact, in such a scenario, the common measurable parameters used to model such processes (e.g., hydraulic conductivity, porosity, dispersivity) would no longer be applicable, since they inherently represent averaging across smaller scales of variability. Of course, this extent of knowledge is never achievable. There is a technical and practical limit to the level of detail that can be accounted for in the real world.

2.2.1.1  Measurement scale

The lower limit of detail (scale of variability) that can be resolved in the real world is inherently controlled by the measurement process. All measurements inevitably act as filters, where the measured parameter represents an average of some variable(s) across the scale of the measurement [Beckie, 1996]. For example, the hydraulic conductivity (K) measured from a well core sample using a permeameter test represents an equivalent parameter which accounts for the drag caused by the substrate pore structure on fluid flow, averaged across the volume of the core.
This argument applies to measurements of processes which vary in space, as well as in time. As a result, the measured parameter will not vary significantly at scales less than the measurement scale. This implies that the measurement process smoothes out small-scale variability. Thus, there is an intrinsic lower limit to the scale of variability that can be resolved, and this limit is determined by the smallest measurement scale.

It is important to realize the difference between measurement scale and measurement volume. The former is defined by the characteristic measurement filter width, itself related to the spatial frequency spectrum for the measurement (the spectral description of scale is discussed in the forthcoming Section 2.2.1.5). The measurement volume is the total volume of material sampled by the measurement. For example, consider a groundwater tracer test performed by injecting a chemical tracer in one well and measuring the concentration of the tracer chemical versus time in another well 30 meters away—"downstream" along the mean flow path. The measurement volume is something of an ellipsoid, with the long axis being the 30 meters along the flow path between the wells and the shorter axes perpendicular to the flow path in each direction, with lengths less than 30 meters. The measurement scale, however, could be considerably smaller if the information from the concentration-versus-time record is used to delineate heterogeneity in the hydraulic conductivity field between the wells. As illustrated by this example, the measurement volume and scale are often difficult to quantify. Strictly, there is no such thing as a point measurement since, in reality, there is an infinite range of scales of variability, and any measurement inevitably averages out the smaller scales.
The implication of this fact for uncertainty is that there is always some degree of intrinsic uncertainty—which cannot be reduced—associated with the scales of variability in a variable that cannot be resolved by measurements. These unresolved scales of variability and their associated intrinsic uncertainty will be referred to as natural variability and uncertainty due to natural variability, respectively.

2.2.1.2  Model scale

The model used to represent a system also has an inherent limit on the minimum scale of variability that can be represented. Ideally, in order to most accurately model the system, the smallest resolved model scale (e.g., model grid cell size) would be set to the smallest measurement scale, since this represents the greatest level of detail about a parameter that can be ascertained through measurements. However, to do so is often technically infeasible or impractical due to our inability to accurately model the small-scale physics and/or due to computational limitations (i.e., excessive computer effort). For example, to numerically model groundwater flow across a several-acre site at the scale of well cores would require millions (if not billions) of grid discretizations. In addition, to accurately characterize the site, the required number of cores would be enormous. Inevitably, simplifications must be made, such as increasing the model scale (e.g., grid cell size), in order to make the problem tractable. Unfortunately, increasing the model scale causes an equivalent increase in the lowest possible scale that can be resolved by the model (the actual resolved scale is dependent on measurement spacing, as discussed in forthcoming Section 2.2.1.4) and, thus, an increase in the level of uncertainty that cannot be reduced (uncertainty due to natural variability). Conversely, another model scale constraint could be that the model has to accurately represent the engineering design required—setting a minimum level of discretization.
For example, if a one-meter-thick groundwater flow barrier wall is being considered for containing a plume of contaminated groundwater, then to accurately model the wall the flow model scale would probably have to be on the order of a meter in and around the modeled wall. If the smallest measurement scale is larger than this, then, even at measurement locations, there would be uncertainty associated with spatial averaging at a scale larger than the model scale.

The larger the model scale, the smaller the amount of system behavior explicitly described and resolved by the model, and the greater the proportion of true behavior which has to be accommodated by the model parameters through some sort of averaging [Beckie, 1996]. For example, as the size of the grid blocks in a groundwater flow model is increased, the scale of the parameter defining hydraulic conductivity (K) must also be equally increased, requiring that the K parameter account for more of the physics controlling flow, across a greater volume of, possibly heterogeneous, subsurface material. This increased burden and amount of spatial averaging results in the inability to resolve small-scale features, such as high-K channels, which, in turn, means the true physics of flow through these features can no longer be explicitly modeled. This loss of resolution could mean that the rate at which contaminants in groundwater travel through the substrate is underestimated by the model, due to not accounting for the high-K channel. However, for some problems small-scale groundwater flow behavior may be irrelevant, such as regional groundwater studies where only large-scale groundwater flow is of concern; the process model scale can be on the order of kilometers and still provide the required information.
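The loss of the high-K channel under averaging can be made concrete with the classical bounds on upscaled K for a perfectly layered medium: flow parallel to the layering is governed by the arithmetic mean of the cell values, and flow perpendicular to it by the harmonic mean. The short Python sketch below uses hypothetical K values; it shows that the "right" averaged value spans nearly three orders of magnitude depending on the flow geometry, which is exactly the process dependence of upscaling discussed here.

```python
# Hypothetical 10-cell block with one high-K channel cell (units: m/s)
K = [1e-6] * 9 + [1e-3]

# Equivalent K if flow runs parallel to the layering (channel stays active)
arith = sum(K) / len(K)

# Equivalent K if flow must cross every layer in series (channel is choked off)
harm = len(K) / sum(1.0 / k for k in K)

# The two legitimate upscaled values differ by roughly a factor of 100,
# and neither preserves the channel velocity itself—illustrating why a
# single averaged grid block cannot reproduce flow through the channel.
```

The arithmetic and harmonic means are only the end-member bounds; for an arbitrary heterogeneous block, the true equivalent K lies somewhere between them and depends on the boundary conditions, which is why upscaling is context dependent.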
Model parameters such as K are called phenomenological parameters and are used in conjunction with closure models in an attempt to represent the effect of unresolved physics on resolved physics in the system model [Beckie et al., 1994]. For example, in fluid transport problems, the effect of unresolved fluid motions upon contaminant movement is often represented using dispersion parameters in a Fickian closure model, which models the unresolved behavior as random motion. The dispersion parameters essentially represent the parameters of a simple probabilistic model describing the random small-scale fluid motion.

2.2.1.3  The relationship between model and measurement scales

If the model scale is larger than the measurement scale, then an additional averaging (or filtering) process has to be performed to upscale measurements to the scale of the model. If the parameter being upscaled is process dependent, then the upscaling operation will be context dependent, possibly making the operation complex and imprecise, even with full measurement coverage. Therefore, upscaling represents another potential source of uncertainty that cannot be reduced, even at the model scale.

Figure 2.3: Filtering of hypothetical porosity depth section due to the measurement and upscaling processes. A geometric mean boxcar filter is used for measurement filtering—labeled "# m res." for # meters resolution, corresponding to the filter width. Upscaling to a 5 meter cell dimension grid is performed on the 1 meter resolution measurement.

Figure 2.3 illustrates a hypothetical example of the smoothing effects of the measuring process at different measurement scales, and of the upscaling process from the measurement scale to a larger model scale, for a 1-d porosity depth section.
For this example, the spatial filter associated with these processes is simply an equally-weighted geometric average (a "boxcar" filter), applied at every 0.1 meter increment, and the parameter being measured or modeled is assumed to be process independent (e.g., porosity). The true variable distribution is highly discontinuous, analogous to a layered stratigraphy where there are large contrasts between the layers and the layer thickness varies significantly. Although small-scale features are evident in the filtered responses to the heterogeneous variable in Figure 2.3, the primary variability is at scales equal to or larger than, approximately, the scale of averaging. The small-scale features and the sharpness of the changes in variability of the averaged parameters are largely an artifact of the sharp boundaries of the filters. The example five meter model scale illustrated in the figure is generated by averaging the measurement values within each five meter grid cell and assigning that constant value to the cell, assuming the measurement scale is 1 meter and the measurements are taken every 0.1 meter. The grid upscaling is seen to average the porosity measurements, and the structured discretization masks the true location of bed boundaries.

Usually, the fact that variability in parameters, and the associated uncertainty, at scales below the model scale can never be resolved by the model is considered a basic assumption of the model. In other words, it should be understood that a model only models processes at the model scale or larger. The use of phenomenological parameters and closure models is only an attempt to account for the effect of unresolved behavior on the modeled resolved behavior.
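The two filtering operations behind Figure 2.3 can be sketched in a few lines of Python. This is an illustrative sketch only: the synthetic layered porosity profile, filter widths, and grid size are hypothetical choices meant to mirror the figure (a geometric-mean boxcar stands in for the measurement process, and block averaging of the 1 m "measurements" onto a 5 m grid stands in for the upscaling).

```python
import math
import random

def boxcar_geomean(values, dx, width):
    """Geometric-mean boxcar filter: each output point is the geometric
    average of the input over a window of the given width (the measurement
    scale), mimicking the smoothing inherent in the measurement process."""
    half = int(round(width / dx / 2))
    logs = [math.log(v) for v in values]
    out = []
    for i in range(len(values)):
        lo, hi = max(0, i - half), min(len(values), i + half + 1)
        out.append(math.exp(sum(logs[lo:hi]) / (hi - lo)))
    return out

# Hypothetical "true" blocky porosity profile, sampled every 0.1 m over 50 m
random.seed(0)
dx = 0.1
n = 500
layer_tops = sorted(random.uniform(0.0, 50.0) for _ in range(12))

def layer_index(z):
    return sum(1 for t in layer_tops if t <= z)

porosity = [0.25 + 0.1 * math.sin(layer_index(i * dx)) for i in range(n)]

# "Measurement" at roughly 1 m resolution smooths out sub-meter variability...
measured = boxcar_geomean(porosity, dx, width=1.0)

# ...and upscaling assigns one geometric mean to each 5 m model grid cell
cell = int(5.0 / dx)  # 50 samples per cell
upscaled = []
for start in range(0, n, cell):
    block = measured[start:start + cell]
    gm = math.exp(sum(math.log(v) for v in block) / len(block))
    upscaled.extend([gm] * len(block))
```

Plotting `porosity`, `measured`, and `upscaled` against depth reproduces the qualitative behavior of Figure 2.3: each successive averaging stage preserves the bulk trend but masks the true bed boundaries.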
In this sense, the variability at scales below the model scale (which can be called natural variability) and the associated uncertainty can be ignored, except in the way they influence resolved scales, since it is understood the model does not represent those scales of the system. For example, a groundwater flow model with grid blocks on the scale of tens of meters cannot model small-scale flow at the size of a well core. If the details of flow at the scale of cores are considered crucial to a particular problem, then the only recourse is to reduce the model scale.

2.2.1.4  Network scale

Unfortunately, just because a model has the ability to resolve variability in the behavior of a system at scales equal to or greater than the model scale does not ensure that it will accurately do so. How well the model represents these scales of variability in system behavior depends on how well the variability—in the model parameters corresponding to the same scales—is characterized through measurements. The only way to fully characterize the parameter variability down to the scale of the model is to have full measurement coverage at the spacing of the measurement scale (assuming there is no uncertainty in the measurements and, if upscaling is required, the upscaling process is perfect) [Beckie, 1996]. Beckie [1996] refers to the characteristic spacing between measurements as the network scale, and to the inability of the model to accurately resolve scales between the network and model scales as a model closure problem. Therefore, in addition to the uncertainty due to natural variability, there is also uncertainty about the variability of parameters at scales between the network and model scales, which propagates to uncertainty in the modeled system behavior at those scales. However, unlike the uncertainty due to natural variability, this type of uncertainty can be reduced and, thus, can be referred to as uncertainty due to ignorance.
2.2.1.5  Spectral analysis of scale and uncertainty

Beckie [1996] uses a spectral approach to clearly show the separation between unresolved scales of variability (where there is uncertainty due to natural variability), subgrid scales of variability (where there is uncertainty due to ignorance), and resolved scales (where there is no uncertainty). Spectral techniques transform functions from the space or time domain (e.g., the spatial distribution of K) to the spatial or temporal frequency domain, where the functions plot as an energy spectrum (spectral energy versus frequency of variation). These techniques are valuable for analyzing scale issues because the energy (which is related to the amount of variability) associated with different frequencies or ranges of frequencies (scales) can be directly identified. Nyquist sampling theorems prove that the Nyquist frequency, f_N = 1/(2Δ_N), is the highest frequency of variability that can be fully described by samples taken at the Nyquist sampling interval (Δ_N). Thus, in the frequency domain, the boundaries between the different scales can be conveniently defined by the appropriate Nyquist frequencies:
•  for the boundary between unresolved and subgrid scales — f = 1/(2Δ_M), where Δ_M is the model scale (e.g., grid block size) or the smallest measurement scale (e.g., the radius of influence of a slug test), whichever is larger.
•  for the boundary between subgrid and resolved scales — f = 1/(2Δ_N), where Δ_N is the network scale (the characteristic spacing between the center points of measurements).

These definitions imply that the smallest scale of variability that can possibly be resolved by a model is Δ_M and the smallest scale which is actually resolved is Δ_N.

Beckie [1996] illustrates how the size of the subgrid scale variability section of the frequency spectrum (which represents unresolved variability) can be reduced by taking measurements at a denser spacing across the problem domain, which reduces the size of the
network scale. In essence, this process corresponds to reducing uncertainty due to ignorance. To completely eliminate the closure problem, measurements have to be taken at a spacing equal to the Nyquist interval that corresponds to the boundary between unresolved and subgrid scales—the larger of the model scale or the smallest measurement scale. Taking measurements at closer intervals than this would provide no new information to the model (assuming the measurements are error-free and at a scale equal to or larger than the model scale, i.e., no upscaling is required).

Beckie [1996] also shows that the same reduction of the subgrid scale variability component of the spectrum can be accomplished by increasing the measurement scale (i.e., using a measurement device that measures more of the subsurface), while concurrently increasing the model scale to match the measurement scale. However, it is important to realize that increasing the measurement/model scale will not increase the resolution of the model or reduce the overall uncertainty, since it only moves the spectral boundary distinguishing subgrid and unresolved scales, both of which are unresolved and plagued by uncertainty. It will reduce the model closure problem, though, resulting in a more accurate model representation, albeit at a larger scale.

2.2.1.6  Summary

Uncertainty is best understood and characterized from the perspective of scale. Three important scales, common to most problems, are the measurement, model, and network scales. Measurement scale corresponds to the smallest scale at which a measuring device measures—also commonly referred to as the measurement resolution. Model scale is the smallest scale of variability in a system that is represented by the model. Network scale characterizes the spacing of measurements across the problem domain.
The relationship between these scales determines the true resolving power of a model, the potential resolving power of a model, and the scales of natural variability in a system that can never be resolved by a model. Spectral theory shows that the smallest scale which can be resolved by a model is the characteristic network scale, and the best resolution a model can ever achieve, when the problem domain is fully characterized, is the measurement or model scale, whichever is larger. The implications of these resolution limits for model uncertainty are:
•  uncertainty due to natural variability is intrinsic in a model and cannot be reduced, and
•  uncertainty associated with scales of variability between the model/measurement scale and the network scale can be reduced by taking additional measurements.

The second type of uncertainty can be considered uncertainty due to ignorance, since it results from not having enough information—information that is attainable. There are other types of uncertainty that can afflict system models, including uncertainty resulting from inaccuracies in the model representation of physical processes and uncertainty due to measurement errors. The latter type will be discussed in the following section. A final point is that the uncertainty issues covered in this section apply to the modeling of all types of systems, including economic systems, where model resolution, for example, could correspond to resolving the temporal variability in the unit cost of pumping a well.

2.2.2  Uncertainty associated with the measurement process

The measurement process involves performing a field test (or performing a lab test on a field sample) in order to observe a model parameter, the objective being to reduce uncertainty about the nature of the true model parameter distribution across the site.
As shown in the previous section, every measurement inherently acts as a filter, averaging across a certain measurement scale and, thus, filtering out smaller-scale variability. Consequently, every measurement has a certain degree of uncertainty (due to natural variability) associated with it as a result of this lower limit on resolution.

In addition, some measurements do not directly observe model parameters, but instead observe state variables which must be converted to model parameters through some sort of measurement model, leading to the potential for more uncertainty. In particular, this is true when measuring process-dependent phenomenological parameters, such as K. For example, the core permeameter test for measuring K involves the observation of the state variables hydraulic head and specific flux, from which the model parameter K must be inverted by applying Darcy's equation to a one-dimensional, steady-state flow measurement model. The measurement of K in the same material from which the core was extracted, using a slug test, could yield different results solely based on the fact that the slug test model assumes radial instead of linear flow (if K in that material is dependent on the flow context, i.e., K is anisotropic). In addition, the results could differ due to the different measurement scales sampled by the two techniques. Measurements such as these represent indirect measurements, since they do not directly measure the parameter of interest (the model parameter), but instead measure instrument-dependent state variables, which must somehow be related to the model parameter. Indirect measurements can contain multiple tiers of indirectness, and complex relationships between the instrument state variables and the model parameter.
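The permeameter inversion just described amounts to a one-line application of Darcy's law. The sketch below uses hypothetical core dimensions and discharge (none are taken from the case study); the point is simply that K is never observed directly, but is inverted from observed state variables through a measurement model:

```python
import math

def permeameter_K(Q, length, area, head_drop):
    """Invert hydraulic conductivity from a constant-head permeameter test
    using Darcy's law for one-dimensional, steady-state flow:
        Q = K * A * (dh / L)   =>   K = Q * L / (A * dh)
    Units: Q in m^3/s; length L and head drop dh in m; area A in m^2;
    the returned K is in m/s."""
    return Q * length / (area * head_drop)

# Hypothetical core test: 0.1 m long core, 8 cm diameter, 0.5 m head drop,
# with a measured steady discharge of 1e-6 m^3/s
area = math.pi * 0.04 ** 2
K = permeameter_K(Q=1e-6, length=0.1, area=area, head_drop=0.5)
```

Every assumption built into `permeameter_K` (linear flow, steady state, homogeneity over the core) is part of the measurement model, so a violation of any of them propagates directly into the inverted K value.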
This can especially be true for geophysics, where not only do the geophysical parameters need to be inverted from the measured instrument-dependent state variable(s) based on a measurement model, but the model parameter of interest must then be somehow related to the geophysical parameters. The use of direct current (DC) vertical electrical surveys (VES) to estimate subsurface saturated porosity will be used as an example to better illustrate this complicated linkage. The DC VES technique measures the electrical resistivity (ρ) of the subsurface by sending a DC electrical current through a region of the subsurface between two electrodes and measuring the resulting electrical potential (voltage) in the subsurface between two other electrodes. A so-called apparent electrical resistivity (ρa) can be calculated directly from the measured voltage and electrode spacing. Each ρa represents some sort of average resistivity across some volume of the subsurface. The electrode spacing can be varied to obtain different depths of investigation. The geophysical information usually desired from a VES is the geoelectrical structure—the spatial distribution of ρ in the subsurface. Calculation of the geoelectrical structure beneath the survey profile requires simultaneous inversion of the entire set of ρa measurements for different electrode spacings using a specified measurement model. The most common measurement models used for electrical profiling are one-dimensional, varying only in the depth direction (i.e. vertically-layered stratigraphy). Even using this simple model, the inversion of VES measurements is non-unique, meaning that many different geoelectrical section solutions are possible from the same data set. The solution that is selected depends on the inversion approach and user judgment.
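The chain from measured state variables to the parameter of interest can be sketched for one common case. The apparent-resistivity step below uses the standard geometric factor for a Wenner electrode array, and the porosity step assumes Archie's law, a commonly used petrophysical relation not specified in the text; the Archie coefficients and pore-water resistivity are hypothetical values, and in practice they are rarely well known:

```python
import math

def wenner_apparent_resistivity(a, dV, I):
    """Apparent resistivity rho_a [ohm-m] for a Wenner array with electrode
    spacing a [m], measured voltage dV [V], and injected current I [A];
    the geometric factor for this array is 2*pi*a."""
    return 2.0 * math.pi * a * dV / I

def archie_saturated_porosity(rho, rho_w, a_coef=1.0, m=2.0):
    """Invert porosity phi from a bulk resistivity rho [ohm-m] assuming
    Archie's law for a saturated, clay-free medium:
    rho = a_coef * rho_w * phi**(-m). The coefficients a_coef and m and
    the pore-water resistivity rho_w are assumed values here; their poor
    characterization is one source of the non-uniqueness discussed in
    the text."""
    return (a_coef * rho_w / rho) ** (1.0 / m)

# Hypothetical reading at a = 10 m spacing: 0.05 V at 0.5 A.
rho_a = wenner_apparent_resistivity(a=10.0, dV=0.05, I=0.5)
print(round(rho_a, 2))                                             # 6.28
# Hypothetical inverted layer resistivity of 100 ohm-m, 20 ohm-m water:
print(round(archie_saturated_porosity(rho=100.0, rho_w=20.0), 3))  # 0.447
```

Note that the inversion from the full set of ρa readings to a layered geoelectrical section, the non-unique step emphasized above, sits between these two functions and is not shown.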
Thus, the calculated geoelectrical section has a significant degree of uncertainty associated with it as a result of the inversion process; this uncertainty can be further exacerbated if there are errors in the ρa measurements and/or the true geoelectrical section varies significantly from the one-dimensional assumption. At this stage, the geoelectrical section still has to be converted to a saturated porosity section. This process requires determining the relationship between ρ and porosity, while accounting for the fact that many properties other than porosity affect resistivity (e.g. clay content, pore water resistivity variations). Due to the dependency of ρ on many factors which are usually not well characterized (or not characterized at all), the relationship between porosity and ρ is non-unique, resulting in additional uncertainty in the final porosity estimates. Thus, in the process of calculating porosity from VES measurements, uncertainty has entered into the estimate through:
1. errors in the VES measurements,
2. the scale and assumptions made in the VES measurement model,
3. the non-unique inversion of VES measurements to obtain a geoelectrical section, and
4. the non-unique relationship between ρ and porosity.

As illustrated by the previous geophysics example, many types of error can plague measurements, leading to compounded uncertainty in the estimate of model parameters. The form that these errors take can be classified as either coherent/unbiased or incoherent/biased. Coherency in errors, also called imprecision and non-repeatability, is characterized by variability in the results of measurements taken at the same location (data scatter). The mean of these results eventually converges on the true value after a large number of measurements are taken. This type of data scatter can be considered random error. Incoherence in errors, also called inaccuracy, is
characterized by data scatter as well, but the mean of the data converges on a value which is not the true value. Coherent measurement errors are much easier to cope with than incoherent measurement errors because the associated uncertainty in the model parameter can be accounted for using probabilistic methods. This cannot be accomplished with incoherent errors unless the bias (displacement) of the mean from the true value is somehow known. Hence the importance of distinguishing between these two forms of errors. Incorporating this distinction, measurement errors can be classified based on their source as follows:
•	Coherent, random noise due to interference of the measurement (e.g. 60 Hertz noise in a seismic section due to nearby power lines affecting the recording instrument)
•	Incoherent, biased noise due to interference of the measurement (e.g. multi-frequency noise in a seismic section due to trucks on a nearby road)
•	Measurement scale not equal to model scale (e.g. K measured from a pump test / core samples when interested in K on the order of meters)—can be considered an unbiased error if the downscaling / upscaling approach is unbiased
•	Indirect measurement with imperfect correlation between measurement and model parameter uncertainty (e.g. K estimated from seismic velocity)—bias depends on the approach for handling uncertainty in the correlation
•	Non-unique parameter estimate resulting from inversion of measured state variable(s) (e.g. K parameter field estimated from hydraulic head measurements in five wells)—bias depends on the approach for handling uncertainty in the inversion
All these types of measurement error—and many can contaminate a single measurement (as shown in the previous geophysics example)—will result in errors in the estimate of the model parameter, unless the error is accounted for through uncertainty.
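Under the definitions used here (coherent errors are unbiased scatter; incoherent errors are biased), the distinction shows up in how the mean of repeated measurements behaves. A minimal simulation, with an arbitrary true value, bias, and noise level:

```python
import random

random.seed(0)
TRUE_VALUE = 5.0  # hypothetical true parameter value

def mean_of_repeats(n, bias, noise_sd):
    """Mean of n simulated repeat measurements with additive Gaussian
    scatter (standard deviation noise_sd) and a fixed systematic bias."""
    return sum(TRUE_VALUE + bias + random.gauss(0.0, noise_sd)
               for _ in range(n)) / n

# Coherent (unbiased) error: the mean converges toward the true value.
print(mean_of_repeats(100_000, bias=0.0, noise_sd=1.0))   # ~5.0
# Incoherent (biased) error: the mean converges to TRUE_VALUE + bias, and
# the displacement cannot be detected from the scatter alone.
print(mean_of_repeats(100_000, bias=0.7, noise_sd=1.0))   # ~5.7
```

This is why coherent errors can be handled with probabilistic methods while incoherent errors cannot, unless the bias is known from some independent source.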
The uncertainty associated with these error types can be considered uncertainty due to ignorance, since it can be reduced or eliminated by taking measurements with less error. From the perspective of scale, imperfect measurements can result in a loss of resolution of the model parameter variability at some scales compared to perfect measurements. Whether a particular scale of variability larger than the network scale can be resolved using imperfect measurements is difficult to assess, but depends on the relationship between the degree of uncertainty in the estimated model parameter and the amount of variability at that scale.

Measurement data is often categorized as either hard or soft data—hard referring to "exact" and soft to "inexact". The categorization can be very important since the two types of data are often treated very differently in a problem. However, this distinction is ambiguous since the terms "exact" and "inexact" are subjective. Every measurement contains some degree of error. Thus, the distinction between hard and soft data should be based on a defined error threshold; measurement errors of a magnitude below the threshold are considered acceptable and errors of a magnitude above unacceptable. Acceptable and unacceptable errors should be defined based on their potential to corrupt the model results to a degree that alters the conclusions made based on them. Performing a model sensitivity analysis can help in choosing the threshold. Although the choice of error threshold will invariably be somewhat subjective, having a strict, quantitative criterion to distinguish between hard and soft data is better than having none at all.

A type of soft measurement that is in a class by itself is engineering or geologic judgment. This type of measurement involves an expert making a subjective, semi-quantitative assessment about a problem parameter, usually a large-scale parameter covering a large portion of the site.
For example, in reference to the example problem, hydrofacies classes could be defined based on geological criteria set by a hydrogeologist with experience at similar sites, and an upper and lower bound on K could be assigned to each hydrofacies class. An important component of the assessment is attaching uncertainty bounds to the parameter estimate.

In hydrogeology, the measurement process often consists of performing a field test to measure an intermediate, technique-dependent variable, which must be transformed to obtain the primary parameter of interest—based on some measurement model and inversion approach. There are many places in this multi-stage process where errors can enter in and potentially contaminate or create uncertainty in the final parameter estimate. The propensity for these errors to be problematic is often dependent on the problem context. It is important for the practitioner to carefully dissect the measurement process in order to identify potential sources of error and determine if it is necessary to account for uncertainty in the parameter values ascertained from a particular measurement type—thus classifying the data as soft data.

2.3	The decision-making process

The types of decisions that have to be made in engineering problems, especially hydrogeology-related problems, are often vexing, equivocal, and difficult to make as a consequence of the high degree of uncertainty associated with these problems.
In the decision-making context, a decision refers to a choice about a future course of action made from a set of possibilities, referred to as decision alternatives. Thus, decision-making requires looking into the future and predicting which decision alternative will be most beneficial / least costly. This process becomes progressively more difficult as uncertainty about factors influencing the decision increases, since the future consequences of choosing a particular decision alternative become less predictable. Figure 2.4 is a visual conceptualization of the decision-making process for a hydrogeology-related engineering problem. The decision-makers are faced with several important decisions: (1) which engineering design to implement, (2) which site investigation design to follow, and (3) which type(s) of data are most worthwhile to collect (e.g. hard borehole data or soft geophysics data). To make these decisions they must compare the benefits, costs, and risks associated with each alternative, all within a "cloud" of uncertainty. The figure illustrates the types of difficult issues a decision-maker must confront. The following sections will examine these issues and present a general framework for systematically dealing with them in order to ensure that robust decisions are made, even in the face of uncertainty.
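The benefit/cost/risk comparison among alternatives can be expressed schematically. The ranking metric below (intended cost plus a probabilistic risk cost) anticipates the risk concepts developed in Section 2.3.2, and all alternative names and dollar figures are hypothetical:

```python
# Hypothetical comparison of engineering alternatives by total expected
# cost: the intended (design) cost plus a probabilistic risk cost (the
# expected cost of unintended outcomes). Illustrative values only.

alternatives = {
    "no action":          {"intended_cost": 0.0,       "risk_cost": 500_000.0},
    "excavate and treat": {"intended_cost": 300_000.0, "risk_cost": 0.0},
    "pump and treat":     {"intended_cost": 200_000.0, "risk_cost": 150_000.0},
}

def total_expected_cost(alt):
    """Metric used to rank alternatives when the objective is least cost."""
    return alt["intended_cost"] + alt["risk_cost"]

best = min(alternatives, key=lambda name: total_expected_cost(alternatives[name]))
print(best)   # excavate and treat
```

The point of the framework that follows is to make the inputs to such a comparison (especially the risk costs) defensible in the face of uncertainty.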
Figure 2.4: Conceptualization of the decision-making process for hydrogeology-related engineering problems

2.3.1	Key components of decision-making process for engineering problems

The first step in carrying out a formalized approach to decision-making is to define the key components of the problem, which can be categorized as:
•	objectives,
•	decision questions,
•	constraints, and
•	decision variables.
(Note: the terms objective and constraint can be used in many different contexts with different implications. In Section 2.3 these terms will always be used in a decision-making context and, thus, will not be prefixed by the word "decision". However, in other sections of this work, where their usage could be confusing, they will be prefixed with "decision".)
Defining clear objectives before decisions are made is a fundamental, often overlooked, part of the decision-making process, since without objectives there is no context for decision-making. Indeed, decision-making without objectives is a contradiction in terms; how can a decision be made when the desired outcome of the decision is unknown? The comparison between how well the expected outcomes from different decision alternatives meet the predefined objective(s) is the main criterion for evaluating the attractiveness of an alternative. A problem can have many objectives, although it is important that the objectives do not contradict each other. An effective way to set objectives is to use a top-down pyramid approach (see Figure 2.5), where the primary problem objective(s) are defined first and represent the apex of the pyramid. Intermediate objectives, which will help in the process of achieving the primary objective(s), make up the underlying levels of the pyramid.
The structure of the pyramid should be such that intermediate objectives also assist in reaching other intermediate objectives lying above them, in addition to reaching the primary objective(s). Usually, the different tiers of the pyramid correspond to different stages in the project time line.

Figure 2.5: Pyramid approach for defining decision objectives. The apex of the pyramid represents the primary objective, underlain by intermediate objectives that are to be met from bottom to top—culminating with the primary objective.

It is important to stress that the emphasis should be on limiting, not increasing, the complexity in the defined set of objectives in order to simplify and clarify the decision-making process as much as possible. The ideal situation is to have just one primary objective and to be able to assess the influence of every activity on reaching this objective. Then an activity's value to the overall project goal can be determined. However, establishing this global linkage of cause and effect between activity and primary objective can be difficult in the early stages of a project when the problem is ill-defined. Before a site is characterized to some degree, it may be impossible to determine what the possible design alternatives might be. In fact, it may not even be possible to establish a primary objective before the nature of the problem is identified. This is a situation where an intermediate objective becomes advantageous, defined to reach the level of site description where these obscurities in problem definition are cleared up. In the example problem, at the early stages of the project, while it may be surmised that there is a potential for contamination, it may be unknown whether there actually is a contamination problem and, if there is one, what type of problem it is (e.g. shallow soil contamination, deep subsurface contamination which is immobile, groundwater contamination entering the inlet).
At this stage a generic primary objective could be to implement the most cost-effective engineering design. However, there is so much uncertainty about the nature of the actual problem that the set of possible design alternatives has to account for an unlimited number of scenarios, making it difficult to directly link site activities with the objective (e.g. the influence of taking hydraulic head measurements in existing wells on the design decision). An intermediate objective could be to develop a site conceptual model which describes the general geology, hydrogeology, and sources of contamination (if any) across the site and identifies the parameters that have a strong influence on the design decision. Although this intermediate objective represents a qualitative and subjective criterion (unlike the purely quantitative primary objective), it can be thought of as a compass, used to point the project in the right direction—towards achieving the overall goal.

Once the objective(s) of an engineering problem are defined, the decisions which need to be made in order to achieve them can be set; these will be referred to as the decision questions. The primary decision question facing the decision-maker(s) in engineering problems is which engineering design should be implemented to solve the problem at hand. However, to reach the point where this decision can be properly made usually requires addressing a set of antecedent questions. These questions could include:
•	Which parameters are important to the engineering problem and, thus, need to be characterized? Which processes are important and, thus, need to be modeled? How complex a model should be used?
•	What site characterization strategy should be implemented? How much characterization data is enough?
An important aspect of decision-making is posing the correct questions or, in other words, determining what decisions have to be made.
Constraints, in a decision-making context, represent defined limits imposed on certain aspects of the engineering problem, usually regarding the engineering design performance. Imposing these limits may, in turn, rule out certain potential decision alternatives which are unable to satisfy the constraints. Constraints differ from objectives in that they represent project requirements that do not vary and must be satisfied. In contrast, objectives constitute project goals which are striven for, but not necessarily met, and are usually phrased in terms of maximizing or minimizing some performance metric. Constraints can be made objectives and vice versa, but the two are treated fundamentally differently in the problem analysis. Constraints can be categorized as technical, economic, regulatory, and social. In reference to the example problem, a technical constraint could correspond to the maximum excavation slope angle possible before the walls collapse, for an engineering design requiring excavation of contaminated soil. Another technical constraint could be the maximum pumping rate in a water well used for containment or extraction of contaminated groundwater—determined based on the well design, pump capacity, and local hydrogeology. An economic constraint could correspond to a limit on the time required for completion of site remediation in order that development of the site can start on time. Regulatory constraints on contamination could be in one of several forms, as defined by the governing regulatory body, including: (1) maximum allowable contaminant concentrations depending on the intended land use or (2) maximum allowable health risk (e.g. probability of one death in a million over an average lifetime), which would depend on the predicted type and amount of exposure to a contaminant. Another type of regulatory constraint could be a maximum allowable drawdown of the water table in an aquifer used for water supply.
Social constraints are much more abstract and difficult to define, but are included since they are a reality in many problems dealing with the environment and natural resources, which includes hydrogeology problems. A possible social constraint in the example problem is the public perception of living on contaminated ground. In other words, the developer may want to reduce contamination to negligible levels (or eliminate it completely), even though this level of remediation exceeds regulatory requirements, in order to allay the public's fear about the health effects of living on "contaminated" ground. Although difficult to estimate, the economic benefits of abiding by this social constraint may be significant in the long run, if doing so increases the public demand to live there, thus increasing occupancy and allowing higher prices to be charged.

The parameters in a problem which influence the decision(s) are the decision variables. Decision variables include the parameters that define the decision alternatives, and are used as inputs in the model(s) describing system performance, as well as the model output parameters, which are used to differentiate alternatives. In a mathematical context, the former type of parameters are independent variables and the latter type are dependent variables. In the example problem, independent decision variables could be:
•	where, how much, and what types of contaminants were released;
•	hydrogeological properties affecting flow and transport (e.g. hydraulic conductivity and porosity); and
•	unit costs, such as the unit cost of pumping a well, excavating soil, and performing a particular type of treatment.
Dependent decision variables could be:
•	the present and future spatial distribution and concentrations of contaminants in different phases (i.e. non-aqueous, aqueous, and sorbed phases of contaminant mass);
•	the number, and associated pumping rates, of pumping wells;
•	the amount of material excavated and treated; and
•	the total cost of remediation for a design alternative, which could be the metric used to compare how well different alternatives meet the objective (if the primary objective is to minimize total cost).
A decision model should be as simple and straightforward as possible while still properly accounting for, through decision variables and system models, all the processes that significantly influence the decision alternatives.

2.3.2	Concept of risk in decision-making

As mentioned earlier, uncertainty complicates the decision-making process by creating unpredictability and, thus, should be accounted for in the process. To account for uncertainty in decision-making it is necessary to understand the relationship between the two. Uncertainty in any of the decision variables in an engineering problem results in the possibility of unintended decision outcomes (decision alternatives failing to meet expectations). There are two types of unintended outcomes resulting from choosing a non-optimal decision alternative:
•	not meeting the specified objective(s) and/or constraints (decision failure), and
•	meeting all objectives and constraints, but another alternative would have met the objective(s) better.
In the example problem, the first type of outcome could correspond to the existence of soil contamination above the regulatory threshold level (representing a constraint) at certain locations of the site, even after the selected remediation design (representing a decision alternative)—calling for excavation and treatment of soil from identified areas of the site—is implemented. The selected excavation design failed to account for some of the contamination due to underclassification. The second type of outcome would result if uncontaminated soil is inadvertently excavated and treated in an overly-conservative soil remediation plan.
The remediation of clean soil is due to overclassification of contamination and results in unnecessary costs. In this particular example, a single decision alternative can result in both types of outcomes, underclassification in one area of the site and overclassification in another area. If a problem has no uncertainty then there should be no chance of unintended outcomes, since there is no doubt about the best decision alternative. However, once uncertainty enters into a problem, this chance can become non-negligible. The chance and consequence of an unintended decision outcome will be referred to as risk and risk cost, respectively, in this work. Cost, in this context, represents the difference between the unintended and intended outcome in the value of the dependent decision variable(s) used to distinguish alternatives. It does not have to be a monetary value. In reference to the underclassification scenario described above for the example problem, the risk cost could correspond to the monetary failure costs (e.g. fines and the cost of delayed remediation) associated with not properly remediating contaminated soil. Thus, as the risks associated with a decision alternative increase (due to heightened uncertainty in the decision variable(s)), the likelihood of having to absorb risk costs increases. Of course, at the time a decision has to be made, it is unknown whether an unintended outcome will actually occur and, thus, whether an associated risk cost will be incurred. However, if the problem is conceptualized as an ensemble of equally-likely descriptions of reality (realizations), based on available information about the decision variables, each realization would have a particular decision alternative outcome and, thus, risk cost (or lack thereof) associated with it. Then an expected, or probabilistic, risk cost could be thought of as the average risk cost for all realizations.¹

¹These definitions for risk are somewhat unique to this work. In most of the work on hydrogeological decision analysis, risk is defined as the expected cost of failure (see Freeze et al., 1990), which is synonymous with probabilistic risk cost in this work, if failure is taken in a more general sense to mean an unintended outcome. The definitions concerning risk are purposely more descriptive in this work in an attempt to clear up some of the ambiguity associated with the term "risk".

Probabilistic risk cost is the key to risk-based decision-making, which accounts for uncertainty in decision variables. The purpose of formalized decision-making is to identify the decision alternative that best meets the problem objective(s), while satisfying all constraints. In order to differentiate the alternatives based on these criteria, it is necessary to model their future impact on the system, with respect to the objectives and constraints, in terms of standard metrics (dependent decision variables) that can be compared. Probabilistic risk cost represents a statistically robust measure of the influence of uncertainty in decision variables on decision alternative outcomes (i.e. the negative impact of unintended outcomes as compared to their expected outcomes). The measure is in terms of the same dependent decision variables used to gauge the attractiveness of intended alternative outcomes. Thus, probabilistic risk cost can be used in conjunction with the benefits and costs associated with intended outcomes to compare and rank alternatives in a decision-making framework that accounts for risk. This approach to incorporating risk in decision-making can be illustrated using the example problem. Say that it is determined that the probability of soil contamination above the regulatory threshold level in a certain part of the site is fifty percent.
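The ensemble view of probabilistic risk cost can be sketched numerically; the failure cost and ensemble size below are hypothetical, with contamination present in half the realizations to match the fifty percent probability:

```python
# Hypothetical ensemble calculation of probabilistic risk cost: the average
# risk cost over equally-likely realizations. Illustrative values only.

def probabilistic_risk_cost(realization_costs):
    """Average risk cost over an ensemble of equally-likely realizations."""
    return sum(realization_costs) / len(realization_costs)

FAILURE_COST = 1_000_000.0   # cost incurred when contamination is missed ($)
N = 1000                     # number of realizations

# No-action alternative: contaminated realizations incur the failure cost.
no_action = [FAILURE_COST if i < N // 2 else 0.0 for i in range(N)]
print(probabilistic_risk_cost(no_action))    # 500000.0 (= 0.5 * FAILURE_COST)

# Remediation alternative: contamination is correctly handled in every
# realization, so no unintended outcomes and zero probabilistic risk cost.
remediation = [0.0] * N
print(probabilistic_risk_cost(remediation))  # 0.0
```

With only two equally-weighted outcomes this reduces to the simple product of the probability of failure and the failure cost, as the walkthrough that follows shows.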
Then the no action alternative would result in a probabilistic risk cost of one-half the dollar cost of failure associated with underclassification of contaminated soil (the product of the probability of failure and the failure cost). The intended costs and benefits for no action are zero. However, for the remediation alternative (excavation and treatment of soil), the probabilistic risk cost is zero since there are no unintended outcomes resulting from underclassification or overclassification of contamination (i.e. the presence of contamination is correctly classified). The intended cost of the remediation alternative is simply the dollar cost of performing the remediation, and the intended benefit is zero (there are no benefits in this problem scenario). Thus, if the primary problem obje