TOMOGRAPHIC TECHNIQUES AND THEIR APPLICATION TO GEOTECHNICAL AND GROUNDWATER FLOW PROBLEMS

by

JAMES STUART LAIDLAW
B.A.Sc., The University of British Columbia, 1985

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF APPLIED SCIENCE in THE FACULTY OF GRADUATE STUDIES, The Department of Geological Sciences

We accept this thesis as conforming to the required standard

THE UNIVERSITY OF BRITISH COLUMBIA
December 1987
© James Stuart Laidlaw, 1987

In presenting this thesis in partial fulfilment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the head of my department or by his or her representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission.

Department of Geological Sciences
The University of British Columbia
1956 Main Mall
Vancouver, Canada V6T 1Y3

ABSTRACT

Most downhole tools in use today measure properties immediately adjacent to the borehole, and as such, only a small portion of the subsurface volume is known with any degree of certainty. When dealing with geologic situations which are characteristically heterogeneous, the engineer often requires more information than what present tests can provide. Tomography is an in-situ testing method that allows the generation of a two dimensional subsurface image by reconstructing material property variations between boreholes. It is essentially a solution to the inverse problem where signals are measured and, through computer manipulation, are used to infer material contrasts in the subsurface. For the purposes of this thesis, a two dimensional configuration is used to demonstrate and evaluate the tomographic technique with source and receiver locations positioned at intervals down adjacent and nearly vertical boreholes.

Both iterative and direct matrix solution methods are used to evaluate the use of seismic and groundwater flow data for subsurface tomography. The iterative methods include a variation of the classical algebraic reconstruction technique (CART), a modified version of the ART algorithm (MART), and a modified version of the ART algorithm using the Chebyshev norm criterion (LART). The purpose of the iterative tests is to determine the best algorithm for signal reconstruction when data noise and different damping parameters are applied. The matrix methodologies include a constrained L1 linear approximation algorithm and singular value decomposition routines (SVD). These methods solve the set of linear equations (Ax = b) which the tomographic techniques produce. The purpose of this stage of testing is to optimize a direct method of solution to the sets of linear equations such that different forms of anomaly can be discerned. Numerous synthetic seismic and groundwater data sets are used by both iterative and matrix algorithms. Seismic test data sets are generated by calculation of transit times through materials of known seismic velocity. Groundwater test data sets are generated by drawdown analyses and finite element procedures.
All algorithms demonstrate a reasonable ability at reconstructing sections which closely resemble the known profiles. Vertical anomalies, however, are not as well defined as horizontal anomalies. This is primarily a result of incomplete cross-hole scanning geometry, which also affects the rank and condition of the matrices used by the direct forms of solution. The addition of Gaussian noise to the data produces poor reconstructions regardless of the type of algorithm used. This emphasizes the fact that tomographic techniques require clear and relatively error-free signals.

TABLE OF CONTENTS

Abstract
Table of Contents
List of Figures
Acknowledgements
Preface
Chapter 1
  1.1 Introduction
  1.2 Purpose and Objective
Chapter 2
  2.1 Algebraic Reconstruction Techniques (ART)
    2.1.1 Brief Historical Overview of Tomography
    2.1.2 Mathematics of ART
  2.2 Summary and Conclusion
Chapter 3
  3.1 Reconstruction Techniques
  3.2 Methods Used in this Investigation
    3.2.1 The Classical ART Algorithm (CART)
    3.2.2 The Modified ART Algorithm (MART)
    3.2.3 The Modified ART Algorithm Using the L∞ Criterion (LART)
    3.2.4 Matrix Reconstruction Technique
  3.3 Summary and Conclusions
Chapter 4
  4.1 Testing Procedures
    4.1.1 Seismic Considerations
    4.1.2 Groundwater Considerations
  4.2 Test Configurations
    4.2.1 Seismic
    4.2.2 Groundwater
    4.2.3 Parameters Varied
  4.3 Summary and Conclusions
Chapter 5
  5.1 Applications of ART
  5.2 Stopping Criteria and Image Resolution for the Iterative Techniques
    5.2.1 Stopping Criteria
    5.2.2 Image Clarity
    5.2.3 Summary and Conclusions
  5.3 Test Results
    5.3.1 Seismic Patterns
    5.3.2 Groundwater Patterns
    5.3.3 Timings
  5.4 Analysis of Test Results
    5.4.1 The Best Iterative Algorithm
    5.4.2 Applications
  5.5 Summary and Conclusions
Chapter 6
  6.1 Applications of Matrix Techniques
    6.1.1 Introduction and Review
  6.2 Initial Test Results
    6.2.1 Results
  6.3 Alterations to the Procedure
    6.3.1 Redundant Data
    6.3.2 Elimination of Low Singular Values
    6.3.3 Producing a Highly Underdetermined Problem
  6.4 Special Problems
    6.4.1 Thin Fracture Analysis
    6.4.2 High Contrast Anomalies
    6.4.3 Horizontal Variations
    6.4.4 Noise in the Data
    6.4.5 Parameterization Effects
    6.4.6 A Priori Information
  6.5 Summary and Analysis of Test Results
    6.5.1 Conclusion
Chapter 7
  7.1 General Conclusions
  7.2 Further Study
    7.2.1 Improvements on the ART Algorithm
    7.2.2 Applications Research
  7.3 Summary and Conclusions
References

LIST OF FIGURES

1. Cross Hole Configuration
2. Computer Aided Tomographic System (After Scudder, 1978)
3. Tomographic Section Discretized into Pixels
4. Pixels Which are Not Intercepted by Raypath
5. Underdetermined System
6. Overdetermined System
7. The Tangent Law
8. Ray Width Calculation Used in Classical ART
9. Areas Not Included in the Reconstruction
10. The Influence of Outliers (After Menke, 1984a)
11. Calculation of Length Matrix
12. Method of Intercept Calculation (After Dunbar, 1986)
13. Cross Hole Seismic Investigation
14. Cross Hole Transient Groundwater Investigation
15. Seismic Test Pattern. Test Pattern A
16. Seismic Test Pattern. Test Pattern B
17. Homogeneous Seismic Test Pattern
18. One Layer Seismic Test Pattern
19. Two Layer Seismic Test Pattern
20. Homogeneous Groundwater Test Pattern
21. One Layer Groundwater Test Pattern
22. Variation of Ray Coverage
23. Test Pattern A by MART. 0.5 Damping, No Noise
24. Test Pattern A by LART. 0.5 Damping, No Noise
25. Test Pattern A by CART. 0.5 Damping, No Noise
26. Test Pattern A by CART. 0.5 Damping, No Noise
27. The Influence of Damping Parameters on Convergence
28. Section Demonstrating Vertical Smearing
29. Raypaths Tending to Skim Horizontal Boundaries
30. Demonstrating the Smearing on a Horizontal Anomaly
31. Test Pattern A by LART. 0.5 Damping, 0.05 Noise
32. Test Pattern A by LART. 0.5 Damping, 0.1 Noise
33. LART and Test Pattern A with Initial Matrix = 0.0
34. Original Homogeneous Test Pattern
35. Seismic Pattern by MART. 0.5 Damping, No Noise
36. Seismic Pattern by LART. 0.5 Damping, No Noise
37. Seismic Pattern by CART. 0.5 Damping, No Noise
38. One Layer Pattern by CART. 0.5 Damping, No Noise
39. Two Layer Pattern by MART. 0.5 Damping, No Noise
40. One Layer Pattern by LART. 0.5 Damping, No Noise
41. One Layer Pattern by MART. 0.5 Damping, No Noise
42. Two Layer Pattern by LART. 0.5 Damping, No Noise
43. Two Layer Pattern by CART. 0.5 Damping, No Noise
44. The Effect of Noise Level on Solution Convergence
45. Test Pattern B by CART. 0.5 Damping, No Noise
46. Test Pattern B by LART. 0.5 Damping, No Noise
47. Homogeneous Pattern by CART. 0.5 Damping, No Noise
48. Homogeneous Pattern by LART. 0.5 Damping, No Noise
49. Effect of Damping Value on Convergence
50. Finite Element Configuration for Transient Test
51. One Layer Groundwater Pattern by MART. 0.5 Damping, No Noise
52. One Layer Groundwater Pattern by LART. 0.5 Damping, No Noise
53. MART Convergence Characteristics with 0.5 Damping
54. Algorithm Execution Time
55. Singular Values of the Seismic Test Pattern
56. Addition of Redundancy
57. Two Orders of Redundancy by CL1
58. Three Orders of Redundancy by CL1
59. Singular Values of Redundant Data Sets
60. One Layer Seismic Pattern by SVD
61. Two Layer Groundwater Flow Pattern
62. Central Anomaly Groundwater Pattern
63. Two Layer Groundwater Pattern by SVD
64. Central Anomaly Pattern by SVD
65. Highly Underdetermined Configuration by SVD
66. Two Order of Magnitude Diffusivity Contrast
67. Horizontal Variations Producing Vertical Anomalies
68. Configuration for Minor Horizontal Diffusivity Contrast
69. Configuration for Large Horizontal Diffusivity Contrast
70. Groundwater Pattern With Minor Horizontal Diffusivity Variation
71. Groundwater Pattern With Large Horizontal Diffusivity Variation
72. One Layer Seismic Pattern by SVD. 0.05 Noise
73. One Layer Seismic Pattern by SVD. No Noise
74. One Layer Pattern by SVD Using Finer Meshes
75. One Layer Pattern by SVD. 0.05 Noise
76. Borehole Pixels Eliminated From Matrix Calculation
77. Seismic Cone Penetrometer and its Application

ACKNOWLEDGEMENTS

I would like to thank Dr. W. S. Dunbar for his help in the evolution of this thesis. He not only suggested the topic, but provided invaluable counselling, input (and prodding) throughout the course of the work.
I would also like to thank the University of British Columbia for the 1986-1987 University Graduate Fellowship, which provided me with substantial financial assistance. A special thank you also goes out to the faculty, staff and students of the Departments of Geological Sciences and Civil Engineering at the University of British Columbia for their assistance in getting my thoughts and words together. Lastly, but certainly not least, I thank my family for putting up with me during the long months of research and writing.

PREFACE

An interesting observation:

"If you steal from one author, it's plagiarism; if you steal from many, it's research." Wilson Mizner*

And, a not uncommon fate for theses:

"Many thanks for your book (thesis); I shall lose no time in reading it." Benjamin Disraeli**

*, ** as stated by N. G. McPhee (1978) and (1981)

CHAPTER 1

1.1 Introduction

Engineers and designers who deal with structures built on or in the earth have been faced with the perpetual problem of finding out what is actually beneath the ground they work on. Geologic deposits are characteristically heterogeneous, either on the microscale (microfractures, crystalline inclusions, etc.) or on the macroscale (layer pinchouts, channel deposits, faults, folds, etc.), and as such, a detailed subsurface picture of what exists at any given site is extremely difficult to obtain. Here, the term picture is used to describe the profile obtained from crosshole scanning. Often, the engineer must rely on past construction experience in a given area to provide him or her with an understanding of the subsurface conditions.

In-situ exploration has become standard procedure for all scales of geotechnical investigation in recent years. Previously, only large scale projects such as dams and bridge foundations warranted the costs and time for subsurface exploration. Now, at least one borehole or in-situ test at the site is usually required by the engineer before any sort of design is undertaken. In spite of these advances in the ground investigation process, a few boreholes are not enough to characterize the conditions under an extensive foundation. Indeed, for one 2-1/2 inch (6.35 cm) diameter borehole used to investigate the ground conditions under a 10 by 20 meter foundation, the ratio of the area of soil structure known to the area of soil structure at the site is over 1:15000. Because of the high cost of extensive subsurface investigations, the main thrust of current in-situ testing is to make the most of what is available. Sophisticated downhole logging and testing devices have been and are being developed to fill this need (Robertson, 1985; Campanella and Robertson, 1981); however, the question that still remains is: can the information obtained from one or more boreholes be applied (with confidence) over the entire site? Or, stated another way, is the geology sufficiently continuous at the site to allow extrapolation of borehole data? Often, the answer to one or both of these questions is a conjectured no. Therefore, a method or methods which sample large portions of the subsurface seem to be necessary.

Tomography is a method by which large volumes of subsurface material may be scanned and analysed without the need of extensive excavations or equipment. Basically, the tomographic field methodology is that data is gathered by means of source/receiver positions placed at intervals down adjacent boreholes (Figure 1).
Crosshole signals are obtained for all possible combinations of source and receiver location, and these data are then used by a computer program to reconstruct a subsurface profile of a particular material property. For example, in cross-hole seismic tomography, travel times between shot and receiver locations are recorded and, together with computer-calculated raypath lengths, a profile of velocity (actually inverse velocity) values is generated. This velocity profile can then be used to suggest soil density variations.

Tomographic techniques have had wide application. The major use of these techniques has been in the medical imaging field. CAT (computer aided tomography) and CTAT (computerized transverse axial tomography) have revolutionized the diagnostic process in brain tumor identification and have had further success in whole body tumor scanning (Cho, 1974; Shepp and Logan, 1974; Scudder, 1978). Applications of tomography have also been extended to radio astronomy, geophysical seismology, X-ray crystallography and electromagnetic investigations (McMechan, 1983). The use of tomography, however, is by no means limited to the fields listed here.

1.2 Purpose and Objective

The purpose of this thesis is to examine four interpretational algorithms used in tomographic imaging and to determine which forms of input data can be applied to them, and under which circumstances. Three of these methods involve the iterative solution of the algebraic reconstruction problem and the fourth involves direct matrix solution. The iterative techniques will be compared on the basis of computational speed, reconstruction accuracy and solution convergence. As cross-hole seismic techniques have been extensively used in practice and their theories are well established, these methods will be applied to evaluate the iterative algorithms. Hydrologic data will also be used at a later stage to examine reconstruction efficiency with data from a different form of source input. A number of different data sets (both seismic and hydrologic) will be used for the matrix reconstruction method. The main thrust of this stage of the investigation, however, will be to optimize a direct method of matrix solution so that many different forms of anomaly may be distinguished, and so that the validity of using these two different radiation types can be established.

[Figure 1: Cross Hole Configuration Typically Used in Geotomographic Scanning]

The objective of these comparisons is to extend the tomographic technique to geotechnical and groundwater flow problems by showing that these solution methods will produce reasonably accurate subsurface profiles which identify anomalous zones when seismic or transient flow data are used as input. A second objective is to show that the iterative techniques are capable of running on present-day personal computer systems.

CHAPTER 2

2.1 Algebraic Reconstruction Techniques

Algebraic reconstruction is a method of solving large, sparse systems of equations. As tomographic techniques and applications have developed over the years, so has the assortment of algebraic reconstruction methodologies, with almost every author having his or her own preference as to how the reconstruction technique should be carried out.
The individual programs differ because there is an ongoing search for the 'best' algorithm which will produce the 'best' overall picture in any given set of circumstances (e.g., for unusually shaped anomalies, for refraction of rays, for noise in the data, etc.). Despite the ever increasing number of quasi-unique methodologies, the basic algebraic reconstruction technique has changed little.

2.1.1 Brief Historical Overview of Tomography

Radon, in his 1917 paper "On the Determination of Functions from their Integrals along Certain Manifolds", laid the mathematical groundwork for tomography by developing the earliest known solution to the reconstruction problem (Horn, 1978; Scudder, 1978). However, it was Bracewell in 1956 who first used the reconstruction techniques in a practical sense. Bracewell used a series of strip sums of microwave radiation to reconstruct the image of the sun and thus prove the validity of the tomographic technique. Similar methods, but on a much smaller scale, have been used by electron microscopists in their analysis of the structure of complex specimens (Scudder, 1978). The shift towards the medical applications of tomography began in the 1960's and early 1970's. The theory of using reconstruction techniques with radioactive methods led to the development of the first head scanner by Hounsfield at the Central Research Laboratories of E.M.I. Ltd. in 1970. As well, many important papers were published at this time discussing reconstruction processes and their relative merits. Convolution methodologies were analysed by Ramachandran and Lakshminarayanan in 1971, and Mersereau (1973) used Fourier analysis to recover pictures from their projections. Several papers compared the available algorithms, such as Herman's paper (1972) comparing algebraic reconstruction with summation techniques, Herman and Rowland (1973) analysing algebraic reconstruction techniques (ART), convolution, and simultaneous iterative reconstruction techniques (SIRT), and Gilbert's paper (1972), which again compared SIRT and ART, with reference to Fourier techniques. Other authors, however, chose to investigate the behaviour of the basic ART algorithm and offer suggestions as to its improvement. Two excellent papers along these lines are by Gordon (1974) and Herman, Lent and Rowland (1973).

In the middle 1970's and early 1980's, new applications for ART were being investigated. It was found that tomography could be applied to geophysical exploration, and a renewed interest in these techniques was born. McCann, Grainger and McCann (1975) attempted inter-borehole acoustic measurements to interpret geologic features, which led the way for cross-hole imaging in coal mines (Radcliff and Balanis, 1979; Mason, 1981) and in crystalline rocks (Wong, Hurley and West, 1983; Ivansson, 1986; Gustavsson et al, 1986). Electromagnetic probing of the subsurface was also attempted at an underground mass transit site near Washington D.C. (Dines and Lytle, 1979) and on a granitic pluton (Ramirez, 1986), both of which produced good results. Today, there are even more applications for tomographic techniques using wide ranges of radiation types.

2.1.2 Mathematics of ART

The proof and mathematical concepts of Radon's solution to the reconstruction problem are presented in his 1917 paper. For the purposes of this thesis, however, only the most important formula to come out of that analysis will be discussed.
Radon's inversion formula is given as:

    f(r,φ) = -(1/(2π²)) ∫₀^π ∫_{-∞}^{+∞} [∂p(l,θ)/∂l] / (l - r cos(θ - φ)) dl dθ     (1)

where f(r,φ) is the absorbing density at the point (r,φ) in polar coordinates; p(l,θ) is the density integral or ray sum along the ray (l,θ); θ and φ are angular values; l is the perpendicular distance of the ray from the origin of the region being scanned; r is the radial distance; and t = l - r cos(θ - φ) is the perpendicular distance from a given point to a ray. The assumptions necessary in order to apply this formula are (Horn, 1978):

1) f(r,φ) is bounded, continuous and zero outside the region scanned,
2) p(l,θ) is then also zero outside a certain range of l,
3) p(l,θ) is continuous, and
4) the partial derivative of p(l,θ) with respect to l is continuous.

This formula is designed for essentially continuous circumferential sampling of a region (as is indicated by the limits of integration), and it can be used almost directly in modern medical imaging, since complete (360 degree) sampling can be achieved through rotation of the radiation source and receivers (Figure 2). The CAT scanners presently in wide use around the world are prime examples of this technology. The implication of a complete scanning field is that virtually all points in the digitized section will be sampled by at least one raypath.

[Figure 2: Diagram of a Computer Aided Tomographic System Used in Medical Imaging (After Scudder, 1978)]

Borehole tomography, on the other hand, cannot economically use full 360° scans. To obtain such a scan would require the drilling of a large number of holes to fully encompass the area of interest. The typical application of a downhole tomographic technique would be to shoot in a criss-cross manner between two adjacent and nearly vertical boreholes (see Figure 1). The resulting area for reconstruction is a rectangular section of incomplete raypath coverage. To handle the new geometry, modifications must be made to Radon's original inversion formula.

For most radiation types used in modern tomography, two basic parameters may be determined from well to well shooting: either the slowness profile or the attenuation profile of the material. The values measured at the receivers for these two parameters are time of flight for the slowness profile and response history for the attenuation profile. For each raypath through the material in question, the relationship of slowness to travel time is:

    T_k = ∫_ray k [1 / v(x,y)] dl     (2)

where k is the ray number, T_k is the travel time, l is the ray length, and v(x,y) is the velocity at position (x,y). The relationship of amplitude to the attenuation field a(x,y) is:

    A_k = A_o exp( - ∫_ray k a(x,y) dl )     (3)

where A_k is the amplitude of the response for ray k, A_o is the source amplitude, a(x,y) = πf/(cQ), f is the frequency, and c and Q are dependent on position. A_o must be corrected for the radiation pattern, geometric spreading and instrument response so that a will describe only the effects of apparent attenuation (which is the intrinsic dissipation and elastic scattering). If one takes P_k = ln(A_k/A_o), then the amplitude response formula reduces to:

    P_k = - ∫_ray k a(x,y) dl     (4)

which is of the same form as the travel time response formula.
Thus, a general integral equation can be written as:

    R_k = ∫_ray k S(x,y) dl     (5)

with R_k representing the response parameter and S(x,y) representing the material property of interest (Peterson et al, 1985). Note that the general integral formula derived in Equation 5 is similar to the original Radon inverse formulation of Equation 1.

In practice, the region between the boreholes is discretized into a grid of 'pixels' or 'cells'. These pixels can have many shapes but are usually chosen to be rectangular or square. Figure 3 demonstrates the discretization process. With the discretized picture, the line integral over the raypath becomes a summation:

    R_k = Σ_{i=1}^{NPX} Σ_{j=1}^{NPY} S_ij ΔL_ijk     (6)

where 1 ≤ k ≤ n, n is the number of rays, R_k is the response for ray k, S_ij is the material property at the i,j pixel intercepted by the kth ray, ΔL_ijk is the length of the kth ray through the i,j pixel, and NPX and NPY are the numbers of pixels in the i and j directions. This is the basic formula used for algebraic reconstruction techniques.

[Figure 3: Tomographic Section Discretized into Pixels or Cells]

One can see that the above formula (6) readily lends itself to matrix notation of the form:

    Ax = b     (7)

where A is an m by n matrix of lengths, x is a vector of properties, and b is a vector of responses. Intuitively, the matrix A must be very sparse since, for each raypath, a large number of the pixels in the grid are not intercepted (Figure 4). Thus a reasonable limit must be used on the discretization size of the pixels so that enormous matrices are not produced. For example, a discretization of 10 x 10 pixels with source and receiver positions in the center of each pixel adjacent to the boreholes produces a matrix that has 100 pixels times 100 rays = 10000 elements. Increasing the number of pixels by a finer discretization increases the matrix size geometrically. The large storage requirements of matrix A usually preclude the use of solution methods that involve the storage of the entire matrix A in memory at one time. An example of such a technique is the singular value decomposition to be discussed in Chapter 6.

[Figure 4: Demonstration of the Large Number of Pixels which are Not Intercepted by a Raypath]

One way around the problem of large matrix size is to ignore and/or avoid the matrix development. Instead, one treats each raypath (Equation 6) as a separate equation and solves the reconstruction problem iteratively. Basically, the steps in the solution process are:

1) make an initial estimate of the material properties S_ij,
2) find the calculated ray sum R_k* for the existing material properties S_ij,
3) compare R_k* with R_k (the measured response),
4) adjust the values S_ij so that R_k* = R_k (usually accomplished by simple back projection),
5) continue for all rays until R_k* = R_k (for all k) or until a set tolerance is reached.

The major area for improvement of the reconstruction process has typically been step 4, and it is here that the many reconstruction algorithms differ. Excellent papers which describe the mathematical development of algebraic reconstruction schemes are by Dines and Lytle (1979); Peterson, Paulsson and McEvilly (1985); Fawcett and Clayton (1984); Gordon (1974); and Herman, Lent and Rowland (1973).
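The five steps above translate almost directly into code. The following sketch is a minimal illustration of the iterative back-projection idea, not the thesis programs: it assumes that a dense array of per-pixel ray lengths (`lengths`, rays by pixels) and the measured ray sums (`r_meas`) are already available, and it spreads each residual evenly over the pixels crossed by the ray. The function name, tolerance and iteration count are assumptions.

```python
import numpy as np

def art_reconstruct(lengths, r_meas, n_iter=50, tol=1e-6):
    """Simple additive ART.  lengths: (n_rays, n_pixels) ray lengths,
    r_meas: (n_rays,) measured ray sums.  Returns estimated pixel values."""
    n_rays, n_pixels = lengths.shape
    # Step 1: uniform starting model consistent with the average ray sum.
    s = np.full(n_pixels, r_meas.sum() / lengths.sum())
    for _ in range(n_iter):
        for k in range(n_rays):
            hit = lengths[k] > 0.0                    # pixels intercepted by ray k
            if not hit.any():
                continue
            r_calc = lengths[k] @ s                   # step 2: calculated ray sum
            resid = r_meas[k] - r_calc                # step 3: compare with measurement
            s[hit] += resid / lengths[k, hit].sum()   # step 4: spread the correction evenly along the ray
        if np.max(np.abs(lengths @ s - r_meas)) < tol:  # step 5: stop when ray sums are matched
            break
    return s
```

A damping factor (discussed in Chapter 3) would simply scale the step-4 correction before it is applied.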
Before concluding this section, some important points must be addressed. The first point is that the matrix A is not always a square matrix. The implication of this is that there may be too much information (overdetermined) or too little information (underdetermined) for the iterative solution process. These situations can occur if there are not enough raypaths to cover the grid area (underdetermined, Figure 5), or if there are many more raypaths than pixels and each pixel is intercepted by several rays (overdetermined, Figure 6).

[Figure 5: Underdetermined System, where the Number of Raypaths (Equations) is Greatly Exceeded by the Number of Pixels (Unknowns)]
[Figure 6: Overdetermined System, where the Number of Raypaths (Equations) Greatly Exceeds the Number of Pixels (Unknowns)]

In the case of the purely overdetermined problem, one can use a least squares solution to determine the best picture. The redundant information may even be used to check the picture obtained or to aid in its resolution. For the purely underdetermined problem, no unique solution exists for the set of equations. Indeed, there is an infinite number of possible solutions to the data set. Only with the use of additional information can a unique solution be found. This 'a priori' information may take the form of the expected distribution of properties throughout the section, known properties adjacent to the borehole, or perhaps limits being set on the range of the material property itself. The use of 'a priori' information will be discussed further in a later section.

There also exists a special group of problems that are mixed-determined, a condition which is common to geotomographic problems. The mixed-determined situation arises primarily from poor or incomplete raypath coverage. In the cross-hole methodology, a number of pixels lie on the circumference of the domain and thus may be intercepted by fewer than two raypaths. Interior pixels, on the other hand, are intercepted by a multitude of raypaths. The result is that different pixels in the same domain may be either under- or overdetermined, depending on the degree of raypath coverage. This can result in slow convergence or no convergence of the iterative solution algorithm. Menke (1984a) discusses these types of problem in detail and outlines a number of solution strategies.

The second point that must be addressed is the assumption of the straight line raypath. The algorithms examined in this paper all have the implicit assumption that the rays propagate in a straight line from the source to the receiver. This assumption is not strictly valid because refraction and reflection will occur as the rays travel through media of different material properties. Flow of water, for example, will refract according to the Tangent Law (Figure 7) and optical rays will refract according to Snell's Law. Incorporation of ray bending into the reconstruction algorithm is difficult because the program not only has to solve the linear equations, but must also recalculate the length of the ray through each pixel at each iteration due to ray bending. Iterative algorithms with ray bending have been written (Baguelin, pers. comm.), but they require some assumptions to be made about the subsurface structure and are slow to converge.
The assumption of the straight line optical model is incorrect in the strict sense; however, it can be used with confidence as long as the contrast between materials is not great (i.e., for seismic models, a low density difference between layers). In such a case, the difference between the reconstructions obtained with bent rays and with straight rays is insignificant (Peterson et al, 1985; McMechan, 1983).

[Figure 7: The Tangent Law Describing Groundwater Refraction at Interfaces (after Freeze and Cherry, 1979)]

2.2 Summary and Conclusion

Algebraic reconstruction techniques have had a long history, originating from Radon's original inversion formulation and developing into a wide variety of algorithms that have been used for medical, geophysical and geotechnical purposes. The mathematics of reconstruction is well established and actually quite simple; the algorithm is merely the solution of a set of linear equations which result from the discretization of the problem domain and the passage of rays through this domain. For the iterative techniques, the equations are solved one at a time, until the calculated responses (via calculations from the problem domain) are equal to or nearly equal to the measured responses. Matrix solution of these equations is possible by way of matrix solving algorithms but, due to the large, sparse matrix that results from this process, there is often reluctance to try such techniques.

The problem of too much or too little information is basic to all field experiments. Too much information is generally good as it implies redundancy; too little information, however, indicates that no unique solution exists or can be found without additional input data. Finally, the straight line raypath model used for most reconstruction techniques is not strictly correct; however, for most applications it is sufficiently correct.

CHAPTER 3

3.1 Reconstruction Techniques

Numerous iterative algorithms have been developed over the years to solve the reconstruction problem of tomography. These algorithms differ in either the method by which the deviation from the response data is back-projected over the individual raypaths, or in the method of iteration through the suite of raypaths available. The motivation for developing the different algorithms is to improve the clarity and accuracy of the reconstruction. Here, clarity is defined as the ability of the algorithm to produce a profile which has well defined anomaly boundaries that correspond to the boundaries of the known profile, and accuracy is the ability of the algorithm to produce a profile which replicates or resembles the known profile in terms of material property values and anomaly shape.

3.2 Methods Used in this Investigation

3.2.1 The Classical ART Algorithm (CART)

The original (or classical) ART algorithm was formulated by Gordon et al (1970). While their procedures generated a good deal of controversy at the time, the algorithm has since been shown to have a firm mathematical basis (Herman et al, 1973; Gordon, 1974). The original algorithm was intended for applications in electron microscopy and X-ray photography (and later in medical tomography), which had the implicit assumption that a full 360° view could be obtained.
This assumption, however, is not valid for cross-hole studies, but little needs to be changed in the algorithm to account for (or adjust for) the lack of coverage. Assuming that the region of interest R has been digitized into square (or rectangular) pixels of uniform grayness (where the level of grayness signifies the intensity of the physical property being measured at that point), Gordon (1974) describes the classical ART algorithm as follows. Let f_i^q designate the qth estimate of the density function f at the reconstruction element i in R. Let

    P_j^q = Σ_{i in ray j} C_j f_i^q     (8)

where C_j is a weighting function (set to 1.0 by Gordon et al, 1970) and P_j^q is the value of the qth estimate of the projection P_j. Let f̄ be the mean density of the object in R. Then, the ART algorithm is:

    f_i^0 = f̄ ,    f_i^{q+1} = f_i^q + (P_j - P_j^q) / N_j  for each element i in ray j     (9)

where i = 1,...,n, n is the number of elements, and N_j is a weighting function. Expressed in words, this means that the iterative process is started with all reconstruction elements set to some constant (f̄). In each iteration, the difference between the actual projection data and the estimated projection data is calculated (P_j/N_j - P_j^q/N_j), and the correction (or difference) is evenly divided between all reconstruction elements which are a part of the ray in that projection. Each ray is considered in turn, and corrections are applied over several iterations until a convergence criterion is reached.

The classical ART algorithm used in this study is a slightly modified version of the algorithm described above. In the computer algorithm, the region of interest (as before) is divided into a group of rectangular pixels with the boreholes forming the horizontal limits of the space. The size of the pixels is based on the average spacing of the receiver positions down the response borehole, and this average spacing is then used (with modifications to account for differing hole distances) to divide the region into vertical sections. In practice, the spacing of source and receiver locations should be based on the scale of feature to be resolved. Such information may be available from borehole logs. This discretization process produces (for convenience) an n by n space of pixels.

The calculation process begins by determining which pixels are intercepted by which rays. The rays themselves are all possible straight line connections between source and receiver points, and are of finite width. The width of the ray is very important in that it determines or selects which pixels will have an influence on the total ray sum. Herman et al (1973), as well as many other authors, have suggested that the ray width be a function of the angle of the ray to the horizontal, namely:
    W_o = W max(|sin θ|, |cos θ|)     (10)

where θ is the angle of the ray with respect to the horizontal, W is the vertical spacing or size of the pixels, and W_o is the resultant ray width. This width calculation is an effort to avoid 'streaking', or having too many pixels influencing the ray sum. This process is discussed in detail by Herman et al (1973). In the computerized model, Equation 10 is used. Once the ray width is determined, all pixels whose centers lie within this ray band are identified and stored for use in the reconstruction process. The length of path, as well as which pixels lie within that path, are stored for each individual ray (Figure 8).

[Figure 8: Ray Width Calculation Used in the Classical ART Algorithm]

The reconstruction process begins with a matrix of values representing the pixel values of the parameter of interest (say inverse velocity) being set to some average value calculated from the ray sum data. Each set of ray data is then used individually to determine whether the calculated ray sum (obtained by multiplying the pixel length by the inverse pixel velocity) is the same as the ray sum of the data. If it is not, the difference (whether positive or negative) is evenly distributed over the pixels in the ray. Thus, this is an unconstrained ART algorithm where the magnitude and sign of the corrections are not restricted (Herman et al, 1973). The process continues through the rays sequentially, with one iteration indicating that all rays have been analysed once. The iterations proceed until the overall discrepancy between calculated and actual ray sums meets a preset criterion. The convergence criterion used here will be discussed in a later section.

Often, damping parameters are applied in the classical ART (as well as other algebraic reconstruction) formulations. The damping parameters are used to avoid the oscillation of the iterative solution which has been noted by many authors. The oscillation is the result of the full back projection of the time summation error over the pixels lying within the ray's domain. Basically, the damping parameters simply reduce the magnitude of the back projection so that convergence is facilitated. The use of damping parameters is important in the reconstruction process but, according to Peterson et al (1985), the proper form of damping parameter is often difficult to obtain because it is invariably site specific.

3.2.2 The Modified ART Algorithm (MART)

The modified ART algorithm is, as the name implies, a modification of the original or classical ART algorithm. It was developed primarily to deal with the fact that the classical ART algorithm ignored the contribution of pixels through which a portion of the ray band might pass (Figure 9). In the modified ART algorithm, a straight line is assumed between shot and receiver positions, and the contribution of each pixel which is cut by that ray is included in the reconstruction process.

[Figure 9: Demonstrating the Areas which are Not Included in the Reconstruction when Ray Width Calculations are Made]

Following the same assumptions as the classical ART derivation, the modified ART algorithm also starts with a homogeneous region of uniform grayness. Shot and receiver boreholes delineate the lateral extent of the region, with the lower surface being the borehole bottoms and the upper surface usually coinciding with the ground surface. The modified ART program uses the same pixel generation algorithm as the classical ART algorithm, and thus an array of distances representing the extreme points of each pixel is generated based on the spacing of the boreholes and the receiver positions. The second step of the reconstruction process is to identify which pixels are crossed by which ray. This is accomplished by an iterative technique whereby the shot and receiver locations are identified as to which pixel they lie in and where the pixel layer boundaries are with respect to these locations. Then, starting at the source hole, the algorithm calculates which pixels are dissected by the ray and the length of that ray in each pixel. Once all rays are accounted for and all files are complete, the iterative reconstruction process begins.
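The pixel-by-pixel ray lengths required here (and again by the matrix method of Section 3.2.4) can be obtained by intersecting the straight ray with the horizontal and vertical grid lines and measuring the segments between successive crossing points. The sketch below is an illustrative implementation of that idea on a regular grid; the function name and grid conventions are assumptions and it is not the thesis code.

```python
import numpy as np

def ray_cell_lengths(src, rec, x_edges, z_edges):
    """Length of the straight ray src->rec inside each cell of a regular grid.

    src, rec : (x, z) coordinates of the source and receiver points
    x_edges, z_edges : ascending grid line positions
    Returns {(i, j): length} with i the column index and j the row index.
    """
    src, rec = np.asarray(src, float), np.asarray(rec, float)
    d = rec - src
    # Parametrize the ray as src + t*d, 0 <= t <= 1, and collect every t at
    # which it crosses a vertical or horizontal grid line.
    ts = [0.0, 1.0]
    for edges, comp in ((x_edges, 0), (z_edges, 1)):
        if abs(d[comp]) > 1e-12:
            t = (np.asarray(edges, float) - src[comp]) / d[comp]
            ts.extend(t[(t > 0.0) & (t < 1.0)].tolist())
    ts = np.unique(np.asarray(ts))
    lengths = {}
    for t0, t1 in zip(ts[:-1], ts[1:]):
        mid = src + 0.5 * (t0 + t1) * d               # midpoint of this segment
        i = int(np.searchsorted(x_edges, mid[0]) - 1)  # column containing the midpoint
        j = int(np.searchsorted(z_edges, mid[1]) - 1)  # row containing the midpoint
        seg = (t1 - t0) * float(np.hypot(d[0], d[1]))  # physical segment length
        if seg > 0.0:
            lengths[(i, j)] = lengths.get((i, j), 0.0) + seg
    return lengths
```

For example, with `x_edges = z_edges = np.linspace(0.0, 10.0, 6)`, the call `ray_cell_lengths((0.0, 1.0), (10.0, 7.0), x_edges, z_edges)` gives the nonzero entries of one row of the length matrix A.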
The algorithm, again, sets the pixel array to an average value (f̄), using the same process as the CART algorithm. Then, by analysing each ray individually, the algorithm compares the real data to the calculated pixel data. The mathematical process is as follows: the algorithm uses the function

    T_l = Σ_{pixels i,j cut by ray l} f_ij Δx_ijl     (11)

where l is the ray number, N is the number of pixels intercepted by ray l, Δx_ijl is the length of ray l passing through pixel i,j, and f_ij is the material property at pixel i,j. This function calculates the time summation of the ray using the estimated f_ij values of the pixels. The difference δ = t_l - T_l is then calculated, where t_l is the measured ray sum. The difference is then distributed through the pixels of the ray, with each pixel's addition based on the overall proportion of the ray going through that pixel:

    f_ij^{q+1} = f_ij^q + δ (ΔS_ijl / T_Ll)     (12)

where f_ij^{q+1} is the (q+1)th estimate of f_ij, f_ij^q is the qth estimate of f_ij, T_Ll is the total length of ray l, and ΔS_ijl is the length of ray l through pixel i,j. This is again an unconstrained ART algorithm, as there is no restriction on the magnitude or sign of the difference. The calculation process continues until the stopping criterion is reached or a given number of iterations has been completed. One iteration is accomplished when the calculations for all rays have been completed once. The MART algorithm described here is similar to that of Dines and Lytle (1979) (but not to Peterson et al's (1985) algorithm of the same name), and incorporates all intersected pixels into the reconstruction process. The algorithm, however, does not take into account the inevitable ray bending that takes place in-situ.

3.2.3 The Modified ART Algorithm Using the L∞ Criterion (LART)

The LART algorithm is yet another variation of the basic algebraic reconstruction scheme. This procedure involves the utilization of the L∞ or Chebyshev norm to provide an iterative methodology. A norm is a measure of length or distance that is frequently used in statistics for comparison purposes. This measure of length is one way of determining the goodness of fit of a particular data set, of which the Euclidean length is one example. The most commonly employed norms are those based on the sum of some power of the elements of a vector and are given the name L^n, where n is the power:

    ||e||_n = ( Σ_i |e_i|^n )^(1/n)     (13)
The main difference is that instead of weighting the difference calculated from the time sum and data values according to the length of ray through the pixels intercepted, the difference is merely divided up evenly over the total number of pixels intercepted by the raypath. It is by this method that the contributions of the pixels just barely touched by the ray can be given equal importance to those pixels more significantly intercepted. 3.2.4 Matrix Reconstruction Technique The matrix construction scheme used in this study was first suggested by Dunbar (pers. comm.). In general, solving the inverse problem by means of matrix inversion techniques is avoided due to the large, sparse and generally random distribution of elements in the resulting matrix. Also, the fact that the problem may be (and usually is) underdetermined inhibits most direct inversion-style solutions. If, however, the problem is carefully formulated and a mainframe computer with ample storage and matrix decomposition routines is used, the use of matrices to solve the tomographic reconstruc-25 Figure 10: The L n Norms and the (Af ter Menke, 1984a) Inf luence of O u t l i e r s 26 tion problem can be applied without difficulty. The purpose of using the matrix form here is to examine the efficiency and accuracy of such an algorithm. (The algorithms could also be set up on a personal computer equipped with a math co-processor and sufficient storage. It would probably be necessary to consider 'packed' storage methods to store the matrix.) The algorithm was developed with seismic data in mind, although later, transient groundwater flow data is also used. Therefore the ensuing discussion will be centered on travel times, lengths and inverse velocity values in the formulation of the solution arrays such that: [lengths] [l/velocity] = [time] (15) T h e format of the grid generation routine (discretization of the region) is the same as for C A R T , M A R T and L A R T algorithms. The grid is generated by the calculations made on the average spacing of the receivers and the distance between the boreholes. The extreme grid pixel coordinates are then stored in an array. The generation of the matrix requires that the lengths of ray segments through the pixels be placed in a matrix (Figure 11), such that the rows of the matrix represent the shot (ray) numbers and the columns of the matrix represent the pixel numbers with each entry representing the length of ray through the pixel. The straight line ray segment passes from shot point to receiver point. The length and relative position (in Zy,xn. coordinates) of this ray is known from the coordinate data read into the program. If one changes the coordinate system to a simpler one, say dividing the ray up so that its end points lie at +1 and -1 in this system, the calculation of lengths can be reduced to simple mathematical formulae. Referring to Figure 12, the vertical intercepts ( A L F A ) of the ray z t ; z / can be calculated from the formula: ALFA(K) = (2* X{1) - XS - XR)/{XR-XS) (16) where XR is the receiver position i.n real coordinates; XS is the source position in real coordinates; X(I) is the incremental distance. The horizontal intercepts are calculated using a similar formula. The process is iterative so that what is placed in the A L F A ( K ) array is the relative position ( unsorted, in +1,-1 space ) of the horizontal and vertical intercepts on the line designating the raypath. 
After sorting these values into ascending order and converting the ray coordinates back to real-world coordinates, the ray is divided up into segments representing the pixel interceptions. Calculations using the transformed coordinates are then made to identify the ray lengths, and a simple equation is used to place each length into its proper matrix location. The matrix is then written to a file, where it is stored until the reconstruction procedures require it. A vector of shot times is also written to a separate file; this represents the b vector in Equation 7.

The purpose of this exercise is to use the A matrix (of pixel lengths) and the b vector (of shot times) to determine the x vector (of the pixels' inverse velocity values). This process can be quite complicated if adequate matrix solution routines are not available. However, most mainframe computers have a number of matrix solution routines at their disposal. One of the methods chosen to solve the matrix problem presented here is a constrained L1 linear approximation model developed by Barrodale and Roberts at the University of Victoria, Canada. The process of solution is as follows. Given an m by n system of linear equations

    Ax = b     (17)

the algorithm calculates an L1 solution of (17) subject to p linear equality constraints

    Cx = d     (18)

and q linear inequality constraints

    Ex ≤ f     (19)

i.e., the subroutine determines a column vector x* which minimizes

    ||b - Ax||_1 = Σ_{i=1}^{m} |b_i - A_i x|     (20)

subject to the given constraints (18), (19). The method is completely general in the sense that no restrictions are imposed on the matrices A, C, E. Furthermore, if no vector x satisfying (18) and (19) exists, the subroutine detects this and informs the user that the problem is infeasible (Barrodale and Roberts, 1978).

The L1 approximation is often preferable to the least squares (L2) solution in that the L1 solution ignores noisy or erroneous data more than the L2 solution does. This is a significant advantage in field testing, where random calibration and human errors are commonplace.

Another matrix solution algorithm makes use of the singular value decomposition (SVD). Menke (1984a) gives an excellent discussion of the theoretical background to the SVD. Basically, the method decomposes the m by n matrix A of Equation 7 into the product of three matrices:

    A = U S V^T     (21)

where U is an orthogonal m by m matrix and V is an orthogonal n by n matrix. S is an m by n diagonal matrix whose main diagonal is composed of the singular values of A, σ_i, 1 ≤ i ≤ n, arranged in descending order. In practice, only the first n columns of U are computed; the remaining m - n columns correspond to zero singular values and thus constitute a 'null' space. The rank of the matrix A is determined by the number of non-zero singular values.
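In a modern numerical environment the decomposition and the rank check take only a few lines. The sketch below uses NumPy rather than the mainframe routines described here, builds a small random stand-in for the length matrix purely for illustration, and treats singular values below an assumed tolerance as zero; it also anticipates the pseudo-inverse solution formalized in the next equations.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((20, 25))     # stand-in for a 20-ray by 25-pixel length matrix
b = rng.random(20)           # stand-in for the measured ray sums

U, sing, Vt = np.linalg.svd(A, full_matrices=False)  # A = U S V^T, singular values descending
tol = sing.max() * 1e-10                              # assumed cutoff for "zero" singular values
rank = int((sing > tol).sum())                        # rank = number of non-zero singular values

# Truncated pseudo-inverse solve: reciprocals of the retained singular values, zeros elsewhere.
s_inv = np.where(sing > tol, 1.0 / sing, 0.0)
x = Vt.T @ (s_inv * (U.T @ b))
print(rank, np.linalg.norm(A @ x - b))
```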
Since the matrices U and V are orthogonal, the solution to the matrix equation Ax = b may be written

    x = A† b = V S⁻¹ U^T b     (22)

where A† denotes the pseudo-inverse of A and S⁻¹ is a diagonal matrix defined by

    S⁻¹_ii = 1/σ_i  if σ_i ≠ 0,    S⁻¹_ii = 0  if σ_i = 0     (23)

3.3 Summary and Conclusions

The algorithms chosen for this investigation include CART (the classical algebraic reconstruction technique as proposed by Gordon et al, 1970), MART (a modified algebraic reconstruction technique which is a variation of ART), LART (ART using the L∞ or Chebyshev norm), and matrix reconstruction techniques using CL1 (the constrained L1 linear approximation method) and singular value decomposition (SVD). The mathematics of each solution form is well established and, for the first three methods listed here, involves the iterative solution of a set of linear equations representing raypath sums through the problem domain. The fourth method involves the direct matrix solution of these sets of linear equations.

CHAPTER 4

4.1 Testing Procedures

A series of test patterns was generated in order to 1) test the effectiveness of the various ART and matrix algorithms, and 2) examine the effects of changing key parameters in these algorithms. These patterns, although generally simple in configuration, give a range of the subsurface patterns that might be encountered in geotechnical investigations.

4.1.1 Seismic Considerations

The methodology of tomography can take many forms, ranging from radioactive bombardment of the subsurface to simple seismic experimentation. For the purpose of this study, the seismic mode of investigation will be used. The basic configuration for the seismic studies is to have a string of geophones down a receiving borehole while shots (explosions) are successively taken down a source hole (Figure 13). This produces a rectangular domain through which the reconstruction takes place. The parameter measured by this configuration is the interval transit time, with accuracy on the order of a few milliseconds, along a straight line raypath connecting the shot and receiver positions. After discretizing the domain of interest, the length of the ray through the pixels of the domain can be calculated and a series of ray sums generated of the form

    Δt_k = Σ_{i,j} Δl_ijk / v_ij     (24)

where Δt_k is the interval transit time for ray k, Δl_ijk is the ray length through pixel i,j, and v_ij is the velocity value at pixel i,j. By utilizing the algebraic reconstruction technique, these times and lengths can approximate an inverse velocity (or slowness) profile. This parameter of slowness not only indicates what type of material is in the ground (McCann et al, 1975), but also the approximate density contrasts which are in the subsurface. These density contrasts can be critical for foundation stability calculations.

[Figure 13: Layout of a Cross Hole Seismic Investigation]

The use of seismic radiation has wide application, as the energy produced by a seismic shot is large. The frequency and amplitude attainable by seismic shooting also allow the wave to travel and be detected at greater distance without severe attenuation. This, combined with the fact that the energies of successive seismic shots can be added together, makes seismic testing more desirable from a data acquisition point of view than groundwater pressure testing.
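Synthetic transit times of the form of Equation 24 are straightforward to generate once a velocity grid is assumed. The fragment below is an illustrative forward calculation only: it approximates the line integral of slowness by dense sampling along the straight ray rather than by exact cell-intersection sums, and the grid dimensions and velocities (a 10 m by 10 m section, 1.0 km/s background with a 10 percent faster layer, loosely mirroring the one-layer test pattern) are assumptions.

```python
import numpy as np

def travel_time(src, rec, velocity, x_edges, z_edges, n_samples=2000):
    """Approximate transit time along the straight ray src->rec by sampling
    the slowness field at many points along the ray."""
    src, rec = np.asarray(src, float), np.asarray(rec, float)
    pts = src + np.linspace(0.0, 1.0, n_samples)[:, None] * (rec - src)
    i = np.clip(np.searchsorted(x_edges, pts[:, 0]) - 1, 0, velocity.shape[0] - 1)
    j = np.clip(np.searchsorted(z_edges, pts[:, 1]) - 1, 0, velocity.shape[1] - 1)
    slowness = 1.0 / velocity[i, j]
    return slowness.mean() * float(np.linalg.norm(rec - src))  # mean slowness times path length

# Assumed 10 m x 10 m section, 5 x 5 cells, one 10 percent high-velocity layer.
x_edges = z_edges = np.linspace(0.0, 10.0, 6)
v = np.full((5, 5), 1000.0)          # background velocity, m/s
v[:, 2:4] = 1100.0                   # horizontal high-velocity layer
sources = [(0.0, z) for z in np.arange(1.0, 10.0, 2.0)]
receivers = [(10.0, z) for z in np.arange(1.0, 10.0, 2.0)]
times = np.array([[travel_time(s, r, v, x_edges, z_edges) for r in receivers]
                  for s in sources])  # one transit time per shot/receiver pair
```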
4.1.2 Groundwater Considerations

Groundwater pulse testing has been used for many years (Walter and Thompson, 1982; Prosser, 1981), yet it is often only attempted between packers in a single borehole. Crosshole modes of testing have been proposed (Black and Kipp, 1981) and have been tried in some situations (Hsieh et al, 1985a and 1985b), but there remains the problem of achieving significant pressure changes which can be detected at distance over a reasonably short period of time. Slug tests require tens of minutes for the pressure front to travel a few meters. An additional problem with crosshole groundwater testing is that the theory for multilayer domains is complex, as can be seen in the analogous heat conduction formulae given by Mikhailov and Ozisik (1984) and Luikov (1968). While conductivity tensors can be calculated (Hsieh et al, 1985a,b; Way and McKee, 1982), the identification of multiple layers is difficult. The tomographic technique can serve to overcome the difficulty of multilayer analysis.

The ideal configuration for a cross-hole groundwater test would be very similar to that of the seismic array, that is, a series of shot points down the source hole and a series of receiver points down the response hole. In this case, however, the shot points would be the slotted ends of piezometers through which a pressure pulse (by means of a mechanically induced hydraulic force or simply a slug of water) can be transferred to the geologic medium, and the receivers would be a series of pressure transducers with accuracy on the order of 0.001 psi (0.007 kPa) (Figure 14). The shot and receiver points would be connected together so that the time from pressure input to response detection is obtained. A set head or pressure deviation from normal groundwater pressures (independent of tidal fluctuations and established on the basis of hydrogeologic records) would be chosen to indicate the arrival of the pressure pulse wave; alternatively, if the complete pressure history is obtained from the receivers, computerized filter processing could be undertaken to determine the initial arrival. These arrival times, however they are obtained, would then be used in the tomographic reconstruction.

[Figure 14: Layout of a Cross Hole Transient Groundwater Investigation]

The test, as described, is a transient test and thus is governed by the equation:

    ∇²h = (S_s / K) ∂h/∂t     (25)

(Bear, 1972; as given by Black and Kipp, 1981) where h is the hydraulic head (in length (L) units), S_s is the specific storage (with units of 1/L) and K is the hydraulic conductivity (units L/T) of the medium. The basic property described by S_s/K is 1/D, where D is the hydraulic diffusivity of the medium, with units of L²/T.

It is desired to obtain the units of the unknowns x_i in the matrix Equation 7. Consider a one-dimensional form of Equation 25 and write it in finite difference form:

    (h_{i-1} - 2h_i + h_{i+1}) / Δx² = (1/D)(h_i^{t+Δt} - h_i^t) / Δt

Letting h_x = h_{i-1} - 2h_i + h_{i+1} and h_t = h_i^{t+Δt} - h_i^t, the equation becomes

    Δx (h_t / h_x)(Δx / D) = Δt

so that, by analogy with Equation 7 (where the entries of A are the ray lengths Δx and b contains the measured times Δt), the unknowns x_i have units of T/L, essentially an inverse 'pixel' diffusivity per unit length along the raypath. Henceforth, the term 1/D' will describe this material property value obtained by tomographic reconstruction, which is indicative of the diffusivity variation of the medium.
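A crude way to see how such arrival-time data behave is to march the one-dimensional form of Equation 25 forward in time and record when the head change at an observation point first exceeds the chosen threshold. The sketch below is an illustrative explicit finite-difference calculation under assumed parameter values (diffusivity, pulse size, spacing); it is not the drawdown or finite element analysis used for the actual test data.

```python
import numpy as np

def first_arrival(D, dx=0.5, length=10.0, pulse=3.0, threshold=0.005,
                  obs_x=10.0, t_max=3600.0):
    """March dh/dt = D d2h/dx2 explicitly and return the time (s) at which the
    head change at obs_x first exceeds `threshold` (m).  D is in m^2/s."""
    n = int(length / dx) + 1
    h = np.zeros(n)
    h[0] = pulse                       # constant-head pulse at the source end
    dt = 0.4 * dx * dx / D             # explicit stability limit (coefficient <= 0.5)
    i_obs = int(round(obs_x / dx))
    t = 0.0
    while t < t_max:
        h[1:-1] += D * dt / dx**2 * (h[2:] - 2.0 * h[1:-1] + h[:-2])
        h[-1] = h[-2]                  # no-flow boundary at the far end
        t += dt
        if h[i_obs] >= threshold:
            return t
    return np.inf

print(first_arrival(D=0.1))            # assumed diffusivity of 0.1 m^2/s
```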
Individual shot/receiver measurements are time consuming and tedious to perform, thus the development of a multiple reading pressure transducer is necessary in order that this type of test be viable. A multiple port piezometer is presently available, and a string of electronics capable of measuring pressures at several levels in a single borehole is under development. The data obtained from the source/receiver pressure monitors (i.e., the travel time) can then be input into the ART and matrix algorithms for reconstruction. In this way the multilayer mathematical analysis is replaced by the iterative processing of data.

It should be noted that the travel time is not the only parameter which could be measured. The attenuation of the pressure wave could also be monitored and an attenuation profile generated. This attenuation method is similar to those already performed in the oil industry (Hirasaki, 1974; Kamal, 1979; Johnson et al, 1966) and can be used in a number of ways to describe reservoir heterogeneity. Also, the entire pressure response history may provide valuable information as to hydrologic parameters. While a cross-hole pressure test has admittedly more problems than seismic analysis, it does provide valuable information concerning the hydraulic properties of the subsurface material which may be critical for contamination migration analysis or aquifer evaluation. A possible scenario for using cross-hole pressure tests would be to confirm layering in an earth-filled dam, where the proper placement of graded material is essential for stability and seepage control.

4.2 Test Configurations

A number of test patterns are used to analyse the tomographic algorithms. Both seismic and groundwater applications are used. The test patterns are generally simple in form, and are meant to demonstrate the ability of the algorithms to reconstruct accurate profiles from data generated using different forms of source input.

4.2.1 Seismic

The five seismic patterns developed for this study are shown in Figures 15 to 19. These patterns were used to distinguish horizontal versus vertical resolution capabilities, the effects of low and high velocity contrasts (10 percent and 100 percent in this case), and the ability to predict whether no anomalies or multiple anomalies exist in the section. All data sets for these test patterns were generated by solving the forward problem using discretized pixel lengths and the given velocity values of the pixels.

4.2.2 Groundwater

Two of the groundwater test patterns developed for this study are shown in Figures 20 and 21. They range from a homogeneous, isotropic case to that of layered systems. The homogeneous test pattern data set was generated using the Jacob method of pumping test drawdown analysis (Freeze and Cherry, 1979). A set of hydrogeologic parameters was assumed for the section, and calculations were made as to the pressure response at an observation well for a given rate of outflow at the source hole. The distances between source and observation positions in the wells correspond to the raypath lengths of Figure 11. When the pressure drop reached 0.5 centimeters (which corresponds to an approximately 0.007 psi change in total head), it was assumed that the pressure wave had arrived and a time of arrival was calculated. These arrivals were then used in the reconstruction algorithms.
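One way the Jacob calculation can be inverted for an arrival time is sketched below. The hydrogeologic parameters are placeholders rather than the values assumed in the thesis, and it should be kept in mind that the Jacob approximation is strictly valid only at late time (small u), so picks at very small drawdowns are at the limit of that assumption.

    import numpy as np

    def jacob_arrival_time(r, Q, T, S, s_detect=0.005):
        # Time (s) at which the Jacob drawdown at radius r (m) reaches s_detect (m)
        # for pumping rate Q (m^3/s), transmissivity T (m^2/s) and storativity S.
        # Jacob approximation:  s = (2.3*Q / (4*pi*T)) * log10(2.25*T*t / (r**2 * S))
        return (r**2 * S / (2.25 * T)) * 10.0 ** (4.0 * np.pi * T * s_detect / (2.3 * Q))

    # Hypothetical section parameters (illustrative only)
    T, S, Q = 1.0e-4, 1.0e-4, 1.0e-4
    for r in [2.0, 5.0, 10.0, 10.2]:        # source-receiver separations (raypath lengths, m)
        print(r, jacob_arrival_time(r, Q, T, S))

Because the section is homogeneous, only the separation r changes from ray to ray, which is why a simple closed-form drawdown solution is sufficient for this particular data set.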
The other groundwater test patterns used in this study were analysed by means of a modified two-dimensional finite element program developed by Wang and Anderson (1982), and later by a two-dimensional finite element program developed by Dunbar (1987).

Figure 15: Seismic Test Pattern A (central cross anomaly; V1 = 1.0 km/s, V2 = 1.1 x V1; 10 m wide section)
Figure 16: Seismic Test Pattern B (V1 = 1.0 km/s, V2 = 2.0 x V1)
Figure 17: Homogeneous Seismic Test Pattern (V1 = 1.0 km/s)
Figure 18: One Layer Seismic Test Pattern (V1 = V3 = 1.0 km/s, V2 = 1.1 x V1)
Figure 19: Two Layer Seismic Test Pattern (V1 = V3 = 1.0 km/s, V2 = 1.1 x V1)
Figure 20: Homogeneous Groundwater Test Pattern (S = storativity, T = transmissivity)
Figure 21: One Layer Groundwater Test Pattern (S = storativity, T = transmissivity; tenfold S/T contrast between layer and background)

The sections were discretized using a predetermined mesh size of linear, isoparametric elements. The boundary conditions were established as no flow boundaries on the upper and lower surfaces of the grid. Initially, all nodes were set to a single head value, and the head values at nodes where the pressure pulses were induced were kept constant with time (constant head boundaries). Pressure head changes on the order of several meters were then placed at set nodes down the source hole, and the corresponding pressure changes over time were monitored at the receiver hole nodes. Again, a 0.5 cm change in total head was assumed to indicate the first arrival. In the event that the ideal receiver location (always assumed to be at the center of the pixel's side) did not coincide with a nodal point, a linear interpolation between adjacent nodes was used to estimate the time of the first arrival. Numerous test patterns were analysed in this fashion, with the size and shape of anomalies being changed, as well as the fineness of the discretized mesh used. While other more mathematically complex solutions for multilayer systems are available, the finite element code was found to be the most efficient way of obtaining an approximate solution to these transient flow problems. Note that the analytical and numerical test cases (the drawdown analyses and the finite element analyses) are carried out in a different manner, yet both produce the necessary time of arrival data.

4.2.3 Parameters Varied

The effects of differing damping levels (for the iterative techniques) and different intensities of noise were analysed by varying these parameters in the tomographic tests. The iterative damping parameter was varied on a simple percentage basis, with 0.30, 0.50, 0.70, and 0.90 levels being used. The damping parameter was applied to the calculated ray sum during each iteration of the algorithms. All three of the algebraic reconstruction algorithms were used to analyse the effects of damping parameter variation. Gaussian noise at levels of 5 percent and 10 percent was added to the calculated data by means of a uniform random number generator (RNG) and Gaussian noise calculation subroutine, as sketched below.
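The noise-addition step can be sketched as follows. The sketch mirrors the uniform-RNG-plus-subroutine approach described above by building normal deviates from uniform numbers (Box-Muller); it is illustrative only, and the seed and function names are assumptions rather than the thesis routines.

    import random
    import numpy as np

    def add_gaussian_noise(times, level, seed=0):
        # Return a copy of `times` with zero-mean Gaussian noise added; `level`
        # is the noise standard deviation expressed as a fraction of each value.
        rng = random.Random(seed)
        noisy = np.array(times, dtype=float)
        for i in range(noisy.size):
            u1, u2 = 1.0 - rng.random(), rng.random()
            z = np.sqrt(-2.0 * np.log(u1)) * np.cos(2.0 * np.pi * u2)   # standard normal
            noisy[i] += level * noisy[i] * z
        return noisy

    # e.g. 5 and 10 percent noise on a synthetic travel-time vector t
    # t_05 = add_gaussian_noise(t, 0.05)
    # t_10 = add_gaussian_noise(t, 0.10)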
Again, all three of the algebraic reconstruction algorithms, as well as the matrix algorithms, were used to analyse the effects of Gaussian noise addition.

4.3 Summary and Conclusions

Seismic signals offer the best time of arrival data for use in the tomographic algorithms. Pressure pulse data for groundwater analyses, while not as sharp or well defined as seismic and not as well supported in terms of available equipment, nevertheless offers hydrogeologic information that would not otherwise be attained by geophysics.

CHAPTER 5

5.1 Applications of ART

The three iterative algebraic reconstruction techniques will be examined in this section. Here, the discussion will review the common stopping and resolution criteria which are used in the iterative techniques, and then the results of tests run using these techniques will be examined. This section will close with an analysis of the test results and general conclusions concerning the iterative reconstruction methods.

5.2 Stopping Criteria and Image Resolution for the Iterative Techniques

One of, if not the most important aspects of iterative tomographic reconstruction is when to stop the iterative process. Ideally, the process should stop when the exact or near-exact picture of the domain is attained. Unfortunately, data noise, incomplete coverage and a subsurface profile which is usually unknown prior to the computer simulation make such an occurrence highly unlikely. Many methods have been developed in order to try to overcome these problems. Specialized techniques of stopping the iterations when the ray sums are close to those predicted by the data (and from what is expected to be at a given site), and of aiding the program to converge to a more accurate picture of the region, are key facets of the tomographic method.

5.2.1 Stopping Criteria

The stopping criterion is a name given to a process which halts the iterative reconstruction when a condition (or conditions) is met. This condition is set by the programmer and is chosen so that the iterations will stop when the reconstructed picture is close to that of the region being investigated. There have been many stopping criteria proposed in recent years, most of which are based on a least squares (L2 norm) minimization of the error between the estimated and computed pictures. The three most common stopping criteria found in the literature are those originally stated in the paper by Gordon, Bender and Herman (1970). These criteria are:

1) the discrepancy between the calculated and measured data,

D^n = [ (1/N) SUM_(k=1..N) (Y_k - Y_k^n)^2 ]^(1/2)     (26)

where N is the number of paths, Y_k is the measured parameter, Y_k^n is the nth estimate of the measured parameter, and n is the iteration number;

2) the variance,

V^n = (1/I) SUM_(i=1..I) (X_i^n - Xbar^n)^2     (27)

where I is the number of cells, Xbar^n is the mean slowness of the field, and X_i^n is the nth estimate of cell slowness; and

3) the entropy,

S^n = - [ SUM_(i=1..I) (X_i^n / (I Xbar^n)) ln(X_i^n / (I Xbar^n)) ] / ln I     (28)

where I is the number of cells, Xbar^n is the mean slowness of the field, and X_i^n is the nth estimate of the pixel slowness (Peterson et al, 1985; Radcliff and Balanis, 1978).

Other stopping criteria do exist. For example, those given by Dines and Lytle (1979) and Herman et al (1973) are sometimes used, yet they are only variations of the basic least squares methodology. Peterson et al (1985) have found that D^n will tend to zero, V^n will tend to a minimum, and S^n will tend to a maximum with increasing values of n, provided the set of linear equations in the reconstruction is complete and consistent.
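As a concrete reference for how these three quantities might be evaluated at the end of each iteration, a short sketch follows. It assumes the forms of Equations 26 to 28 as reconstructed above; the exact normalizations used in the thesis programs may differ.

    import numpy as np

    def stopping_criteria(Y_meas, Y_calc, X_est):
        # Discrepancy D, variance V and entropy S for one ART iteration.
        # Y_meas / Y_calc are the measured and calculated ray sums; X_est are
        # the current pixel slowness estimates.
        Y_meas = np.asarray(Y_meas, dtype=float)
        Y_calc = np.asarray(Y_calc, dtype=float)
        N = Y_meas.size
        D = np.sqrt(np.sum((Y_meas - Y_calc) ** 2) / N)

        X = np.asarray(X_est, dtype=float)
        I = X.size
        Xbar = X.mean()
        V = np.sum((X - Xbar) ** 2) / I

        p = X / (I * Xbar)                       # fractional share of total slowness
        p = np.clip(p, 1e-30, None)              # guard against log(0)
        S = -np.sum(p * np.log(p)) / np.log(I)   # normalized so S = 1 for a uniform field
        return D, V, S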
Unfortunately, none of the criteria can measure how close the picture is to that found in the ground, only how close the calculated picture is to that described by the data. For most cross-hole problems, the data obtained may be incomplete and/or inconsistent due to gaps in the coverage or erroneous readings. The stopping criterion chosen for this investigation is the discrepancy between the calculated and measured data based on shot-path summations. This criterion was chosen for several reasons. T h e first, and probably the most important reason is that the other criteria are based on the differ-ence between the cells or pixels of the picture and the mean value of slowness of that picture. This may be applicable for configurations with complete (360 degree) coverage, however the incomplete (27) (28) 49 coverage of cross-hole tomography gives zones in which pixels are intercepted by fewer than two raypaths (Menke, 1984b). This results in zones that are not affected by the iterative reconstruc-tion process and are, for all intended purposes, stagnant. Comparing the pixels in these zones to those of a mean slowness value would result in erroneous or premature stoppages. The discrepancy calculation, on the other hand, deals with the ray paths themselves, and does not deal with pixels outside of those raypaths. Thus, criterion convergence in the zones that are sampled is much more preferable to quasi-convergence through zones that are unknown or unsampled. The second reason for choosing the discrepancy criterion is that (as found by Peterson et al, 1985) the Dn value will tend to zero if the set of linear equations are complete and consistent, whereas V " and Sn tend to minima and maxima respectively. Due to inherent oscillation in the classical A R T solution which has been noted by many authors, the maxima/minima stopping calculation can be thrown off by local rather than global maxima and minima while attempting to converge. This can occur when changes in successive stopping criterion calculations are on the order of machine error accuracy. D" tends to zero and thus minor oscillation will not affect the overall tendancy of this function. A third, and perhaps more minor point is that the discrepancy (Dn) is more easily and directly incorporated in the reconstruction algorithm than the other criteria. The primary disadvantage of using this stopping criterion is that while the criteria given in Equations 26, 27 and 28 will eventually converge to a solution, the criteria have also been found to diverge after this point has been reached which results in solution ambiguity (Herman et al, 1973). Also, because of incomplete coverage resulting in an underdetermined problem, there is the possibility that the criteria-meeting solution will^bear no relation to the in-situ conditions. It is because of these problems that a more reliable stopping criteria must be developed. This author believes that a graphics in/graphics out method of reliability checking is a plausible solution. What is meant by graphics in/graphics out is the utilization of a graphics monitor coupled to the microcomputer to produce a visual image of the region and pixels. After each iteration, the image would be updated and the programmer would see immediately how well the picture converges, and can stop the iterations once he/she feels confident with the reconstructed region illustrated. 
There are problems with this process due to the time taken after each iteration to examine the updated region, however, a user controlled monitoring of the process is more appealing than a computer 50 calculated stoppage. 5.2.2 Image Clarity In tomography, the basic question that one must ask when given a reconstructed picture is "is this an accurate representation of the region?" If the problem is overdetermined or evenly determined with very accurate data, then the answer may be yes. However, for an underdetermined problem, or those problems with noisy data or for regions with high contrast or unusually shaped objects within them, the answer could be no. In fact, the reconstruction may bear no resemblance to what is in the region itself. Thus, the question arises as to how the clarity of the reconstructed image may be improved. One of the easiest and most effective methods to improve resolution is to obtain more complete coverage. This is especially true for cross-hole studies. Menke (1984b) examined the experimental geometry of cross-borehole tomography and concluded that as boreholes do not surround the region of study, the data collected is inherently incomplete (Figure 22). In fact, only the central area of the study domain of Figure 22 is more or less completely covered. The addition of more shot and receiver points down the boreholes as well as on the surface (McMechan, 1983; Ivansson, 1986) or in adits and tunnels (Mason, 1981) can serve to increase the area of coverage and thus provide additional data in which to resolve the media under study. A second method to improve resolution is to use all available (whether known or assumed) data. A priori information serves to either reduce the number of unknown pixels or serves to limit the range of the pixels' parameter variation (Menke, 1984a). A third method is to set the initial grid to a configuration which is anticipated to be close to that of the true picture. In this way, convergence is quick and the possibility of excessive iterative oscillation in the solution is reduced. Caution must be taken, however, as in setting the grid to a preconceived value, this grid configuration may bear no resemblance to the true picture and may result in a completely useless reconstruction. Thus, a good deal of thought is required to ensure a reasonable starting condition is in place, usually by the assumption of in-situ conditions. The oscillation of the iterative algebraic solution noted previously is the result of the full back projection of the ray sum differences over the pixel arrays. Many methods of damping the back projection error have been proposed to alleviate this problem, however optimum relaxation 51 ® S O U R C E / R E C E I V E R • 0-Z K A V S 2-8 R A Y S » * > 8 R A Y S Figure 2 2 : V a r i a t i o n of Ray Coverage i n the Cross Hole Test C o n f i g u r a t i o n 52 (damping) schemes cannot be calculated mathematically so they must be obtained experimentally (Peterson et al, 1985). In this study, a very simple percentage weighting scheme was applied to the ray sum differences. This percentage scheme was also varied to determine which value provided the best resolution and computational efficiency. A final area which improves resolution is the elimination of errors. This involves an increase in accuracy of the measuring instruments, the proper calibration of all components and a tighter control over the data gathering process. 
While most error can be eliminated in this manner, there is always the ever present random error and noise. These can never be eliminated completely, so they must be limited as much as possible. A very good discussion of measurement error can be found in a paper by Kowalski (1977). 5.2.3 Summary and Conclusions A number of methods to improve the tomographic reconstruction process have been discussed. A n optimum stopping criterion is critical for a reasonable tomographic image to be produced and most are based on some form of a least squares criterion. While discrepancy (D"'), variance ( V s ) , and entropy (Sn) criteria are the most common criteria put forward in the literature, the discrepancy (D") criterion was chosen for this investigation as it deals specifically with the ray sums, it tends to zero for complete and consistent linear equations, and is easy to implement. Ideally, however, a visual representation of the region updated after each iteration is felt to be the optimum stopping criterion as it allows the programmer to use his/her judgement as to which picture or iteration is best. Methods to improve image resolution are many. The easiest method is to increase domain coverage, but in cross-hole studies, complete coverage is often impossible. Other methods include using all known or assumed data to reduce the number of unknowns in the equations, setting the initial grid configuration close to the actual or expected in-situ configuration, weighting the back projection of error to reduce solution oscillation and the elimination of data errors and noise. 5.3 Test Results Computer test runs were conducted on the previously described patterns in order to evaluate the resolution properties of the iterative programs under various conditions. The test patterns and changed parameters used in these tests represent conditions that might exist in-situ. The algorithms will be evaluated on a pattern by pattern basis with all changes in parameters for that particular pattern included in the discussion. 5.3.1 Seismic Patterns Test Pattern A Test pattern A shown in Figure 15 was used to evaluate the horizontal and vertical resolution capabilities of the algorithms. The central cross is completely enclosed by the region and the velocity contrast is relatively low (10 percent), hence the straight line raypath assumption is valid and the pixels in the region of interest are crossed by more than two raypaths each. Thus, the resolving power of the algorithms themselves is the critical factor. C A R T , M A R T and L A R T algorithms were tested by this pattern with variation of noise level, starting pattern and damping parameter. In the field, it is generally not known what soil profile exists in the ground prior to exploration. Even with exploration, the inherent variability of geology creates uncertainty in extrapolation practices. The advantage of having the test pattern or profile known beforehand (as is the case here) is that the known profile can be directly compared with the calculated profile. In terms of accuracy, all algorithms correctly placed the cross in the domain but, as stated previously, failed to fully define the cross itself. Figures 23, 24 and 25 show the results of the M A R T , L A R T - and C A R T algorithms for 0 percent added noise and 50 percent damping. Note that the L A R T algorithm, again, provided the best picture in terms of closeness to the expected profile and known inverse velocity values. 
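For reference, the damped back-projection step shared by these runs (the proportional damping described in Section 4.2.3) can be sketched as follows. This is an illustrative additive form of the update with a discrepancy-based stop, not the thesis code, and the MART and LART variants modify how the residual is applied; the array names A, t and x0 are assumptions carried over from the earlier forward-problem sketch.

    import numpy as np

    def damped_art(A, t, x0, damping=0.5, tol=1e-4, max_iter=15):
        # For each ray, the residual between measured and computed ray sums is
        # back-projected over the pixels the ray crosses, scaled by `damping`.
        # Iterations stop when the discrepancy D drops below `tol`, or at max_iter.
        x = np.array(x0, dtype=float)
        N = A.shape[0]
        D = np.inf
        for it in range(max_iter):
            for k in range(N):                    # one pass over all rays
                a = A[k]
                denom = a @ a
                if denom == 0.0:
                    continue
                residual = t[k] - a @ x           # measured minus computed ray sum
                x += damping * residual * a / denom
            D = np.sqrt(np.sum((t - A @ x) ** 2) / N)
            if D < tol:
                break
        return x, D

    # x_rec, D = damped_art(A, t, x0=np.full(A.shape[1], slowness.mean()))

Starting x0 at an average slowness value, as in the commented call above, reflects the use of a priori information discussed earlier; starting from zero slows convergence markedly, as the tests below show.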
When damping factors were changed for this test pattern, little change was made in the overall picture resolution, although some minor improvements were seen at lower values of damping factor (Figure 26). Decreasing the damping factor gave an expected increase in discrepancy convergence (Figure 27) as less error was being back projected at each iteration, however only minor changes in execution time were seen. Lower damping factors gave a marginally higher execution time while only marginally increasing the picture resolution. This would seem to indicate that the damping factor is not critical for pictures consisting of a few pixels. However, larger arrays have been shown 54 Figure 23: Test P a t t e r n A as Reconstructed by the MART A l g o r i t h m . 50% Damping, 0% Noise Added F igu re 24: Test P a t t e r n A as Reconstructed by the LART A l g o r i t h m . 50% Damping, 0% Noise Added 55 Figure 25: Test Pa t t e rn A as Reconstructed by the CART A l g o r i t h m . 50% Damping, 0% Noise Added 56 m 0.5 Qf 0.3 0.2 0.0 P A M P J N & X 0.5 O 0.7 D 0.9 ITERATION Figure 27: Test P a t t e r n A as Reconst ructed by the MART A l g o r i t h m Showing the Inf luence of Damping Parameters on S o l u t i o n Convergence. 57 \ (Peterson et al, 1985) to be greatly affected by the size and form of the damping parameter. Herman et al (1973) suggested that a damping parameter set at 50 percent or less would be adequate for most problems. Based on the results of this experiment, it would seem that this is a reasonable assumption, although analyses should be carried out at each site to determine the optimum damping parameter (Peterson et al, 1985). In terms of convergence to the known solution, all the algorithms identified the horizontal section of the anomaly with reasonable accuracy, yet none were able to correctly identify both vertical portions. In fact, only the L A R T algorithm was able to periodically identify the upper and lower portions of the cross, thus indicating a general inability to resolve vertical anomalies. A cross section taken horizontally along the line of pixels which should have shown the upper vertical portion of the cross indicates that the algorithms smeared the anomaly over the picture (Figure 28). This phenomenon has been well documented by McMechan (1983) and Herman et al (1973) and it is the result of incomplete coverage of the domain due to the cross-hole configuration. McMechan explains that due to the dominant horizontal orientation of the raypaths, structures which lie parallel to that direction will be better resolved than those lying perpendicular to it due the ability of some rays to skim the anomaly's boundary (Figure 29). By skimming the boundary of an anomaly, the maximum contrast between materials is achieved, and as such, well defined contrasts in response values are obtained. Well defined response contrasts lead to a better defined picture. Thus, vertical structures are less clearly discerned by strictly horizontal or nearly horizontal raypaths and will only be properly positioned if some a priori information is available or steeply dipping raypaths cross the anomaly.,- Figure 30 shows the effects of this skimming and smearing even in the horizontal section of the cross where its ends are attenuated to the boundary of the section. Differing noise levels produced an expected response. 
When both 5 percent and 10 percent Gaussian noise was added to the data, there was a corresponding increase in the number of itera-tions to convergence and a corresponding decrease in picture resolution. Figures 31 and 32 show representative results of the L A R T algorithm with added noise. These analyses indicate that high quality data is required for an accurate reconstruction to occur. The noisier or the worse the data, the less chance there is that the reconstruction techniques will converge to the 'correct' picture. 58 PlXBL NUMdBRfNQ / 2 3 4 5 6 7 8 /0 // IX 13 /4 /5 lb a IB 19 20 21 22 23 24 25 /.con / 7X£2 . NUMBER Figure 28: Cross Sec t ion Through Test P a t t e r n A as Reconst ructed by the LART A l g o r i t h m , Demonstrating the Smearing of a V e r t i c a l Anomaly. 59 Figure 29: Cross Hole C o n f i g u r a t i o n s Have the C h a r a c t e r i s t i c tha t Raypaths Tend to Skim H o r i z o n t a l Boundar ies . Thus, Th i s C o n f i g u r a t i o n i s Able to Resolve Such Boundaries More A c c u r a t e l y . 60 PIXBL 1 2 3 4 5 6 7 8 <? /O // IZ 13 /4 /5 lb 17 18 19 20 2/ 22 23 24 25 /CCh A COMPUTED r A C T U A L /3 /4 / 5 P/XE~L NUMBER Figure 30: Cross S e c t i o n Through Test Pa t t e rn A as Reconst ructed by the LART A l g o r i t h m . Demonstrat ing the Smearing on a H o r i z o n t a l Anomaly. 61 Figure 31: Test P a t t e r n A as Reconst ructed by the LART A l g o r i t h m . 50% Damping, 5% Gaussian Noise Added F igure 32: Test P a t t e r n A as Reconstructed by the LART A l g o r i t h m . 50% Damping, 10% Gauss ian Noise Added 62 Changing the initial values of the pixels for this pattern provided the most dramatic change. Wi th the initial matrix set to zero rather than some average pixel value, convergence to a solution was extremely slow (Figure 33) and where the convergence criterion was reached, the computed picture bore no resemblence to the known profile. This shows that the closer the initial recon-struction is to the actual solution, the more rapid the convergence and the better the resolution. This also shows that the use of a priori information is extremely valuable in that setting the initial pattern to an average (or estimated) value, the accuracy of the reconstruction was improved. Homogeneous Pattern T h e homogeneous test pattern (Figure 17) was developed to see if the programs could ad-equately determine that a homogeneous, isotropic medium exists and if not, what "phantoms" might it place in the domain. In this sense, the word phantom is used to describe images (anoma-lous velocity zones) that are known not to exist. Figures 34 to 37 show the results of the M A R T , L A R T and C A R T algorithms for 0 percent noise and 50 percent damping. While the circumferential pixels surrounding the central domain show deviation from a uniform slowness value, those pixels which are in the central region do exhibit a single, unique value. The fact that the surrounding pixels deviate from the central values is, again, a product of the cross-hole configuration. The outermost pixels have only one or two raypaths which pass through them whereas the central region has many crisscrossing raypaths. Thus, central regions have higher resolution due to this redundancy and outer regions have lower resolution. This phenomenon has been studied in detail by Menke (1984b). Even though the outer pixels indicate that phantoms might exist, the more completely covered central region shows that the domain is, in fact, uniform. 
Different damping factors had little effect on the overall picture resolution. Central regions of the reconstructed picture stayed relatively homogeneous, whereas outer (circumferential) pixels still exhibited phantom behaviour. The only difference observed by changing damping factors was a slight decrease in convergence time for both C A R T and L A R T algorithms. The M A R T algorithm had slow convergence characteristics and poor resolution in all cases. Noise affected the pattern in the expected way. Increased noise levels had a distorting effect on the pattern obtained with highest noise levels producing a slightly inhomogeneous slowness profile. + / 2 3 4 5 6 7 2 9 to H 17. 1$ i+ 15 lb n ITERATION Figure 33: LART A l g o r i t h m and Test P a t t e r n A w i t h 50% Damping and I n i t i a l M a t r i x Set to 0.0 64 7 V (ms/m) > 1.30 > 1.20 2 i - i o > i .oo 1.00 Figure 3 4 : O r i g i n a l Homogeneous Test P a t t e r n for the Seismic Study 65 Figure 36: Homogeneous Seismic Test P a t t e r n as Reconstructed by the LART A l g o r i t h m . 50% Damping, 0% Noise Added F igure 37: Homogeneous Seismic Test P a t t e r n as Reconstructed by the CART A l g o r i t h m . 50% Damping, 0% Noise Added 66 One Layer and Two Layer Stratigraphy The one and two layer profiles were designed to test the algorithms' ability to clearly determine stratigraphic layering with only a 10 percent velocity variation between, the layers. The patterns used in these tests are shown in Figures 18 and 19. A l l algorithms exhibited an excellent ability to discern the horizontal stratigraphic layering. With 50 percent damping and no noise addition, all algorithms were able to locate and accurately extrapolate the layered sequences, as seen in Figures 38 and 39. One problem seismic studies have with layered media is the fact that if the velocity of the medium does not increase downwards (as with increased pressure, the density increases and void space decreases), there tends to be a masking effect where the lower low velocity layer is 'averaged out' by the enclosing higher velocity layers. The result is that anomalously low velocity layers are sometimes missed. A similar phenomenon can be seen in the results of computer runs with the one and two layer cases. The layering represents the higher velocity media, hence there are zones where masking might occur. There is a zone in the upper and lower central portion of the one layer picture in Figures 40 and 41 that tends (numerically) to have a higher velocity (lower inverse velocity) value than the surrounding pixels. In the two layer case (Figures 42 and 43) there exists a zone between the two layers which shows an averaging taking place and is serving to eliminate the interceding low velocity layer. Fortunately, this effect is not so well pronounced as to totally obscure the velocity contrast at this point. It should be noted here that the edge effects described in the previous section are more subdued In the layered cases. While the circumferential pixels still exhibit some deviation from the expected picture, they are, for the most part, in line with the anticipated velocities of the layers. In terms of convergence (the speed at which the discrepancy value goes to zero), the layered data sets provided the algorithms with the most rapid convergence characteristics yet observed. 
In all tests of layered media, the average number of iterations to converge was 7, and usually the L A R T algorithm proved to converge with the greatest speed, requiring the fewest number of iterations. When the damping parameter was changed for a given algorithm, the resulting picture was changed little, yet the speed of convergence was affected. By decreasing the damping parameter, the algorithms, in general, took slightly less time to converge to a solution. After a level of 0.5 (or 50 percent damping), however, the convergence speed was unchanged and perhaps even increased 67 Figure 3 9 : Two Layer Seismic Test P a t t e r n as Reconst ructed by the MART A l g o r i t h m . 50% Damping, 0% Noise Added 68 Figure 40: One Layer Seismic Test P a t t e r n as Reconstructed by the LART A l g o r i t h m . 50% Damping, 0% Noise Added 69 Figure 42: Two Layer Sei Reconstructed 50% Damping, smic Test P a t t e r n as by the LART A l g o r i t h m . 0% Noise Added F igure 43: Two Layer Seismic Test P a t t e r n as Reconst ructed by the CART A l g o r i t h m . 50% Damping, 0% Noise Added 70 slightly. Nonetheless, the resulting cross section, in all cases, clearly showed the expected layering. The addition of noise increased the convergence time of the algorithms and also decreased the sharpness of the profile. Figure 44 demonstrates this variation. Increasing the level of noise gave progressively worse reconstructions in terms of picture acuity. Test Pattern B The pattern used for this set of data is exactly the same as that used for test pattern A except that in this case, the velocity contrast between the media is 100 percent (Figure 16). The tests run with the algorithms showed many of the same features as with test pattern A . Vertical anomalies were poorly defined and and horizontal anomalies were generally well defined. The key contrast here is the level of smearing that takes place. With the higher velocity contrast, the smearing or masking of the vertical anomaly is extensive with the entire anomalous pixel level becoming relatively uniform. The C A R T algorithm seemed to be the best at picking out the vertical anomaly created by the cross (Figure 45), whereas both the M A R T and L A R T algorithms failed to clearly distinguish the cross' position (Figure 46). The decrease of the damping parameter did not help (in this case) in resolving the cross and neither did the addition of noise. The reason for the lack of performance of the algorithms even with stronger or clearer material differences again relates to the orientaion of the anomaly's boundary. The vertical anomaly, even though of a high material contrast, is not well defined because of the algorithm's inability to place the slowness anomaly at its proper position in the grid. Again, this is a result of the incomplete scanning geometry of the cross-hole configuration. 5.3.2 Groundwater Patterns Homogeneous T h e data set used to simulate a homogeneous, isotropic medium for the groundwater test pat-tern was obtained by using Jacob's transient well test procedures. A s the section used here is homogeneous, only the distances between source and receiver were varied to produce the homo-geneous data set. This data was organized into a file and then used to evaluate the effectiveness of the reconstruction algorithms on groundwater flow data. The homogeneous pattern is shown in Figure 20. 
71 0.5-0.4-03 0.2-0.J NO/SE LEVEL * 0.0% x 5.0% o 10.0% 8 lo ITERATION NUMBER Figure 4 4 : The E f f e c t of Noise L e v e l on S o l u t i o n Convergence. LART A l g o r i t h m wi th One Layer Seismic Test P a t t e r n . 50% Damping. 72 Figure 4 5 : Test Pa t t e rn B as Reconstructed by the CART A l g o r i t h m . 50% Damping, 0% Noise Added F igure 46: Test P a t t e r n B as Reconst ructed by the LART A l g o r i t h m . 50% Damping, 0% Noise Added 73 Using Jacob's solution technique, and assuming a preset detection limit value, one can calculate the arrival time of the drawdown or pulse front to several decimal places. In practice, however, the arrival is not sharp due to a number of hydrogeologic factors which serve to attenuate the pressure wave so that an 'instantaneous' reading is unobtainable. Invariably, data gathered in the field will be noisy. For the purposes of this test, however, an easily readable and calculable wave front is assumed. The results obtained for the groundwater data (homogeneous case) were very similar to those obtained by seismic means. The central portion of the array proved to produce a homogeneous zone of one slowness (or value (Figures 47 and 48) whereas the circumferential pixels tended to produce phantoms. Again, this is related to the ray coverage problems discussed earlier. No particular algorithm (for a given damping parameter and noise level) produced a noticeably better picture (in terms of closeness to the actual value). However, the M A R T algorithm proved to be the slowest to converge for the homogeneous case. This could be due to the fact that the initialization array was so close to the actual array that any alteration to the array did not help the resolution. A s a result, the picture simply oscillated around the correct solution. If the above statement is true, then it follows that decreasing the damping parameter value should allow the algorithms to converge more rapidly as less and less of the calculated ray time discrepancy is back projected over the pixels. This turns out to be the case. Figure 49 shows representative convergence characteristics for the C A R T algorithm at the various levels of damping. A decrease in the damping value showed an increase in the convergence (i.e.: the more steeply dipping curve) of these algorithms for this particular test pattern. A s with the other tests, it was found that increasing the noise level in the data gave progres-sively worse results and longer convergence times. This exemplifies the need for good data when proceeding with algebraic reconstructions. One Layer Case The data for the one layer groundwater test case (Figure 21) was obtained from two dimen-sional, linear, isoparametric finite element programs used in the transient analysis of aquifer / reservoir drawdown. Modifications were made to the original Wang and Anderson (1984) and Dun-bar (pers. comm.) programs in order to allow for different material types (i.e., different storativity 74 Figure 47: Homogeneous Groundwater Test P a t t e r n as Reconstructed by the CART A l g o r i t h m . 50% Damping, 0% Noise Added F igu re 48: Homogeneous Groundwater Test P a t t e r n as Reconstructed by the LART A l g o r i t h m . 50% Damping, 0% Noise Added 75 >-a a o U7 / 0 " os-0.7 OS-OA-0.3 a • o + + o a • o o DAMPING + 0 3 x 0.5 o 0.7 n 0.9 • OX OA I Z Z A 5 G 7 S f IO il a 13 14 15 ITERATION NUMBER F i g u r e 4 9 : E f f e c t o f D a m p i n g V a l u e o n C o n v e r g e n c e . 
C A R T A l g o r i t h m w i t h H o m o g e n e o u s S e i s m i c T e s t P a t t e r n . 0% N o i s e A d d e d 76 and transmissivity values). The region used is a representation of a 10 x 10 meter cross section of ground. The upper and lower surfaces of Figure 21 were designated no flow boundaries, while pressure input nodes were defined as constant head boundaries. The test was set so that all rays would be analysed individually (eg.: that each source would have a separate set of response curves associated with it). Responses were measured at the corners (nodes) of the pixels (Figure 50) and the time of arrival of the response was calculated from a linear interpolation of the nodal responses (as the receivers were typically positioned between the receiver hole nodes). After running several raypath tests on the modified programs, it soon became apparent that there were difficulties with the process. Anomalous pressure increases were observed in the output data when drawdown conditions were created. These head anomalies measured only a few millime-ters in magnitude and diminished with time. Eventually all nodes showed a drawdown profile after this pulse had passed. It was concluded that this anomaly was the product of numerical errors, as at steady state conditions, the head distribution was found to be very similar to that calculated analytically for the given material configuration. The fact that the pulse was present, however, made the picking of the wave front arrival times more difficult. O n certain raypath tests, uniform drawdown occurred and thus the 0.5 centimeter (cm) pressure deviation could be easily chosen, but on others, the pulse anomaly had to be accounted for. It was decided that as the anomalous fluctuation was small and of limited time span, that the pressure deviation arrival time needed for the data set would be taken after the anomalous pulse had passed. It was on this basis that the one layer data set was established. This process of picking the arrival time is somewhat judgemental and will undoubtedly be different for each person who tries it. It is evident that the resulting data obtained by this process will be exceedingly noisy and/or of poor quality. The results of the algorithm reconstruction using this poor quality and/or noisy data are promising. The algorithms all correctly identified that there was a different layer present and cor-rectly positioned it in the array (Figures 51 and 52). M u c h more pronounced velocity smearing is shown in these reconstructions than with any other test pattern. This is most likely due to the poor quality of data. O f the three algorithms, the M A R T and L A R T algorithms proved superior in converging to a more correct solution. The algorithms all had difficulty meeting the convergence 77 Figure 5 0 : F i n i t e Element C o n f i g u r a t i o n for the One Layer Trans ien t Groundwater Test O © © © 10 II 12 © 18 zi 1 2 2 3 24 ® is 2 7 28 3o © 12 24 Zt 32 33 34 3 5 JO © • m N0D5 NUMBER PIXEL NUMBER PIXELS WITH S / T PIXELS WITH S / T 1.0 S /m 1 \ 0 . 0 78 Figure 5 1 : The One Layer Groundwater Test P a t t e r n as Reconst ructed by the MART A l g o r i t h m . 50% Damping, 0% Noise Added F igure 5 2 : The One Layer Groundwater Test P a t t e r n as Reconst ructed by the LART A l g o r i t h m . 50% Damping, 0% Noise Added 79 criterion as all required more than 15 iterations to reach the criterion's limit. 
O f the three algo-rithms, the M A R T algorithm appeared to converge more rapidly than the others as is indicated by flattening of the discrepancy line in Figure 53. A change of damping parameter did not affect the speed of convergence of the algorithms a significant amount. This is again attributable to the poor quality data. Increasing the noise level had an expected effect of slowing down the convergence slightly, however any significant affect on picture quality or increase in execution time was not observed. In spite of the relatively poorer resolution obtained from this data set, the fact that the algo-rithms were able to correctly identify the presence of a layer of different properties indicated that groundwater test data can be used in algebraic reconstruction techniques. 5.3.3 Timings The execution times of the programs were tabulated for each test run (Figure 54). Apart from the minor increases in execution time due to slower convergence of certain algorithms with certain data sets, the time itself proved to be independant of algorithm type and primarily dependant on the size of the discretized mesh. There is a definite correspondence shown by Figure 54, and the extension of this curve can be used to determine execution times of progressively larger and larger matrices. It is evident that past a 20 by 20 matrix, the execution time becomes excessively long for an I B M P C (r - registered trade mark) and like computers, not only for the computation time but also in the storage requirements as a 20 x 20 pixel domain requires over 160,000 bytes of stored information for these algorithms. With the advent of newer and faster computers , this problem will likely be overcome. In fact, tests run on an I B M X T (r) compatible computer showed that "for a given program and data set, the program execution time could be cut by over 50 percent. A final note: much of the execution time for the programs used here was taken up in writing the ray length, pixel length and pixel numbers to separate files, then reading them once again when reconstructing the image. A great deal of time could be saved by writing all these values to internally stored arrays. 5.4 Analysis of Test Results The tests run on the data sets gave valuable clues as to the performance and the applicability 80 5.0 4.0 5 £ 3.0 o in 2.0-AO X X / Z 3 + 5 C 7 2 f 'O ll It /3 / f /5 . - . ITERATION NUMBER Figure 53: One Layer Groundwater Flow P a t t e r n w i t h the MART A l g o r i t h m . Convergence C h a r a c t e r i s t i c s w i t h 50% Damping 81 I Go-es 5 / 0 / 5 20 25 NUMBER OF RAYPATHS AND PIXELS Figure 5 4 : R e l a t i o n s h i p Between the Number of P i x e l s (and Raypaths) to A l g o r i t h m Execu t ion Time 82 of the algorithms under various conditions. The results of these tests may be generalized to provide criteria and/or establish the limitations of the algorithms under various circumstances. 5.4.1 The Best Iterative Algorithm The tests were inconclusive as to which reconstruction algorithm was superior. A l l algorithms were able to pick most of the anomaly positions with a certain degree of accuracy, yet there was an inconsistency as to which algorithm produced the clearer or most well defined picture. In a general sense, the L A R T algorithm appeared to provide the most consistent reconstruction clarity, although other algorithms , at times, gave superior resolution. 
The L A R T algorithm was also the most consistent in the convergence criteria, often being the first (or sometimes only) algorithm to meet the stopping criterion before 15 iterations, which indicates how efficiently the algorithm reconstructs a particular problem. In tests where the reconstruction array was set to different starting values, a more clear in-dication of superior performance was seen. It was shown previously that the closer the array is set to the actual or expected inverse velocity distribution, the faster the convergence to a solution. Indeed, when the initial array was set to 0.0, all the algorithms were exceedingly slow to converge. Damping parameter values are very problem dependant and should be chosen only after ex-tensive testing and/or estimation of the subsurface properties (Peterson et al, 1985). There are a number of possible alternatives to provide damping to a particular algorithm, and further study is required to determine the relative merits of each process. A s for the simple , proportional method of damping used in this study, a few general comments can be made. While large damping pa-rameters tend to lessen the execution time of the program, they also make the picture become less well defined. This is-probably due to overshoot and excessive back projection values being used. As one decreases the parameter, there is a corresponding increase in the clarity of the picture at the expense of the execution time. In examining the results of the tests, it appears that a damping parameter of approximately 0.5 (50 percent) is sufficient to ensure clarity while not causing long run times. The larger the array becomes, the more pronounced the effect will be. This value of 50 percent has also been proposed by Herman et al (1973) as being the upper limit of a first approx-imation to an A R T damping parameter. Parameters lower than this value may, however, cause slower convergence again because of the lower correction factors being applied at each iteration. 83 The effects of noise on the reconstruction process is clear in that the higher the level of noise in the data, the worse the reconstruction. This emphasizes the fact that for the best tomographic results, noise must be kept to a minimum, if not eliminated. 5.4.2 Applications Algebraic reconstruction techniques have been used widely in geophysics and geotechnical ap-plications. It has been proven effective in determining the shape and position of velocity anomalies in the subsurface and has traditionally provided the best quality data in terms of noise levels. The application of A R T to groundwater testing has been less widely used. Ramierez (1986) used electromagnetic radiation to determine the electrical conductivity characteristics of a fractured granite, but so far ( to this author's knowledge) no major testing has been done specifically with cross-hole pressure pulse tomography. This lack of data is possibly due to the limitations of present equipment and/or the severe pulse attenuation properties of most aquifers which make the picks of a first arrival extremely difficult. In the laboratory computer testing done in this study, it was found that A R T could be used successfully with groundwater test data and could accurately reconstruct slowness anomalies which indicate diffusivity variations. It was also found, however, that the data obtained from such modelling was inherently poor (due to numerical dispersion, programmer adjustment on picks, etc.) and gave reconstructions with apparent velocity phantoms. 
Additions of other sources of random noise served only to make the situation worse. 5.5 Summary and Conclusions Analyses were conducted on various data sets using C A R T , M A R T and L A R T algorithms. Geophysical and groundwater flow data sets were tested. The geophysical testing showed that horizontal anomalies were well defined by the programs whereas vertical anomalies were not. This was due to the fact that ray coverage was primarily horizontal and thus could detect horizontal anomalies more easily. Vertical anomalies , on the other hand, tended to be smeared across the domain. Velocity contrasts also produced smearing effects in the geophysical test cases. This was due to masking effect of higher velocity layers surrounding low velocity layers. While smearing was evident, it did not mask the layerings to any great extent. 84 Groundwater test data showed some unique problems. The finite element codes used to analyse two dimensional transient flow characteristics produced anomalous oscillations commonly found in numerical solutions. This made the picking of arrival times at the receiver very difficult. In spite of this, the results obtained by the groundwater data were promising in that the anomalies expected in the profile showed, up in the recontructed array. General problems existed for the circumferential pixels in the reconstruction domains. This was especially evident when homogeneous test data was used, as the algorithms produced phantoms in the edge pixels. Incomplete ray sampling of the edge pixels was the reason for these phantoms oc-curring. Internal pixels, which had more complete sampling and redundancy, exhibited no phantom behaviour and consistently produced near correct or exact profiles. , Lowering the damping parameter value, in general, proved to increase algorithm convergence time as well as marginally aiding picture resolution. Addit ion of noise to the data sets consistently proved to decrease picture resolution. Execution times of the algebraic reconstruction algorithms proved to primarily be functions of matrix size, and secondarily a function of parameterization. 85 CHAPTER 6 6.1 Applications of Matrix Techniques 6.1.1 Introduction and Review The matrix solution tests developed for this study are examined separately because : a) the%pro-grarns are run on a different computer system (an Amda hl mainframe computer), and b) the matrix manipulation, while solving the same inverse problem, involves a completely different methodology than that of the iterative algebraic reconstruction. The process by which the direct matrix solving technique was implemented is as follows : the raw data obtained from cross-hole shooting was fed into the ray/pixel calculation program described in Section 3.2. Then , the pixel numbers, path lengths and shot arrival times are loaded into storage files which are subsequently used to initiate the matrix forming algorithm. The matrix programs use this data to calculate a solution to the set of linear equations. This section will review the matrix solution techniques used in this study, evaluate the results from these techniques, and identify and discuss the methods used to increase the accuracy of the reconstructions. Then , special tomographic problems will be discussed and their relationship to direct solution techniques. The section will close with an analysis of the test results and conclusions concerning the direct matrix solving methods. 
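The matrix-forming step described above can be sketched as follows. The per-ray record layout (pixel number and path length pairs) is an assumption made for illustration, and numpy's SVD routine stands in for the library routines actually used on the UBC system.

    import numpy as np

    def build_system(ray_pixel_records, arrival_times, n_pixels):
        # Assemble the tomographic system A x = b from per-ray records.
        # ray_pixel_records[k] is a list of (pixel_number, path_length) pairs for
        # ray k (pixel numbers 1-based, as in the discretization figures), and
        # arrival_times[k] is the measured transit time for that ray.
        n_rays = len(ray_pixel_records)
        A = np.zeros((n_rays, n_pixels))
        for k, pairs in enumerate(ray_pixel_records):
            for pixel, length in pairs:
                A[k, pixel - 1] = length
        b = np.asarray(arrival_times, dtype=float)
        return A, b

    def svd_solution(A, b, cutoff=0.0):
        # Pseudo-inverse solution x = V S^-1 U^T b (Equation 22), optionally
        # zeroing the reciprocal of any singular value at or below `cutoff`.
        U, s, Vt = np.linalg.svd(A, full_matrices=False)
        s_inv = np.where(s > cutoff, 1.0 / s, 0.0)
        return Vt.T @ (s_inv * (U.T @ b))

The `cutoff` argument anticipates the treatment of small singular values examined below: setting it to zero retains every singular value, while a positive value discards the troublesome small ones.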
6.2 Initial Test Results

The two methods used to obtain solutions to the matrix formulated problem were CL1, the constrained L1 linear approximation algorithm developed by Barrodale and Roberts at the University of Victoria, Canada, and several singular value decomposition routines (SVD) available on the UBC computer system.

6.2.1 Results

Several sets of data were used in the initial evaluation of these matrix methodologies. The primary data sets included a one layer seismic configuration (Figure 18) and a one layer groundwater flow configuration (Figure 21).

The CL1 algorithm developed by Barrodale and Roberts (1978) has the option of constraining both the residual (b - Ax) vector and the solution (x) vector. The available constraints are: 1) less than or equal to zero, 2) unconstrained, and 3) greater than or equal to zero. For the purpose of this study, the greater-than-or-equal-to-zero option was chosen for the solution vector, and the residual vector was left unconstrained. Unfortunately, the results obtained by this method were disappointing. Numerous zero velocity (or diffusivity) pixels occurred in the solution, and the range of values spanned several orders of magnitude.

In order to identify the cause of these problems, SVD was used and the singular values associated with the decomposition process were calculated and displayed. The singular values for both one layer data sets produced similar patterns. The majority of singular values were slightly greater than 1.0, but were not more than one order of magnitude above this value. There was a group of values, however, that was several (up to 6) orders of magnitude below 1.0. This means that, numerically, the rank of matrix A is small (see Equation 23). These values were seen to be the cause of the erroneous reconstructions (Figure 55). The low values, most of which were on the order of 10^-6, interfere with the multiplication process in that the SVD sequence

A* = V S^-1 U^T, which leads to x = V S^-1 U^T b     (29)

produces small product values that are on the order of round off error significance. Thus, the fundamental problem found here is the low numerical rank of matrix A.

6.3 Alterations to the Procedure

In order to obtain accurate reconstructions from the given data using direct matrix solution algorithms, a number of procedural modifications were attempted.

6.3.1 Redundant Data

In an attempt to transform the existing under or mixed determined problem into an overdetermined one, and thus force a solution to the matrix equation, redundant information was added to the data. The information was obtained by means of supplementary source locations placed 0.3 meters above and below the existing source locations of the hypothetical sections (Figure 56).

Figure 55: Singular Values Associated with the One Layer Seismic Test Pattern Data Set (singular value magnitude versus singular value number)
Figure 56: Addition of Redundancy to the Original Cross Hole Configuration (first, second and third sets of sources offset 0.3 m down the source hole; receivers unchanged)
Both CLl and S V D algorithms were used in the analysis of the new data. The results of CL1 analysis of the second and third order redundancy data is shown in Figures 57 and 58. There is some minor improvement in the resolution of the expected anomaly when more data is used, but even with the three sets of data, the anomaly is still not clearly evident. T h e singular values associated with the additional data is shown graphically in Figure 59. The addition of more and more data clearly does not affect the magnitude of the small singular values to any great extent. While marginal movement of these values towards the ideal 1.0 mark is seen, a great deal more data must be added to achieve this goal. (Assuming that the rate of change of values~remains relatively constant.) From these observations, it can be concluded that large data sets are required to transform such a tomographic problem to an overdetermined case, and this is likely due to the poor conditioning of the matrices involved which, in turn, is a consequence of the poor coverage inherent in the crosshole methodology. 6.3.2 Elimination of Low Singular Values A standard technique used by numerous authors in singular value decomposition of poorly conditioned matrices is to eliminate the small singular values. This process, while either losing or ignoring matrix information, is generally able to produce better results than by incorporating the full set of singular values in the matrix multiplication process. Using one order of redundancy data, the singular value decomposition of the length matrix [A] was calculated, and successive runs were made with each run eliminating the next lowest singular value from the S matrix. The resulting S matrix was then used to obtain the solution vector x via the matrix multiplication of equation 36. The objective of the successive run approach was to find the critical singular value, or that value which, when eliminated, significantly improved the reconstruction's clarity. Note that the solution vector for the S V D matrix reconstruction techniques has been left unconstrained. This means that any calculated value may be assigned to a pixel, even though that value may be physically unrealistic (i.e.: negative values). 90 F i g u r e 5 7 : T h e O n e L a y e r S e i s m i c T e s t P a t t e r n R e c o n s t r u c t e d b y t h e CL1 A l g o r i t h m U s i n g Two O r d e r s o f R e d u n d a n c y D a t a 91 F i g u 10s \ re 5 9 : Singular Values Associated with Redundant Data Sets x FIRST ORDER REDUNDANCY • SECOND ORDER REDUNDANCY O THIRD ORDER REDUNDANCY * 9 x \0° O o ? 9 ? x x x ? 9 o * x » i o - 5 i X X X X ics*** > 1 . r I O 1 5 20 25 SINGULAR VALUE N U M B E R 92 A s expected, when all the small singular values were eliminated, there was a great improvement in picture clarity (Figure 60). The anomaly was positioned where it was expected to be and the contrast between material types was distinct. Elimination of further values only marginally improved upon this picture. Using this same procedure on other data sets (a two layer transient flow case and an isolated anomaly in a transient flow field) produced similar results (Figures 61 to 64). Once the anomalously low singular values were eliminated, the calculated and expected profiles converged rapidly. Further deletion of singular values, again slightly improved picture clarity. The isolated anomaly is a particularly interesting case in that the reconstruction was able to position the anomaly where it was supposed to be. 
6.3.3 Producing a Highly Underdetermined Problem

Woodbury (pers. comm.) observed that the cross-hole tomographic problem is an inherently mixed or underdetermined one, and as such a unique solution is often not obtainable. He suggested that, instead of trying to make the problem overdetermined by means of additional data (i.e., redundancy), developing a highly underdetermined problem and solving it by means of SVD might be more feasible and produce better results. Woodbury (1987) explained that "in the case where there is less data than parameters (i.e., m < n), or if the problem is rank deficient, then a unique solution of the least squares (or SVD) problem is not possible. A minimum length solution can be obtained by solving the constrained problem:

min J = p^T p    (30)

subject to:

d = A p    (31)

for which the solution is:

p_e = A^T (A A^T)^{-1} d    (32)

In this case, J is the desired objective function and p_e is the desired solution vector." Note that the minimum length solution described above is simply the solution given by the singular value decomposition of matrix A for a situation where the dimensions of A are m by n (m < n). While the highly underdetermined problem has no unique solution, it does, however, have a unique minimum length (or best fit) solution to the given data. Thus, the goal of the exercise is to obtain this unique solution.

This theory was tested on the one layer transient groundwater flow problem. The cross-hole groundwater data were kept as is (i.e., five sources and five receivers), but alterations were made to the matrix calculation algorithm such that additional pixels were generated. In previous experiments the number of raypaths and the number of pixels were equal: for a five source and five receiver configuration, 25 raypaths were generated for a 5 x 5 pixel field. In the highly underdetermined case the number of pixels was increased nine-fold, so that the A matrix now related 25 raypaths to 225 unknown pixels. The result of singular value decomposition of this matrix is shown in Figure 65. Although the circumferential pixels are somewhat anomalous in value, the central layer is clearly visible and discernible from the surrounding pixels.

The implications of the use of highly underdetermined problems in tomography are far reaching. The inherent limitations of cross-hole scanning geometry and the prohibitive costs of redundant data sets can both be taken care of by this scheme. This method, however, should be used with caution. The problem is a highly underdetermined one and as such has no unique solution. Therefore, the values obtained by this analysis should not be taken as exact material property values, but only as indicative of property contrasts.

Figure 65: Highly Underdetermined Configuration of the One Layer Groundwater Flow Test Pattern as Reconstructed by SVD, No Values Deleted
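The minimum length solution of Equation 32 is also straightforward to compute. The sketch below is illustrative only; the random stand-in for the path-length matrix, the 25 by 225 dimensions, the chosen test values, and the NumPy calls are assumptions of this example, not the procedure used in the study. It simply confirms that the explicit formula and the SVD pseudo-inverse give the same unique minimum length solution while reproducing the data.

    import numpy as np

    rng = np.random.default_rng(0)
    m, n = 25, 225                              # 25 raypaths, 225 pixels (nine-fold refinement)
    A = rng.random((m, n))                      # stand-in for the path-length matrix
    p_true = np.ones(n)
    p_true[100:125] = 5.0                       # an assumed band of higher 1/D values
    d = A @ p_true                              # synthetic arrival-time data

    p_min = A.T @ np.linalg.solve(A @ A.T, d)   # Equation 32: p = A^T (A A^T)^-1 d
    p_svd = np.linalg.pinv(A) @ d               # the SVD (pseudo-inverse) route

    print(np.allclose(p_min, p_svd))            # True: both give the minimum length solution
    print(np.linalg.norm(A @ p_min - d))        # ~0: the data are reproduced exactly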
6.4 Special Problems

6.4.1 Thin Fracture Analysis

Identification of small anomalies such as thin fractures comes down to the problem of contrast. Basically, the question is: will there be a sufficient contrast between the fractures and the surrounding medium so that a discernible differential response is apparent? For many geometries and radiation types the answer is probably not. The smaller the anomaly, the less likely it is to be picked up by cross-hole shooting. In such cases it may be necessary to go to an alternate geometry (i.e., one which is aligned favourably to the anomaly) or another type of radiation. An example of just such an instance may be seen with widely spaced fractures in plutonic rocks. Seismic shooting may smear the contrasts in velocity between fracture and intact rock across the section, or may not even pick the fractures up due to their minor velocity contribution. Electromagnetic radiation, on the other hand, may readily pick up water-filled fractures due to their large contrast in dielectric properties with respect to the country rock (Ramierez, 1986). Thus, in order to identify small anomalies, one must choose the radiation type and scanning geometry carefully so that there is a high probability of detection.

6.4.2 High Contrast Anomalies

One of the basic premises of the tomographic problem as formulated in this study was that the straight line raypath approximation was applicable. For this assumption to be valid, low material property contrasts had to exist. In dealing with materials which have high property contrasts (i.e., an order of magnitude for seismic velocities, or several orders of magnitude for transient groundwater flow), the straight line assumption breaks down and a ray tracing technique must be used to compensate for ray bending at interfaces, and perhaps even within the layers themselves. The high contrast case also produces additional complexity in the data gathering process. In seismic cases, high velocity layers will tend to mask low velocity layers, and this is accentuated by larger and larger contrasts. In groundwater flow problems, larger contrasts in conductivity may inhibit the movement of the transient pressure pulse, which might result in poor data being collected.

Diffusivity values differing by up to three orders of magnitude were used to evaluate the effects of high contrast anomalies. Figure 66 demonstrates the effects of these contrasts. Even at two orders of magnitude difference in diffusivity, the straight line approximation is breaking down and the arrival time of the pressure pulse is harder to detect accurately. Both of these factors lead to poorer and poorer reconstructions. From these tests it can be concluded that if the straight line raypath assumption is to be used, then low material property contrasts must exist to obtain a valid solution.

Figure 66: One Layer Groundwater Test Pattern with a Two Order of Magnitude Diffusivity Contrast

6.4.3 Horizontal Variations

Horizontal property variations occur in geology as facies changes, gradation changes, fracture frequency changes and similar phenomena.
In many ways these variations have some of the same problems associated with them as high contrast anomalies, because the straight line raypath approximation may no longer be valid and the time of arrival of the radiation pulse may be harder to distinguish due to attenuation through the differing media. If the horizontal variation is pervasive (vertically extensive), then additional problems develop because, in theory, this becomes a vertical anomaly case with all its associated difficulties (Figure 67). However, if the anomaly is relatively isolated, say to one layer in an otherwise homogeneous medium, then there is a strong probability that it will be identified. An example of just such a case is illustrated in Figures 68 to 71, where a thin, horizontal layer of moderate property contrast is embedded in a homogeneous sedimentary medium. The system was analysed using finite element procedures and singular value decomposition. While the actual horizontal variation is not truly apparent in the reconstructed section, the layer itself is quite distinct.

Figure 67: Figure Demonstrating that Horizontal Property Variations May In Fact Produce Vertical Anomalies

Figure 70: One Layer Groundwater Test Pattern with Minor Horizontal Diffusivity Variation (1/D = 1.0 to 25.0 s/m^2)

Figure 71: One Layer Groundwater Test Pattern with Large Horizontal Diffusivity Variation (1/D = 1.0 to 100.0 s/m^2)

6.4.4 Noise in the Data

Noise levels of 5 and 10 percent magnitude were added to the one layer data. As expected, the addition of noise to the data greatly diminished the clarity and reconstruction accuracy of the profile. Figure 72 illustrates the effect of 5 percent noise on the one layer seismic system using SVD. Even at this relatively low level, and with the singular value cutoff set at a higher level than in previous tests, the picture is markedly changed from the no added noise case (Figure 73). This emphasizes the fact that tomographic techniques rely heavily on the accuracy of the measured data. Noise can result in poor picture quality, reconstruction phantoms (anomalies which appear in the section but are known not to exist), and perhaps even complete lack of convergence to the true profile. Thus, every effort must be made at the investigation stage to minimize or eliminate the probable sources of random noise.

Figure 72: One Layer Seismic Test Pattern as Reconstructed by SVD. Singular Values Cut Off Below 3.0, 5% Gaussian Noise Added

A point to note here is that the reconstruction accuracy was determined subjectively, by comparison to the known cross section. Alternate methods do exist to make an objective judgement of picture accuracy, such as the Chi-squared test outlined by Wonnacott and Wonnacott (1970). For consistency with the other matrix reconstruction methods, however, the subjective analysis will be used.
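The noise tests of Section 6.4.4 can be imitated by perturbing the synthetic data before inversion. The sketch below is again illustrative only: the proportional Gaussian noise model, and the reuse of the hypothetical A, b, and truncated_svd_solve from the earlier sketches, are assumptions of this example rather than the procedure used in the study.

    import numpy as np

    def add_noise(b, percent, seed=0):
        # Perturb each reading by zero-mean Gaussian noise of the given percentage.
        rng = np.random.default_rng(seed)
        return b * (1.0 + (percent / 100.0) * rng.standard_normal(b.shape))

    # for level in (0.0, 5.0, 10.0):
    #     x = truncated_svd_solve(A, add_noise(b, level), cutoff=3.0)
    #     # compare x with the known section; clarity degrades as the noise level rises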
6.4.5 Parameterization Effects

The transient groundwater flow analyses utilized finite element techniques to obtain pulse time of arrival data. When discretizing the study section into a finite element mesh, the programmer is subjectively deciding on the size of anomaly that will be detectable: the smaller the element, the smaller the anomaly which can be modelled. The investigation up to this point has used a fairly coarse mesh, with elements up to 2 meters per side, theorizing that the coarser the mesh, the less accurate the time of arrival data will be (due to interpolation effects), thus producing a data set which is least accurate, or 'worst case'. Thus, if anomalies can be detected from such data, then finer discretization can only lead to more accurate data and better reconstructions.

A finer mesh was used in conjunction with the one layer transient analysis in order to test these parameterization effects. The finer mesh was produced by subdividing the original pixel mesh nine-fold; that is, for each pixel of the old mesh, the new mesh had nine pixels within the same area. The times of arrival obtained from the finer mesh were, in general, within 10 percent of those of the coarse mesh, with the exception of the times calculated near the anomaly boundary, which varied by up to 25 percent. The reason for this variation lies in the linear interpolation procedure. With the coarse mesh, the time of arrival interpolation was taken between nodes on the boundary of the anomaly, and as such the calculated arrival might be affected by large time differences across the elements containing the anomaly. The finer mesh, on the other hand, had nodes within the layer itself, so interpolation effects were minimized. Reconstruction using the finer mesh data (Figure 74) defined the anomaly well, producing a fairly distinct layer. It is important to note that there is always a trade-off between the fineness of the mesh and execution time and cost. The programmer must make his or her own judgements as to the discretization of the finite element mesh.

Figure 74: One Layer Groundwater Test Pattern as Reconstructed by SVD. Data Obtained From Transient Analysis Using a Finer Mesh. Singular Values Cut Off Below 3.0

6.4.6 A Priori Information

In practice, a borehole is not drilled without some form of testing and/or sampling program being carried out. From such investigation comes data which can then be used to reduce the number of unknowns in the tomographic reconstruction process. Reduction of unknowns has been found useful in the iterative algebraic reconstruction scheme in that it speeds convergence of the solution. Here, data will be added that physically reduce the size of the A matrix by the elimination of select columns, thereby testing the applicability of a priori information in direct matrix solution. The tests were conducted by assuming that one or more pixel values were known (e.g., through laboratory sample testing) prior to computer modelling. Thus, the column representing a known pixel's contribution to the reconstruction may be eliminated by multiplying the column length values by the known pixel value and subtracting this amount from the appropriate right hand side time value.
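Written out, this column elimination is a small amount of bookkeeping: the contribution of each known pixel is moved to the right hand side and the corresponding columns are removed before the reduced system is solved. The sketch below is illustrative; the function name, the dictionary of known pixels, the example values, and the NumPy calls are assumptions of this example, not the routine used in the study.

    import numpy as np

    def apply_a_priori(A, b, known):
        # known: {pixel index: known value}, e.g. borehole pixels measured in the laboratory.
        idx = np.array(sorted(known))
        vals = np.array([known[i] for i in idx])
        b_reduced = b - A[:, idx] @ vals          # subtract the known contributions from b
        A_reduced = np.delete(A, idx, axis=1)     # drop the columns of the known pixels
        return A_reduced, b_reduced

    # Example: constrain the five pixels along the source borehole (indices and values assumed):
    # A_r, b_r = apply_a_priori(A, b, {0: 5.0e-4, 5: 5.0e-4, 10: 8.3e-4, 15: 5.0e-4, 20: 5.0e-4})
    # x_r = truncated_svd_solve(A_r, b_r, cutoff=0.1)   # solve only for the remaining pixels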
It was found that the position of the pixel in the section was very important to its ability to improve resolution. The problem, as in the 5 x 5 pixel case, is that certain pixels do not contribute a great deal to the reconstruction, and the numerical constraint of these pixels does not affect picture resolution to a large extent. This is especially true of the circumferential pixels. Interior pixels, which have greater raypath coverage, have a more significant effect on resolution, but these pixels are not easily sampled and/or tested. Constraining the pixels adjacent to the boreholes improved picture clarity. Not only were the anomalies more easily discerned, but the addition of prior information also served to make the picture smoother and somewhat more symmetric. An example of the improvement known data can have on the reconstruction is shown in Figures 75 and 76. Figure 75 shows one layer groundwater flow data with 5 percent Gaussian noise added; the resulting picture is not too clear and the anomalous layer is not obvious. Figure 76, on the other hand, shows the same data set, only this time with the known borehole pixel values eliminated from the matrix. The resulting picture is much clearer and the layer is easily seen.

Figure 75: One Layer Groundwater Test Pattern as Reconstructed by SVD. 5% Gaussian Noise Added, Singular Values Cut Off Below 1.0

Figure 76: One Layer Groundwater Test Pattern as Reconstructed by SVD. 5% Gaussian Noise Added, Singular Values Cut Off Below 1.0. Borehole Pixels Constrained in Matrix Calculation.

6.5 Summary and Analysis of Test Results

Matrix solution techniques for the tomographic problem provided fast and efficient reconstruction methods. Numerous matrix methodologies and algorithms exist; only a few techniques are discussed here. The experiments have shown that direct matrix solution may not be adequate to obtain a reasonable tomographic reconstruction. Additional constraints and/or matrix manipulation may be necessary to isolate a possible solution. As the cross-hole tomographic problem is typically mixed or underdetermined, there may be numerous possible solutions to a given set of data.

Two methodologies discussed here have proven to be useful in obtaining a solution to poorly conditioned and rank deficient matrices. The methods are: elimination of anomalous singular values from the singular value decomposition of the length matrix [A], and imposing a highly underdetermined system and solving for the minimum length solution using SVD. Both methods either lose or ignore certain matrix information (i.e., the singular values) and as such will not produce exact solutions, but anomaly identifiers. Redundant data, unless there is a great deal of it, has proven not to be useful in obtaining a unique solution.

Special geometries or types of problem in tomography need to be examined separately in order to determine the best methodology to use. Extremely small anomalies may not be picked up by standard techniques, but alternate methods may, in fact, distinguish them easily and clearly. Care must be taken in zones of high parameter variation. The tomographic technique, as developed here, relies on a straight line raypath assumption, and significant deviations from this assumption will only lead to poor results. Noise is also a key factor in reconstruction accuracy, as tomography relies on an identifiable and relatively clear signal. The addition of random noise interferes with the gathering of good (or precise) data, and will ultimately interfere with the clarity of the reconstructed section.

6.5.1 Conclusion

Direct matrix solution of tomographic data sets proved insufficient to obtain plausible reconstructions. Singular value elimination and the imposition of highly underdetermined systems were demonstrated to be viable techniques to obtain possible solutions and/or identify anomalies.
The experiments proved the necessity of good quality data for the tomographic techniques and the necessity of adherence to the basic premises of the techniques themselves. Deviations from these premises lead to poorer results.

CHAPTER 7

7.1 General Conclusions

This paper has attempted to show the worth of algebraic reconstruction techniques as applied specifically to geotechnical and groundwater investigations, as well as demonstrating that these tomographic techniques can easily be implemented on micro-computers. The test patterns presented in this paper are admittedly small in scale; however, they do indicate the resolution capabilities of the algorithms. Multiple layer systems, as would typically be found in-situ, were well resolved with relatively limited data by these techniques. The appropriate damping parameter and iterative reconstruction methodology were shown to be problem specific, and as such, on-site investigations must be undertaken in order to determine the optimum choice of these factors. The effects of noise were as expected, with more data noise giving poorer reconstructions. This emphasizes the fact that the tomographic technique not only requires good areal coverage of the problem area, but reasonably accurate data as well. A key factor in the future will be to increase the cross-hole coverage by adding surface sources/detectors and/or increasing the number of downhole measuring instruments. With this increase in data comes an additional need for computer storage (i.e., via virtual memory, high density diskettes, hard drives, etc.) if the personal computer is to continue to be used with this investigation technique.

The application of tomography to groundwater investigations has been shown to be possible. Further development of testing equipment and investigation techniques is required, however, before a more thorough study of this application can be made.

An important comment that must be made about geotechnical tomography is that it is not an exacting form of analysis. Noise, incomplete scanning geometry, poor redundancy characteristics, and, in the case of matrix reconstructions, omitted data make the picture generated by this technique non-unique. Only the use of a priori information (i.e., the constraining of the magnitude of the solution vector values, the addition of known or suspected information, the deletion of nonessential pixels, etc.), or the addition of numerous other source/receiver positions, can convert the typically under- or mixed-determined tomographic problem to an even- or overdetermined one. Thus, the cross-hole tomographic technique should be considered an anomaly finder and not an exact representation of what exists in the subsurface.

7.2 Further Study

As with any relatively new procedure or method of analysis, there are always new areas to investigate, and tomography is no exception. While the medical applications of algebraic reconstruction techniques have been firmly established and are in wide use (Sochurek, 1987), there still remains the further study of the geotechnical, geophysical and groundwater applications of tomography, and the improvement of the algorithm itself to provide better resolution and computational efficiency.

7.2.1 Improvements on the ART Algorithm

The iterative algorithms examined in this paper are by no means the only tomographic techniques available.
Gilbert (1972) and Herman, Lent and Rowland (1973) have examined a number of alternative methods, including the simultaneous iterative reconstruction technique (SIRT), and have found that the SIRT algorithm can produce better reconstructions than the ART algorithms (Oppenheim, 1974). SIRT, which was originally proposed by Gilbert (1972), operates by changing pixel densities using the data from all projections simultaneously. Thus, each iteration of SIRT (once through the entire picture) is equivalent to n iterations of an ART-type algorithm, where n is the number of raypaths. Because the SIRT algorithm handles all projections simultaneously, there is a great increase in the computer storage required to handle this type of problem. As basic micro-computers presently have only limited storage capabilities, the use of the SIRT algorithm is not feasible for these machines when dealing with large problems. However, with the advancement of microcomputer technology, this limitation may soon change. With the use of the SIRT algorithms, both the resolution and the accuracy of the reconstructed picture can be improved (Gilbert, 1972), and this appears to be one of the main areas for future research.

Another area that will be of future interest is the modification of the basic ART algorithm. While there has been much interest in the stopping criteria and image resolution facets of tomography, a universal method of determining the accuracy of the reconstruction still does not exist. Further analysis of the mathematical or least squares criteria is required, as are methods to provide more complete coverage of the problem domain. Also, a graphics form of error checking is believed by this author to be an important area for research. The visual display of the iterative process allows programmer participation in the determination of what is the true or expected profile. Graphics-in/graphics-out technology would also allow the programmer to manually input ray refraction between layers, and thus account for the ray bending characteristics of the medium.

The matrix form of tomographic problem solving has also been shown to be viable. Further studies in this area might include graphics display techniques, the development of more adequate matrix solution methods, and the analysis of filtering procedures to better determine arrival times and/or process out unwanted noise from the data. Another key area for future study might be the application of tomography to three dimensional problems. This would only be applicable to mainframe-capable systems, however, due to the large storage requirements of the x, y, z coordinates for each node in the section.

7.2.2 Applications Research

The investigations described in this paper and by other authors indicate that there are many more areas of application of the tomographic technique. Key future areas of research include groundwater studies to investigate layer continuity, contaminant transport and perhaps fracture density. While the single pressure pulses or slugs (which are required to obtain a reasonable response at distance) may be too large for near surface investigations, sinusoidal pulses as proposed by Black and Kipp (1981), or seismic tests, may be ways of getting around this problem. For deep investigations, however, large pulses are not seen to be problematical, and analyses for deep structures may be carried out using theories similar to those presently used in the oil industry for well response testing.
While the use of pressure pulses is seen to be an exciting new area of research, geophysical (seismic) investigations still have much to offer. As stated at the beginning of this paper, the problem presently facing in-situ testing is to get as much information as possible from a limited area of investigation, and new downhole tools have been developed to fill this need. The seismic cone penetrometer (Campanella et al., 1984) is one such tool; it monitors shear wave propagation through subsurface materials and from it infers material moduli. The use of tomography with the penetrometer seems ideal. Surface shots using hammer or elephant gun sources, and/or downhole shots using seismic charges, coupled with the seismic cone, serve to produce a three-dimensional view of the subsurface (Figure 77). This not only provides additional soil property information, but also indicates layer discontinuity and anomalous zones which might cause foundation instability. Further study in this area is required to refine the geotechnical aspects and implications.

Figure 77: Seismic Cone Penetrometer and its Application to In-Situ Tomographic Testing

7.3 Summary and Conclusions

There are a number of improvements and applications still to be investigated involving tomographic techniques. The mathematics of the process is simple to understand, the calculations can be done on a micro-computer, and the method is widely applicable in the geotechnical and groundwater engineering professions. Further research and testing is all that is required before tomography becomes an integral part of in-situ investigations.

REFERENCES

Barrodale, I. and Roberts, F.D.K., 1978. Solution of the Constrained L1 Linear Approximation Problem. University of Victoria Department of Mathematics Report, Victoria, B.C.

Black, J.H. and Kipp, K.L., 1981. Determination of Hydrogeological Parameters Using Sinusoidal Pressure Tests: A Theoretical Appraisal, Water Resources Research, 17(3), 686-692.

Campanella, R.G. and Robertson, P.K., 1981. Applied Cone Research. UBC Department of Civil Engineering, Soil Mechanics Series 46, Vancouver, B.C.

Campanella, R.G., Rice, A., and Robertson, P.K., 1984. Research and Development of a Downhole Seismic Cone Penetrometer. UBC Department of Civil Engineering, Soil Mechanics Series 73, Vancouver, B.C.

Cho, Z.H., 1974. General Views on 3-D Image Reconstruction and Computerized Transverse Axial Tomography, IEEE Transactions on Nuclear Science, NS-21, 44-71.

Dines, K.A. and Lytle, R.J., 1979. Computerized Geophysical Tomography, Proceedings IEEE, 67(7), 1065-1073.

Fawcett, J.A., and Clayton, R.W., 1984. Tomographic Reconstruction of Velocity Anomalies, Bulletin of the Seismological Society of America, 74(6), 2201-2219.

Freeze, R.A., and Cherry, J.A., 1979. Groundwater. Prentice-Hall Inc., Englewood Cliffs, N.J., USA.

Gilbert, P., 1972. Iterative Methods for the Three-Dimensional Reconstruction of an Object from Projections, Journal of Theoretical Biology, 36, 105-117.
Gordon, R., Bender, R., and Herman, G.T., 1970. Algebraic Reconstruction Techniques (ART) for Three Dimensional Electron Microscopy and X-Ray Photography, Journal of Theoretical Biology, 29, 471-481.

Gordon, R., 1974. A Tutorial on ART, IEEE Transactions on Nuclear Science, NS-21, 78-93.

Gustavsson, M., Ivansson, S., Moren, P., and Pihl, J., 1986. Seismic Borehole Tomography: Measurement System and Field Studies, Proceedings of the IEEE, 74(2), 339-346.

Herman, G.T., 1972. Two Direct Methods for Reconstructing Pictures from their Projections: A Comparative Study, Computer Graphics and Image Processing, 1, 123-144.

Herman, G.T. and Rowland, S., 1973. Three Methods for Reconstructing Objects from X-rays: A Comparative Study, Computer Graphics and Image Processing, 2, 151-178.

Herman, G.T., Lent, A., and Rowland, S., 1973. ART: Mathematics and Applications, Journal of Theoretical Biology, 42, 1-32.

Hirasaki, G.J., 1974. Pulse Tests and Other Early Transient Pressure Analyses for In-Situ Estimation of Vertical Permeability, Society of Petroleum Engineers Journal, February 1974, 75-90.

Horn, B.K., 1978. Density Reconstruction Using Arbitrary Ray-Sampling Schemes, Proceedings of the IEEE, 66(5), 551-562.

Hsieh, P.A. and Neuman, S.D., 1985a. Field Determination of the Three Dimensional Hydraulic Conductivity Tensor of Anisotropic Media, a) Theory, Water Resources Research, 21(11), 1655-1665.

Hsieh, P.A., Neuman, S.D., Stiles, G.K., and Simpson, E.S., 1985b. Field Determination of the Three Dimensional Hydraulic Conductivity Tensor of Anisotropic Media, b) Methodology and Application to Fractured Rocks, Water Resources Research, 21(11), 1667-1676.

Ivansson, S., 1986. Seismic Borehole Tomography: Theory and Computational Methods, Proceedings IEEE, 74(2), 328-338.

Johnson, C.R., Greenkorn, R.A., and Woods, E.G., 1966. Pulse Testing: A New Method for Describing Reservoir Flow Properties Between Wells, Journal of Petroleum Technology, December 1966, 1599-1604.

Kamal, M.M., 1979. The Use of Pressure Transients to Describe Reservoir Heterogeneity, Journal of Petroleum Technology, August 1979, 1060-1070.

Kowalski, G., 1977. Reconstruction of Objects From Their Projections: The Influence of Measurement Errors on the Reconstruction, IEEE Transactions on Nuclear Science, NS-24, 850-864.

Luikov, A.V., 1968. Analytical Heat Diffusion Theory. Academic Press, New York.

McCann, D., Grainger, P., and McCann, C., 1975. Inter-borehole Acoustic Measurements and Their Use in Engineering Geology, Geophysical Prospecting, 23, 50-69.

McMechan, G.A., 1983. Seismic Tomography in Boreholes, Geophysical Journal of the Royal Astronomical Society, 74, 601-612.

McPhee, N.G., 1978. The Book of Insults, Ancient and Modern. Van Nostrand Reinhold, Toronto, Canada.

McPhee, N.G., 1981. The Second Book of Insults. Van Nostrand Reinhold, Toronto, Canada.

Mason, I.M., 1981. Algebraic Reconstruction of a Two-Dimensional Velocity Inhomogeneity in the High Hazels Seam of Thoresby Colliery, Geophysics, 46(3), 298-308.

Menke, W., 1984a. Geophysical Data Analysis: Discrete Inverse Theory. Academic Press, Orlando, Florida.

Menke, W., 1984b. The Resolving Power of Cross Borehole Tomography, Geophysical Research Letters, 11(2), 105-108.

Mersereau, R., 1973. Recovering Multi-Dimensional Signals from Their Projections, Computer Graphics and Image Processing, No. 2, 179-195.
Mikhailov, M.D., and Ozisic, M.N., 1984. Unified Analysis and Solutions of Heat and Mass Diffusion. John Wiley and Sons, New York.

Oppenheim, B.E., 1974. More Accurate Algorithms for Iterative 3 Dimensional Reconstruction, IEEE Transactions on Nuclear Science, NS-21, 12-11.

Peterson, J.E., Paulsson, B., and McEvilly, T.V., 1985. Applications of Algebraic Reconstruction Techniques to Crosshole Seismic Data, Geophysics, 50(10), 1566-1580.

Prosser, D.W., 1981. A Method of Performing Response Tests on Highly Permeable Aquifers, Groundwater, 19(6), 588-592.

Radcliff, R.D., and Balanis, C.A., 1979. Reconstruction Algorithms for Geophysical Applications in Noisy Environments, Proceedings IEEE, 67(7), 1060-1064.

Ramachandran, G.N. and Lakshminarayanan, A.V., 1971. Three-Dimensional Reconstruction from Radiographs and Electron Micrographs: Application of Convolution Instead of Fourier Transforms, Proc. Nat. Acad. Sci. U.S., 68, 2236-2240.

Ramierez, A.L., 1986. Recent Experiments Using Geophysical Tomography in Fractured Granite, Proceedings IEEE, 74(2), 347-352.

Robertson, P.K., 1985. In-Situ Testing and its Application to Foundation Engineering. UBC Department of Civil Engineering, Soil Mechanics Series 91, Vancouver, B.C.

Scudder, H.J., 1978. Introduction to Computer Aided Tomography, Proceedings IEEE, 66(6), 628-637.

Shepp, L.A., and Logan, B.F., 1974. Fourier Reconstruction of a Head Section, IEEE Transactions on Nuclear Science, NS-21, 21-32.

Sochurek, H., 1987. Medicine's New Vision, National Geographic Society Magazine, Washington, D.C., 171(1), 2-41.

Walter, G.R., and Thompson, G.M., 1982. A Repeated Pulse Technique for Determining the Hydraulic Properties of Tight Formations, Groundwater, 20(5), September-October 1982.

Wang, H.F., and Anderson, M.P., 1982. Introduction to Groundwater Modelling, Finite Element and Finite Difference Techniques. Freeman Press, New York.

Way, S.C., and McKee, C.R., 1982. In-Situ Determination of Three-Dimensional Aquifer Permeabilities, Groundwater, 20(5), 594-603.

Wong, J., Hurley, P., and West, G., 1983. Cross-Hole Seismology and Seismic Imaging in Crystalline Rocks, Geophysical Research Letters, 10(8), 686-689.

Wonnacott, R.J., and Wonnacott, T.W., 1970. Econometrics. John Wiley and Sons, New York.

Woodbury, A.D., 1987. Inverse Theory and Model Identifiability: An Overview. C.S.C.E. Centennial Conference, Montreal, Quebec, Canada, May 1987.
