Open Collections — UBC Theses and Dissertations
Tomographic techniques and their application to geotechnical and groundwater flow problems. Laidlaw, James Stuart. 1987.



TOMOGRAPHIC TECHNIQUES AND THEIR APPLICATION TO GEOTECHNICAL AND GROUNDWATER FLOW PROBLEMS

by

JAMES STUART LAIDLAW

B.A.Sc., The University of British Columbia, 1985

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF APPLIED SCIENCE in THE FACULTY OF GRADUATE STUDIES, The Department of Geological Sciences

We accept this thesis as conforming to the required standard

THE UNIVERSITY OF BRITISH COLUMBIA

December 1987

© James Stuart Laidlaw, 1987

In presenting this thesis in partial fulfilment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the head of my department or by his or her representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission.

Department
The University of British Columbia
1956 Main Mall
Vancouver, Canada
V6T 1Y3

ABSTRACT

Most downhole tools in use today measure properties immediately adjacent to the borehole, and as such, only a small portion of the subsurface volume is known with any degree of certainty. When dealing with geologic situations which are characteristically heterogeneous, the engineer often requires more information than present tests can provide.

Tomography is an in-situ testing method that allows the generation of a two-dimensional subsurface image by reconstructing material property variations between boreholes. It is essentially a solution to the inverse problem: signals are measured and, through computer manipulation, are used to infer material contrasts in the subsurface.
For the purposes of this thesis, a two-dimensional configuration is used to demonstrate and evaluate the tomographic technique, with source and receiver locations positioned at intervals down adjacent and nearly vertical boreholes. Both iterative and direct matrix solution methods are used to evaluate the use of seismic and groundwater flow data for subsurface tomography. The iterative methods include a variation of the classical algebraic reconstruction technique (CART), a modified version of the ART algorithm (MART), and a modified version of the ART algorithm using the Chebyshev norm criterion (LART). The purpose of the iterative tests is to determine the best algorithm for signal reconstruction when data noise and different damping parameters are applied. The matrix methodologies include a constrained L1 linear approximation algorithm and singular value decomposition (SVD) routines. These methods solve the set of linear equations (Ax = b) which the tomographic techniques produce. The purpose of this stage of testing is to optimize a direct method of solution to the sets of linear equations such that different forms of anomaly can be discerned.

Numerous synthetic seismic and groundwater data sets are used by both iterative and matrix algorithms. Seismic test data sets are generated by calculation of transit times through materials of known seismic velocity. Groundwater test data sets are generated by drawdown analyses and finite element procedures.

All algorithms demonstrate a reasonable ability at reconstructing sections which closely resembled the known profiles. Vertical anomalies, however, are not as well defined as horizontal anomalies. This is primarily a result of incomplete cross-hole scanning geometry, which also affects the rank and condition of the matrices used by the direct forms of solution.
The addition of Gaussian noise to the data produces poor reconstructions regardless of the type of algorithm used. This emphasizes the fact that tomographic techniques require clear and relatively error-free signals.

TABLE OF CONTENTS

Abstract
Table of Contents
List of Figures
Acknowledgements
Preface

Chapter 1
1.1 Introduction
1.2 Purpose and Objective

Chapter 2
2.1 Algebraic Reconstruction Techniques (ART)
2.1.1 Brief Historical Overview of Tomography
2.1.2 Mathematics of ART
2.2 Summary and Conclusion

Chapter 3
3.1 Reconstruction Techniques
3.2 Methods Used in this Investigation
3.2.1 The Classical ART Algorithm (CART)
3.2.2 The Modified ART Algorithm (MART)
3.2.3 The Modified ART Algorithm Using the L∞ Criterion (LART)
3.2.4 Matrix Reconstruction Technique
3.3 Summary and Conclusions

Chapter 4
4.1 Testing Procedures
4.1.1 Seismic Considerations
4.1.2 Groundwater Considerations
4.2 Test Configurations
4.2.1 Seismic
4.2.2 Groundwater
4.2.3 Parameters Varied
4.3 Summary and Conclusions

Chapter 5
5.1 Applications of ART
5.2 Stopping Criteria and Image Resolution for the Iterative Techniques
5.2.1 Stopping Criteria
5.2.2 Image Clarity
5.2.3 Summary and Conclusions
5.3 Test Results
5.3.1 Seismic Patterns
5.3.2 Groundwater Patterns
5.3.3 Timings
5.4 Analysis of Test Results
5.4.1 The Best Iterative Algorithm
5.4.2 Applications
5.5 Summary and Conclusions

Chapter 6
6.1 Applications of Matrix Techniques
6.1.1 Introduction and Review
6.2 Initial Test Results
6.2.1 Results
6.3 Alterations to the Procedure
6.3.1 Redundant Data
6.3.2 Elimination of Low Singular Values
6.3.3 Producing a Highly Underdetermined Problem
6.4 Special Problems
6.4.1 Thin Fracture Analysis
6.4.2 High Contrast Anomalies
6.4.3 Horizontal Variations
6.4.4 Noise in the Data
6.4.5 Parameterization Effects
6.4.6 A Priori Information
6.5 Summary and Analysis of Test Results
6.5.1 Conclusion

Chapter 7
7.1 General Conclusions
7.2 Further Study
7.2.1 Improvements on the ART Algorithm
7.2.2 Applications Research
7.3 Summary and Conclusions

References

LIST OF FIGURES

1. Cross Hole Configuration
2. Computer Aided Tomographic System (After Scudder, 1978)
3. Tomographic Section Discretized into Pixels
4. Pixels Which are Not Intercepted by Raypath
5. Underdetermined System
6. Overdetermined System
7. The Tangent Law
8. Ray Width Calculation Used in Classical ART
9. Areas Not Included in the Reconstruction
10. The Influence of Outliers (After Menke, 1984a)
11. Calculation of Length Matrix
12. Method of Intercept Calculation (After Dunbar, 1986)
13. Cross Hole Seismic Investigation
14. Cross Hole Transient Groundwater Investigation
15. Seismic Test Pattern. Test Pattern A
16. Seismic Test Pattern. Test Pattern B
17. Homogeneous Seismic Test Pattern
18. One Layer Seismic Test Pattern
19. Two Layer Seismic Test Pattern
20. Homogeneous Groundwater Test Pattern
21. One Layer Groundwater Test Pattern
22. Variation of Ray Coverage
23. Test Pattern A by MART. 0.5 Damping, No Noise
24. Test Pattern A by LART. 0.5 Damping, No Noise
25. Test Pattern A by CART. 0.5 Damping, No Noise
26. Test Pattern A by CART. 0.5 Damping, No Noise
27.
The Influence of Damping Parameters on Convergence
28. Section Demonstrating Vertical Smearing
29. Raypaths Tending to Skim Horizontal Boundaries
30. Demonstrating the Smearing on a Horizontal Anomaly
31. Test Pattern A by LART. 0.5 Damping, 0.05 Noise
32. Test Pattern A by LART. 0.5 Damping, 0.1 Noise
33. LART and Test Pattern A with Initial Matrix = 0.0
34. Original Homogeneous Test Pattern
35. Seismic Pattern by MART. 0.5 Damping, No Noise
36. Seismic Pattern by LART. 0.5 Damping, No Noise
37. Seismic Pattern by CART. 0.5 Damping, No Noise
38. One Layer Pattern by CART. 0.5 Damping, No Noise
39. Two Layer Pattern by MART. 0.5 Damping, No Noise
40. One Layer Pattern by LART. 0.5 Damping, No Noise
41. One Layer Pattern by MART. 0.5 Damping, No Noise
42. Two Layer Pattern by LART. 0.5 Damping, No Noise
43. Two Layer Pattern by CART. 0.5 Damping, No Noise
44. The Effect of Noise Level on Solution Convergence
45. Test Pattern B by CART. 0.5 Damping, No Noise
46. Test Pattern B by LART. 0.5 Damping, No Noise
47. Homogeneous Pattern by CART. 0.5 Damping, No Noise
48. Homogeneous Pattern by LART. 0.5 Damping, No Noise
49. Effect of Damping Value on Convergence
50. Finite Element Configuration for Transient Test
51. One Layer Groundwater Pattern by MART. 0.5 Damping, No Noise
52. One Layer Groundwater Pattern by LART. 0.5 Damping, No Noise
53. MART Convergence Characteristics with 0.5 Damping
54. Algorithm Execution Time
55. Singular Values of the Seismic Test Pattern
56. Addition of Redundancy
57.
Two Orders of Redundancy by CL1
58. Three Orders of Redundancy by CL1
59. Singular Values of Redundant Data Sets
60. One Layer Seismic Pattern by SVD
61. Two Layer Groundwater Flow Pattern
62. Central Anomaly Groundwater Pattern
63. Two Layer Groundwater Pattern by SVD
64. Central Anomaly Pattern by SVD
65. Highly Underdetermined Configuration by SVD
66. Two Order of Magnitude Diffusivity Contrast
67. Horizontal Variations Producing Vertical Anomalies
68. Configuration for Minor Horizontal Diffusivity Contrast
69. Configuration for Large Horizontal Diffusivity Contrast
70. Groundwater Pattern With Minor Horizontal Diffusivity Variation
71. Groundwater Pattern With Large Horizontal Diffusivity Variation
72. One Layer Seismic Pattern by SVD. 0.05 Noise
73. One Layer Seismic Pattern by SVD. No Noise
74. One Layer Pattern by SVD Using Finer Meshes
75. One Layer Pattern by SVD. 0.05 Noise
76. Borehole Pixels Eliminated From Matrix Calculation
77. Seismic Cone Penetrometer and its Application

ACKNOWLEDGEMENTS

I would like to thank Dr. W. S. Dunbar for his help in the evolution of this thesis. He not only suggested the topic, but provided invaluable counselling and input (and prodding) throughout the course of the work.

I would also like to thank the University of British Columbia for the 1986-1987 University Graduate Fellowship which provided me with substantial financial assistance. A special thank you also goes out to the faculty, staff and students of the Departments of Geological Sciences and Civil Engineering at the University of British Columbia for their assistance in getting my thoughts and words together.

Lastly, but certainly not least, I thank my family for putting up with me during the long months of research and writing.
PREFACE

An interesting observation:

"If you steal from one author, it's plagiarism; if you steal from many, it's research."

Wilson Mizner*

And, a not uncommon fate for theses:

"Many thanks for your book (thesis); I shall lose no time in reading it."

Benjamin Disraeli*

* as stated by N.G. McPhee (1978) and (1981)

CHAPTER 1

1.1 Introduction

Engineers and designers who deal with structures built on or in the earth have been faced with the perpetual problem of finding out what is actually beneath the ground they work on. Geologic deposits are characteristically heterogeneous, either on the microscale (microfractures, crystalline inclusions, etc.) or on the macroscale (layer pinchouts, channel deposits, faults, folds, etc.), and as such, a detailed subsurface picture of what exists at any given site is extremely difficult to obtain. Here, the term picture is used to describe the profile obtained from crosshole scanning. Often, the engineer must rely on past construction experience in a given area to provide him or her with an understanding of the subsurface conditions.

In-situ exploration has become standard procedure for all scales of geotechnical investigation in recent years. Previously, only large scale projects such as dams and bridge foundations warranted the costs and time for subsurface exploration. Now, at least one borehole or in-situ test at the site is usually required by the engineer before any sort of design is undertaken. In spite of these advances in the ground investigation process, a few boreholes are not enough to characterize the conditions under an extensive foundation. Indeed, for one 2-1/2 inch (6.35 cm) diameter borehole used to investigate the ground conditions under a 10 by 20 meter foundation, the ratio of the area of soil structure known to the area of soil structure at the site is over 1:15000.
Because of the high cost of extensive subsurface investigations, the main thrust of current in-situ testing is to make the most of what is available. Sophisticated downhole logging and testing devices have been and are being developed to fill this need (Robertson, 1985; Campanella and Robertson, 1981); however, the question that still remains is: can the information obtained from one or more boreholes be applied (with confidence) over the entire site? Or, stated another way, is the geology sufficiently continuous at the site so as to allow extrapolation of borehole data? Often, the answer to one or both of these questions is a conjectured no. Therefore, a method or methods which sample large portions of the subsurface seem to be necessary.

Tomography is a method by which large volumes of subsurface material may be scanned and
Applications of tomography have also been extended to radio astronomy, ical seismology,  X - r a y crystallography and electromagnetic  investigations  1974;  geophys-  ( M c M e c h a n , 1983). T h e  use of tomography, however, is by no means limited to the fields listed here.  1.2 Purpose and Objective T h e purpose of this thesis is to examine four interpretational algorithms used in tomographic imaging and determine whether different forms of input and under which circumstances this input can be applied to them.  Three of these methods involve the iterative solution to the algebraic  reconstruction problem and the fourth involves direct matrix solution. T h e iterative techniques will be compared on the basis of computational speed, reconstruction accuracy and solution convergence.  A s cross-hole  in practice and their theories are well established, iterative algorithms.  seismic techniques have been extensively these methods will be applied to evaluate  used the  Hydrologic data will also be used at a later stage to examine reconstruction  efficiency with data from a different form of source input. A number of different data sets (both seismic reconstruction method. optimize  and hydrologic)  will be used for the matrix  T h e main thrust of this stage of the investigation,  a direct method  of matrix solution  however,  will be to  so that many different forms of anomaly  distinguished, and the validity of using these two different radiation types be established.  9  may be  //*&//  •  ®  0  0  1 0  ®  SOURCE  0  RECEIVER  \  RAY PATH  LOCATION LOCATION  i Figure  1  :  Cross Hole C o n f i g u r a t i o n T y p i c a l l y in Geotomographic Scanning  3  Used  T h e objective of these comparisons is to extend the tomographic technique to geotechnical and groundwater flow problems by showing that these solution methods will produce reasonably accurate subsurface profiles which identify anomalous zones when seismic or transient flow data is used as input. 
A second objective is to show that the iterative techniques are capable of running on present-day personal computer systems.  CHAPTER 2  2.1 Algebraic Reconstruction Techniques Algebraic reconstruction is a method of solving large, sparse systems of equations. mographic techniques  A s to-  and applications have developed over the years, so has the assortment  algebraic reconstruction methodologies,  of  with almost every author having his/her own preference as  to how the reconstruction technique should be carried out. T h e individual programs differ because there is an ongoing search for the 'best' algorithm which will produce the 'best' overall picture in any given set of circumstances (eg.: for unusually shaped anomalies, for refraction of rays, for noise in the data, etc.).  Despite the ever increasing number of quasi-unique methodologies,  the basic  algebraic reconstruction technique has changed little.  2.1.1 Brief Historical Overview of Tomography R a d o n , in his 1917 paper " O n the Determination of Functions from their Integrals along Certain Manifolds"  laid the mathematical groundwork for tomography by developing the earliest  known  solution to the reconstruction problem (Horn, 1978; Scudder, 1978). However, it was Bracewell in 1956 who first used the reconstruction techniques in a practical sense.  Bracewell used a series of  strip sums of microwave radiation to reconstruct the image of the sun and thus prove the validity of the tomographic technique.  Similar methods, but on a much smaller scale, have been used  by electron microscopists in their analysis of the structure of complex specimens  (Scudder, 1978).  T h e shift towards the medical applications of tomography began in the 1960's and early 1970's. T h e theory of using reconstruction techniques with radioactive methods led to the development of the first head scanner by Hounsfield at the Center Research Laboratories of E . M . I . L t d . in 1970. 
As well, many important papers were published at this time discussing reconstruction processes and their relative merits. Convolution methodologies were analysed by Ramachandran and Lakshminarayanan in 1971, and Mersereau (1973) used Fourier analysis to recover pictures from their projections. Several papers compared the available algorithms, such as Herman's paper (1972) comparing algebraic reconstruction with summation techniques, Herman and Rowland (1973) analysing algebraic reconstruction techniques (ART), convolution, and simultaneous iterative reconstruction techniques (SIRT), and Gilbert's paper (1972), which again compared SIRT and ART, with reference to Fourier techniques. Other authors, however, chose to investigate the behaviour of the basic ART algorithm and offer suggestions as to its improvement. Two excellent papers along these lines are by Gordon (1974) and Herman, Lent and Rowland (1973).

In the middle 1970's and early 1980's, new applications for ART were being investigated. It was found that tomography could be applied to geophysical exploration, and a renewed interest in these techniques was born. McCann, Grainger and McCann (1975) attempted to interpret inter-borehole acoustic measurements in terms of geologic features, which led the way for cross-hole imaging in coal mines (Radcliff and Balanis, 1979; Mason, 1981) and in crystalline rocks (Wong, Hurley and West, 1983; Ivansson, 1986; Gustavsson et al, 1986). Electromagnetic probing of the subsurface was also attempted at an underground mass transit site near Washington D.C. (Dines and Lytle, 1979) and on a granitic pluton (Ramirez, 1986), both of which produced good results. Today, there are even more applications for tomographic techniques using wide ranges of radiation types.

2.1.2 Mathematics of ART

The proof and mathematical concepts of Radon's solution to the reconstruction problem are presented in his 1917 paper. For the purpose of this thesis, however, only the most important formula to come out of the analysis will be discussed. Radon's inversion formula is given as:

f(r,φ) = -(1/2π²) ∫₀^π ∫_{-∞}^{+∞} (1/t) (∂p(l,θ)/∂l) dl dθ     (1)

where f(r,φ) is the absorbing density at the point (r,φ) in polar coordinates; p(l,θ) is the density integral or ray sum along the ray (l,θ); θ and φ are angular values; l is the perpendicular distance of the ray from the origin of the region being scanned; r is the radial distance; and t = l − r cos(θ − φ) is the perpendicular distance from a given point to a ray. The assumptions necessary in order to apply this formula are (Horn, 1978):

1) f(r,φ) is bounded, continuous and zero outside the region scanned,
2) p(l,θ) will then also be zero outside a certain range for l,
3) p(l,θ) is continuous, and
2.1.2 Mathematics of A R T T h e proof a n d mathematical concepts of Radon's solution to the reconstruction problem are presented in his 1917 paper. For the purpose of this thesis, however, the most important formula to come out of the analysis is all that will be discussed.  Radon's inversion formula is given as:  (1)  where f{r,4>) is the adsorbing density at the point r,9 in polar coordinates;  p(l,9) is the density  integral or ray s u m along the ray (1,0); 0,<f> are angular values; lis the distance perpendicular from the region being scanned; r is the radial distance; t=  I — rcos(9 — <p) or the perpendicular distance  from a given point to a ray. T h e assumptions necessary  in order to apply this formula are (Horn,  1978): 1)  f(r,<t>)  is bounded, continuous and zero outside the region scanned,  2) then p(l,9) will also be zero outside a certain range for /, 3) p(l, 9) is continuous, and  6  4) the partial derivative of p{l,0) with respect to / is continuous. T h i s formula is designed for essentially continuous circumferential sampling of a region (as is indicated by the limits of integration)  and it can be used almost directly i n modern medical imaging  as complete (360 degree) sampling can be achieved through rotation of the radiation source and receivers (Figure 2). T h e C A T scanners presently in wide use use around the world are prime examples of this technology.  T h e implication of a complete scanning field is that virtually all points  in the digitized section will be sampled by at least one raypath. Borehole tomography, on the other hand, cannot economically use full 3 6 0 ° scans. T o obtain such a scan would require the drilling of a large number of holes to fully encompass the area of interest. T h e typical application of a downhole tomographic technique would be to shoot in a criss-cross manner between two adjacent and nearly vertical boreholes for  (See Figure 1).  
reconstruction is a rectangular section of incomplete raypath coverage.  T h e resulting area T o handle the new  geometry, modifications must be made to Radon's original inversion formula. For most radiation types used i n modern tomography, two basic parameters may be determined from well to well shooting:  either the slowness profile or the attenuation profile of the material.  T h e values measured at the receivers for these two parameters are time of flight for the slowness profile and response history for the attenuation profile. For each raypath through the material in question, the relationship of slowness to travel time is :  (2) where k. is the ray number, T is the travel time, / is the ray length, and v is the velocity at position (x,y).  T h e amplitude to attenuation field a(x, y) is :  where A is the amplitude of the response k, A„ is the source amplitude, a(x,y)  = -KJ/CQ,  f is the  frequency and c and Q are dependent on position. A„ must be corrected for the radiation pattern, geometric  spread and instrument response  attenuation  so that a will only describe the effects of apparent  (which is the intrinsic dissipation and elastic scattering).  then the amplitude response formula reduces to : 7  If one takes P  k  =  ln(A/A„)  0= rotation angle  X-RAY DETECTOR  DATA  X-RAY  ACQUISITION  TUBE  SYSTEM  GENERATOR  COMPUTER  PATIENT TABLE  G-RAPHIC D I S P L A Y S Y S T E M  SYSTEM  SCANNER SySTEM  o MA&NETIC  DISK  TAPE.  Figure  2  :  Diagram of a Computer Aided Tomographic System Used in Medical Imaging (After Scudder, 1978)  8  Pk = - j a(x,y)dl  (4)  which is of the same form as the travel time response formula. T h u s , a general integral equation can be written as :  R=  f  fc  S{x,y)dl  (5)  J ray  with R interest  representing the response  k  (Peterson  et al, 1985).  
parameter and S(x, y) representing the material property of  Note that the general integral formula derived in equation 5 is  similar to the original R a d o n inverse formulation of equation 1. In practice, the region between the boreholes is discretized into a grid of 'pixels' or 'cells'. These pixels can have many shapes but are usually chosen to be rectangular or square. Figure 3 demonstrates  the discretization  process.  W i t h the discretized picture, the line integral over the raypath becomes a summation:  NPX NPY  Rk = Y,Y,  s  '.ik >.*  (6)  AL  1 1 . where 1 < k < n, n is the number of rays, R at the  i,j  and NPX,  pixel intercepted by the  k th  ray,  k  is the response for ray k,  AL -  i} k  is the length of the  k  is the material property th ray through the  i,j  pixel,  NP Y are the number of pixels in the i and j directions. T h i s is the basic formula used  for algebraic reconstruction techniques.  O n e can see that the above formula (6) readily lends itself  to matrix notation of the form:  Ax=b  (7)  where A is an m by n matrix of lengths, x is a vector of properties, and 6 is a vector of responses. Intuitively, the matrix A must be very sparse as for each raypath, a large number of the pixels in the grid are not intercepted (Figure 4). T h u s a reasonable limit must be used on the discretization size of the pixels so that enormous matrices are not produced. For example, a discretization of 10 x 10 pixels with source and receiver positions in the center of each pixel adjacent to the boreholes produces a matrix that has 100 pixels times 100 rays = 10000 elements. 
of pixels by a smaller discretization increases the matrix size geometrically.

[Figure 3: Tomographic section discretized into pixels or cells.]
[Figure 4: Demonstration of the large number of pixels which are not intercepted by a raypath.]

The large storage requirements of matrix A usually preclude the use of solution methods that involve the storage of the entire matrix A at one time in memory. An example of such a technique is the singular value decomposition to be discussed in Chapter 6.

One way around the problem of large matrix size is to ignore and/or avoid the matrix development. Instead, one treats each raypath (equation 6) as a separate equation and solves the reconstruction problem iteratively. Basically, the steps in the solution process are:

1) make an initial estimate of the material properties S_{ij,k},
2) find the ray sum R'_k for the existing material properties S_{ij,k},
3) compare R'_k with R_k (the measured time),
4) adjust the elements of the matrix S_{ij,k} so that R'_k = R_k (usually accomplished by simple back projection),
5) continue for all rays until R'_k = R_k (for all k) or until a set tolerance is reached.

The major area for improvement of the reconstruction process has typically been step 4, and it is here that the many reconstruction algorithms differ. Excellent papers which describe the mathematical development of algebraic reconstruction schemes are by Dines and Lytle (1979); Peterson, Paulsson and McEvilly (1985); Fawcett and Clayton (1984); Gordon (1974); and Herman, Lent and Rowland (1973).

Before concluding this section, some important points must first be addressed. The first point is that the matrix A is not always a square matrix.
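As an aside, the five iterative steps above amount to a damped row-by-row back-projection (the Kaczmarz scheme underlying ART). The sketch below is illustrative only and is not code from the thesis; the tiny 3x3 "length matrix" and the property vector are invented so the example is self-checking.

```python
import numpy as np

def art_solve(A, b, sweeps=200, damping=0.5):
    """Damped ART/Kaczmarz: for each ray k, project the residual
    b[k] - A[k] @ x back over the pixels that ray touches (step 4),
    scaled by a damping factor, and sweep until tolerance (step 5)."""
    x = np.zeros(A.shape[1])                 # step 1: initial estimate
    for _ in range(sweeps):
        for k in range(A.shape[0]):
            row = A[k]
            norm = row @ row
            if norm == 0.0:
                continue                     # ray touches no pixels
            residual = b[k] - row @ x        # steps 2-3: R'_k vs R_k
            x = x + damping * (residual / norm) * row
    return x

# Invented stand-in for a length matrix and measured travel times.
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])
x_true = np.array([2.0, 1.0, 3.0])           # "true" slowness per pixel
x_rec = art_solve(A, A @ x_true)             # recovers x_true closely
```

Note that A here is square and full rank only by construction; a field length matrix is rarely square, and each of its rows is mostly zeros because a ray misses most pixels.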
The implications of this are that there may be too much information (overdetermined) or too little information (underdetermined) for the iterative solution process. These situations can occur if there are not enough raypaths to cover the grid area (underdetermined, Figure 5) or there are many more raypaths than pixels (Figure 6) and each pixel is intercepted by several rays (overdetermined).

In the case of the purely overdetermined problem, one can use a least squares solution to determine the best picture. The redundant information may even be used to check the picture obtained or aid in its resolution. For the purely underdetermined problem, no unique solution exists for the set of equations. Indeed, there is an infinite number of possible solutions to the data set. Only with the use of additional information can a unique solution be found.

[Figure 5: Underdetermined system where the number of raypaths (equations) is greatly exceeded by the number of pixels (unknowns).]
[Figure 6: Overdetermined system where the number of raypaths (equations) greatly exceeds the number of pixels (unknowns).]

This 'a priori'
Interior pixels, on the other h a n d , are intercepted by a multitude of raypaths.  T h e result is that  different pixels in the same domain may be either under or overdetermined depending on the degree of raypath coverage. algorithm.  T h i s can result in slow convergence or no convergence of the iterative solution  Menke (1984a) discusses these types of problem in detail and outlines  a number of  solution strategies. T h e second point that must be addressed is the assumption of the straight line raypath. T h e algorithms examined in this paper all have the implicit assumption straight line from the source to receiver.  that the rays propagate in a  T h i s assumption is not strictly valid because refraction  and reflection will occur as the rays travel through media of different material properties.  Flow  of water, for example, will refract according to the Tangent Law (Figure 7) and optical rays will refract according to Snell's Law. Incorporation of ray bending into the reconstruction algorithm is difficult because the program not only has to solve the linear equation, but must recalculate  the  length of the ray through each pixel at each iteration due to ray bending. Iterative algorithms with ray bending have been written (Baguelin, pers. comm.), but they require some assumptions to be made about the subsurface structure and are slow to converge. T h e assumption of the straight line optical model is incorrect in the strict sense, however, it can be used with confidence as long as the contrast between materials is not great (ie.: for seismic models, a low density difference between layers).  In such a case, the difference in reconstruction  obtained by bent rays and by straight rays is insignificant (Peterson et al, 1985; M c M e c h a n , 1983).  
2.2 Summary and Conclusion

Algebraic reconstruction techniques have had a long history, originating from Radon's original inversion formulation and developing into a wide variety of algorithms that have been used for medical, geophysical and geotechnical purposes. The mathematics of reconstruction is well established and actually quite simple; the algorithm is merely the solution of a set of linear equations which result from the discretization of the problem domain and the passage of rays through this domain. For the iterative techniques, the equations are solved one at a time, until the calculated responses (via calculations from the problem domain) are equal to or nearly equal to the measured responses. Matrix solution of these equations is possible by way of matrix solving algorithms, but due to the large, sparse matrix that results from this process, there is often reluctance to try such techniques. The problem of too much or too little information is basic to all field experiments. Too much information is generally good as it implies redundancy; too little information, however, indicates that no unique solution exists or can be found without additional input data. Finally, the straight line raypath model used for most reconstruction techniques is not strictly correct; however, for most applications it is sufficiently accurate.

CHAPTER 3

3.1 Reconstruction Techniques

Numerous iterative algorithms have been developed over the years to solve the reconstruction problem of tomography.
These algorithms differ in either the method by which the deviation from the response data is back-projected over the individual raypaths, or in the method of iteration through the suite of raypaths available. The motivation for developing the different algorithms is to improve the clarity and accuracy of the reconstruction. Here, clarity is defined as the ability of the algorithm to produce a profile which has well defined anomaly boundaries that correspond to the boundaries of the known profile, and accuracy is the ability of the algorithm to produce a profile which replicates or resembles the known profile in terms of material property values and anomaly shape.

3.2 Methods Used in this Investigation

3.2.1 The Classical ART Algorithm (CART)

The original (or classical) ART algorithm was formulated by Gordon et al (1970). While their procedures generated a good deal of controversy at the time, the algorithm has since been shown to have a firm mathematical basis (Herman et al, 1973; Gordon, 1974). The original algorithm was intended for applications in electron microscopy and X-ray photography (and later in medical tomography), which had the implicit assumption that a full 360° view could be obtained. This assumption, however, is not valid for cross-hole studies, but little needs to be changed in the algorithm to account for (or adjust for) the lack of coverage.

Assuming that the region of interest R has been digitized into square (or rectangular) pixels of uniform grayness (where the level of grayness signifies the intensity of the physical property being measured at that point), Gordon (1974) describes the classical ART algorithm as follows. Let f_i^q designate the qth estimate of the density function f at the reconstruction element i in R. Let

    P_j^q = Σ_i C_ij f_i^q

where C_ij is a weighting function (= 1.0 in Gordon et al, 1970), and P_j^q is the value of the qth estimate of the projection P_j.
Let f̄ be the mean density of the object in R. Then, the ART algorithm is:

    f_i^(q+1) = f_i^q + (P_j − P_j^q)/N_j                    (9)

where i = 1,...,n, n is the number of elements, and N_j is a weighting function (the number of elements in ray j).

Expressed in words, this means that the iterative process is started with all reconstruction elements set to some constant (f̄). In each iteration, the difference between the sum of the actual projection data and the estimated projection data is calculated (P_j/N_j − P_j^q/N_j), and the correction (or difference) is evenly divided between all reconstruction elements which are a part of the ray in that projection. Each ray is considered in turn, and corrections are applied over several iterations until a convergence criterion is reached.

The classical ART algorithm used in this study is a slightly modified version of the algorithm described above. In the computer algorithm, the region of interest (as before) is divided into a group of rectangular pixels with the boreholes forming the horizontal limits of the space. The size of the pixels is based on the average spacing of the receiver positions down the response borehole, and this average spacing is then used (with modifications to account for differing hole distances) to divide the region into vertical sections. In practice, the spacing of source and receiver locations should be based on the scale of feature to be resolved. Such information may be available from borehole logs. This discretization process produces (for convenience) an n by n space of pixels. The calculation process begins by determining which pixels are intercepted by which rays. The rays themselves are all possible straight line connections between source and receiver points, and are of finite width. The width of ray is very important in that it determines or selects which pixels will have an influence on the total ray sum.
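The procedure expressed in words above can be sketched as follows, assuming unit weights (C_ij = 1) and a toy 2 by 2 image probed by its row and column sums; all values are illustrative, not from the thesis:

```python
import numpy as np

def classical_art(rays, P, n_pixels, f0, n_iter=50):
    """Additive ART with unit weights, in the spirit of Gordon et al.
    (1970): each ray's projection misfit is divided evenly among the
    N_j elements lying in that ray.

    rays     : list of pixel-index lists, one per ray
    P        : measured projection (ray-sum) data, one value per ray
    n_pixels : number of reconstruction elements
    f0       : starting constant (the mean density f-bar)
    """
    f = np.full(n_pixels, f0, dtype=float)
    for _ in range(n_iter):
        for j, pixels in enumerate(rays):
            Pq = f[pixels].sum()                    # estimate of P_j
            f[pixels] += (P[j] - Pq) / len(pixels)  # even correction
    return f

# Toy 2x2 image probed by row and column sums (unit path lengths).
true_f = np.array([1.0, 2.0, 3.0, 4.0])
rays = [[0, 1], [2, 3], [0, 2], [1, 3]]
P = np.array([true_f[r].sum() for r in rays])
f = classical_art(rays, P, 4, f0=true_f.mean())
print(f)  # [1. 2. 3. 4.] -- every measured ray sum is reproduced
```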
Herman et al (1973), as well as many other authors, have suggested that the ray width be a function of the angle of the ray to the horizontal, namely:

    W_e = W × max(|sin θ|, |cos θ|)                    (10)

where θ is the angle of the ray with respect to the horizontal, W is the vertical spacing or size of the pixels, and W_e is the resultant ray width. This width calculation is an effort to avoid 'streaking', or having too many pixels influencing the ray sum. This process is discussed in detail by Herman et al (1973). In the computerized model, equation 10 is used. Once the ray width is determined, all pixels whose centers lie within this ray are determined and stored for use in the reconstruction process. The length of path, as well as which pixels lie within that path, are stored for each individual ray (Figure 8).

The reconstruction process begins with a matrix of values representing the pixel values of the parameter of interest (say inverse velocity) being set to some average value calculated from the ray sum data. Each set of ray data is then used individually to determine whether the calculated ray sum (obtained by multiplying the pixel length by the inverse pixel velocity) is the same as the ray sum of the data. If it is not, the difference (whether positive or negative) is evenly distributed over the pixels in the ray. Thus, this is an unconstrained ART algorithm where the magnitude or sign of the corrections are not restricted (Herman et al, 1973). The process continues through the rays sequentially, with one iteration indicating that all rays have been analysed once. The iterations proceed until the overall discrepancy between calculated and actual ray sums meets a preset criterion. The convergence criterion used here will be discussed in a later section. Often, damping parameters are applied in the Classical ART (as well as other algebraic reconstruction) formulations.
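A sketch of this width calculation, assuming the W × max(|sin θ|, |cos θ|) form shown in the Figure 8 annotation (angles and pixel size below are illustrative):

```python
import math

def ray_width(theta_deg, pixel_size):
    """Ray band width as a function of ray inclination: the band spans
    one pixel measured against the ray's dominant direction, so that
    pixels are not swept in by near-tangential overlap ('streaking')."""
    theta = math.radians(theta_deg)
    return pixel_size * max(abs(math.sin(theta)), abs(math.cos(theta)))

print(ray_width(0.0, 1.0))   # horizontal ray: one full pixel width, 1.0
print(ray_width(45.0, 1.0))  # 45 degree ray: 1/sqrt(2) of the pixel size
```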
The damping parameters are used to avoid the oscillation of the iterative solution which has been noted by many authors. The oscillation is the result of the full back projection of the time summation error over the pixels lying within the ray's domain. Basically, the damping parameters simply reduce the magnitude of the back projection so that convergence is facilitated. The use of damping parameters is important in the reconstruction process but, according to Peterson et al (1985), the proper form of damping parameter is often difficult to obtain because it is invariably site specific.

Figure 8: Ray Width Calculation Used in the Classical ART Algorithm, showing pixels of width W with centers marked, the raypath at angle θ to the horizontal, and the calculated width of the ray band, W_e.

3.2.2 Modified ART Algorithm (MART)

The modified ART algorithm is, as the name implies, a modification of the original or classical ART algorithm. It was developed primarily to deal with the fact that the classical ART algorithm ignored the contribution of pixels through which a portion of the ray band might pass (Figure 9). In the modified ART algorithm, a straight line is assumed between shot and receiver positions, and the contribution of each pixel which is cut by that ray is included in the reconstruction process. Following the same assumptions as the classical ART derivation, the modified ART algorithm also starts with a homogeneous region of uniform grayness. Shot and receiver boreholes delineate the lateral extent of the region, with the lower surface being the borehole bottoms and the upper surface usually coinciding with ground surface.
The modified ART program uses the same pixel generation algorithm as the classical ART algorithm, and thus an array of distances representing the extreme points of each pixel is generated based on the spacing of the boreholes and the receiver positions. The second step of the reconstruction process is to identify which pixels are crossed by which ray. This is accomplished by an iterative technique whereby shot and receiver locations are identified as to which pixel they lie in and where the pixel layer boundaries are with respect to these locations. Then, starting at the source hole, the algorithm calculates which pixels are dissected by the ray and the length of that ray in each pixel.

Figure 9: Demonstrating the Areas which are Not Included in the Reconstruction when Ray Width Calculations are Made

Once all rays are accounted for and all files are complete, the iterative reconstruction process begins. The algorithm, again, sets the pixel array to an average value (f̄), using the same process as the CART algorithm. Then, by analysing each ray individually, the algorithm compares real data to calculated pixel data. The mathematical process is as follows: the algorithm uses the function

    T_l = Σ Δx_ijk f_ijk                    (11)

summed over the N pixels intercepted by ray l, where l is the ray number; N is the number of pixels intercepted by ray l; Δx_ijk is the length of ray l passing through pixel ijk; and f_ijk is the material property at pixel ijk. This function calculates the time summation of the ray using the estimated f_ijk values of the pixels. The difference t_l − T_l = delta is then calculated. The difference is then distributed through the pixels of the ray, with each pixel's addition based on the overall proportion of the ray going through the pixel. This is again an unconstrained ART algorithm, as there is no restriction on the magnitude or sign of the difference.
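The time summation and length-proportional back-projection just described can be sketched as follows, on a made-up two-pixel example (all values illustrative):

```python
import numpy as np

def mart_sweep(f, rays, lengths, t_meas):
    """One sweep of a length-weighted ART update: each ray's travel-time
    misfit is back-projected over its pixels in proportion to the length
    of ray passing through each pixel."""
    for pix, dx, t in zip(rays, lengths, t_meas):
        T = np.dot(dx, f[pix])            # calculated time sum, ray l
        delta = t - T                     # misfit to distribute
        f[pix] += delta * dx / dx.sum()   # share proportional to length
    return f

# Made-up two-pixel domain: ray 0 crosses both pixels, ray 1 only pixel 1.
rays = [np.array([0, 1]), np.array([1])]
lengths = [np.array([1.0, 1.0]), np.array([2.0])]
true_slowness = np.array([1.0, 0.5])
t_meas = [float(np.dot(dx, true_slowness[p])) for p, dx in zip(rays, lengths)]

f = np.full(2, 0.75)                      # start from the mean value
for _ in range(25):
    mart_sweep(f, rays, lengths, t_meas)
print(f)  # recovers the true slowness [1.0, 0.5] for this consistent case
```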
    f_ijk^(q+1) = f_ijk^q + (t_l − T_l) ΔS_ijk / L_l                    (12)

where f_ijk^(q+1) is the (q+1)th estimate of f_ijk; f_ijk^q is the qth estimate of f_ijk; L_l is the total length of ray l; and ΔS_ijk is the length of ray l through pixel ijk. The calculation process continues until the stopping criterion is reached or a given number of iterations has been completed. One iteration is accomplished when the calculations for all rays have been completed once.

The MART algorithm described here is similar to that of Dines and Lytle (1979) (but not to Peterson et al's (1985) algorithm of the same name), and incorporates all intersected pixels into the reconstruction process. The algorithm, however, does not take into account the inevitable ray bending that takes place in-situ.

3.2.3 The Modified ART Using the L∞ Criterion (LART)

The LART algorithm is yet another variation of the basic algebraic reconstruction scheme. This procedure involves the utilization of the L∞ or Chebyshev norm to provide an iterative methodology. A norm is a measure of length or distance that is frequently used in statistics for comparison purposes. This measure of length is one way of determining the goodness of fit of a particular data set, of which the Euclidean length is one such example. The most commonly employed norms are those based on the sum of some power of the elements of a vector, and are given the name L^n, where n is the power:

    ||e||_n = ( Σ_i |e_i|^n )^(1/n)                    (13)

Successively higher norms give the largest element of e successively larger weight. The limiting case of n → ∞ is equivalent to the selection of the vector element with largest absolute value as the measure of length, and is written as:

    L∞ norm:  ||e||_∞ = max_i |e_i|                    (14)

(Menke, 1984a). In graphical form (Figure 10), the difference between the norms can clearly be seen, with the L1 norm almost ignoring the outlier whereas the L∞ norm is strongly affected by it.
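A quick numerical comparison of the norms, using an assumed misfit vector with one outlier:

```python
import numpy as np

# Misfit vector with one outlier: higher norms weight the outlier more.
e = np.array([0.1, -0.2, 0.15, 5.0])

l1 = np.abs(e).sum()            # L1  : robust, near-ignores the outlier
l2 = np.sqrt((e ** 2).sum())    # L2  : Euclidean (least-squares) length
linf = np.abs(e).max()          # Linf: worst-case (Chebyshev) length

print(l1, l2, linf)
# The outlier supplies about 92% of the L1 measure but 100% of Linf,
# which is why Linf-based estimates track the worst case.
```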
Thus, considering higher norms gives estimates that are nearer the worst case scenario for model parameters (Menke, 1984a).

Figure 10: The L^n Norms and the Influence of Outliers (after Menke, 1984a)

In the case of the computerized algorithm, the worst case possibility is when all pixels which are intercepted by a ray are weighted equally, regardless of the length of ray which intercepts that pixel. This process is similar to that used by Peterson et al (1985) in the derivation of their ART1 algorithm. The procedure of calculation is similar to that used by the MART algorithm, therefore it will not be discussed here. The main difference is that instead of weighting the difference calculated from the time sum and data values according to the length of ray through the pixels intercepted, the difference is merely divided up evenly over the total number of pixels intercepted by the raypath. It is by this method that the contributions of the pixels just barely touched by the ray can be given equal importance to those pixels more significantly intercepted.

3.2.4 Matrix Reconstruction Technique

The matrix construction scheme used in this study was first suggested by Dunbar (pers. comm.). In general, solving the inverse problem by means of matrix inversion techniques is avoided due to the large, sparse and generally random distribution of elements in the resulting matrix. Also, the fact that the problem may be (and usually is) underdetermined inhibits most inversion-style direct solutions. If, however, the problem is carefully formulated and a mainframe computer with ample storage and matrix decomposition routines is used, the use of matrices to solve the tomographic reconstruc-
(The algorithms could also be set up on a personal computer equipped with a math co-processor and sufficient storage.  It would probably  be necessary to consider 'packed' storage methods to store the matrix.) T h e algorithm was developed with seismic d a t a in m i n d , although later, transient groundwater flow data is also used. and  Therefore the ensuing discussion will be centered on travel times,  lengths  inverse velocity values in the formulation of the solution arrays such that:  [lengths] [l/velocity]  The CART,  format of the grid generation MART  = [time]  routine (discretization  and L A R T algorithms.  T h e grid is generated  (15)  of the region) is the same  as for  by the calculations made on the  average spacing of the receivers and the distance between the boreholes.  T h e extreme grid pixel  coordinates are then stored in an array. T h e generation of the matrix requires that the lengths of ray segments through the pixels be placed in a matrix (Figure 11), such that the rows of the matrix represent the shot (ray) numbers and the columns of the matrix represent the pixel numbers with each entry representing the length of ray through the pixel. T h e straight line ray segment passes from shot point to receiver point.  T h e length and relative position (in Zy,xn. coordinates)  of this  ray is known from the coordinate data read into the program. If one changes the coordinate system to a simpler one, say dividing the ray up so that its end points lie at +1 and -1 in this system, the calculation of lengths can be reduced to simple mathematical formulae. Referring to Figure 12, the vertical intercepts ( A L F A ) of the ray z z / can be calculated from the formula: t ;  ALFA(K)  = (2* X{1) - XS - XR)/{XR-XS)  (16)  where XR is the receiver position i.n real coordinates; XS is the source position in real coordinates; X(I)  is the incremental distance. The  horizontal intercepts are calculated using a similar formula.  
that what is placed in the ALFA(K) array is the relative position (unsorted, in +1,-1 space) of the horizontal and vertical intercepts on the line designating the raypath. After sorting these values into ascending order and converting ray coordinates to real world coordinates, the ray is divided up into segments representing the pixel interceptions. Calculations using x1, x2 coordinates are then made to identify the ray lengths, and a simple equation is used to place each length into its proper matrix location. The matrix is then written to a file where it is stored until the reconstruction procedures require it. A vector of shot times is also written to a separate file; this represents the b vector in equation 7.

Figure 11: Calculation of the A (Length) Matrix for the Matrix Reconstruction Techniques. Each row of the A matrix (in Ax = b) corresponds to a shot (ray) number, each column to a pixel number, and each non-zero entry is the length of the ray through that pixel.

Figure 12: Demonstration of the Method of Intercept Calculation (after Dunbar, 1986). For M horizontal and N vertical grid lines there are at most M-2 horizontal and N-2 vertical intercepts; an intersection occurs at x only if x1 ≤ x ≤ x2.

The purpose of this exercise is to use the A matrix (of pixel lengths) and b vector (of shot times) to determine the x vector (of the pixels' inverse velocity values). This process can be quite complicated if adequate matrix solution routines are not available. However, most mainframe computers have a number of matrix solution routines at their disposal.
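The intercept mapping of Equation 16 can be sketched as follows (the source and receiver positions are made up):

```python
def alfa(x, xs, xr):
    """Map a real-world grid-line position x into the ray's normalized
    coordinate system, in which the source sits at -1 and the receiver
    at +1 (Equation 16)."""
    return (2.0 * x - xs - xr) / (xr - xs)

# Source at x = 0 m, receiver at x = 10 m:
print(alfa(0.0, 0.0, 10.0))   # -1.0, the source end of the ray
print(alfa(10.0, 0.0, 10.0))  # +1.0, the receiver end of the ray
print(alfa(2.5, 0.0, 10.0))   # -0.5, a quarter of the way along the ray
```

Sorting these normalized intercepts into ascending order and mapping back to real coordinates yields the pixel-by-pixel segment lengths that fill one row of the A matrix.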
One of the methods chosen to solve the matrix problem presented here is a constrained L1 linear approximation model developed by Barrodale and Roberts at the University of Victoria, Canada. The process of solution is as follows: given an m by n system of linear equations

    Ax = b                    (17)

the algorithm calculates an L1 solution of (17) subject to p linear equality constraints

    Cx = d                    (18)

and q linear inequality constraints

    Ex ≤ f                    (19)

i.e., the subroutine determines a column vector x* which minimizes

    ||b − Ax||_1 = Σ_i |b_i − A_i x|                    (20)

subject to the given constraints (18), (19). The method is completely general in the sense that no restrictions are imposed on the matrices A, C, E. Furthermore, if no vector x satisfying (18) and (19) exists, the subroutine detects this and informs the user that the problem is infeasible (Barrodale and Roberts, 1978).

The L1 approximation is often preferable to the least squares (L2) solution in that the L1 solution is less influenced by noisy or erroneous data. This is a significant advantage in field testing, where random calibration and human errors are commonplace.

Another matrix solution algorithm makes use of the singular value decomposition (SVD). Menke (1984a) gives an excellent discussion of the theoretical background to the SVD. Basically, the method decomposes the m by n matrix A of equation 7 into the product of three matrices:

    A = U S V^t                    (21)

where U is an orthogonal m by m matrix and V is an orthogonal n by n matrix. S is an m by n diagonal matrix whose main diagonal is composed of the singular values of A, σ_i, 1 ≤ i ≤ n, arranged in descending order. In practice, only the first n columns of U are computed; the remaining m − n columns correspond to zero singular values and thus constitute a 'null' space. The rank of the matrix A is determined by the number of non-zero singular values.
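As a sketch (using NumPy on a made-up rank-deficient system), the SVD-based pseudo-inverse solution can be formed by inverting only the non-zero singular values:

```python
import numpy as np

def svd_solve(A, b, tol=1e-10):
    """Minimum-norm least-squares solution x = V S^-1 U^t b, with the
    reciprocal of any singular value at or below tol set to zero."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)  # first n cols of U
    safe = np.where(s > tol, s, 1.0)                  # avoid divide-by-0
    s_inv = np.where(s > tol, 1.0 / safe, 0.0)
    return Vt.T @ (s_inv * (U.T @ b))

# Rank-deficient system: two identical rays through the same two pixels.
A = np.array([[1.0, 1.0],
              [1.0, 1.0]])
b = np.array([6.0, 6.0])
x = svd_solve(A, b)
print(x)  # [3. 3.]: the zero singular value is handled gracefully
```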
Since the matrices U and V are orthogonal, the solution to the matrix equation Ax = b may be written

    x = A†b = V S^-1 U^t b                    (22)

where A† denotes the pseudo-inverse of A, and the matrix S^-1 is a diagonal matrix defined by

    S^-1_ii = 1/σ_i   if σ_i ≠ 0
    S^-1_ii = 0       if σ_i = 0                    (23)

3.3 Summary and Conclusions

The algorithms chosen for this investigation include CART (classical algebraic reconstruction techniques as proposed by Gordon et al, 1970), MART (modified algebraic reconstruction techniques, which are a variation of ART), LART (ART using the L∞ or Chebyshev norm), and matrix reconstruction techniques using CL1 (the constrained L1 linear approximation method) and singular value decomposition (SVD).

The mathematics of each solution form is well established and, for the first three methods listed here, involves the iterative solution of a set of linear equations representing ray path sums through
T h i s produces a rectangular domain through which the reconstruction takes place. T h e parameter measured by this configuration is the interval transit time with accuracy on the order of a few milliseconds along a straight line raypath connecting the shot and receiver positions. After discretizing the domain of interest,  the length of the ray through the pixels of the domain  can be calculated and a series of ray sums generated of the form  (24)  where At is the interval transit time; A i s the ray length through pixel value at pixel  B y utilizing the algebraic reconstruction  can approximate an inverse velocity  (or slowness) profile.  technique,  and v,-.j is the velocity these times and lengths  T h i s parameter of slowness not only  indicates what type of material is in the ground ( M c C a n n et al, 1975), but also the approximate density contrasts which are in the subsurface.  These density contrasts can be critical for foundation  stability calculations. 33  MULTI-CHAMMEL SEISMIC  b 6 DETONATOR.  PRINTER •em  ///$>>//  GEOPHONES  EXPLOSIVE CHANGES  RECEIVER  SOURCE  Hot£  F i g u r e 13:  HOLE  Layout of a Cross Hole S e i s m i c  34  Investigation  T h e use of seismic radiation has wide application as the energy produced by a seismic shot is large.  T h e frequency and amplitude attainable by seismic shooting also allows the wave to travel  and be detected at greater distance without severe attenuation.  T h i s , combined with the fact that  seismic data can have the energies of successive shots added together, makes seismic testing more desirable from a data acquisition point of view than groundwater pressure  testing.  4.1.2 Groundwater Considerations Groundwater pulse testing has been used for many years (Walter and T h o m p s o n , 1982; Prosser, 1981) yet it is often only attempted between packers in a single borehole. Crosshole modes of testing have been proposed (Black and K i p p . 
1981) and have been tried in some situations (Hsieh et al, 1985a and 1985b), but there remains the problem of achieving significant pressure changes which can be detected at distance over a reasonably short period of time. Slug tests require tens of minutes for the pressure front to travel a few meters. An additional problem with crosshole groundwater testing is that the theory for multilayer domains is complex, as can be seen in the analogous heat conduction formulae given by Mikhailov and Ozisik (1984) and Luikov (1968). While conductivity tensors can be calculated (Hsieh et al, 1985a,b; Way and McKee, 1982), the identification of multiple layers is difficult. The tomographic technique can serve to overcome the difficulty of multilayer analysis.

The ideal configuration for a cross-hole groundwater test would be very similar to that of the seismic array, that is, a series of shot points down the source hole and a series of receiver points down the response hole. In this case, however, the shot points would be the slotted ends of piezometers through which a pressure pulse (by means of a mechanically induced hydraulic force or simply a slug of water) can be transferred to the geologic medium, and the receivers would be a series of pressure transducers with accuracy on the order of 0.001 psi (0.007 kPa) (Figure 14). The shot and receiver points would be connected together so that the time from pressure input to response detection is obtained. A set head or pressure deviation from normal groundwater pressures (independent of tidal fluctuations and established on the basis of hydrogeologic records) would be chosen to indicate the arrival of the pressure pulse wave; alternately, if the complete pressure history is obtained from the receivers, computerized filter processing could be undertaken to determine the initial arrival. These arrival times, however they are obtained, would then be used in the tomographic reconstruction.

Figure 14: Layout of a Cross Hole Transient Groundwater Investigation, showing a constant pressure pump at surface, slotted PVC pipe, a line to inflate the packers, bentonite seals and fill material.

The test, as described, is a transient test and thus is governed
T h e test, as described, is seen to be a transient test analysis and thus is governed  CONSTANT PRESSURE PUMP  fkio  SLOTTED  Pvc  P<PE  POMP I NFLATE  o B E N T O N ITE SEAL.  2 2  zz PACKERS  —  7ZZ  22Z  FILL.  F i g u r e 14:  Layout of a Cross Hole T r a n s i e n t Investigation.  36  Groundwater  TO PACKERS  by the equation:  (25) (Bear, 1972; as given by Black and K i p p , 1981) where h is the hydraulic head (in length (L) units), S„ is the specific storage (with units of l / L ) and K is the hydraulic conductivity the m e d i u m . T h e basic property described by SJK  (units L / T ) of  is 1 / D where D is the hydraulic diffusivity of  the m e d i u m with units of L / T . It is desired to obtain the units of the unknowns x, in the matrix 2  Equation 7. Consider a one dimensional form of Equation 25 and write it in finite difference form:  Letting h — h,t  Y  — 2h, + k  i+i  and h — fij t  - K'i 1  Ax(h /hi)(Ax/D) t  so that the units of x are  (essentially  the equation becomes =  an inverse  At 'pixel' diffusivity  per unit length  along  the  raypath). Henceforth, the term 1 / D ' will describe this material property value obtained by tomographic reconstruction which is indicative of the diffusivity variation of the medium. Individual shot/receiver development  measurements  are time consuming and tedious to perform, thus the  of a multiple reading pressure transducer is necessary in order that this type of test  be viable. A multiple port piezometer is presently available, and a string of electronics capable of measuring pressures at several levels in a single borehole is under development. T h e data obtained from the source/receiver be input into the A R T and matrix algorithms mathematical  pressure monitors (i.e., the travel time) can then for reconstruction.  In this way  the multilayer  analysis is replaced by the iterative processing of data. It should be noted that the  travel time is not the only parameter which could be measured.  
The attenuation of the pressure wave could also be monitored and an attenuation profile generated. This attenuation method is similar to those already performed in the oil industry (Hirasaki, 1974; Kamal, 1979; Johnson et al, 1966) and can be used in a number of ways to describe reservoir heterogeneity. Also, the entire pressure response history may provide valuable information as to hydrologic parameters. While a cross-hole pressure test has admittedly more problems than seismic analysis, it does provide valuable information concerning the hydraulic properties of the subsurface material, which may be critical for contamination migration analysis or aquifer evaluation. A possible scenario for using cross-hole pressure tests would be to confirm layering in an earth-filled dam, where the proper placement of graded material is essential for stability and seepage control.

4.2 Test Configurations

A number of test patterns are used to analyse the tomographic algorithms. Both seismic and groundwater applications are used. The test patterns are generally simple in form, and are meant to demonstrate the ability of the algorithms to reconstruct accurate profiles from data generated using different forms of source input.

4.2.1 Seismic

The five seismic patterns developed for this study are shown in Figures 15 to 19. These patterns were used to distinguish horizontal versus vertical resolution capabilities, the effects of low and high velocity contrasts (10 percent and 100 percent in this case), and the ability to predict whether no anomalies or multiple anomalies exist in the section. All data sets for these test patterns were generated by solving the forward problem using discretized pixel lengths and the given velocity values of the pixels.

4.2.2 Groundwater

Two of the groundwater test patterns developed for this study are shown in Figures 20 and 21. They range from a homogeneous, isotropic case to that of layered systems.
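For the homogeneous case, first-arrival data of this kind can be generated by inverting the Cooper-Jacob drawdown approximation for the time at which a chosen drawdown is reached; the parameter values below are illustrative only:

```python
import math

def jacob_arrival(r, T, S, Q, s_threshold=0.005):
    """Invert the Cooper-Jacob drawdown approximation
        s = (2.3 Q / (4 pi T)) * log10(2.25 T t / (r^2 S))
    for the time t at which drawdown at radius r reaches s_threshold
    (0.5 cm here).  T is transmissivity (m^2/s), S is storativity,
    Q is the pumping rate (m^3/s), and r is the radial distance (m)."""
    exponent = 4.0 * math.pi * T * s_threshold / (2.3 * Q)
    return (r ** 2 * S / (2.25 * T)) * 10.0 ** exponent

# Time for a 0.5 cm drawdown to appear 10 m from the source hole:
t = jacob_arrival(r=10.0, T=1e-3, S=1e-4, Q=1e-3)
print(t)  # seconds; farther observation points arrive later (t ~ r^2)
```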
The homogeneous test pattern data set was generated using the Jacob method of pumping test drawdown analysis (Freeze and Cherry, 1979). A set of hydrogeologic parameters was assumed for the section, and calculations were made as to the pressure response at an observation well for a given rate of outflow at the source hole. The distances between source and observation positions in the wells correspond to the raypath lengths of Figure 11. When the pressure drop reached 0.5 centimeters (which corresponds to an approximately 0.007 psi change in total head), it was assumed that the pressure wave had arrived and a time of arrival was calculated. These arrivals were then used in the reconstruction algorithms.

Figure 15: Seismic Test Pattern A (V1 = 1.0, V2 = 1.1 kilometers/second)
Figure 16: Seismic Test Pattern B (V1 = 1.0, V2 = 2.0 kilometers/second)
Figure 17: Homogeneous Seismic Test Pattern (V1 = 1.0 kilometers/second)
Figure 18: One Layer Seismic Test Pattern (V1 = 1.0, V2 = 1.1 kilometers/second)
Figure 19: Two Layer Seismic Test Pattern (V1 = 1.0, V2 = 1.1 kilometers/second)
Figure 20: Homogeneous Groundwater Test Pattern (S = storativity, T = transmissivity)
Figure 21: One Layer Groundwater Test Pattern (S = storativity, T = transmissivity)

The other groundwater test patterns used in this study were analysed by means of a modified two-dimensional finite element program developed by Wang and Anderson (1982), and later by a
two-dimensional finite element program developed by Dunbar (1987). The sections were discretized using a predetermined mesh size of linear, isoparametric elements. The boundary conditions were established as no flow boundaries on the upper and lower surfaces of the grid. Initially, all nodes were set to a single head value, and the head values at nodes where the pressure pulses were induced were kept constant with time (constant head boundaries). Pressure head changes on the order of several meters were then placed at set nodes down the source hole and the corresponding pressure changes over time were monitored at the receiver hole nodes. Again, a 0.5 cm change in total head was assumed to indicate the first arrival. In the event that the ideal receiver location (always assumed to be at the center of the pixel's side) did not coincide with a nodal point, a linear interpolation between adjacent nodes was used to estimate the time of the first arrival. Numerous test patterns were analysed in this fashion, with the size and shape of the anomalies being changed, as well as the fineness of the discretized mesh used. While other more mathematically complex solutions for multilayer systems are available, the finite element code was found to be the most efficient way of obtaining an approximate solution to these transient flow problems.

Note that the analytical and numerical test cases (the drawdown analyses and the finite element analyses) are carried out in a different manner, yet both produce the necessary time of arrival data.

4.2.3 Parameters Varied

The effects of differing damping levels (for the iterative techniques) and different intensities of noise were analysed by varying these parameters in the tomographic tests. The iterative damping parameter was varied on a simple percentage basis with 0.30, 0.50, 0.70, and 0.90 levels being used.
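These percentage damping levels enter the algebraic reconstruction as a multiplier on the back-projected ray-sum error. The sketch below shows the general form of one damped, additive correction along a single ray; it illustrates only the idea, since the three algorithms tested in this study each apply the correction in their own way.

```python
def damped_update(slowness, pixels, lengths, measured, damping):
    """Back-project a damped fraction of one ray's travel time error.

    slowness : current pixel slowness estimates (modified in place)
    pixels   : indices of the pixels this ray crosses
    lengths  : path length of the ray within each of those pixels
    measured : observed travel time for the ray
    damping  : fraction of the error applied (e.g. 0.5 for 50 percent)
    """
    computed = sum(L * slowness[j] for j, L in zip(pixels, lengths))
    error = measured - computed
    norm = sum(L * L for L in lengths)
    for j, L in zip(pixels, lengths):
        slowness[j] += damping * error * L / norm

s = [1.0, 1.0, 1.0]
damped_update(s, [0, 1, 2], [2.0, 2.0, 2.0], measured=6.6, damping=0.5)
print(s)
```

With damping of 1.0 this is the full back projection whose oscillation is discussed in Chapter 5; smaller values trade convergence speed for stability.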
The damping parameter was applied to the calculated ray sum during each iteration of the algorithms. All three of the algebraic reconstruction algorithms were used to analyse the effects of damping parameter variation.

Gaussian noise at levels of 5 percent and 10 percent was added to the calculated data by means of a uniform random number generator (RNG) and a Gaussian noise calculation subroutine. Again, all three of the algebraic reconstruction algorithms, as well as the matrix algorithms, were used to analyse the effects of Gaussian noise addition.

4.3 Summary and Conclusions

Seismic signals offer the best time of arrival data for use in the tomographic algorithms. Pressure pulse data for groundwater analyses, while not as sharp or well defined as seismic and not as well supported in terms of available equipment, nevertheless offers hydrogeologic information that would not otherwise be attained by geophysics.

CHAPTER 5

5.1 Applications of ART

The three iterative algebraic reconstruction techniques will be examined in this section. Here, the discussion will review the common stopping and resolution criteria which are used in the iterative techniques, and then the results of tests run using these techniques will be examined. This section will close with an analysis of the test results and general conclusions concerning the iterative reconstruction methods.

5.2 Stopping Criteria and Image Resolution for the Iterative Techniques

One of, if not the most important aspects of iterative tomographic reconstruction is deciding when to stop the iterative process. Ideally, the process should stop when the exact or near-exact picture of the domain is attained. Unfortunately, data noise, incomplete coverage and a subsurface profile which is usually unknown prior to the computer simulation make such an occurrence highly unlikely. Many methods have been developed in order to try and overcome these problems.
Specialized techniques of stopping the iterations when the ray sums are close to those predicted by the data (and to what is expected at a given site), and of aiding the program to converge to a more accurate picture of the region, are key facets of the tomographic method.

5.2.1 Stopping Criteria

The stopping criterion is the name given to a condition (or conditions) which, when met, halts the iterative reconstruction process. This condition is set by the programmer and is chosen so that the iterations will stop when the reconstructed picture is close to that of the region being investigated.

There have been many stopping criteria proposed in recent years, most of which are based on a least squares (L2 norm) minimization of the error between the estimated and computed pictures. The three most common stopping criteria found in the literature are those originally stated in the paper by Gordon, Bender and Herman (1970). These criteria are:

1) the discrepancy between the calculated and measured data,

    D^n = [ (1/N) SUM_{k=1}^{N} (Y_k - Y_k^n)^2 ]^(1/2)    (26)

where N is the number of paths, Y_k is the measured parameter, Y_k^n is the nth estimate of the measured parameter, and n is the iteration number;

2) the variance,

    V^n = (1/I) SUM_{i=1}^{I} (X_i^n - X̄)^2    (27)

where I is the number of cells, X̄ is the mean slowness of the field, and X_i^n is the nth estimate of the cell slowness; and

3) the entropy,

    S^n = - SUM_{i=1}^{I} [X_i^n / (I X̄)] ln[X_i^n / (I X̄)]    (28)

where I is the number of cells, X̄ is the mean slowness of the field, and X_i^n is the nth estimate of the pixel slowness (Peterson et al, 1985; Radcliff and Balanis, 1978).

Other stopping criteria do exist. For example, those given by Dines and Lytle (1979) and Herman et al (1973) are sometimes used, yet they are only variations of the basic least squares methodology. Peterson et al (1985) have found that D^n will tend to zero, V^n will tend to a minimum, and S^n will tend to a maximum with increasing values of n, provided the set of linear equations in the reconstruction is complete and consistent.
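Under these definitions, the three criteria are simple functions of the measured data and the current estimate, and can be evaluated after each iteration. A sketch follows; the exact normalizations vary slightly between papers, so the forms below follow Equations 26 to 28 as given here and should be checked against the cited sources.

```python
import math

def discrepancy(measured, computed):
    """D^n: rms misfit between measured and computed ray data (Eq. 26)."""
    n = len(measured)
    return math.sqrt(sum((y - yc) ** 2 for y, yc in zip(measured, computed)) / n)

def variance(cells):
    """V^n: spread of the cell slownesses about the mean slowness (Eq. 27)."""
    mean = sum(cells) / len(cells)
    return sum((x - mean) ** 2 for x in cells) / len(cells)

def entropy(cells):
    """S^n: entropy of the cell slownesses, normalized by I times the mean
    (Eq. 28).  Cells are assumed to be strictly positive."""
    i = len(cells)
    mean = sum(cells) / i
    return -sum(x / (i * mean) * math.log(x / (i * mean)) for x in cells)

print(discrepancy([2.0, 2.0], [2.0, 1.9]))
```

For a perfect fit the discrepancy is zero, and for a uniform field the variance is zero while the entropy is at its maximum, consistent with the convergence behaviour noted above.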
Unfortunately, none of the criteria can measure how close the picture is to that found in the ground, only how close the calculated picture is to that described by the data. For most cross-hole problems, the data obtained may be incomplete and/or inconsistent due to gaps in the coverage or erroneous readings.

The stopping criterion chosen for this investigation is the discrepancy between the calculated and measured data based on shot-path summations. This criterion was chosen for several reasons. The first, and probably the most important, reason is that the other criteria are based on the difference between the cells or pixels of the picture and the mean value of slowness of that picture. This may be applicable for configurations with complete (360 degree) coverage; however, the incomplete coverage of cross-hole tomography gives zones in which pixels are intercepted by fewer than two raypaths (Menke, 1984b). This results in zones that are not affected by the iterative reconstruction process and are, for all intents and purposes, stagnant. Comparing the pixels in these zones to a mean slowness value would result in erroneous or premature stoppages. The discrepancy calculation, on the other hand, deals with the ray paths themselves, and does not deal with pixels outside of those raypaths. Thus, criterion convergence in the zones that are sampled is much preferable to quasi-convergence through zones that are unknown or unsampled. The second reason for choosing the discrepancy criterion is that (as found by Peterson et al, 1985) the D^n value will tend to zero if the set of linear equations is complete and consistent, whereas V^n and S^n
tend to minima and maxima, respectively. Due to the inherent oscillation in the classical ART solution which has been noted by many authors, the maxima/minima stopping calculation can be thrown off by local rather than global maxima and minima while attempting to converge. This can occur when changes in successive stopping criterion calculations are on the order of machine error accuracy. D^n tends to zero, and thus minor oscillation will not affect the overall tendency of this function. A third, and perhaps more minor, point is that the discrepancy (D^n) is more easily and directly incorporated in the reconstruction algorithm than the other criteria.

The primary disadvantage of using this stopping criterion is that while the criteria given in Equations 26, 27 and 28 will eventually converge to a solution, the criteria have also been found to diverge after this point has been reached, which results in solution ambiguity (Herman et al, 1973). Also, because incomplete coverage results in an underdetermined problem, there is the possibility that the criterion-meeting solution will bear no relation to the in-situ conditions. It is because of these problems that a more reliable stopping criterion must be developed. This author believes that a graphics in/graphics out method of reliability checking is a plausible solution. What is meant by graphics in/graphics out is the utilization of a graphics monitor coupled to the microcomputer to produce a visual image of the region and pixels. After each iteration, the image would be updated and the programmer would see immediately how well the picture converges, and could stop the iterations once he/she feels confident with the reconstructed region illustrated. There are problems with this process due to the time taken after each iteration to examine the updated region; however, a user controlled monitoring of the process is more appealing than a computer
5.2.2 Image Clarity In tomography, the basic question that one must ask when given a reconstructed picture is "is this an accurate representation  of the region?"  If the problem is overdetermined or evenly  determined with very accurate data, then the answer may be yes. However, for an underdetermined problem, or those problems with noisy data or for regions with high contrast or unusually shaped objects within them, the answer could be no. In fact, the reconstruction may bear no resemblance to what is in the region itself. T h u s , the question arises as to how the clarity of the reconstructed image may be improved. O n e of the easiest and most effective methods to improve resolution is to obtain more complete coverage.  T h i s is especially true for cross-hole studies.  Menke (1984b) examined the experimental  geometry of cross-borehole tomography and concluded that as boreholes do not surround the region of study, the data collected is inherently incomplete (Figure 22).  In fact, only the central area of  the study domain of Figure 22 is more or less completely covered. T h e addition of more shot and receiver points down the boreholes as well as on the surface ( M c M e c h a n , 1983; Ivansson, 1986) or in adits and tunnels  (Mason, 1981)  can serve to increase the area of coverage  and thus provide  additional data in which to resolve the media under study. A second method to improve resolution is to use all available (whether known or  assumed)  data. A priori information serves to either reduce the number of unknown pixels or serves to limit the range of the pixels' parameter variation (Menke, 1984a). A third method is to set the initial grid to a configuration which is anticipated to be close to that of the true picture. In this way, convergence is quick and the possibility of excessive iterative oscillation in the solution is reduced. 
Caution must be taken, however, as in setting the grid to a preconceived value, the grid configuration may bear no resemblance to the true picture and may result in a completely useless reconstruction. Thus, a good deal of thought is required to ensure a reasonable starting condition is in place, usually by the assumption of in-situ conditions.

The oscillation of the iterative algebraic solution noted previously is the result of the full back projection of the ray sum differences over the pixel arrays. Many methods of damping the back projection error have been proposed to alleviate this problem; however, optimum relaxation (damping) schemes cannot be calculated mathematically, so they must be obtained experimentally (Peterson et al, 1985). In this study, a very simple percentage weighting scheme was applied to the ray sum differences. This percentage scheme was also varied to determine which value provided the best resolution and computational efficiency.

Figure 22: Variation of Ray Coverage in the Cross Hole Test Configuration (source/receiver positions; zones crossed by 0-2 rays, 2-8 rays, and more than 8 rays).

A final area which improves resolution is the elimination of errors. This involves an increase in the accuracy of the measuring instruments, the proper calibration of all components, and a tighter control over the data gathering process. While most error can be eliminated in this manner, there are always the ever present random error and noise. These can never be eliminated completely, so they must be limited as much as possible. A very good discussion of measurement error can be found in a paper by Kowalski (1977).

5.2.3 Summary and Conclusions

A number of methods to improve the tomographic reconstruction process have been discussed. An optimum stopping criterion is critical for a reasonable tomographic image to be produced, and most are based on some form of a least squares criterion.
While the discrepancy (D^n), variance (V^n), and entropy (S^n) criteria are the most common criteria put forward in the literature, the discrepancy (D^n) criterion was chosen for this investigation as it deals specifically with the ray sums, it tends to zero for complete and consistent linear equations, and it is easy to implement. Ideally, however, a visual representation of the region updated after each iteration is felt to be the optimum stopping criterion, as it allows the programmer to use his/her judgement as to which picture or iteration is best.

Methods to improve image resolution are many. The easiest method is to increase domain coverage, but in cross-hole studies, complete coverage is often impossible. Other methods include using all known or assumed data to reduce the number of unknowns in the equations, setting the initial grid configuration close to the actual or expected in-situ configuration, weighting the back projection of error to reduce solution oscillation, and the elimination of data errors and noise.

5.3 Test Results

Computer test runs were conducted on the previously described patterns in order to evaluate the resolution properties of the iterative programs under various conditions. The test patterns and changed parameters used in these tests represent conditions that might exist in-situ. The algorithms will be evaluated on a pattern by pattern basis with all changes in parameters for that particular pattern included in the discussion.

5.3.1 Seismic Patterns

Test Pattern A

Test pattern A, shown in Figure 15, was used to evaluate the horizontal and vertical resolution capabilities of the algorithms. The central cross is completely enclosed by the region and the velocity contrast is relatively low (10 percent), hence the straight line raypath assumption is valid and the pixels in the region of interest are crossed by more than two raypaths each.
Thus, the resolving power of the algorithms themselves is the critical factor. The CART, MART and LART algorithms were tested with this pattern with variation of noise level, starting pattern and damping parameter. In the field, it is generally not known what soil profile exists in the ground prior to exploration. Even with exploration, the inherent variability of geology creates uncertainty in extrapolation practices. The advantage of having the test pattern or profile known beforehand (as is the case here) is that the known profile can be directly compared with the calculated profile.

In terms of accuracy, all algorithms correctly placed the cross in the domain but, as stated previously, failed to fully define the cross itself. Figures 23, 24 and 25 show the results of the MART, LART and CART algorithms for 0 percent added noise and 50 percent damping. Note that the LART algorithm, again, provided the best picture in terms of closeness to the expected profile and known inverse velocity values.

When damping factors were changed for this test pattern, little change was made in the overall picture resolution, although some minor improvements were seen at lower values of the damping factor (Figure 26). Decreasing the damping factor gave an expected increase in discrepancy convergence (Figure 27), as less error was being back projected at each iteration; however, only minor changes in execution time were seen. Lower damping factors gave a marginally higher execution time while only marginally increasing the picture resolution. This would seem to indicate that the damping factor is not critical for pictures consisting of a few pixels. However, larger arrays have been shown

Figure 23: Test Pattern A as Reconstructed by the MART Algorithm. 50% Damping, 0% Noise Added

Figure 24: Test Pattern A as Reconstructed by the LART Algorithm.
50% Damping, 0% Noise Added

Figure 25: Test Pattern A as Reconstructed by the CART Algorithm. 50% Damping, 0% Noise Added

Figure 27: Test Pattern A as Reconstructed by the MART Algorithm, Showing the Influence of Damping Parameters on Solution Convergence (damping levels 0.5, 0.7 and 0.9 plotted against iteration number).

(Peterson et al, 1985) to be greatly affected by the size and form of the damping parameter. Herman et al (1973) suggested that a damping parameter set at 50 percent or less would be adequate for most problems. Based on the results of this experiment, it would seem that this is a reasonable assumption, although analyses should be carried out at each site to determine the optimum damping parameter (Peterson et al, 1985).

In terms of convergence to the known solution, all the algorithms identified the horizontal section of the anomaly with reasonable accuracy, yet none were able to correctly identify both vertical portions. In fact, only the LART algorithm was able to periodically identify the upper and lower portions of the cross, thus indicating a general inability to resolve vertical anomalies. A cross section taken horizontally along the line of pixels which should have shown the upper vertical portion of the cross indicates that the algorithms smeared the anomaly over the picture (Figure 28). This phenomenon has been well documented by McMechan (1983) and Herman et al (1973), and it is the result of incomplete coverage of the domain due to the cross-hole configuration. McMechan explains that due to the dominant horizontal orientation of the raypaths, structures which lie parallel to that direction will be better resolved than those lying perpendicular to it, due to the ability of some rays to skim the anomaly's boundary (Figure 29).
By skimming the boundary of an anomaly, the maximum contrast between materials is achieved, and as such, well defined contrasts in response values are obtained. Well defined response contrasts lead to a better defined picture. Thus, vertical structures are less clearly discerned by strictly horizontal or nearly horizontal raypaths, and will only be properly positioned if some a priori information is available or steeply dipping raypaths cross the anomaly. Figure 30 shows the effects of this skimming and smearing even in the horizontal section of the cross, where its ends are attenuated to the boundary of the section.

Differing noise levels produced an expected response. When both 5 percent and 10 percent Gaussian noise was added to the data, there was a corresponding increase in the number of iterations to convergence and a corresponding decrease in picture resolution. Figures 31 and 32 show representative results of the LART algorithm with added noise. These analyses indicate that high quality data is required for an accurate reconstruction to occur. The noisier or the worse the data, the less chance there is that the reconstruction techniques will converge to the 'correct' picture.

Figure 28: Cross Section Through Test Pattern A as Reconstructed by the LART Algorithm, Demonstrating the Smearing of a Vertical Anomaly.

Figure 29: Cross Hole Configurations Have the Characteristic that Raypaths Tend to Skim Horizontal Boundaries. Thus, This Configuration is Able to Resolve Such Boundaries More Accurately.
T h e fact that the surrounding pixels deviate from the central values is, again, a product of the cross-hole  configuration.  T h e outermost pixels have only one or two  raypaths which pass through them whereas the central region has many crisscrossing  raypaths.  T h u s , central regions have higher resolution due to this redundancy and outer regions have lower resolution. T h i s phenomenon has been studied in detail by Menke (1984b). Even though the outer pixels indicate that phantoms might exist, the more completely covered central region shows that the domain is, in fact, uniform. Different damping factors had little effect on the overall picture resolution. of the reconstructed picture stayed relatively homogeneous,  Central  whereas outer (circumferential)  regions pixels  still exhibited phantom behaviour. T h e only difference observed by changing d a m p i n g factors was a slight decrease in convergence time for both C A R T and L A R T algorithms. T h e M A R T algorithm had  slow convergence characteristics and poor resolution in all cases. Noise affected the pattern in the expected way. Increased noise levels had a distorting effect on  the pattern obtained with highest noise levels producing a slightly inhomogeneous  slowness profile.  +  /  2  3  4  5  6  2  7  9  to H  17. 1$ i+ 15 lb  n  ITERATION  F i g u r e 33:  LART A l g o r i t h m and Test P a t t e r n A w i t h 50% Damping and I n i t i a l M a t r i x Set t o 0.0  64  V  7  ( /m) ms  > 1.30  > 1.20 2i-io > i.oo  1.00  Figure 3 4 :  O r i g i n a l Homogeneous Test P a t t e r n Seismic Study  65  for  the  F i g u r e 36:  Homogeneous S e i s m i c Test P a t t e r n as R e c o n s t r u c t e d by the LART A l g o r i t h m . 50% Damping, 0% Noise Added  F i g u r e 37:  Homogeneous S e i s m i c Test P a t t e r n as R e c o n s t r u c t e d by the CART A l g o r i t h m . 
50% Damping, 0% Noise Added  66  One Layer and Two Layer Stratigraphy T h e one and two layer profiles were designed to test the algorithms' ability to clearly determine stratigraphic layering with only a 10 percent velocity variation between, the layers.  T h e patterns  used in these tests are shown in Figures 18 and 19. A l l algorithms exhibited an excellent ability to discern the horizontal stratigraphic layering. W i t h 50 percent d a m p i n g and no noise addition, all algorithms were able to locate and accurately extrapolate the layered sequences, as seen in Figures 38 and 39. O n e problem seismic studies have with layered media is the fact that if the velocity of the medium does not increase downwards  (as  with increased pressure, the density increases and void space decreases), there tends to be a masking effect where the lower low velocity layer is 'averaged out' by the enclosing higher velocity  layers.  T h e result is that anomalously low velocity layers are sometimes missed. A similar phenomenon can be seen in the results of computer runs with the one and two layer cases. T h e layering represents the higher velocity  media, hence there are zones where masking might occur.  There is a zone  in the upper and lower central portion of the one layer picture in Figures 40 and 41 that tends (numerically) to have a higher velocity (lower inverse velocity) value than the surrounding pixels. In the two layer case (Figures 42 and 43) there exists a zone between the two layers which shows an averaging taking place and is serving to eliminate the interceding low velocity layer. Fortunately, this effect is not so well pronounced as to totally obscure the velocity contrast at this point. It should be noted here that the edge effects described in the previous section are more subdued I n the layered cases. 
While the circumferential pixels still exhibit some deviation from the expected picture, they are, for the most part, in line with the anticipated velocities of the layers. In terms of convergence  (the speed at which the discrepancy value goes to zero), the layered  data sets provided the algorithms with the most rapid convergence characteristics yet observed. In all tests of layered media, the average number of iterations to converge was 7, and usually the L A R T algorithm proved to converge with the greatest speed, requiring the fewest number of iterations. W h e n the damping parameter was changed for a given algorithm, the resulting picture was changed little, yet the speed of convergence was affected.  B y decreasing the damping parameter,  the algorithms, in general, took slightly less time to converge to a solution. After a level of 0.5 (or 50 percent damping), however, the convergence speed was unchanged and perhaps even increased 67  Figure 3 9 :  Two Layer S e i s m i c Test P a t t e r n as R e c o n s t r u c t e d by the MART A l g o r i t h m . 50% Damping, 0% Noise Added  68  Figure  40:  One Layer S e i s m i c Test P a t t e r n as R e c o n s t r u c t e d by the LART A l g o r i t h m . 50% Damping, 0% Noise Added  69  Figure  42:  Two Layer S e i smic Test P a t t e r n as R e c o n s t r u c t e d by the LART A l g o r i t h m . 50% Damping, 0% Noise Added  F i g u r e 43:  Two Layer S e i s m i c Test P a t t e r n as R e c o n s t r u c t e d by the CART A l g o r i t h m . 50% Damping, 0% Noise Added  70  slightly. Nonetheless, The  the resulting cross section, in all cases, clearly showed the expected layering.  addition of noise increased the convergence time of the algorithms and also decreased the  sharpness of the profile. Figure 44 demonstrates  this variation. Increasing the level of noise gave  progressively worse reconstructions in terms of picture acuity.  
Test Pattern  B  T h e pattern used for this set of data is exactly the same as that used for test pattern A except that in this case, the velocity contrast between the media is 100 percent (Figure 16). T h e tests run with the algorithms showed many of the same features as with test pattern A . Vertical anomalies were poorly defined and and horizontal anomalies were generally well defined. here is the level of smearing that takes place.  W i t h the higher velocity  T h e key contrast  contrast,  the smearing  or masking of the vertical anomaly is extensive with the entire anomalous pixel level becoming relatively uniform. T h e C A R T algorithm seemed to be the best at picking out the vertical anomaly created by the cross (Figure 45), whereas both the M A R T and L A R T algorithms failed to clearly distinguish the cross' position (Figure 46).  T h e decrease of the damping parameter did not help  (in this case) in resolving the cross and neither d i d the addition of noise. T h e reason for the lack of performance of the algorithms even with stronger or clearer material differences again relates to the orientaion of the anomaly's boundary. T h e vertical anomaly, even though of a high material contrast, is not well defined because of the algorithm's inability to place the slowness anomaly at its proper position in the grid.  A g a i n , this is a result of the incomplete  scanning geometry of the cross-hole configuration.  5.3.2 Groundwater Patterns Homogeneous T h e data set used to simulate a homogeneous,  isotropic m e d i u m for the groundwater test pat-  tern was obtained by using Jacob's transient well test procedures. homogeneous,  A s the section  used here is  only the distances between source and receiver were varied to produce the homo-  geneous data set.  T h i s data was organized into a file and then used to evaluate the effectiveness  of the reconstruction algorithms on groundwater flow data. T h e homogeneous pattern is shown in Figure 20. 
Figure 44: The Effect of Noise Level on Solution Convergence. LART Algorithm with One Layer Seismic Test Pattern. 50% Damping (noise levels of 0.0, 5.0 and 10.0 percent plotted against iteration number).

Figure 45: Test Pattern B as Reconstructed by the CART Algorithm. 50% Damping, 0% Noise Added

Figure 46: Test Pattern B as Reconstructed by the LART Algorithm. 50% Damping, 0% Noise Added

Using Jacob's solution technique, and assuming a preset detection limit value, one can calculate the arrival time of the drawdown or pulse front to several decimal places. In practice, however, the arrival is not sharp, due to a number of hydrogeologic factors which serve to attenuate the pressure wave so that an 'instantaneous' reading is unobtainable. Invariably, data gathered in the field will be noisy. For the purposes of this test, however, an easily readable and calculable wave front is assumed.

The results obtained for the groundwater data (homogeneous case) were very similar to those obtained by seismic means. The central portion of the array proved to produce a homogeneous zone of one slowness value (Figures 47 and 48), whereas the circumferential pixels tended to produce phantoms. Again, this is related to the ray coverage problems discussed earlier. No particular algorithm (for a given damping parameter and noise level) produced a noticeably better picture (in terms of closeness to the actual value). However, the MART algorithm proved to be the slowest to converge for the homogeneous case. This could be due to the fact that the initialization array was so close to the actual array that any alteration to the array did not help the resolution. As a result, the picture simply oscillated around the correct solution.
If the above statement is true, then it follows that decreasing the damping parameter value should allow the algorithms to converge more rapidly, as less and less of the calculated ray time discrepancy is back projected over the pixels. This turns out to be the case. Figure 49 shows representative convergence characteristics for the CART algorithm at various levels of damping. A decrease in the damping value produced an increase in the convergence rate (i.e., a more steeply dipping curve) of these algorithms for this particular test pattern.

As with the other tests, it was found that increasing the noise level in the data gave progressively worse results and longer convergence times. This exemplifies the need for good data when proceeding with algebraic reconstructions.

One Layer Case

The data for the one layer groundwater test case (Figure 21) was obtained from two dimensional, linear, isoparametric finite element programs used in the transient analysis of aquifer/reservoir drawdown. Modifications were made to the original Wang and Anderson (1984) and Dunbar (pers. comm.) programs in order to allow for different material types (i.e., different storativity and transmissivity values).

Figure 47: Homogeneous Groundwater Test Pattern as Reconstructed by the CART Algorithm. 50% Damping, 0% Noise Added

Figure 48: Homogeneous Groundwater Test Pattern as Reconstructed by the LART Algorithm. 50% Damping, 0% Noise Added

Figure 49: Effect of Damping Value on Convergence. CART Algorithm with Homogeneous Seismic Test Pattern. 0% Noise Added

The region used is a representation of a 10 x 10 meter cross section of ground.
The upper and lower surfaces of Figure 21 were designated no flow boundaries, while pressure input nodes were defined as constant head boundaries. The test was set up so that all rays would be analysed individually (i.e., each source would have a separate set of response curves associated with it). Responses were measured at the corners (nodes) of the pixels (Figure 50), and the time of arrival of the response was calculated from a linear interpolation of the nodal responses (as the receivers were typically positioned between the receiver hole nodes).

After running several raypath tests on the modified programs, it soon became apparent that there were difficulties with the process. Anomalous pressure increases were observed in the output data when drawdown conditions were created. These head anomalies measured only a few millimeters in magnitude and diminished with time; eventually, all nodes showed a drawdown profile after this pulse had passed. It was concluded that this anomaly was the product of numerical errors, as at steady state conditions the head distribution was found to be very similar to that calculated analytically for the given material configuration. The fact that the pulse was present, however, made the picking of the wave front arrival times more difficult. On certain raypath tests, uniform drawdown occurred and thus the 0.5 centimeter (cm) pressure deviation could be easily chosen, but on others, the pulse anomaly had to be accounted for. As the anomalous fluctuation was small and of limited time span, it was decided that the pressure deviation arrival time needed for the data set would be taken after the anomalous pulse had passed. It was on this basis that the one layer data set was established. This process of picking the arrival time is somewhat judgemental and will undoubtedly be different for each person who tries it.
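The threshold pick described above (detecting the first 0.5 cm head deviation while stepping past the spurious numerical pulse, with linear interpolation between time steps) can be sketched as follows. The variable names and the skip-window mechanism are assumptions for illustration, not the thesis code:

```python
def pick_arrival(times, drawdown, threshold=0.005, t_skip=0.0):
    """Return the first time the drawdown series crosses `threshold` (m),
    ignoring samples before t_skip (to step past the anomalous pulse).
    Linearly interpolates between the bracketing time steps, mirroring the
    interpolation used for receivers sitting between nodes."""
    for k in range(1, len(times)):
        if times[k] < t_skip:
            continue
        if drawdown[k] >= threshold > drawdown[k - 1]:
            f = (threshold - drawdown[k - 1]) / (drawdown[k] - drawdown[k - 1])
            return times[k - 1] + f * (times[k] - times[k - 1])
    return None  # threshold never reached in the record
```

Even in this form the pick remains judgemental: the choice of threshold and skip window is up to the analyst, which is exactly the subjectivity noted above.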
It is evident that the resulting data obtained by this process will be exceedingly noisy and/or of poor quality. The results of the algorithm reconstruction using this poor quality and/or noisy data are nonetheless promising. The algorithms all correctly identified that there was a different layer present and correctly positioned it in the array (Figures 51 and 52). Much more pronounced velocity smearing is shown in these reconstructions than with any other test pattern; this is most likely due to the poor quality of the data. Of the three algorithms, the MART and LART algorithms proved superior in converging to a more correct solution.

Figure 50: Finite Element Configuration for the One Layer Transient Groundwater Test

Figure 51: The One Layer Groundwater Test Pattern Reconstructed by the MART Algorithm. 50% Damping, 0% Noise Added

Figure 52: The One Layer Groundwater Test Pattern Reconstructed by the LART Algorithm. 50% Damping, 0% Noise Added

The algorithms all had difficulty meeting the convergence criterion, as all required more than 15 iterations to reach the criterion's limit. Of the three, the MART algorithm appeared to converge more rapidly than the others, as indicated by the flattening of the discrepancy line in Figure 53.

A change of damping parameter did not affect the speed of convergence of the algorithms by a significant amount. This is again attributable to the poor quality data. Increasing the noise level had the expected effect of slowing down the convergence slightly; however, no significant effect on picture quality or increase in execution time was observed.
In spite of the relatively poorer resolution obtained from this data set, the fact that the algorithms were able to correctly identify the presence of a layer of different properties indicated that groundwater test data can be used in algebraic reconstruction techniques.

5.3.3 Timings

The execution times of the programs were tabulated for each test run (Figure 54). Apart from minor increases in execution time due to slower convergence of certain algorithms with certain data sets, the time itself proved to be independent of algorithm type and primarily dependent on the size of the discretized mesh. There is a definite correspondence shown by Figure 54, and the extension of this curve can be used to estimate execution times of progressively larger matrices. It is evident that past a 20 by 20 matrix, the execution time becomes excessively long for an IBM PC (registered trademark) and similar computers, not only in computation time but also in storage requirements, as a 20 x 20 pixel domain requires over 160,000 bytes of stored information for these algorithms. With the advent of newer and faster computers, this problem will likely be overcome. In fact, tests run on an IBM XT (registered trademark) compatible computer showed that for a given program and data set, the program execution time could be cut by over 50 percent.

A final note: much of the execution time for the programs used here was taken up in writing the ray length, pixel length and pixel numbers to separate files, then reading them once again when reconstructing the image. A great deal of time could be saved by writing all these values to internally stored arrays.

5.4 Analysis of Test Results

The tests run on the data sets gave valuable clues as to the performance and the applicability
of the algorithms under various conditions. The results of these tests may be generalized to provide criteria and/or establish the limitations of the algorithms under various circumstances.

Figure 53: One Layer Groundwater Flow Pattern with the MART Algorithm. Convergence Characteristics with 50% Damping

Figure 54: Relationship Between the Number of Pixels (and Raypaths) and Algorithm Execution Time

5.4.1 The Best Iterative Algorithm

The tests were inconclusive as to which reconstruction algorithm was superior. All algorithms were able to pick most of the anomaly positions with a certain degree of accuracy, yet there was an inconsistency as to which algorithm produced the clearest or most well defined picture. In a general sense, the LART algorithm appeared to provide the most consistent reconstruction clarity, although other algorithms at times gave superior resolution. The LART algorithm was also the most consistent against the convergence criteria, often being the first (or sometimes the only) algorithm to meet the stopping criterion before 15 iterations, which indicates how efficiently the algorithm reconstructs a particular problem.

In tests where the reconstruction array was set to different starting values, a clearer indication of superior performance was seen. It was shown previously that the closer the array is set to the actual or expected inverse velocity distribution, the faster the convergence to a solution. Indeed, when the initial array was set to 0.0, all the algorithms were exceedingly slow to converge.

Damping parameter values are very problem dependent and should be chosen only after extensive testing and/or estimation of the subsurface properties (Peterson et al, 1985).
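The damped back projection at the heart of these iterative schemes can be sketched for the additive (CART-style) case as follows. This is a minimal illustration with assumed variable names, not the thesis code; MART and LART differ only in how the correction is formed and applied:

```python
import numpy as np

def damped_art(L, t_obs, n_sweeps=200, damping=0.5):
    """Additive ART with proportional damping.
    L: (n_rays, n_pixels) matrix of ray path lengths through each pixel.
    t_obs: observed travel times. Returns the estimated slowness vector."""
    n_rays, n_pixels = L.shape
    s = np.zeros(n_pixels)
    for _ in range(n_sweeps):
        for i in range(n_rays):
            row = L[i]
            norm = row @ row
            if norm == 0.0:
                continue                          # ray misses every pixel
            resid = t_obs[i] - row @ s            # ray time discrepancy
            s += damping * resid * row / norm     # damped back projection
    return s
```

With `damping` below 1.0, only a fraction of each ray's time discrepancy is spread back over the pixels it crosses, which is the proportional scheme evaluated in this chapter.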
There are a number of possible alternatives for providing damping to a particular algorithm, and further study is required to determine the relative merits of each process. As for the simple, proportional method of damping used in this study, a few general comments can be made. While large damping parameters tend to lessen the execution time of the program, they also make the picture less well defined. This is probably due to overshoot and excessive back projection values being used. As one decreases the parameter, there is a corresponding increase in the clarity of the picture at the expense of execution time. In examining the results of the tests, it appears that a damping parameter of approximately 0.5 (50 percent) is sufficient to ensure clarity while not causing long run times. The larger the array becomes, the more pronounced the effect will be. This value of 50 percent has also been proposed by Herman et al (1973) as the upper limit of a first approximation to an ART damping parameter. Parameters lower than this value may, however, cause slower convergence because of the lower correction factors being applied at each iteration.

The effect of noise on the reconstruction process is clear: the higher the level of noise in the data, the worse the reconstruction. This emphasizes the fact that for the best tomographic results, noise must be kept to a minimum, if not eliminated.

5.4.2 Applications

Algebraic reconstruction techniques have been used widely in geophysics and geotechnical applications. They have proven effective in determining the shape and position of velocity anomalies in the subsurface, and such applications have traditionally provided the best quality data in terms of noise levels. The application of ART to groundwater testing has been less widespread.
Ramierez (1986) used electromagnetic radiation to determine the electrical conductivity characteristics of a fractured granite, but so far (to this author's knowledge) no major testing has been done specifically with cross-hole pressure pulse tomography. This lack of data is possibly due to the limitations of present equipment and/or the severe pulse attenuation properties of most aquifers, which make the picks of a first arrival extremely difficult. In the laboratory computer testing done in this study, it was found that ART could be used successfully with groundwater test data and could accurately reconstruct slowness anomalies which indicate diffusivity variations. It was also found, however, that the data obtained from such modelling was inherently poor (due to numerical dispersion, programmer adjustment on picks, etc.) and gave reconstructions with apparent velocity phantoms. Additions of other sources of random noise served only to make the situation worse.

5.5 Summary and Conclusions

Analyses were conducted on various data sets using the CART, MART and LART algorithms. Both geophysical and groundwater flow data sets were tested.

The geophysical testing showed that horizontal anomalies were well defined by the programs whereas vertical anomalies were not. This was because ray coverage was primarily horizontal and thus could detect horizontal anomalies more easily. Vertical anomalies, on the other hand, tended to be smeared across the domain.

Velocity contrasts also produced smearing effects in the geophysical test cases. This was due to the masking effect of higher velocity layers surrounding low velocity layers. While smearing was evident, it did not mask the layering to any great extent.

Groundwater test data showed some unique problems.
The finite element codes used to analyse two dimensional transient flow characteristics produced anomalous oscillations of the kind commonly found in numerical solutions. This made the picking of arrival times at the receiver very difficult. In spite of this, the results obtained with the groundwater data were promising in that the anomalies expected in the profile showed up in the reconstructed array.

General problems existed for the circumferential pixels in the reconstruction domains. This was especially evident when homogeneous test data was used, as the algorithms produced phantoms in the edge pixels. Incomplete ray sampling of the edge pixels was the reason for these phantoms occurring. Internal pixels, which had more complete sampling and redundancy, exhibited no phantom behaviour and consistently produced near correct or exact profiles.

Lowering the damping parameter value, in general, proved to increase algorithm convergence time as well as marginally aiding picture resolution. Addition of noise to the data sets consistently proved to decrease picture resolution. Execution times of the algebraic reconstruction algorithms proved to be primarily a function of matrix size, and secondarily a function of parameterization.

CHAPTER 6

6.1 Applications of Matrix Techniques

6.1.1 Introduction and Review

The matrix solution tests developed for this study are examined separately because: a) the programs are run on a different computer system (an Amdahl mainframe computer), and b) the matrix manipulation, while solving the same inverse problem, involves a completely different methodology than that of the iterative algebraic reconstruction.

The process by which the direct matrix solving technique was implemented is as follows: the raw data obtained from cross-hole shooting was fed into the ray/pixel calculation program described in Section 3.2.
Then, the pixel numbers, path lengths and shot arrival times are loaded into storage files which are subsequently used to initiate the matrix forming algorithm. The matrix programs use this data to calculate a solution to the set of linear equations.

This section will review the matrix solution techniques used in this study, evaluate the results from these techniques, and identify and discuss the methods used to increase the accuracy of the reconstructions. Special tomographic problems will then be discussed, along with their relationship to direct solution techniques. The section will close with an analysis of the test results and conclusions concerning the direct matrix solving methods.

6.2 Initial Test Results

The two methods used to obtain solutions to the matrix formulated problem were CL1, the constrained L1 linear approximation algorithm developed by Barrodale and Roberts at the University of Victoria, Canada, and several singular value decomposition (SVD) routines available on the UBC computer system.

6.2.1 Results

Several sets of data were used in the initial evaluation of these matrix methodologies. The primary data sets included a one layer seismic configuration (Figure 18) and a one layer groundwater flow configuration (Figure 21).
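The matrix forming step described above, turning the stored pixel numbers and path lengths into the coefficient matrix of the linear system, can be sketched as follows. The data layout (a per-ray list of pixel index and segment length pairs) is an assumption for illustration, not the thesis file format:

```python
import numpy as np

def build_length_matrix(ray_segments, n_pixels):
    """Assemble the ray-length matrix A (rows = rays, columns = pixels)
    from per-ray lists of (pixel_index, segment_length) pairs, so that
    A @ slowness gives the predicted travel times."""
    A = np.zeros((len(ray_segments), n_pixels))
    for i, segments in enumerate(ray_segments):
        for pixel, length in segments:
            A[i, pixel] += length
    return A
```

Each row of A then pairs with one measured arrival time, giving the b vector of the system A x = b that the direct solvers operate on.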
In order to identify the cause of these problems, S V D was used and the singular values associated with the decomposition  process were calculated and displayed.  T h e singular values for both  one layer data sets produced similar patterns . T h e majority of singular values were slightly greater than 1.0, but were not more than one order of magnitude above this value. of values, however, that was several (up to 6) orders of magnitude below 1.0. numerically, the rank of matrix A is small (see Equation 23, page 31). be the cause of the erroneous reconstructions  (Figure 55).  There was a group T h i s means  that,  These values were seen to  T h e low values, most of which were on  the order of 1 0 ~ , interfere with the multiplication process in that the S V D sequence: c  A'  = VS' ^ 1  leads to X=  VS  _ 1  17'6  (29)  produces small product values that are on the order of round off error significance.  Thus,  the  fundamental problem found here is the low numerical rank of matrix A .  6.3 Alterations to the Procedure In order to obtain accurate" reconstructions from the given data using direct matrix solution algorithms, a number of procedural modifications were attempted.  6.3.1 Redundant D a t a  In an attempt to transform the existing under or mixed determined problem into an overdetermined one and thus forcing a solution to the matrix equation, redundant information was added to the data.  T h e information was obtained by means o f s u p p l i m e n t a r y  source locations  placed  0.3 meters above and below the existing source locations of the hypothetical sections (Figure 56). 87  I0 H 5  + ++ + + + + + + + ul  10  +  s  + + + + +  4-  •D  _!  -z.  IO~ + + + + +  10  15  20  SINGULAR VALUE NUMBER  Figure 5 5 :  S i n g u l a r V a l u e s A s s o c i a t e d w i t h the One Layer S e i s m i c Test P a t t e r n Data Set  88  25  •z •» 1  <>  3  SOURCES  RECEIVERS •2 • 1 "3  • 2 • 1 • 3  ••1 •< I  <  >3  0.2>m 03  •3  <— 2m. 
Figure 56: Addition of Redundancy to the Cross-Hole Configuration (three sets of source locations)

The receiver positions were left unchanged. The data obtained from these new positions provided two and three orders of redundancy relative to the original cross-hole data, and it was hoped that this would produce a reasonable tomographic section. Both the CL1 and SVD algorithms were used in the analysis of the new data.

The results of the CL1 analysis of the second and third order redundancy data are shown in Figures 57 and 58. There is some minor improvement in the resolution of the expected anomaly when more data is used, but even with the three sets of data, the anomaly is still not clearly evident.

The singular values associated with the additional data are shown graphically in Figure 59. The addition of more and more data clearly does not affect the magnitude of the small singular values to any great extent. While marginal movement of these values towards the ideal 1.0 mark is seen, a great deal more data must be added to achieve this goal (assuming that the rate of change of values remains relatively constant).

From these observations, it can be concluded that large data sets are required to transform such a tomographic problem into an overdetermined case. This is likely due to the poor conditioning of the matrices involved, which in turn is a consequence of the poor coverage inherent in the cross-hole methodology.

6.3.2 Elimination of Low Singular Values

A standard technique used by numerous authors in singular value decomposition of poorly conditioned matrices is to eliminate the small singular values. This process, while either losing or ignoring matrix information, is generally able to produce better results than incorporating the full set of singular values in the matrix multiplication process.
Using one order of redundancy data, the singular value decomposition of the length matrix [A] was calculated, and successive runs were made, with each run eliminating the next lowest singular value from the S matrix. The resulting S matrix was then used to obtain the solution vector x via the matrix multiplication of equation 36. The objective of the successive run approach was to find the critical singular value, that is, the value which, when eliminated, significantly improved the reconstruction's clarity. Note that the solution vector for the SVD matrix reconstruction techniques has been left unconstrained. This means that any calculated value may be assigned to a pixel, even though that value may be physically unrealistic (i.e., negative values).

Figure 57: The One Layer Seismic Test Pattern Reconstructed by the CL1 Algorithm Using Two Orders of Redundancy Data

Figure 59: Singular Values Associated with Redundant Data Sets (first, second and third order redundancy)
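The cutoff procedure can be sketched as a truncated-SVD solve, in which the reciprocal of every singular value below a chosen threshold is set to zero instead of being divided by a near-zero number. The threshold here is an assumed illustration value, not one of the thesis cutoffs:

```python
import numpy as np

def tsvd_solve(A, b, cutoff):
    """Solve A x ~ b by SVD, eliminating singular values below `cutoff`:
    their reciprocals are replaced by zero, discarding the corresponding
    (poorly constrained) directions in the solution."""
    U, sv, Vt = np.linalg.svd(A, full_matrices=False)
    inv_sv = np.where(sv > cutoff, 1.0 / sv, 0.0)
    return Vt.T @ (inv_sv * (U.T @ b))
```

Sweeping the cutoff upward one singular value at a time reproduces the successive-run search for the critical singular value described above.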
T h e isolated  anomaly is a  particularly interesting case in that the reconstruction was able to position the anomaly where it was supposed to be. W i t h incomplete scanning geometry, the cross-hole method is usually unable to accurately position vertical anomalies and typically smears them across the section. Evidentally, the geometry of the scanning region was such that positioning of the anomaly was possible.  This  case also shows that isolated objects, as well as layers, can be detected by this method. In conclusion, the elimination of small singular values seems to be a viable method of obtaining solutions to problems with low rank matrices.  6.3.3 P r o d u c i n g a Highly Underdetermined P r o b l e m W o o d b u r y (pers. comm.) observed that the cross-hole tomographic problem was an inherently mixed or underdetermined one, and as such, a unique solution was often  not obtainable.  He  suggested that instead of trying to make the problem overdetermined by means of additional data (i.e., redundancy), that developing a highly underdetermined problem and solving by means of S V D may be more feasible and produce better results. Woodbury (1987) explained that " in the case where there is less data than parameters (i.e., m < n), or if the problem is rank deficient, then a unique solution of the least squares (or S V D ) problem is not possible.  
A minimum length solution can be obtained by solving the constrained problem:

minimize J = p^T p   (30)

subject to:

d = A p   (31)

for which the solution is:

p_e = A^T (A A^T)^-1 d   (32)

In this case, J is the desired objective function and p_e is the desired solution vector." Note that the minimum length solution described above is simply the singular value decomposition of matrix A for a situation where the dimensions of A are m by n (m < n). While the highly underdetermined problem has no unique solution, it does have a unique minimum length (or best fit) solution to the given data. Thus, the goal of the exercise is to obtain this unique solution.

Figure 60: One Layer Seismic Test Pattern as Reconstructed by Singular Value Decomposition with Singular Values Cut Off Below 0.1

Figure 61: Two Layer Groundwater Flow Test Pattern

Figure 62: Central Anomaly Groundwater Flow Test Pattern

Figure 64: Central Anomaly Groundwater Test Pattern as Reconstructed by SVD with Singular Values Cut Off Below 1.0

This theory was tested on the one layer transient groundwater flow problem. The cross-hole groundwater data was kept as is (i.e., five sources and five receivers), but alterations were made to the matrix calculation algorithm such that additional pixels were generated. In previous experiments, the number of raypaths and the number of pixels were equal: for a five source and five receiver configuration, 25 raypaths were generated for a 5 x 5 pixel field. In the highly underdetermined case, the number of pixels was increased nine-fold, so that the A matrix now had dimensions of 25 by 225. The result of the singular value decomposition of this matrix is shown in Figure 65.
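The minimum length solution above can be sketched directly. For a full-row-rank A with m < n, p_e = A^T (A A^T)^-1 d is also what a library pseudo-inverse returns; the small system below is an illustration, not the thesis configuration:

```python
import numpy as np

def min_length_solution(A, d):
    """Minimum-norm p satisfying A p = d for an underdetermined,
    full-row-rank system (m < n): p = A^T (A A^T)^-1 d."""
    return A.T @ np.linalg.solve(A @ A.T, d)
```

Of the infinitely many vectors fitting the data exactly, this one has the smallest norm, which is why the values it assigns should be read as property contrasts rather than exact material property values.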
Although the circumferential pixels are somewhat anomalous in value, the central layer is clearly visible and discernable from the surrounding pixels.

The implications of the use of highly underdetermined problems in tomography are far reaching. The inherent limitations of cross-hole scanning geometry and the prohibitive costs of redundant data sets can both be addressed by this scheme. This method, however, should be used with caution. The problem is a highly underdetermined one and as such has no unique solution. Therefore, the values obtained by this analysis should not be taken as exact material property values but only as indicative of property contrasts.

Figure 65: Highly Underdetermined Configuration of the One Layer Groundwater Flow Test Pattern as Reconstructed by SVD, No Values Deleted

6.4 Special Problems

6.4.1 Thin Fracture Analysis

Identification of small anomalies such as thin fractures comes down to the problem of contrast. Basically, the question is: will there be a sufficient contrast between the fractures and the surrounding medium so that a discernable differential response is apparent? For many geometries and radiation types, the answer is probably not. The smaller the anomaly, the less likely that it will be picked up by cross-hole shooting. In such cases, it may be necessary to go to an alternate geometry (i.e., one which is aligned favourably to the anomaly) or another type of radiation. An example of just such an instance may be seen with widely spaced fractures in plutonic rocks. Seismic shooting may smear the contrasts in velocity between fracture and intact rock across the section, or may not even pick the fractures up at all due to their minor velocity contribution.
Electromagnetic radiation, on the other hand, may readily pick up water-filled fractures due to their large contrast in dielectric properties with respect to the country rock (Ramierez, 1986). Thus, in order to identify small anomalies, one must choose the radiation type and scanning geometry carefully so that there is a high probability of detection.

6.4.2 High Contrast Anomalies

One of the basic premises of the tomographic problem as formulated in this study was that the straight line raypath approximation was applicable. For this assumption to be valid, low material property contrasts had to exist. In dealing with materials which have high property contrasts (i.e., an order of magnitude for seismic, or several orders of magnitude for transient groundwater flow), the straight line assumption breaks down and a ray tracing technique must be used to compensate for ray bending at interfaces, and perhaps even within the layers themselves. The high contrast case also adds complexity to the data gathering process. In seismic cases, high velocity layers will tend to mask low velocity layers, and this is accentuated by larger and larger contrasts. In groundwater flow problems, larger contrasts in conductivity may inhibit the movement of the transient pressure pulse, which can result in poor data being collected.

Diffusivity values differing by up to three orders of magnitude were used to evaluate the effects of high contrast anomalies. Figure 66 demonstrates the effects of these contrasts. Even at two orders of
F r o m these tests, it can be concluded that if the straight line raypath assumption is to be used, then low material property contrasts must exist to obtain a valid solution.  6.4.3 Horizontal Variations Horizontal property variations occur in geology as facies changes, gradation changes, fracture frequency changes and like phenomenon.  In many ways, these variations have some of the same  problems associated with them as with high contrast anomalies because the straight line raypath approximation may no longer be valid, and the time of arrival of the radiation pulse may be harder to distinguish due to attenuation through the differing media. If the horizontal variation is pervasive (vertically extensive) then additional problems develop because, in theory, this becomes a vertical anomaly case with all its associated difficulties (Figure 67). However, if the anomaly is relatively isolated, say to one layer in an otherwise  homogeneous  m e d i u m , then there is a strong probability that this anomaly will be identified. A n example of just such a case is illustrated in Figures 68 to 71, where a t h i n , horizontal layer of moderate property contrast is embedded in a homogeneous sedimentary medium. T h e system was analysed using finite element procedures and singular value decomposition.  While the actual horizontal variation is not  truly apparent in the reconstructed section, the layer itself is quite distinct.  6.4.4 Noise in the D a t a Noise levels of 5 and 10 percent magnitude were added to the one layer data.  A s expected,  the addition of noise to the data greatly diminished the clarity and reconstruction accuracy of the profile. Figure 72 illustrates the effect of 5 percent noise on the one layer seismic system using S V D . E v e n at this relatively low level, and with the singular value cutoff at a higher level than in previous tests, the picture is markedly changed from the no added noise case (Figure 73). 
This emphasizes the fact that tomographic techniques rely heavily on the accuracy of the measured data. Noise can result in poor picture quality, reconstruction phantoms (anomalies which appear in the section but are known not to exist), and perhaps even complete lack of convergence to the true profile. Thus, every effort must be made at the investigation stage to minimize or eliminate the probable sources of random noise.

Figure 67: Figure Demonstrating that Horizontal Property Variations May In Fact Produce Vertical Anomalies

Figure 70: One Layer Groundwater Test Pattern with Minor Horizontal Diffusivity Variation (1/D = 1.0 to 25.0 s/m*m)

Figure 71: One Layer Groundwater Test Pattern with Large Horizontal Diffusivity Variation (1/D = 1.0 to 100.0 s/m*m)

Figure 72: One Layer Seismic Test Pattern as Reconstructed by SVD. Singular Values Cut Off Below 3.0, 5% Gaussian Noise Added

A point to note here is that the reconstruction accuracy was determined subjectively by comparison to the known cross section. Alternate methods do exist to make an objective judgement of picture accuracy, such as the Chi-squared test outlined by Wonnacott and Wonnacott (1970). For consistency with the other matrix reconstruction methods, however, the subjective analysis will be used.

6.4.5 Parameterization Effects

The transient groundwater flow analyses utilized finite element techniques to obtain pulse time of arrival data. When discretizing the study section into a finite element mesh, the programmer is subjectively deciding on the size of anomaly that will be detectable. The smaller the element, the smaller the anomaly which can be modelled.
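The chosen discretization also fixes the ray-length (geometry) matrix used by the matrix methods: each ray contributes one row whose entries are its path lengths through the pixels it crosses. A minimal sketch for straight rays on a rectangular grid; the function name and grid layout are assumptions for illustration:

```python
import numpy as np

def ray_lengths(src, rec, xs, zs):
    """Path length of the straight ray src->rec in each cell of a
    rectangular pixel grid with x-edges xs and z-edges zs."""
    src, rec = np.asarray(src, float), np.asarray(rec, float)
    d = rec - src
    ts = [0.0, 1.0]                      # ray parameter at the two endpoints
    for edges, s0, dd in ((xs, src[0], d[0]), (zs, src[1], d[1])):
        if dd != 0.0:
            t = (np.asarray(edges, float) - s0) / dd
            ts.extend(t[(t > 0.0) & (t < 1.0)])   # grid-line crossings
    ts = np.unique(ts)
    nx = len(xs) - 1
    row = np.zeros(nx * (len(zs) - 1))
    for t0, t1 in zip(ts[:-1], ts[1:]):
        mid = src + 0.5 * (t0 + t1) * d           # midpoint locates the cell
        i = np.searchsorted(xs, mid[0]) - 1
        j = np.searchsorted(zs, mid[1]) - 1
        if 0 <= i < nx and 0 <= j < len(zs) - 1:
            row[j * nx + i] += (t1 - t0) * np.linalg.norm(d)
    return row
```

Stacking one such row per source-receiver pair gives the [A] matrix, so a finer pixel grid directly enlarges the system to be solved.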
The investigation up to this point has used a fairly coarse mesh, with elements up to 2 meters per side, theorizing that the coarser the mesh, the less accurate the time of arrival data will be (due to interpolation effects), thus producing a data set which is least accurate or 'worst case'. Thus, if anomalies can be detected from such data, then finer discretization can only lead to more accurate data and better reconstructions.

A finer mesh was used in conjunction with the one layer transient analysis in order to test the parameterizing effects. The finer mesh was produced by subdividing the original pixel mesh ninefold; that is, for each pixel of the old mesh, the new mesh had nine pixels within the same area. The times of arrival obtained from the finer mesh, in general, were within 10 percent of those of the coarse mesh, with the exception of those times calculated near the anomaly boundary. These times varied up to 25 percent. The reason for this variation lies in the linear interpolation procedure. With the coarse mesh, the time of arrival interpolation was taken between nodes on the boundary of the anomaly and, as such, the arrival calculated might be affected by large time differences across the elements containing the anomaly. The finer mesh, on the other hand, had nodes within the layer itself, thus interpolation effects were minimized. Reconstruction using the finer mesh data (Figure 74) defined the anomaly well, producing a fairly distinct layer.

It is important to note that there is always a trade-off with the fineness of mesh versus execution time and cost.

Figure 74: One Layer Groundwater Test Pattern as Reconstructed by SVD. Data Obtained From Transient Analysis Using a Finer Mesh. Singular Values Cut Off Below 3.0
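The sensitivity to interpolation can be illustrated with a simple pick routine: a first-arrival time interpolated linearly between the two discrete samples that bracket a detection threshold. The threshold criterion and names here are assumptions for illustration, not the thesis procedure (which interpolates between finite element nodal values):

```python
import numpy as np

def first_arrival(times, response, frac=0.05):
    """Pick the first-arrival time by linear interpolation between the
    two samples bracketing a threshold crossing."""
    thresh = frac * response.max()
    k = int(np.argmax(response >= thresh))   # first sample at/above threshold
    if k == 0:
        return float(times[0])
    dt = times[k] - times[k - 1]
    dh = response[k] - response[k - 1]
    return float(times[k - 1] + (thresh - response[k - 1]) * dt / dh)
```

The coarser the sampling, the larger the span over which the response is assumed linear, and hence the cruder the pick near sharp property boundaries.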
The programmer must make his/her own judgements as to the discretization of the finite element mesh.

6.4.6 A Priori Information

In practice, a borehole is not drilled without some form of testing and/or sampling program being carried out. From such investigation comes data which can then be used to reduce the number of unknowns in the tomographic reconstruction process. Reduction of unknowns has been found useful in the iterative algebraic reconstruction scheme in that it speeds convergence of the solution. Here, data will be added that will physically reduce the size of the A matrix by the elimination of select columns, and thereby test the applicability of a priori information in direct matrix solution.

The tests were conducted by assuming one or more pixel values were known (e.g., through laboratory sample testing) prior to computer modelling. Thus, the column representing a known pixel's contribution to the reconstruction may be eliminated by multiplying the column length values by the known pixel value and subtracting this amount from the appropriate right hand side time value.

It was found that the position of the pixel in the section was very important in its ability to improve resolution. The problem, as in the 5 x 5 pixel case, is that certain pixels do not contribute a great deal to the reconstruction, and the numerical constraint of these pixels does not affect picture resolution to a large extent. This is especially true of the circumferential pixels. Interior pixels which have greater raypath coverage have a more significant effect on resolution, but these pixels are not easily sampled and/or tested.

Constraining the pixels adjacent to the boreholes improved picture clarity. Not only were the anomalies more easily discerned, but the addition of prior information also served to make the picture smoother and somewhat more symmetric.
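The column-elimination step described above can be sketched directly. Assuming A is the ray-length matrix, t the vector of measured times, and `known` a map from pixel index to its known value (the names are illustrative):

```python
import numpy as np

def constrain_pixels(A, t, known):
    """Eliminate columns of the length matrix for pixels whose value is
    known a priori: multiply each such column by the known value, subtract
    the product from the right hand side, then drop the column."""
    cols = np.array(sorted(known))
    vals = np.array([known[c] for c in cols])
    t_reduced = t - A[:, cols] @ vals          # move known contribution to RHS
    keep = np.setdiff1d(np.arange(A.shape[1]), cols)
    return A[:, keep], t_reduced, keep
```

The reduced system has the same rays but fewer unknowns, which is why constraining even a handful of well-sampled borehole pixels can noticeably stabilize the solution.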
An example of the improvement known data can have on the reconstruction is shown in Figures 75 and 76. Figure 75 shows one layer groundwater flow data with 5 percent Gaussian noise added. The resulting picture is not too clear and the anomalous layer is not obvious. Figure 76, on the other hand, shows the same data set, only this time with known borehole pixel values eliminated from the matrix. The resulting picture here is much clearer and the layer is easily seen.

Figure 75: One Layer Groundwater Test Pattern as Reconstructed by SVD. 5% Gaussian Noise Added, Singular Values Cut Off Below 1.0

Figure 76: One Layer Groundwater Test Pattern as Reconstructed by SVD. 5% Gaussian Noise Added, Singular Values Cut Off Below 1.0. Borehole Pixels Constrained in Matrix Calculation.

6.5 Summary and Analysis of Test Results

Matrix solution techniques to the tomographic problem provided fast and efficient reconstruction methods. Numerous matrix methodologies and algorithms exist; only a few techniques are discussed here.

The experiments have shown that direct matrix solution may not be adequate to obtain a reasonable tomographic reconstruction. Additional constraints and/or matrix manipulation may be necessary to isolate a possible solution. As the cross-hole tomographic problem is typically mixed or underdetermined, there may be numerous possible solutions to a given set of data. Two methodologies discussed here have proven to be useful in obtaining a solution to poorly conditioned and rank deficient matrices.
The methods are: elimination of anomalous singular values from the singular value decomposition of the length matrix [A], and imposing a highly underdetermined system and solving for the minimum length solution using SVD. Both methods either lose or ignore certain matrix information (i.e., the singular values) and as such will not represent exact solutions, but anomaly identifiers. Redundant data, unless there is a great deal of it, has proven not to be useful in obtaining a unique solution.

Special geometries or types of problem in tomography need to be examined separately to determine the best methodology to use. Extremely small anomalies may not be picked up by standard techniques, but alternate methods may, in fact, distinguish them easily and clearly. Care must be taken in zones of high parameter variation. The tomographic technique, as developed here, relies on a straight line raypath assumption, and significant deviations from this assumption will only lead to poor results. Noise is also a key factor in reconstruction accuracy, as tomography relies on an identifiable and relatively clear signal. Additions of random noise interfere with the gathering of good (or precise) data, and will ultimately interfere with the clarity of the reconstructed section.

6.5.1 Conclusion

Direct matrix solution of tomographic data sets proved insufficient to obtain plausible reconstructions. Singular value elimination and imposition of highly underdetermined systems were demonstrated to be viable techniques to obtain possible solutions and/or identify anomalies. The experiments proved the necessity for good quality data for the tomographic techniques and the necessity for adherence to the basic premises of the techniques themselves. Deviations from these premises lead to poorer results.

CHAPTER 7

7.1 General Conclusions

This paper has attempted to show the worth of algebraic reconstruction techniques as applied specifically to geotechnical and groundwater investigations, as well as demonstrating that these tomographic techniques can easily be implemented on micro-computers. The test patterns presented in this paper are admittedly small in scale; however, they do indicate the resolution capabilities of the algorithms. Multiple layer systems, as would typically be found in-situ, were well resolved with relatively limited data by these techniques. The appropriate damping parameter and iterative reconstruction methodology were shown to be problem specific, and as such, on-site investigations must be undertaken in order to determine the optimum choice of these factors. The effects of noise were as expected, with more data noise giving poorer reconstructions. This emphasizes the fact that the tomographic technique not only requires good areal coverage of the problem area, but reasonably accurate data as well. A key factor in future will be to increase the cross-hole coverage by adding surface sources/detectors and/or increasing the number of downhole measuring instruments. With this increase in data comes the additional need for computer storage (i.e., via virtual memory, high density diskettes, hard drives, etc.) if the personal computer is to continue to be used with this investigation technique.

The application of tomography to groundwater investigations has been shown to be possible. Further development of testing equipment and investigation techniques is required, however, before a more thorough study of this application can be made.

An important comment that must be made about geotechnical tomography is that it is not an exacting form of analysis.
Noise, incomplete scanning geometry, poor redundancy characteristics, and, in the case of matrix reconstructions, omitted data make the picture generated by this technique non-unique. Only the use of a priori information (i.e., the constraining of the magnitude of the solution vector values, the addition of known or suspected information, the deletion of nonessential pixels, etc.), or the addition of numerous other source/receiver positions, can convert the typically under- or mixed-determined tomographic problem to an even- or overdetermined one. Thus, the cross-hole tomographic technique should be considered an anomaly finder and not an exact representation of what exists in the subsurface.

7.2 Further Study

As with any relatively new procedure or method of analysis, there are always new areas to investigate, and tomography is no exception. While the medical applications of algebraic reconstruction techniques have been firmly established and are in wide use (Sochurek, 1987), there still remains the further study of the geotechnical, geophysical and groundwater applications of tomography, and the improvement of the algorithm itself to provide better resolution and computational efficiency.

7.2.1 Improvements on the ART Algorithm

The iterative algorithms examined in this paper are by no means the only tomographic techniques available. Gilbert (1972) and Herman, Lent and Rowland (1973) have examined a number of alternative methods including the simultaneous iterative reconstruction technique (SIRT), and have found that the SIRT algorithm can produce better reconstructions than the ART algorithms (Oppenheim, 1974). SIRT, which was originally proposed by Gilbert (1972), operates by changing pixel densities using the data from all projections simultaneously.
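That simultaneous update can be sketched as follows; the weighting and relaxation choices shown are one common SIRT variant, assumed for illustration rather than taken from Gilbert's paper:

```python
import numpy as np

def sirt(A, t, n_iter=100, relax=1.0):
    """Simultaneous iterative reconstruction: every pixel is corrected
    from the residuals of all rays at once."""
    x = np.zeros(A.shape[1])
    row_norm = np.einsum('ij,ij->i', A, A)         # ||a_i||^2 for each ray
    n_rays = np.maximum((A != 0).sum(axis=0), 1)   # rays crossing each pixel
    for _ in range(n_iter):
        resid = (t - A @ x) / row_norm             # ART-style ray residuals
        x += relax * (A.T @ resid) / n_rays        # averaged over all rays
    return x
```

Because one pass touches every ray before any pixel is finalized, the whole system must be held in memory at once, which is the storage penalty that limits SIRT on small machines.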
Thus, each iteration of SIRT (once through the entire picture) is equivalent to n iterations of an ART-type algorithm, where n is the number of raypaths. As the SIRT algorithm handles all projections simultaneously, there is a great increase in the computer storage requirements in order to handle this type of problem. As basic micro-computers presently have only limited storage capabilities, the use of the SIRT algorithm is not feasible for these machines when dealing with large problems. However, with the advancement of microcomputer technology, this limitation may soon change. With the use of the SIRT algorithms, both the resolution and accuracy of the reconstructed picture can be improved (Gilbert, 1972), and this appears to be one of the main areas for future research.

Another area that will be of future interest is the modification of the basic ART algorithm. While there has been much interest in the stopping criteria and image resolution facets of tomography, a universal method of determining the accuracy of the reconstruction still does not exist. Further analysis of the mathematical or least squares criteria is required, as are methods to provide more complete coverage of the problem domain. Also, a graphics form of error checking is believed by this author to be an important area for research. The visual display of the iterative process allows programmer participation in the determination of what is the true or expected profile. Graphics in/graphics out technology would also allow the programmer to manually input ray refraction between layers, and thus account for the ray bending characteristics of the medium.

The matrix form of tomographic problem solving has also been shown to be viable. Further
studies in this area might also include graphics display techniques, the development of more adequate matrix solution methods, and the analysis of filtering procedures to better determine arrival times and/or process out unwanted noise from the data.

Another key area for study in future might be the application of tomography to three dimensional problems. This would only be applicable to mainframe capable systems, however, due to the large storage requirements of the x,y,z coordinates for each node in the section.

7.2.2 Applications Research

The investigations described in this paper and by other authors indicate that there are many more areas of application of the tomographic technique. Key future areas of research include groundwater studies to investigate layer continuity, contaminant transport and perhaps fracture density. While the single pressure pulses or slugs (which are required to obtain a reasonable response at distance) may be too large for near surface investigations, sinusoidal pulses as proposed by Black and Kipp (1981) or seismic tests may be ways of getting around this problem. For deep investigations, however, large pulses are not seen to be problematical, and analyses for deep structures may be carried out using theories similar to those presently used in the oil industry for well response testing.

While use of pressure pulses is seen to be an exciting new area of research, geophysical (seismic) investigations still have much to offer. As stated at the beginning of this paper, the problem presently facing in-situ testing is to get as much information as possible from a limited area of investigation, and as such, new downhole tools have been developed to fill this need. The seismic cone penetrometer (Campanella et al, 1984) is one such tool which monitors the shear wave propagation through subsurface materials, and from it infers material moduli. The use of tomography with the penetrometer seems ideal. Surface shots using hammer or elephant gun sources and/or downhole shots using seismic charges coupled with the seismic cone serve to produce a three-dimensional view of the subsurface (Figure 77). This not only provides additional soil property information, but also indicates layer discontinuity and anomalous zones which might cause foundation instability. Further study in this area is required to refine the geotechnical aspects and implications.

Figure 77: Seismic Cone Penetrometer and its Application to In-Situ Tomographic Testing

7.3 Summary and Conclusions

There are a number of improvements and applications still yet to be investigated involving tomographic techniques. The mathematics of the process are simple to understand, calculations can be done on a micro-computer, and it is widely applicable in the geotechnical and groundwater engineering professions. Further research and testing is all that is required before tomography becomes an integral part of in-situ investigations.

REFERENCES

Barrodale, I. and Roberts, F.D.K., 1978. Solution of the Constrained L1 Linear Approximation Problem. University of Victoria Department of Mathematics Report, Victoria, B.C..

Black, J.H. and Kipp, K.L., 1981. Determination of Hydrogeological Parameters Using Sinusoidal Pressure Tests: A Theoretical Appraisal, Water Resources Research, 17(3), 686-692.

Campanella, R.G. and Robertson, P.K., 1981. Applied Cone Research. UBC Department of Civil Engineering, Soil Mechanics Series 46, Vancouver, B.C..

Campanella, R.G., Rice, A., and Robertson, P.K., 1984.
Research and Development of a Downhole Seismic Cone Penetrometer. UBC Department of Civil Engineering, Soil Mechanics Series 73, Vancouver, B.C..

Cho, Z.H., 1974. General Views on 3-D Image Reconstruction and Computerized Transverse Axial Tomography, IEEE Transactions on Nuclear Science, NS-21, 44-71.

Dines, K.A. and Lytle, R.J., 1979. Computerized Geophysical Tomography, Proceedings IEEE, 67(7), 1065-1073.

Fawcett, J.A., and Clayton, R.W., 1984. Tomographic Reconstruction of Velocity Anomalies, Bulletin of the Seismological Society of America, 74(6), 2201-2219.

Freeze, R.A., and Cherry, J.A., 1979. Groundwater. Prentice-Hall Inc., Englewood Cliffs, N.J., USA.

Gilbert, P., 1972. Iterative Methods for the Three-Dimensional Reconstruction of an Object from Projections, Journal of Theoretical Biology, 36, 105-117.

Gordon, R., Bender, R., and Herman, G.T., 1970. Algebraic Reconstruction Techniques (ART) for Three Dimensional Electron Microscopy and X-Ray Photography, Journal of Theoretical Biology, 29, 471-481.

Gordon, R., 1974. A Tutorial on ART, IEEE Transactions on Nuclear Science, NS-21, 78-93.

Gustavsson, M., Ivansson, S., Moren, P., and Pihl, J., 1986. Seismic Borehole Tomography: Measurement System and Field Studies, Proceedings of the IEEE, 74(2), 339-346.

Herman, G.T., 1972. Two Direct Methods for Reconstructing Pictures from their Projections: A Comparative Study, Computer Graphics and Image Processing, 1, 123-144.

Herman, G.T. and Rowland, S., 1973. Three Methods for Reconstructing Objects from X-rays: A Comparative Study, Computer Graphics and Image Processing, 2, 151-178.

Herman, G.T., Lent, A., and Rowland, S., 1973.
ART: Mathematics and Applications, Journal of Theoretical Biology, 42, 1-32.

Hirasaki, G.J., 1974. Pulse Tests and Other Early Transient Pressure Analyses for In-Situ Estimation of Vertical Permeability, Society of Petroleum Engineers Journal, February 1974, 75-90.

Horn, B.K., 1978. Density Reconstruction Using Arbitrary Ray-Sampling Schemes, Proceedings of the IEEE, 66(5), 551-562.

Hsieh, P.A. and Neuman, S.D., 1985a. Field Determination of the Three Dimensional Hydraulic Conductivity Tensor of Anisotropic Media, a) Theory, Water Resources Research, 21(11), 1655-1665.

Hsieh, P.A., Neuman, S.D., Stiles, G.K., and Simpson, E.S., 1985b. Field Determination of the Three Dimensional Hydraulic Conductivity Tensor of Anisotropic Media, b) Methodology and Application to Fractured Rocks, Water Resources Research, 21(11), 1667-1676.

Ivansson, S., 1986. Seismic Borehole Tomography: Theory and Computational Methods, Proceedings IEEE, 74(2), 328-338.

Johnson, C.R., Greenkorn, R.A., and Woods, E.G., 1966. Pulse Testing: A New Method for Describing Reservoir Flow Properties Between Wells, Journal of Petroleum Technology, December 1966, 1599-1604.

Kamal, M.M., 1979. The Use of Pressure Transients to Describe Reservoir Heterogeneity, Journal of Petroleum Technology, August 1979, 1060-1070.

Kowalski, G., 1977. Reconstruction of Objects From Their Projections: The Influence of Measurement Errors on the Reconstruction, IEEE Transactions on Nuclear Science, NS-24, 850-864.

Luikov, A.V., 1968. Analytical Heat Diffusion Theory. Academic Press, New York.

McCann, D., Grainger, P., and McCann, C., 1975. Inter-borehole Acoustic Measurements and Their Use in Engineering Geology, Geophysical Prospecting, 23, 50-69.

McMechan, G.A., 1983.
Seismic Tomography in Boreholes, Geophysical Journal of the Royal Astronomical Society, 74, 601-612.

McPhee, N.G., 1978. The Book of Insults, Ancient and Modern. Van Nostrand Reinhold, Toronto, Canada.

McPhee, N.G., 1981. The Second Book of Insults. Van Nostrand Reinhold, Toronto, Canada.

Mason, I.M., 1981. Algebraic Reconstruction of a Two-Dimensional Velocity Inhomogeneity in the High Hazels Seam of Thoresby Colliery, Geophysics, 46(3), 298-308.

Menke, W., 1984a. Geophysical Data Analysis: Discrete Inverse Theory. Academic Press, Orlando, Florida.

Menke, W., 1984b. The Resolving Power of Cross Borehole Tomography, Geophysical Research Letters, 11(2), 105-108.

Mersereau, R., 1973. Recovering Multi-Dimensional Signals from Their Projections, Computer Graphics and Image Processing, No. 2, 179-195.

Mikhailov, M.D., and Ozisic, M.N., 1984. Unified Analysis and Solutions of Heat and Mass Diffusion. John Wiley and Sons, New York.

Oppenheim, B.E., 1974. More Accurate Algorithms for Iterative 3 Dimensional Reconstruction, IEEE Transactions on Nuclear Science, NS-21, 12-11.

Peterson, J.E., Paulsson, B., and McEvilly, T.V., 1985. Applications of Algebraic Reconstruction Techniques to Cross hole Seismic Data, Geophysics, 50(10), 1566-1580.

Prosser, D.W., 1981. A Method of Performing Response Tests on Highly Permeable Aquifers, Groundwater, 19(6), 588-592.

Radcliff, R.D., and Balanis, C.A., 1979. Reconstruction Algorithms for Geophysical Applications in Noisy Environments, Proceedings IEEE, 67(7), 1060-1064.

Ramachandran, G.N. and Lakshminarayanan, A.V., 1971. Three-Dimensional Reconstruction from Radiographs and Electron Micrographs, Application of Convolution Instead of Fourier Transforms, Proc. Nat. Acad. Sci. U.S., 68, 2236-2240.

Ramierez, A.L.
, 1986. Recent Experiments Using Geophysical Tomography in Fractured Granite, Proceedings IEEE, 74(2), 347-352.

Robertson, P.K., 1985. In-Situ Testing and its Application to Foundation Engineering. UBC Department of Civil Engineering, Soil Mechanics Series 91, Vancouver, B.C..

Scudder, H.J., 1978. Introduction to Computer Aided Tomography, Proceedings IEEE, 66(6), 628-637.

Shepp, L.A., and Logan, B.F., 1974. Fourier Reconstruction of a Head Section, IEEE Transactions on Nuclear Science, NS-21, 21-32.

Sochurek, H., 1987. Medicine's New Vision, National Geographic Society Magazine, Washington, D.C., 171(1), 2-41.

Walter, G.R., and Thompson, G.M., 1982. A Repeated Pulse Technique for Determining the Hydraulic Properties of Tight Formations, Groundwater, 20(5), September-October 1982.

Wang, H.F., and Anderson, M.P., 1982. Introduction to Groundwater Modelling, Finite Element and Finite Difference Techniques. Freeman Press, New York.

Way, S.C., and McKee, C.R., 1982. In-Situ Determination of Three-Dimensional Aquifer Permeabilities, Groundwater, 20(5), 594-603.

Wong, J., Hurley, P., and West, G., 1983. Cross-Hole Seismology and Seismic Imaging in Crystalline Rocks, Geophysical Research Letters, 10(8), 686-689.

Wonnacott, R.J., and Wonnacott, T.W., 1970. Econometrics. John Wiley and Sons, New York.

Woodbury, A.D., 1987. Inverse Theory and Model Identifiability: An Overview. C.S.C.E. Centennial Conference, Montreal, Quebec, Canada, May 1987.

