Geophysical Survey Decomposition and Efficient 3D Inversion of Time-domain Electromagnetic Data

by

Dikun Yang

B.Sc., China University of Geosciences, 2005
M.A.Sc., China University of Geosciences, 2008

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF

Doctor of Philosophy

in

THE FACULTY OF GRADUATE AND POSTDOCTORAL STUDIES (Geophysics)

The University of British Columbia (Vancouver)

September 2014

© Dikun Yang, 2014

Abstract

Rigorous three-dimensional (3D) forward and inverse modeling of geophysical electromagnetic (EM) data can be time-consuming and may require a large amount of memory on expensive computers. In this thesis, a novel framework, called survey decomposition, is proposed to make 3D EM modeling more efficient.

Recognizing the multi-scale nature of EM modeling problems, the fundamental idea is to break down an EM survey, which consists of many transmitters, receivers and times/frequencies, into a number of subproblems, each of which is concerned only with data modeled by a localized source, receiver and time/frequency. The modeling is then carried out on the subproblems at different scales, instead of on the original problem as a whole. Such a decomposition is able to speed up the numerical modeling because: (1) a subproblem can have highly efficient discretizations in space and time, customized to its localized source, receiver, time/frequency and the specific scale of investigation; for example, it uses a local mesh that is much smaller than the one used in the original global problem; (2) a subproblem is a self-contained EM modeling problem that does not depend on other subproblems, so it is suitable for massive parallelization; (3) upon decomposition, no modeling is carried out on the global mesh and the amount of computation is proportional to the number of subproblems, so the scalability improves significantly.

After decomposition, the large number of subproblems is further reduced by adaptive, random and dynamic subsampling of the data. The adaptive scheme matches the number of samples to the scale of investigation so that only the data necessary for the model reconstruction are selected.

The framework of survey decomposition is applied to two types of time-domain EM (TEM) surveys: airborne TEM and ground large-loop TEM. Both synthetic and field data are inverted using this new approach. I show that survey decomposition is capable of producing modeling and inversion results similar to those from the conventional methods with greatly reduced time and memory usage. Further speed-up by massive parallelization and generalization to other types of EM surveys is straightforward.

Preface

This thesis presents my original research completed at the Department of Earth, Ocean and Atmospheric Sciences and the Geophysical Inversion Facility (GIF) at the University of British Columbia (UBC), Vancouver, Canada. Some chapters contain material from papers published in peer-reviewed scientific journals and other sources.

Chapter 2 is a revised version of a published paper (Yang, D. and D. Oldenburg, 2012, Three-dimensional inversion of airborne time-domain electromagnetic data with applications to a porphyry deposit: Geophysics, 77, B23-B34). I carried out the numerical experiments and prepared the manuscript. Oldenburg designed and guided the research and edited the manuscript.
Chapter 3 contains text and figures excerpted from a published paper (Yang, D., D. Oldenburg and E. Haber, 2014, 3-D inversion of airborne electromagnetic data parallelized and accelerated by local mesh and adaptive soundings: Geophysical Journal International, 196, 1492-1507) and unpublished materials. I initiated the research project, carried out all the experiments and prepared the manuscript. Oldenburg advised the formation of the basic ideas and revised the manuscript.

Chapter 4 is a revised version of the same published paper (Yang, D., D. Oldenburg and E. Haber, 2014, 3-D inversion of airborne electromagnetic data parallelized and accelerated by local mesh and adaptive soundings: Geophysical Journal International, 196, 1492-1507). I initiated the research project, carried out all the experiments and prepared the manuscript. Oldenburg supervised the project and revised the manuscript. Haber provided the computational basis and technical advice for the research.

The contents of Chapter 5 have not been published. I initiated the research project, carried out all the experiments and prepared the manuscript. Oldenburg supervised the project and revised the manuscript. This work will be presented at the Society of Exploration Geophysicists Annual Meeting in 2014, and a journal paper is in preparation.

Appendices A and B contain materials derived from the course notes of MATH 522 and EOSC 555B, lectured by Eldad Haber.

I prepared the manuscript of the entire thesis with advice and help from Oldenburg on the organization, proof-reading and editing.

Table of Contents

Abstract
Preface
Table of Contents
List of Tables
List of Figures
Acknowledgments
Dedication

1 Introduction
  1.1 Electromagnetic Surveys for Conductivity
    1.1.1 Conductivity: a diagnostic physical property
    1.1.2 Maxwell's equations
    1.1.3 Controlled source EM surveys
  1.2 Interpretations of TEM Data
    1.2.1 Qualitative and semi-quantitative methods
    1.2.2 Rigorous modeling
  1.3 3D Numerical Modeling
    1.3.1 Forward modeling
    1.3.2 Inversion
    1.3.3 State of the art
  1.4 Research Topics
    1.4.1 3D modeling: a pending paradigm shift
    1.4.2 Thesis outline

2 Why 3D Modeling
  2.1 Field Example: Airborne TEM Data at Mt. Milligan
    2.1.1 Geologic background
    2.1.2 VTEM survey
  2.2 Failure of 1D Interpretation
    2.2.1 Apparent conductivity
    2.2.2 Time constant analysis
    2.2.3 1D layered model inversion
  2.3 Studies on a Synthetic Model
    2.3.1 Synthetic 3D model
    2.3.2 1D inversion
    2.3.3 3D inversion
  2.4 3D Inversion of Mt. Milligan Data
    2.4.1 Inversion set-up
    2.4.2 A multi-level approach
    2.4.3 3D inversion
    2.4.4 Model interpretation
  2.5 Critical Rethink: When is 3D Inversion Necessary?
    2.5.1 Quantification of the EM footprint
    2.5.2 Comparison between VTEM and DIGHEM
  2.6 Summary

3 Survey Decomposition
  3.1 Computational Complexities
    3.1.1 Space complexity
    3.1.2 Time complexity
    3.1.3 Optimization complexity
  3.2 Identification of Subproblems
    3.2.1 Atomic problem
    3.2.2 Practical recipes
    3.2.3 Decomposition of surveys with localized sources
    3.2.4 Decomposition of surveys with distributed sources
  3.3 Localized Discretization
    3.3.1 Local mesh
    3.3.2 Local time discretization
    3.3.3 Global-local interactions
    3.3.4 Massive parallelization
  3.4 Data Subsampling
    3.4.1 Oversampling in EM modeling
    3.4.2 Adaptive subsampling using cross validation
  3.5 Summary

4 Airborne TEM
  4.1 Local Mesh for Airborne Sounding
    4.1.1 Forward modeling on local mesh
    4.1.2 Sensitivity on a local mesh
    4.1.3 Synthetic inversion of a two-block model
  4.2 Adaptive Subsampling of Soundings
    4.2.1 Subsampling workflow
    4.2.2 Synthetic inversion with adaptive soundings
  4.3 Inversion of the Entire Mt. Milligan Data Set
    4.3.1 3D inversion
    4.3.2 Final model validation
  4.4 Summary

5 Ground Loop TEM
  5.1 Field Example: SQUID Data at the Lalor Mine
  5.2 Decomposition of Large Transmitter
    5.2.1 Atomic problem with long-offset source-receiver pair
    5.2.2 Adaptive looplet
    5.2.3 Forward modeling using looplets
    5.2.4 Sensitivity using looplets
  5.3 Adaptive Subsampling of Receivers
  5.4 Synthetic Inversion of a Sphere-in-halfspace Model
    5.4.1 Global discretization inversion
    5.4.2 Survey decomposition inversion
    5.4.3 Model interpretation and validation
  5.5 Inversion of SQUID Data at the Lalor Mine
    5.5.1 Global discretization inversion
    5.5.2 Survey decomposition inversion
    5.5.3 Model interpretation and validation
  5.6 Summary

6 Conclusions
  6.1 Conditions of Using 3D Inversion
  6.2 Over-computing in 3D EM Modeling
  6.3 Survey Decomposition: A New Path
  6.4 Further Development
    6.4.1 Possible improvement
    6.4.2 Other research opportunities

Bibliography

A 3D Forward Modeling Using Finite Volume Method
B Calculation of Sensitivity
C Fast Discrete Model Conversion Between Arbitrary Meshes

List of Tables

Table 2.1 Summary of three inversions using different numbers of soundings for the synthetic data.
Table 2.2 Specifications of the extra-coarse, coarse and fine mesh inversions.
Table 2.3 Numerical performances of the extra-coarse, coarse and fine meshes used in the multi-level inversion.
Table 4.1 Forward modeling of an airborne sounding on the global mesh and two local meshes.
Table 5.1 Synthetic inversion using adaptive looplets and adaptive receivers.
Table 5.2 Field data inversion using adaptive looplets and adaptive receivers.

List of Figures

Figure 1.1 Airborne and large loop ground TEM survey.
Figure 2.1 General geologic structure and airborne TEM flight lines at Mt. Milligan.
Figure 2.2 Cross section of the porphyry system at Mt. Milligan.
Figure 2.3 Transmitter current waveform and time channels of the VTEM survey at Mt. Milligan.
Figure 2.4 Apparent conductivity map of VTEM data at Mt. Milligan.
Figure 2.5 Time constant map of VTEM data at Mt. Milligan.
Figure 2.6 Depth slice of stitched 1D inversion models of VTEM data at Mt. Milligan at an elevation of 950 m.
Figure 2.7 Synthetic conductivity model of a porphyry system.
Figure 2.8 The recovered model from 1D inversion of the synthetic data.
Figure 2.9 Snapshots of the secondary magnetic field excited by a transmitter loop above the center of the resistive stock.
Figure 2.10 Data misfit convergences and models of three synthetic 3D inversions. The red dots indicate the sounding locations and the black lines outline the true model.
Figure 2.11 Dimensions of the smallest cells in the extra-coarse (black), coarse (green) and fine (red) meshes.
Figure 2.12 Normalized data misfit as a function of iterations for the inversions using three multi-level meshes.
Figure 2.13 Cumulative CPU time of iterations in the extra-coarse, coarse and fine mesh inversions.
Figure 2.14 Histogram of normalized data misfit after the fine mesh inversion.
Figure 2.15 Depth slices of the inversion models at Mt. Milligan at an elevation of 970 m after the (a) extra-coarse, (b) coarse and (c) fine mesh inversions.
Figure 2.16 Depth slices of the final conductivity model at elevations of 1050 m, 970 m, 890 m and 810 m.
Figure 2.17 Depth slice of the final interpretation model at an elevation of 1030 m overlain by (a) geology and (b) the 420 Ωm contour of the DC resistivity model.
Figure 2.18 The footprints R of the VTEM and DIGHEM systems at different cut-off indices p at Mt. Milligan.
Figure 3.1 Computational costs of a single airborne TEM sounding as poorly scaled functions of the number of mesh cells: (a) memory usage, (b) time for one Maxwell matrix factorization, and (c) time for one time step.
Figure 3.2 The spectrum of subproblem data grouping.
Figure 3.3 Decomposition of a dipole source survey with small source-receiver offset.
Figure 3.4 Decomposition of a dipole source survey with large source-receiver offset.
Figure 3.5 Decomposition of a survey with a large loop source.
Figure 3.6 Decomposition of a survey with a long wire source.
Figure 3.7 A local mesh designed for a subproblem modeling the source (S) and receiver (R) marked by the red dots.
Figure 3.8 Examples of a local mesh for a zero-offset source-receiver pair overlaying the global mesh in plan view.
Figure 3.9 Examples of a local mesh for a long-offset source-receiver pair overlaying the global mesh in plan view.
Figure 3.10 Global and local time discretizations.
Figure 3.11 Model of parallel computing for survey decomposition.
Figure 3.12 Hypothetical performance of massive parallelization using survey decomposition.
Figure 3.13 Demonstrative example: adapting the number of samples to the scale of investigation using cross validation.
Figure 4.1 The synthetic model on the global mesh and a local mesh.
Figure 4.2 Forward modeled data on the global mesh and the two successive local meshes in the test.
Figure 4.3 Sensitivities of the dBz/dt datum at t = 0.001 s for the synthetic model: (a) the global mesh result and (b) the local mesh result with interpolation.
Figure 4.4 Synthetic airborne TEM inversion using survey decomposition. Depth slices at 150 m of (a) the true model, (b) the global discretization inversion model and (c) the survey decomposition inversion model.
Figure 4.5 Summary of synthetic airborne inversion using both survey decomposition and adaptive soundings.
Figure 4.6 Conductivity models recovered at different iterations of adaptive soundings: (a) Iteration 1 with 48 soundings, (b) Iteration 2 with 96 soundings, (c) Iteration 4 with 192 soundings, (d) Iteration 6 with 384 soundings. The red dots indicate the sounding locations, and the white boxes outline the exact locations of the two prisms.
Figure 4.7 Counts of selection for every sounding throughout the entire inversion.
Figure 4.8 Global mesh for the 3D inversion of the VTEM data at Mt. Milligan. The VTEM sounding locations are indicated by the red dots.
Figure 4.9 Summary of the Mt. Milligan VTEM 3D inversion.
Figure 4.10 Conductivity model of the Mt. Milligan VTEM 3D inversion: (a) a depth slice at 950 m elevation, (b) a cross section A-B at 6109500N.
Figure 4.11 Data grid of the time channel at 0.68 ms for the observed and predicted dBz/dt data at Mt. Milligan.
Figure 5.1 Location of the Lalor Mine deposit on the map of geologic domains in Manitoba (adopted from the government of Manitoba's website).
Figure 5.2 Layout of the SQUID ground loop TEM survey at the Lalor Mine.
Figure 5.3 Conductivity model and locations of sources and receivers for the atomic problem modeling test. The magnetic dipole transmitter is marked by the red dot and the receivers by the white dots.
Figure 5.4 Atomic problem modeling results obtained with local and global discretizations along the two perpendicular lines in Figure 5.3. The local and global discretization modelings are plotted as dots and solid lines respectively. The time channels are distinguished by colors.
Figure 5.5 Decomposition of a transmitter loop using adaptive looplet refinement based on the Voronoi tessellation. The tessellation and the looplets' moments are updated after every addition of looplets (black dots).
Figure 5.6 Conductivity model and survey layout for the adaptive looplets modeling test. The transmitter loop is indicated by the red lines, and the data at one of the receivers, marked by a yellow dot, are tested for comparison.
Figure 5.7 Locations of looplets and the associated Voronoi tessellations of the transmitter loop for the four delay times at the example receiver.
Figure 5.8 Local mesh and model used in one of the looplet subproblems, modeling at 10⁻⁴ s.
Figure 5.9 Forward modeled data using the global discretization and local discretization (adaptive looplets).
Figure 5.10 Example sensitivity computed using the global discretization and local discretization (adaptive looplets). The transmitter loop is indicated by the red lines and the receiver location by the yellow dot.
Figure 5.11 Summary of the synthetic inversion using the standard algorithm with global discretizations.
Figure 5.12 Location of random samples of the receiver-time (R-T) pairs used at the last iteration of the synthetic inversion. The dimension of time is depicted as the depth, and the transmitter loop is indicated by the red lines.
Figure 5.13 Summary of the synthetic inversion using the survey decomposition with local discretizations.
Figure 5.14 Synthetic inversion models for the ground loop TEM survey.
Figure 5.15 True data misfit of the final survey decomposition inversion model reassessed using the global discretizations.
Figure 5.16 Data fit of the final survey decomposition inversion model (showing delay time at 10⁻³ s) for the synthetic data.
Figure 5.17 Summary of the Lalor Mine field data inversion using the standard algorithm with global discretizations.
Figure 5.18 Summary of the Lalor Mine field data inversion using the survey decomposition with local discretizations.
Figure 5.19 Inversion models for the SQUID field data at the Lalor Mine VMS deposit. The two models are cut along the x-direction and y-direction for cross sections.
Figure 5.20 True data misfits of the final survey decomposition inversion model at the Lalor Mine reassessed using the global discretizations.
Figure 5.21 Data fit of the final survey decomposition inversion model (showing delay time at 3.78×10⁻³ s) for the Lalor Mine SQUID data.
Figure A.1 Staggered discretization in 3D.

Acknowledgments

It is UBC-GIF's tradition to thank Roman Shekhtman, the man who answers the endless requests from all GIF members, at the very beginning of the acknowledgments. I am particularly grateful for his help and assistance with programming and computers. My research could not have taken place if Roman had not walked to the computer room and rebooted the computers by hand after my codes crashed the whole cluster on holidays.

My supervisor, Dr. Douglas Oldenburg, is amazing. He is always academically, emotionally and financially supportive. I have learned not only skills and knowledge but also visions and attitudes from him. Dr. Eldad Haber's expertise in computational geophysics has enlightened my mind; my research is built on the numerical foundations that he and Dr. Douglas Oldenburg developed. They are the giants whose shoulders I can stand on. I also thank Dr. Catherine Johnson and Dr. Christian Schoof, who kindly served on my supervisory committee and candidacy committee.

Major funding for my research came from the MITEM Consortium at UBC-GIF and the NSERC CRD and IRC programs. I received tuition awards from the Faculty of Graduate and Postdoctoral Studies and the Faculty of Science, UBC. Geoscience BC generously offered me financial support for two consecutive years (2009-2010) through the Geoscience BC Scholarship program. I am also grateful for the Egil Harold Lorntzsen Scholarship and the Thomas and Marguerite MacKay Memorial Scholarship.

Some organizations and individuals have kindly provided help with data and other technical information.
provided the SQUID data and backgroundinformation at the Lalor Mine deposit; The discussions with Peter Kowalczyk in-spired the investigation on the airborne EM systems’ footprint in Chapter 2; Ter-rane Metals Corp. (now Thompson Creek Metals Company) and Dr. DianneMitchinson provided information about the geology at Mt. Milligan.My studies would be less productive and boring if there was no invaluablediscussion and curling with David Marchant, Dr. Kristopher Davis, Dr. LaurensBeran and many other colleages in UBC-GIF.As an international student, I should not forget the people who had made myacademic pursuit in Canada possible. Dr. Xiangyun Hu and Prof. Jiaying Wangin China University of Geoscienecs (Wuhan) encouraged and recommended me tostudy in UBC. The St. John’s College in UBC and the lovely residents there hadoffered great comfort to me when I just came to Vancouver.I personally trace the origin of this thesis back to the two books my parents gaveto me as the birthday gifts when I was a first grader, one folk tale that unleashedmy power of imagination and the other One Hundred Thousand Whys that triggeredmy curiosity in science. Thank you, Mom and Dad.Maggie, who gives me unconditional understanding, patience, support and ac-company through the years, deserves the highlighted spot at the end of my ac-knowledgements.xviiDedicationTo whoever is impressed by the following statement.“ Imagination is more important than knowledge.For knowledge is limited, whereas imagination em-braces the entire world, stimulating progress, givingbirth to evolution. It is, strictly speaking, a real fac-tor in scientific research.”– Albert EinsteinxviiiChapter 1IntroductionThe physical properties of the earth are the keys to the understanding of the process,evolution, and status of the planet, for examples, sedimentation, tectonic events,mineralization, groundwater flow, rock deformation, etc. However, direct mea-surement is usually very difficult, especially for the deep interior and vast volumeof the earth. Geophysicists have to rely on observation of a number of differenttypes of physical field to infer the earth’s physical structures. Modeling of thosephysical processes is non-trivial given the complexity of the earth, the underlyinggoverning equations and the variety of survey methods. In this thesis, I develop anovel framework for advanced geophysical modelings, and use it to improve the ef-ficiency of three-dimensional inversion of time-domain electromagnetic inductiondata.This chapter introduces the background of my research, including motivation,literature reviews, fundamental methodologies, objectives, along with the basicequations and assumptions used in this thesis.1.1 Electromagnetic Surveys for Conductivity1.1.1 Conductivity: a diagnostic physical propertyAmong the many physical properties that the earth possesses, electrical conductiv-ity is one of the most diagnostic (Palacky, 1988). Conductivity reflects how easily1the medium conducts electric current when an electric field is present.Previous studies in petrophysics have established the links between the con-ductivity and the geologic targets and activities.• Mineralization. 
• Mineralization. Many minerals are characterized by high conductivities; for example, different types of natural pyrite exhibit considerable variation in conductivity (Abraitis et al., 2004); some massive sulphide deposits can be many orders of magnitude more conductive than the background rock (Stolz, 2000); and conductive graphites are indicative in finding unconformity uranium (Mwenifumbo et al., 2004).

• Hydrocarbon reservoir. The existence of hydrocarbons can reduce the conductivity of sandstone, making a resistive anomaly an interesting target in oil and gas exploration (Waxman et al., 1974; Constable, 2010).

• Tectonic activity. It is known that there is partial melting and/or dehydration of rock, a cause of high conductivity, in the crust and upper mantle, so the distribution of conductive zones can be informative in the study of the earth's interior (Hyndman & Hyndman, 1968; Waff, 1974). Deep fault zones can also be imaged in terms of their electrical conductivity (Ritter et al., 2005).

• Groundwater flow. When water moves underground, whether it is a geothermal source or sea water intrusion, the impacted zone can show very different conductivity signatures compared to the surroundings (Schwarz et al., 1985; Fitterman & Stewart, 1986).

• Environmental problems. Both natural processes and human activities can alter the earth's conductivity near the surface, for example, the development of sinkholes (van Schoor, 2002), contamination by landfill leachate (Chambers et al., 2006), unexploded ordnance (Huang & Won, 2003), and so on.

Therefore, there is great interest in detecting the earth's conductivity structures at different scales.

1.1.2 Maxwell's equations

Direct measurement of the earth's conductivity using well logging or rock samples is very expensive or even impossible, so exploration geophysics uses observations of electromagnetic fields to infer the conductivity.

Conductivity is connected to the EM fields through Maxwell's equations, which consist of four partial differential equations (PDEs). Stated in differential form, they are

\nabla \cdot D = \rho_f, \qquad (1.1a)
\nabla \cdot B = 0, \qquad (1.1b)
\nabla \times E = -\frac{\partial B}{\partial t}, \qquad (1.1c)
\nabla \times H = J_f + \frac{\partial D}{\partial t} + J_s, \qquad (1.1d)

where t is time, E is the electric field, H is the magnetic field intensity, B is the magnetic flux density, D is the electric displacement field, ρ_f is the free charge density, J_f is the free electric current density, and J_s is the artificial external source. Equation 1.1d shows the possibility of driving the EM fields using artificial sources and observing the responses of the fields. This motivates the use of controlled source EM (CSEM) survey methods in exploration and is a topic of this thesis introduced in the next subsection.

Fields and fluxes are associated through the constitutive relations

D = \varepsilon E, \qquad (1.2a)
B = \mu H, \qquad (1.2b)
J = \sigma E, \qquad (1.2c)

where ε is the electrical permittivity of the medium, µ is the magnetic permeability, and σ is the electrical conductivity; they are the (bulk) physical properties of the EM medium, and geophysicists see the earth as a composition of materials with different ε, µ and σ. In order to fully describe the earth's materials, these properties must be expressed as tensors to model anisotropy, and/or as functions of frequency for frequency-dependence. As those complications are outside the scope of this thesis, I assume the earth is made of linear, isotropic and frequency-independent materials.
More insight can be gained by viewing Maxwell's equations in the frequency domain. Taking the Fourier transform of equations 1.1c and 1.1d, with equation 1.2 substituted in, yields

\nabla \times E(\omega) = -i\omega\mu H(\omega), \qquad (1.3a)
\nabla \times H(\omega) = \sigma E(\omega) + J_s(\omega) + i\omega\varepsilon E(\omega), \qquad (1.3b)

where ω = 2πf is the angular frequency. For the geophysical surveys of interest in this thesis, the quasi-static approximation σ ≫ ωε holds for most media encountered in geosciences. The physical implication is that the magnetic field emanates dominantly from the current of moving charges in the manner of diffusion; the quasi-static approximation allows the (wave) term iωεE representing the displacement current to be safely neglected.

The impact of small variations of µ in most geoscientific objects is considered secondary in comparison with the fluctuation of EM fields due to σ, which can vary over a range of 10² to 10⁻⁶ S/m. Therefore, it is reasonable to assume the earth has free-space values ε₀ = 8.85418×10⁻¹² F/m and µ₀ = 4π×10⁻⁷ H/m, and conductivity σ is the sole physical property varied in modeling and inversion.

I note that the assumptions made in this thesis are meant to facilitate the development of the methodology and do not restrict generalization of my results to other, more sophisticated cases after necessary modifications.

The process of EM induction can be viewed as a linear system, for which the source J_s is the input and the measured fields are the output. Computing the system's responses at an infinite number of frequencies for the entire spectrum provides complete information. That spectrum can be inverse transformed to generate the impulse response. The system's responses, the electric and magnetic fields, are then the convolution of the impulse response and the waveform of J_s in time. In principle, working at infinite times from zero to infinity (seconds) is equivalent to working at infinite frequencies from zero to infinity (Hz). This means the measurements and modeling undertaken in the time domain and their counterparts in the frequency domain are interchangeable.
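As a quick numerical check on the quasi-static approximation above, the following sketch (my illustration, not from the thesis; the conductivity and frequency are representative assumptions) compares σ with ωε₀:

    import numpy as np

    eps0 = 8.85418e-12    # free-space permittivity (F/m)
    sigma = 1e-2          # assumed host conductivity (S/m), typical of sediments
    f = 1e4               # assumed frequency (Hz), high end of many TEM spectra

    omega = 2 * np.pi * f
    ratio = sigma / (omega * eps0)   # quasi-static validity: sigma >> omega*eps

    print(f"sigma / (omega*eps0) = {ratio:.3e}")
    # ~1.8e4 >> 1, so the displacement-current term i*omega*eps*E is negligible

Even at the high-frequency end of typical TEM spectra the ratio is on the order of 10⁴, which is why the displacement current can be dropped.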
1.1.3 Controlled source EM surveys

The advantage of controlled source EM (CSEM) is that by changing the survey parameters, for example, the transmitter/receiver geometries and locations and the current waveform, the earth can be illuminated in different ways. Also, if the source current is known, the conductivity information can be obtained by collecting either electric field or magnetic field data.

Although there is no fundamental difference between understanding EM induction in the frequency domain and in the time domain, significant distinctions exist in terms of field surveys, because the instruments are made specifically to work with either discrete frequencies or delay times at a specific base frequency:

• Frequency-domain EM (FEM). The transmitter, either grounded electrodes (galvanic source) or a closed loop (inductive source), carries a time-harmonic current at some discrete frequencies. The receiver measures the EM fields at the same frequencies. The measured signals can be decomposed into two components: one is in-phase (real) with the source and the other is out-of-phase (imaginary or quadrature). So in a FEM survey, there are real and imaginary data at each frequency for each source-receiver pair.

• Time-domain EM (TEM). The transmitter carries a certain amount of current that varies as a function of time with a specific base frequency, usually involving a pulse or a turn-on and turn-off of the current. This time-varying current induces secondary fields in the earth, which decay over time. The receivers measure the fields as functions of time at a sequence of delay times (time channels).

A variety of CSEM survey configurations in the time and frequency domains have been developed for many different applications, and they have provided invaluable information about the earth's conductivity. For example:

• The airborne EM method can be used to quickly map geology (Palacky, 1981, 1993), target mineralization zones (Fraser, 1978), and investigate groundwater (Fitterman & Deszcz-Pan, 1998) and other near-surface problems (Pfaffhuber et al., 2010).

• Ground EM using a large, fixed loop is useful for finding highly conductive mineralization buried under cover (Zang, 1993).

• Marine CSEM is able to locate resistive regions in the sea bed that are potentially associated with hydrocarbon reservoirs (Edwards, 1997; MacGregor & Sinha, 2000; Constable, 2010).

• DC resistivity is considered a special case of CSEM, in which the source waveform is constant. Depending on the array configuration, DC resistivity can be used to look for minerals (Spitzer & Chouteau, 2003; Legault et al., 2008), groundwater (Benkabbour et al., 2004), geo-hazards (Johnson, 2003) and other targets (Bernstone et al., 2000).

• A small portable loop-loop system can be used in environmental and engineering applications to investigate the near surface (De Jong et al., 1979; Jardani et al., 2007).

• LOTEM (long-offset TEM) is often used in large-scale investigations of the earth's crust (Strack et al., 1990) and in hydrocarbon prospecting (Strack et al., 1989).

In this thesis, I focus upon results in time-domain EM, particularly the two most commonly used TEM configurations in mining exploration: airborne and large loop ground TEM. There is continued reference to frequencies, however, since my methodology is equally applicable to both domains and some concepts are easier to explain or understand in the frequency domain.

In practice, a horizontal transmitter loop is usually used because of its easy deployment and good coupling with the ground. A trade-off between resolution and penetration depth exists when choosing the size of the loop: a small loop captures great detail of the variation in the conductivity structure near the surface and enjoys good mobility, but is more likely to suffer from a low signal-to-noise ratio (SNR) at late delay times, which carry information from deep in the earth; a large loop, on the other hand, creates a large primary field illuminating a large volume of the earth, but may not be able to differentiate variations of conductivity at the small scale and is cumbersome to move around. Therefore, both small and large loop sources are widely used in TEM surveys for different purposes, and they are specifically studied in this thesis.

Small loop source: airborne TEM. When a transmitter loop is much smaller than the scale of investigation, it can be treated as a magnetic dipole source (a sketch of the dipole's primary field follows below). Such a local source induces EM fields in its close proximity, so it is important to move the loop around to sample a large area and to take measurements close to the source for strong anomalous signals. Although surface operations moving a small loop are possible, an airborne survey is more appealing because of its ability to sweep a large area at a much lower cost. An airborne TEM system usually consists of a transmitter loop mounted on, or towed by, an aircraft and a towed receiver coil at the center of the transmitter or slightly offset. As the aircraft flies along flight lines, the transmitter emits an EM excitation with a particular waveform, and the induced fields (usually dB/dt in volts) are measured at a sequence of delay times by the receiver; this emit-receive cycle completes an airborne TEM sounding at that location. Over a measurement period of a fraction of a second, while the aircraft has moved a few meters, a sounding is acquired. This distance moved is assumed small compared with the scale of investigation of the EM system (see the sketch of a helicopter-borne system in Figure 1.1). By choosing different parameters of the survey configuration, for example, the base frequency, waveform, pulse width, flight height, loop radius and sounding spacing, airborne TEM systems can be tuned to geo-electrical exploration at different scales. Applications of such surveys include regional geologic mapping, mineral deposit targeting, and hydrological and environmental studies. Most commercial systems for mineral exploration deliver data at spacings from a few meters to about 10 meters. The spacing between parallel flight lines can vary from kilometers for regional reconnaissance to tens of meters for near-surface investigations.
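To make the dipole approximation of a small transmitter loop concrete, here is a minimal sketch (my illustration; the loop radius, current and observation point are hypothetical, airborne-like numbers) of the free-space primary field B = µ₀/(4π)[3(m·r̂)r̂ − m]/r³ that such a source produces:

    import numpy as np

    mu0 = 4 * np.pi * 1e-7  # free-space permeability (H/m)

    def dipole_b_field(m_vec, r_vec):
        """Free-space B field of a magnetic dipole with moment m_vec (A*m^2),
        observed at displacement r_vec (m) from the dipole."""
        r = np.linalg.norm(r_vec)
        rhat = r_vec / r
        return mu0 / (4 * np.pi) * (3 * np.dot(m_vec, rhat) * rhat - m_vec) / r**3

    # Assumed numbers: a 13 m radius, single-turn loop carrying 300 A
    moment = 300 * np.pi * 13.0**2           # dipole moment = I * area, vertical axis
    m_vec = np.array([0.0, 0.0, moment])
    obs = np.array([0.0, 0.0, -30.0])        # ground point 30 m below the loop
    print(dipole_b_field(m_vec, obs))        # primary field exciting the ground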
Large loop source: ground TEM. In contrast to moving a small loop in the air, one can energize a large loop fixed on the surface to illuminate deep targets. In order to maximize the depth of investigation, such loops are generally a few square kilometers in area. In such a survey, the transmitter induces EM fields in the ground using a particular current waveform; the receiver then measures the fields of B or dB/dt at some delay times at a particular sounding location; after each sounding, the receiver moves to the next location to repeat the same measurement procedure while the transmitter loop stays immobile. Due to the cost of moving receivers on the surface, the common practice in ground TEM so far is to follow a few receiver lines, along which soundings are located a certain distance apart (see the sketch of a ground loop system in Figure 1.1). This drawback is compensated by the survey's stability, which allows much later times to be measured. Therefore, ground TEM is primarily used in applications like the delineation of geologic basement, buried mineral deposits and other targets for which depth penetration is important.

[Figure 1.1: Airborne and large loop ground TEM survey.]

In addition to measuring the magnetic field (B or dB/dt) on the surface, it is possible to have receivers down boreholes or flying in the air. However, from the modeling point of view, there is no fundamental difference between measuring data in a borehole or the air and on the surface.

1.2 Interpretations of TEM Data

Interpretation is the process of extracting information from the data for use in decision making. This process is critical, as the measured data can only be useful if they are interpreted properly. Historically, interpretation methods have evolved over time, and there are qualitative, semi-quantitative and quantitative methods of interpretation.

1.2.1 Qualitative and semi-quantitative methods

Because of the complexity of electromagnetic induction, TEM data have historically been interpreted using qualitative methods. For example, one can differentiate regional rock units by looking at time channel grids of an airborne TEM survey, or compare the field data on a profile or at a sounding with some well-known canonical responses and infer the geometry of the target.
These methods are easy to implement and have worked well for some simple targets, or in the field for data quality control.

When more specific information is requested, for example, the conductivity (conductance) and the geometry of the target, some semi-quantitative methods can be used by assuming simplified earth models (a numerical sketch of the first and third methods follows this list):

• Time constant analysis (Palacky & West, 1973; Nabighian & Macnae, 1991; Macnae, 1998). This method approximates the decay of the field with an exponential function; the time constant derived from the function is related to the geometry and conductivity of the target; for example, soundings over a large and/or good conductor have slow decays and thus large time constants. Calculation of the time constant is very fast, but it is difficult to separate the size and the conductivity based on the time constant alone.

• Apparent conductivity (Fraser, 1978; Palacky & West, 1991; Palacky, 1993; Annan et al., 1996). This method finds a representative uniform half-space that would produce the same responses as the measured data at a particular delay time; it converts data to quantities in conductivity, eliminating the data complications contributed by various survey configurations, but the depth information remains in time.

• Conductivity depth transform/imaging (CDT/CDI) (Wolfgram & Karlik, 1995; Macnae, 1998; Eaton, 1998; Reid & Fullagar, 1998; Macnae et al., 2010). CDT relates time to depth through the concept of diffusion distance, so it can perform imaging by placing the apparent conductivities at estimated depths. Because a diffusion distance is defined for a uniform half-space, the depth estimation can be unreliable if the subsurface contains complicated structures.

• Parametric inversion. Under certain circumstances, an earth model can be effectively represented by some simple bodies, like a plate (Keating & Crossley, 1990; Raiche, 2004) or a sphere (Smith & Lee, 2002); the geometric parameters and conductivity of the bodies are recovered as model parameters in the inversion. Parametric modeling can be as good as rigorous methods if the ground truth is similar to the assumed bodies, but, in reality, the simple bodies can hardly explain an entire data set due to the complexity of the actual geology.

Using the interpretation methods above, we can quickly get a good handle on the first-order structure of the target. However, when the earth is more complicated or more advanced information is required from the data, more rigorous and quantitative methods have to be used.
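As referenced in the list above, the following is a minimal sketch of time constant estimation and of the diffusion-distance rule underlying CDT (my illustration; the decay values, time channels and half-space conductivity are synthetic assumptions):

    import numpy as np

    mu0 = 4 * np.pi * 1e-7

    # Assumed late-time decay d(t) = A * exp(-t/tau), sampled at time channels (s)
    t = np.array([1e-4, 2e-4, 4e-4, 8e-4, 1.6e-3])
    tau_true = 5e-4
    d = 2.0e-9 * np.exp(-t / tau_true)      # synthetic dB/dt values (arbitrary units)

    # Time constant: the slope of a log-linear fit gives -1/tau
    slope, _ = np.polyfit(t, np.log(d), 1)
    tau = -1.0 / slope
    print(f"recovered time constant: {tau:.2e} s")   # ~5e-4 s

    # CDT-style depth rule: diffusion distance in a uniform half-space,
    # delta(t) = sqrt(2*t / (mu0*sigma)), for an assumed half-space sigma
    sigma = 1e-2
    depth = np.sqrt(2 * t / (mu0 * sigma))
    print(depth)   # meters; later channels map to greater depths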
1.2.2 Rigorous modeling

Rigorous modeling, sometimes referred to as pixel-based modeling, rigorously solves Maxwell's equations. This method represents the 3D earth by a number of pixels, each of which has a constant physical property. Pixel-based models have more flexibility to characterize the earth's structures, but the cost of computation increases with the number of model parameters. Rigorous modeling methods are often characterized by the dimensionality of the discretization in space.

The simplest discretization is the one-dimensional (1D) layered model, in which the earth consists of a stack of layers, each with a different value of a physical property and a thickness. This assumes that the physical property is not a function of horizontal location and changes only with depth; the surface also has to be flat. This assumption is compatible with geological sedimentation and thus deemed valid in many cases. 1D inversion provides great insight into depth resolution and works very well when the strata are nearly 1D. As many 1D algorithms have been published (Raiche et al., 1985; Farquharson & Oldenburg, 1993; Lane et al., 2000; Wolfgram et al., 2003; Sattel, 2005; Brodie & Sambridge, 2006; Vallee & Smith, 2009; Fullagar et al., 2010), it is a well-established approach in real applications. However, if severe topography is present or the lateral variation of conductivity is significant, the credibility of 1D inversion results can be questioned.

If the geology varies with depth and one horizontal direction, but is constant in the other, horizontally orthogonal direction, one can construct a two-dimensional (2D) model consisting of many "cells" on a 2D plane, which works best with geophysical profiles along lines perpendicular to the strike direction. The extra dimension means there are more parameters in 2D models. As a transition from 1D to 3D, some 2D forward modeling and inversion algorithms have been proposed with applications (Wilson et al., 2006; Yu, 2012).

Eliminating all restrictions on dimensional homogeneity, a three-dimensional (3D) model can image arbitrarily complicated structures, subsuming 1D and 2D pixel-based models. Therefore, a 3D model is a very flexible tool for geophysical interpretation. A pixel in a 3D model is called a "cell" and resides in a "mesh". The number of cells increases geometrically as one goes from 1D to 2D, then to 3D. Despite the computational costs, which increase nonlinearly with the number of cells, the flexibility of 3D modeling is very appealing in real applications.

In the following section, I briefly overview the two essential components involved in rigorous modeling:

• Forward modeling: computing the theoretical responses of a model in a survey;
• Inverse modeling (inversion): finding a model that reproduces the field observations.

1.3 3D Numerical Modeling

1.3.1 Forward modeling

Given a parameterized model m, forward modeling simulates the field observations for data d = {d₁, ..., d_N} in data space:

d = F(m), \qquad (1.4)

where F is a symbolic forward operator determined by the physics of the problem and the applied sources. Real data always contain noise, which is generally treated as a zero-mean random variable and, due to its complications, is not explicitly modeled.

If the model is in 3D, the forward problems have to be solved with numerical techniques. The spatial domain of modeling is first discretized into many cells based on a mesh grid. The physical property of every cell is then considered a model parameter. By placing the fields at nodes, edges, centers or on faces of cells, the differential or integral operators can be converted to vectors and matrices using the finite volume (Haber et al., 2000; Haber & Ascher, 2001), finite difference (Wang & Hohmann, 1993; Commer & Newman, 2004), finite element (Pridmore et al., 1981; Badea et al., 2001; Schwarzbach & Haber, 2013) or integral equation methods (Hohmann, 1975; Newman et al., 1986). Finally, the fields are obtained by solving matrix equations.

Regardless of the specific discretization and formulation techniques, eventually one has to solve a linear system of equations at each time or frequency of the form

A u = q, \qquad (1.5)

where the Maxwell matrix A is determined by the discretization and the conductivity, u is the field or potential, and the right-hand side q represents the sources, boundary conditions and/or the primary fields.
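As a concrete, hedged sketch of equation 1.5 (a generic sparse, diagonally dominant matrix stands in for the Maxwell matrix; this is not the thesis code), the following shows the pattern that later motivates direct solvers: factorize A once, then reuse the factors for many right-hand sides such as different sources or time steps:

    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    n = 5000
    # Stand-in for a Maxwell matrix: sparse, symmetric, diagonally dominant
    A = sp.diags([-1.0, 4.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")

    lu = spla.splu(A)            # factorize once: the expensive step

    # Reuse the factors for many right-hand sides (sources / time steps)
    for _ in range(10):
        q = np.random.rand(n)    # one source term or previous-step field
        u = lu.solve(q)          # cheap triangular solves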
The TEM forward modeling algorithms then differ in how equation 1.5 is solved using existing numerical solvers.

The rapid development of computers has made 1D and 2D modeling relatively easy, but the numerous model parameters from a 3D discretization can lead to a linear system (equation 1.5) of unmanageable size. Geophysicists continue to struggle to find better ways of solving 3D problems with good accuracy and speed. Improving the efficiency of 3D modeling has been an active research topic for many decades and is still one of the top items on geophysicists' wish lists, since being able to forward model geophysical responses efficiently is critical to many advanced analyses of data.

1.3.2 Inversion

A geophysical inversion infers the physical properties of the earth from the data observed in the field. Fundamentally, an inversion can be viewed deterministically or stochastically. The former finds one, or some, models that reproduce the data and hopes the models are, to some extent, representative of the ground truth; it leaves the uncertainty untreated. Stochastic inversion instead regards the model parameters as random variables and converts data to probability density distributions of the model parameters (Tarantola, 2005). Stochastic inversion is more philosophically sound, but severely suffers from the "curse of dimensionality" and is useful only when the model has a limited number of parameters, for example in 1D layered inversion. My thesis focuses on the deterministic approach, and "finding a model" will implicitly refer to that methodology in the rest of the thesis. Nevertheless, because stochastic inversion shares the same foundation of forward modeling with deterministic inversion, some of my research results are also valid in stochastic inversions whenever applicable.

Regardless of the specific survey and type of data, inversions all have similar formulations and methodologies: suppose the current model is m and the forward responses based on m are d = F(m); the inversion finds a model update δm so that the misfit between the observed data d^obs and the predicted data d* = F(m + δm) is sufficiently reduced. This process requires two pieces of information: (1) how the current predicted data differ from the observed data, i.e. the data misfit; and (2) how perturbations of the model parameters affect the predicted data, i.e. the sensitivity. Provided these two pieces of information are available, variants of inversion schemes can be formulated with different manipulations.

Due to the ill-posedness of the inverse problem, the information in the data is not enough to uniquely determine a model update. A sensible way of tackling this problem is to impose some prior information on the relationship among model parameters and formulate the inversion as an optimization problem with Tikhonov regularization (Tikhonov & Arsenin, 1977):

\min_{m,\,\beta} \; \phi_d(m) + \beta\,\phi_m(m), \qquad (1.6)

where φ_d is a functional of the data misfit, φ_m is a regularization term measuring the model norm, and β is a trade-off (regularization or Tikhonov) parameter. Equation 1.6 states that in addition to reducing the data misfit, the model must also comply with the structural constraints imposed by φ_m, which generally requires the model to be "simple" as measured by certain criteria. A "simple" model has a large data misfit, while a model fitting the data better is more "complex". An inversion therefore also finds a β that ensures the data are reasonably fit while the model has a reasonable amount of structure.
A variety of formulations for φ_d and φ_m have been proposed in previous publications (Portniaguine & Zhdanov, 1999; Guitton & Symes, 2003; Haber, 2004; Farquharson, 2007; Sun & Li, 2014). Since the choice of φ_d and φ_m is not essential to my research, I assume the most commonly used L2 norm for both. The regularization term φ_m contains a smallest-model component and a flattest-model component. Specifically, they are defined as (Li & Oldenburg, 1996)

\phi_d = \frac{1}{2} \sum_{i=1}^{N} \left( \frac{F_i(m) - d_i^{obs}}{\varepsilon_i} \right)^2, \qquad (1.7)

\phi_m = \frac{1}{2} \alpha_s \int_\Omega \left\{ w_s (m - m_{ref}) \right\}^2 dv + \frac{1}{2} \sum_{i=x,y,z} \alpha_i \int_\Omega \left\{ w_i \frac{\partial (m - m_{ref})}{\partial i} \right\}^2 dv. \qquad (1.8)

In the data misfit term (equation 1.7), N is the total number of data, F is the forward modeling from model to data, d^obs is the observed data, and ε is the estimated uncertainty. In the model norm term (equation 1.8), Ω is the modeling domain, m_ref is a reference model, α_s, α_x, α_y, α_z are scalar weighting parameters adjusting the relative importance of the different components of the model norm, and w_s, w_x, w_y, w_z are weighting functions fine-tuning the penalty on local structure cell by cell.

Expressing equations 1.7 and 1.8 in discrete form, the objective functional in equation 1.6 can be written as

\phi = \frac{1}{2} \left\| W_d \left[ F(m) - d^{obs} \right] \right\|_2^2 + \beta\,\frac{1}{2} \left\| W_m (m - m_{ref}) \right\|_2^2, \qquad (1.9)

where W_d can be a diagonal weighting matrix containing the information about the data uncertainty, F(m) and d^obs are vectors of data, and W_m can be an assembly of one diagonal matrix and three directional first-order difference matrices with the corresponding weights encoded.

Equation 1.9 can be minimized using different optimization techniques that utilize the gradient direction, an approximated second-order derivative, or even a full second-order derivative (Hessian) matrix. Generally, gradient methods take less effort to compute but converge more slowly than those making use of curvature information. I use the Gauss-Newton (GN) method as the basis of my inversion algorithms.

Differentiating equation 1.9 with respect to m and setting the gradient to zero yields the system

g(m) = F'(m)^\top W_d^\top W_d \left[ F(m) - d^{obs} \right] + \beta\,W_m^\top W_m (m - m_{ref}) = 0, \qquad (1.10)

where F'(m) is the derivative of the forward modeling operator with respect to m. F'(m) is often called the sensitivity or Jacobian matrix and is denoted by J. Suppose the current model is m_k; a model update δm is sought so that m_{k+1} = m_k + δm. For a non-linear problem, J depends on m, and local linearization gives

F(m_{k+1}) = F(m_k + \delta m_{k+1}) \approx F(m_k) + J(m_k)\,\delta m_{k+1}. \qquad (1.11)

Equation 1.11 is exact for a linear problem.

Substituting equation 1.11 into equation 1.10 and rearranging yields an equation for the model update δm_{k+1}:

\left[ J(m_k)^\top W_d^\top W_d\,J(m_k) + \beta\,W_m^\top W_m \right] \delta m_{k+1} = -g(m_k), \qquad (1.12)

where J(m_k) is the Jacobian matrix of the model m_k. The updated model is m_{k+1} = m_k + αδm_{k+1}, where α is a step length parameter obtained by a line search. Since the optimization is an iterative process and the model update δm_{k+1} does not need to be exact, equation 1.12 is often solved by iterative solvers, for example the conjugate gradient (CG) methods (Hestenes & Stiefel, 1952). In a CG solver, the coefficient matrix in equation 1.12 multiplies a vector v at every CG iteration. Essentially this means that the efficiency of the optimization depends on the operation of the dense matrix J or J^T times a vector v (Jv or J^T v). There are basically two ways of doing this. The explicit method first computes the elements of J by carrying out many forward-equivalent computations and stores J in memory, then performs Jv or J^T v upon request during the CG iterations. The implicit method expresses J as a multiplication of some other matrices, including the forward modeling operation A⁻¹ in equation 1.5; when v is applied to J, the forward modeling actually takes place at every CG iteration. The choice between the explicit and implicit methods should be problem-dependent and is discussed in later chapters when specific problems are concerned. (See details about the explicit and implicit methods in Appendix B.)
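The following is a minimal, matrix-free sketch of one inexact Gauss-Newton solve of equation 1.12 (my illustration: a dense random matrix stands in for J, and unit data weights are assumed; in the implicit method the products Jv and J^T u would each trigger forward-equivalent solves with A⁻¹ rather than explicit matrix multiplies):

    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    n_data, n_cells, beta = 200, 1000, 1e-2
    rng = np.random.default_rng(0)

    J = rng.standard_normal((n_data, n_cells))    # stand-in sensitivity matrix
    Wd = sp.identity(n_data)                      # data weights (assumed unit)
    Wm = sp.diags([-1.0, 1.0], [0, 1], shape=(n_cells, n_cells))  # 1st-order diff
    g = rng.standard_normal(n_cells)              # gradient of eq 1.10 (stand-in)

    def hessian_times_v(v):
        # [J^T Wd^T Wd J + beta Wm^T Wm] v without ever forming the matrix
        return J.T @ (Wd.T @ (Wd @ (J @ v))) + beta * (Wm.T @ (Wm @ v))

    H = spla.LinearOperator((n_cells, n_cells), matvec=hessian_times_v)
    dm, info = spla.cg(H, -g, maxiter=20)         # inexact GN update, eq 1.12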
The minimizer m of equation 1.6 obtained by solving equation 1.12 may not be satisfactory if β is too large and the data are under-fit. Several approaches have been developed to find an appropriate β (Constable et al., 1987; Hansen & O'Leary, 1993; Haber & Oldenburg, 2000). I use an approach called the "cooling method" to find the largest β that offers an acceptable data misfit (Haber, 1997). The procedure can be described as follows. The inversion starts with an over-estimated β, implying that the simplicity of the model is more important than the reduction of the data misfit, so the data are only roughly fit; for a particular β, it takes a few GN updates δm to achieve optimality; then β is reduced for another round of GN updates with the new β and the starting model from the previous β; the "cooling" of β terminates once the model contains enough structure to reasonably reproduce the observed data. The two levels of iteration involved here are called the "outer iteration", for the β reduction, and the "inner iteration", for every GN update. The cooling approach is a process of progressively transferring information from the data to the model.
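Schematically, the cooling method amounts to the nested loop below (a sketch of the procedure just described; gauss_newton_update and misfit are hypothetical helpers, and the cooling factor and target misfit are illustrative assumptions):

    def cool_beta_inversion(m0, beta0, target_misfit, n_inner=3, cooling=0.5):
        """Outer iteration: reduce beta; inner iteration: a few GN updates per beta.
        gauss_newton_update(m, beta) solves eq 1.12 and returns the line-searched
        model; misfit(m) evaluates phi_d in eq 1.7. Both are hypothetical helpers."""
        m, beta = m0, beta0
        while misfit(m) > target_misfit:          # stop once data are reasonably fit
            for _ in range(n_inner):              # inner GN iterations at fixed beta
                m = gauss_newton_update(m, beta)
            beta *= cooling                       # outer iteration: cool beta
        return m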
One single solve of equation 1.12 requires computation of the forward responses F(m) and the sensitivity J(m), which amounts to many forward-equivalent computations. On top of that, the outer and inner iterations of the optimization add even more forward modeling computations. One should also note that the inversion is non-unique, so more inversions must be carried out for a robust interpretation. All of this together makes the inversion process very expensive, as hundreds or even thousands of forward modeling computations are required.

1.3.3 State of the art

Since the pioneering work on 3D numerical modeling in the 1970s and 1980s, primarily based on the integral equation method (Hohmann, 1975; SanFilipo & Hohmann, 1985; Newman et al., 1986; Wang et al., 1994), a variety of 3D algorithms using different numerical techniques have been proposed. Despite the success of this laboratory development, the technology has proven very difficult to use in practice because 3D modeling is usually very time-consuming and requires a substantial amount of computing resources. Therefore, in the last decade much research has focused on improving efficiency using different approaches. In particular, I have found the following work relevant and inspiring:

• Domain decomposition. The difficulty of 3D modeling is mostly associated with the large number of cells within the modeling domain. The most straightforward solution is to decompose the large domain and solve the subdomain problems using parallel computing (Alumbaugh et al., 1996; Newman & Alumbaugh, 1997; Xie et al., 2000; Commer & Newman, 2004; Holtham & Oldenburg, 2012), and as many as 32768 processors have been reported in such parallelizations (Commer et al., 2008). All of these efforts have made large-scale problems more scalable, but problems still exist. For example, a fine-grained domain decomposition has many small subproblems at the cell level and in theory is able to utilize a large number of processors, but the communication overhead may be significant enough to prohibit the use of additional processors. Also, balancing the load across processors requires sophisticated task scheduling and assignment (Commer & Newman, 2004, 2008).

• Direct solvers. In the past few years, there has been a trend of migrating from iterative solvers to direct solvers for the forward modeling in equation 1.5. For a single forward modeling with a few sources, equation 1.5 can be inexpensively solved by iterative solvers such as Krylov subspace methods (Haber et al., 2007; Commer & Newman, 2008). However, when solving the inverse problem, which involves many A⁻¹ operations for the forward modeling and the sensitivity, a direct solver is preferable, because once A⁻¹ is expressed as factorized triangular matrices, many different sources and the sensitivities of different data can be solved rapidly. With the help of newly developed parallel direct solvers, like MUMPS (Amestoy et al., 2006), the large Maxwell matrix A can now be efficiently factorized, so that direct solvers speed up the modeling of realistically-sized problems (Oldenburg et al., 2013; Grayver et al., 2013). This thesis develops new strategies based on the algorithm outlined in Oldenburg et al. (2008, 2013), referred to here as the "standard algorithm": it was state-of-the-art when proposed, and it is the benchmark I want to surpass. Development of the basic EM algorithm is not original work of this thesis; rather, I begin with those basic developments and generate a new methodology to solve problems that were too large for the standard algorithm and to greatly speed up those that could be solved. The necessary details on the derivations and implementations of the algorithm are provided in Appendices A and B.

• Multiple meshes. Besides the above, another recent trend in modeling is emerging in different areas: the use of multiple meshes in one modeling. Commer & Newman (2006) used a fine mesh for the early-time modeling and switched to a coarse mesh for the late times during time stepping; as a further development, Commer & Newman (2008) decoupled the model mesh and the simulation mesh, so that the simulation mesh can be optimized for the frequency and the source-receiver orientation after the data decomposition of marine CSEM; Wang et al. (2009) inverted borehole EM data using multiple meshes with moving refinements at small intervals along the well, modeling the regions outside the focused sections with coarse background cells; Cox et al. (2010, 2012) used the concept of footprint in the integral equation formulation to invert airborne EM data, so that every airborne sounding only uses a small part of the entire 3D mesh and the sensitivity matrix is dramatically sparsified, leading to a more efficient Jv computation. The speed-up due to the use of multiple meshes has been shown to be significant.

While each of these previous works improves efficiency, none is a panacea, and a technique with only one particular improvement soon finds itself in a "too big to solve" situation as the size of the survey continues to expand. This imposes serious challenges, but offers great opportunities for my research.
My thesis draws inspiration from the ideas scattered across these seemingly separate improvements and consolidates them into one generalized theory of survey decomposition. The new theory not only reveals why the previous improvements work, but also points out a new technical route that uses the theory to its ultimate extent for even better efficiency.

1.4 Research Topics

1.4.1 3D modeling: a pending paradigm shift

We are now at a particular time when the basic 3D EM modeling algorithms have been developed in the laboratory, high performance computing devices have become easily available, and more quantitative information in 3D or even 4D is expected from geophysics. An era of interpretation based on full 3D modeling may soon become a reality. However, while it is true that more TEM data sets have been processed by 3D inversion than ever before, the fact remains that the majority of TEM data are still processed by non-3D approaches. In particular, practitioners face some serious challenges in 3D EM modeling:

• Expense. In spite of the progress made, 3D modeling is still too expensive in that it usually takes a long time and a substantial amount of memory on high-performance computers.

• Big data. Advanced acquisition systems now deliver very large data sets. Modeling an entire large-scale survey can be extremely expensive or even impractical with current technologies.

• Emerging computing resources. 3D EM modeling is traditionally carried out on dedicated workstations or computer clusters. Although new computing devices and platforms, for example the general purpose GPU (graphics processing unit) and cloud computing, have been invented to handle big data, it is still not clear how 3D EM modeling can effectively benefit from a large number of distributed processors.

• Necessity. The high costs associated with 3D modeling make working in 3D a hard decision. Without solid evidence of why 3D modeling must be used, people tend to lean towards other, more economical non-3D solutions.

I see fundamentally two questions that must be properly answered before the paradigm shift can take place: (1) Is 3D modeling really necessary? (2) If yes, with the help of massive parallelization, what can be done to further improve its efficiency for large-scale problems? In this thesis, the first question motivates my research and the second question is the ultimate problem I solve.

1.4.2 Thesis outline

Before diving into full 3D modeling, I investigate the rationale behind 3D inversion: that is, why and under what circumstances 3D inversion is necessary. In Chapter 2, I demonstrate the superior results obtained by 3D inversion, and also show how restrictions of dimensionality in the model parameterization can harm an inversion. I further critically examine the conditions for using 3D inversions. This establishes the fundamental motivation for working in 3D.

Chapter 3 introduces the concept of survey decomposition: why geophysical EM data should be modeled in this way, and how it is implemented for different survey configurations. This forms the proposed theoretical framework upon which the remaining chapters are built.

The framework of survey decomposition is applied to airborne and ground loop EM surveys in Chapters 4 and 5, respectively.
These two parallel chapters include the details of the implementations of the proposed methods and provide both synthetic and field data examples as proof of usefulness.

Chapter 6 concludes the research and makes statements on the overall importance of my work within the context of the paradigm shift. Further improvements and future work are also discussed.

Chapter 2

Why 3D Modeling

While 3D inversion has in principle always been thought desirable, geophysicists have long convinced themselves that lower dimensional inversions could at least yield a reasonable approximation to the ground truth for interpretation. This chapter shows how 1D airborne TEM inversions can fail when the assumption of dimensionality is violated. It is based on a real example from a porphyry mineral deposit. Analysis of another airborne EM data set from the same site, however, shows that 1D inversion may sometimes be acceptable. Generally, except in certain circumstances, 3D inversions should be carried out to prevent misinterpretation.

Insights developed in this chapter, regarding the sensing volume of particular survey equipment, form part of the foundation for a later analysis in this thesis.

2.1 Field Example: Airborne TEM Data at Mt. Milligan

2.1.1 Geologic background

Mt. Milligan is an alkalic porphyry Cu-Au deposit situated approximately 155 km northwest of Prince George in central British Columbia, Canada. The deposit was formed within the Early Mesozoic Quesnel Terrane, a Late Triassic to Early Jurassic magmatic arc complex that lies along the western North American continental margin, where many similar porphyry deposits have been discovered in the past (Welhener et al., 2007).

Extensive exploration programs employing drilling, geologic, geochemical and geophysical surveys have been carried out since the 1980s. Mt. Milligan consists of several mineralization zones of copper and gold (MBX, DWBX, 66 Zone and Southern Star, shown in pink in Figure 2.1). The mineralized zones are associated with monzonite stocks that have intruded into basaltic volcaniclastic rocks. Three types of alteration, potassic, albitic (sodic-calcic) and propylitic, can develop in a spatial geometry represented in Figure 2.2a. The alteration model, obtained by drilling around the MBX stock (Figure 2.2b), has a pattern similar to the theoretical model. The whole deposit is overlain by a layer of Tertiary sedimentary overburden that becomes thicker on the east side.

Despite extensive drilling, little logging of the electrical conductivity has been done, so the true conductivity model at the deposit scale is not available. However, some general information about the conductivity is thought to be known. There are basically four different conductivity targets at Mt. Milligan. The most conductive unit is expected to be the overburden, which is saturated or semi-saturated by groundwater. Faults and fractured rocks are also likely to be conductive, especially where they are largely connected and serve as pathways for groundwater. On the other hand, solid and unaltered igneous rocks, whether intrusive or host, should be resistive. The alteration zone is the most complicated unit and its conductivity may vary over a broad range of values because of the different types and degrees of alteration.
Some laboratory conductivity measurements of hand samples, funded by Geoscience BC (Mitchinson & Enkin, 2011), show that propylitically and albite-altered basalt can be relatively more conductive than other types of alteration due to high abundances of sulphides (pyrite and chalcopyrite). The monzonite, even when altered, is still very resistive. Since airborne TEM is most sensitive to conductors, I expect that the airborne TEM survey at Mt. Milligan can delineate the overburden, water-saturated fractures and some conductive alteration that contrast with the resistive rock units, including potassically-altered basalt, unaltered basalt and all types of monzonite.

Figure 2.1: General geologic structure and airborne TEM flight lines at Mt. Milligan.

2.1.2 VTEM survey

In 2007, an airborne TEM survey (versatile time-domain electromagnetic, VTEM, a system operated by Geotech Ltd.) was carried out over the Mt. Milligan deposit. The survey consisted of 14 lines, each 2.7 km long and 200 m apart (grey lines in Figure 2.1). The VTEM survey covered all the mineralized zones associated with the Mt. Milligan porphyry complex: the MBX, DWBX, 66 Zone and Southern Star.

Figure 2.2: Cross section of the porphyry system at Mt. Milligan: (a) theoretical model with intrusive monzonite and surrounding alteration shells (Mitchinson & Enkin, 2011); (b) alteration model around the MBX stock at Mt. Milligan by Jago (2008); the location of this cross section (A-B-C) is indicated in Figure 2.1.

The VTEM system (2007) has a transmitter loop of 13 m radius and a dipole moment of 503000 Am². The bird, including the coincident transmitter and receiver, is towed about 42 m below the helicopter. The helicopter is positioned by onboard GPS and a radar altimeter. The system measures the vertical component of dB/dt at 27 off-time channels from 99 to 9245 µs. These time channels are marked on the half-cycle of the transmitter current waveform in Figure 2.3. The VTEM system at Mt. Milligan has a base frequency of 30 Hz. In the final deliverables, the 30 waveforms per second are stacked (averaged) and the data are then filtered to give a reading every 3 to 5 m along each traverse.

Figure 2.3: Transmitter current waveform and time channels of the VTEM survey at Mt. Milligan.

2.2 Failure of 1D Interpretation

Plate modeling can be used to interpret airborne TEM data, but it works best for isolated good conductors and is not likely to generate a useful model for a porphyry deposit. There are also other methods routinely used in industry that assume a 1D conductivity structure: apparent conductivity, time constant analysis and 1D layered model inversion.

2.2.1 Apparent conductivity

Apparent conductivity is the conductivity of a uniform half-space that produces the same response as measured in a single time channel, or a best fit to a number of time channels. In this calculation, the flight height is taken into account. Here I use a look-up table similar to Sattel (2005) to rapidly find the half-space conductivity for a particular time channel at a certain height. If a solution is double-valued, the one minimizing the lateral conductivity variation is chosen. Figure 2.4 shows an apparent conductivity map of the entire survey at 0.4 ms (VTEM time channel 9).
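The look-up idea can be sketched in a few lines. The function dBdt_halfspace below is a hypothetical stand-in for any 1D half-space forward modeler, and the conductivity grid and log-amplitude matching are implementation choices, not prescriptions from Sattel (2005).

    import numpy as np

    sigmas = np.logspace(-4, 1, 201)       # trial half-space conductivities (S/m)

    def apparent_conductivity(d_obs, height, t, dBdt_halfspace):
        """Half-space conductivity whose response at delay time t and flight
        height best matches the observed datum d_obs (look-up table style).
        In practice the table is precomputed once per (height, t) pair."""
        table = np.array([dBdt_halfspace(s, height, t) for s in sigmas])
        mismatch = np.abs(np.log10(np.abs(table)) - np.log10(np.abs(d_obs)))
        return sigmas[np.argmin(mismatch)]  # double-valued cases need extra care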
Figure 2.4: Apparent conductivity map of VTEM data at Mt. Milligan.

2.2.2 Time constant analysis

The time constant (or decay constant, τ) provides a measure of the decay rate and is commonly used to rank conductors (Nabighian & Macnae, 1991). It is estimated by fitting the transient signals with an exponential decay curve. For each sounding I compute the τ that best fits VTEM time channels 6 ∼ 12; these time channels correspond to the expected depth of the deposit in this conductivity environment. The time constant map is shown in Figure 2.5.

Figure 2.5: Time constant map of VTEM data at Mt. Milligan.
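For reference, the exponential fit behind the time-constant map can be as simple as a straight-line fit in log space; this sketch assumes clean, positive-amplitude transients (sign handling and channel selection are left out).

    import numpy as np

    def time_constant(t, d):
        """Estimate tau from d(t) ~ A * exp(-t / tau) by fitting
        ln|d| versus t with a straight line; tau = -1 / slope."""
        slope, _ = np.polyfit(t, np.log(np.abs(d)), 1)
        return -1.0 / slope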
2.2.3 1D layered model inversion

Finally, I invert the VTEM data in 1D. While many 1D TEM inversion algorithms are available, I use the one outlined by Farquharson & Oldenburg (1993). For each sounding I seek a 1D layered model that reasonably reproduces the observed data and has minimum structural complexity. In the 1D inversion, the top 600 m is subdivided into 40 logarithmically spaced layers. An uncertainty of 10% of the datum value plus a floor of 10⁻¹³ V/A is assigned to each datum. The starting model for the 1D inversion at each sounding is the best-fitting half-space. The 1D models recovered from the inversion are stitched and interpolated to form a pseudo-3D volume of conductivity. Figure 2.6 shows the depth slice at an elevation of 950 m.

Figure 2.6: Depth slice of the stitched 1D inversion models of VTEM data at Mt. Milligan at an elevation of 950 m.

These three conventional interpretation methods reveal very similar patterns of conductivity. At the large scale there is good correspondence with the regional geology, particularly the contrast between the conductive sediments on the east side and the more resistive igneous rocks dominating the rest of the survey area. There are similarities regarding smaller features, and in particular all three methods suggest that the conductivity associated with the MBX stock is higher than that of the host. This stock, however, is well known to be monzonite and thus electrically resistive. In the following section, I demonstrate that the misinterpretation arises from the assumption that the earth is 1D.

2.3 Studies on a Synthetic Model

2.3.1 Synthetic 3D model

To investigate whether the high conductivity that coincides spatially with the location of the monzonite stock is a 1D inversion artifact caused by 3D geometry, I design a synthetic porphyry-type model that consists of a thin conductive overburden overlying a resistive intrusive stock surrounded by a conductive alteration shell. The host rock is more conductive than the stock but less conductive than the alteration shell. The synthetic survey consists of 91 soundings at 50 m above the surface on a 7 × 13 data grid (Figure 2.7). The transmitter current waveform is a step-off. Airborne TEM data are forward modeled at 20 VTEM time channels from 0.099 to 2.745 ms using the standard algorithm (Oldenburg et al., 2013). The data are noise-free, but a 10% standard deviation is assigned to each datum for the inversion.

Figure 2.7: Synthetic conductivity model of a porphyry system (overburden: 0.01 S/m, 60 m thick; alteration shell: 0.05 S/m, 150 m thick; stock: 0.001 S/m, 60 ∼ 300 m depth, 400 m wide; host rock: 0.002 S/m).

2.3.2 1D inversion

To emulate the previous analysis of the field data, I attempt to generate an approximate conductivity model using 1D inversion. The inversion parameters are the same as for the 1D inversion of the field data in the previous section. Figure 2.8 shows the central cross section and a depth slice (160 m below surface) of the pseudo-3D volume; the black lines outline the true model. The 1D inversion recovers the overburden but the remaining features are incorrect. There is a large conductor sitting where the resistive stock is supposed to be, and the conductive shell is missing. The deep extension of the conductor on the cross section of Figure 2.8 also conveys the very misleading information that the conductor is rooted at the depth of the bedrock.

Figure 2.8: The recovered model from 1D inversion of the synthetic data.

In an attempt to understand these results, I model the currents and secondary magnetic fields in the earth at various times due to a transmitter located directly above the resistive stock. At early times the induced currents are near the surface and localized beneath the transmitter (Figure 2.9a). Hence a 1D inversion of the early time data should yield a reasonable estimate of the conductivity beneath the transmitter. However, at middle and late times, the peak currents propagate outward and downward and become concentrated in the conductive shell around the stock (Figure 2.9b). Once the currents reach the conductive material they decay slowly in accordance with the local conductivity. These trapped currents produce significant dB/dt signals at the receiver, and that signal can be translated into high conductivity beneath the receiver when the data are inverted in 1D.

This phenomenon is well known and in fact forms the basic concept of why TEM surveys are useful. What was not appreciated before this analysis is how dramatic the effect can be in 3D. The circular geometry of the porphyry deposit and the location of the sounding maximize the potential for a significant artifact, but other simulations, in which the transmitter is displaced well away from the center of the deposit, produce a similar artifact. Once any appreciable amount of current reaches the conductive material, a significant signal will be recorded at the receiver location.

Figure 2.9: Snapshots of the secondary magnetic field excited by a transmitter loop above the center of the resistive stock: (a) at early time the currents are localized in the overburden close to the transmitter; (b) at late time the currents are mostly trapped in the conductive shell.

2.3.3 3D inversion

Having now understood that the correct assumption of dimensionality is crucial, we proceed with 3D inversions of the same synthetic data using the algorithm of Oldenburg et al. (2013).
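The mesh design that follows hinges on the EM diffusion distance. The quick calculation below uses the standard diffusion-distance formula (not tied to any particular code) and reproduces the cell-size choice quoted next.

    import numpy as np

    def diffusion_distance(t, sigma):
        """EM diffusion distance d = sqrt(2 t / (mu0 * sigma)), in metres."""
        mu0 = 4e-7 * np.pi
        return np.sqrt(2.0 * t / (mu0 * sigma))

    d = diffusion_distance(99e-6, 0.0067)   # earliest channel over 0.0067 S/m
    print(d, d / 3.0)                       # about 153 m, so roughly 50 m cells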
For numerical accuracy, the smallest cell, 50 × 50 × 20 m, is about one third of the diffusion distance of the earliest time for a 0.0067 S/m half-space. Padding cells are also needed so that appropriate boundary conditions are satisfied. The final mesh, 48 × 48 × 44, consists of 101376 cells. The uncertainty assignment is the same as for the synthetic 1D inversion shown in the previous section. Initial and reference models are uniform half-spaces of 0.005 S/m. In addition to obtaining a 3D inversion model, I would also like to examine how the number of soundings affects the inversion images. Therefore, three individual inversions using 16, 49 and 91 soundings are carried out. The CPU times are summarized in Table 2.1. All inversions are carried out on 3 nodes of a computer cluster, with 6 Intel Xeon E5410 CPUs and 96 GB RAM available.

Table 2.1: Summary of three inversions using different numbers of soundings for the synthetic data.

                                  Synthetic I   Synthetic II   Synthetic III
    Number of transmitters        16 (4 × 4)    49 (7 × 7)     91 (7 × 13)
    In-line sounding spacing      400 m         200 m          100 m
    Cross-line sounding spacing   400 m         200 m          200 m
    Number of iterations          4             5              5
    Time for one forward modeling 230 s         780 s          1320 s
    Total time for inversion      290 min       1108 min       1951 min

Cross sections and depth slices of the three synthetic inversion models, and the corresponding data misfit curves, are summarized in Figure 2.10. Even with only 16 soundings 400 m apart (Synthetic I in Figure 2.10a), the 3D inversion shows the conductivity contrast between the central resistive stock, the surrounding conductive shell and the host. The conductivities and geometry of the anomalies are distorted, but the concept of a conductive shell around a resistive stock is evident. This contrasts with the incorrect image shown in Figure 2.8. Increasing the number of soundings to 49 in Synthetic II (Figure 2.10b) greatly improves the image. All three geologic units, the overburden, conductive shell and resistive stock, are correctly delineated and have more realistic conductivities. A further increase to 91 soundings in Synthetic III produces only marginal improvement by sharpening the boundaries (Figure 2.10c). This improvement may not be justified by the increased computational time required for the inversion (see the CPU times in Table 2.1).

To summarize, by carrying out synthetic 1D and 3D inversions I have learned that: (1) 3D inversion is necessary for geologic units with 3D geometry because 1D inversion can produce serious artifacts; (2) 3D inversion, even with a few soundings, can still recover a reasonable large-scale picture that outperforms a 1D inversion result; (3) there is a redundancy of information in the airborne TEM data, so an efficient inversion should strive to reduce the number of soundings involved while maximizing the amount of information from the data that can be incorporated into the model.

Figure 2.10: Data misfit convergences and models of the three synthetic 3D inversions ((a) Synthetic I, (b) Synthetic II, (c) Synthetic III). The red dots indicate the sounding locations and the black lines outline the true model.
2.4 3D Inversion of Mt. Milligan Data

2.4.1 Inversion set-up

With the knowledge gained from the synthetic inversions, I invert the VTEM data set at Mt. Milligan in 3D to see how 3D inversion can improve the interpretation. As a proof of concept, I focus on a 1.4 × 1.4 km region surrounding the MBX stock (433700∼435100E, 6108800∼6111200N). The earliest time channels in an airborne TEM data set are usually difficult to model accurately because the exact transmitter current close to the turn-off time may vary from sounding to sounding, so I invert the data from the more stable time channels 6 ∼ 20. The diffusion distance of the first time channel used in the inversion, determined by the earliest time and the overall conductivity obtained from the 1D inversion, suggests the 3D inversion would not have resolution finer than about 50 m. Therefore, the airborne soundings are down-sampled to 50 m spacing, giving a total of 232 soundings on an 8 × 29 data grid (blue dots on grey lines in Figure 2.1).

The smallest cell size in the horizontal directions is chosen to be 50 m, so the finest information in the data can be adequately modeled. In the vertical direction, the smallest cell size is 20 m; this is needed to capture the topography and accommodate transmitter flight heights varying from 20 m to 90 m above the surface. The final mesh is 50 × 50 × 64 and has a total of 160000 cells.

The assigned uncertainty for each datum is 10% plus a noise floor of 10⁻¹³ V/A, the same as for the 1D inversion of the Mt. Milligan VTEM field data. Discretization of the transmitter waveform is important since it affects the numerical accuracy, time and memory usage when solving the forward problem. I discretize the waveform shown in Figure 2.3 into a minimum number of partitions, each of which is discretized with a uniform step length, while still retaining the desired accuracy. This results in two on-time and four off-time Maxwell matrix factorizations and a total of 50 time steps for one forward modeling with the algorithm in Oldenburg et al. (2013).

2.4.2 A multi-level approach

With my current computer, carrying out 50 time steps for 232 transmitters on a mesh with 160000 cells requires 33 minutes. Unfortunately, an inversion typically requires hundreds of equivalent forward modelings, and this makes 3D inversion extremely time-consuming. For practical purposes, it is always desirable to reduce the computing time. The reason 3D forward modeling is slow is that there are a large number of cells in the mesh and many soundings (sources) to be modeled. One way to speed up the computation is to adopt a multi-level concept: use approximations at the early stage of the inversion when the model is far from the solution, then gradually refine the modeling to resolve finer-scale features. I note that in airborne TEM inversion, a coarser mesh that moderately violates the general rules of spatial discretization may not hurt the model update at the early stage of a 3D inversion, because the data misfit is usually orders of magnitude greater than the discretization error. Likewise, including all of the soundings in an early stage of the inversion is not necessary, and only a down-sampled subset is actually needed. I therefore propose a multi-level strategy to increase the efficiency of the large inversion problem: first invert a small number of soundings on a coarse mesh, then pass the recovered model to a finer mesh and refine it by inverting additional soundings; repeat until the last inversion, which is carried out on the finest mesh and includes all of the soundings.
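In pseudocode, the multi-level loop is straightforward. The helpers below (build_mesh, downsample, project, invert) are hypothetical callables standing in for the corresponding pieces of the inversion code.

    def multilevel_invert(levels, data, m0, build_mesh, downsample, project, invert):
        """levels: [(mesh_name, sounding_spacing_m), ...] from coarse to fine."""
        m = m0
        for name, spacing in levels:
            mesh = build_mesh(name)                      # octree-nested cell sizes
            subset = downsample(data, spacing)           # match sampling to resolution
            m = invert(subset, mesh, project(mesh, m))   # warm-started inversion
        return m

    # e.g. multilevel_invert([("X", 200.0), ("C", 100.0), ("F", 50.0)], ...)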
Figure 2.11: Dimensions of the smallest cells in the extra-coarse (black), coarse (green) and fine (red) meshes.

For this particular VTEM data inversion around the MBX stock at Mt. Milligan, I design the dimensions of the smallest cells in the multi-level meshes to have an octree-like relationship, as portrayed in Figure 2.11. For the current problem I use three levels of mesh, referred to as extra-coarse (X), coarse (C) and fine (F). The nested octree design reduces the size of the problem by factors of 8 and 64 on the coarse and extra-coarse meshes and guarantees exact model conversion between meshes. All three meshes have the same padding cells, extending to 4200 m, which is calculated from the latest delay time and a representative resistive background conductivity of 0.001 S/m. Specifications of the extra-coarse, coarse and fine mesh inversions are listed in Table 2.2.

Table 2.2: Specifications of the extra-coarse, coarse and fine mesh inversions.

                                      Extra-coarse (X)   Coarse (C)   Fine (F)
    Smallest cell size (horizontal)   200 m              100 m        50 m
    Smallest cell size (vertical)     80 m               40 m         20 m
    Number of cells                   18750              48552        160000
    In-line sounding spacing          200 m              100 m        50 m
    Cross-line sounding spacing       200 m              200 m        200 m
    Number of soundings               64                 120          232

Numerical accuracy on the three meshes can be monitored by testing the 3D forward modeled data against a 1D algorithm using conductive (0.025 S/m) and resistive (0.001 S/m) half-spaces. CPU times are calibrated for one forward modeling on an Intel i7 960 Quad-Core desktop computer with 16 GB RAM. As shown in Table 2.3, all three meshes do well at late delay times over resistive ground because the same padding distance is used. However, modeling errors at early times increase as the mesh discretization becomes coarser. Poor modeling often creates artifacts in the inversion model, but fortunately those artifacts can be corrected in a subsequent finer mesh inversion. Comparison of the CPU time required by one forward modeling on the three meshes shows the significant potential for speed-up by using coarser meshes in the early stages of the inversion.

Table 2.3: Numerical performance of the extra-coarse, coarse and fine meshes used in the multi-level inversion.

                                        Extra-coarse (X)   Coarse (C)   Fine (F)
    Number of soundings                 64                 120          232
    Early time channel modeling error   35%                18%          6%
    Late time channel modeling error    3%                 3%           3%
    Total time for factorizations       17 s               75 s         734 s
    Total time for time steps           67 s               331 s        3722 s

2.4.3 3D inversion

I first implement the extra-coarse mesh inversion. The inversion starts with a 0.002 S/m half-space with topography and achieves a normalized data misfit of 2.55 (blue curve in Figure 2.12) in about 1.73 hours (X iterations in Figure 2.13). The model (X5) is converted to the coarse mesh and the 100-meter-spacing soundings are used in the coarse inversion. Because of the information contained in the newly-added soundings, the initial normalized data misfit of the coarse mesh inversion rises to 5.72 (C0 in Figure 2.12). The coarse mesh inversion takes another 5.74 hours (C iterations in Figure 2.13) to reduce the normalized data misfit to 2.32 (red curve in Figure 2.12).

Figure 2.12: Normalized data misfit as a function of iteration for the inversions using the three multi-level meshes.
Finally the inversion is switched to the fine mesh and uses all ofthe soundings. However, these 50-meter-spacing soundings do not provide signif-icantly more information than the 100-meter-spacing soundings. The initial nor-malized data misfit of the fine mesh inversion is 2.23 (F0 in Figure 2.12), whichis even less than the final misfit of the coarse mesh inversion. The fine mesh in-version, which takes 10.65 hours for one iteration (F-iterations in Figure 2.13), isterminated when there is a reasonably good data fit; 80% of the 3445 data have nor-malized data misfit less than unity and there are only a few outliers that contributelarge misfit (Figure 2.14). Note the extra-coarse mesh inversion (X) is carried outon a Intel i7 960 Quad-Core desktop computer with 16GB RAM available; thecoarse mesh inversion (C) on one node of a cluster with 2 Intel Xeon X5660 CPUsand 64GB RAM available; the fine mesh inversion (F) on two nodes of the samecluster with 4 Intel Xeon X5660 CPUs and 128GB RAM available.Depth slices at an elevation of 970 m of the three mesh inversion models (X5,C5 and F1) are shown in Figure 2.15. In spite of the substantial modeling error (Ta-ble 2.3), the extra-coarse mesh does a good job of delineating the large scale con-ductivity features over the entire area (Figure 2.15(a)). This inversion is carried outquickly on a desktop computer. Increased resolution is evident on the subsequentmeshes. However, little difference is found between the coarse (Figure 2.15(b))and fine (Figure 2.15(c)) mesh inversion models. This suggests that the 50-meter-38X0 X1 X2 X3 X4 X5 C0 C1 C2 C3 C4 C5 F0 F102468101214161820IterationCumulative CPU time (Hour)Figure 2.13: Cumulative CPU time of iterations in the extra-coarse, coarseand fine mesh inversions.0 2 4 6 8 10 12 14 16 18 20 22 24 260200400600800Normalized data misfitCountFigure 2.14: Histogram of normalized data misfit after the fine mesh inver-sion.spacing data do not provide much more information than 100-meter-spacing dataand that the 100-meter-resolution mesh (the coarse mesh) would probably be fineenough for this data set. The total time to complete the inversion at the coarsemesh scale is 7.46 hours and the total time, with a fine scale mesh update, is 18.11hours. These are both significantly smaller than solving the problem on the finescale from the beginning, which took 82.43 hours to achieve the same data misfitin a follow-up benchmark inversion.3961104006109950610950061090506108600Northing (m)433500433950434400434850435300433500433950434400434850435300433500433950434400434850435300Easting (m) Easting (m)Easting (m)(a) (b) (c)–3.64–3.24–2.84–2.44–2.04–1.63–1.23σ (log S/m)Figure 2.15: Depth slices of the inversion models at Mt. Milligan at an ele-vation of 970 m after the (a) extra-coarse, (b) coarse and (c) fine meshinversions.2.4.4 Model interpretationDepth slices of the smoothed final model are shown in Figure 2.16. The most con-ductive feature in this model is a near-surface conductor that cuts the northeasterncorner, which is interpreted as the Tertiary sediments. Our target, the porphyrysystem, is well recovered as a deeply rooted massive resistor in the center of studyarea and a relatively conductive ring around it with an open towards the south. Thisfeature can only been seen in the 3D inversion model and not in other methods ormaps assuming a 1D earth model.Figure 2.17(a) is a depth slice of the inversion model at an elevation of 1030 m(about 100 m below the surface) with the geology overlaid. 
The general features are a resistive core, a surrounding conductive halo, followed by a resistive outer region. On the east, this sequence is modified by the presence of the highly conductive sediments. This conductivity model is also supported by a previous 3D inversion of DC resistivity data (Oldenburg et al., 1997). In Figure 2.17(b) I overlay contours of the conductive anomaly from the DC resistivity inversion at the 1030 m elevation. There is a general correspondence, especially for the west wing of the conductive ring, and the MBX stock is found to be resistive by both the 3D airborne TEM inversion and the DC resistivity inversion. Note that, in addition to the resistive stock I have been focusing upon, there is a prominent dark-blue-colored resistor in the southern part of the image. This feature is required by the data, which are characterized by unusual decay constants and negative transients in that region. Interpretation of those data may require modeling of complex conductivity for a dispersive medium (Marchant et al., 2013), and will not be discussed in this thesis.

Figure 2.17: Depth slice of the final interpretation model at an elevation of 1030 m overlain by (a) geology and (b) the 420 Ωm contour of the DC resistivity model.

The application of 3D inversion to the VTEM data at Mt. Milligan has yielded a conductivity model that is geologically and geophysically more plausible than the one obtained from 1D inversion. This study shows that the correct dimensionality is crucial and that, by interpreting TEM data in 3D, misleading information like that obtained from 1D modeling can be avoided.

2.5 Critical Rethink: When is 3D Inversion Necessary?

I have shown, with a synthetic example and a field data inversion, that 3D inversion of airborne TEM data is necessary if the geologic setting is 3D, as at the Mt. Milligan porphyry deposit. The 1D inversions fail because of the lateral variation of the conductivity. However, in a previous study of Mt. Milligan (Oldenburg et al., 1997), 1D inversion of another airborne EM data set (frequency-domain DIGHEM, a system operated by CGG) successfully revealed the resistive nature of the MBX stock. This paradox leads to a critical question about the circumstances under which a 3D inversion is necessary. I attribute the difference to the fact that the VTEM system, for the time channels used here, has a much larger footprint than the DIGHEM system. In this section I quantify the concept of the footprint of an airborne EM system using the distribution of electric current density, and then discuss the conditions for undertaking 1D and 3D inversions.

2.5.1 Quantification of the EM footprint

The concept of footprint has been used to quantify the size of the spatial domain within which an airborne sounding can effectively sense the conductivity.
Different approaches have been proposed to calculate the size of the footprint (Liu & Becker, 1990; Kovacs et al., 1995; Beamish, 2003; Reid & Vrbancich, 2004; Reid et al., 2006; Yin et al., 2014) and other closely related measures (Smith & Wasylechko, 2012), but some limitations exist. I therefore use an idea similar to Liu & Becker (1990), but extend it to realistic 3D conductivity models and make it easier to compare time-domain and frequency-domain systems.

My approach is based on a quantity called the "pseudo-sensitivity" that is calculated from the distribution of the induced currents and the distance between the currents and the location of the receiver. Using the finite volume algorithms in Haber et al. (2000) and Oldenburg et al. (2013), the total strength of the current density vector in every cell for a particular frequency/delay time can be computed for any complicated 3D model and source; the pseudo-sensitivity of a cell to the datum at the receiver for that frequency/delay time is then calculated as the current density strength of that cell divided by the squared distance between the receiver and that cell. The pseudo-sensitivity honors the current density and the geometric decay of signal in the Biot-Savart law, which describes how the induced currents give rise to the magnetic field measured at the receiver.

Next, using superposition, the datum at the receiver can be treated as the contribution made by many subsurface current density vectors. By summing the pseudo-sensitivities of the cells within a progressively expanding radius R centered at the sounding, we can tell whether a system at a particular frequency/delay time probes a relatively broad or small region. For convenience of comparison, the summed pseudo-sensitivity within a particular R is further normalized by the summed pseudo-sensitivity at R = ∞, giving a relative measure p (0 < p < 1) called the "cut-off index". If p is given, the corresponding R can be found by searching in small increments. It is then possible to compare different systems' R at the same p. Because the conductivity can have arbitrary variation in 3D, it may be necessary to examine the entire p-R curve at different cut-off indices.
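A compact sketch of this footprint calculation follows. It assumes the cell-wise current-density strengths have already been computed by a forward model, and it uses the 3D distance from the sounding when expanding R, which is one reasonable reading of the description above.

    import numpy as np

    def footprint_radius(cells, j_mag, rx, sounding, p):
        """Smallest radius R, centred at the sounding, containing the fraction
        p (0 < p < 1) of the total pseudo-sensitivity.
        cells: (n, 3) cell centres; j_mag: (n,) current-density strength;
        rx: (3,) receiver location; sounding: (3,) sounding location."""
        ps = j_mag / np.sum((cells - rx) ** 2, axis=1)   # |J| / r^2, Biot-Savart decay
        r = np.linalg.norm(cells - sounding, axis=1)     # distance from sounding
        order = np.argsort(r)
        cum = np.cumsum(ps[order]) / ps.sum()            # cut-off index versus radius
        return r[order][np.searchsorted(cum, p)]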
2.5.2 Comparison between VTEM and DIGHEM

This approach is applied to the VTEM and DIGHEM surveys at Mt. Milligan. I take the 3D conductivity model from the VTEM inversion at Mt. Milligan and place an airborne sounding above the center of the MBX stock (UTM 434500E, 6109500N, elevation 1180 m). The earliest and latest times of VTEM, 234 µs and 2745 µs, and the highest and lowest frequencies of DIGHEM, 56000 Hz and 900 Hz, are considered here. The estimated footprints R versus different cut-off indices p are plotted in Figure 2.18.

Figure 2.18: The footprints R of the VTEM and DIGHEM systems at different cut-off indices p at Mt. Milligan.

Unsurprisingly, within one system, later delay times or lower frequencies have larger footprints than earlier delay times or higher frequencies (Figure 2.18). But a big gap in footprint between VTEM and DIGHEM is obvious: even the lowest frequency of DIGHEM has a much smaller footprint than the earliest time of VTEM. At Mt. Milligan, the size of our target, the MBX complex, is comparable to the footprints of most VTEM delay times and is larger than the footprints of most DIGHEM frequencies. Therefore, the ground conductivity can be considered one-dimensional at the scale of the DIGHEM data, and a 1D inversion is able to correctly recover the local conductivity. At the scale of the VTEM data, the same conductivity structure becomes complex in three dimensions, so a 3D inversion with multiple soundings is needed.

When it comes to the applicability of 1D and 3D inversion, one needs to consider the measurement times/frequencies of the system and the conductivity. If the time channels are early enough, or the frequency is high enough, that the footprint is smaller than the scale of the conductivity variation, the local conductivity information can be picked up by an individual sounding and a 1D inversion can produce an interpretable result. However, for later time channels or lower frequencies, the EM footprints can match the scale of the conductivity variation, and a 3D inversion is necessary. If the footprint continues to expand and significantly exceeds the scale of the inhomogeneity, the 3D structures can be effectively represented by a uniform value through up-scaling.

2.6 Summary

My studies of the airborne TEM data at Mt. Milligan show that under some conditions 3D inversion is more essential than previously thought. Although the circular geometry of the conductor at Mt. Milligan seems the most vulnerable case for artifacts in a 1D inversion, numerical experiments in other, less extreme 3D and even 2D environments present similar issues that result in significant misinterpretation when the footprint of the EM system becomes comparable in size with the geologic targets. When using a large ground transmitter loop, the footprint can reach as far as tens of kilometers. For this reason it is desirable to have 3D inversion as a standard procedure for most TEM data sets, especially for applications, like mineral exploration, in which 3D structures are common.

However, improved interpretation comes at a high cost. Even with advanced computers and sophisticated numerical solvers, 3D inversion of a data set as small as the focus area at the Mt. Milligan MBX complex still takes dozens of hours on large computer clusters. This is far more expensive than 1D inversion and parametric 3D inversion, which can be carried out within minutes on regular desktop computers. To speed up the Mt. Milligan airborne TEM data inversion, I used a multi-level method to avoid unnecessary fine-scale computing at the early inversion iterations. The idea of "inaccurate computing" has proven to work well, but some problems remain:

• The scalability of 3D modeling is poor. As the survey gets large, the computation grows geometrically, and eventually one needs to solve a huge problem on a massive mesh at the later iterations; this can be difficult or impractical.

• Massive parallelization is difficult. If a problem is large, the information used in computing has to be stored on different nodes across a computer cluster, so we soon reach a point where the overhead of communication dominates and adding more processors makes the inversion even slower.

• Soundings (data) are down-sampled on a regular grid based on the size of the cells in the mesh. The down-sampling rate, made to match the resolution of the mesh, was somewhat arbitrary; it was hard to know whether the rate was too high (loss of information) or too low (over-computing).

While the usefulness of 3D inversion is justified by a field example, its effectiveness in practice is heavily restricted by the efficiency of 3D modeling.
The rest of this thesis concentrates on a fundamentally novel framework of modeling that accelerates the existing algorithms at a generic level and makes 3D inversion much more affordable.

Chapter 3

Survey Decomposition

The benefits and necessity of 3D modeling demonstrated in Chapter 2 show that it is desirable to have EM data routinely inverted in 3D. However, such a paradigm shift to full 3D is not easy because of the time and significant amount of computing resources required by realistic 3D modeling. This is exemplified by the 3D inversion of a small subset of the Mt. Milligan data in Section 2.4: a survey of 1.4 × 1.4 km took dozens of hours on two nodes of a computer cluster, even with a multi-level approach for speed-up. It is thus obvious that, without a breakthrough in efficiency, 3D modeling can hardly make an impact on real-world exploration applications. Motivated by this issue of efficiency, the rest of my thesis research seeks improvement by fundamentally changing the architecture of 3D modeling with an unconventional framework called survey decomposition.

This chapter first discusses the rationale behind this novel framework by analyzing the computational complexities that have slowed the modeling. Then survey decomposition, along with related concepts like subproblems and local discretizations, is explained in detail. I also discuss the performance and scalability of survey decomposition in parallel computing. Finally, an approach to data subsampling using cross validation, particularly useful for dealing with the large number of subproblems after decomposition, is introduced with a demonstrative example.

3.1 Computational Complexities

The idea of decomposing a large EM problem comes from a critical examination of the complexities involved in 3D modeling and an attempt to eliminate over-computing. Because of the multi-scale nature of EM modeling problems, the computational complexity arises from three aspects: space complexity, time complexity and optimization complexity (for inverse modeling only). The philosophy of "adapt to the scale" is used to fight over-computing in all three of these aspects.

3.1.1 Space complexity

The space complexity is often the primary concern in 3D modeling due to the large number of discretized cells (model parameters) and possibly the large number of sources. There are several typical scenarios that can lead to highly expensive computations:

• Large surveys. A survey that spatially covers an extensive area can result in numerous cells, and thus a large system of equations, regardless of the type of mesh. In airborne EM, the surveys are usually flown over tens or hundreds of square kilometers, demanding millions of model parameters; even for ground loop TEM, in which a few sources are fixed on the surface, the size of the ground loop can still require a considerably large volume in the modeling domain.

• Numerous EM sources. This is common in airborne EM surveys, which consist of hundreds of thousands of sources (transmitters). Without modification, this may require the million-parameter problem to be solved hundreds of thousands of times for just one complete forward modeling.
• Complicated model structures. If complicated structures, like topography, dykes and faults, are to be modeled, the mesh must be finely discretized; this adds more cells in addition to those needed to cover a large survey.

Space complexity is a prohibitive issue because of the notoriously poor scalability of 3D modeling: the increase in cost (time and memory) is not proportional to the increase in the number of model parameters. I demonstrate the demands on memory and time with a sequence of single-sounding airborne TEM forward modelings on meshes with variable numbers of cells, using the algorithm described in Appendix A. The forward modeling experiments are measured by the memory required to store one factorized matrix (Figure 3.1a), the time required to carry out one Maxwell matrix factorization (Figure 3.1b), and the time required to solve for the fields at one time step (Figure 3.1c). This poor scalability makes the forward solution computationally expensive when solving realistically-sized problems.

Figure 3.1: Computational costs of a single airborne TEM sounding as poorly scaled functions of the number of mesh cells: (a) memory usage, (b) time for one Maxwell matrix factorization, and (c) time for one time step.

The first obvious solution is to reduce the number of model parameters as much as possible. Parametric 3D modeling can be seen as an extreme example, using strong and explicit information for model simplification. Some sophisticated meshes, like the tetrahedral mesh (Schwarzbach & Haber, 2013), are used for local refinement, so that the additional cells due to topography, or other complications of the model, can be minimized. The number of model parameters can also be reduced by representing the model in a transformed domain with a lower-dimensional model space (Oldenburg et al., 1993; Oldenburg & Li, 1994a,b).

If many sources are present, a direct solver based on Cholesky or LU decomposition is usually used to save time when dealing with multiple right-hand sides in equation 1.5. Direct solvers are generally expensive in the matrix factorization, but once the factorization is finished, many different sources can be solved quickly. This technique has gained great popularity thanks to recent developments in matrix factorization algorithms (Amestoy et al., 2006; Oldenburg et al., 2013; Grayver et al., 2013).

Another approach which has been very actively pursued is to carry out a domain decomposition and solve the problem in parallel. Typically, the entire modeling domain is broken down into subdomains that can be solved in parallel. The solution on one subdomain usually depends on information from other subdomains, so there are frequent intercommunications. If the original problem is decomposed into very fine grains, for example at the cell level (Newman & Alumbaugh, 1997), massive parallelization can be implemented, but the benefit of adding more processors may be quickly marginalized by the overhead of intercommunication.
With fewer subdomains, a coarse-grained domain decomposition does not have excessive communication overhead, but it is harder to utilize a large number of processors and to maintain a good balance of workload among the parallel processors.

All of the previous improvements have made 3D modeling more tractable, but when challenged by large data sets from large surveys, the size of the mesh and/or the number of sources is simply beyond the capability of the existing methodologies. This can be understood in terms of the scale of modeling: a mesh must have cells small enough close to every source/receiver, and the domain must be large enough to encompass the entire data area and handle the boundary conditions properly. This contrast of scales usually generates meshes with a very large number of cells.

Recognizing the difficulty of using a global mesh for modeling, survey decomposition seeks a logical decomposition based on the sources and receivers; because these are spatially confined, the meshes for the subproblems are localized and can have far fewer cells than the global mesh. This leads to a reduced contrast of scales in a local mesh. Unlike the straightforward subdivision of space in a standard domain decomposition approach, a subproblem involving a physical source and a physical receiver is a self-contained EM problem and can be modeled independently, with minimal communication with other subproblems. Collectively, the space complexity of a large mesh is conquered by modeling multiple small EM problems and assembling them at the end, rather than working on the original problem as a whole.

3.1.2 Time complexity

The time derivatives in equations 1.1c and 1.1d require modeling the time dynamics of the EM fields, which is usually implemented either by stepping in time or by solving Maxwell's equations at selected frequencies and then transforming to the time domain. Time complexity arises because the phenomenon of EM diffusion spans a range of times from 10⁻⁶ to 1 s, or a range of equivalent frequencies, for most earth models in exploration geophysics. If the equations are solved in the frequency domain, the complexity remains, since the EM problem must be solved at multiple frequencies.

If Maxwell's equations are solved by a time stepping method, the step length must be carefully chosen so that the steps are neither too large, causing excessive numerical error, nor too small, leading to too many steps. One widely-adopted solution to this multi-scale problem in time is to adapt the step length to the scale of the delay times: at the beginning of the stepping, small steps are used to model the rapid change of the fields over time; as the fields diffuse, longer steps are employed for efficient modeling without significant loss of accuracy. The choice of step length in fact also depends on the type of solver. Iterative methods often allow logarithmic step lengths (Haber et al., 2007), but the solution time is too long. Direct methods are efficient if the time step is constant, but a single step length for the whole time range yields too many steps; therefore a trade-off is usually made so that the time discretization uses a few different step lengths from small to large, each of which is used constantly over a certain range of times at different scales (Oldenburg et al., 2013; Um et al., 2010).
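One possible construction of such a banded time discretization is sketched below: logarithmically spaced bands, each stepped with its own constant Δt, so that a direct solver needs only one matrix factorization per band. The band count and steps per band are illustrative choices, not the values used by any particular code.

    import numpy as np

    def banded_time_steps(t_end, bands=4, n_per_band=12):
        """Step lengths covering [0, t_end]: one decade-wide band per
        factorization, with a constant dt within each band."""
        edges = np.logspace(np.log10(t_end) - bands, np.log10(t_end), bands + 1)
        steps, t0 = [], 0.0
        for t1 in edges[1:]:
            steps += [(t1 - t0) / n_per_band] * n_per_band
            t0 = t1
        return np.array(steps)   # bands * n_per_band steps, summing to t_end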
Although in the time-stepping schemes mentioned above the step length is already adjusted to the delay times, there can still be a significant contrast of scales within a global time discretization designed for all of the times: the small step lengths at the beginning are only useful for the early times and are unnecessarily fine for the late times. This over-computing in time is the result of the scale contrast between the early and late time channels, similar to the modeling on a global mesh discussed in the previous subsection. This complexity can be minimized by designing the time discretization for each delay time and modeling each time individually after the decomposition in time.

A similarly minimal scale contrast can also be achieved by frequency-domain modeling at discrete frequencies, since each frequency represents a certain scale of modeling and many frequencies can be modeled separately. The question then becomes how to choose the frequencies. It has been shown that an optimal set of frequencies can be found by the reduced Krylov subspace (RKS) method (Börner et al., 2008; Zaslavsky et al., 2011, 2013). However, if the data are in the time domain, the time complexity taxes the frequency-domain approach in other ways: (1) a complex-valued system of equations has to be solved for every frequency, and (2) a wide frequency spectrum may be required for an accurate frequency-time transform. Because of (1), a single-frequency modeling in the frequency domain is likely to be more expensive than a single delay time modeling in the time domain.

The analyses of the space and time complexities have revealed that the overriding idea of having a one-size-fits-all global discretization in either space or time may be problematic. The modeling can become cumbersome due to the large number of cells and/or time steps required by the different scales in the global discretizations. Such scale contrasts, and the associated over-computing, can be greatly reduced if the data are modeled separately with customized meshes and time discretizations via survey decomposition.

3.1.3 Optimization complexity

Another complexity is the amount of computation required to solve the inverse problem. This exists in all types of inversion and is related to the time needed to complete a forward modeling because: (1) an inversion needs the forward responses and the sensitivities to make a model update, and computing the sensitivity is equivalent to forward modeling; (2) the forward responses and sensitivities must be re-computed at every iteration because of the nonlinearity of the problem.

The sensitivity, or Jacobian, is an essential part of the minimization of the objective functional used in an inversion. Previous studies have concentrated on the development of optimization algorithms using full or approximate Hessians, or the gradient, with information from the sensitivity. For example, Newton, Gauss-Newton, quasi-Newton and nonlinear conjugate gradient methods have all been applied to geophysical inversions.
Generally, an optimization method that spends more time on computing the quadratic information may converge in fewer iterations, while one that concentrates on gradients finishes an iteration quickly but requires a large number of iterations to reach a stationary point. So, no algorithm is absolutely superior to the others.

This thesis uses the Gauss-Newton method as the primary optimization method to demonstrate the framework of survey decomposition. In particular, the Gauss-Newton method solves a normal equation for a model update δm_{k+1} at the (k+1)th iteration. The normal equation could look like

\[
\left[ J(m_k)^\top J(m_k) + \beta W^\top W \right] \delta m_{k+1} = -J(m_k)^\top \left[ F(m_k) - d \right] - \beta W^\top W \left( m_k - m_{ref} \right), \qquad (3.1)
\]

where F(m) represents the forward response of model m, J(m) is the Jacobian matrix, W is the regularization matrix, and m_{ref} is the reference model. The matrix J is large and dense, so equation 3.1 is generally solved by iterative methods, like conjugate gradient (CG), that only need to compute J and J^T times a vector v. This sensitivity-vector multiplication (Jv or J^T v) is a common operation found in other optimization algorithms. The optimization complexity also depends upon the two options for making the sensitivity available.

The first option is to implicitly represent J or J^T in terms of the multiplication of some matrices (equations B.2 and B.3). When computing Jv or J^T v, the vector v is sequentially multiplied by those matrices, which takes the same amount of computation as one complete forward modeling. This approach implicitly computes the sensitivity on the fly upon request, so the amount of computing depends more on the number of CG iterations than on the number of data. Implicit sensitivity also needs a substantial amount of memory to store the matrices that build J or J^T.

The second option is to explicitly compute J and store it in memory for the multiplication. If J is computed row by row (more efficient when the number of data is smaller than the number of model parameters), the forward modeling must take place N times (N being the number of data) to form the full J. Once J is computed, Jv and J^T v can be quickly carried out for many CG iterations. When computing J, the explicit method requires the same amount of memory as the implicit method does, but afterwards all of the matrices in equations B.2 and B.3 can be deleted, leaving only the relatively small J in memory.

The reduction of computational complexity in the sensitivity should use different strategies for the implicit and explicit sensitivities. The implicit method is efficient if there is a small number of CG iterations, but this may potentially harm the search for optimality. The efficiency of the explicit method is less sensitive to the number of CG iterations, but the number of data should be as small as possible. Here is why survey decomposition can make a difference: (1) if the data are modeled separately and concurrently, the number of data will not hinder the efficiency of the explicit method; (2) if the data are modeled on a localized mesh, the explicit method can store J locally with less memory. For these reasons, the explicit sensitivity method is preferred under the framework of survey decomposition.
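To make the role of the sensitivity-vector products concrete, the following minimal sketch solves the normal equation 3.1 matrix-free with SciPy's conjugate gradient solver. It is an illustration, not the thesis code: J and W are assumed to be small dense placeholders, and the function and argument names are my own.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def gauss_newton_update(J, W, beta, m, m_ref, residual, n_cg=20):
    """One Gauss-Newton step (equation 3.1) using only J v and J^T v.

    With survey decomposition, the two products would be evaluated
    subproblem-by-subproblem in parallel; here J is simply a dense array.
    """
    n = m.size
    hess = LinearOperator(
        (n, n),
        matvec=lambda v: J.T @ (J @ v) + beta * (W.T @ (W @ v)),
    )
    rhs = -J.T @ residual - beta * (W.T @ (W @ (m - m_ref)))
    dm, _ = cg(hess, rhs, maxiter=n_cg)
    return dm
```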
Now that the optimization complexity is associated with the number of data, it is desirable to reduce the number of data as much as possible. There are two types of over-sampling that can lead to more data than are actually needed in the inversion. Firstly, the number of data in a survey is usually determined by the field acquisition, which tends to be redundant in both space and time. Secondly, at the early iterations of an inversion, only a small portion of the information in the data is needed, and the constructed model is obtained by minimizing an objective functional that is dominated by the regularization term, so computing every single datum to high accuracy is unnecessary.

The survey decomposition itself, although not able to intrinsically reduce the number of data and the number of iterations in an inversion, does offer more flexibility to allow the development of better strategies. Once the subproblems are separated, we can easily control the selection of data for computation. If the relationship between the selection of data and the desired amount of information can be established, more savings can be achieved by modeling only the strictly necessary data (subproblems). The data subsampling problem, tackled by an adaptive and random sampling scheme, is discussed in Section 3.4.

3.2 Identification of Subproblems

The insight extracted from Section 3.1 is that modeling data of highly contrasting scales (in space and/or time) together on global discretizations is very inefficient. Savings can be obtained by separating the data into subproblems and allowing each subproblem to have its own localized discretizations that match the scale of the subproblem modeling.

Bearing this rationale in mind, this section identifies the subproblems in controlled source EM (CSEM) surveys with the concept of an "atomic" problem as a building block.

3.2.1 Atomic problem

While a CSEM survey can be decomposed in many different ways, there are two keys to decomposing a survey in an efficient manner:

• Fine-grained decomposition. Every subproblem must be small enough to allow a customized high-efficiency discretization, and many small subproblems enable massive parallelization and platform flexibility.

• Self-contained problem. Every subproblem itself is a complete EM modeling entity with actual sources and receivers working at some time or frequency. The self-containedness that enables the subproblems to be solved independently is important in maintaining good scalability for a large-scale modeling problem.

Because every subproblem only models a subset of the entire data set, it is easier to consider the survey decomposition as a data grouping problem. We can imagine the data organized in a spreadsheet, in which each row corresponds to one measured datum and the columns are the attributes of the data, such as source index (S), receiver index (R), delay time or frequency (T/F), field or component (F/C) and so on. The data are then grouped based on conditions that can be applied to one or more attributes; the attributes that are not used in the conditions are ignored in the grouping process. For example, if a grouping process requires the data measured before 0.01 s to be put into one subproblem and the data after 0.01 s into another, the grouping condition only uses the attribute of delay time, and other attributes are not considered; so within each of the two subproblems, data from different transmitters and receivers can be found. A small sketch of this grouping view follows.
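The sketch below, with made-up rows and attribute names, illustrates the grouping view: changing the set of key attributes moves a decomposition along the spectrum from inclusive to exclusive.

```python
from collections import defaultdict

# Hypothetical data "spreadsheet": one row per datum, with attributes
# source (S), receiver (R), delay time (T) and field/component (C).
data = [
    {"S": 1, "R": 1, "T": 1e-4, "C": "dBz/dt", "value": 3.1e-9},
    {"S": 1, "R": 1, "T": 1e-3, "C": "dBz/dt", "value": 7.4e-11},
    {"S": 2, "R": 2, "T": 1e-4, "C": "dBz/dt", "value": 2.8e-9},
    {"S": 2, "R": 2, "T": 1e-3, "C": "dBz/dt", "value": 6.9e-11},
]

def group(rows, keys):
    """Group rows by the attributes in `keys`; unused attributes may mix."""
    subproblems = defaultdict(list)
    for row in rows:
        subproblems[tuple(row[k] for k in keys)].append(row)
    return dict(subproblems)

# Common S-R grouping (e.g. airborne soundings): all times stay together.
print(sorted(group(data, ("S", "R"))))        # [(1, 1), (2, 2)]
# More exclusive S-R-T grouping: one subproblem per sounding per time.
print(sorted(group(data, ("S", "R", "T"))))   # four subproblems
```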
The fewer attributes considered in the grouping conditions, the more inclusive a subproblem can be. The conventional methods, modeling the entire data set in one problem, have the largest subproblem possible, using no grouping condition at all. If plotted on the spectrum of subproblem grouping, the conventional global discretization modeling falls at the most inclusive end (Figure 3.2). An inclusive subproblem is more likely to see a high contrast of modeling scales, as the data are grouped together more indiscriminately with fewer conditions. Moving towards the more exclusive end, we find other previously attempted strategies that effectively achieved better efficiency by data grouping. For example, the straightforward domain decomposition or tiling approach can be seen as a grouping using the condition that requires the sources or receivers (S/R) to be located in certain subdomains. The multi-level grid approach, modeling different times/frequencies on different meshes, is logically related to a grouping using a condition on the attribute of time/frequency (T/F).

Unsurprisingly, the lowest scale contrast and the highest efficiency are found all the way at the exclusive end of the spectrum, where the grouping conditions involve all of the attributes. In this case, every subproblem only models a single datum with its own distinct source, receiver, time/frequency and field/component (S-R-T/F-F/C). It appears that we are now at the physical limit of decomposing a survey and no smaller subproblems can be found. However, the subproblem can be made even more exclusive by demanding that a subproblem model the data from a particular part of the source or receiver if the loop or wire can be decomposed. By making that "particular part" as small as possible, we arrive at the ultimate limit of survey decomposition: an atomic problem that models a point source and a point receiver at a particular time/frequency for a datum of a particular field/component (Si-Ri-T/F-F/C). Now the conditions of fine-grainedness and self-containedness requested at the beginning of this subsection are both satisfied.

Figure 3.2: The spectrum of subproblem data grouping, from the most inclusive end (global discretization, no condition) through domain decomposition or tiling (conditions: S/R) and multi-level grids (conditions: T/F) to the most exclusive end, the atomic problem at the physical limit of decomposition (conditions: Si-Ri-T/F-F/C).

The point source in an atomic problem can be either an electric dipole representing a short straight wire, colloquially called a "linelet", or a magnetic dipole representing a small loop, colloquially called a "looplet". Both electric and magnetic dipoles can carry a certain amount of current as a function of time. The point receiver measures the real or complex datum of the electric or magnetic field, or another quantity of interest, at a certain orientation. The atomic problems with different Si-Ri-T/F-F/C parameters are able to build any type of CSEM survey through simple combination or superposition.

3.2.2 Practical recipes

In theory, an EM modeling that directly uses the atomic building block as the subproblem would have the best efficiency from the single-datum simulation's point of view. However, solving the entire problem with such a fine-grained decomposition may be impractical because of the large number of subproblems and the associated complications of management.
So practically, we need to find a trade-off point on the data grouping spectrum (Figure 3.2) that is as close to the atomic problem as possible while yielding a manageable number of subproblems to be solved. I therefore provide some recipes as guidelines for identifying the subproblems in any CSEM survey.

The first recipe is to combine atomic problems, where possible, into one subproblem that is computationally the same as one atomic problem. For example, if the EM modeler computes the three components of the magnetic field at the receiver at the same delay time, then the three atomic problems for Hx, Hy and Hz can be grouped into one subproblem.

The second recipe is to eliminate the atomic problems that show negligible differences. In many surveys, the sources and receivers are small enough to be approximated by point sources and receivers, like the transmitter used in airborne surveys, so subdivision of such a source/receiver is not necessary.

The third recipe is to consider encapsulating the atomic problems with small or moderate contrast of modeling scales. For example, if a survey has frequencies/times that span a relatively small range, the atomic problems with different frequencies/times can be merged. Unlike the first two recipes, this simplification of the decomposition always comes with extra computational costs. Users need to be aware of this trade-off, as it shifts towards the more inclusive subproblem data grouping in Figure 3.2.

The first recipe is often applied automatically to combine the subproblems with the same source, receiver and time/frequency but different fields/components in most modern EM simulators. The applications of the second and third recipes are largely dependent on the geometries of the survey – the "distances" between the subproblems in space or time. In realistic field surveys, this boils down to the geometry of the transmitter and the span of the time/frequency range. In the next two subsections, I examine the decompositions of some CSEM surveys categorized by the geometry of the source.

3.2.3 Decomposition of surveys with localized sources

Many CSEM surveys use the simplest electric or magnetic dipole source, which is effectively a point source requiring no subdivision according to the second recipe in Subsection 3.2.2.

The first type of geometry I consider is a dipole source and a receiver that are close to each other and move together; the source-receiver pair makes measurements at every new location, so the data set consists of many individual soundings at different locations over the survey area (see a cross-section diagram in Figure 3.3). This geometry is commonly used to quickly sweep a large area. Examples include airborne EM using magnetic dipoles and DC resistivity profiling using electric dipoles (dipole-dipole array). The global mesh designed for the original problem must be large and fine enough to encompass the entire survey area; upon decomposition, each subproblem only has one dipole source and receiver, and the local mesh designed for that subproblem can be much smaller (Figure 3.3). A DC problem has no time complexity, so there is no decomposition in time. An airborne survey usually operates within a relatively narrow band of times or frequencies; as a result, the decomposition in time/frequency is optional.

Figure 3.3: Decomposition of a dipole source survey with small source-receiver offset.
If all time channels/frequencies are modeled together, the subproblems can be obtained by a common source-receiver (S-R) grouping.

The second type of geometry is similar to the first type in mobility, but the separation between the source and receiver is large. This geometry can be found in DC resistivity and marine CSEM using towed streamers. In this geometry, it is common to have multiple receivers for one dipole source to better illuminate the subsurface (see a snapshot of this geometry in Figure 3.4). Modeling such problems using the conventional approach requires the global mesh to be finely discretized everywhere in a domain large enough to contain the spread of the system and also the entire survey area. If decomposed, every subproblem only has one dipole source and one dipole receiver at a certain offset; then the local mesh for a subproblem can be designed in the manner shown in Figure 3.4, which is still much smaller than the mesh that a global discretization would require. Marine CSEM can measure data at times/frequencies with a few orders of magnitude contrast. Therefore the decomposition in time/frequency is recommended, leading to a common source-receiver-time/frequency (S-R-T/F) grouping.

Figure 3.4: Decomposition of a dipole source survey with large source-receiver offset.

3.2.4 Decomposition of surveys with distributed sources

Some CSEM surveys use a source as large as the entire survey area. This kind of distributed source can be considered as a superposition of many dipole sources. The modeling results are then obtained by summing up the subproblems' results.

The first example is the ground loop survey, which uses a large, closed loop source fixed on the surface and measures the fields inside and outside of the loop (Figure 3.5). The global mesh has to contain the entire loop as well as all receiver locations, with fine cells everywhere. If the large loop is broken down into many small looplets, each of which is essentially a magnetic dipole, then a looplet subproblem can have a much smaller local mesh (see one of the looplet subproblems in Figure 3.5). Given the capability of measuring wide-band data in the ground survey, I choose to do time/frequency decomposition. A subproblem in a ground loop survey is therefore the result of a common looplet-receiver-time/frequency (Si-R-T/F) grouping.

Figure 3.5: Decomposition of a survey with a large loop source.

The distributed source can also be an open wire galvanically grounded by electrodes, as used in LOTEM (Strack et al., 1990), GREATEM (Mogi et al., 1998), MMR (Edwards, 1974) and MIP (Seigel, 1974). The decomposition and mesh design are similar to those for the closed loop, except that many electric dipole linelets are used instead of looplets (Figure 3.6). This type of survey uses a common linelet-receiver-time/frequency (Si-R-T/F) grouping.

Figure 3.6: Decomposition of a survey with a long wire source.
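As a concrete, simplified picture of the looplet idea, the sketch below tiles the area enclosed by a square transmitter loop with small magnetic dipoles; since the currents of adjacent looplets cancel along interior edges, their superposition reproduces the original loop. The loop size, current and grid density are illustrative assumptions, not values from the thesis.

```python
import numpy as np

# Tile a 1 km x 1 km square loop carrying current I with n x n looplets.
# Each looplet is a vertical magnetic dipole of moment I * dA; summing the
# fields of all looplets approximates the field of the full loop.
L, n, I = 1000.0, 10, 1.0
edges = np.linspace(-L / 2, L / 2, n + 1)
mids = 0.5 * (edges[:-1] + edges[1:])
dA = (L / n) ** 2

looplets = [
    {"x": x, "y": y, "z": 0.0, "moment": I * dA}   # one dipole subproblem
    for x in mids for y in mids
]
print(len(looplets), "looplet subproblems per time channel")
```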
3.3 Localized Discretization

The fundamental motivation for decomposing a survey is to allow a more efficient discretization customized to the individual subproblems. In contrast to the standard discretization design, which seeks global capability for all sources, receivers and delay times in one attempt, my survey decomposition framework promotes "localized discretizations" to avoid the unnecessary contrast of modeling scale in space and time. The global-local design of the framework involves interactions between the two levels, and it also takes into account how the subproblems can be solved using massive parallelization.

3.3.1 Local mesh

A mesh needed for a subproblem is called a "local mesh", and it is designed to accommodate only the source, receiver and time/frequency (S-R-T/F) it concerns. It can be highly efficient compared to a global mesh because: (1) the cell sizes can be adapted to the scale of the time/frequency; (2) the domain size of a local mesh may be much smaller than that of the original problem; (3) the EM diffusion allows geometric coarsening of the mesh cells away from the source and receiver locations.

A generic example of a local mesh in a structured rectilinear grid (regular mesh) is shown in Figure 3.7. The regions near the source (S) and receiver (R) are locally refined because EM data have higher resolution in the domain near the source and receiver. The separation of the local refinements can be adjusted according to the source-receiver offset in specific subproblems. Other types of mesh structures, unstructured or semi-structured, can have a similar appearance. If there is uneven topography, the topography close to the source/receiver is finely discretized, whereas mountains and valleys far away are approximated by bulky cells.

Figure 3.7: A local mesh designed for a subproblem modeling the source (S) and receiver (R) marked by the red dots.

A rectilinear local mesh as in Figure 3.7 can be characterized by a few design parameters (a small sketch of turning them into cell widths follows this list):

• Smallest cell size. This parameter specifies the cell size at the center of the local refinements and is usually determined by the ambient conductivity and the earliest time (or highest frequency) it models. In practice, I use an iterative approach to adaptively find the appropriate smallest cell size: the program starts with a cheap initial guess, and then gradually reduces the smallest cell size until the difference made by the discretization is negligible.

• Domain size or boundary condition distance. This parameter specifies the distance from the refinement center to the outermost cells in the modeling domain. In order to iteratively determine the size of the modeling domain, I begin with a small distance, carry out the forward modeling, and then keep expanding the boundary until the difference between two successive forward modelings falls beneath a prescribed tolerance. This adaptive search procedure is carried out for every subproblem and for every updated model during an inversion, so subproblems at different locations/times/frequencies and at different iterations of an inversion have their own customized domain sizes.

• Cell expansion rate. This parameter specifies how fast the cells are coarsened when moving away from the source/receiver location. It is defined as the ratio of the size of the larger cell to that of the smaller cell next to it along a certain direction. The choice of this parameter depends on the variation of conductivity. Empirically, I have found that a value between 1.2 and 1.5 produces acceptable results with a good balance between accuracy and efficiency for most earth models.
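The following sketch shows how the three parameters might translate into 1D cell widths along one axis of a local mesh; the specific numbers (25 m cells, expansion rate 1.3, 3000 m padding) are illustrative assumptions, not prescriptions from the thesis.

```python
def padded_cell_widths(h_min, n_core, expansion, pad_distance):
    """Core cells of width h_min around the refinement center, then cells
    growing by `expansion` until the padding spans `pad_distance`."""
    widths = [h_min] * n_core
    span, w = 0.0, h_min
    while span < pad_distance:
        w *= expansion
        widths.append(w)
        span += w
    return widths

# e.g. ten 25 m core cells, expansion rate 1.3, 3000 m to the boundary
cells = padded_cell_widths(25.0, 10, 1.3, 3000.0)
print(len(cells), "cells spanning", sum(cells), "m on one side")
```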
The local meshes designed for subproblems with different S-R-T/F can look very different. Here I provide two illustrative examples of local meshes and compare them with the global mesh. The first case is for subproblems having a zero-offset source-receiver pair (Figure 3.8), which is common in airborne surveys. The source and receiver, marked by the red dots, are located at the center of each local mesh. Because of the scale variations controlled by t/σ (delay time over conductivity) or ρ/f (resistivity over frequency), it is possible to have local meshes of different sizes within one survey, as exemplified by the green and blue local meshes in Figure 3.8. The model parameters outside of the green mesh are deemed irrelevant to the data modeled by the green-colored subproblem; in other words, the domain size is determined by the domain of sensitivity. For a survey as extensive as the global mesh (grey grid), there may be many local meshes whose local refinements collectively cover the entire global mesh. The difference in space complexity between the local meshes and the global mesh, in terms of the number of cells, is evident and is even more pronounced in 3D.

Figure 3.8: Examples of local meshes for a zero-offset source-receiver pair overlaying the global mesh in plan view.

The second case is when the source and receiver in a subproblem are relatively far apart, which happens in a variety of EM surveys, like ground loop EM and marine CSEM. This requires two local refinements in a local mesh. In the two examples shown in Figure 3.9, the blue and green local meshes are oriented in accordance with the source (S) and receiver (R) locations of each subproblem. A source and receiver with a longer offset usually demand a larger local mesh, but the reduction of space complexity is still significant compared to the global mesh.

Figure 3.9: Examples of local meshes for a long-offset source-receiver pair overlaying the global mesh in plan view.

Another important aspect of using local meshes is representing the global model on a local mesh. A technique called "up-scaling" or "up-gridding" or "optimal gridding" may be involved when converting the parameters of many small cells in the global mesh to one single parameter of a large cell in the local mesh (Li et al., 2001; Fincham et al., 2004). Since it is reasonable to assume a 3D isotropic conductivity model in mining geophysics, I adopt the averaging approach in Commer & Newman (2008) and make modifications and extensions so that the mesh conversion is a linear operation based on the intersection of global-local cells and the local coordinate system can rotate with respect to the global system (Figure 3.9). The specific details of this operation can be found in Subsection 3.3.3.

3.3.2 Local time discretization

The time complexity can be reduced by using local time discretizations. Unlike a global time discretization that models all of the delay times, a local time discretization only works for one particular time in a subproblem (S-R-T/F).

An illustrative example is presented in Figure 3.10, in which a global problem modeling two delay times, t1 and t2, is decomposed into two subproblems with their own local time discretizations 1 and 2. Suppose the modeling at time t requires a step length of one eighth of t. Using the step-adaptive idea in Oldenburg et al. (2013), the global time discretization has two different step lengths, δt1 and δt2, to offer adequate modeling accuracy for t1 and t2. It takes the global problem 15 steps to finish the modeling.

Figure 3.10: Global and local time discretizations.
Upon decomposition, t1 and t2 are modeled independently using the local time discretizations 1 and 2, so each needs only 8 steps with step lengths δt1 and δt2 respectively. If the subproblems are computed in parallel, the required time is that of 8 steps, which is about half of what would be needed in the global problem. The saving of time is even greater if more delay times over a larger range are modeled. It is important to note that this reduction of time complexity after decomposition does not compromise the modeling accuracy because, for example, t2 is still modeled with the step length δt2; using the excessively small step length δt1 would only have contributed to the over-computing for t2.

The saving from using local time discretizations can be even more significant when explicitly computing the sensitivity matrix. If the sensitivities for t1 and t2 are computed using the method in Appendix B, the total number of time steps would be 30 (15 steps × 2 data) with the global discretization in Figure 3.10. Separate modeling with the two local discretizations results in 8 steps for each subproblem and 16 steps in total. The example in Figure 3.10 shows two delay times in moderate contrast; for a realistic survey with time scale contrasts of up to three or four orders of magnitude, the overall efficiency can be improved by an order of magnitude or more. A short step-counting sketch follows the list below.

There are additional benefits of using local time discretizations in comparison with a global one:

• The size of the mesh can be adapted to the scale of the time channel, leading to more efficient local meshes.

• A change of step length requires the Maxwell matrix to be re-factorized and stored if a direct solver is used. The constant time step used in a local time discretization avoids this, making the algorithm more memory-flexible.

• The design of a global time discretization has to trade off the number of different step lengths (number of factorizations) against the total number of time steps. This can result in a step length that reduces the accuracy of the simulation immediately after the change of step length. A local discretization is likely to have better modeling accuracy by choosing a step length tuned to a specific delay time. When using the implicit time stepping method, there are usually 10∼20 steps from t = 0 to the desired delay time.
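The step counting above is easy to reproduce. The sketch below assumes the t/8 step-length rule from Figure 3.10 and that a global discretization switches to the coarsest step length accurate for each successive delay time; with t2 = 8·t1, it returns the 15 global steps quoted above, versus 8 steps per local discretization.

```python
def global_step_count(times, n=8):
    """Steps needed by one global discretization that marches through all
    delay times, switching to step length t_k / n at each target time t_k."""
    steps, t = 0, 0.0
    for tk in sorted(times):
        dt = tk / n
        steps += round((tk - t) / dt)   # constant steps until t_k is reached
        t = tk
    return steps

print(global_step_count([1e-4, 8e-4]))           # 15, as in Figure 3.10
print(global_step_count([1e-4, 8e-4, 6.4e-3]))   # grows with the time range
# Each local discretization needs only n = 8 steps, run in parallel.
```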
3.3.3 Global-local interactions

Under the framework of survey decomposition, the information about the data and model must be managed at, and exchanged between, the global and local levels. The basic flows of information are: (1) the global process broadcasts the model/data and survey information to many parallel local processes; (2) after the actual computations are carried out at the local level, the local processes return the results, usually a few scalars or a vector, to the global process; (3) the global process summarizes the results and makes decisions about the model update. These steps constitute one iteration of using survey decomposition for numerical modeling. A forward-only modeling needs one such iteration, while an inversion requires many iterations.

The conductivity models, as well as the model updates produced during an inversion, are all represented on a pre-defined global mesh. The survey decomposition approach transfers the global model to the local meshes on which the subproblems are solved. A key step in this procedure is the ability to convert models from one mesh to the other, so that the forward responses and sensitivities for the global mesh are correctly evaluated by solving local problems. No such operation can be exact and perfect. While I acknowledge that mesh conversion is a complicated topic deserving sophisticated treatment, I use a simple formula based on the linear operation of intersecting volumes. The essential procedure is to find the mutual intersections between cells in the global mesh and the local mesh. This can be represented by a sparse matrix R, in which r_{ij} is the intersecting volume of the ith cell in the local mesh and the jth cell in the global mesh. Every subproblem has its own R matrix reflecting the volumetric relation between the local mesh and the global mesh. The material averaging operation for the ith local mesh is expressed as

\[
m_{l_i} = V_{l_i}^{-1} R_i \, m_g, \qquad (3.2)
\]

where m_l and m_g are the local and global models respectively, and V_l is a diagonal matrix of local cell volumes whose inverse normalizes each row of R by the volume of the corresponding cell in the local mesh. The matrix R can be computed using either rigorous computational geometry or fast approximate methods, for example the point-cloud algorithm documented in Appendix C.
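A 1D analogue of equation 3.2 is easy to write down: the "volumes" become interval overlaps, and the entries of R are the lengths of the intersections. The meshes and model below are made-up placeholders.

```python
import numpy as np
from scipy.sparse import csr_matrix

# Global mesh: ten 10 m cells; local mesh: three coarse cells (1D analogue).
g_edges = np.linspace(0.0, 100.0, 11)
l_edges = np.array([0.0, 25.0, 75.0, 100.0])

rows, cols, vals = [], [], []
for i in range(len(l_edges) - 1):
    for j in range(len(g_edges) - 1):
        overlap = min(l_edges[i + 1], g_edges[j + 1]) - max(l_edges[i], g_edges[j])
        if overlap > 0.0:                       # r_ij: intersecting "volume"
            rows.append(i); cols.append(j); vals.append(overlap)
R = csr_matrix((vals, (rows, cols)),
               shape=(len(l_edges) - 1, len(g_edges) - 1))

m_g = np.arange(10, dtype=float)                # a global model
m_l = (1.0 / np.diff(l_edges)) * (R @ m_g)      # equation 3.2: V_l^{-1} R m_g
print(m_l)                                      # volume-weighted averages
```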
After the computations on a local mesh, the modeled data, for example the magnetic fields in the x, y, z directions, are transformed back to the global coordinate system through

\[
\begin{bmatrix} d_{xg_i} \\ d_{yg_i} \\ d_{zg_i} \end{bmatrix}
= M_i^{-1}
\begin{bmatrix} d_{xl_i} \\ d_{yl_i} \\ d_{zl_i} \end{bmatrix}, \qquad (3.3)
\]

where the subscripts g and l indicate the "global" and "local" coordinate systems respectively, the subscript i denotes the ith subproblem, and M_i is the 3D rotation matrix mapping a vector from the global to the local coordinate system of the ith subproblem.

Equation 3.3 brings the simulated data from local meshes with different rotations and translations into a universal system consistent with the global mesh. There may be one more operation of linear combination, denoted by another sparse matrix S, before the transformed data are useful. The matrix S carries the information about how the modeling results from the subproblems are summed to render the desired data. In an airborne survey, S is the identity; the structure of S for a ground loop survey is discussed in Chapter 5. The final data vector is expressed as

\[
d_g = S \begin{bmatrix} \vdots \\ M_i^{-1} d_{l_i} \\ \vdots \end{bmatrix}, \qquad (3.4)
\]

where

\[
d_{l_i} = \begin{bmatrix} d_{xl_i} \\ d_{yl_i} \\ d_{zl_i} \end{bmatrix}. \qquad (3.5)
\]

The simulated data from the ith subproblem in equation 3.5 can be any kind of EM field. If more than one delay time is modeled in the ith subproblem, d_{l_i} can have multiple columns corresponding to the multiple delay times.

The computation of the sensitivity, which takes the same amount of computing as a forward modeling, is also carried out on the local meshes in parallel subproblems, where the coarsening of cells matches the decay of the sensitivity. Entries of the sensitivity matrix have the dimension of "data per unit model parameter". Column-wise, they can be considered as a special kind of data: every column of the sensitivity matrix is a vector pertaining to one model parameter. Row-wise, they can be considered as a special kind of model: every row is a vector pertaining to one datum. Therefore, a local sensitivity matrix is transformed to its global counterpart for use in inversion in two steps. Firstly, the sensitivity, as a model on the local mesh, is mapped to the global mesh. This local-to-global operation uses the same cell-intersecting information needed for the material averaging, but here the process is additive. The sensitivity matrix of the data in the ith subproblem on the global mesh, in the local coordinate system, is

\[
J_{l_i} =
\begin{bmatrix}
\cdots & j_{xl_i} & \cdots \\
\cdots & j_{yl_i} & \cdots \\
\cdots & j_{zl_i} & \cdots
\end{bmatrix}
V_{l_i}^{-1} R_i, \qquad (3.6)
\]

where the matrices V_{l_i}^{-1} and R_i together make the sensitivity available on the global mesh. In the case where the local mesh is smaller than the global mesh, the global cells not intersecting any local cells are assigned zero sensitivity. Secondly, the sensitivities, just as the data were, must be rotated back to the global system. Similar to equation 3.4, the sensitivity matrix in the global coordinate system and on the global mesh is

\[
J_g = S \begin{bmatrix} \vdots \\ M_i^{-1} J_{l_i} \\ \vdots \end{bmatrix}. \qquad (3.7)
\]

The full sensitivity J_g is rarely formed in practice. Instead, J_g or J_g^T is more commonly multiplied by a vector v in a variety of optimization algorithms. In the survey decomposition framework, when using an explicit method and a direct solver, the sensitivity matrices along with all the local auxiliary matrices are stored locally. When a sensitivity-vector multiplication is requested, the operation can be carried out concurrently within the subproblems.

To implement J_g times v, the vector v is broadcast to all of the subproblems and applied to equation 3.6 in parallel; every local process then returns a short vector, equivalent to J_{l_i} v, to the global process; the global process assembles those short vectors into a long vector; finally this long vector is applied to S for the final result. This procedure can be expressed as

\[
J_g v = S J_l v = S \begin{bmatrix} \vdots \\ M_i^{-1} J_{l_i} v \\ \vdots \end{bmatrix}. \qquad (3.8)
\]

J_g^T times v requires the transpose of a dense matrix J_{l_i}, which can be slow. This is worked around by computing v^T J_g and then taking the transpose. v^T first left-multiplies S, yielding another row vector with length equal to the number of data; this vector is then divided into many short row vectors based on the data ordering and segmentation, and the short row vectors are sent to the corresponding subproblems; within each subproblem, the short row vector left-multiplies M_i^{-1} and J_{l_i}, returning another row vector to the global process; the global process sums up those row vectors for the final result of v^T J_g. This procedure can be expressed as

\[
v^\top J_g = \sum_i v^\top S
\begin{bmatrix}
\vdots & \vdots & \vdots \\
e_{x_i} & e_{y_i} & e_{z_i} \\
\vdots & \vdots & \vdots
\end{bmatrix}
M_i^{-1} J_{l_i}, \qquad (3.9)
\]

where e_{x_i}, e_{y_i} and e_{z_i} have all zero entries except for the unit value in the row that corresponds to the datum from the ith subproblem.

Sometimes it is particularly useful to compute the diagonal of the square matrix J_g^T J_g. This piece of information can be used to assess the well-posedness of the inverse problem, to determine the initial trade-off parameter β in equation 1.6, and to precondition the CG solver when solving equation 1.12. In this calculation, every row of S, denoted by s_j^T for the jth row, left-multiplies J_g at the local level in the same way as in equation 3.9, and then all of the elements in the resulting row vector are squared; finally those row vectors across the different rows of S are summed up and transposed to obtain the diagonal of J_g^T J_g. This procedure can be expressed as

\[
\operatorname{diag}(J_g^\top J_g)^\top = \sum_j \Bigl( \sum_i s_j^\top M_i^{-1} J_{l_i} \Bigr)^{\circ 2}, \qquad (3.10)
\]

where ◦2 denotes the operation of element-wise (Hadamard) squaring. Because s_j^T is sparse, a shortcut can be made by sending only the non-zero segments of s_j^T to the active subproblems if the data are properly ordered.

The equations discussed in this subsection are generic, and can be modified according to the specific needs of different surveys for improved performance.
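The toy sketch below mimics equations 3.8 and 3.9 with random placeholder matrices: each subproblem holds a small dense block playing J_{l_i}, the products M_i^{-1}(J_{l_i} v) are computed independently (in parallel, in a real implementation), and S assembles the result. Here S and M_i are identities, as in an airborne survey without rotation; all sizes are made up.

```python
import numpy as np

rng = np.random.default_rng(0)
n_model, n_sub = 50, 4                       # toy sizes
J_loc = [rng.standard_normal((3, n_model)) for _ in range(n_sub)]  # J_li
M_inv = [np.eye(3) for _ in range(n_sub)]    # no rotation in this toy
S = np.eye(3 * n_sub)                        # identity, as in airborne

v = rng.standard_normal(n_model)
# Equation 3.8: the short vectors are independent subproblem computations.
shorts = [M_inv[i] @ (J_loc[i] @ v) for i in range(n_sub)]
Jg_v = S @ np.concatenate(shorts)

# Equation 3.9: v'Jg without transposing the dense J_li blocks; w plays the
# row vector v'S, segmented and sent to the subproblems, then summed.
w = rng.standard_normal(3 * n_sub)
vT_Jg = sum(w[3 * i:3 * i + 3] @ M_inv[i] @ J_loc[i] for i in range(n_sub))
print(Jg_v.shape, vT_Jg.shape)
```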
3.3.4 Massive parallelization

Solving the modeling problem after a survey-parameter (S-R-T/F) based decomposition is appealing because the resulting subproblems can be independently modeled, which makes the modeling much more scalable in an environment of massive parallelization than a domain-based decomposition with its heavy inter-subproblem dependency. In this subsection I examine a wall-clock time model of using survey decomposition and discuss its scalability in massive parallelization. For simplicity, the computing environment is assumed to be a uniform array of processors, each of which has its own allocated random-access memory.

In massive parallelization, the computing time consists of two parts: the solving time spent on the actual modeling and the communication time spent on passing data through the network. The solving time, assuming every subproblem is computationally identical, can be calculated as

\[
T_s = t_s \cdot \frac{N_{sp}}{N_p}, \qquad (3.11)
\]

where t_s is the time required to solve one subproblem on one processor, N_{sp} is the number of subproblems, and N_p is the number of processors.

The communication time is incurred because a vector, either model or data, must first be sent from the global process (host) to the local processes (workers) and then retrieved from the local processes back to the global process. This global-local-global communication is shown by the solid and dashed arrows in Figure 3.11, and its time can be approximated by

\[
T_c = 2 \cdot t_c \cdot \log_2 N_p, \qquad (3.12)
\]

where t_c is the time of passing a vector between two processors. The model of parallelization in Figure 3.11 assumes there are enough workers (processors) for every subproblem; otherwise every worker needs to be assigned more than one subproblem. The high efficiency of O(log n) in equation 3.12 is due to the fact that the parallel workers barely talk to each other, except at the beginning and end of the subproblem computing when vectors are broadcast through the network.

Figure 3.11: Model of parallel computing for survey decomposition: the host, holding the global discretization, broadcasts a vector to workers 1 through N, each holding the local discretization for one subproblem, and then collects and/or sums the returned vectors.

Considering the sum of T_s and T_c as a function of N_p, the total time for a given N_{sp} has a minimum at

\[
N_p^* = \frac{\ln 2}{2} \cdot \frac{t_s \cdot N_{sp}}{t_c} \approx 0.35 \cdot R_{s/c} \cdot N_{sp}, \qquad (3.13)
\]

in which R_{s/c} is the ratio of t_s to t_c. If the optimal number of processors is greater than the number of subproblems (N_p^* > N_{sp}), the computation can always gain further speed-up by adding extra processors (up to N_{sp} processors). This condition, referred to as continual speed-up, occurs if

\[
R_{s/c} > \frac{2}{\ln 2} \approx 2.88. \qquad (3.14)
\]

Assuming N_{sp} = 10000 and t_s = 100, Figure 3.12a shows the decay of the solving time T_s and the logarithmic growth of the communication time T_c for three example scenarios, t_c = 1, 5, 10. The minimizers of the total time function, calculated by equation 3.13 for the three t_c values, are 350000, 70000 and 35000 (processors), all greater than N_{sp}, so continual speed-up is possible by using up to N_{sp} processors, and the minimum time required for N_p = N_{sp} is 127, 233 and 366 for t_c = 1, 5, 10 respectively.
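These numbers can be checked directly from equations 3.11 to 3.13, as in the short script below (the values N_sp = 10000, t_s = 100 and t_c in {1, 5, 10} are the example scenarios above).

```python
import numpy as np

N_sp, t_s = 10_000, 100.0

def total_time(N_p, t_c):
    """T = T_s + T_c from equations 3.11 and 3.12."""
    return t_s * N_sp / N_p + 2.0 * t_c * np.log2(N_p)

for t_c in (1.0, 5.0, 10.0):
    N_opt = np.log(2.0) / 2.0 * t_s * N_sp / t_c     # equation 3.13
    print(f"t_c={t_c:>4}: N_p* = {N_opt:>9.0f}, "
          f"T at N_p=N_sp: {total_time(N_sp, t_c):.0f}")
# Prints roughly 350000/70000/35000 and 127/233/366, matching the text.
```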
If t_c is relatively large, the communication time begins to dominate the total time at a smaller number of processors. Figure 3.12b shows that the linearity of the speed-up depends on R_{s/c}.

In reality, t_c is a hardware-dependent parameter and may vary significantly, but it is generally true that, after survey decomposition, the most computationally intensive jobs, including solving PDEs and dense matrix-vector multiplications, are carried out within the subproblems at the local level, so R_{s/c} is large enough that the end users are more bound by the constraint of computational resources than by the algorithm's ability to take advantage of more processors.

Figure 3.12: Hypothetical performance of massive parallelization using survey decomposition: (a) solving time T_s, communication time T_c and total time T versus the number of processors for t_c = 1, 5, 10; (b) relative speed-up versus the number of processors for R_{s/c} = 10, 20, 100 and ∞.

3.4 Data Subsampling

Survey decomposition breaks the entire EM survey down into many physically self-contained subproblems of small size. Although massive parallelization is expected to tackle the large number of subproblems, it is always desirable to solve as few subproblems as possible. I have noticed that the collected data are usually oversampled in space and time and exceed what is actually needed in an inversion. This section discusses the feasibility of modeling only a small portion of the available data and introduces the general idea of adaptively subsampling the data using cross validation.

3.4.1 Oversampling in EM modeling

Given an area of exploration, the amount of data from an EM survey depends on the sampling rate of the equipment in space and time/frequency. Data acquisition systems usually sample at the highest possible rate allowed by the budget. From the data processing point of view, the redundancy of data can improve the SNR in the presence of random noise. However, this does not necessitate the modeling of every single datum. Recognizing why there is oversampling provides a basis for effectively reducing the number of subproblems after survey decomposition.

Oversampling happens if the information density of the signals is relatively low compared to the sampling rate; in other words, the sampled data provide almost redundant information. In EM modeling, this can be semi-quantified by comparing the measurement spacing with the scale of the survey, which is usually calculated as a "diffusion distance" or "skin depth". A larger scale means lower information density and possibly more oversampling.

Due to the diffusive nature of EM induction, the EM responses are generally considered "smooth" in both time and space at the scale of most surveys. For example, for a half-space model of 0.002 S/m and TEM data at 10^-4 s, the diffusion distance is about 300 m, which is significantly larger than the in-line sounding spacing in most airborne surveys. Information density falls off rapidly from early to late time channels. For a ground survey that measures data as late as 1 s, the scale of the survey can be so large that a meaningful change of the EM responses may not be seen even if the receiver moves across the entire survey area. Nevertheless, data at such times are still measured on a dense grid. A similar smoothness is also evident in the time/frequency domain.
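For reference, the 300 m figure follows from the usual diffusion-distance expression d = sqrt(2t / (μ0 σ)); the few lines below evaluate it for the example above.

```python
import numpy as np

MU_0 = 4.0e-7 * np.pi        # magnetic permeability of free space (H/m)

def diffusion_distance(t, sigma):
    """d = sqrt(2 t / (mu_0 sigma)), in meters."""
    return np.sqrt(2.0 * t / (MU_0 * sigma))

print(diffusion_distance(1e-4, 0.002))   # ~282 m, i.e. about 300 m
```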
Therefore, the first type of oversampling is due to the mismatch between the information density of EM data and the sampling rate in realistic surveys. Some regular or smart subsampling schemes have been proposed to address this (Siripunvaraporn & Egbert, 2000; Foks et al., 2014).

The second type of oversampling, caused by the regularization in inversion, can also be explained using the concepts of information density and the scale of the survey. A regularized inversion often starts with a strong emphasis on the simplicity of the model, constructing only large-scale features in the model and using little information from the data. At this stage, not all the information in the data is needed, so even a sampling rate tuned to properly match the information density of the data yields oversampling. As the inversion proceeds and the regularization parameter decreases, more model structure at smaller scales is allowed, demanding a higher sampling rate.

To conclude, in reducing the oversampling, effort can be made to match the sampling rate to (1) the static scale exhibited in the data and/or (2) the dynamic scale concerned in the process of inversion.

3.4.2 Adaptive subsampling using cross validation

A common practice for choosing a subset of data is to subsample the original data using a coarser grid with uniform spacings. However, such a static grid carries a higher risk of biasing the inversion result. For example, an insufficient subsampling rate on a fixed grid can prevent the inversion from having complete information about the noise; in a worst-case scenario, when the subsampling happens to be in phase with a particular noise source, like the swing of the bird in an airborne survey, or any kind of periodic structure of the earth, significant bias may be created in the model. One solution is to sample the continuous signals dynamically and randomly, which, unlike a uniform grid that only passes low-frequency signals, is not frequency selective (Herrmann, 2010). Using a scheme of random sampling with a uniform probability distribution (constant probability density function) also simplifies the process of subsampling to finding the optimal number of samples over the sampling domain.

The number of samples required in inversion depends on the scale of investigation. Cross validation can be used in the adaptive search for the practically optimal number of samples that matches the scale. This procedure usually involves two subsets of the data: a training subset and a test subset, which are independently chosen. Ideally, a certain type of operation or investigation would be applied to the entire data set; however, due to the cost of accessing all the data, one can choose to operate only on the training subset, from which a decision or result is then obtained. To prevent the decision/result from being misled by the subsampled training subset, the test subset is used as an independent check to verify whether the decision/result made on the basis of the training subset also satisfies the test subset. If the validation fails, the number of samples in the training subset is deemed not representative enough, and more samples should be added. This adaptive procedure can keep adding new samples up to the extreme situation in which the training subset is the entire data set and the decision/result is guaranteed to pass the test.
The test subset can have the same number of samples as the training subset or some other reasonable number.

Here I use a demonstrative example to show how cross validation can be indicative in matching the required number of samples to the scale of investigation. The operation in this example is simply to calculate the mean value of a data vector. For a given vector of length 100, different numbers of random samples per subset (N, the size of the subset) from 1 to 40 are used to estimate the mean value in both the training and test subsets. The consistency error, evaluating the subsampling performance, is then defined as the absolute difference between the means from the training subset and the test subset for a given N.

The first experiment operates on a vector containing high-frequency signals (Figure 3.13a). The mean values estimated by the training and test subsets at different values of N are indicated by the solid dots and circles in Figure 3.13b. There are local fluctuations in the consistency error, but the overall trend shows that a larger N improves the accuracy of the operation. For this particular data vector, about N = 30 samples are required to maintain a performance measured by a consistency error of 1 (Figure 3.13c).

In the second experiment, the high-frequency component of the original vector is removed, leaving a smooth signal representing a broader scale of investigation (Figure 3.13d). The mean values from the two subsets converge quickly at a smaller N (Figure 3.13e), so that the consistency error of 1 is achieved for an N of less than 10 (Figure 3.13f). Therefore, by checking the training subset against another independent test subset, it is possible to tell whether the current number of samples is enough to match the current scale of investigation.

Figure 3.13: Demonstrative example: adapting the number of samples to the scale of investigation using cross validation. (a) High-frequency (HF) signal; (b) estimated means of HF; (c) consistency error of HF; (d) low-frequency (LF) signal; (e) estimated means of LF; (f) consistency error of LF.
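The experiment is easy to replicate in a few lines; the sketch below uses synthetic signals of my own (not the thesis data) and the same train/test consistency error.

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.arange(100)
lf = 20.0 + 5.0 * np.sin(x / 15.0)              # smooth, low-frequency signal
hf = lf + 4.0 * rng.standard_normal(100)        # add a high-frequency part

def consistency_error(signal, n, rng):
    """|mean(train) - mean(test)| for two independent random subsets."""
    train = rng.choice(signal, size=n, replace=False)
    test = rng.choice(signal, size=n, replace=False)
    return abs(train.mean() - test.mean())

for n in (5, 10, 20, 40):
    e_hf = np.median([consistency_error(hf, n, rng) for _ in range(500)])
    e_lf = np.median([consistency_error(lf, n, rng) for _ in range(500)])
    print(f"N={n:2d}: HF error {e_hf:.2f}, LF error {e_lf:.2f}")
# The smooth signal reaches a small consistency error at a much smaller N.
```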
In the context of inversion, the training and test subsets are samples from the full data set. Every sample of data is a subproblem characterized by source/receiver location, time, or their combination. The operation on the training subset is computing a model update at a certain scale determined by the trade-off parameter. The test subset validates the proposed model update by comparing the data misfits of the test subset before and after the update. If the model update satisfactorily improves the misfit of the test subset, the model update can be accepted and the inversion can proceed to the next iteration with a reduced trade-off parameter; otherwise the model update is rejected. In the latter case, the current number of samples is deemed insufficient for the current scale of investigation, so more samples need to be added to the training subset to generate a new proposed model update. This adaptive approach to data subsampling can be implemented in different ways depending on how the data are organized; the specific workflows for airborne and ground loop surveys are discussed in Chapters 4 and 5 respectively.

I also note that this "adapt-to-the-scale" approach exploits the multi-scale nature of EM inversion. The requirement on the modeling accuracy can be relaxed when the data and model at a broader scale are considered, leading to significant savings in the number of data. The implementation of such selective modeling is difficult in the standard algorithm, but becomes a natural choice if the survey is decomposed as proposed here.

3.5 Summary

This chapter lays the theoretical foundation of the framework of survey decomposition in geophysical EM modeling.

Firstly, three computational complexities associated with EM modeling, namely space complexity, time complexity and optimization complexity, are critically examined to reveal the over-computing hidden in the standard algorithm. A significant amount of unnecessary over-computing exists if the conventional global discretization is used. The one-size-fits-all discretization, exemplified by a global mesh that serves all sources, receivers and times/frequencies, is not economical because of the great contrast of modeling scales within one discretization scheme. Any discretization with reduced scale contrast has reduced computational complexity and thus can be more efficient. This idea prompts the decomposition of the survey so that the source, receiver and time/frequency are localized and the subproblems can have customized discretizations.

Secondly, an atomic building block is established as the smallest component for assembling the subproblems of any type of controlled source EM survey in the time and frequency domains. An atomic problem has only one dipole source, one point receiver and one time channel/frequency. With the help of superposition, I was able to decompose most of the commonly used CSEM surveys, including airborne EM, ground large loop EM, marine CSEM, DC resistivity, MMR/MIP, etc., into subproblems built from atomic problems.

Once the subproblems are identified, I show how the local mesh and local time discretization are designed for each subproblem. Unlike a global mesh that has fine cells everywhere over the entire survey area, a local mesh is locally refined at the dipole source and receiver locations and usually only covers the region that the subproblem is sensitive to. A local time discretization only models one time channel, with a constant step length adapted to the right scale of investigation. The local discretization, being different from the global one, mandates a two-level (global and local) architecture of computation. The actual computations and storage of forward responses, sensitivities and sensitivity-vector multiplications (Jv or J^T v) are all carried out within the subproblems at the local level, while the global process is only responsible for summarizing the computing results from the local level. As no PDE solving is run directly on the global mesh, the size of the global mesh is no longer a prohibitive issue when working with large-scale problems.

The logical decomposition based on survey parameters makes the subproblems fine-grained and also independent of each other. This is ideal for massive parallelization. Each subproblem, with its own source, receiver and time/frequency, is a self-contained EM problem. No communication is required after the parallel workers receive the initializing broadcast from the host (global) process and before the computing results are sent back to the host. Thanks to this independence, each datum from the survey can be assigned to a subproblem running on one processor at the maximum scale of parallelization.
A simplified run-time model is discussed to show the scalability of the new framework.

After decomposition, the original problem becomes a number of small subproblems. I note that, due to the oversampling of data, it is not necessary to model all of them in an inversion. An adaptive, random and dynamic data subsampling approach using cross validation is therefore proposed to minimize the number of data (subproblems) used in inversion. Again, the fundamental idea is still to match the subsampling rate to the scale of modeling.

In the next two chapters, the theory of survey decomposition is applied to two surveys important in mineral exploration, airborne EM and ground loop EM, with both synthetic and field data examples.

Chapter 4

Airborne TEM

Given the theoretical background and the general idea of survey decomposition in Chapter 3, this chapter uses the framework of survey decomposition to solve the practical modeling problems in airborne TEM. Because of its unique way of collecting data, the entire airborne survey is decomposed into many soundings. Every sounding is a subproblem with its source, receiver and measurements at a sequence of time channels. This chapter demonstrates the benefits of carrying out forward modeling and sensitivity computation for airborne TEM based on local meshes. A synthetic example is provided as a proof-of-concept inversion. The VTEM data at Mt. Milligan, partly inverted in Chapter 2, are more efficiently inverted as a whole using survey decomposition.

4.1 Local Mesh for Airborne Sounding

The generic formulas in Chapter 3 remain valid for airborne soundings. Treating the airborne survey as a special case, some simplifications can be made:

• Every sounding has its own local mesh, on which a subproblem concerning all of the delay times is solved.

• The separation of the transmitter and receiver is small enough that there is only one center of refinement. A local mesh for airborne data looks similar to Figure 3.7 but without the cells between the transmitter and receiver.

• Such a local mesh does not need rotation, so M in equation 3.3 is the identity.

• The non-rotating local mesh has grid lines parallel to those of the global mesh; R in equation 3.2 can be computed more quickly using rigorous computational geometry.

• Every sounding has its own magnetic dipole transmitter, so there is no superposition, and S in equation 3.4 is also an identity.

• If M and S are ignored, the computation of J_g^T J_g v and diag(J_g^T J_g) only needs one global-local-global cycle of communication, cutting the communication time in half.

4.1.1 Forward modeling on a local mesh

To show that airborne soundings can be effectively modeled on local meshes, I use a synthetic model with a random conductivity structure. The original model is defined on a global mesh finely discretized to 50 m resolution for a survey area of 4 × 4 km (Figure 4.1a). I choose the sounding at the center of the model (indicated by a red dot). The first local mesh (Local Mesh 1) has an expansion rate of 1.2 and a boundary distance of 1500 m. This mesh performs well at early times compared to the global mesh but has up to one order of magnitude error at late times due to an insufficient distance to the boundary. The local mesh is then expanded to a boundary distance of 3000 m (Local Mesh 2, Figure 4.1b), which is still much smaller than the global mesh.
With the additional three cells in every direction, Local Mesh 2 is able to provide forward modeled data within 5% of the global mesh results at late times (Figure 4.2).

Although fine structures of the model away from the sounding location are represented by bulky cells, a local mesh is still capable of producing a good simulation at a much reduced cost. Table 4.1 summarizes the three forward modelings on the global and local meshes. The global mesh, being large enough for the entire survey, is expensive in both time and memory. Because a large Maxwell matrix needs to be factorized, 12 processors are required. The local meshes are much smaller and thus more efficient, even though only one processor is used.

Figure 4.1: The synthetic model on the global mesh and a local mesh.

Figure 4.2: Forward modeled data on the global mesh and the two successive local meshes in the test.

Table 4.1: Forward modeling of an airborne sounding on the global mesh and two local meshes.

                                Global Mesh    Local Mesh 1    Local Mesh 2
  Number of cells               108×108×33     22×22×27        28×28×30
  Time for one factorization    123 s          1.5 s           4.1 s
  Time for one time step        1 s            0.02 s          0.06 s
  Number of processors          12             1               1
  Memory (one A^-1)             40.5 GB        174 MB          395 MB

Multiplying the cost by the number of soundings further signifies the benefits of using a local mesh approach in airborne inversion. If the entire survey contained 1080 soundings and one forward modeling had 4 factorizations and 50 time steps, the global mesh would require at least 54492 s for one complete forward modeling on 12 processors, and further speed-up by adding more processors is difficult. Local Mesh 2, however, needs 19.4 s for one sounding (subproblem) and a total of 20952 s if all computations were carried out on a single processor. Distributing these jobs over the 12 processors reduces this to 1746 s. Increasing the number of processors continues to deliver this linear benefit.
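The bookkeeping behind these totals follows directly from Table 4.1; the sketch below reproduces it, assuming the four factorizations on the global mesh are shared by all soundings (the system matrix does not depend on the source position), while each local-mesh subproblem pays for its own.

```python
n_sound, n_fact, n_steps = 1080, 4, 50

# Global mesh: factorize once for all soundings, then step every sounding.
global_total = n_fact * 123.0 + n_sound * n_steps * 1.0
# Local Mesh 2: every subproblem factorizes and steps on its own small mesh.
local_one = n_fact * 4.1 + n_steps * 0.06
local_total = n_sound * local_one

print(global_total)        # 54492 s on 12 processors (hard to speed up)
print(local_one)           # 19.4 s per sounding
print(local_total)         # 20952 s on a single processor
print(local_total / 12)    # 1746 s on 12 processors, scaling further
```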
4.1.2 Sensitivity on a local mesh

The same synthetic model used in the forward modeling example (Figure 4.1a) is used to show that the global sensitivity can be reasonably reconstructed using a local mesh. At the same sounding, I choose the row of the sensitivity matrix corresponding to the dBz/dt datum at time t = 0.001 s and present the sensitivity on the mesh. The sensitivity is computed directly on the global mesh (Figure 4.3a) and on Local Mesh 2, then interpolated using equation 3.6 (Figure 4.3b). The local mesh result matches well with the sensitivity computed on the global mesh. The computational costs are similar to those of forward modeling.

Figure 4.3: Sensitivities of the dBz/dt datum at t = 0.001 s for the synthetic model: (a) the global mesh result and (b) the local mesh result with interpolation.

4.1.3 Synthetic inversion of a two-block model

A synthetic example is designed to test the inversion using survey decomposition. The true model consists of two conductive prisms, 0.1 and 0.05 S/m, buried in a 0.01 S/m uniform half-space (see the depth slice at 150 m in Figure 4.4a). A synthetic airborne TEM data set is created on a 13×37 data grid over a 1.2×3.6 km area marked by the red dots. Seven time channels of dBz/dt data from 10⁻⁴ to 10⁻² s are simulated at 481 sounding locations 100 m apart. The synthetic data are noise-free, but I require the inversion to fit the data with an assigned uncertainty of 5%. The mesh used for the creation of the synthetic data, which also serves as the global mesh, has 155820 cells (53×98×30). A complete forward modeling consists of four factorizations and 48 time steps, and modeling all 481 soundings on this mesh takes about one hour.

As a benchmark, I first invert the synthetic data set directly on the global mesh with a 0.01 S/m half-space as the initial and reference models using the standard algorithm. The inversion, using 128 GB of memory on 24 processors, takes 7 Gauss-Newton iterations and achieves the target misfit in 156 hours (Figure 4.4b).

For survey decomposition, I use identical inversion parameters but with local meshes to compute the forward modelings and sensitivities. The first inversion test is carried out by equally distributing the 481 subproblems (soundings/local meshes) over 12 processors. The target misfit is achieved within 15 hours after 8 iterations. Figure 4.4c shows the conductivity model recovered using the local meshes; the two prisms are delineated with correct geometries and conductivities, and the result is similar to the model recovered by the standard algorithm directly on the global mesh (Figure 4.4b).

A second run of the same inversion, on 24 processors, is carried out to test the scalability. In this case the workload on each processor is cut in half, to 20 subproblems. The second inversion produces an identical inversion result, convergence curve and data misfit, but the total CPU time is reduced to about 7.5 hours. Even greater potential for speed-up exists if more processors are available.

Figure 4.4: Synthetic airborne TEM inversion using survey decomposition. Depth slices at 150 m of (a) the true model, (b) the global discretization inversion model and (c) the survey decomposition inversion model.

4.2 Adaptive Subsampling of Soundings

4.2.1 Subsampling workflow

Although the framework of survey decomposition greatly reduces the time of 3D inversions, and further acceleration is achievable by adding more processors, the total number of soundings in an airborne EM survey typically numbers in the hundreds of thousands, and this can be formidable. Therefore, data subsampling is very necessary in airborne data inversion.

Using the concept discussed in Section 3.4, I develop an adaptive subsampling workflow for airborne soundings (Algorithm 1). This workflow features random and dynamic subsampling of soundings, and the number of soundings N at every iteration is adapted to the scale of investigation in the inversion, which is essentially controlled by the trade-off parameter β.
Adaptive subsampling of soundings requires frequent solves of the inverse problem on the training subset to propose a model update δm, and forward problems need to be solved for the test subset to cross-validate the model update. In practice, I want the computations of the local mesh design, matrix factorizations and forward problems done for the test subset to be recycled for the training subset (inversion) in the next iteration, so N is always doubled when increased. Each Gauss-Newton iteration involves two steps: (1) selecting the training subset and computing δm using the training subset; (2) selecting the test subset and evaluating whether the proposed model update is acceptable by comparing the misfits of the data in the test subset before and after the update. There are three possible consequent actions:

• End of inversion. This happens when the data misfit is below a tolerance.

• Enough soundings (N is large enough). If the data misfit of the test subset is sufficiently reduced (controlled by a factor µ) after the model update, δm is accepted, and the test subset, already having the forward solutions F(m+δm) for the updated model, can be used as the new training subset in the next iteration after the additional computation of the sensitivity J(m+δm). There is no need to increase N; β is cooled to a smaller value to allow more model structure in the next iteration.

• Not enough soundings (N is not large enough). If the model update does not improve the data misfit of the test subset, δm is declined, and the test subset, already having the forward solutions F(m) for the original model, is appended to the existing training subset after the additional computation of the sensitivity J(m); this double-sized training subset is used in another attempt to make a model update with β unchanged. N is doubled.

The solve for δm in Algorithm 1 can be attained by methods other than the Gauss-Newton method implemented here. Both explicit and implicit sensitivities can be used at the end user's discretion.

Algorithm 1 Inversion with adaptive subsampling of airborne soundings

Initialization:
  Select N soundings for training subset S_train
  Compute forward responses F(m)_train at S_train
  Compute sensitivity J(m)_train at S_train
  Empty test subset S_test
repeat
  Solve for δm using F(m)_train, J(m)_train, β and m
  Select N soundings for test subset S_test
  Compute forward responses F(m)_test at S_test
  Compute data misfit φ(m)_test at S_test
  Compute forward responses F(m+δm)_test at S_test
  Compute data misfit φ(m+δm)_test at S_test
  if φ(m+δm)_test ≤ tol or φ(m+δm)_test < µ·φ(m)_test at S_test then
    Compute sensitivity J(m+δm)_test at S_test
    S_train ← S_test
    F(m)_train ← F(m)_test
    J(m)_train ← J(m)_test
    Empty S_test, F(m)_test, J(m)_test
    Reduce β
    m ← m + δm
  else
    Compute sensitivity J(m)_test at S_test
    N ← N × 2
    S_train ← S_train ∪ S_test
    F(m)_train ← F(m)_train ∪ F(m)_test
    J(m)_train ← J(m)_train ∪ J(m)_test
    Empty S_test, F(m)_test, J(m)_test
  end if
until φ(m)_test < tol
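As a minimal illustration of the control flow in Algorithm 1, the sketch below implements the accept/reject logic in Python. The forward, sensitivity, misfit and solve_update functions are stubs standing in for the local-mesh EM modeling and the Gauss-Newton solve; their names, and the cooling factor of 0.5 for β, are assumptions made only for this illustration:

    import numpy as np

    rng = np.random.default_rng(0)

    def select(all_soundings, n):
        # uniform random subsample without replacement
        return rng.choice(all_soundings, size=n, replace=False)

    # Stubs standing in for local-mesh EM modeling (hypothetical placeholders).
    def forward(m, subset):      return {s: np.zeros(7) for s in subset}
    def sensitivity(m, subset):  return {s: np.zeros((7, m.size)) for s in subset}
    def misfit(F, subset):       return 1.0          # normalized data misfit
    def solve_update(F, J, beta, m):  return np.zeros_like(m)

    def adaptive_inversion(all_soundings, m, N=48, beta=1.0, mu=0.9, tol=1.0):
        train = select(all_soundings, N)
        F_train, J_train = forward(m, train), sensitivity(m, train)
        while True:
            dm = solve_update(F_train, J_train, beta, m)
            test = select(all_soundings, N)
            F_old, F_new = forward(m, test), forward(m + dm, test)
            phi_old, phi_new = misfit(F_old, test), misfit(F_new, test)
            if phi_new <= tol or phi_new < mu * phi_old:
                # accept: recycle the test subset as the next training subset
                m = m + dm
                train, F_train, J_train = test, F_new, sensitivity(m, test)
                beta *= 0.5                  # cool the trade-off parameter
                if phi_new <= tol:
                    return m
            else:
                # reject: double N by merging the test subset into training
                J_test = sensitivity(m, test)
                train = np.concatenate([train, test])
                F_train.update(F_old)
                J_train.update(J_test)
                N *= 2

    m = adaptive_inversion(np.arange(481), np.zeros(1000))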
4.2.2 Synthetic inversion with adaptive soundings

The combination of survey decomposition and adaptive soundings can greatly reduce the CPU time of the inversion. I illustrate this with the same synthetic data set from the two-prism model in Figure 4.4, inverted with the same parameters on 24 processors. The initial and reference models are again a 0.01 S/m half-space. The initial number of soundings is set to 48 at the beginning of the adaptive procedure. The number of soundings gradually increases to 384, and the target misfit is achieved within 2.5 hours after 6 iterations (Figure 4.5).

Figure 4.5: Summary of the synthetic airborne inversion using both survey decomposition and adaptive soundings.

The final inversion model (Figure 4.6d) is almost identical to the model recovered by the inversion that uses only the local mesh method (Figure 4.4c). The intermediate models created by adaptive soundings are somewhat different (Figure 4.6). The first iteration has only 48 soundings, so the inversion recovers the global trend of the conductivity plus significant artifacts near the surface (Figure 4.6a). As the number of soundings increases, the desired targets become clearer (Figure 4.6b and c). The final model in Figure 4.6d is visually the same as the inversion model in Figure 4.4 when they are rendered in the same color scale. By using adaptive soundings, about 60% of the CPU time is saved compared to modeling all 481 soundings at every iteration.

Although random subsets of soundings are used, the overall coverage of the data, in terms of how many times a sounding is used during the inversion, is adequately balanced in space (Figure 4.7). On average, each sounding is selected for approximately 2.4 iterations. At the end of the procedure there were only 10 soundings that were not used at all. We also note that most commercial airborne EM systems have a much denser spatial sampling rate (5∼10 m) than our synthetic survey (100 m). Therefore, using the adaptive sounding approach to find the necessary number of soundings for an inversion is more economical than inverting every sounding in an overly redundant data set. This is shown in the field example in the next section.

Figure 4.6: Conductivity models recovered at different iterations of adaptive soundings: (a) Iteration 1 with 48 soundings, (b) Iteration 2 with 96 soundings, (c) Iteration 4 with 192 soundings, (d) Iteration 6 with 384 soundings. The red dots indicate the sounding locations, and the white boxes outline the exact locations of the two prisms.

Figure 4.7: Counts of selection for every sounding throughout the entire inversion.

4.3 Inversion of the Entire Mt. Milligan Data Set

4.3.1 3D inversion

When solving EM problems on local meshes, the forward modeling is never carried out on the global mesh, so the number of cells in the global mesh becomes less crucial. For the VTEM data at Mt. Milligan I design a global mesh with 443520 (88×84×60) cells to hold the entire survey area at 50 m horizontal resolution and 20 m vertical resolution for the topography (Figure 4.8). This mesh, along with the large number of soundings, is too large for the standard algorithm of Oldenburg et al. (2013) to be practically carried out.

Figure 4.9 summarizes three key parameters of the inversion: the cumulative run time, the number of soundings used at each iteration, and the normalized data misfit.
For early iterations, only 24 soundings were needed to build up the large-scale conductivity distribution; smaller-scale features were then built up by adding soundings. The number of soundings used in the final iteration is only 192, which implies that the necessary number of soundings for the 3D VTEM inversion at Mt. Milligan is about 200, only 1.4% of the total number of soundings acquired in the survey. During the entire inversion procedure, information from 744 soundings (5% of the total) was incorporated into the final model, and thus 744 subproblems (local meshes) were solved. The total time was 4.3 hours, and I anticipate further speed-up if additional processors were available.

A depth slice of the 3D inversion model over the whole survey area at an elevation of 950 m and a cross section A-B at 6109500N cutting the major stock MBX are shown in Figure 4.10. White lines outline the major monzonite stocks and the faults. VTEM flight lines are indicated by the red lines. As in the small-scale inversion in Section 2.4, the resistive MBX stock is clearly delineated by the 3D inversion.

Figure 4.8: Global mesh for the 3D inversion of the VTEM data at Mt. Milligan. The VTEM sounding locations are indicated by the red dots.

Figure 4.9: Summary of the Mt. Milligan VTEM 3D inversion.

Figure 4.10: Conductivity model of the Mt. Milligan VTEM 3D inversion: (a) a depth slice at 950 m elevation, (b) a cross section A-B at 6109500N.

4.3.2 Final model validation

Lastly, I carry out a complete forward modeling using the recovered model in Figure 4.10 at all 14362 soundings. With 8 data (time channels) at each sounding, the total number of data is 114896. The misfit of this validation forward modeling, normalized by the total number of data, is φd/N = 2, which compares satisfactorily with φd/N = 1.87 estimated from the 192 random soundings at the last iteration, and with φd/N = 2.1, the data misfit achieved in Section 2.4 for a smaller area around the MBX.

The time-channel grids of the observed and predicted data at 0.68 ms (a delay time corresponding to the expected depth of the deposit) in Figure 4.11 show that the 3D model recovered using local meshes and adaptive soundings can reasonably reproduce the observed data. The white lines indicate the 14362 soundings, approximately 3 m apart from each other. The scattered black and red dots show the 744 and 192 soundings used in all iterations and in the last iteration of the inversion, respectively.

Figure 4.11: Data grids of the time channel at 0.68 ms for the observed and predicted dBz/dt data at Mt. Milligan.

4.4 Summary

Airborne EM data are difficult to invert in 3D because of the large computations required to handle multiple transmitters and the large meshes needed to represent the volume being modeled.
These computational difficulties are tackled using the framework of survey decomposition.

Modeling an airborne EM survey has large space complexity and moderate time complexity, so such a survey can be conveniently decomposed so that every airborne sounding is a subproblem modeling all delay times. A local mesh is optimally designed to compute the forward responses and sensitivities at every sounding and can yield good accuracy with a modest number of cells. As such, an entire forward modeling or sensitivity calculation is easily distributed among an array of processors in parallel.

The second area of progress pertains to the number of soundings used at each iteration of the inversion. It is well known that modeling every sounding in an airborne data set is not necessary. I propose a random and dynamic subsampling method, called adaptive sounding. It is essentially a random sampler with an adaptive number of soundings selected for each iteration. Fewer soundings are selected in early iterations to build up large-scale structure, and more soundings are added later as the regularization is relaxed and additional structure is needed to fit the data. The procedure has an additional optional check: the user may carry out a complete forward modeling at the end, using all the soundings, to validate the constructed model.

In a test using synthetic data from a two-prism model, I reduced the inversion time, compared to the standard approach, by more than a factor of 60. That example was small, and the disparity between the standard and new methods will grow as the size of the problem and the number of processors increase.

In a field example, which was too large to be practically solved in Section 2.4, the inversion using survey decomposition takes only 4.3 hours on 24 processors. This is a very substantial speed-up and represents a major step toward making routine 3D inversion of airborne EM data a reality.

Chapter 5

Ground Loop TEM

While the decomposition of an airborne EM survey is straightforward, a ground survey using large loops can be more complicated because:

• Ground loop surveys are less standardized. The layout of a survey, including the geometry of the loop and the locations of the receivers, varies greatly from one site to another.

• The transmitter loop is usually too large to be local, so it must be properly decomposed into a number of small dipole sources using superposition.

• Data over a large dynamic range of times are measured. The time complexity associated with ground loop EM necessitates decomposition into individual delay times.

This chapter first introduces a field data example, whose specifications are used for my investigation of the ground loop EM survey. The large loop source is efficiently decomposed into looplets using an adaptive looplet approach. As in the airborne case, data subsampling is an integrated part of the workflow; this minimizes over-computing and the number of subproblems to be solved.
The framework is tested with a synthetic example and finally applied to the field data to show its efficiency and effectiveness.

5.1 Field Example: SQUID Data at the Lalor Mine

The Lalor Mine deposit is located in the Chisel Basin portion of the Flin Flon Greenstone Belt, about 8 km west of Snow Lake in central Manitoba, Canada. The deposit, discovered in 2007 within the meta-volcanics, meta-sediments and granitoids of the Churchill province near Chisel Lake and Snow Lake, is believed to be the largest volcanogenic massive sulfide (VMS) deposit ever found in this area (Figure 5.1).

VMS deposits in this region have the geophysical signature of good conductors, usually discrete bodies, buried in a relatively resistive host. Because of the strong EM responses of VMS, a variety of EM surveys have been conducted to delineate the complex of conductors, which are most likely to be directly associated with economic Zn-Au-Cu mineralization. One of the surveys, featuring an advanced JESSY HTS SQUID (superconducting quantum interference device) magnetometer, is of particular interest. Compared to conventional dB/dt (time derivative of the B-field) data, SQUID data have two advantages: (1) a SQUID magnetometer measures three-component data of unprecedented quality (Chwala et al., 2001; Osmond et al., 2002; Leslie et al., 2008), while dB/dt data measured by induction coils tend to be sensitive to noise at late times; (2) B-field data can better detect the slow decays in a good conductor (Smith & Annan, 1998), while dB/dt data may confuse slow decays with low conductivity. For these reasons, the SQUID is particularly useful in exploring for VMS at the Lalor Mine, as SQUID data enhance the signals from a good conductor and can measure the weak B-field signals from deep targets at very late times. As a relatively new technique, B-field SQUID data have not previously been interpreted using rigorous 3D modeling; therefore, work on the 3D inversion of this data set has the additional value of novelty.

The original SQUID survey at the Lalor Mine consists of multiple transmitter loops and receiver lines. In this thesis, I concentrate on a 900×1500 m rectangular transmitter loop, called the "East Loop", and data from two receiver lines perpendicular to each other; this provides some degree of 3D coverage. The locations of the transmitter and receivers are shown in the survey coordinate system in Figure 5.2. The transmitter, powered by a Phoenix TXU-30, transmits a step-off waveform at a base frequency of 0.5 Hz. The magnetic field data measured by the SQUID were recorded with an EMIT SMARTem24 at 39 time channels from 0.1 to 371 ms. The data before 0.87 ms are highly contaminated by correlated noise; as a result, only the data from 0.87 to 371 ms are used in the inversion.

Figure 5.1: Location of the Lalor Mine deposit on the map of geologic domains in Manitoba (adapted from the government of Manitoba's website).

Figure 5.2: Layout of the SQUID ground loop TEM survey at the Lalor Mine.
The survey layout in Figure 5.2 is used as a representative example for both the synthetic and field data studies throughout this chapter.

5.2 Decomposition of a Large Transmitter

One of the complications of decomposing a ground TEM survey is that the transmitter is too large to be treated as a localized source. In this section I represent the large transmitter loop by a linear combination of magnetic dipole sources (looplets), and show how they perform in computing the forward responses and sensitivities.

5.2.1 Atomic problem with a long-offset source-receiver pair

In most cases, the source and receiver in the subproblems of a ground survey can have offsets from zero to a few kilometers. As a result, the local meshes used in the subproblems of a ground survey need two local refinements. This subsection tests a sequence of atomic problems with an offset source-receiver pair on a synthetic model.

The synthetic model is a 10⁻¹ S/m conductive sphere buried in a 10⁻³ S/m uniform half-space (Figure 5.3); the magnetic dipole source is very close to the target on the surface (red dot in Figure 5.3), giving a challenging scenario for numerical modeling. The receivers move along two lines (white dots in Figure 5.3): Line 1 along the x-direction and Line 2 along the y-direction. This yields many different orientations of the source, receiver and target with respect to one another.

Figure 5.3: Conductivity model and locations of sources and receivers for the atomic problem modeling test. The magnetic dipole transmitter is marked by the red dot and the receivers by the white dots.

Four delay times, 10⁻⁴, 10⁻³, 10⁻² and 10⁻¹ s, are modeled in this test. Each atomic problem has its own time discretization, requiring 15 steps from t = 0 to its modeled delay time, and its own local mesh based on the locations of the source and receiver and also on the time. The adaptive strategy described in Section 3.3 is employed: the initial smallest cell size and domain size of a local mesh are first inferred using the diffusion distance; the domain size then expands at a rate of 1.5, and the cell size refines at a rate of 0.6.
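For concreteness, the diffusion distance invoked here is the standard TEM diffusion depth, d = sqrt(2t/(μ₀σ)). The sketch below (Python) shows how initial local mesh sizes could be seeded from it and then grown/refined at the quoted rates; the d/20 starting cell ratio and the three-trial schedule are illustrative assumptions, since the actual rules are those of Section 3.3:

    import numpy as np

    MU0 = 4e-7 * np.pi

    def diffusion_distance(t, sigma):
        # standard TEM diffusion depth: d = sqrt(2 t / (mu0 * sigma))
        return np.sqrt(2.0 * t / (MU0 * sigma))

    sigma_half = 1e-3                       # background half-space (S/m)
    for t in [1e-4, 1e-3, 1e-2, 1e-1]:
        d = diffusion_distance(t, sigma_half)
        # successive trial meshes: domain grows by 1.5x, cell shrinks by 0.6x,
        # until the modeled datum stops changing ("refine-until-no-change")
        domains = d * 1.5 ** np.arange(3)
        cells = (d / 20.0) * 0.6 ** np.arange(3)   # d/20: illustrative seed
        print(f"t={t:.0e} s  d={d:8.0f} m  domains={np.round(domains)}"
              f"  cells={np.round(cells)}")

For the 10⁻³ S/m half-space used here, d is roughly 400 m at 10⁻⁴ s and 12.6 km at 10⁻¹ s, which is exactly the scale contrast that motivates time-by-time local meshes.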
The results at the four time channels obtained with local discretizations (plotted as dots) are compared to the results from the global discretization modeling (solid lines) in Figure 5.4. A high degree of agreement is observed for most of the data, especially for the two early time channels, 10⁻⁴ and 10⁻³ s. The Hy data of Line 1 should be zero by symmetry, and the atomic modeling results are too small to appear on the chart. Some moderate discrepancy is seen for Hx and Hy at 10⁻² and 10⁻¹ s when the local meshes are rotated about 45° with respect to the global coordinate system. This could be improved by fine-tuning the parameters of the local mesh design at the cost of increased computation; however, the Hx and Hy components are mostly much weaker than the Hz component because of the coupling with the horizontal transmitter loop, and Hz is modeled reasonably well using the local discretization everywhere at all time channels. The slightly poorer modeling of Hx and Hy becomes less problematic in the inversion when the data misfits are normalized by the total field strengths, because the null-coupled component data are then down-weighted.

Figure 5.4: Atomic problem modeling results obtained with local and global discretizations along the two perpendicular lines in Figure 5.3. The local and global discretization modelings are plotted as dots and solid lines respectively. The time channels are distinguished by colors.

5.2.2 Adaptive looplet

A typical ground TEM survey uses a large transmitter loop on the ground and measures magnetic-field data at many receiver locations around the transmitter at a sequence of time channels. The survey is first decomposed into many receiver-time (R-T) pairs sharing the same large transmitter. Within each R-T pair, the modeling problem is solved by using many looplets inside the transmitter loop and then summing the results with defined weights; every subproblem has a looplet as the source and an offset receiver measuring data at a particular delay time.

The first task is to find the locations of the looplets and the weights associated with each of them. This problem is mathematically identical to the numerical integration of a 2D function, which in the context of this EM modeling is the EM response (or sensitivity) measured at a fixed R-T pair as the looplet source is moved to different locations. The sampling rate that accurately computes the integration varies with the complexity of the 2D function. As discussed in Section 3.4, R-T pairs at late time channels may require far fewer looplets than at early times due to the multi-scale nature of EM diffusion. The ideas of "adapt-to-the-scale" and "refine-until-no-change" are used to develop an adaptive scheme of looplet selection featuring random sampling.

The adaptive looplet procedure starts with one looplet at a random location in the area bounded by the large transmitter wires; another randomly located looplet is then added to the existing collection, and the sum of the forward-modeled responses from all the looplets is updated; the addition stops when there is no meaningful change of the sum as more looplets are added. For a collection of N looplets, the forward responses are summed with weights that are physically equivalent to the magnetic dipole moments, or effective looplet areas, calculated from the Voronoi diagram (Aurenhammer, 1991). Figure 5.5 shows an instance of adaptive looplet refinement in a time-lapse manner: six Voronoi diagrams are generated as the number of looplets gradually increases from 1 to 6; the weights (moments or effective areas) assigned to the looplets change whenever the tessellation changes; no additional looplets are needed if the summed responses from N = 6 and N = 5 are sufficiently close.

Figure 5.5: Decomposition of a transmitter loop using adaptive looplet refinement based on the Voronoi tessellation. The tessellation and the looplets' moments are updated after every addition of looplets (black dots).
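A minimal sketch of this procedure is given below (Python). Rather than clipping exact Voronoi cells against the loop boundary, it estimates each looplet's effective area by nearest-looplet counting over random points inside the loop, which converges to the areas of the clipped Voronoi cells; the response kernel is a made-up smooth stand-in for the EM response of a unit looplet, and the 2% tolerance and three-looplet minimum mirror the values quoted later in this chapter:

    import numpy as np

    rng = np.random.default_rng(1)

    def looplet_weights(looplets, loop_w=900.0, loop_h=1500.0, n_mc=100000):
        # Monte Carlo estimate of the Voronoi-cell areas clipped to the loop:
        # each random point is assigned to its nearest looplet
        pts = rng.random((n_mc, 2)) * [loop_w, loop_h]
        d2 = ((pts[:, None, :] - looplets[None, :, :]) ** 2).sum(axis=2)
        counts = np.bincount(d2.argmin(axis=1), minlength=len(looplets))
        return counts / n_mc * (loop_w * loop_h)   # weights sum to loop area

    def response(p):
        # stand-in for the response of a unit looplet at p, seen by a fixed
        # receiver at (2000, 0) and delay time (hypothetical smooth kernel)
        return 1.0 / (1.0 + np.linalg.norm(p - np.array([2000.0, 0.0])) / 1000.0)

    looplets = rng.random((1, 2)) * [900.0, 1500.0]
    prev = None
    while True:
        w = looplet_weights(looplets)
        total = sum(wi * response(p) for wi, p in zip(w, looplets))
        if prev is not None and len(looplets) >= 3 \
                and abs(total - prev) <= 0.02 * abs(prev):
            break            # no meaningful change: stop adding looplets
        prev = total
        looplets = np.vstack([looplets, rng.random(2) * [900.0, 1500.0]])

    print(len(looplets), "looplets; weights (m^2):", np.round(w))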
Every R-T pair has its own number of looplets, as well as its own summing matrix S, described in equations 3.4 and 3.7, to combine the results from the looplet subproblems. If the data/sensitivity from a looplet is ordered in x-y-z (equations 3.5 and 3.6), S can be assembled from weighted identity matrices. For example, for the ith R-T pair with n looplets, the summing matrix is

S_i =
\begin{bmatrix}
w_1 & 0 & 0 & w_2 & 0 & 0 & \cdots & w_n & 0 & 0 \\
0 & w_1 & 0 & 0 & w_2 & 0 & \cdots & 0 & w_n & 0 \\
0 & 0 & w_1 & 0 & 0 & w_2 & \cdots & 0 & 0 & w_n
\end{bmatrix}, \qquad (5.1)

where w_n is the weight of the nth looplet derived from the Voronoi diagram. The final S for the entire survey is obtained by putting the R-T summing matrices (equation 5.1) together in a block-diagonal pattern:

S =
\begin{bmatrix}
S_1 & & & \\
& S_2 & & \\
& & S_3 & \\
& & & \ddots
\end{bmatrix}. \qquad (5.2)
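Equations 5.1 and 5.2 translate directly into sparse-matrix assembly. A minimal sketch with scipy.sparse, assuming the Voronoi weights are already known (the weight values below are placeholders):

    import numpy as np
    import scipy.sparse as sp

    def summing_matrix(weights):
        # S_i = [w_1 I_3, w_2 I_3, ..., w_n I_3]: a 3 x 3n block (equation 5.1)
        return sp.hstack([w * sp.identity(3) for w in weights], format="csr")

    # one block per R-T pair, assembled block-diagonally (equation 5.2)
    S1 = summing_matrix([0.4, 0.35, 0.25])   # hypothetical Voronoi weights
    S2 = summing_matrix([0.6, 0.4])
    S = sp.block_diag([S1, S2], format="csr")

    # combining looplet data, ordered x-y-z per looplet, into R-T data
    d_looplets = np.arange(15.0)             # 3 + 2 looplets, 3 components each
    print(S @ d_looplets)

The same S applies unchanged to rows of the sensitivity, which is why the weights only need to be found once per R-T pair.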
5.2.3 Forward modeling using looplets

The adaptive looplet approach is tested on the same synthetic model used in Subsection 5.2.1, but the source is now the actual large loop and the receiver spacing is refined to 100 m. Figure 5.6 shows a cross section of the model (padding cells removed), as well as the transmitter loop (red lines) and receivers (white dots). The layout of the synthetic survey emulates the one used in the Lalor Mine SQUID survey, but in a synthetic coordinate system. The data collected at the receiver location (-900, 500), marked by a yellow dot in Figure 5.6, at 10⁻⁴, 10⁻³, 10⁻² and 10⁻¹ s, corresponding to four R-T pairs (one for each delay time), are taken as examples for close-up examination.

Figure 5.6: Conductivity model and survey layout for the adaptive looplets modeling test. The transmitter loop is indicated by the red lines, and the data at one of the receivers, marked by a yellow dot, are tested for comparison.

For the purpose of benchmarking, the desired data are first modeled directly on the global mesh with 169136 (62×62×44) cells. A complete forward modeling of this survey for times from 10⁻⁴ to 10⁻¹ s requires 4 different step lengths and 15 time steps per step length (60 steps in total). The CPU time is 234 s on a 6-core computer in parallel mode. The peak memory usage is 11.4 GB.

The adaptive looplet algorithm determines that 10, 7, 4 and 3 looplets are required for the four R-T pairs at 10⁻⁴, 10⁻³, 10⁻² and 10⁻¹ s, given a tolerance of 2%. The latest delay time may need only one or two looplets, but a minimum of 3 looplets per R-T pair is required by the adaptive algorithm to detect the stopping criterion. The final looplet weights for every R-T pair are obtained from the Voronoi diagrams in Figure 5.7; it is evident that the number of looplets for an R-T pair depends strongly on the delay time of the data, and that oversampling at later times is minimized by using adaptive looplets.

Figure 5.7: Locations of looplets and the associated Voronoi tessellations of the transmitter loop for the four delay times at the example receiver.

The local mesh and model from one of the 10 looplet subproblems used for the modeling at 10⁻⁴ s are shown in Figure 5.8 (coordinates in the local system after rotation and translation). This mesh serves one looplet-receiver pair, marked by the red and yellow dots, and has only 12000 (30×20×20) cells, considerably fewer than the global mesh. Since only one time channel is modeled, only 15 time steps at a constant step length from 0 s to the time of measurement are needed. Modeling on such a mesh takes 2 s on the same computer, uses only one core, and its memory usage is hardly noticeable. The forward modeling CPU time of the other local meshes/models ranges from less than 1 s to about 5 s, depending on the looplet-receiver offset and delay time. The total CPU time required to model the entire survey depends on the number of processors available, and this is where massive parallelization can play a role.

Figure 5.8: Local mesh and model used in one of the looplet subproblems modeling at 10⁻⁴ s.

The modeled H-field data using the global discretization and the local discretization (adaptive looplets) are compared in Figure 5.9. Good overall agreement is observed, and the strong components always have the best accuracy.

Figure 5.9: Forward-modeled data using the global discretization and the local discretization (adaptive looplets).

5.2.4 Sensitivity using looplets

Once the looplets are chosen, the weights are calculated and the associated local meshes and models are made. The sensitivities can then be computed just as in the airborne case, and can be stored on the local meshes of the looplet subproblems if an explicit sensitivity is to be used. The sensitivity for an R-T pair is then obtained by summing the looplet sensitivities in the same way as the forward modeling; the summing matrix S of the previous subsection (equations 5.1 and 5.2) is also valid for sensitivities.

To show that survey decomposition, after decomposing the transmitter into looplets, is capable of computing the correct sensitivity using the local discretization, the sensitivity of a particular datum, Hz measured at the receiver location (-900, 500) at 10⁻³ s, is computed using equation 3.7. This is presented as a 3D model and compared to the result obtained directly from the global discretization modeling in Figure 5.10. The sensitivities on the global mesh are computed without decomposing the transmitter. The sensitivity computed using decomposition is very similar to the global mesh result: the positive and negative sensitivities caused by the conductive sphere and the transmitter/receiver orientations are seen on both cross sections at the same locations. The difference at the highest positive sensitivity is noticeable, likely due to the coarser local meshes used in the subproblems. For sensitivity, the correctness of the sign and of the overall character is what matters in the inversion, so the sensitivity in Figure 5.10b should be effective in the inversion.

Figure 5.10: Example sensitivity computed using the global discretization and the local discretization (adaptive looplets): (a) global discretization result, (b) local discretization result. The transmitter loop is indicated by the red lines and the receiver location by the yellow dot.

5.3 Adaptive Subsampling of Receivers

After an R-T pair can be efficiently modeled, there is another oversampling problem: the large numbers of receivers and delay times in a ground survey yield many R-T pairs, most of which are probably redundant.
Therefore, special treatment is desirable to reduce the number of R-T pairs from a field survey, exactly like reducing the number of soundings in the airborne case.

In addition to adapting the number of data to the scale of the inversion, as in the strategy used in the airborne case (Algorithm 1), I also want to sample unevenly in time to take into account the large contrast of time scales in a ground survey: the late delay times, with smoother EM fields, may require fewer samples than the early times. Simultaneous subsampling with a non-uniform distribution in the space spanned by receiver locations and times is complicated, and there are likely many viable strategies. I choose the following: first I restrict the sampling to selected time channels that are logarithmically spaced in time; then I sample receivers uniformly at random within each time channel. The data subsampling problem then reduces to finding the optimal number of receivers at each time channel at a particular iteration of the inversion (a minimal sketch of this sampler follows the list of outcomes below).

The same adaptive approach used in the airborne survey (Algorithm 1) can be adopted here, because the optimization part is identical. However, some modifications have to be made to allow time-channel-wise cross validation and receiver refinement. In the adaptive receiver algorithm designed for the ground survey (Algorithm 2), there are still training and test subsets, but every delay time has its own number of samples (receivers) N_i (for the ith time), its own subset S_i and its own normalized data misfit φ_i. N_i is the same in the training and test subsets. Each adaptive receiver iteration computes the following: (1) a model update using the forward responses and sensitivities from the R-T pairs in the training subset; (2) the data misfits of the pre-update model for the R-T pairs in the test subset; (3) the data misfits of the post-update model for the R-T pairs in the same test subset. The two misfits from the same test subset, taken before and after the model update, are then compared delay time by delay time to decide whether the model update can be accepted. The misfit check at a particular delay time passes if at least one of the following two statements is true: (1) the post-update misfit is below a prescribed tolerance; (2) the post-update misfit has improved reasonably over the pre-update misfit. This check is done for all delay times. There are three possible outcomes:

• End of the inversion. This happens when the pre-update misfit at every delay time is below a prescribed tolerance.

• Accept the model update. This happens when all delay times pass the check. The model update is accepted and the number of receivers at each time remains unchanged. The trade-off parameter is reduced for the next iteration.

• Reject the model update. This happens when one or more delay times fail the check. The delay times that fail will increase N_i for both the training and test subsets, while the other delay times do not need to change anything. The inversion then starts over without changes to the model or the trade-off parameter.
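The two-stage sampler referred to above (log-spaced time channels, uniform random receivers within each channel) can be sketched in a few lines of Python; the gate values, the 54 receiver IDs and the doubling of a failing channel's N_i are illustrative stand-ins for the survey described in this chapter:

    import numpy as np

    rng = np.random.default_rng(2)

    # eight delay times, roughly logarithmically spaced (illustrative values
    # spanning the 1.47-371 ms range used in the field example)
    channels = np.geomspace(1.47e-3, 0.371, 8)
    receivers = np.arange(54)                  # receiver IDs

    def sample_rt_pairs(N):
        # N[i] receivers drawn uniformly at random within the ith time channel
        subset = []
        for t, n in zip(channels, N):
            for r in rng.choice(receivers, size=n, replace=False):
                subset.append((int(r), float(t)))   # one R-T pair
        return subset

    N = [2] * len(channels)     # start with N_i = 2 receivers per delay time
    train = sample_rt_pairs(N)
    N[0] *= 2                   # a failing early channel doubles its N_i
    train = sample_rt_pairs(N)
    print(len(train), "R-T pairs")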
Gradually, this adaptive scheme adds more receivers across all delay times, but emphasis is given to the early times, or to the times of high information density. The notation in Algorithm 2 is the same as in Algorithm 1: N_t is the number of delay times and N_i is the number of receivers at the ith delay time. I also reuse the idea of recycling the computed results of the test subset from the previous iteration for the training subset of the next iteration, so the computations done for the test subset have additional use. It is important to note that there are two levels of dynamic selection: (1) the random choice of R-T pairs changes at every iteration in Algorithm 2; (2) the numbers and locations of the looplets within the chosen R-T pairs are also dynamically sought at every iteration. This mechanism ensures that the inversion model is not biased by any fixed parameter. The effectiveness of this adaptive receiver method is shown by a synthetic inversion in the next section.

Algorithm 2 Inversion with adaptive subsampling of receivers in ground TEM

Initialization:
  Select N_i receivers at the ith time channel for training subset S_train,i (i = 1, ..., N_t)
  Compute forward responses F(m)_train,i at S_train,i (i = 1, ..., N_t)
  Compute sensitivity J(m)_train,i at S_train,i (i = 1, ..., N_t)
  Empty test subsets S_test,i (i = 1, ..., N_t)
repeat
  Solve for δm using β, m, F(m)_train,i and J(m)_train,i at S_train,i (i = 1, ..., N_t)
  for i = 1, ..., N_t do
    if S_test,i is empty then
      Select N_i receivers at the ith delay time for test subset S_test,i
      Compute forward responses F(m)_test,i at S_test,i
      Compute data misfit φ(m)_test,i at S_test,i
    end if
  end for
  Compute forward responses F(m+δm)_test,i at S_test,i (i = 1, ..., N_t)
  Compute data misfits φ(m+δm)_test,i at S_test,i (i = 1, ..., N_t)
  if φ(m+δm)_test,i ≤ tol or φ(m+δm)_test,i < µ·φ(m)_test,i at S_test,i (i = 1, ..., N_t) then
    Compute sensitivity J(m+δm)_test,i at S_test,i (i = 1, ..., N_t)
    S_train,i ← S_test,i (i = 1, ..., N_t)
    F(m)_train,i ← F(m)_test,i (i = 1, ..., N_t)
    J(m)_train,i ← J(m)_test,i (i = 1, ..., N_t)
    Empty S_test,i, F(m)_test,i, J(m)_test,i (i = 1, ..., N_t)
    Reduce β
    m ← m + δm
  else
    for i = 1, ..., N_t do
      if φ(m+δm)_test,i > tol and φ(m+δm)_test,i ≥ µ·φ(m)_test,i at S_test,i then
        Compute sensitivity J(m)_test,i at S_test,i
        N_i ← N_i × 2
        S_train,i ← S_train,i ∪ S_test,i
        F(m)_train,i ← F(m)_train,i ∪ F(m)_test,i
        J(m)_train,i ← J(m)_train,i ∪ J(m)_test,i
        Empty S_test,i, F(m)_test,i, J(m)_test,i
      end if
    end for
  end if
until φ(m)_test,i ≤ tol (i = 1, ..., N_t)
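The acceptance test at the heart of Algorithm 2 can be isolated in a few lines. In this sketch the misfit values are made up for illustration; a channel passes if its post-update misfit is below the tolerance or is reduced by the factor µ:

    def check_channels(phi_old, phi_new, tol=1.0, mu=0.9):
        # channel i passes if post-update misfit is below tol or improved by mu
        return [new <= tol or new < mu * old
                for old, new in zip(phi_old, phi_new)]

    phi_old = [4.1, 2.2, 1.3, 0.9]      # pre-update misfits per delay time
    phi_new = [3.2, 2.1, 1.1, 0.8]      # post-update misfits per delay time
    passed = check_channels(phi_old, phi_new)
    N = [2, 2, 2, 2]                    # current receivers per channel

    if all(passed):
        print("accept dm, cool beta, keep N =", N)
    else:
        N = [n if ok else 2 * n for n, ok in zip(N, passed)]
        print("reject dm, refine failing channels: N =", N)

With these example numbers the second channel fails (2.1 is neither below the tolerance nor below 0.9×2.2), so only its N_i is doubled, which is exactly the channel-wise refinement described in the reject outcome above.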
5.4 Synthetic Inversion of a Sphere-in-halfspace Model

This section tests the inversion under the framework of survey decomposition, using adaptive looplets and adaptive receivers, with the synthetic sphere model and the "East Loop" layout previously described in Subsection 5.2.1 (Figure 5.14a). The synthetic data set is created by modeling the soundings along the two perpendicular receiver lines at seven delay times from 10⁻⁴ to 10⁻¹ s. The global mesh used here is the same as the one used in the adaptive looplets modeling test (Subsection 5.2.3), and the global time discretization has five different step lengths and 76 steps in total. The data are noise-free, but 5% of the total field strength is assigned as the uncertainty.

5.4.1 Global discretization inversion

This synthetic data set is first inverted using the standard algorithm with the global discretizations and an initial/reference model of a 0.001 S/m half-space. The global discretization inversion takes about 1 hour in total: 40 s per factorization, 25 s per complete time stepping, and about 13 minutes per model update in parallel mode on 12 CPU cores. The inversion constantly uses about 64 GB of memory. This large memory consumption is due to the fact that the 5 factorizations required by the time stepping from 0 to 10⁻¹ s, as well as the simulated fields on the entire mesh at all 76 time steps, are stored in memory. The trade-off parameter β, the total normalized data misfit and the model norm as functions of iteration are plotted in Figure 5.11. The recovered model resembles the true model, with a conductive body in a resistive host (see the cross section in Figure 5.14b).

Figure 5.11: Summary of the synthetic inversion using the standard algorithm with global discretizations: (a) trade-off parameter, (b) data misfit, (c) model norm.

5.4.2 Survey decomposition inversion

The same data set is then inverted using survey decomposition with adaptive looplets and adaptive receivers. All seven delay times are used, and the inversion parameters are the same as those in the global discretization inversion. Table 5.1 records the number of receivers used at different delay times and iterations. For a particular trade-off parameter β, the adaptive approach may take a few iterations of adding receivers until the cross-validation check is satisfied. Interestingly, the last two time channels never see any addition of receivers, indicating that there is very little information in the data at those late times; on the other hand, as the constraint on the model norm relaxes, more receivers at early times are demanded to construct fine-scale structure in the model. The randomly sampled R-T pairs used in the test subset at the last iteration are plotted in Figure 5.12. This synthetic example demonstrates that the adaptive receiver approach, together with adaptive looplets, is able to substantially reduce the number of subproblems needed in an inversion. The trade-off parameter β, the total normalized data misfit as estimated by the test subset, and the model norm at every iteration are provided in Figure 5.13. The recovered model from this survey decomposition inversion is shown in Figure 5.14c.

Table 5.1: Synthetic inversion using adaptive looplets and adaptive receivers.

Iteration   Receivers (time channels 1∼7)    CPU time (s)   Result
1           2   2   2   2   2   2   2        182            add
1           4   2   2   2   2   2   2        83             add
1           8   4   2   2   2   2   2        133            pass
2           8   4   2   2   2   2   2        205            add
2           8   4   4   4   4   2   2        158            pass
3           8   4   4   4   4   2   2        284            add
3           16  4   8   8   4   2   2        266            pass
4           16  4   8   8   4   2   2        370            add
4           16  8   16  16  8   2   2        481            add
4           32  16  16  16  8   2   2        444            pass
Figure 5.12: Locations of the random samples of receiver-time (R-T) pairs used at the last iteration of the synthetic inversion. The dimension of time is depicted as depth, and the transmitter loop is indicated by the red lines.

Figure 5.13: Summary of the synthetic inversion using survey decomposition with local discretizations.

The total CPU time of survey decomposition, about 43 minutes on the same computer (Table 5.1), is only marginally less than the 1 hour required by the global discretization inversion. However, there are strong reasons for using survey decomposition even though this synthetic example does not show an astonishing speed-up:

• The two synthetic inversions use different numbers of CG iterations when solving equation 3.1. The standard algorithm uses at most 10 CG iterations per Gauss-Newton step, whereas the survey decomposition approach uses 60 CG iterations. If the standard algorithm also used 60 CG iterations, its total CPU time would be about six times longer (6 hours). This clearly shows the computational advantage of using the survey decomposition framework and working with explicit sensitivities on the local meshes.

• The global mesh of this example, having 169136 cells and only one transmitter, does not have a space complexity as large as that in the airborne examples. If the mesh becomes larger, the poor scalability will slow down the modeling significantly, as shown in Figure 3.1. Survey decomposition scales much better with the number of subproblems, since it does no forward or sensitivity modeling directly on the global mesh.

• Adding more processors does not help to speed up the global discretization inversion, and sometimes it even slows the inversion down because of excessive communication overhead. In contrast, survey decomposition benefits from a large number of processors. At the last iteration in Table 5.1, there are 418 looplet subproblems in 92 R-T pairs, which take 208 s for a complete forward modeling on 12 cores. If every subproblem had its own processor, the modeling time could be reduced to a few seconds.

• The memory consumption of survey decomposition, about 5 GB for the storage of the sensitivities and other matrices at the last iteration, is very low compared to that needed for the global mesh inversion. This is because every subproblem has only 15 time steps at a constant step length, and, most importantly, after the forward response and sensitivity are computed, all of the dense matrices (factors and fields) can be erased from memory, leaving only the sensitivities stored on the local meshes. In fact, the memory required by survey decomposition modeling scales with the number of concurrently running subproblems and has nothing to do with the size of the global discretizations. Such flexibility of memory allows 3D inversion to be carried out inexpensively on low-end computers.

5.4.3 Model interpretation and validation

The model recovered using survey decomposition is compared to the true model and the global discretization inversion result in Figure 5.14.
Both inversions recover the conductive sphere at the correct location and depth. The weakened conductivity of the sphere in the inversion models is compensated by smeared "tails" around and below the recovered conductors, which is what we should expect when the target is compact but the regularization measures the model norm in L2. The initial/reference model, a resistive 0.001 S/m half-space, also contributes to the underestimation of the conductivity of the sphere. The survey, which is not ideal for 3D inversion because there is only a single large transmitter loop and the receiver locations are sparse, also permits further non-uniqueness in the recovery of the model. Regardless, the consistent images from the global discretization inversion and the survey decomposition inversion show that the forward responses and sensitivities are reasonably computed in the subproblems, and that the entire framework works as well as the standard algorithm does for the modeling of a ground loop survey.

Finally, a quantitative model validation is carried out by forward modeling the survey decomposition inversion model with the global mesh and global time discretization. The true data misfit can then be calculated without any of the assumptions, approximations and reductions used in the survey decomposition inversion. The normalized data misfits are plotted as a histogram in Figure 5.15. For each datum, the normalized misfit is the absolute difference between the observed and predicted data divided by the uncertainty, which is 5% of the observed total field strength. Of the 1134 data (54 receivers × 7 time channels × 3 components) used here, 780 were fit to less than the prescribed uncertainty. The overall normalized data misfit is 1.08, a value close to the misfit estimated by the test subset (Figure 5.13b) and to that achieved in the global inversion. The predicted data of the final survey decomposition inversion model computed in the validation are plotted as 3D H-field vectors measured on the surface, along with the observed data vectors, in Figure 5.16. The distortion of the magnetic field, caused by the conductor near the crossing point of the two receiver lines, is recognized and reproduced by the survey decomposition inversion.

Figure 5.14: Synthetic inversion models for the ground loop TEM survey: (a) true model, (b) global mesh inversion, (c) survey decomposition inversion.

Figure 5.15: True data misfit of the final survey decomposition inversion model reassessed using the global discretizations.

Figure 5.16: Data fit of the final survey decomposition inversion model (showing the delay time at 10⁻³ s) for the synthetic data.
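For reference, the normalized misfit used in this validation can be computed as below; the random arrays stand in for the observed and predicted three-component data, and taking the mean as the overall figure is my reading of the aggregation, so treat it as an assumption:

    import numpy as np

    rng = np.random.default_rng(3)

    # toy stand-ins for observed/predicted three-component H-field data
    d_obs = rng.normal(size=(54, 7, 3)) * 1e-7
    d_pred = d_obs + rng.normal(size=d_obs.shape) * 5e-9

    # uncertainty: 5% of the observed total field strength per receiver/time
    strength = np.linalg.norm(d_obs, axis=2, keepdims=True)
    unc = 0.05 * strength

    misfit = np.abs(d_obs - d_pred) / unc     # one value per datum (1134 here)
    print(misfit.size, "data;", (misfit < 1).sum(), "fit within uncertainty")
    print("overall normalized misfit:", misfit.mean())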
5.5 Inversion of SQUID Data at the Lalor Mine

In this final section, I test the survey decomposition inversion on the SQUID field data at the Lalor Mine introduced in Section 5.1 and compare it with the result obtained from the standard method. The data set is assigned an uncertainty of 10% of the total field strength plus a noise floor of 10⁻⁷ A/m.

5.5.1 Global discretization inversion

The global mesh used here is geometrically identical to the one used in the synthetic study in Section 5.4, but in the real coordinate system. The global time discretization models 29 time channels from 1.47 to 371 ms using four different time step lengths and 74 time steps in total. In terms of the numbers of cells and time steps, the computational complexities of this inversion are almost the same as those of the synthetic inversion in the previous section, so the time and memory required by the factorizations and time stepping for one model update (Gauss-Newton step) are similar (13 minutes).

Starting from a 0.001 S/m half-space model, the inversion ran for 12 Gauss-Newton steps; however, the target misfit (normalized data misfit = 1) is achieved after about 6 model updates in about 80 minutes. The trade-off parameter β, the total normalized data misfit and the model norm as functions of iteration are plotted in Figure 5.17. The recovered model from Iteration 6 is shown on two cross sections in Figures 5.19a and 5.19b.

Figure 5.17: Summary of the Lalor Mine field data inversion using the standard algorithm with global discretizations.

5.5.2 Survey decomposition inversion

For the survey decomposition inversion, the subsampling of receivers is restricted to 8 delay times from 1.47 to 371 ms. As a result, there are 432 R-T pairs (54 receivers × 8 delay times) available in the database for selection. Table 5.2 records the addition of receivers at different delay times as the inversion proceeds. Similar to the synthetic example, the field inversion also finds that most R-T pairs are needed at the early times and at those times encoding rich information about the anomaly. Throughout the inversion, a total of 402 R-T pairs are used, with no more than 44 in any single iteration. The initial/reference model is a 0.001 S/m half-space.

The target data misfit is achieved after 11 Gauss-Newton steps in 42 minutes, about two times faster than the global discretization inversion, and this can be further sped up with massive parallelization. About 2.8 GB of memory is required to store the sensitivities and other matrices at the last iteration.

The trade-off parameter β, the total normalized data misfit estimated by the test subset, and the model norm at every iteration are provided in Figure 5.18. I note that the model norm of Iteration 11 of this survey decomposition inversion is close to the model norm of Iteration 6 of the global discretization inversion. The recovered model (Iteration 11) of the survey decomposition inversion is shown on two cross sections in Figures 5.19c and 5.19d.

More insights can be gained by further comparing the synthetic and field data inversions. The two data sets, even though they were collected using the same survey layout, are actually at different scales: the synthetic model has a shallower target conductor and earlier delay times, while the field example has a deeper target and uses later times.
It is difficult for the global discretization inversion, working at the scales set by the global mesh and global time discretization, to adapt to the appropriate scales of investigation. The new approach, however, is able to recognize the difference and uses only 194 looplets in 44 R-T pairs in the last iteration of the field data inversion. This is about half the number used in the synthetic inversion, and it is a clear indication of the over-computing carried out in the global discretization inversion.

Table 5.2: Field data inversion using adaptive looplets and adaptive receivers.

Iteration   Receivers (time channels 1∼8)        CPU time (s)   Result
1           2   2   2   2   2   2   2   2        207            pass
2           2   2   2   2   2   2   2   2        166            pass
3           2   2   2   2   2   2   2   2        172            pass
4           2   2   2   2   2   2   2   2        159            add
4           4   2   2   2   2   2   2   2        100            pass
5           4   2   2   2   2   2   2   2        159            pass
6           4   2   2   2   2   2   2   2        173            add
6           8   2   2   2   2   2   2   2        147            pass
7           8   2   2   2   2   2   2   2        222            pass
8           8   2   2   2   2   2   2   2        204            pass
9           8   2   2   2   2   2   2   2        206            add
9           8   4   2   2   2   2   2   2        134            pass
10          8   4   2   2   2   2   2   2        234            pass
11          8   4   2   2   2   2   2   2        231            pass
12          8   4   2   2   2   2   2   2        246            pass
13          8   4   2   2   2   2   2   2        236            add
13          8   4   4   4   4   4   2   2        224            add
13          16  8   4   4   4   4   2   2        255            pass

Figure 5.18: Summary of the Lalor Mine field data inversion using survey decomposition with local discretizations.

5.5.3 Model interpretation and validation

The final conductivity models, recovered using the standard algorithm (Iteration 6 of the global discretization inversion) and survey decomposition (Iteration 11 of the survey decomposition inversion), are compared on two perpendicular cross sections, in the x-direction and the y-direction, in Figure 5.19. Both inversions provide very similar images of the deep major conductor in terms of location, depth, size and conductivity. They also have similar model norms when measured quantitatively.

There are noticeable differences in the small-scale structures near the surface. This is likely because the near-surface features are minor anomalies compared to the large conductor at depth. The survey decomposition inversion captures the primary anomaly first using fewer samples, and gradually builds more small-scale features as the adaptive receiver procedure adds receivers to the early delay times. The model at Iteration 13 of the survey decomposition inversion has near-surface structures similar to the global discretization inversion result. This shows that the quality of survey decomposition is not compromised by the significant computational savings.

Although the recovered conductor is a good candidate for VMS mineralization, further investigation involving other data sets and other geological information is required to verify the inversion model for geological interpretation, which is out of the scope of this thesis. Therefore, the emphasis of the model validation is on the similarity between the global discretization inversion and the survey decomposition inversion.

Finally, I again validate the survey decomposition inversion model using a forward modeling with the global discretizations.
Note that, although the survey decomposition is restricted to sampling 8 of the 29 available delay times in the inversion, the validation uses all 29 time channels to calculate the data misfit. The normalized data misfits of the predicted data are plotted as a histogram in Figure 5.20; the overall normalized data misfit is 0.86 for the final model.

The field SQUID data and the predicted data of the final survey decomposition inversion model computed in the validation are plotted as 3D H-field vectors measured on the surface, showing that the most important anomalies in the data, due to the major conductor, are reasonably reproduced by the survey decomposition inversion (Figure 5.21).

Figure 5.19: Inversion models for the SQUID field data at the Lalor Mine VMS deposit: (a) global discretization inversion (y-direction cross section), (b) global discretization inversion (x-direction cross section), (c) survey decomposition inversion (y-direction cross section), (d) survey decomposition inversion (x-direction cross section).

Figure 5.20: True data misfits of the final survey decomposition inversion model at the Lalor Mine reassessed using the global discretizations.

Figure 5.21: Data fit of the final survey decomposition inversion model (showing the delay time at 3.78×10⁻³ s) for the Lalor Mine SQUID data.

5.6 Summary

This chapter implements the framework of survey decomposition for ground loop TEM data inversion. A ground large-loop survey has two important computational complexities: the large transmitter, and the large contrast of modeling scales between early and late delay times. Survey decomposition reduces these complexities by modeling many subproblems, each of which is in fact an atomic problem concerning only a dipole source (looplet), a receiver and a particular time.

The implementation consists of two parts. The first part is to compute the data or sensitivity at a particular receiver and delay time (R-T pair), given a large transmitter loop. This is solved by a procedure called adaptive looplets, which uses the principle of superposition and finds a minimum collection of randomly located looplets within the transmitter loop to effectively represent the original loop. The number of looplets needed adapts to the scale of the EM fields, so later times may need fewer looplets than early times. A new round of search for the looplets is initiated whenever the selection of R-T pairs changes in the second part.

The second part, once an R-T pair can be modeled, is to choose a minimal subset of all possible R-T pairs for the inversion, since the data set is always redundant. I propose another procedure, called adaptive receivers, to find the minimum collection of receivers at each delay time by using a time-by-time cross validation.
The second part, once an R-T pair can be modeled, is to choose a minimal subset of all possible R-T pairs for the inversion, since the data set is always redundant. I propose another procedure, called adaptive receivers, that finds the minimal collection of receivers at each delay time using a time-by-time cross validation. This procedure also adapts the number of receivers to the scale of investigation, both in time and in the regularization of the inversion.

Together, the adaptive looplets and adaptive receivers ensure that only the strictly necessary subproblems are modeled. Both procedures are tested with a synthetic conductive sphere model. I have shown that the forward responses and sensitivities can be reasonably computed in the subproblems on local discretizations at much reduced cost. A synthetic data set based on the sphere model is inverted using survey decomposition with adaptive looplets and adaptive receivers, yielding an inversion model comparable with the global discretization inversion results. The new methods show that savings in time and memory can be realized, and further speed-up by massive parallelization is very promising.

Finally, a field SQUID data set from the Lalor Mine VMS deposit is inverted using the newly developed framework and procedures. The survey decomposition inversion obtains results similar to those from the standard inversion, but in a much shorter time, even with only small-scale parallelization. The adaptive procedures have shown the ability to adapt the number of subproblems to the scale of the EM problem.

In both the synthetic and field data inversions, the final survey decomposition inversion models are validated by forward modeling using the standard algorithm with the global discretizations. The true data misfits of the final models, calculated without any of the assumptions, approximations and reductions used in the new approach, have been shown to be reduced to an acceptable level.

Chapter 6

Conclusions

It is widely accepted that inverting electromagnetic data acquired from geophysical surveys yields a distribution of electrical conductivity that is useful in geoscience problems. Despite a handful of modeling algorithms developed in research laboratories, and despite greatly improved computers, practitioners still face prohibitive challenges in using rigorous 3D modeling as an integrated tool in their exploration decision making. Much of the problem is traced to the computational costs of 3D modeling and can be summarized in two critical questions: (1) Is 3D modeling/inversion really necessary? (2) If yes, what can we do to make it more efficient? This thesis attempts to answer these questions, and the findings are summarized in the following sections.

6.1 Conditions of Using 3D Inversion

In the first part of this thesis, I explore in considerable detail a specific example of why, and under what circumstances, a 3D interpretation is needed.

With a field example of an airborne TEM survey (VTEM) from Mt. Milligan, I found a significant discrepancy between the model obtained by 1D interpretation and the 3D geology. This contradiction was studied with a synthetic model that emulates the Mt. Milligan deposit: a resistive rock stock surrounded by a conductive halo. The 1D inversion of the synthetic data showed a misleading conductive anomaly at the location of the stock. This is explained by the fact that an airborne sounding has stronger sensitivity away from the sounding location than beneath it, so a sounding above the resistive stock can have a high response because of the conductive halo around it. A 3D inversion, however, capable of modeling the 3D sensitivity and constructing a 3D model, correctly recovered the true model, and the geometry of the resistive stock and the conductive halo was reasonably delineated.
Then a small region of the field data around the main stock was inverted in 3D, producing a much more geologically plausible model. A multi-level strategy involving multi-resolution meshes and data grids was developed to speed up the 3D inversion.

Following the success of the 3D inversion of airborne TEM data, I revisited a frequency-domain airborne EM data set (DIGHEM) at Mt. Milligan. It posed a paradox: a 1D inversion of the DIGHEM data had previously recovered the resistive nature of the stock, which seemingly discredited the necessity of 3D inversion. A critical analysis using a semi-quantitative measure of the footprint of an airborne system revealed that VTEM had a much larger footprint than DIGHEM, and that the scale of the conductivity variation at Mt. Milligan happened to be comparable to the VTEM footprint and larger than the DIGHEM footprint. This discovery leads to the condition for using 3D inversion: when the scale of the target is larger than the scale of the survey, non-3D modeling can be justified; but if the scale of the target is comparable to the scale of the survey, which is common in practice, 3D modeling must be used.

6.2 Over-computing in 3D EM Modeling

The technical route I choose to improve the speed of 3D modeling is somewhat unconventional. After reviewing the existing technology, I categorize the computational difficulties into three complexities:

• Space complexity. This is primarily caused by the large size of the modeling domain and the large number of sources. A single forward modeling and inversion mesh covering a large area results in a large system to be solved for each source.

• Time complexity. This becomes a concern when times/frequencies with a large contrast of scale are modeled with a single discretization, because: (1) the mesh must have a large number of cells to model both the fine-scale features at the early times and the large-scale background at the late times; and (2) if a time stepping scheme is used, modeling all time channels with one global time discretization can cause over-computing.

• Optimization complexity. An inversion requires many iterations of forward modeling and solutions of an optimization problem on a 3D mesh. Over-computing is especially prevalent at the early stage of inversion, when only large-scale information is needed but the computations micromanage the details at an unnecessarily fine level.

All three computational complexities share the same fundamental issue found in the conventional methods: they attempt to work simultaneously and indiscriminately on problems involving multiple scales. This leads to large matrix systems to be solved for the forward problem, results in a slower and less flexible algorithm, and unnecessarily models the redundant field data set of a survey.

6.3 Survey Decomposition: A New Path

The difficulties associated with over-computing can be overcome by adopting a survey decomposition framework. This new framework isolates the source, receiver and time/frequency in a survey so that the decomposed subproblems have more efficient localized discretizations: (1) local meshes in space, and (2) local time discretizations. A local mesh is much smaller than the global mesh and is refined only around the source/receiver locations. A local time discretization serves only one time channel, so it needs only one constant step length in an implicit time stepping approach.
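To make the scale adaptation concrete, the sketch below sizes a local discretization from the delay time of interest. It is a minimal sketch assuming a uniform background conductivity estimate: it uses the standard EM diffusion distance d = sqrt(2t / (µ0 σ)) to set the local mesh extent, and a constant implicit time step that reaches the target delay time in a fixed number of steps. The padding factor and step count are illustrative choices, not the exact values used in this thesis.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # magnetic permeability of free space (H/m)

def local_discretization(t, sigma_bg, n_steps=20, pad_factor=3.0):
    """Size a local mesh and time step for one delay time t (s).

    sigma_bg : background conductivity estimate (S/m)
    Returns (mesh half-width in m, constant backward-Euler step in s).
    """
    d = np.sqrt(2.0 * t / (MU0 * sigma_bg))  # diffusion distance at time t
    half_width = pad_factor * d              # local mesh extends a few
                                             # diffusion distances from the
                                             # source/receiver pair
    dt = t / n_steps                         # one constant implicit step
    return half_width, dt

# Example: early vs. late delay times in a 0.01 S/m background.
for t in (1e-4, 1e-2):
    w, dt = local_discretization(t, 0.01)
    print(f"t = {t:.0e} s : half-width ~ {w:7.0f} m, dt = {dt:.1e} s")
```

The example makes the contrast of scales explicit: a late delay time calls for a mesh roughly ten times wider, and a time step a hundred times longer, than an early one, which is exactly why a single global discretization over-computes.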
If a transmitter or receiver is spatially large, the modeling can be carried out at an even finer level involving a dipole source and a dipole receiver working at a single time or frequency and concerned with a particular component of the observed field. I refer to this as an atomic building block problem. Any CSEM survey can be decomposed into these atomic problems. One or more atomic problems can be combined into a subproblem that is treated as a self-contained computing unit in a parallel environment. In my implementation, an airborne survey is straightforwardly decomposed into many sounding subproblems, so every airborne subproblem is assembled from the atomic building blocks sharing the same source and receiver; all delay times from a particular sounding are modeled together in one subproblem. A ground survey, assuming one large transmitter loop, is first decomposed into many receiver-time (R-T) pairs; within each R-T pair, the transmitter is then further decomposed into many dipole looplet sources using superposition, yielding many looplet-receiver-time units (i.e. atomic building blocks) as subproblems.

A general workflow of survey decomposition, sketched in code after this list, can be described as follows:

• Global mesh. A global mesh, the same as the one used in a standard-method inversion, must be designed beforehand. The global mesh must be large enough to accommodate the entire survey. The number of cells in the global mesh is not important, because it is used only as a container for the model and no forward modeling is ever carried out on it. Decoupling the inversion and forward modeling meshes is a critical item in the overall workflow.

• Subproblem assignment. The global model on the global mesh, along with the specifications of the subproblems, is sent to many parallel workers. The local mesh and time discretization are made specifically for each individual subproblem. The global model is projected onto the local meshes.

• Computing. The parallel workers solve the subproblems with the local discretizations. Both the forward responses and the sensitivities are computed and stored on the local meshes. The modeled data and sensitivities are made available to the global mesh through simple linear transformations, including rotation, interpolation and, if the transmitter is decomposed, summation. During the inversion, the operation of sensitivity-times-vector can be done within the subproblems. In this way, the entire process of forward and inverse modeling is parallelizable.

• Data subsampling. A survey decomposition usually yields many subproblems. This is tackled by adaptively choosing a subset of the entire data set for the inversion. The adaptive subsampling, assisted by random sampling and cross validation, ensures that the number of data in a particular subproblem is appropriate for the current scales of modeling in time and space, regardless of how many data were collected in the field.

The workflows designed for airborne and ground surveys are both tested using synthetic and field data. I have shown that the new framework of survey decomposition is able to produce satisfactory results with significantly reduced time and memory usage.
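The following Python sketch summarizes the workflow above. The helper names (`make_local_mesh`, `project_model`, `solve_subproblem`, `gather`) are hypothetical stand-ins for the steps described in the bullets; the thread pool is used here only for brevity, where a production code would dispatch the workers to processes, cluster nodes or cloud machines.

```python
from concurrent.futures import ThreadPoolExecutor

def forward_survey(global_model, subproblems, make_local_mesh,
                   project_model, solve_subproblem, gather):
    """One survey-decomposition modeling pass; all helpers hypothetical.

    Each subproblem carries its own source, receiver and time channel,
    and nothing is ever solved on the global mesh.
    """
    def run_one(sp):
        mesh = make_local_mesh(sp)                   # local discretization
        m_local = project_model(global_model, mesh)  # global -> local model
        return solve_subproblem(m_local, mesh, sp)   # data (+ sensitivity)

    # Subproblems are self-contained and mutually independent,
    # so they can be farmed out without any communication between them.
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(run_one, subproblems))

    # Rotate/interpolate (and sum, if the transmitter was decomposed)
    # the local results back into the survey data space.
    return gather(results)
```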
Further improvement of efficiency via massive parallelization is easily achievable, since the subproblems are both fine-grained and mutually independent.

6.4 Further Development

In this thesis, I have developed a novel framework to improve the efficiency of 3D EM modeling and shown that it is viable. Certain items in my procedures can be improved in the future, and there are also other research opportunities based upon this work.

6.4.1 Possible improvement

This thesis lays out the basic elements and workflows required by survey decomposition, and I provide simple ways to implement some of the steps. However, the following items deserve more careful investigation and could yield better accuracy and greater efficiency.

• Material averaging. The linear operation that converts the model from the global mesh to the local mesh works well so far, but it could be made to deal better with more complicated situations, for example high conductivity contrasts.

• Unstructured or semi-structured meshes. This thesis uses regular meshes as the example to show the effectiveness of survey decomposition. The overall performance may be further improved if other types of mesh, such as tetrahedral, curvilinear and octree meshes, are implemented.

• Full space-time subsampling. In my examples, I restricted the subsampling to the receivers at some preselected delay times. Eventually, the procedure should be able to sample randomly and simultaneously in space and time. The ultimate case is that a dipole source and a dipole receiver move freely in 3D space and measure data at an arbitrary time/frequency, and the data in this 7D space (Sx, Sy, Sz, Rx, Ry, Rz, T/F) are subsampled for the inversion.

• Smarter sampler. As the simplest strategy, I used a random sampler with a uniform probability distribution. This could be made more efficient and accurate if the sampling concentrated on areas with complex signals.

• Frequency-domain approach. My local meshes are designed for solving time-domain problems. If the problem is modeled in the frequency domain, the mesh may need to be changed accordingly.

• Galvanic source/receiver. The concept of a "linelet" for the galvanic source has been proposed, and there should be no difficulty in implementing it. However, I have thus far focused attention on inductive sources and receivers.

6.4.2 Other research opportunities

The framework of survey decomposition based on atomic building blocks opens up a host of new research applications.

Many ideas used in survey decomposition, for example "localized discretization", "adapt to the scale" and "refine until no change", are generic. Exactly the same approach can be applied to other types of surveys with little modification:

• Marine CSEM. The modeling of marine CSEM suffers from problems similar to those of an airborne survey: it usually covers a large area with many source and receiver locations. Survey decomposition can certainly speed it up.

• Magnetotelluric (MT). MT is a good example of a large (natural) source, a large area and a large contrast of scales. Because MT measures the electric field, MT modeling may require very small cells everywhere in the survey area. This can potentially be improved by modeling MT soundings separately on local meshes.

• DC resistivity. A DC survey generally uses a large number of sources, either localized "dipole" arrays or distributed "pole" arrays. Therefore, it is also a good candidate for survey decomposition, but using linelets.
Because the global thread is blind to the computational details at the local level, it is possible to use different source waveforms/frequencies, different meshes, different solvers and different types of data in one inversion. There are certainly great opportunities to jointly invert multiple data sets under the framework of survey decomposition. Such a "hybrid inversion" can be highly efficient, since not only are the discretizations localized, but all the parameters of a subproblem (for example the solver, the PDE and the type of mesh) can be tailored to the specific needs of each individual subproblem.

Finally, I point out that my prototyping of survey decomposition was done on a desktop computer or a small cluster. The speed-up can increase almost linearly as more processors are used. This makes it an ideal application for the emerging massively parallel computing platforms, including GPUs and cloud computing. Infrastructure as a service (IaaS) in cloud computing offers virtually unlimited computing resources in a scalable manner, and thus could be a good platform on which to implement the framework. Once the massive parallelization of survey decomposition is established, the paradigm shift of using 3D modeling as a routine would be just a fingertip away.

Bibliography

Abraitis, P., Pattrick, R., & Vaughan, D., 2004. Variations in the compositional, textural and electrical properties of natural pyrite: A review, International Journal of Mineral Processing, 74(1), 41-59.
Alumbaugh, D. L., Newman, G. A., Prevost, L., & Shadid, J. N., 1996. Three-dimensional wideband electromagnetic modeling on massively parallel computers, Radio Science, 31(1), 1-23.
Amestoy, P. R., Guermouche, A., L'Excellent, J. Y., & Pralet, S., 2006. Hybrid scheduling for the parallel solution of linear systems, Parallel Computing, 32(2), 136-156.
Annan, A. P., Smith, R. S., Lemieux, J., O'Connell, M. D., & Pedersen, R. N., 1996. Resistive-limit, time-domain AEM apparent conductivity, Geophysics, 61(1), 93-99.
Aurenhammer, F., 1991. Voronoi diagrams - a survey of a fundamental geometric data structure, ACM Computing Surveys (CSUR), 23(3), 345-405.
Badea, E. A., Everett, M. E., Newman, G. A., & Biro, O., 2001. Finite-element analysis of controlled-source electromagnetic induction using Coulomb-gauged potentials, Geophysics, 66(3), 786-799.
Beamish, D., 2003. Airborne EM footprints, Geophysical Prospecting, 51(1), 49-60.
Benkabbour, B., Toto, E., & Fakir, Y., 2004. Using DC resistivity method to characterize the geometry and the salinity of the Plioquaternary consolidated coastal aquifer of the Mamora plain, Morocco, Environmental Geology, 45(4), 518-526.
Bernstone, C., Dahlin, T., Ohlsson, T., & Hogland, H., 2000. DC-resistivity mapping of internal landfill structures: Two pre-excavation surveys, Environmental Geology, 39(3-4), 360-371.
Börner, R.-U., Ernst, O. G., & Spitzer, K., 2008. Fast 3-D simulation of transient electromagnetic fields by model reduction in the frequency domain using Krylov subspace projection, Geophysical Journal International, 173(3), 766-780.
Brodie, R. & Sambridge, M., 2006. A holistic approach to inversion of time-domain airborne EM, ASEG Extended Abstracts, 2006, 1-4.
Chambers, J. E., Kuras, O., Meldrum, P. I., Ogilvy, R. D., & Hollands, J., 2006. Electrical resistivity tomography applied to geologic, hydrogeologic, and engineering investigations at a former waste-disposal site, Geophysics, 71(6), B231-B239.
Chwala, A., Schultze, V., Stolz, R., Ramos, J., IJsselsteijn, R., Meyer, H.-G., & Kretzschmar, D., 2001. An HTS dc SQUID system in competition with induction coils for TEM applications, Physica C: Superconductivity, 354(1), 45-48.
Commer, M. & Newman, G., 2004. A parallel finite-difference approach for 3D transient electromagnetic modeling with galvanic sources, Geophysics, 69(5), 1192-1202.
Commer, M. & Newman, G. A., 2006. An accelerated time domain finite difference simulation scheme for three-dimensional transient electromagnetic modeling using geometric multigrid concepts, Radio Science, 41(3), RS3007.
Commer, M. & Newman, G. A., 2008. New advances in three-dimensional controlled-source electromagnetic inversion, Geophysical Journal International, 172(2), 513-535.
Commer, M., Newman, G. A., Carazzone, J. J., Dickens, T. A., Green, K. E., Wahrmund, L. A., Willen, D. E., & Shiu, J., 2008. Massively parallel electrical-conductivity imaging of hydrocarbons using the IBM Blue Gene/L supercomputer, IBM Journal of Research and Development, 52(1.2), 93-103.
Constable, S., 2010. Ten years of marine CSEM for hydrocarbon exploration, Geophysics, 75(5), 75A67-75A81.
Constable, S. C., Parker, R. L., & Constable, C. G., 1987. Occam's inversion: A practical algorithm for generating smooth models from electromagnetic sounding data, Geophysics, 52(3), 289-300.
Cox, L. H., Wilson, G. A., & Zhdanov, M. S., 2010. 3D inversion of airborne electromagnetic data using a moving footprint, Exploration Geophysics, 41(4), 250-259.
Cox, L. H., Wilson, G. A., & Zhdanov, M. S., 2012. 3D inversion of airborne electromagnetic data, Geophysics, 77(4), WB59-WB69.
De Jong, E., Ballantyne, A., Cameron, D., & Read, D., 1979. Measurement of apparent electrical conductivity of soils by an electromagnetic induction probe to aid salinity surveys, Soil Science Society of America Journal, 43(4), 810-812.
Eaton, P. A., 1998. Application of an improved technique for interpreting transient electromagnetic data, Exploration Geophysics, 29(2), 175-183.
Edwards, R., 1974. The magnetometric resistivity method and its application to the mapping of a fault, Canadian Journal of Earth Sciences, 11(8), 1136-1156.
Edwards, R. N., 1997. On the resource evaluation of marine gas hydrate deposits using sea-floor transient electric dipole-dipole methods, Geophysics, 62(1), 63-74.
Farquharson, C. G., 2007. Constructing piecewise-constant models in multidimensional minimum-structure inversions, Geophysics, 73(1), K1-K9.
Farquharson, C. G. & Oldenburg, D. W., 1993. Inversion of time-domain electromagnetic data for a horizontally layered earth, Geophysical Journal International, 114(3), 433-442.
Fincham, A., Christensen, J., Barker, J., Samier, P., et al., 2004. Up-gridding from geological model to simulation model: Review applications and limitations, in SPE Annual Technical Conference and Exhibition, Society of Petroleum Engineers.
Fitterman, D. V. & Deszcz-Pan, M., 1998. Helicopter EM mapping of saltwater intrusion in Everglades National Park, Florida, Exploration Geophysics, 29(1/2), 240-243.
Fitterman, D. V. & Stewart, M. T., 1986. Transient electromagnetic sounding for groundwater, Geophysics, 51(4), 995-1005.
Foks, N. L., Krahenbuhl, R., & Li, Y., 2014. Adaptive sampling of potential-field data: A direct approach to compressive inversion, Geophysics, 79(1), IM1-IM9.
Fraser, D. C., 1978. Resistivity mapping with an airborne multi-coil electromagnetic system, Geophysics, 43(1), 144-172.
Fullagar, P. K., Vrbancich, J., & Pears, G., 2010. Geologically-constrained 1D TEM inversion, ASEG Extended Abstracts, 2010(1), 1-4.
Grayver, A. V., Streich, R., & Ritter, O., 2013. Three-dimensional parallel distributed inversion of CSEM data using a direct forward solver, Geophysical Journal International, 193(3), 1432-1446.
Guitton, A. & Symes, W. W., 2003. Robust inversion of seismic data using the Huber norm, Geophysics, 68(4), 1310-1319.
Haber, E., 1997. Numerical strategies for the solution of inverse problems, Ph.D. thesis, University of British Columbia.
Haber, E., 2004. A multilevel, level-set method for optimizing eigenvalues in shape design problems, Journal of Computational Physics, 198(2), 518-534.
Haber, E. & Ascher, U. M., 2001. Fast finite volume simulation of 3D electromagnetic problems with highly discontinuous coefficients, SIAM Journal on Scientific Computing, 22(6), 1943-1961.
Haber, E. & Oldenburg, D., 2000. A GCV based method for nonlinear ill-posed problems, Computational Geosciences, 4(1), 41-63.
Haber, E., Ascher, U., Aruliah, D., & Oldenburg, D., 2000. Fast simulation of 3D electromagnetic problems using potentials, Journal of Computational Physics, 163(1), 150-171.
Haber, E., Oldenburg, D. W., & Shekhtman, R., 2007. Inversion of time domain three-dimensional electromagnetic data, Geophysical Journal International, 171(2), 550-564.
Hansen, P. C. & O'Leary, D. P., 1993. The use of the L-curve in the regularization of discrete ill-posed problems, SIAM Journal on Scientific Computing, 14(6), 1487-1503.
Herrmann, F. J., 2010. Randomized sampling and sparsity: Getting more information from fewer samples, Geophysics, 75(6), WB173-WB187.
Hestenes, M. R. & Stiefel, E., 1952. Methods of conjugate gradients for solving linear systems, Journal of Research of the National Bureau of Standards, 49(6), 409-436.
Hohmann, G. W., 1975. Three-dimensional induced polarization and electromagnetic modeling, Geophysics, 40(2), 309-324.
Holtham, E. & Oldenburg, D. W., 2012. Large-scale inversion of ZTEM data, Geophysics, 77(4), WB37-WB45.
Huang, H. & Won, I., 2003. Characterization of UXO-like targets using broadband electromagnetic induction sensors, Geoscience and Remote Sensing, IEEE Transactions on, 41(3), 652-663.
Hyndman, R. D. & Hyndman, D. W., 1968. Water saturation and high electrical conductivity in the lower continental crust, Earth and Planetary Science Letters, 4(6), 427-432.
Jago, C. P., 2008. Metal- and alteration-zoning, and hydrothermal flow paths at the moderately-tilted, silica-saturated Mt. Milligan copper-gold alkalic porphyry deposit, Master's thesis, University of British Columbia.
Jardani, A., Revil, A., Santos, F., Fauchard, C., & Dupont, J., 2007. Detection of preferential infiltration pathways in sinkholes using joint inversion of self-potential and EM-34 conductivity data, Geophysical Prospecting, 55(5), 749-760.
Johnson, W. J., 2003. Case histories of dc resistivity measurements to map shallow coal mine workings, The Leading Edge, 22(6), 571-573.
Keating, P. B. & Crossley, D. J., 1990. The inversion of time-domain airborne electromagnetic data using the plate model, Geophysics, 55(6), 705-711.
Kovacs, A., Holladay, J. S., & Bergeron Jr, C. J., 1995. The footprint/altitude ratio for helicopter electromagnetic sounding of sea-ice thickness: comparison of theoretical and field estimates, Geophysics, 60(2), 374-380.
Lane, R., Green, A., Golding, C., Owers, M., Pik, P., Plunkett, C., Sattel, D., & Thorn, B., 2000. An example of 3D conductivity mapping using the TEMPEST airborne electromagnetic system, Exploration Geophysics, 31(2), 162-172.
Legault, J. M., Carriere, D., & Petrie, L., 2008. Synthetic model testing and distributed acquisition dc resistivity results over an unconformity uranium target from the Athabasca Basin, northern Saskatchewan, The Leading Edge, 27(1), 46-51.
Leslie, K., Binks, R., Lam, S., Sullivan, P., Tilbrook, D., Thorn, R., & Foley, C., 2008. Application of high-temperature superconductor SQUIDs for ground-based TEM, The Leading Edge, 27(1), 70-74.
Li, D., Beckner, B., Kumar, A., et al., 2001. A new efficient averaging technique for scaleup of multimillion-cell geologic models, SPE Reservoir Evaluation & Engineering, 4(4), 297-307.
Li, Y. & Oldenburg, D. W., 1996. 3-D inversion of magnetic data, Geophysics, 61(2), 394-408.
Liu, G. & Becker, A., 1990. Two-dimensional mapping of sea-ice keels with airborne electromagnetics, Geophysics, 55(2), 239-248.
MacGregor, L. & Sinha, M., 2000. Use of marine controlled-source electromagnetic sounding for sub-basalt exploration, Geophysical Prospecting, 48(6), 1091-1106.
Macnae, J., 1998. Fast AEM data processing and inversion, Exploration Geophysics, 29(2), 163-169.
Macnae, J., Mortimer, R., & Gilgallon, K., 2010. Deep conductor delineation through improved EMFlow data processing, ASEG Extended Abstracts, 2010, 1-4.
Marchant, D., Haber, E., & Oldenburg, D. W., 2013. Inductive source induced polarization, Geophysical Journal International, 192(2), 602-612.
Mitchinson, D. & Enkin, R., 2011. Continued investigations of physical property-geology relationships in porphyry-deposit settings in the Quest and Quest-West project area, Central British Columbia (NTS 093E, K, L, M, N), Tech. rep., Geoscience BC.
Mogi, T., Tanaka, Y., Kusunoki, K., Morikawa, T., & Jomori, N., 1998. Development of grounded electrical source airborne transient EM (GREATEM), Exploration Geophysics, 29(1/2), 61-64.
Mwenifumbo, C., Elliott, B., Jefferson, C., Bernius, G., & Pflug, K., 2004. Physical rock properties from the Athabasca Group: Designing geophysical exploration models for unconformity uranium deposits, Journal of Applied Geophysics, 55(1), 117-135.
Nabighian, M. N. & Macnae, J. C., 1991. Time domain electromagnetic prospecting methods, in Electromagnetic Methods in Applied Geophysics, Volume 2. Applications, pp. 427-520, ed. Nabighian, M. N., Society of Exploration Geophysicists.
Newman, G. A. & Alumbaugh, D. L., 1997. Three-dimensional massively parallel electromagnetic inversion - I. Theory, Geophysical Journal International, 128(2), 345-354.
Newman, G. A., Hohmann, G. W., & Anderson, W. L., 1986. Transient electromagnetic response of a three-dimensional body in a layered earth, Geophysics, 51(8), 1608-1627.
Oldenburg, D. W. & Li, Y., 1994a. Inversion of induced polarization data, Geophysics, 59(9), 1327-1341.
Oldenburg, D. W. & Li, Y., 1994b. Subspace linear inverse method, Inverse Problems, 10(4), 915.
Oldenburg, D. W., McGillivray, P. R., & Ellis, R. G., 1993. Generalized subspace methods for large-scale inverse problems, Geophysical Journal International, 114(1), 12-20.
Oldenburg, D. W., Li, Y., & Ellis, R. G., 1997. Inversion of geophysical data over a copper gold porphyry deposit: A case history for Mt. Milligan, Geophysics, 62(5), 1419-1431.
Oldenburg, D. W., Haber, E., & Shekhtman, R., 2008. Forward modelling and inversion of multisource TEM data, in SEG Technical Program Expanded Abstracts 2008, pp. 559-563, Society of Exploration Geophysicists.
Oldenburg, D. W., Haber, E., & Shekhtman, R., 2013. Three-dimensional inversion of multisource time domain electromagnetic data, Geophysics, 78(1), E47-E57.
Osmond, R. T., Watts, A. H., Ravenhurst, W. R., Foley, C. P., & Leslie, K. E., 2002. Finding nickel from the B-field at Raglan - 'To B or not DB', in SEG Technical Program Expanded Abstracts 2002, pp. 404-407, Society of Exploration Geophysicists.
Palacky, G., 1981. The airborne electromagnetic method as a tool of geological mapping, Geophysical Prospecting, 29(1), 60-88.
Palacky, G. J., 1988. Resistivity characteristics of geologic targets, in Electromagnetic Methods in Applied Geophysics, Volume 1. Theory, pp. 52-129, ed. Nabighian, M. N., Society of Exploration Geophysicists.
Palacky, G. J., 1993. Use of airborne electromagnetic methods for resource mapping, Observations of Earth from Space, 13, 5-14.
Palacky, G. J. & West, G. F., 1973. Quantitative interpretation of INPUT AEM measurements, Geophysics, 38(6), 1145-1158.
Palacky, G. J. & West, G. F., 1991. Airborne electromagnetic methods, in Electromagnetic Methods in Applied Geophysics, Volume 2. Applications, pp. 811-877, ed. Nabighian, M. N., Society of Exploration Geophysicists.
Pfaffhuber, A., Grimstad, E., Domaas, U., Auken, E., Foged, N., & Halkjær, M., 2010. Airborne EM mapping of rockslides and tunneling hazards, The Leading Edge, 29(8), 956-959.
Portniaguine, O. & Zhdanov, M. S., 1999. Focusing geophysical inversion images, Geophysics, 64(3), 874-887.
Pridmore, D., Hohmann, G., Ward, S., & Sill, W., 1981. An investigation of finite-element modeling for electrical and electromagnetic data in three dimensions, Geophysics, 46(7), 1009-1024.
Raiche, A., 2004. Practical 3D airborne EM inversion in complex terranes, ASEG Extended Abstracts, 2004, 1-4.
Raiche, A., Jupp, D., Rutter, H., & Vozoff, K., 1985. The joint use of coincident loop transient electromagnetic and Schlumberger sounding to resolve layered structures, Geophysics, 50(10), 1618-1627.
Reid, J. & Fullagar, P., 1998. Conductivity-depth transformation of slingram transient electromagnetic data, Exploration Geophysics, 29(4), 570-576.
Reid, J. & Vrbancich, J., 2004. A comparison of the inductive-limit footprints of airborne electromagnetic configurations, Geophysics, 69(5), 1229-1239.
Reid, J., Pfaffling, A., & Vrbancich, J., 2006. Airborne electromagnetic footprints in 1D earths, Geophysics, 71(2), G63-G72.
Ritter, O., Hoffmann-Rothe, A., Bedrosian, P., Weckmann, U., & Haak, V., 2005. Electrical conductivity images of active and fossil fault zones, Geological Society, London, Special Publications, 245(1), 165-186.
SanFilipo, W. A. & Hohmann, G. W., 1985. Integral equation solution for the transient electromagnetic response of a three-dimensional body in a conductive half-space, Geophysics, 50(5), 798-809.
Sattel, D., 2005. Inverting airborne electromagnetic (AEM) data with Zohdy's method, Geophysics, 70(4), G77-G85.
Schwarz, G., Haak, V., & Rath, V., 1985. Electrical conductivity studies in the Travale geothermal field, Italy, Geothermics, 14(5), 653-661.
Schwarzbach, C. & Haber, E., 2013. Finite element based inversion for time-harmonic electromagnetic problems, Geophysical Journal International, 193(2), 615-634.
Seigel, H. O., 1974. The magnetic induced polarization (MIP) method, Geophysics, 39(3), 321-339.
Siripunvaraporn, W. & Egbert, G., 2000. An efficient data subspace inversion method for 2-D magnetotelluric data, Geophysics, 65(3), 791-803.
Smith, R. & Annan, P., 1998. The use of B-field measurements in an airborne time-domain system: Part I. Benefits of B-field versus dB/dt data, Exploration Geophysics, 29(1/2), 24-29.
Smith, R. S. & Lee, T. J., 2002. The moments of the impulse response: a new paradigm for the interpretation of transient electromagnetic data, Geophysics, 67(4), 1095-1103.
Smith, R. S. & Wasylechko, R., 2012. Sensitivity cross-sections in airborne electromagnetic methods using discrete conductors, Exploration Geophysics, 43(2), 95-103.
Spitzer, K. & Chouteau, M., 2003. A dc resistivity and IP borehole survey at the Casa Berardi gold mine in northwestern Quebec, Geophysics, 68(2), 453-463.
Stolz, E. M., 2000. Electromagnetic methods applied to exploration for deep nickel sulphides in the Leinster area, Western Australia, Exploration Geophysics, 31(1/2), 222-228.
Strack, K.-M., Hanstein, T., LeBrocq, K., Moss, D., Vozoff, K., & Wolfgram, P., 1989. Case histories of LOTEM surveys in hydrocarbon prospective area, First Break, 7(12).
Strack, K.-M., Lüschen, E., & Kötz, A., 1990. Long-offset transient electromagnetic (LOTEM) depth soundings applied to crustal studies in the Black Forest and Swabian Alb, Federal Republic of Germany, Geophysics, 55(7), 834-842.
Sun, J. & Li, Y., 2014. Adaptive lp inversion for simultaneous recovery of both blocky and smooth features in a geophysical model, Geophysical Journal International, 197(2), 882-899.
Tarantola, A., 2005. Inverse problem theory and methods for model parameter estimation, SIAM.
Tikhonov, A. & Arsenin, V. Y., 1977. Methods for solving ill-posed problems, John Wiley and Sons, Inc.
Um, E. S., Harris, J. M., & Alumbaugh, D. L., 2010. 3D time-domain simulation of electromagnetic diffusion phenomena: A finite-element electric-field approach, Geophysics, 75(4), F115-F126.
Vallee, M. A. & Smith, R. S., 2009. Inversion of airborne time-domain electromagnetic data to a 1D structure using lateral constraints, Near Surface Geophysics, 7(1), 63-71.
van Schoor, M., 2002. Detection of sinkholes using 2D electrical resistivity imaging, Journal of Applied Geophysics, 50(4), 393-399.
Waff, H. S., 1974. Theoretical considerations of electrical conductivity in a partially molten mantle and implications for geothermometry, Journal of Geophysical Research, 79(26), 4003-4010.
Wang, G. L., Torres-Verdín, C., Salazar, J. M., & Voss, B., 2009. Fast 2D inversion of large borehole EM induction data sets with an efficient Fréchet-derivative approximation, Geophysics, 74(1), E75-E91.
Wang, T. & Hohmann, G. W., 1993. A finite-difference, time-domain solution for three-dimensional electromagnetic modeling, Geophysics, 58(6), 797-809.
Wang, T., Oristaglio, M., Tripp, A., & Hohmann, G., 1994. Inversion of diffusive transient electromagnetic data by a conjugate-gradient method, Radio Science, 29(4), 1143-1156.
Waxman, M. H., Thomas, E., et al., 1974. Electrical conductivities in Shaly Sands - I. The relation between hydrocarbon saturation and resistivity index; II. The temperature coefficient of electrical conductivity, Journal of Petroleum Technology, 26(02), 213-225.
Welhener, H. E., Labrenz, D. M., & Huang, J., 2007. Mt. Milligan project - resource report, Omenica Mining District, British Columbia, Tech. rep., Independent Mining Consultants, Inc.
Wilson, G., Raiche, A., & Sugeng, F., 2006. 2.5D inversion of airborne electromagnetic data, Exploration Geophysics, 37(4), 363-371.
Wolfgram, P. & Karlik, G., 1995. Conductivity-depth transform of GEOTEM data, Exploration Geophysics, 26(3), 179-185.
Wolfgram, P., Sattel, D., & Christensen, N. B., 2003. Approximate 2D inversion of AEM data, Exploration Geophysics, 34(2), 29-33.
Xie, G., Li, J., Majer, E. L., Zuo, D., & Oristaglio, M. L., 2000. 3-D electromagnetic modeling and nonlinear inversion, Geophysics, 65(3), 804-822.
Yee, K. S., 1966. Numerical solution of initial boundary value problems involving Maxwell's equations, IEEE Transactions on Antennas and Propagation, 14(3), 302-307.
Yin, C., Huang, X., Liu, Y., & Cai, J., 2014. Footprint for frequency-domain airborne electromagnetic systems, Geophysics, 79(6), E243-E254.
Yu, W. W., 2012. Inversion of airborne electromagnetic data in 2.5D, Master's thesis, University of British Columbia.
Zang, M., 1993. UTEM case history of a base metal prospect Goianesia, Brazil, Exploration Geophysics, 24(3/4), 859-862.
Zaslavsky, M., Druskin, V., & Knizhnerman, L., 2011. Solution of 3D time-domain electromagnetic problems using optimal subspace projection, Geophysics, 76(6), F339-F351.
Zaslavsky, M., Druskin, V., & Abubakar, A., 2013. Large-scale Gauss-Newton inversion of transient controlled-source electromagnetic measurement data using the model reduction framework, Geophysics, 78(4), E161-E171.

Appendix A

3D Forward Modeling Using Finite Volume Method

Using equations 1.1c, 1.1d, 1.2b and 1.2c and neglecting the displacement current, we can derive Maxwell's equations with the quasi-static approximation in the time domain as

\nabla \times E + \mu \,\frac{\partial H}{\partial t} = 0,    (A.1a)

\nabla \times H - \sigma E = J_s(t).    (A.1b)

Approximating the time derivative with finite differences using the backward Euler (implicit) method, equation A.1 becomes

\nabla \times E^{i+1} + \mu_0 \,\frac{H^{i+1} - H^{i}}{\delta t} = 0,    (A.2a)

\nabla \times H^{i+1} - \sigma E^{i+1} = J_s^{i+1},    (A.2b)

where the superscript i denotes the field at time step i, and \delta t is a small time step.
By eliminating the electric field we obtain a second-order system for the magnetic field

\nabla \times \sigma^{-1} \nabla \times H^{i+1} + \gamma H^{i+1} = \nabla \times \sigma^{-1} J_s^{i+1} + \gamma H^{i},    (A.3)

where \gamma = \mu_0 / \delta t.

The finite volume method integrates the differential Maxwell's equations over elemental control volumes aligned with the mesh grid. The modeling domain is discretized by a rectilinear mesh consisting of many rectangular cells. I use a staggered grid (Yee, 1966) with the H fields on the edges, the E fields on the faces and the conductivity at the cell centres (Figure A.1).

[Figure A.1: Staggered discretization in 3D, with the H components on cell edges and the E components on cell faces.]

Upon discretization, equation A.3 can be written in the discrete form

\left[ C^{\top} \mathrm{diag}(A_v \exp(m))^{-1} C + \gamma^{i+1} I \right] h^{i+1} = C^{\top} \mathrm{diag}(A_v \exp(m))^{-1} j_s^{i+1} + \gamma^{i+1} h^{i},    (A.4)

where C is a curl operator mapping a field from edges to faces, C^{\top} is also a curl operator but from faces to edges, A_v is a harmonic averaging matrix mapping the conductivity values from cell centres to faces, \exp(m) is a vector of conductivities (m = \ln \sigma) for all the cell centres, \gamma^{i+1} = \mu_0 / \delta t at time step i+1, h is a vector of H fields in the x, y, z directions, and j_s is a vector of the current density field J_s due to the source. If a closed transmitter loop is modeled, I first analytically compute the magnetic vector potential A of the loop source to ensure the source is divergence-free, and then obtain the primary current density field

J_s = \nabla \times \mu_0^{-1} \nabla \times A.    (A.5)

Equation A.4 is for the H field at a particular time step. Expressing the Maxwell matrix as A(m, \delta t) and the first term on the right-hand side of equation A.4 as q, we have the system of equations for the entire modeling in time

\begin{bmatrix} A(m, \delta t_1) & & \\ -\gamma(\delta t_2) I & A(m, \delta t_2) & \\ & \ddots & \ddots \end{bmatrix} \begin{bmatrix} h^{1} \\ h^{2} \\ \vdots \end{bmatrix} = \begin{bmatrix} q^{1}(m) + \gamma(\delta t_1) h^{0} \\ q^{2}(m) \\ \vdots \end{bmatrix},    (A.6)

or in symbolic form

B h = \mathrm{rhs}.    (A.7)

The initial field h^{0} can be obtained by solving a static problem.

If the symmetric positive definite matrices A(m, \delta t_i) in equation A.6 are factorized into L L^{\top} and stored, the operator B^{-1} is available by forward time stepping and B^{-\top} by backward time stepping. The forward modeled data are

F(m) = Q B^{-1} \mathrm{rhs},    (A.8)

where Q is a sparse space-time interpolation matrix mapping the fields from cell edges to the receiver locations and from time steps to the measurement times. I also note that A changes only if \delta t changes, so in practice, if many delay times from early to late are modeled, the rule of thumb is to adjust the length of \delta t for each decade to match the scale of the time dynamics. In this way, only a few A matrices need to be factorized.

A detailed description of the algorithm can be found in Oldenburg et al. (2013).
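The practical payoff of equations A.6-A.8 is that each block A(m, \delta t) is factorized once and then reused for every step of that length. The sketch below illustrates this bookkeeping in Python, with scipy's sparse LU factorization standing in for the Cholesky factorization used in the text; `assemble_A(dt)` is a hypothetical assembly of the Maxwell matrix of equation A.4, and `q_of_step(i)` is a hypothetical source term for step i.

```python
import numpy as np
from scipy.sparse.linalg import splu

def time_step_h(h0, q_of_step, assemble_A, dts, mu0=4e-7 * np.pi):
    """Forward time stepping of B h = rhs (equation A.6).

    h0         : initial H field (static solution)
    q_of_step  : q_of_step(i) -> source term q^{i+1} for step i
    assemble_A : hypothetical assembly of A(m, dt) as a scipy sparse matrix
    dts        : step lengths, kept constant within each decade of delay time
    """
    factors = {}           # one factorization per distinct step length
    h, fields = h0, []
    for i, dt in enumerate(dts):
        if dt not in factors:
            factors[dt] = splu(assemble_A(dt).tocsc())
        gamma = mu0 / dt
        # Backward Euler step: A(m, dt) h^{i+1} = q^{i+1} + gamma * h^{i}
        h = factors[dt].solve(q_of_step(i) + gamma * h)
        fields.append(h)
    return fields
```

With step lengths grouped by decade, a modeling run spanning several decades of delay time needs only a handful of factorizations, regardless of how many steps are taken.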
Appendix B

Calculation of Sensitivity

Differentiating both sides of equation A.7 with respect to m yields

\frac{\partial h}{\partial m} = B^{-1} \left( \frac{\partial \mathrm{rhs}}{\partial m} - \frac{\partial B}{\partial m} h \right) = B^{-1} G.    (B.1)

Then, using the same interpolation matrix Q as in equation A.8, the sensitivity matrix is

J = Q \frac{\partial h}{\partial m} = Q B^{-1} G,    (B.2)

and its transpose is

J^{\top} = G^{\top} B^{-\top} Q^{\top},    (B.3)

where Q is again the space-time interpolation matrix. As the operators B^{-1} and B^{-\top} are already available from the forward modeling (Appendix A), I derive the other components in G as

\frac{\partial \mathrm{rhs}}{\partial m} = \begin{bmatrix} \partial q^{1} / \partial m \\ \partial q^{2} / \partial m \\ \vdots \end{bmatrix} = \begin{bmatrix} -C^{\top} \mathrm{diag}(j_s^{1}) \, \mathrm{diag}\!\left[(A_v \exp(m))^{-2}\right] A_v \, \mathrm{diag}(\exp(m)) \\ -C^{\top} \mathrm{diag}(j_s^{2}) \, \mathrm{diag}\!\left[(A_v \exp(m))^{-2}\right] A_v \, \mathrm{diag}(\exp(m)) \\ \vdots \end{bmatrix},    (B.4)

and

\frac{\partial B}{\partial m} h = \begin{bmatrix} \partial \left[ A(m) h^{1} \right] / \partial m \\ \partial \left[ A(m) h^{2} \right] / \partial m \\ \vdots \end{bmatrix} = \begin{bmatrix} -C^{\top} \mathrm{diag}(C h^{1}) \, \mathrm{diag}\!\left[(A_v \exp(m))^{-2}\right] A_v \, \mathrm{diag}(\exp(m)) \\ -C^{\top} \mathrm{diag}(C h^{2}) \, \mathrm{diag}\!\left[(A_v \exp(m))^{-2}\right] A_v \, \mathrm{diag}(\exp(m)) \\ \vdots \end{bmatrix}.    (B.5)

Now we have the option of implicit or explicit sensitivity. The implicit method stores Q, B^{-1} (essentially several factorized A matrices) and G for the operation of J times a vector during the inversion, eliminating the need to form J. However, G requires the field h everywhere in the mesh at every time, which actually consumes more memory than storing J in EM modeling.

The explicit method computes and stores the sensitivity matrix J. Since the number of data is always much smaller than the number of model parameters, I compute J^{\top} in equation B.3 column by column. An explicit J can be formed using equation B.3 and solving for J^{\top} e_i for the i-th datum, where e_i is the zero vector with a unit value in the i-th entry. The computation of each column of J^{\top} is equivalent to one complete forward modeling involving backward time stepping (the B^{-\top} operation). After J is stored, the matrices and vectors associated with B, G and Q can all be deleted from memory.

Appendix C

Fast Discrete Model Conversion Between Arbitrary Meshes

The key to survey decomposition is the ability to solve many subproblems that effectively represent the original problem for specific source-receiver pairs. The representation of the global model on a local mesh is crucial in this process. One popular and proven method is intersecting-volume weighted material averaging, similar to Commer & Newman (2008). Here I document an approach that implements fast model conversion between two meshes embedded in different coordinate systems using the proxy of a point cloud. I assume the following information is known: the original model m_1, a vector, on mesh 1 in coordinate system 1; the destination mesh 2 in coordinate system 2; and the coordinate transform parameters, which, in the case of two Cartesian systems, are a 3-by-3 rotation matrix M and a displacement vector t, so that for a point p_1 = (x_1, y_1, z_1) in system 1 and its coordinates p_2 = (x_2, y_2, z_2) in system 2, we have

p_2 = M p_1 + t.    (C.1)

In addition, there are no restrictions on the type of the meshes (regular or irregular), but it is required that a point can be located in a mesh in terms of a cell index. The goal is to find an averaging matrix A, with which we can obtain the model on the second mesh, m_2, by applying the averaging matrix:

m_2 = A \, m_1.    (C.2)

An entry a_{ij} in A represents the weight, essentially the normalized intersecting volume, when calculating the relative contribution of cell j in the original model to cell i in the converted model. Summing the elements of A along each row yields one:

\sum_{j=1}^{N_{m_1}} a_{ij} = 1.    (C.3)

The matrix A is usually sparse, because a cell in m_2 may intersect only a few cells in m_1. If inactive cells are involved, the corresponding rows and columns of A must be removed.

The point cloud approach first initiates N_p random points in every cell of mesh 2, recording the cell index of every point in m_2; the points are then transformed to system 1 and their cell indices in m_1 are calculated; finally, A is assembled by counting points: if a point from cell i in m_2 is found in cell j in m_1, then a_{ij} is incremented by one. Each row of A is normalized by its sum before A is applied to m_1. As N_p approaches infinity, the estimated intersecting volumes become identical to the results computed using rigorous analytic geometry; however, I have noticed that even a small N_p, for example 50, is adequate to quickly produce a good converted model for the purpose of EM modeling. The point cloud approach is especially useful when a rigorous method is expensive for complex meshes.
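The procedure above maps directly to a few lines of sparse linear algebra. The sketch below is a minimal Python version, assuming two hypothetical mesh helpers: `sample_points_in_cell(i)`, which draws N_p random points inside cell i of mesh 2 (in system 2 coordinates), and `locate_in_mesh1(pts)`, which returns the cell index in mesh 1 for each point given in system 1 coordinates, or -1 for points outside the mesh.

```python
import numpy as np
import scipy.sparse as sp

def averaging_matrix(sample_points_in_cell, locate_in_mesh1, M, t,
                     n_cells_2, n_cells_1):
    """Assemble the point-cloud averaging matrix A of equation C.2."""
    rows, cols = [], []
    Minv = np.linalg.inv(M)              # invert C.1: p1 = M^-1 (p2 - t)
    for i in range(n_cells_2):
        p2 = sample_points_in_cell(i)    # (N_p, 3) points in system 2
        p1 = (p2 - t) @ Minv.T           # transform back to system 1
        j = locate_in_mesh1(p1)
        j = j[j >= 0]                    # discard points outside mesh 1
        rows.extend([i] * len(j))        # counting: one hit per point
        cols.extend(j.tolist())
    A = sp.coo_matrix((np.ones(len(rows)), (rows, cols)),
                      shape=(n_cells_2, n_cells_1)).tocsr()
    row_sums = np.asarray(A.sum(axis=1)).ravel()
    row_sums[row_sums == 0] = 1.0        # guard empty rows
    return sp.diags(1.0 / row_sums) @ A  # enforce equation C.3; m2 = A @ m1
```

Note that duplicate (i, j) pairs are summed when the COO matrix is converted, which is exactly the point-counting step; the final row normalization turns the counts into the volume-fraction weights of equation C.3.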
