TIME-TERM ANALYSIS USING LINEAR PROGRAMMING AND ITS APPLICATION TO REFRACTION DATA FROM THE QUEEN CHARLOTTE ISLANDS

by

DAVID NEIL BIRD
B.Sc. (Hons. Solid Earth Geophysics), McGill University, 1979

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF SCIENCE in THE FACULTY OF GRADUATE STUDIES (Department of Geophysics and Astronomy)

We accept this thesis as conforming to the required standard

THE UNIVERSITY OF BRITISH COLUMBIA
August, 1981
(c) D. Neil Bird, 1981

In presenting this thesis in partial fulfilment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the head of my department or by his or her representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission.

Department

The University of British Columbia
2075 Wesbrook Place
Vancouver, Canada V6T 1W5

ABSTRACT

The Queen Charlotte transform fault zone forms the boundary between the Pacific and America plates north of Vancouver Island off western Canada. This boundary is of interest since there is direct contact between oceanic and continental crustal material along its length. There are also peculiarities at its southernmost portion, where oblique subduction occurs and an enigmatic terrace within the fault zone replaces the continental shelf.
To investigate the crustal structure in this region, an onshore-offshore seismic refraction experiment consisting of two profiles shot into three ocean bottom seismometers (OBS's) and seven land-based stations was undertaken in 1979. As a means of interpreting the areally distributed data, a new linear programming approach to time-term analysis is introduced. This new approach is more flexible than previous methods since a choice of solutions is possible through a choice of objective functions. A "best fit" solution is found by minimizing the L1-norm of the misfit errors, while upper and lower bounds for both the refractor velocity and delay-times are created by either maximizing or minimizing the sum of the time-terms. For all solutions the travel-time equations form the constraints of the problem. Various examples, both synthetic and real, are analyzed to exemplify the method.

The areal data from the Queen Charlotte Islands are analyzed using the linear programming approach to time-term analysis. The data are subdivided by p value and range into four groups. Each is then analyzed separately and a four-layer interpretation is made. The uppermost layer is assumed to be the sediments (V = 2.3 km/s) and is of constant thickness (1.4 km) to the west but of variable thickness within the fault zone. The second layer has a velocity of 5.2 km/s, increasing to 5.8 km/s in the fault zone. This layer displays pronounced thickening beneath the terrace; it is considered to be oceanic layer 2. The third layer (V = 6.5 km/s) lies above the upper mantle (V = 7.9 km/s assumed). There is evidence that the interface between these two layers dips eastward at 15° beneath the fault zone. It is hypothesized that the increasing velocity of layer 2 and its thickening beneath the terrace is the result of compressional metamorphism caused by oblique subduction of the Pacific plate beneath the America plate.
The dipping upper mantle is explained by the same mechanism.

Table of Contents

Abstract ... iii
Table of Contents ... v
List of Figures ... vi
Acknowledgements ... viii
CHAPTER 1. INTRODUCTION ... 1
1.1 Tectonic Setting of the Study Area ... 1
1.2 Outline of Study ... 2
CHAPTER 2. A LINEAR PROGRAMMING APPROACH TO TIME-TERM ANALYSIS ... 5
2.1 Introduction ... 5
2.2 Linear Programming ... 6
2.3 Mathematical Formulation ... 10
2.4 Test Results ... 13
2.5 Summary ... 24
CHAPTER 3. APPLICATION OF THE LINEAR PROGRAMMING APPROACH TO TIME-TERM ANALYSIS TO REFRACTION DATA FROM THE QUEEN CHARLOTTE ISLANDS ... 26
3.1 The Experiment ... 26
3.2 Data Acquisition and Processing ... 29
3.3 Interpretation ... 32
3.4 Conclusion ... 48
CHAPTER 4. SUMMARY ... 51
References ... 53
Additional Bibliography ... 60
Appendix 1. On Linear Programming and the Simplex Algorithm ... 61
Appendix 2. The Expected Value of the One-norm for a Normally Distributed Variable ... 69
Appendix 3. Difficulties with Time-term Analysis ... 70

List of Figures

Figure / Page
1.1 Tectonic map of the Queen Charlotte Islands region ... 3
2.1 A simple linear programming problem ... 9
2.2 Refracted ray geometry ... 10
2.3 Results from model 1 ... 15
2.4 Model 2 geometry ... 17
2.5 Results from model 2 ... 19
2.6 The Keen and Barrett (1971) study region ... 20
2.7 Smooth anisotropy curves from the Keen and Barrett (1971) study ... 22
2.8 A comparison between smooth and sectioned anisotropy curves ...
23
3.1 Detailed location map of the study area ... 28
3.2 Sound velocity profile for water column ... 31
3.3 T-X plot for the entire data set ... 33
3.4 Refractor 1 "best fit" interpretation ... 35
3.5 Refractor 2 "best fit" interpretation ... 39
3.6 Corrected data from land stations IKE and FLM ... 41
3.7 Refractor 3a "best fit" interpretation ... 43
3.8 Refractor 3a interpretation with V = 7.9 km/s ... 45
3.9 Refractor 3b interpretation with V = 6.5 km/s ... 47
3.10 Complete crustal interpretation ... 49
A.1 Shot and station configuration for models 3 - 5 ... 70
A.2 Model with velocity gradient ... 72
A.3 Results from models with sloping interfaces ... 73
A.4 Model with faulted interface ... 75
A.5 Model with faulted interface and velocity gradient ... 76
A.6 Slant ray geometry ... 77
A.7 The results after one iteration for a model with a sloping interface and the faulted interface model with a velocity gradient ... 79

Acknowledgements

I wish to thank my thesis advisors Dr. R. M. Clowes and Dr. R. M. Ellis for their support, guidance and encouragement during the period of this research. Dr. Clowes' suggestions and advice during the interpretation of the Queen Charlotte data and production of this document were much appreciated. I acknowledge Dr. C. E. Keen of the Atlantic Geoscience Center for providing me with the data from the 1970 Hudson cruise, and all persons who aided in the acquisition of the Queen Charlotte data set. Finally, I wish to express my gratitude to P. K. Fullagar for his advice on the uses of linear programming; J. R. Horn for the many discussions we had during the course of the study; N. J. Carriere for her painstaking typing and proofreading of this document; and to all members of the department (geophysicists, astronomers, and secretaries alike) who made the period of research both stimulating and enjoyable.
Funding for this research has been provided by a Natural Sciences and Engineering Research Council Canada Postgraduate Scholarship; Research Agreements 170-3-80 and 59-3-81 from Energy, Mines and Resources (Earth Physics Branch) Canada; and Natural Sciences and Engineering Research Council Canada Operating Grants A2617 and A7707.

1. Introduction

1.1 Tectonic setting of the study area

The northeast Pacific is one of the most tectonically complicated and well studied regions of the earth's surface. The entire west coast of North America has been discussed by Atwater (1970), and Keen and Hyndman (1979) have reviewed the tectonic characteristics of the Canadian west coast continental margin. In the region of the Queen Charlotte Islands, detailed geophysical studies have been undertaken to investigate the magnetic characteristics (Vine, 1968; Currie et al., 1980); seismicity (Hyndman and Ellis, 1981); gravity (Currie et al., 1980); heat flow (Hyndman et al., 1981); and ocean floor topography (Chase and Tiffin, 1972; Chase et al., 1975). Seismic refraction experiments carried out to the west (Keen and Barrett, 1971) and north (Milne, 1964; Shor, 1962) of the islands have added to the geophysical data from the region.

The Queen Charlotte Islands region is of such interest because of three peculiar features. Firstly, there is a direct oceanic-continental crust contact along the Queen Charlotte transform fault zone, which forms the boundary between the North American and Pacific plates. Secondly, the global plate analysis, RM2, by Minster and Jordan (1978) suggests about a 15° discrepancy between the actual plate motion and the trace of the Queen Charlotte transform fault. A component of underthrusting is needed for the two to be consistent. First motion studies of a magnitude 8.0 earthquake in 1949 (Hodgson and Milne, 1951), and other studies (Chandra, 1974), do not agree well with this hypothesis. Thirdly, there is a lack of a continental shelf.
Instead, there is an enigmatic terrace about 25 km wide, at a depth of 2 km (Chase et al., 1975). It is bounded by two scarp faces. The magnetic lineations to the west of the Queen Charlottes disappear under the terrace (Currie et al., 1980), and there is also low heat flow and gravity over this region (Hyndman et al., 1981). These findings suggest that the terrace is not oceanic crust but that it is a thick sediment wedge. Hyndman and Ellis (1981) found seismicity under the inner face and suggested that it was the active Queen Charlotte transform fault. The terrace, transform fault trace, RM2 relative plate motion vector and study area are depicted in figure 1.1.

1.2 Outline of study

The original objective of this study was to clarify some of the peculiar features of the region by using time-term analysis of an areal seismic refraction data set recorded during a 1979 onshore-offshore experiment. In the course of the research a linear programming approach to time-term analysis was developed. The formulation and testing of this approach are presented in Chapter 2. In the third chapter, an interpretation of the data set is presented. Acquisition and data processing are briefly discussed, and an interpretation is made keeping the limitations of time-term analysis (discussed in Appendix 3) in mind.

Figure 1.1: The Queen Charlotte Islands showing the study area and the Queen Charlotte terrace (shaded) bounded by two faults (dotted lines). The eastern fault is the active Queen Charlotte transform fault (Hyndman and Ellis, 1981). The arrow in the upper left corner is the direction of the Pacific plate (west of the fault) with respect to the North American plate. The rate of motion is approximately 5.6 cm/yr (Minster and Jordan, 1978).

The thesis closes with a final summary of findings and reflections on the time-term method for analyzing seismic refraction data.

2.
A Linear Programming Approach to Time-Term Analysis

2.1 Introduction

The time-term method for analyzing seismic refraction data is a simple, yet powerful technique. It has advantages over other methods of refraction interpretation, in that time-term analysis allows for lateral inhomogeneities, velocity increases with depth, refractor velocity anisotropy, and data of areal extent. The method was first developed to handle seismic refraction data of both areal and linear extent, assuming a constant refractor velocity (Scheidegger and Willmore, 1957). Simplifications (Willmore and Bancroft, 1960) and modifications to the original technique have allowed for lateral variations in refractor velocity, velocity anisotropy (Raitt et al., 1969), and velocity gradients within the refractor (Smith et al., 1966). Complicated structure must be handled by an iterative procedure since the original formulation is only valid for near-horizontal refractors. Simulations over a dome-shaped refractor (Willmore and Bancroft, 1960) and dipping refractors (Reiter, 1970) have shown that this approach is necessary.

Certain data sets were not originally suitable for time-term analysis since there were more stations (unknowns) than travel-times (equations). This problem was overcome by Morris (1972) using his Delay-Time-Function Method, which assumes that the delay-time surface can be expressed by a continuous analytic function, and by Bamford's (1976) MOZAIC time-term analysis, in which he groups shots and receivers so as to reduce the number of unknowns.

Thus far the time-term equations have been solved using least-squares regression techniques. In this thesis linear programming is proposed as an alternative method for solving time-term equations. This new approach to the problem can provide estimates for upper and lower bounds on both the time-term surface and the refractor velocity while handling the problem of overmodelling discussed by Whitcombe and Maguire (1979).
2.2 Linear Programming

Linear programming is a mathematical method for solving problems of optimization. It has been used extensively by economists since the development of the Simplex Algorithm by G. B. Dantzig in 1947 (for further discussion on the Simplex Algorithm see Appendix 1). The goal of a linear programming problem is to maximize or minimize an objective function subject to a set of constraints. Both the constraining functions and the objective function must be linear, and all variables must be non-negative (a positivity constraint). The standard formulation of a problem is:

Objective function: maximize or minimize

    u = c1x1 + c2x2 + ... + cnxn

Constraints:

    a11x1 + a12x2 + ... + a1nxn ≤ b1
    a21x1 + a22x2 + ... + a2nxn ≤ b2
    ...
    am1x1 + am2x2 + ... + amnxn ≤ bm

Non-negativity constraints:

    x1 ≥ 0, x2 ≥ 0, ..., xn ≥ 0        (2.1)

where aij and cj are coefficients and xj are variables.

As an example, suppose that you, a crustal seismologist, have some seismic records that need to be picked. You have in your employ a graduate and an undergraduate student, and you have a total of $200 with which to pay their wages. The graduate student charges $10 per hour, picks 10 records per hour, but can only work for a maximum of twelve hours (after twelve hours the grad's productivity declines as he feels record picking is a bit mundane). The undergraduate can pick 8 records per hour and can work for 18 hours (at which point he falls asleep at his desk). You are paying the undergrad $6 per hour. You wish to maximize the number of records picked. Let

    x = the number of hours that the grad works
    y = the number of hours worked by the undergrad

therefore, the objective function is

    u = 10x + 8y   (max.)

The constraints are

    10x + 6y ≤ 200
    x ≤ 12
    y ≤ 18

In figure 2.1 the shaded area represents the region of possible solutions bounded by the constraining functions. The objective function defines a family of parallel lines (solid lines) which intersect the solution region at numerous points.
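This small problem can also be checked numerically. The sketch below (an illustration only; the thesis itself uses the Simplex Algorithm of Appendix 1) relies on the fact that the optimum of a linear objective over a convex polygon lies at a vertex: it enumerates intersections of the constraint lines, keeps the feasible points, and evaluates the objective at each.

```python
from itertools import combinations

# Constraints written as ax*x + ay*y <= b, including non-negativity:
#   10x + 6y <= 200 (wage budget), x <= 12, y <= 18, -x <= 0, -y <= 0
A = [(10.0, 6.0), (1.0, 0.0), (0.0, 1.0), (-1.0, 0.0), (0.0, -1.0)]
b = [200.0, 12.0, 18.0, 0.0, 0.0]

def intersect(i, j):
    """Intersection of constraint lines i and j (None if parallel)."""
    (a1, b1), (a2, b2) = A[i], A[j]
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:
        return None
    x = (b[i] * b2 - b[j] * b1) / det
    y = (a1 * b[j] - a2 * b[i]) / det
    return x, y

def feasible(p, tol=1e-9):
    return all(ax * p[0] + ay * p[1] <= bi + tol
               for (ax, ay), bi in zip(A, b))

# Candidate vertices of the solution region.
vertices = [p for i, j in combinations(range(len(A)), 2)
            if (p := intersect(i, j)) is not None and feasible(p)]

# Evaluate the objective (records picked) at each vertex; keep the maximum.
best = max(vertices, key=lambda p: 10 * p[0] + 8 * p[1])
print(best)
```

The search recovers the corner x = 9.2, y = 18 identified graphically in figure 2.1.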
The maximum and minimum values of the objective function occur at the extrema of the solution space. The maximum occurs at point B (point A is the minimum of the objective function). The graduate works for 9.2 hours, the undergrad for 18 hours, and a total of 236 records are picked.

Figure 2.1: A linear programming problem. The objective functions are the solid lines. The solution space (shaded) is bounded by the constraints. The maximum of the objective function occurs at B, an extremum of the solution space. The minimum occurs at A.

For a two-dimensional problem (i.e. two unknowns) the procedure is easy to visualize and explain, but for the n-dimensional case the objective function is a set of parallel planes and the constraining functions form a convex confining space of solutions. It is time consuming to search and evaluate all the possible extrema, but fortunately the Simplex Algorithm finds the solution quickly and efficiently (see Appendix 1 or Cooper and Steinberg, 1970).

2.3 Mathematical formulation

Figure 2.2: Refracted ray geometry

Assuming that the ray path is horizontal, the travel-time can be expressed as

    Tij = ai + bj + Xij/V        (2.2)

where Tij, Xij are the travel-time and distance respectively between shot i and station j; ai, bj are the delay times associated with shot i and station j; and V is the refractor velocity. This assumes that the delay-times are independent of azimuth and have the form

    ai = ∫₀^hi [1/v(z)² − 1/V²]^(1/2) dz        (2.3)

where hi is the depth to the refractor and v(z) is the overburden velocity-depth structure. An expression similar to equation 2.3 holds for bj. Introducing the slowness, U, for 1/V, and εij, the misfit or solution error, equation 2.2 becomes

    Tij = ai + bj + Xij U + εij        (2.4)

This equation is linear in the unknowns ai, bj, U, and εij. Now the problem must be formulated into the general linear programming notation (equation 2.1). There is a choice of objective functions.
(i) Σ|εij| : Minimize the sum of the absolute values of the misfit errors, the L1-norm.
(ii) Σ(ai + bj) : Maximize the sum of the time-terms.
(iii) Σ(ai + bj) : Minimize the sum of the time-terms.

The first objective function yields the "best fit" solution, while (ii) produces a "deepest" model with the fastest refractor velocity, and (iii) has a solution which is the "shallowest" with the slowest refractor velocity.

As constraints, there are the time-term equations (equation 2.4) and:

(i) Σεij = 0 : The sum of the misfit errors is zero.
(ii) ak = q : Fix one of the delay times from prior information, thus removing the ambiguity inherent in the time-term problem (Willmore and Bancroft, 1960).

If objective functions (ii) or (iii) have been chosen, then there is an additional constraint on the sum of the absolute values of the misfit errors. Parker and McNutt (1980) have shown that the L1-norm is related to the variance of the observational error (σ0²) in the following manner:

    E[ Σ|εij| ] = M σ0 (2/π)^(1/2)        (2.5)

where M is the total number of travel-times (see Appendix 2 for proof). From the sum of the absolute values of the misfit errors a solution variance, σ1², can be calculated. Following Whitcombe and Maguire (1979), the ratio

    F = σ1²/σ0²        (2.6)

indicates whether the resultant time-term model has overfitted (F < 1), adequately fitted (F ≈ 1), or underfitted (F > 1) the data. If the data are underfitted, the cause may be complex structure, in which case an iterative series of solutions might reduce the lack of fit (Bamford, 1973). To calculate the "deepest" and "shallowest" models the solution variance is constrained to be equal to the observational variance (equation 2.5). For these models F is forced to be unity; thus the models are the deepest and shallowest allowed by the misfit errors.

2.4 Test results

Two sets of synthetic data are generated and analyzed to exemplify different ways the linear programming approach to time-term analysis can be used.
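One property of the L1 objective is worth making concrete before the test results: because a linear program is optimized at a vertex of its constraint set, an L1 "best fit" in the two-parameter case (a single common delay time a and slowness U) passes exactly through at least two of the data points, so gross mispicks are left in the misfit rather than spread over the model. The following sketch, with hypothetical numbers not taken from the thesis, finds that fit by enumerating point pairs rather than by the Simplex Algorithm:

```python
from itertools import combinations

# Synthetic first arrivals T = a + X*U with a = 0.5 s, U = 1/6 s/km,
# plus one mispicked time at X = 30 km (hypothetical values).
X = [10.0, 20.0, 30.0, 40.0]
T = [0.5 + x / 6.0 for x in X]
T[2] = 5.8  # outlier: the exact value would be 5.5 s

def l1_misfit(a, u):
    return sum(abs(t - (a + x * u)) for x, t in zip(X, T))

# The L1-optimal (a, U) interpolates two data points, so enumerate pairs.
best = None
for i, j in combinations(range(len(X)), 2):
    u = (T[j] - T[i]) / (X[j] - X[i])
    a = T[i] - X[i] * u
    m = l1_misfit(a, u)
    if best is None or m < best[0]:
        best = (m, a, u)

misfit, a, u = best
print(f"a = {a:.3f} s, U = {u:.4f} s/km, L1 misfit = {misfit:.3f} s")
```

The recovered slowness equals the true 1/6 s/km: the entire 0.3 s pick error appears in the misfit, in contrast to a least-squares fit, which would distribute it over both parameters.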
Finally, refraction data from the northeast Pacific exhibiting Pn velocity anisotropy (Keen and Barrett, 1971) are reinterpreted using this method.

2.4.1 Without Refractor Anisotropy

Model 1: 2-layer, planar model

Model 1 is a simple model consisting of two layers. The upper layer has a velocity V = 3.5 km/s, over a lower isotropic layer with a velocity of 6.0 km/s. The second layer is at a depth of 1 km, corresponding to an ideal time-term surface at 0.232 s. A random error whose standard deviation is 0.01 s is added to the exact travel-time data calculated from the model. Results are shown in figure 2.3. A comparison between the L1 best fit (L1 "best") and the least squares results (L2 "best") indicates that the delay-time surfaces predicted are similar and are both close to the true input model (within 0.014 s). The linear programming approach has found a refractor velocity of 6.000 km/s while least squares (L2) predicts a velocity of 6.007 km/s. In both cases the data are overfitted (F < 1), and this is reflected by the random scatter of points around the true time-term surface.

Using linear programming, MAX FIT (the deepest model) and MIN FIT (the shallowest model) can be created. For these models, the variance of the solution errors was constrained to be the variance of the observational errors. These two solutions are forced to fit the data adequately, and bounds on the time-term surface and velocity are found. The refractor velocity ranges between 5.954 and 6.077 km/s. The time-term for station 4 is fixed at 0.232 ± 0.01 s.

2.4.2 With Refractor Anisotropy

Backus (1965) showed that refractor velocity anisotropy was possible and that it could be expressed as a series of sines and cosines of azimuth. Raitt et al. (1969) modified time-term analysis to allow for refractor anisotropy assuming a dependence of the form similar to equation 2.7, below. This shall be referred to as a Backus-type, or smooth, anisotropy.
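Model 1's ideal 0.232 s time-term follows directly from the delay-time expression (equation 2.3): for a constant overburden velocity v the integrand is constant, so the integral collapses to a = h (1/v² − 1/V²)^(1/2). A quick numerical check (Python used here purely for illustration):

```python
import math

v = 3.5   # overburden velocity, km/s (model 1)
V = 6.0   # refractor velocity, km/s
h = 1.0   # depth to the refractor, km

# Equation 2.3 with v(z) = v constant: a = h * sqrt(1/v^2 - 1/V^2)
a = h * math.sqrt(1.0 / v**2 - 1.0 / V**2)
print(f"time-term = {a:.3f} s")  # -> 0.232 s, the ideal surface for model 1
```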
Figure 2.3: Results from model 1 showing the true input surface at 0.232 s, the "best fit" surfaces and upper and lower bounds. The "best fit" surfaces are within 0.014 s of the input surface. The linear programming approach, L1 "best", calculated a refractor velocity of 6.000 km/s while a least squares approach, L2 "best", found a velocity of 6.007 km/s. The input velocity is 6.0 km/s. The bounds on the velocity are 5.954 to 6.077 km/s as calculated by MIN and MAX FIT.

Refractor velocity anisotropy is allowed for by assuming either of the following functions for Uij, the refractor slowness:

    Uij = U0 + A cos(2θij) + B cos(4θij) + C sin(2θij) + D sin(4θij)        (2.7)

where θij is the ray azimuth, or

    Uij = Un  where  nΘ ≤ θij < (n+1)Θ  and  Θ = 180/N        (2.8)

a sectioned anisotropy. The sectioned anisotropy designates all rays in the pie slice between nΘ and (n+1)Θ to have the slowness Un. Θ is an azimuthal digitization interval. This anisotropy function assumes no prior knowledge as to the form of the anisotropy; thus it can provide a check on the Backus-type anisotropy.

Model 2: 2-layer, pseudo-planar model

The geometry of model 2 is shown in figure 2.4. It consists of two layers, the upper having an isotropic velocity of 5.3 km/s. Layer 2 has an anisotropic velocity of the form:

    V2 = 8.0 + 0.5 sin(2θ)        (2.9)

Figure 2.4: Model 2 geometry. The circular shot pattern is recorded at the center station (circled) only. The cross pattern of shots is recorded at all five stations.

The depth to this interface is 14.0 ± 0.5 km. The cross pattern of shots has a station spacing of 10 km and it is recorded at all five stations.
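The two anisotropy parameterizations (equations 2.7 and 2.8) can be sketched as functions of azimuth. The coefficients below are illustrative only, not values from any of the models in this chapter:

```python
import math

def smooth_slowness(theta_deg, U0, A=0.0, B=0.0, C=0.0, D=0.0):
    """Backus-type smooth anisotropy (equation 2.7)."""
    t = math.radians(theta_deg)
    return (U0 + A * math.cos(2 * t) + B * math.cos(4 * t)
               + C * math.sin(2 * t) + D * math.sin(4 * t))

def section_index(theta_deg, N=20):
    """Sectioned anisotropy (equation 2.8): a ray with azimuth theta falls
    in section n, where n*Theta <= theta < (n+1)*Theta and Theta = 180/N."""
    return int((theta_deg % 180.0) // (180.0 / N))
```

With N = 20 sections the digitization interval is 9°, the value used for the sectioned reinterpretation of the Pacific data later in this section; a smooth model with only the C term reproduces a sin(2θ) slowness variation like that of equation 2.9.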
The circular shot pattern, which has a radius of 60 km, is recorded at the center station only. An error of 0.02 s is added to the travel-times. Figure 2.5 displays the results found. The true velocity curve is the dashed line (equation 2.9). The "best fit" curve is lower in amplitude, but this is a consequence of overfitting the data set (F = 0.16). The MAX FIT and MIN FIT curves, the "deepest" and "shallowest" models respectively, form an envelope which bounds the true velocity. Again, these models are forced to adequately fit the data (F = 1).

Pn refraction data from the northeast Pacific

The data being reinterpreted were acquired in 1970 and interpreted by Keen and Barrett (1971). The experiment consisted of two reversed profiles, and a circular shot pattern recorded at the center (figure 2.6). The reversed profiles ran roughly N-S and E-W, and the radius of the shot circle varied between 65 and 85 km. The stippled areas are magnetic anomaly patterns on the ocean floor from a preliminary Canadian Resource charting map. They run 15° E of N, and are assumed to be parallel to the ridge crest at the time of their formation. Previous authors have found minimum velocities in this direction and maximum velocities perpendicular to the ridge crest (in this study 15° S of E).

Figure 2.5: Results of model 2 showing the input velocity function (dashed line), the "best fit" solution (F = 0.16), L1 "best", and upper and lower bounds for the velocity (F = 1.00).

Figure 2.6: The Keen and Barrett (1971) study region. The shaded areas are magnetic lineations which strike 15° E of N. The refraction survey, indicated by the heavy solid lines, consists of a circular shot pattern and two reversed profiles.

Since there are no overlapping shots and receivers, the original time-term equations (equation 2.4) are modified.
The time-terms are replaced by one delay-time common to all shots and receivers. Consequently, equation 2.4 becomes

    Tij = a + Xij Uij + εij        (2.10)

Physically, this change assumes a horizontal refractor. The same assumption was made for ease of calculation by Keen and Barrett (1971) in their interpretation.

A comparison of results is shown in figure 2.7. The L1 "best" is the best fit solution from the linear programming approach. It has the same form as the Keen and Barrett curve, K&B(L2), except that the constant values of the slowness, U0, are different (equation 2.7). Keen and Barrett assume a constant velocity, 1/U0, of 8.07 km/s from the mean of the reversed profiles. The linear programming solution predicts a constant velocity of 7.92 km/s using the entire Pn arrival data set, including the reversed profiles. In both cases the data are overfitted. The maximum velocity occurs at 107°, or 17° S of E, as expected.

To this point only continuous, Backus-type anisotropy models have been presented. Figure 2.8 has a sectioned best fit result, L1 "sect", for comparison. The azimuthal space was divided into 20 sections, each section with a nine degree spread. There are, on average, four ray paths per section, except for the sections including the reversed profiles. The form is very similar to the "smooth" anisotropy solutions.

Figure 2.7: Results of the Pacific study presenting a comparison between Keen and Barrett's result, K&B(L2), a least squares interpretation, and the linear programming "best fit" result, L1 "best". The dotted lines are upper and lower bounds.

Figure 2.8: A comparison between "sectioned", "smooth" and Keen and Barrett's velocity anisotropy functions. The dashed line is Keen and Barrett's curve, K&B(L2) (F = 0.77).
Linear programming using the "smooth" anisotropy formulation found L1 "best" (F = 0.73), while a "sectioned" approach calculated L1 "sect" (F = 0.50). These curves are all "best fit" results.

The sectioned data are more overfitted (F = 0.50 as compared to F = 0.73 for the "best fit" model, and F = 0.77 for the K&B model), which can account for discrepancies. The "sectioned" anisotropy approach makes no prior assumptions as to the form of the anisotropy, and thus these results provide a positive check on a Backus-type assumption.

2.5 Summary

It has been shown, using three examples, that a linear programming approach to time-term analysis can solve for the delay-times and refractor velocity simultaneously. "Best fit" solutions are found by minimizing the L1-norm of the misfit errors. Other models can be generated by choosing different objective functions. In this manner, upper and lower bounds on both the time-terms and refractor velocities can be found. Overfitting of the data using a "best fit" solution might not necessarily create a "true" model, since errors in the travel-times can be absorbed by both the delay-times and velocities. For this reason, the generation of upper and lower bounds is an asset.

Refractor anisotropy has been handled by two methods: a continuous, Backus-type velocity function, and a "sectioned" anisotropy. The second type of anisotropy model makes no assumptions as to the form of the velocity function, but it gives results very similar to models generated using a Backus-type anisotropy.

The linear programming approach is more flexible than previous methods, but it does not overcome the inadequacies of time-term analysis and the complications arising from the interpretation of the delay-times as discussed by Whitcombe and Maguire (1979). Examples of these problems are presented in Appendix 3.

3.
Application of the Linear Programming Approach to Time-Term Analysis to Refraction Data from the Queen Charlotte Islands

3.1 The Experiment

In 1979, a detailed seismic experiment on and around the Queen Charlotte Islands was undertaken by the University of British Columbia in cooperation with the Earth Physics Branch at the Pacific Geoscience Centre. The experiment was designed to study the Queen Charlotte fault zone both by detailed seismicity and by an onshore-offshore seismic refraction program. The seismicity has been analyzed by Hyndman and Ellis (1981). A time-term analysis of the seismic refraction data is described in this chapter.

Figure 3.1 shows the positions of the ocean bottom seismometers (OBS's), land stations, large shots, and the location and orientation of the two shot lines. Also presented in this figure are the trace of the Queen Charlotte transform fault and the Queen Charlotte terrace, mentioned in the introduction. A total of 84 explosive charges ranging in size from 5 kg to 300 kg were fired during the experiment. Ex-1, the shot line perpendicular to the fault, had 31 shots, while Ex-2, which ran parallel to the fault over the Queen Charlotte trough, had 50 shots. The three large shots were all 300 kg in size. The seismic data were recorded at sea on three OBS's and on land at seven locations. The maximum shot-receiver separation is 112 km.

Figure 3.1: The Queen Charlotte Islands study area. In this figure the dashed line is the trace of the Queen Charlotte transform fault. The approximate location of the terrace is represented by the shaded region. The shot lines are shown as the solid lines and are labelled Ex-1 and Ex-2. The land stations are: a. GOW, b. VIC, c. FLM, d. SLM, e. HAR, f. IKE and g. LOU. The locations of the three OBS's are also shown.
3.2 Data Acquisition and Processing

The land stations consisted of five FM tape recording seismographs, two of which were moved to new locations during the experiment. The stations recorded 1 Hz vertical and horizontal geophones at different gains on slow-speed (15/160 ips) FM magnetic tape recorders. Filters on the amplifiers were set for a 1 - 12 Hz pass band. WWVB was recorded for the timing of the events. At a few stations the WWVB signal was very poor; thus some loss of data was incurred. The data were played out onto a chart recorder and then picked by hand. Playing the tapes at a speed of 15/16 ips onto a chart recorder at 200 mm/s resulted in a record at a scale of 20 mm/s real time. This enabled the data to be picked to ±15 ms at best.

The OBS's were of the design described by Lister and Lewis (1976). They recorded the three components of ground motion from 4.5 Hz geophones, plus a 20 Hz time code from an internal clock, onto a four-channel tape head at a recording speed of 1 mm/s. The frequency response of these units is bandlimited to 2 - 100 Hz. Processing of these data has been performed by James Horn following the procedures described by Cheung (1978) and Au (1981). The picks could be measured to ±5 ms.

The final steps in the processing sequence are the range and time corrections needed to remove effects of ocean floor topography. There are two methods to accomplish this: either (i) the shot and receiver are placed on a specific datum, accomplished by replacing or subtracting material with the appropriate velocity below and above the datum, or (ii) the travel-time and range to a particular interface are removed. Since the land stations are at sea level and the ocean depth is almost 3 km at some points, the first method is virtually impossible because it requires knowledge of the replacement velocities.
Instead, the water column was removed from the shots using a water velocity-depth function compiled by the National Oceanographic Data Centre for this study area at the time of year of the study. This curve is shown in figure 3.2. Rays were traced through the water column for various incident angles using a ray-tracing program (Whittall and Clowes, 1979). For shots on Ex-1 recorded at OBS 1, OBS 2 and land stations IKE and HAR, and for shots on Ex-2 recorded at OBS 3, the actual rays were traced to the ocean floor to find the true depth at the intersection, since detailed topography along the shot lines was known. For other shot-receiver pairs the depth under the shot was taken as the true depth. Using these depths and the output from the ray-tracing program, the water column was removed. In cases where the new p-value was substantially different from the originally calculated value, an iterative correction scheme was used. Sediment thicknesses were not available due to poor continuous seismic profile records and very deep sediments, so corrections to basement could not be made. Thus, interpretations include both topography and sediment thicknesses.

Figure 3.2: Velocity-depth curve for the water column compiled by the National Oceanographic Data Centre for the study area at the time of year of the study.

The travel-time errors were estimated at ±70 ms for distant shots (low p, steep angle of incidence) and up to ±150 ms for near shots (range less than 10 km). The majority of this error is due to the sea removal corrections. Errors in the range are ±200 m.

3.3 Interpretation

The entire corrected data set is presented in figure 3.3. It can be subdivided into four groups by ray parameter. Refractor 1 consists of all data with distances between 2-15 km and velocities in the 3-5.0 km/s range. The second refractor includes all data with p-values appropriate to 5.5-6.5 km/s and ranges between 15 and 35 km. At ranges greater than 35 km, there are two groups of arrivals.
The data whose rays do not pass through the Queen Charlotte terrace exhibit upper mantle velocities (V = 7.5-8.5 km/s), while rays that travel through the terrace exhibit lower velocities (V = 6.5 km/s). These subsets will be designated refractors 3a and 3b respectively.

Figure 3.3: The entire data set. The three distinct legs of the travel-time curve are labelled refractors 1 to 3b. Data from station SLM were not included because of poor timing.

3.3.1 Refractor 1.

Refractor 1 consists of all data with distances between 2-15 km. Originally, the entire subset was entered into the analysis, but the calculated depths and velocities seemed questionable due to complicated ray paths. Thus the data were further subdivided into shots on the terrace and shot-receiver pairs west of the terrace.

(i) Refractor 1a - The terrace data. The velocity calculated by the "best fit" option is 5.77 km/s. The time-terms were found using the assumption that the delay-times at OBS 1 and Ex-1 shot #28 are equal (their separation is 0.9 km). Depths to the refractor are obtained using an average sediment velocity of 2.3 km/s, but since some ray paths are at greater than 10° to the horizontal, an iterative approach was used. The standard deviation of the misfit errors increased at the first iteration (from 0.047 s to 0.051 s), and the refractor velocity increased to 6.03 km/s. Thus, the original calculation was used as the final model. Assuming that the travel-time errors are 0.07 s, the "best fit" model has overfitted the data; thus bounds on both the velocity and depths can be calculated. The lowest velocity allowed by the data is 4.6 km/s; the highest is 7.5 km/s. The "best fit" surface is shown in figure 3.4a.

(ii) Refractor 1b - Data from the deep ocean. This subset consists of shots from Ex-2 recorded at OBS 2 and OBS 3 that are within 15 km and have p-values associated with refractor velocities in the 3-5 km/s range.
The linear programming "best fit" approach found a refractor velocity of 5.15 km/s. Time-terms were "fixed" by smoothing the surface around OBS 3. The calculated surface has few slant ray paths (θ < 2°), so iteration is not necessary. Figure 3.4b represents the "best fit" surface. The "best fit" model was overfitted since F = 0.067. The range of refractor velocities allowed by the data is 4.0 to 8.3 km/s.

Figure 3.4: Refractor 1 "best fit" surfaces. a. (top) Refractor 1a (terrace data), Ex-1 profile. b. (bottom) Refractor 1b (oceanic data), Ex-2 profile. The depths have error bars of ±0.5 km as shown. The calculated refractor velocity varies from 5.2 to 5.8 km/s. The surface is the sediment-basement interface. The solid triangles are the locations of the OBS's and the numbers at individual points refer to shots along the profiles.

(iii) Interpretation. The refractor is interpreted as the sediment-basement interface. Its velocity of 5.2-5.8 km/s is somewhat higher than found in other studies in the region (V = 4.4-4.7 km/s, Au, 1981; V = 4.5 km/s, Keen and Barrett, 1971), but it is within the usual velocity range of layer 2 (V = 5.07±0.63 km/s, Christensen and Salisbury, 1975). The increased velocity is possibly due to the interpretation of turning rays, or it might be accounted for by increased metamorphism and deformation in the terrace. It is not significant given the large error bounds. The depths to the interface are consistent with results by other workers in the area. A 16 l airgun survey over OBS 1 and OBS 2 found sediment thicknesses of 1.0 km (Tjaden, 1981) and 3.3 km (Kirstiuk, 1981) respectively (compared to 0.8±0.1 km and 3.1±0.2 km here). Also, Ex-1 shot #12 ties with Ex-2 shot #26.
The depth to the interface at shot #12 is 3.7±0.8 km and at shot #26 it is 4.1±0.5 km.

3.3.2 Refractor 2.

All data with shot-receiver distances between 15-35 km make up refractor 2. This leg of the travel-time curve has p-values equivalent to a refractor velocity in the 5-7 km/s range. The data are subdivided into two groups, since the entire data set was inconsistent and gave erroneous results. The boundary dividing the two data groups is chosen as the Queen Charlotte transform fault. It separates the data into oceanic ray paths and continental ray paths (OBS 1 is considered continental due to its proximity to the fault). Refractor 2a includes all ray paths with the majority of the ray west of the transform fault, while continental rays are in refractor 2b.

It was originally planned that interpretation of the second refractor would be based on the interpretation of the first layer, but since there is a large error in the sediment thicknesses (±0.5 km on average), this becomes a huge error in the new thickness (on the order of ±3 km). Such an approach was judged meaningless. Instead, the following velocity models, based on the assumed average sediment velocity and a simplified version of the refractor 1 interpretation, will be used. The 5.5 km/s velocity beneath the lower terrace is assumed so as to make the transition from the oceanic velocity to the upper terrace velocity less abrupt:

    oceanic: 5.2 km/s    lower terrace: 5.5 km/s    upper terrace: 5.8 km/s

Shots #1-13 on Ex-1 and all shots on Ex-2 will be analyzed assuming oceanic structure. Lower terrace structure will be used to analyze Ex-1 shots #14-24 and OBS 2. Layer 2 thickness under OBS 1 and Ex-1 shots #25-31 will be found using upper terrace structure.

(i) Refractor 2a - Oceanic data. The time-terms are "fixed" on the assumption that OBS 3 and Ex-2 shot #35 have the same delay-times (they are 0.043 km apart).
The "best fit" model has a velocity of 6.5 km/s. Depths to the interface are calculated using the simplified structure discussed previously. Nearly all the ray paths have shallow angles (θ < 10°), so an iteration was not attempted. The largest velocity allowed by the data is 8.7 km/s, whereas the lower bound is 5.0 km/s. The "best fit" surfaces are shown in figures 3.5a and b. The depths are good to ±2.5 km. Only the lower bound could be calculated since the velocity for the upper bound is less than the assumed model velocity.

(ii) Refractor 2b - Continental data. The continental data set consisted of shots on Ex-1 recorded at the land stations IKE and HAR. The "best fit" model gave spurious results (negative layer thicknesses), possibly due to the complication of ray paths crossing the fault zone. This data set will not be interpreted further.

(iii) Interpretation. The refractor velocity of 6.5±1.5 km/s is interpreted as a layer 3 velocity. Although it is slightly low compared with normal values (6.7±0.3 km/s, Christensen and Salisbury, 1975), it is within the large error bounds. The drop in depth under the terrace (4.0 km) is significantly larger than the ±2.5 km error bounds calculated.

Figure 3.5: Refractor 2 "best fit" surfaces. a. (top) Refractor 2, Ex-1 profile. b. (bottom) Refractor 2, Ex-2 profile. The depths have error bars of ±2.5 km as shown. The calculated refractor velocity is 6.5 km/s. The surface is interpreted as the layer 2-layer 3 interface.

The layer's thickness (6.9 km average) agrees with the 5.7 km of 4.7 km/s material postulated by Hyndman and Ellis (1981). The average thickness of layer 2 under the ocean floor is 2.2 km (Keen and Barrett, 1971, found a 2.0±0.4 km average thickness). The sharp decrease in depth at the extreme end of figure 3.5b is possibly due to continental material east of the Queen Charlotte transform fault.
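Depths such as those quoted above come from converting each station's delay-time to a refractor depth. For a single overburden layer the standard head-wave delay-time relation is a = z·sqrt(V2² - V1²)/(V1·V2); the sketch below simply inverts it. The 0.5 s delay-time is illustrative only; the velocities echo the sediment and refractor 1 values from the text.

```python
import math

def delay_time_to_depth(a, v1, v2):
    """Convert a time-term (delay-time) a [s] to refractor depth [km]
    beneath a station, for a single overburden layer of velocity v1
    over a refractor of velocity v2, using the head-wave relation
    a = z * sqrt(v2^2 - v1^2) / (v1 * v2)."""
    if v2 <= v1:
        raise ValueError("refractor must be faster than overburden")
    return a * v1 * v2 / math.sqrt(v2**2 - v1**2)

# Illustrative only: 0.5 s delay-time with 2.3 km/s sediments over a
# 5.2 km/s basement, roughly the refractor 1b situation in the text
z = delay_time_to_depth(0.5, 2.3, 5.2)
```

Round-tripping the computed depth through the forward relation returns the original delay-time, which is a quick consistency check on the algebra.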
3.3.3 Refractors 3a and b.

Refractors 3a and b consist of all shot-receiver pairs with separation greater than 35 km. There are two definite legs on the travel-time curve. Figure 3.6 shows Ex-2 shots recorded at IKE and FLM. Refractor 3a corresponds to the data that exhibit a refractor velocity V > 7.5 km/s, while refractor 3b corresponds to the remaining data. The velocity models used to calculate the depths to this refractor are:

    oceanic:     2.3 km/s (1.4 km) / 5.2 km/s (2.2 km) / 6.5 km/s
    terrace:     2.3 km/s (1.8 km) / 5.5 km/s (6.8 km) / 6.5 km/s
    continental: 5.9 km/s (9.0 km) / 6.3 km/s (4.0 km) / 6.9 km/s

Figure 3.6: Corrected data from IKE and FLM showing the two distinct legs at ranges greater than 35 km. Refractor 3a has V > 7.5 km/s, while refractor 3b has V = 6.1 km/s. The apparent velocities labelled on the record sections are Vnorth ≈ 6.1 km/s and Vsouth ≈ 8.4-8.5 km/s.

The oceanic and terrace models are based on simplified versions of the interpretations of the first two layers, while the continental velocity model is taken from Johnson et al. (1972). All shots on Ex-2 and shots #1-17 on Ex-1 are considered oceanic. Under OBS 1 the terrace velocity model is assumed, and the continental velocity model is assumed under all land stations.

(i) Refractor 3a. This subset consists of Ex-1 shots #1-14 and Ex-2 shots #1-17 recorded at OBS 1, OBS 2, OBS 3, and all the land stations. These shots are at distances greater than 35 km and exhibit a ray parameter associated with upper mantle arrivals (V = 7.8-8.2 km/s). The L1 "best fit" solution found a velocity of 7.5 km/s, which is substantially lower than other studies in the area. Keen and Barrett (1971), whose data have been reinterpreted in this thesis, found an anisotropic velocity of 7.8-8.3 km/s. Shor (1962) and Milne (1964) found Pn arrivals with velocities of 8.3 and 8.1 km/s respectively.
The velocity bounds are 7.2 km/s < V < 7.9 km/s. The time-terms were fixed on the assumption that the delay-time at OBS 3 equals the average delay-time on Ex-2. The depths to the interface are shown in figure 3.7. They are poorly constrained, to ±4 km.

Figure 3.7: Refractor 3a "best fit" surfaces. a. (top) Ex-1 profile, showing false dip due to too low a velocity. b. (bottom) Ex-2 profile. Error bounds shown are ±4 km.

(ii) Interpretation of Refractor 3a. The pronounced thinning at the distant shots (Ex-1 #1-5, Ex-2 #1-5) and the low velocity seem suspicious. If the velocity is too low, then the slowness U in equation 2.4 is too large. Thus there will be too much time associated with the X_ij·U term and the delay-times will be too small (the interface too shallow), with the thinning increasing with distance. It is felt that the structure is false for this reason.

Up to this point, the linear programming algorithm had no constraints on the refractor velocity. When using the "best fit" option, the program minimizes the absolute value of the misfit errors, regardless of the velocity calculated. For this reason, the program was altered to restrict solutions to reasonable velocities. As well, the operator can now force the solution to the velocity of his/her choice.

The model with the refractor velocity of 7.92 km/s is presented in figure 3.8. This velocity was chosen since it was the constant term found using a linear programming approach to time-term analysis on the Keen and Barrett data; it is also the upper bound on the velocity. Again, the time-term of OBS 3 was smoothed so as to equal the average delay-time on Ex-2. Using the terrace velocity model given above, the calculated delay-times of OBS 1 and OBS 2 convert to layer 3 thicknesses that are unreasonably small or negative; thus a minor change in this model is required.
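The conversion from a station's total delay-time to a layer 3 thickness, given fixed shallow structure, can be sketched as follows. Each overburden layer contributes z·sqrt(Vr² - V²)/(V·Vr) to the delay-time; the residual after the fixed layers is divided by the per-kilometre contribution of the 6.5 km/s layer. The 1.9 s delay-time is illustrative only; the terrace model values and the 7.9 km/s mantle velocity are the assumed values from the text.

```python
import math

def delay_contribution(z, v, v_ref):
    """Delay-time contributed by a layer of thickness z [km] and
    velocity v, for a head wave travelling at v_ref below."""
    return z * math.sqrt(v_ref**2 - v**2) / (v * v_ref)

def residual_thickness(a, fixed_layers, v_layer, v_ref):
    """Thickness of the deepest layer above the refractor, given a
    station's total delay-time a [s] and the fixed shallow structure
    as a list of (thickness_km, velocity_km_per_s) pairs."""
    a_fixed = sum(delay_contribution(z, v, v_ref) for z, v in fixed_layers)
    return (a - a_fixed) / delay_contribution(1.0, v_layer, v_ref)

# Terrace model from the text: 1.8 km of 2.3 km/s sediments and 6.8 km
# of 5.5 km/s material over 6.5 km/s layer 3, mantle assumed 7.9 km/s.
# The 1.9 s total delay-time is a hypothetical value for illustration.
z3 = residual_thickness(1.9, [(1.8, 2.3), (6.8, 5.5)], 6.5, 7.9)
```

With these numbers z3 comes out near 3 km; a smaller delay-time quickly drives the computed layer 3 thickness negative, which is exactly the behaviour that motivates the adjustment of the terrace model in the text.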
If the thickness of the sediments is reduced to 1.5 km from 1.8 km, and the thickness of layer 2 to 6 km from 6.8 km, then under OBS's 1 and 2 layer 3 is ~3 km thick. The average thickness along Ex-1 is 4.4 km. This is thinner than Keen and Barrett's 5.22 km (Keen and Barrett, 1971), but thicker than the 4.0 km found by Shor (1962). The mantle is at 10 km depth under the ocean, increasing to ~12.5 km under the terrace and ~17 km beneath the Queen Charlotte Islands.

Figure 3.8: Refractor 3a surfaces found with the velocity forced to 7.9 km/s. a. (top) Ex-1 profile. b. (bottom) Ex-2 profile. These surfaces are interpreted as the layer 3-mantle interface.

(iii) Refractor 3b. Refractor 3b includes shots whose range is greater than 35 km and which display a slow velocity. This subset consists of Ex-2 shots #24-50 recorded at IKE, FLM, VIC, LOU, OBS 1, and OBS 2. All these travel paths spend the majority of their time in the terrace. The "best fit" solution has a velocity of 8.8 km/s with bounds between 6.0 and 18.5 km/s. These estimates are made assuming that the standard deviation of the observational errors is 70 ms. Such a huge possible range of velocities makes an interpretation meaningless, so an interpretation will be made with the assumption that the refractor is equivalent to layer 3 (V = 6.5 km/s); using the velocity models given earlier, depths are calculated. The calculated surface is shown in figure 3.9. The depth to the interface is ~10 km and the layer with a velocity of 5.5 km/s has an average thickness of 6.0 km. The depth and thickness are greater than previously found for layer 3 (figure 3.5) except at the north end of Ex-2.
The rays for refractor 3b have ~9 km offsets; thus their travel paths have turning points beneath the Queen Charlotte terrace. As a result, the thickness found is the thickness of the terrace, and it is consistent with the assumption made to interpret refractor 3a (mantle arrivals) and with assumptions made by Hyndman and Ellis (1981). The bounds on this interface are very large (±10 km), and the solution has been forced near to the upper bound.

Figure 3.9: Refractor 3b surface with the velocity forced to 6.5 km/s.

3.4 Conclusion

The entire model for the Queen Charlotte fault zone is shown in figure 3.10. The first layer (V = 2.3 km/s) comprises the sediments. Its average thickness west of the Queen Charlotte terrace is 1.4 km. Beneath the terrace, the sediments range in thickness from 0.5 km at the western edge to 3.2 km under OBS 2. At OBS 1 they are on average 1.2 km thick.

The second layer has a velocity of 5.2 to 5.8 km/s. West of the terrace it is interpreted as oceanic layer 2, composed of pillow basalts and extrusives (Christensen and Salisbury, 1975), but the terrace material cannot easily be assigned a rock type since there are poor constraints on its velocity (V = 5.8±1.5 km/s). As discussed earlier, the author has chosen a velocity of 5.5 km/s and suggests that this unit is composed of layer 2 material which has undergone compressional deformation, resulting in an increased velocity and thickening. The thickness varies dramatically from 2.2 km west of the terrace (Ex-2 profile) to 4.5 km below OBS 2, increasing to about 8 km at the eastern limit of the terrace. Hyndman and Ellis (1981) hypothesized this thickening on the basis of gravity and magnetic data.

Layer 3 (V = 6.5 km/s) is composed of sheeted dikes, metagabbros and pyroxenes (Christensen and Salisbury, 1975).
Its thickness along Ex-1 increases from 3.2 km at the western end to 5 km just west of the terrace, consistent with the thickness at the crossing point of Ex-2. Below the terrace, layer 3 thins to 3 km. The velocity and thickness of this layer compare favourably with other results from the vicinity (Shor, 1962).

Figure 3.10: Entire interpretation. a. (top) Ex-1 profile showing the position of the two faults, the thickening of layer 2, and the increasing depth of the mantle under the islands. b. (bottom) Ex-2 profile.

The mantle (V = 7.9 km/s) presents evidence for oblique subduction: its depth increases from 10 km to 12.5 km under the terrace and finally to 17 km beneath the Queen Charlotte Islands. This result suggests that the mantle dips about 15° landward. The Queen Charlotte transform fault is interpreted to extend at least to a depth of 17 km, creating a discontinuity in this horizon. Model 5b of appendix 3 shows that faulted surfaces are not well resolved by the time-term method, so a larger discontinuity is still a possibility. It is hypothesized that the oblique subduction (1.5 cm/yr perpendicular to the coast) suggested by the global plate solution RM2 (Minster and Jordan, 1978) creates a dipping mantle and a thickening of layer 2 beneath the Queen Charlotte terrace, thus illuminating two peculiarities of the Queen Charlotte fault zone.

4. Summary

It has been shown that a linear programming approach to time-term analysis can solve for the delay-times and refractor velocity simultaneously. It is more flexible than other approaches since various models can be generated by choosing different objective functions. "Best fit" solutions are found by minimizing the L1-norm of the misfit errors, and bounds for both the delay-times and refractor velocity are calculated by maximizing or minimizing the sum of the time-terms.
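The two solution types just summarized can be sketched with a generic LP solver. This is not the thesis' own program: the geometry, travel-times, and error tolerance below are synthetic, and the velocity restriction is implemented here simply as bounds on the slowness. The "best fit" minimizes the L1 misfit by splitting each residual into two non-negative parts; the bounds are obtained by maximizing or minimizing the sum of the time-terms with the travel-time equations, loosened by the error tolerance, as constraints.

```python
import numpy as np
from scipy.optimize import linprog

def design(src, rec, dist, n_sta):
    """One row a_i + a_j + dist*u per shot-receiver observation."""
    A = np.zeros((len(dist), n_sta + 1))
    for k, (i, j) in enumerate(zip(src, rec)):
        A[k, i] += 1.0
        A[k, j] += 1.0
        A[k, n_sta] = dist[k]          # coefficient of the slowness u
    return A

def best_fit(src, rec, dist, t, n_sta, v_lo=None, v_hi=None):
    """L1 'best fit': minimize sum(|misfit|) via split residuals,
    optionally restricting the refractor velocity via bounds on u."""
    m = len(t)
    A_eq = np.hstack([design(src, rec, dist, n_sta), np.eye(m), -np.eye(m)])
    c = np.concatenate([np.zeros(n_sta + 1), np.ones(2 * m)])
    u_bounds = (0.0 if v_hi is None else 1.0 / v_hi,
                None if v_lo is None else 1.0 / v_lo)
    bounds = [(0, None)] * n_sta + [u_bounds] + [(0, None)] * (2 * m)
    r = linprog(c, A_eq=A_eq, b_eq=t, bounds=bounds, method="highs")
    return r.x[:n_sta], 1.0 / r.x[n_sta]

def bound_sum(src, rec, dist, t, n_sta, tol, sense):
    """Bound the solution by min/maximizing the sum of the time-terms,
    with every travel-time equation satisfied to within +/- tol."""
    A = design(src, rec, dist, n_sta)
    A_ub = np.vstack([A, -A])                      # |A x - t| <= tol
    b_ub = np.concatenate([t + tol, tol - t])
    c = np.concatenate([np.full(n_sta, 1.0 if sense == "min" else -1.0),
                        [0.0]])
    r = linprog(c, A_ub=A_ub, b_ub=b_ub,
                bounds=[(0, None)] * (n_sta + 1), method="highs")
    return r.x[:n_sta], r.x[n_sta]

# Synthetic example: 4 stations, 6 pairs, known time-terms, V = 8 km/s
src = [0, 0, 0, 1, 1, 2]
rec = [1, 2, 3, 2, 3, 3]
dist = np.array([40.0, 55.0, 70.0, 35.0, 52.0, 30.0])
a_true = np.array([0.4, 0.5, 0.6, 0.7])
t = a_true[src] + a_true[rec] + dist * 0.125
a_fit, v_fit = best_fit(src, rec, dist, t, 4, v_lo=7.0, v_hi=9.0)
a_min, _ = bound_sum(src, rec, dist, t, 4, tol=0.05, sense="min")
a_max, _ = bound_sum(src, rec, dist, t, 4, tol=0.05, sense="max")
```

With noise-free data and a geometry of full rank, the L1 fit recovers the true model exactly, and the true sum of time-terms falls between the minimized and maximized sums, mirroring the bounding strategy of the thesis.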
The interpreter has added flexibility since two types of velocity anisotropy functions can be chosen: either a smooth, Backus-type anisotropy or a "sectioned" anisotropy can be used to generate models.

The data from the Queen Charlotte Islands, when analyzed using time-term analysis, exemplified the major difficulty of the method: time-term analysis is not robust enough. Many hours were spent organizing the data into subsets that were thought to be consistent with the fundamental assumptions of time-term analysis (discussed in appendix 3). Even then, as was shown by refractors 3a and b, the "best fit" solution calculated models that were not plausible. Only by altering the algorithm, so that the interpreter could more rigidly confine the solution (i.e. by choosing a velocity), were viable results obtained.

The interpretation of the Queen Charlotte data pushes time-term analysis to its limit. Many of the calculated surfaces have depths resolved only to ±3 km. The complicated geology of the region suggests that the ray-paths are too complicated to be analyzed properly by the simple approach of time-term analysis. For these reasons, the model suggested must be considered a speculative one. It might be used as a starting model for ray-tracing, but it should not be considered a final model of the crustal structure.

The linear programming approach is better than previous methods of time-term analysis because of its greater flexibility. It unfortunately shows that time-term analysis has poor resolution of both the delay-times and the refractor velocity. Though it may still be the only way to properly solve for refractor anisotropy, the large range of possible solutions, the amount of interpreter interaction required, and the fundamental assumptions make time-term analysis a rather poor method for interpreting seismic refraction data from geologically complicated regions.

References

Atwater, T., 1970.
Implications of plate tectonics for the Cenozoic tectonic evolution of western North America, Bull. Geol. Soc. Am., 81, 3513-3536.

Au, C. Y. D., 1981. Crustal structure from an ocean bottom seismometer survey of the Nootka fault zone, Ph.D. thesis, Univ. of British Columbia, Vancouver, 131 pp.

Backus, G. E., 1965. Possible forms of seismic anisotropy of the uppermost mantle under oceans, J. Geophys. Res., 70, 3429-3439.

Bamford, D., 1976. MOZAIC time-term analysis, Geophys. J., 44, 433-446.

Chandra, U., 1974. Seismicity, earthquake mechanisms and tectonics along the west coast of North America, from 42° to 61°N, Bull. Seism. Soc. Am., 64, 1529-1549.

Chase, R. L., and Tiffin, D. L., 1972. Queen Charlotte fault zone, British Columbia. 24th International Geological Congress, Vancouver, B.C., Marine Geology and Geophysics Section, Vol. 8, pp. 17-28.

Chase, R. L., Tiffin, D. L., and Murray, J. W., 1975. The western Canadian continental margin. In: Canada's continental margins and offshore petroleum exploration, edited by C. J. Yorath, E. R. Parker, and D. J. Glass, Can. Soc. Petr. Geol., Memoir 4, 701-721.

Cheung, H. P. Y., 1978. Crustal structure near Explorer ridge: ocean bottom seismometer results parallel to Revere-Dellwood fracture zone, M.Sc. thesis, Univ. of British Columbia, Vancouver, 80 pp.

Christensen, N., and Salisbury, M. H., 1975. Structure and constitution of the lower oceanic crust, Rev. Geophys. Space Phys., 13, 57-86.

Cooper, L., and Steinberg, D., 1970. Introduction to Methods of Optimization, W. B. Saunders Company, Toronto, 381 pp.

Currie, R. G., Stevens, L. E., Tiffin, D. L., and Riddihough, R. P., 1980. Marine geophysical survey off the Queen Charlotte Islands (abstract), EOS, 61, 71.

Grant, F. S., and West, G. F., 1965. Interpretation Theory in Applied Geophysics, McGraw-Hill, New York, p. 151.

Hodgson, J. H., and Milne, W. G., 1951. Direction of faulting in certain earthquakes of the north Pacific, Bull. Seism. Soc. Am., 44, 221-242.

Hyndman, R. D., and Ellis, R. M., 1981. Queen Charlotte fault zone: microearthquakes from a temporary array of land stations and ocean bottom seismographs, Can. J. Earth Sci., 18, 776-788.

Hyndman, R. D., Wright, J. A., and Burgess, M., 1981. Queen Charlotte fault zone: heat flow measurements, for submission to Can. J. Earth Sci.

Johnson, S. H., Couch, R. W., Gemperle, M., and Banks, E. R., 1972. Seismic refraction measurements in southeast Alaska and western British Columbia, Can. J. Earth Sci., 9, 1756-1765.

Keen, C. E., and Barrett, D. L., 1971. A measurement of seismic anisotropy in the northeast Pacific, Can. J. Earth Sci., 8, 1056-1064.

Keen, C. E., and Hyndman, R. D., 1979. Geophysical review of the continental margins of eastern and western Canada, Can. J. Earth Sci., 16, 712-747.

Kirstiuk, S., 1981. A preliminary investigation into the structure of the upper crust in the terrace region of the Queen Charlotte fault zone using seismic refraction data, B.Sc. thesis, Univ. of British Columbia, Vancouver, 77 pp.

Lister, C. R. B., and Lewis, B. T. R., 1976. An ocean-bottom seismometer suitable for arrays, Deep Sea Research, 23, 113-124.

Milne, A. R., 1964. Two seismic refraction measurements: North Pacific basin and Dixon Entrance, Bull. Seism. Soc. Am., 54, 41-50.

Minster, J. B., and Jordan, T. H., 1978. Present-day plate motions, J. Geophys. Res., 83, 5331-5354.

Morris, G. B., 1972. Delay-time-function method and its application to the Lake Superior refraction data, J. Geophys. Res., 74, 3095-3109.

Parker, R. L., and McNutt, M. K., 1980. Statistics for the one-norm misfit error, J. Geophys. Res., 85, 4429-4430.

Raitt, R. W., Shor, G. G., Francis, T. J. G., and Morris, G. B., 1969. Anisotropy of the Pacific upper mantle, J. Geophys. Res., 74, 297-314.

Reiter, L., 1970. An investigation into the time-term method in refraction seismology, Bull. Seism. Soc. Am., 60, 1-13.

Rothenberg, R., 1979. Linear Programming, North Holland Inc., New York, 379 pp.

Scheidegger, A. E., and Willmore, P. L., 1957. The use of a least squares method for the interpretation of data from seismic surveys, Geophysics, 22, 9-22.

Shor, G. G., 1962. Seismic refraction studies off the coast of Alaska: 1956-1957, Bull. Seism. Soc. Am., 52, 37-57.

Smith, T. J., Steinhart, J. S., and Aldrich, L. J., 1966. Lake Superior crustal structure, J. Geophys. Res., 71, 1141-1172.

Tjaden, G., 1981. Seismic modelling of refraction data from the shelf off the Queen Charlotte Islands, B.Sc. thesis, Univ. of British Columbia, Vancouver, 48 pp.

Vine, F. J., 1968. Magnetic anomalies associated with mid-ocean ridges. In: The History of the Earth's Crust, edited by R. A. Phinney, Princeton University Press, Princeton, pp. 73-89.

Whitcombe, D. N., and Maguire, P. K. H., 1979. The response of the time-term method to simulated crustal structures, Bull. Seism. Soc. Am., 69, 1455-1473.

Whittall, K., and Clowes, R. M., 1979. A simple, efficient method for the calculation of traveltimes and ray paths in laterally inhomogeneous media, J. Can. Soc. Expl. Geophys., 15, 21-29.

Willmore, P. L., and Bancroft, A. M., 1960. The time-term approach to refraction seismology, Geophys. J., 3, 419-432.

Additional Bibliography

Bamford, D., 1971. An interpretation of first arrival data from the Continental Margin Refraction Experiment, Geophys. J., 24, 213-229.

Bamford, D., 1973. An example of the iterative approach to time-term analysis, Geophys. J., 31, 365-372.

Riddihough, R. P., Currie, R. G., and Hyndman, R. D., 1980. The Dellwood knolls and their role in the triple junction tectonics off northern Vancouver Island, Can. J. Earth Sci., 17, 577-593.

Yorath, C. J., and Chase, R. L., 1981. Tectonic history of the Queen Charlotte Islands and adjacent areas - a model, submitted to Can. J. Earth Sci.
Appendix 1: On Linear Programming and the Simplex Algorithm

The standard linear programming problem has the formulation:

    Maximize:    u = c1 x1 + c2 x2 + ... + cn xn
    Subject to:  a11 x1 + a12 x2 + ... + a1n xn = b1
                 a21 x1 + a22 x2 + ... + a2n xn = b2
                 ...                                            (A.1)
                 am1 x1 + am2 x2 + ... + amn xn = bm
                 x1 >= 0, x2 >= 0, ..., xn >= 0

or, in matrix form,

    Maximize:    u = c . x
    Subject to:  A x = b                                        (A.2)
                 x >= 0

Problems involving inequalities are reformulated using slack variables as follows:

(i) Greater than or equal to: A x >= b can be replaced by

    sum_j aij xj - x(n+i) = bi,    x(n+i) >= 0,    i = 1 ... m

(ii) Less than or equal to: A x <= b becomes

    sum_j aij xj + x(n+i) = bi,    x(n+i) >= 0,    i = 1 ... m

where the x(n+i) are slack variables. Variables which might be less than zero are reformulated as the difference of two non-negative variables. In this manner, all linear programming problems can be put into the standard formulation.

It was stated earlier that the maximum of the objective function occurs at an extreme point of the constraint set, Sc, formed by the intersection of the constraint closed half-spaces. A rigorous proof of this statement calls upon some theorems from convex set theory, which will be stated; the reader is referred to the cited authors for proofs.

Definition: A set S in n-space is convex if the straight line joining any two points of S is contained entirely in S (Rothenberg, 1979, p. 51).

Theorem 1: The intersection of any number of convex sets is itself a convex set (Rothenberg, 1979, p. 54).

Theorem 2: Hyperplanes, open half-spaces, and closed half-spaces are all convex (Rothenberg, 1979, p. 60).

Theorem 3: If a closed, bounded convex set has a finite number of extreme points, then any point in the set can be written as a convex combination of extreme points (Rothenberg, 1979, pp. 74-75; see also Cooper and Steinberg, 1970, p. 75).

In the context of the linear programming problem the constraining functions are convex sets (theorem 2); thus their intersection set is also convex (theorem 1).
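The slack-variable reformulation can be illustrated with a small numerical example. The two-variable problem below is invented for illustration; appending one slack variable per inequality converts it to the standard equality form, and an off-the-shelf solver then finds the optimum at an extreme point of the constraint set.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical problem: maximize u = 3*x1 + 2*x2
# subject to x1 + x2 <= 4, x1 + 3*x2 <= 6, x1, x2 >= 0.
c = np.array([3.0, 2.0])
A = np.array([[1.0, 1.0], [1.0, 3.0]])
b = np.array([4.0, 6.0])

# Standard form: A x + s = b with slack variables s >= 0
# (reformulation (ii) above).
A_std = np.hstack([A, np.eye(2)])
c_std = np.concatenate([c, np.zeros(2)])

# linprog minimizes, so negate the objective to maximize u.
res = linprog(-c_std, A_eq=A_std, b_eq=b,
              bounds=[(0, None)] * 4, method="highs")
u_max = -res.fun
```

The optimum u = 12 is attained at the extreme point (x1, x2) = (4, 0), with the first slack at zero (the first constraint is active) and the second slack taking up the remaining room in the second constraint.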
Using theorem 3, one can show that the maximum of the objective function occurs at an extreme point of the constraint set, which is theorem 4.

Theorem 4: Suppose that the constraint set Sc of the standard linear programming problem of maximum type is bounded. Then the maximum value of u(x), the objective function, is attained at an extreme point of Sc (Rothenberg, 1979, pp. 76-82).

So it is proven that the solution lies at an extreme point. A linear programming problem is stated as

    Max:         u = c . x
    constraints: A x = b,  x >= 0

where A is an m x n matrix, c and x are n-vectors, and b is an m-vector. Let a_j be the column vectors of A. If the first m columns of A constitute a basis of the solution space, then any column can be written in terms of the basis:

    a_j = sum_i x_ij a_i,    j = 1, ..., n                      (A.3)

A can be partitioned into the basis matrix B and a non-basis matrix N, and x can be partitioned into corresponding parts x_B and x_N, such that A x = B x_B + N x_N = b; but by definition x_N = 0, therefore

    B x_B = b    or    x_B = B^-1 b                             (A.4)

Similarly,

    u = c . x = c_B . x_B + c_N . x_N = c_B . x_B               (A.5)

Thus, if we have a solution to the constraints, it is a "basic feasible" solution, x_B. It must then be proven that this solution optimizes u, or a new basis and solution which improves u must be found. To do this, one removes a vector from the basis and replaces it by a non-basis vector; geometrically, one is moving from one extreme point to another. Mathematically, let a_j be a vector not in the basis that we wish to swap with the basis vector a_r, where a_r is chosen such that

    a_j = sum_i x_ij a_i,    x_rj != 0                          (A.6)

Equation A.4 then becomes

    A' x'_B = b                                                 (A.7)

with the vector a_r replaced in the basis by a_j. Rearranging A.6,

    a_r = (1 / x_rj) ( a_j - sum_{i != r} x_ij a_i )            (A.8)

The vector swap must be made such that the new basic variables remain non-negative:

    x'_Bi >= 0                                                  (A.9)
    x'_Br >= 0                                                  (A.10)

From A.9, if x_ij > 0 one must choose the leaving vector a_r such that A.8 remains positive; choose

    x_Br / x_rj = min_i { x_Bi / x_ij : x_ij > 0 }              (A.11)

The choice must also be made so that u, the objective function, increases. Originally (from A.5)

    u = c_B . x_B = sum_i c_Bi x_Bi

and for the new basis

    u' = c'_B . x'_B                                            (A.12)

which can be written in terms of the old solution as

    u' = u + (c_j - u_j) x_Br / x_rj                            (A.13)

where u_j = sum_i c_Bi x_ij. The largest improvement in the objective function is wanted; therefore choose the entering vector a_j such that

    (c_j - u_j) x_Br / x_rj                                     (A.14)

is a maximum. These are the basic procedures of the Simplex Algorithm. The complications of degeneracy (an x_Bi = 0) and unboundedness will not be discussed here; the reader is referred to Cooper and Steinberg (1970, pp. 207-214) for a complete treatment of these problems.

Appendix 2: The expected value of the one-norm for a normally distributed variable

The expected value of the one-norm is

    E(|x|) = int_{-inf}^{inf} |x| f(x) dx                       (A.15)

If x is normally distributed with mean xbar and standard deviation sigma, then

    f(x) = (1 / (sigma sqrt(2 pi))) exp( -(x - xbar)^2 / (2 sigma^2) )   (A.16)

Substituting u = x - xbar and taking xbar = 0 for the misfit errors,

    E(|u|) = (2 / (sigma sqrt(2 pi))) int_0^inf u exp( -u^2 / (2 sigma^2) ) du   (A.17, A.18)
           = sigma sqrt(2 / pi)                                 (A.19)

Q.E.D.

Appendix 3: Difficulties with Time-Term Analysis

The assumptions made by time-term analysis are: (i) the travel-times being analyzed are head wave arrivals, (ii) the refracting layer is sub-horizontal, and (iii) the delay-times are independent of azimuth. The first assumption limits interpretations to interfaces across which there are sharp velocity contrasts (the sediment-basement interface, the Moho). The other two assumptions do not hold in areas of complicated geology. Also, with average coverage there is the possibility that travel-times not associated with the refractor might be included in the analysis. These factors can lead to incorrect interpretations. This appendix demonstrates possible difficulties, using synthetic data created from simple models by a ray-tracing program (Whittall and Clowes, 1979). The models consist of eleven stations at 1 km separation which record two shots, one at either end of the array. The data generated are perfect (i.e. noise has not been added).
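The appendix 2 result, E(|x|) = sigma * sqrt(2/pi) for a zero-mean normal variable, is easy to check numerically. The Monte Carlo sketch below is only a sanity check; the 0.07 s value simply echoes the 70 ms travel-time error quoted in chapter 3.

```python
import math
import random

# Monte Carlo check of E(|x|) = sigma * sqrt(2/pi) for x ~ N(0, sigma^2)
random.seed(1)
sigma = 0.07                     # e.g. the 70 ms travel-time error
n = 200_000
mean_abs = sum(abs(random.gauss(0.0, sigma)) for _ in range(n)) / n
expected = sigma * math.sqrt(2.0 / math.pi)
# mean_abs agrees with expected to within Monte Carlo error (~0.2%)
```

This identity is what lets an L1 misfit be compared against an assumed Gaussian observational error level when deciding whether a "best fit" model has overfitted the data.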
In all cases the delay-times are smoothed so that the time-term at shot #1 equals the delay time at station #1. The shot and receiver configuration is shown in figure A.1.

Figure A.1: Shot, *, and station, •, configuration: eleven stations at 1 km spacing, with a shot at either end of the array.

A. Model 3.

Model 3 is designed to show the effect produced if assumption (i) above is not valid. The model consists of an upper layer with a constant velocity (V1 = 2.3 km/s) over an infinite half-space which has a velocity gradient (V2 = 4.7 + 0.85z km/s). The top of the half-space is at 1 km depth. Head waves are generated by the interface, but the turning rays are the first arrivals and thus are what is interpreted. The travel-times and ray paths for shot #1 are shown in figure A.2 (a and b). Figure A.2c presents the results found. The calculated V2 is 5.006 km/s. The standard deviation of the misfit errors is 0.001 s. The slight curvature of the predicted interface is a result of assumption (i) being false.

Figure A.2: Model with a velocity gradient. a. (top) Reduced arrival times (Vred = 4.7 km/s). X's are head waves and • are reflection and turning ray arrivals. b. (middle) Ray-tracing of the input model. c. (bottom) Results of time-term analysis. The line is the input model; o are the predicted depths to the interface.

B. Model 4a, b, c.

The effect of sloping interfaces is investigated in model 4. It consists of a constant V1 = 2.3 km/s layer, 1 km thick, over a second layer, V2 = 4.7 km/s. The interface slopes at various angles (α = 10°, 20°, 25°). The slope was limited to 25° maximum due to limitations in the ray-tracing program. The results are presented in figures A.3a, b and c. For low angles (α ≤ 10°), the calculated velocities and depths are quite close to the input values. The standard deviation of the misfit errors is not dependent on the angle of the interface: δs = 0.000 s in all cases. The velocities found are
Figure A.3: Results from models with sloping interfaces. The input model has V1 = 2.3 km/s and V2 = 4.7 km/s (shown as a solid line). a. (top) Results for interface with 10° dip. b. (middle) Results for interface with 20° dip. c. (bottom) Results for interface with 25° dip.

    α = 10°    V2 = 4.773 km/s
    α = 20°    V2 = 5.004 km/s
    α = 25°    V2 = 5.181 km/s

As the angle increases, the calculated velocities increase and the predicted depths become shallower than the true depths.

C. Model 5a, b.

Model 5a is a model of a near-vertical fault. The interface separating the upper layer (V1 = 2.3 km/s) from the lower layer (V2 = 4.7 km/s) is downfaulted 2 km under station 5. Figure A.4 shows the travel-times and ray paths from shot #1. The lower layer is of constant velocity; thus the first arrivals, except over the fault, are due to head waves travelling along the interface. Figure A.4c shows the results calculated. The linear programming approach has found a velocity very close to the true velocity, V2 = 4.695 km/s, but the predicted surface is not near the true surface (1.7 km deeper at shot #2). In this example the second two of the three assumptions of time-term analysis are invalid.

Model 5a is not realistic since there are head waves travelling along the near-vertical fault face. A more realistic model, similar in form to what is proposed at the Queen Charlotte Islands (see Hyndman and Ellis, 1981), would include gradients in the lower layer. The travel-times and ray paths from shot #1 are shown in figure A.5a and b. The turning rays are now the first arrivals; therefore all three assumptions

Figure A.4: The faulted interface. a. (top) Reduced arrival times (Vred = 4.7 km/s). X's are head waves; • are reflections.
There are no turning rays. b. (middle) Ray-tracing of the input model. c. (bottom) Results for the model. Predicted depths (•) differ by at most 2.5 km from the input model (solid line).

Figure A.5: The faulted interface with a velocity gradient. a. (top) Reduced arrival times (Vred = 4.7 km/s). X's are head waves; • are reflections and turning rays. b. (middle) Ray-tracing of the input model. c. (bottom) Results for the model. Predicted depths (o) differ by at most 2.5 km from the input model (solid line).

are not true. The model predicted (figure A.5c) is a poor fit to the input model and the calculated refractor velocity is far too high (V2 = 10.010 km/s). As well, the data are rather poorly fit: δs = 0.037 s. The maximum deviation from the true surface is 1.66 km at shot #2/station #11.

Iterative schemes have been developed (Willmore and Bancroft, 1960) that correct for angled rays, but they assume knowledge of the curvature of the refractor beneath the station, which is found from the first results. Since, in general, the shots and receivers are not in line, the curvature of the refractor horizon cannot be found. An alternative iteration scheme has been tried with some success. Assume a direct slant ray from shot to receiver (figure A.6).

Figure A.6: Slant ray geometry.

This ray has a theoretical travel-time that depends on γ, the angle at which the ray travels with respect to the horizontal (Grant and West, 1965, p. 150). The delay times are now dependent upon azimuth since each ray has a different γ, but the average γ will be used to transform each delay time to a depth. The results after one iteration step on model 4c are shown in figure A.7a. V2 now equals 4.790 km/s and the predicted surface is much nearer to the input interface. Model 5b after one iteration is presented in figure A.7b. As can be seen, the iteration scheme failed to improve on the original: the new velocity is V2 = 10.617 km/s and δs increased to 0.038 s.
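The velocity overestimates found for models 4a-c can be reproduced closely from the classical up-dip and down-dip apparent velocities of a plane dipping refractor; a fit to reversed coverage recovers roughly the harmonic mean of the two. This is an illustrative check using standard textbook formulas (the function below is not part of the thesis's analysis):

```python
import math

# Model 4 layer velocities and the classical dipping-refractor relations:
# shooting down-dip, Vd = V1 / sin(theta_c + dip); up-dip, Vu = V1 / sin(theta_c - dip).
v1, v2 = 2.3, 4.7                     # km/s
theta_c = math.asin(v1 / v2)          # critical angle

def fitted_velocity(dip_deg):
    """Approximate velocity a reversed time-term fit recovers for a dipping
    refractor: the harmonic mean of the two apparent velocities."""
    d = math.radians(dip_deg)
    v_down = v1 / math.sin(theta_c + d)
    v_up = v1 / math.sin(theta_c - d)
    return 2.0 * v_down * v_up / (v_down + v_up)

for dip in (10, 20, 25):
    print(dip, round(fitted_velocity(dip), 3))
# Yields about 4.77, 5.00 and 5.19 km/s for 10, 20 and 25 degrees,
# close to the 4.773, 5.004 and 5.181 km/s found by the time-term analysis.
```

The agreement suggests the Model 4 velocity bias is exactly the dip bias predicted by plane-layer refraction theory.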
In conclusion, time-term analysis is limited by its three fundamental assumptions. An iterative scheme can improve results for simple models, but the more realistic and complicated models still generate false results. A step interface, such as might occur at the Queen Charlotte transform fault, is especially difficult to resolve accurately since the fundamental assumptions have been broken.

Figure A.7: Results after one iteration. a. (top) Results for the interface with a 25° dip. b. (bottom) Results for the faulted interface with a velocity gradient.
Item Metadata

Title | Time-term analysis using linear programming and its application to refraction data from the Queen Charlotte Islands
Creator | Bird, David Neil
Publisher | University of British Columbia
Date Issued | 1981
Genre | Thesis/Dissertation
Type | Text
Language | eng
Date Available | 2010-03-29
Provider | Vancouver : University of British Columbia Library
Rights | For non-commercial purposes only, such as research, private study and education. Additional conditions apply, see Terms of Use https://open.library.ubc.ca/terms_of_use.
DOI | 10.14288/1.0052539
URI | http://hdl.handle.net/2429/22851
Degree | Master of Science - MSc
Program | Geophysics
Affiliation | Science, Faculty of; Earth, Ocean and Atmospheric Sciences, Department of
Degree Grantor | University of British Columbia
Campus | UBCV
Scholarly Level | Graduate