{"http:\/\/dx.doi.org\/10.14288\/1.0052010":{"http:\/\/vivoweb.org\/ontology\/core#departmentOrSchool":[{"value":"Science, Faculty of","type":"literal","lang":"en"},{"value":"Computer Science, Department of","type":"literal","lang":"en"}],"http:\/\/www.europeana.eu\/schemas\/edm\/dataProvider":[{"value":"DSpace","type":"literal","lang":"en"}],"https:\/\/open.library.ubc.ca\/terms#degreeCampus":[{"value":"UBCV","type":"literal","lang":"en"}],"http:\/\/purl.org\/dc\/terms\/creator":[{"value":"Immerzeel, Gerrit","type":"literal","lang":"en"}],"http:\/\/purl.org\/dc\/terms\/issued":[{"value":"2011-04-04T17:30:33Z","type":"literal","lang":"en"},{"value":"1972","type":"literal","lang":"en"}],"http:\/\/vivoweb.org\/ontology\/core#relatedDegree":[{"value":"Master of Science - MSc","type":"literal","lang":"en"}],"https:\/\/open.library.ubc.ca\/terms#degreeGrantor":[{"value":"University of British Columbia","type":"literal","lang":"en"}],"http:\/\/purl.org\/dc\/terms\/description":[{"value":"Romberg's quadrature is investigated with special emphasis on\r\nits use in automatic integration.\r\nSection 1 reproduces many of the facts known about Romberg's method and investigates the selection of stepsize.\r\nSection 2 discusses the integration of singular integrands, with emphasis on detecting algebraic endpoint singularities and the reliability of methods used for handling them.\r\nSection 3 looks at CADRE, the first automatic integration routine published based on the adaptive use of Romberg's method. It is adapted for use on the IBM 360 and is compared to SQUANK, an adaptive Simpson routine already available in the U.B.C. program library.","type":"literal","lang":"en"}],"http:\/\/www.europeana.eu\/schemas\/edm\/aggregatedCHO":[{"value":"https:\/\/circle.library.ubc.ca\/rest\/handle\/2429\/33257?expand=metadata","type":"literal","lang":"en"}],"http:\/\/www.w3.org\/2009\/08\/skos-reference\/skos.html#note":[{"value":"TOWARD BETTER NUMERICAL INTEGRATION by GERRIT IMMERZEEL B. 
Sc., University of British Columbia, 1969. A THESIS SUBMITTED IN PARTIAL FULFILMENT OF THE REQUIREMENTS FOR THE DEGREE OF Master of Science in the Department of Computer Science. We accept this thesis as conforming to the required standard. THE UNIVERSITY OF BRITISH COLUMBIA, January, 1972.

In presenting this thesis in partial fulfilment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the Head of my Department or by his representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission. Department of Computer Science, The University of British Columbia, Vancouver 8, Canada. Date: April 12, 1972.

ABSTRACT

Romberg's quadrature is investigated with special emphasis on its use in automatic integration. Section 1 reproduces many of the facts known about Romberg's method and investigates the selection of stepsize. Section 2 discusses the integration of singular integrands, with emphasis on detecting algebraic endpoint singularities and the reliability of methods used for handling them. Section 3 looks at CADRE, the first automatic integration routine published based on the adaptive use of Romberg's method. It is adapted for use on the IBM 360 and is compared to SQUANK, an adaptive Simpson routine already available in the U.B.C. program library.

NOTATION

The following common notation is used throughout the thesis: [n], [n, pp.
...] refers to reference n in the bibliography;
⌊t⌋: the greatest integer less than or equal to t;
N: the set of natural numbers 1, 2, 3, 4, ...;
f ∈ C^k([a,b]): f has k continuous derivatives on [a,b];
O(h): a term of order h, i.e. lim_{h→0} |O(h)/h| < ∞;
o(h): a term of order higher than h, i.e. lim_{h→0} o(h)/h = 0;
(a,b): x ∈ (a,b) implies a < x < b;
[a,b]: x ∈ [a,b] implies a ≤ x ≤ b.

TABLE OF CONTENTS

Section 1: Romberg integration. 1.1 Introduction; 1.2 An alternate stepsize selection; 1.3 F versus R; 1.4 Bulirsch extrapolation; 1.5 ROMNEV versus QUADS; 1.6 Propagation of roundoff error.
Section 2: Algebraic endpoint singularities. 2.1 Integrating singular integrands; 2.2 Algebraic endpoint singularities; 2.3 About Theorem 1; 2.4 Another theorem; 2.5 Detecting the singularity.
Section 3: Adaptive integration. 3.1 Introduction; 3.2 Stopping rules; 3.3 CADRE; 3.4 CADRE versus SQUANK; 3.5 Whither numerical integration?
Bibliography.
Appendix: 1. The program ROMNEV; 2. The program QUADS; 3. A simplified algorithm for CADRE; 4. CADRE and SQUANK results.

ACKNOWLEDGEMENT

I would like to thank Dr. G.D. Johnson for suggesting the topic and for invaluable assistance in Section 2.4, Dr. C.F. Dunkl for helpful communication, Dr. Carl de Boor for providing access to CADRE before it reached publication, and Dr. J.M. Varah for constructive criticism. I would also like to thank the National Research Council of Canada and the Department of Computer Science for financial support.

Section 1: Romberg Integration

1.1 Introduction

When Romberg introduced his "Vereinfachte Numerische Integration" [18], he awakened a considerable interest in numerical integration, as is evidenced by the subsequent stream of articles based on his method: [2], [5], [8], [9] and many more. Bauer et al [1, esp. pp.
199-205], the source of much of the information in this introduction, and Carl de Boor [5] discuss the advantages of Romberg's method over other quadrature methods, such as Simpson's rule and Gauss' formula.

Romberg's method is based on the composite trapezoidal rule, which approximates the integral of f over [a,b] by calculating the area under the line segments [(x_i, f(x_i)), (x_{i+1}, f(x_{i+1}))], where x_i = a + i(b-a)/n for i = 0, 1, 2, ..., n-1. Thus

    I = ∫_a^b f(x) dx ≈ h[(1/2)(f(a) + f(b)) + Σ_{i=1}^{n-1} f(a+ih)] = T(h),

where h = (b-a)/n. Romberg reasoned that the sequence {T(h_k)}_{k=0}^∞ with h_k = (b-a)/2^k was essentially an h² process, namely that the truncation error was O(h_k²). This meant that the sequence could be efficiently accelerated using Richardson's "deferred approach to the limit". He then built the following table:

    T(h_0) = T_0^0
    T(h_1) = T_0^1    T_1^0
    T(h_2) = T_0^2    T_1^1    T_2^0
    ...

with

    T_m^k = (4^m T_{m-1}^{k+1} - T_{m-1}^k) / (4^m - 1).

It can be verified that columns T_1 and T_2 have the same values as would be obtained using Simpson's rule and the Newton-Cotes formula of order six, respectively, with the function values known at the same points.

The characteristics of Romberg's method that make it desirable for computer usage are its recursive definition and the fact that convergence of the T_0 column (that is, of the trapezoidal rule) implies convergence of all subsequent columns and diagonals to the same limit. Also, because the stepsize is halved at each step, all the function values used to compute T(2h) are reused to calculate T(h):

    T(h) = (1/2)T(2h) + h Σ_{i=1}^{n/2} f(a - h + 2ih),

where h = (b-a)/n and n = 2^k for some k ≥ 1.

This also illustrates the major disadvantage of Romberg's method. Let h_k = (b-a)/2^k; then e_k = 2^k + 1 is the number of function values required to compute {T(h_i)}_{i=0}^k.
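The table-building and function-value reuse just described can be sketched in a few lines of Python (an illustration of ours; the thesis's own programs, in FORTRAN, appear in the appendices):

```python
import math

def romberg_table(f, a, b, levels):
    """Build Romberg's T-table.

    Row k holds T(h_k) with h_k = (b-a)/2**k, followed by the extrapolated
    entries; each trapezoidal value reuses the previous one:
        T(h) = T(2h)/2 + h * (sum of f at the new midpoints).
    """
    h = b - a
    T = [[0.5 * h * (f(a) + f(b))]]          # T(h_0)
    for k in range(1, levels + 1):
        h /= 2.0
        # the new abscissae are the odd multiples of the new h
        new = sum(f(a + (2 * i - 1) * h) for i in range(1, 2 ** (k - 1) + 1))
        row = [0.5 * T[k - 1][0] + h * new]  # T(h_k)
        # Richardson: T_m^k = (4^m T_{m-1}^{k+1} - T_{m-1}^k)/(4^m - 1)
        for m in range(1, k + 1):
            row.append((4 ** m * row[m - 1] - T[k - 1][m - 1]) / (4 ** m - 1))
        T.append(row)
    return T

# the diagonal entry T[-1][-1] is the final estimate
approx = romberg_table(math.exp, 0.0, 1.0, 6)[-1][-1]
```

Six halvings already reproduce ∫_0^1 e^x dx = e - 1 to roughly machine accuracy.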
Thus if the T-table computed using {T(h_i)}_{i=0}^k does not approximate the integral to within the required accuracy, we must calculate T(h_{k+1}) and the (k+1)st row of the T-table. The approximate doubling of the number of sampling points this demands is a stiff price. To alleviate this problem, we can do one of two things: we can use a method of stepsize selection other than successive halving, or we can introduce adaptivity.

1.2 An alternate stepsize selection

Bauer et al [1, pp. 204-5] show that there is no reason to restrict ourselves to successive halving of the stepsize, since the Romberg procedure can be regarded as the Neville-Aitken modification of Lagrange interpolation applied to the problem of extrapolating T(h), T(h/2), T(h/4), ... to T(0) = I. They introduce another method of selecting the stepsize, but mainly for illustrative purposes, conceding the superiority of Romberg's procedure for general usage. The only serious challenge comes from Roland Bulirsch [2].

Bulirsch suggests the following method for selecting stepsizes. Let n_0 = 1, n_1 = 2, n_2 = 3, n_k = 2·n_{k-2} for k ≥ 3, and take h_k = (b-a)/n_k; then instead of requiring approximately twice as many function values for an additional row, we require essentially √2 times as many. There are, however, some major disadvantages to using the Bulirsch sequence F = {n_k}_{k=0}^∞ instead of Romberg's R = {2^k}_{k=0}^∞.

1.3 F versus R

Let the elements of the Bulirsch table be B_m^k, with B_0^k = T(h_k) where h_k = (b-a)/n_k, n_k ∈ F. Let b_k be the number of function values required to compute {B_0^i}_{i=0}^k. Observing that n_0 = 1, n_{2k-1} = 2^k and n_{2k} = 3·2^{k-1} for k ≥ 1, we can use many, but not all, of the function values used to calculate {B_0^i}_{i=0}^k to compute B_0^{k+1}. In doing this, we achieve the b_k given by Bulirsch [2, p. 14], and we find that b_0 = 2 and b_1 = 3, with b_{2k} = 2^{k+1} + 1 and b_{2k+1} = 3·2^k + 1 for k ≥ 1.

It is obvious that the Romberg sequence R is a subsequence of F, hence Romberg's T_0 column is embedded in the B_0 column of Bulirsch. In fact, T_0^0 = B_0^0 and T_0^{k+1} = B_0^{2k+1} for k ∈ N ∪ {0}. But e_{k+1} = 2·2^k + 1 < 3·2^k + 1 = b_{2k+1} for k ≥ 1, hence the T_0 column gives equal results while sampling the function at only about two-thirds as many points. This is due to the fact that Romberg's method does not "waste" any function values: all function values used to compute {T_0^i}_{i=0}^k are used to compute T_0^{k+1}. This is not the case with the Bulirsch method.

Taking a different approach, let e_j = b_m and consider where this leaves us in the T_0 and B_0 columns. Clearly, m must be even, so let e_j = b_{2k}. We want e_j = 2^j + 1 = 2^{k+1} + 1 = b_{2k}, so j = k+1 and e_{k+1} = b_{2k}. It requires the same number of function values to compute {T_0^i}_{i=0}^{k+1} and {B_0^i}_{i=0}^{2k}, but T_0^{k+1} = B_0^{2k+1}.

The preceding shows that Romberg's first column is superior to that of Bulirsch, but there will be almost twice as many entries in the B_0 column for the same number of function evaluations, giving us more to work with when extrapolating. We now go to the computer to compare the two methods.

1.4 Bulirsch extrapolation

For any increasing sequence S = {n_k}_{k=0}^∞ we can compute an S-table for h_k = (b-a)/n_k:

    T(h_0) = S_0^0
    T(h_1) = S_0^1    S_1^0
    ...

with

    S_m^k = S_{m-1}^{k+1} + (S_{m-1}^{k+1} - S_{m-1}^k) / ((n_{k+m}/n_k)² - 1).

Note that for S = R, n_{k+m}/n_k = 2^m and we get

    S_m^k = ((2^m)² S_{m-1}^{k+1} - S_{m-1}^k) / ((2^m)² - 1) = (4^m S_{m-1}^{k+1} - S_{m-1}^k) / (4^m - 1),

and since T(h_k) = S_0^k, the S-table is exactly Romberg's T-table. It is apparent that the calculation of Bulirsch's B-table will take more effort, in terms of C.P.U. time, since the ratios n_{k+m}/n_k depend on k as well as on m.
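The function-value counts claimed above for the two sequences are easy to verify; the following script (our own illustration, with hypothetical names) counts the distinct abscissae i/n needed by the first rows of each table:

```python
from fractions import Fraction

def bulirsch_sequence(count):
    """n_0 = 1, n_1 = 2, n_2 = 3, n_k = 2*n_{k-2} for k >= 3."""
    n = [1, 2, 3]
    while len(n) < count:
        n.append(2 * n[-2])
    return n[:count]

def distinct_nodes(seq):
    """Cumulative number of distinct abscissae i/n (i = 0..n) needed for the
    first rows, assuming old function values are saved and reused."""
    counts, seen = [], set()
    for n in seq:
        seen |= {Fraction(i, n) for i in range(n + 1)}
        counts.append(len(seen))
    return counts

F = bulirsch_sequence(9)                   # 1, 2, 3, 4, 6, 8, 12, 16, 24
b = distinct_nodes(F)                      # 2, 3, 5, 7, 9, 13, 17, 25, 33
e = distinct_nodes([2 ** k for k in range(9)])   # 2, 3, 5, 9, 17, 33, ...
```

The counts reproduce b_{2k} = 2^{k+1} + 1 and b_{2k+1} = 3·2^k + 1, and hence the two-thirds ratio e_{k+1}/b_{2k+1} noted above.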
To compare the time required to process the two methods, the FORTRAN programs ROMNEV and QUADS were run on the U.B.C. IBM 360/67 using the FORTRAN library routine SCLOCK. ROMNEV is based on Bulirsch's ALGOL procedure ROMNEVINT, but it has the additional property that it does not re-evaluate the function at points where its value is known, thus achieving the b_k previously discussed. QUADS is based on C.F. Dunkl's routine QUAD [4, p. 199], which uses h_k = (b-a)/2^k, with an additional capacity for handling algebraic endpoint singularities (see Section 2). ROMNEV and QUADS are given in Appendix 1 and 2, respectively.

The functions used were F = 1.0 and F = SQRT. For F = 1.0, a comparison for 65, 129 and 257 function calls showed that ROMNEV took over 1.6 times as much C.P.U. time as QUADS. For F = SQRT, the results were similar for simple extrapolation, but ROMNEV required about 1.8 times as much C.P.U. time when extrapolation was carried out with ALPHA = 1.5D0 (see Section 2.3).

1.5 ROMNEV versus QUADS

When first tested, the program ROMNEV showed some problems, due largely to the fact that the IBM 360/67 provides at most 24 significant bits in its floating-point mantissa in single precision and does its arithmetic in base 16, hence leaving a guarantee of only 21 significant bits, which provides us with little more than six decimal digits. QUADS' difficulties were not nearly so great, since much of its arithmetic involved powers of two, which led to good results. When the abscissae and the trapezoidal sums of the B- and T-tables were computed in double precision (at most 56 bits, about 16 decimal places), ROMNEV became competitive with QUADS in terms of achieved accuracy, largely because the severe roundoff in the extrapolation process had been eliminated.
Even though, on the average, ROMNEV achieved slightly better accuracy for the same number of function calls, using a diverse set of test functions [3], there were sufficient reasons to leave ROMNEV and F behind and concentrate on QUADS and R. A major reason for deserting ROMNEV was the time involved in computing the ratios h_k/h_{k+m}, especially since this required double precision to produce good results; this is reflected by the fact that ROMNEV takes 1.6 times as long to execute for the same number of function calls. Another reason is propagation of roundoff error.

1.6 Propagation of roundoff error

Bulirsch introduces the numbers

    C_{mi} = Π'_{k=0}^{m} h_k² / (h_k² - h_i²),

where Π' indicates omission of the troublesome term k = i, and says that C_m^F = Σ_{i=0}^{m} |C_{mi}| < 9.3 for his F, for all m [2, p. 13]. This C_m is in fact exactly Σ_{i=0}^{m} |C_{mi}| where the C_{mi} are obtained from T_m^0 = Σ_{i=0}^{m} C_{mi} T_0^{m-i}, using the fact that all entries in the T-table must be linear combinations of the trapezoidal values in the same row or previous rows [1, pp. 202-3]. A FORTRAN program was written which confirmed the statements of both Bulirsch and Bauer et al., namely C^F < 9.3 and C^R < 1.97. C_m measures the maximum roundoff error possible in the T-table, given that the trapezoidal values have a roundoff error of no more than one unit and perfect arithmetic is used to compute the T-table. This means that if the trapezoidal values are correctly rounded to seven places, the final result with Romberg's algorithm will have a roundoff error of not more than one unit in the seventh place, if we use double precision arithmetic for the extrapolation. Propagation of roundoff error was one of Bulirsch's major reasons for recommending his method over Romberg's.
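The constants C^R can be reproduced exactly in rational arithmetic. The sketch below (ours, not the thesis's FORTRAN program) pushes the Romberg recursion down to the trapezoidal column and sums the absolute values of the resulting coefficients:

```python
from fractions import Fraction

def romberg_coefficients(m):
    """Coefficients of the trapezoidal values T_0^j in T_m^0, obtained by
    expanding T_m^k = (4^m T_{m-1}^{k+1} - T_{m-1}^k)/(4^m - 1) repeatedly."""
    # coef[k] maps trapezoid index j -> coefficient of T_0^j in column's entry k
    coef = [{k: Fraction(1)} for k in range(m + 1)]
    for col in range(1, m + 1):
        q = Fraction(4) ** col
        nxt = []
        for k in range(m + 1 - col):
            c = {}
            for j, v in coef[k + 1].items():
                c[j] = c.get(j, Fraction(0)) + q * v / (q - 1)
            for j, v in coef[k].items():
                c[j] = c.get(j, Fraction(0)) - v / (q - 1)
            nxt.append(c)
        coef = nxt
    return coef[0]

def abs_sum(m):
    """Sum of |C_mi|: the worst-case roundoff amplification for T_m^0."""
    return sum(abs(v) for v in romberg_coefficients(m).values())
```

The sums come out as 5/3, 17/9, 1105/567, ..., increasing toward a limit just below 1.97, in agreement with the bound C^R < 1.97 quoted above.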
We think the preceding, and some of the next section, will show why we believe successive halving to be the optimal method for selecting stepsizes.

Section 2: Algebraic endpoint singularities

2.1 Integrating singular integrands

If f is complex-analytic in an open domain containing [a,b], the diagonals of Romberg's T-table will converge superlinearly. In fact, if f ∈ C^{2m+2}([a,b]),

    T_m^k - I = O(4^{-k(m+1)}).

These results follow because, for a smooth function, E(h) = T(h) - I conforms to the underlying assumption of Romberg's method, namely

    E(h) = C_1 h² + C_2 h⁴ + C_3 h⁶ + ...,   (1)

where the C_i depend on f but are independent of h. For many functions which do not exhibit the required smoothness, the expansion of E(h) differs considerably from (1), causing Romberg extrapolation to converge slowly, sometimes more slowly than the trapezoidal column T_0 [1, pp. 210-212].

Consider the reason for the success of Romberg extrapolation when dealing with well behaved functions:

    T_0^0 = T(h) = I + C_1 h² + O(h⁴),
    T_0^1 = T(h/2) = I + C_1 (h/2)² + O((h/2)⁴) = I + (C_1/4) h² + O(h⁴),
    T_1^0 = (4 T_0^1 - T_0^0)/3 = I + O(h⁴).

Similarly, using T_0^2 and T_1^1, we get T_2^0 = I + O(h⁶). Thus the rapid convergence of Romberg's method is caused by two things: halving the stepsize h makes the O(h^{2n+2}) term (hopefully) smaller in magnitude, so that T_n^{k+1} has a smaller truncation error than T_n^k, and the extrapolation causes O(h^{2n}) to be replaced by O(h^{2n+2}), making T_n a higher order estimate than T_{n-1}.

It is apparent that removing the h^{2i} terms will not be very useful if f has an essential singularity on [a,b], causing the expansion E(h) to have terms other than C_i h^{2i}. For example, if f(x) = √x, E(h) = C h^{3/2} + Σ_{j=1}^∞ C_j h^{2j}, so that removing the h^{2j} terms would not remove the dominant error term.
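The resulting stall is easy to observe numerically. The sketch below (ours) runs the same plain Romberg diagonal on the smooth cos x and on √x over [0,1]; the smooth case reaches machine accuracy while the singular one is stuck at the h^{3/2} term:

```python
import math

def romberg_diagonal(f, a, b, levels):
    """Diagonal entry of a plain Romberg T-table built by successive halving."""
    h = b - a
    row = [0.5 * h * (f(a) + f(b))]
    for k in range(1, levels + 1):
        h /= 2.0
        t = 0.5 * row[0] + h * sum(f(a + (2 * i - 1) * h)
                                   for i in range(1, 2 ** (k - 1) + 1))
        new = [t]
        for m in range(1, k + 1):
            new.append((4 ** m * new[m - 1] - row[m - 1]) / (4 ** m - 1))
        row = new
    return row[-1]

err_smooth = abs(romberg_diagonal(math.cos, 0.0, 1.0, 10) - math.sin(1.0))
err_sqrt = abs(romberg_diagonal(math.sqrt, 0.0, 1.0, 10) - 2.0 / 3.0)
```

With ten halvings (1025 function values) the √x error remains many orders of magnitude larger than the cos x error.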
It would be extremely useful to know what the error expansion E(h) looks like for certain classes of singular integrands, to see if Romberg's procedure can be modified to yield significant improvement over the trapezoidal values. Considerable work has been done in this area based on the Euler-Maclaurin expansion for E(h) [8], [14], [16], [17], [24], producing error expansions for integrands with logarithmic and algebraic endpoint singularities. It is the latter class that lends itself to an extrapolation procedure similar to Romberg's method.

2.2 Algebraic endpoint singularities

Bulirsch introduced the idea of using a formula similar to Romberg's to generate a B-table to approximate the integral of a function with an algebraic endpoint singularity [2, pp. 13-15]. He attempts to remove terms of the form h^{i+β} from the expansion

    E(h) = T(h) - ∫_a^b (x-a)^β g(x) dx ~ C_1 h^{1+β} + C_2 h² + C_3 h^{2+β} + C_4 h^{3+β} + C_5 h⁴ + ...,   (2)

where β ∈ [0,1] and g is a smooth function on [a,b]. He builds a B-table with B_0^k = T(h_k) and

    B_m^k = (d_{k,m}^α B_{m-1}^{k+1} - B_{m-1}^k) / (d_{k,m}^α - 1) for m ≥ 1,

where d_{k,m} = h_k / h_{k+m} and α = 1+β. For h_k = (b-a)/2^k, d_{k,m} = 2^m, so we get

    T_m^k = (2^{mα} T_{m-1}^{k+1} - T_{m-1}^k) / (2^{mα} - 1),

which with α = 2 is again Romberg's formula.

A close look reveals that this method removes terms of the form h^{i(1+β)} instead of the h^{i+β} which occur in (2). To see how to remove the correct terms, consider

    T_m^k = T_{m-1}^{k+1} + (T_{m-1}^{k+1} - T_{m-1}^k) / (d_m - 1),

where d_m is the term to be determined. Assuming the previous extrapolation has removed the lower order terms, and letting h = h_{k+m}, we get

    T_{m-1}^{k+1} = I + C h_{k+m}^γ + o(h_{k+m}^γ) = I + C h^γ + o(h^γ),

and

    T_{m-1}^k = I + C h_{k+m-1}^γ + o(h_{k+m-1}^γ) = I + C(2h)^γ + o((2h)^γ) = I + 2^γ C h^γ + o(h^γ).

Now

    T_m^k = T_{m-1}^{k+1} + (T_{m-1}^{k+1} - T_{m-1}^k)/(d_m - 1) = I + C h^γ + C h^γ (1 - 2^γ)/(d_m - 1) + o(h^γ) = I + o(h^γ) if d_m = 2^γ.

From this we see that to extrapolate the trapezoidal column for an integrand of the form f(x) = (x-a)^β g(x), we should use

    T_m^k = (2^{γ_m} T_{m-1}^{k+1} - T_{m-1}^k) / (2^{γ_m} - 1),   (3)

where γ_1 = 1+β and γ_{3n-1} = 2n, γ_{3n} = 2n+β, γ_{3n+1} = 2n+1+β for n ∈ N. This is exactly the method used by Carl de Boor [5], based on the work of Lyness and Ninham [14]. However, where Bulirsch restricted β to being in the interval [0,1], de Boor gives us the following [5, p. 424]:

Theorem 1. Given a function of the form f(x) = (x-a)^β g(x), where β ∈ (-1,1], β ≠ 0 and g ∈ C^{2k+2}([a,b]), and with f(a) = 0 if β < 0, then

    T(h) - ∫_a^b f(x)dx = Σ_{i=1}^{⌊2k+1-β⌋} A_i h^{i+β} + Σ_{j=1}^{k} C_j h^{2j} + O(h^{2k+2}),   (4)

where the coefficients A_i and C_j are independent of h.

Although (4) is theoretically valid for β ∈ (-1,1], β ≠ 0, de Boor's program CADRE (see Sections 2.5 and 3.3) uses (3) to extrapolate trapezoidal values if β ∈ D = (-.863, -.007) ∪ (.007, 1.170). CADRE approximates β as well as doing the extrapolation, and D appears to be a heuristically effective domain determined by "limited experimentation" [5].

2.3 About Theorem 1

Faith in the practical usefulness of Theorem 1 may be lost when one realizes that there is no assurance that the A_i and C_j of (4) are non-zero. If, for instance, γ_n = m+β and A_m = 0, column n will show little or no improvement over column n-1. Consider the case g = 1; it is known [2, pp. 13-15] that

    T(h) - ∫_a^b (x-a)^β dx ~ A_1 h^{1+β} + Σ_{j=1}^∞ C_j h^{2j},   (5)

that is, A_i = 0 for all i ≥ 2; thus removing all the h^{i+β} terms that occur in (4) would be far from optimal. Another example is f(x) = x^{k+β}, for which A_i = 0 when i ≠ k+1.
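Formula (3) can be tried directly. The following sketch (ours; the names are hypothetical) builds the modified table for f(x) = x^{1/2}, i.e. β = 1/2, generating the exponents γ_m by merging the i+β and 2j exponents of (4):

```python
import math

def gamma_sequence(beta, count):
    """The increasing merged sequence of exponents i+beta (i >= 1) and 2j."""
    exps = sorted([i + beta for i in range(1, count + 1)] +
                  [2.0 * j for j in range(1, count + 1)])
    return exps[:count]

def romberg_beta(f, a, b, levels, beta):
    """Diagonal of the table T_m^k = (2^g T_{m-1}^{k+1} - T_{m-1}^k)/(2^g - 1),
    g = gamma_m, for an integrand of the form (x-a)^beta * (smooth)."""
    g = gamma_sequence(beta, levels)
    h = b - a
    row = [0.5 * h * (f(a) + f(b))]
    for k in range(1, levels + 1):
        h /= 2.0
        t = 0.5 * row[0] + h * sum(f(a + (2 * i - 1) * h)
                                   for i in range(1, 2 ** (k - 1) + 1))
        new = [t]
        for m in range(1, k + 1):
            d = 2.0 ** g[m - 1]
            new.append((d * new[m - 1] - row[m - 1]) / (d - 1.0))
        row = new
    return row[-1]

err_beta = abs(romberg_beta(math.sqrt, 0.0, 1.0, 8, 0.5) - 2.0 / 3.0)
```

For β = 1/2 the exponent sequence starts 1.5, 2, 2.5, 3.5, 4, 4.5, ..., and eight halvings suffice for near machine accuracy, in contrast with the plain Romberg behaviour on the same integrand.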
The accuracy obtained using (3) is better for k = 1 and vastly superior for k = 0, but for k ≥ 2 Romberg's original algorithm returns more accurate estimates. The next section will try to restore some faith in the practical use of (3).

The author feels there should be some additional comment about f(a) = 0 for β < 0. It should be noted that if g has at least one continuous derivative, L = lim_{x→a+} f(x) is either zero or infinite, so the restriction f(a) = 0 is not that severe.

After all this concern over truncation error, we should point out that roundoff error may become a problem when we leave the realm of Romberg's original algorithm. We can no longer guarantee, as we did in Section 1.6, that the roundoff error in the trapezoidal values is at worst doubled. For instance, if we extrapolate so as to remove the terms indicated in (5), the bound on the roundoff amplification involves factors of the form (2^{γ_i} + 1)/(2^{γ_i} - 1), which become large for β close to -1. For QUADS with ALPHA ∈ (0.001, 1.999), C^R is considerably larger.

2.4 Another theorem

We would now like to take a closer look at (4) and the conditions pertaining to it. Will simply assigning f the value zero at x = 0 cause the (truncation) error expansion E(h) = T(h) - ∫_0^1 f(x)dx to look like (4), even if lim_{x→0+} f(x) ≠ 0? Just what do the A_i and C_j in (4) look like? Both these questions can be answered by making use of the following result of Waterman, Yos and Abodeely [24]:

Theorem 2. For β > -1 and g holomorphic (complex-analytic) in a complex domain containing [0,1], the trapezoidal sums for ∫_0^1 g(x)x^β dx admit asymptotic expansions, (6) and (7), in powers h^{i+β} and h^{2j} respectively. Combining (6) and (7), we get

    T(h) - ∫_0^1 g(x)x^β dx ~ (h/2)L + Σ_{i=1}^∞ A_i h^{i+β} + Σ_{j=1}^∞ C_j h^{2j} as h → 0,   (8)

where L = lim_{x→0+} g(x)x^β, and

    A_i = [g^{(i-1)}(0)/(i-1)!] ζ(-β-i+1),   C_j = [B_{2j}/(2j)!] [d^{2j-1}/dx^{2j-1} (g(x)x^β)]_{x=1}

(ζ the Riemann zeta function, B_{2j} the Bernoulli numbers) are clearly independent of h.
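As a numerical sanity check on the leading coefficient (our own illustration): for g = 1 and β = 1/2 the formula above gives A_1 = ζ(-1/2) ≈ -0.20789, so the ratio (T(h) - 2/3)/h^{3/2} for f(x) = √x on [0,1] should approach that value. The value of ζ(-1/2) is quoted, not computed:

```python
import math

def trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n panels."""
    h = (b - a) / n
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n)))

ZETA_MINUS_HALF = -0.2078862250  # quoted value of zeta(-1/2); an assumption of this check

n = 16384
ratio = (trapezoid(math.sqrt, 0.0, 1.0, n) - 2.0 / 3.0) / (1.0 / n) ** 1.5
```

The remaining discrepancy is of the order of the next term, C_1 h² with C_1 = 1/24 here, divided by h^{3/2}.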
If we truncate the two summations in (8) appropriately, (8) will be identical to (4) (with [a,b] = [0,1]) except for the term hL/2. If g is holomorphic, L = 0 for β > 0, and L = 0 or ∞ for β < 0. Since ∞ is arbitrary on the computer anyway, we may as well assign L = 0, and then (4) and (8) are essentially the same.

If we look at the definition of the A_i following equation (8), we see that A_i will be zero if and only if g^{(i-1)}(0) = 0. Thus we are usually quite safe in assuming that A_i ≠ 0.

Since (3) will frequently be used heuristically, the following example is meant as a warning to a potential user who is not sure that his g(x) is holomorphic around [a,b]. Consider two functions, C(x) = x^{-1/2} cos(x^{1/2}) and S(x) = x^{-1/2} sin(x^{1/2}). Let f*(x) = 0 if x = 0, and f(x) otherwise. Although neither sin(x^{1/2}) nor cos(x^{1/2}) is analytic, it seems likely that the terms they introduce in the error expansion will resemble the terms resulting from x^{-1/2}, so we will try to use (3) with β = -1/2. Clearly ∫_0^1 f(x)dx = ∫_0^1 f*(x)dx in the Riemann sense, so we will try to find ∫_0^1 C*(x)dx and ∫_0^1 S*(x)dx using QUADS with ALPHA = 0.5D0.

    ∫_0^1 C*(x)dx = ∫_0^1 C(x)dx = 2 sin(x^{1/2}) |_{x=0}^{x=1} = 2 sin 1 ≈ 1.682942.

QUADS obtained five-figure accuracy with 17 evaluations and seven figures with 33. With ALPHA = 2.0D0, QUADS failed to achieve two significant figures in 257 evaluations. This showed that our assumption regarding the truncation error is valid.

    ∫_0^1 S*(x)dx = ∫_0^1 S(x)dx = -2 cos(x^{1/2}) |_{x=0}^{x=1} ≈ .9193954.

QUADS failed to achieve three significant figures with 257 evaluations. Our assumption regarding the truncation error is still valid, so what is the difference between S and C that causes this discrepancy? Consider L(f) = lim_{x→0+} f(x); then L(S) = 1 and L(C) = ∞.
Herein lies the essential difference. L(C) = ∞ indicates that a singularity does in fact exist at x = 0, and since ∞ is basically arbitrary on the computer, we can let C(0) = 0. L(S) = 1 indicates that S does not have a singularity at 0, reflecting the fact that the terms introduced to E(h) by sin(x^{1/2}) cancel those introduced by x^{-1/2}. If we let S**(x) = S(x) for x ≠ 0 and S**(0) = 1, then with ALPHA = 2.0D0, QUADS obtained seven figures for ∫_0^1 S**(x)dx with just 5 evaluations.

2.5 Detecting the singularity

Carl de Boor points out [5] that it is sometimes possible to use Romberg-β integration (as we will henceforth refer to extrapolation using (3)) even if the user does not indicate the nature of the singularity. Define

    R(h) = (T(h/2) - T(h)) / (T(h/4) - T(h/2)),

the ratio of successive differences in the trapezoidal column. Now let us assume that T(h) - I = K h^α + o(h^α), which is the case for integrands with algebraic endpoint singularities. If we ignore the o(h^α) term, we get

    R_0(h) = [I + K(h/2)^α - (I + Kh^α)] / [I + K(h/4)^α - (I + K(h/2)^α)] = [Kh^α (2^{-α} - 1)] / [Kh^α 2^{-α}(2^{-α} - 1)] = 2^α.

Since o(h^α) is not really zero, R_0(h) will only be an approximation, but it should be usable for sufficiently small h. This, with a slight modification discussed later, is the basis of de Boor's method. If the sequence {R_0(h_k)}_{k=0}^∞ appears to be converging to some value, say R_0, he abandons Romberg's method, unless of course R_0 ≈ 4. If R_0 ≈ 2, a jump discontinuity in the integrand is indicated. If R_0 ∈ {x | x = 2^{1+β} where β ∈ D} (see Section 2.2), then we assume f(x) = (x-a)^β g(x) and use Romberg-β extrapolation with β = log_2 R_0 - 1. Instead of calculating R_0(h) for decreasing values of h, we could calculate
    R_i(h) = (T_i^{n+1-i} - T_i^{n-i}) / (T_i^{n+2-i} - T_i^{n+1-i}), where h = (b-a)/2^n.

Thus if i = 1 and f(x) = x^{1/2} g(x) we have T_1^{n-1} - I = K h^{1.5} + O(h^{2.5}) instead of T_0^n - I = T(h) - I = K h^{1.5} + O(h²), making R_1(h) probably a better estimate than R_0(h). In de Boor's CADRE the values R_i(h) for i ≥ 1 are used to check R_0 and perhaps to give a better estimate for β. de Boor warns of possible problems when using this in practice [5, pp. 418-419], especially if our estimate for β is not good.

Returning to the example in Section 2.4, CADRE detected a singularity of the type x^{-1/2} for ∫_0^1 C*(x)dx, no singularities for ∫_0^1 S**(x)dx, and a jump of size 1.0 at x = 0.0 for ∫_0^1 S*(x)dx. This supports the discussion accompanying the example.

Section 3: Adaptive Integration

3.1 Introduction

When one uses an integration rule to obtain an approximation A_a^b of I_a^b = ∫_a^b f(x)dx, one usually starts with an approximation A_1 based on a few function values, and a second estimate A_2 computed using additional function values. Assuming an unsophisticated stopping rule, if A_1 ≈ A_2, then A_a^b = A_2 is accepted as the estimate of I_a^b. If A_1 and A_2 differ significantly, the function is sampled at more points and A_3 computed. If A_2 ≈ A_3, A_a^b = A_3, etc. This process can lead to many unnecessary function evaluations if the function is well behaved except in a small subinterval of [a,b]. It is for this reason that adaptivity is essential for a general purpose computer integration package.

In an adaptive routine, we apply the appropriate rule at most n times, where n is some predetermined number. If the rule has not produced an acceptable estimate after n iterations, we use the rule to compute A_a^c and A_c^b for some c ∈ (a,b) and obtain A_a^b = A_a^c + A_c^b as an approximation to I_a^b = I_a^c + I_c^b.
Also, if the rule is unsuccessful on one of the subintervals after n iterations, the subdivision is repeated. Thus A_a^b = A_a^{c_1} + A_{c_1}^{c_2} + ... + A_{c_j}^b, where a < c_1 < c_2 < ... < c_j < b. With a judicious choice of c, some of the function values used to compute the {A_i} approximating I_a^b can be applied to calculate the {A_i} needed to obtain A_a^c and A_c^b.

Adaptivity is an excellent tool when used with Romberg's method, since it alleviates the problem of doubling the number of function evaluations for successive rows of the T-table. In addition, if an n-level T-table has failed to produce an acceptable value for A_a^b and we take c = (a+b)/2, we can build an (n-1)-level T-table for both A_a^c and A_c^b without any additional function evaluations. Before we discuss CADRE [5], Carl de Boor's program based on this idea, we should take a brief look at stopping rules.

3.2 Stopping rules

Considerable work has been done on the error behaviour of Romberg's quadrature [5], [6], [9], etc. How does one go about determining that the estimate produced is "acceptable"? In most programs, the user provides a parameter ε > 0 and will accept A as an approximation to I if |I - A| < ε. But since I is unknown, the program must have some other stopping criterion.

If we assume that f ∈ C^∞([a,b]), then the diagonal of Romberg's T-table converges superlinearly. As a result, the test

    |T_n^0 - T_{n-1}^0| + |T_{n-1}^0 - T_{n-2}^0| < ε

is frequently used in computer programs, notably Charles Dunkl's QUAD [4], [7]. If the test is satisfied, A = T_n^0 will often satisfy |I - A| < ε for f ∈ C^∞([a,b]). A more conservative rule would lead to overworking in many cases, and the occasional overoptimism for f ∉ C^∞([a,b]) is probably the lesser of two evils.
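As an illustration of ours (QUAD itself is a FORTRAN routine), the two-difference test can be wired into the table construction:

```python
import math

def romberg_with_stop(f, a, b, eps, max_levels=20):
    """Grow a Romberg T-table, accepting the diagonal T_n^0 once
    |T_n^0 - T_{n-1}^0| + |T_{n-1}^0 - T_{n-2}^0| < eps."""
    h = b - a
    row = [0.5 * h * (f(a) + f(b))]
    diags = [row[0]]
    for k in range(1, max_levels + 1):
        h /= 2.0
        t = 0.5 * row[0] + h * sum(f(a + (2 * i - 1) * h)
                                   for i in range(1, 2 ** (k - 1) + 1))
        new = [t]
        for m in range(1, k + 1):
            new.append((4 ** m * new[m - 1] - row[m - 1]) / (4 ** m - 1))
        row = new
        diags.append(row[-1])
        if k >= 2 and abs(diags[k] - diags[k - 1]) + abs(diags[k - 1] - diags[k - 2]) < eps:
            return diags[k], k          # estimate, levels used
    return diags[-1], max_levels

approx, level = romberg_with_stop(math.exp, 0.0, 1.0, 1e-10)
```

For a smooth integrand the superlinear convergence of the diagonal makes the achieved error typically much smaller than ε when the test first fires.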
In fact, Carl de Boor points out [5], [6] that if we have faith in our estimate R_0, we can use an even less restrictive stopping rule, namely: if

    |T_n^0 - T_{n-1}^0| / (2^{γ_n} - 1) < ε,

we accept A = T_n^0. It is this stopping rule that is used in both QUADS and CADRE.

Instead of an absolute error tolerance, the user may wish to specify a relative error tolerance δ > 0 and request an A such that

    |A - I| < δ|I|.   (9)

In actual practice, of course, |I| has to be replaced by an approximation, but rather than use |A|, it is preferable to use the trapezoidal result for |f|, that is

    A* = h[(1/2)(|f(a)| + |f(b)|) + Σ_{i=1}^{n-1} |f(a+ih)|], where h = (b-a)/n.

Thus A* approximates I* = ∫_a^b |f(x)|dx, and instead of (9) we try to satisfy

    |A - I| < δI*.   (10)

We use I* instead of |I| largely to avoid overworking when dealing with oscillatory integrands, and it does not present any problems as long as the user is aware of the fact that the program is attempting to satisfy (10), not the somewhat more restrictive (9).

The user may also wish to specify both ε and δ, and require

    |A - I| < max(δI*, ε).   (11)

To compromise, the program may try to return an approximation A so that

    |A - I| < ε·max(I*, 1).   (12)

Since (12) is simply (11) with δ = ε, the program will attempt to satisfy the absolute or the relative error requirement according as I* is less than or greater than one.

A further problem appears when adaptivity is introduced. For simplicity, let (a,b) = (0,1) and c = 1/2; take I = ∫_0^1 f(x)dx, with A and I* as before, and E = |I - A|. Now let I_1 = ∫_0^{1/2} f(x)dx and I_2 = ∫_{1/2}^1 f(x)dx, and define I_1*, A_1, E_1 and I_2*, A_2, E_2 accordingly. Then we would like to have

    E_1 < max(δI_1*, ε/2) and E_2 < max(δI_2*, ε/2)   (13)

to imply

    E < max(δI*, ε).   (14)

Unfortunately, it is fairly easy to find an example for which this is not true. So (13) does not imply (14), but we can show that (13) implies

    E ≤ max(ε, δI*, ε/2 + δ·max(I_1*, I_2*)) ≤ ε + δI*.   (15)
(15)

Although this result may shake our faith in claimed error bounds somewhat, heuristically it should not be a problem, since integrands for which (14) does not follow from (13) are not common. Most programs are reasonably successful in meeting either the relative error condition or a prorated absolute error condition on subintervals. In fact, since errors may cancel, adaptive routines are frequently too conservative [6].

3.3 CADRE

Several programs have been written in an attempt to achieve a "black box" integrator. As deBoor points out [6, pp. 201-202], it is clear that no such program can be written which is failsafe, so instead one tries to construct an algorithm which has

(i) Input: two real numbers a, b; access to a real function f for all arguments x ∈ [a,b]; a desired relative error tolerance δ; a desired absolute error tolerance ε;

and which produces

(ii) Output: A, an approximation to I = ∫_a^b f(x)dx, which satisfies |I - A| < max(ε, δI*), where I* = ∫_a^b |f(x)|dx.

(iii) Further, this output should be produced efficiently, specifically with as few function evaluations as possible.

(iv) The domain of the algorithm should include most of the quadrature problems commonly solved at a typical computer installation; also, the algorithm should recognize failure, in some sense.

The last point, (iv), is necessarily vague. One can usually increase the domain of the algorithm at the cost of efficiency, thus one must balance (iii) and (iv). The author is not at one with deBoor regarding his decision to take the number of function evaluations as the sole measure of efficiency.

Adaptivity allows us to attain a good balance of (iii) and (iv). There are many adaptive automatic integration packages available, most based on Simpson's rule, but we wish to discuss one based on Romberg's method.
It is Carl deBoor's CADRE (Cautious Adaptive Romberg Extrapolation) [5], for which there is a simplified algorithm in Appendix 3.

Because deBoor developed CADRE on the CDC 6500, which carries a 48 bit mantissa in single precision, he did not have to worry about roundoff error. He suggests three possible remedies for roundoff error problems on shorter wordlength machines [5, pp. 427-428]:

(a) calculating the trapezoidal sums in double precision, but with single precision function values;

(b) accumulating the results from the various subintervals in double precision;

(c) defining G(X) = F(A + (B-A)*X) within CADRE and integrating G over [0,1], with accompanying adjustments.

These changes were incorporated in CADRE individually and tested using appropriate functions selected from those given by Casaletto et al. [3] (see Appendix 4). The first change, (a), was retained because, although in general it provided only slight improvement, it brought significant improvement when the integrand displayed an algebraic endpoint singularity. In this case it returned more accurate results with considerably fewer evaluations for the same requested error tolerance. This was a result of the roundoff error propagation problem inherent in Romberg integration of such integrands (see Section 2.3). This change seemed well worth the cost, a surcharge of less than one percent on the execution time. The other changes were rejected. (b) showed no improvement for F29, F30, F31 and F50, and it seemed unlikely that CADRE would run into a function that needed so much subdivision that (b) would be helpful, often enough to warrant retention of (b). (c) was rejected because it did nothing to alleviate the "problem" for which it was intended, namely roundoff error in calculating abscissae. In fact, there was no indication that the "problem" existed.

One of the drawbacks of adaptive routines is that the user does not know what strategy was used to obtain the result.
To handle this, CADRE's parameter list includes LEVEL, which lets the user control the amount of output printed by CADRE. With LEVEL < 1 there is no output, and with LEVEL > 5 there is a record of all subdivisions, accompanying decisions as to the nature of the integrand on each subinterval, and complete T-tables. These values of LEVEL, along with the intermediate integers, should supply the user with adequate control. One problem with this, however, is that there are many format statements to take up core and many "if" statements to impede execution. But the feature is virtually essential, and it adds only a few percent to the execution time when LEVEL < 1.

A double precision version of CADRE was written and run for the fifty test functions (see Appendix 4). No improvement was noted except for a few of the functions, where better results were obtained with fewer evaluations. The function values were returned to CADRE in single precision, so the improvement must be due to less roundoff error in extrapolation and/or more accurate approximation of α. For the same number of evaluations, the double precision version took 5 to 10 percent longer than single precision CADRE.

3.4 CADRE versus SQUANK

To get some idea of CADRE's reliability and efficiency, its performance on the test functions [3] (see Appendix 4) was compared to that of SQUANK [13], [23], the best integration package currently available at U.B.C. The other available program, COSIM [22], was rejected because it printed a message whenever it examined a strip smaller than a fixed fraction of (B-A), and ignored this interval when estimating the integral. COSIM quit altogether (that is, terminated the FORTRAN run) if twenty strips were ignored for a particular integrand, making it extremely undesirable for use as a general purpose program. When it worked to completion, COSIM's estimate of the integral, number of function calls and C.P.U. time used were almost identical to those of SQUANK.
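The single versus double precision findings above are easy to demonstrate with a small experiment. The following Python sketch (the helper names and test function are mine, not from the thesis) simulates single precision by rounding every intermediate result to a 32-bit float, and compares deBoor's remedy (a) from Section 3.3, a double precision accumulator over single precision function values, with an all-single computation:

```python
import math
import struct

def f32(x):
    """Round a Python double to the nearest 32-bit float,
    simulating single precision arithmetic."""
    return struct.unpack('f', struct.pack('f', x))[0]

def trap_single(f, a, b, n):
    """Trapezoidal sum with function values AND accumulation in
    (simulated) single precision: every operation is rounded."""
    h = (b - a) / n
    s = f32(0.5 * (f32(f(a)) + f32(f(b))))
    for i in range(1, n):
        s = f32(s + f32(f(a + i * h)))   # running sum rounded each step
    return f32(f32(h) * s)

def trap_double(f, a, b, n):
    """Remedy (a): single precision function values, but the
    trapezoidal sum is accumulated in double precision."""
    h = (b - a) / n
    s = 0.5 * (f32(f(a)) + f32(f(b)))
    for i in range(1, n):
        s += f32(f(a + i * h))           # sum carried in double precision
    return h * s
```

With f(x) = e^x on [0,1] and a large n, trap_double lands several decimal digits closer to e - 1 than trap_single: once the running sum is large, the single precision accumulator loses the low-order bits of every small increment, while the wider accumulator costs almost nothing.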
In the comparison, given in Appendix 4, both CADRE and SQUANK were called twice for each test function, once with the absolute error tolerance ε = 10^-3 and once with ε = 10^-5. These values of ε seemed reasonable, and represent the range of accuracy many users appear to want. CADRE is designed so that the relative error tolerance used is at least a small built-in minimum, even if 0.0 is specified in the function call.

For the first twenty functions, polynomials with degree zero through nineteen, both integrators were always successful. CADRE is more cautious, needing more evaluations as the degree of the polynomial increases, and producing estimates considerably more accurate than requested. SQUANK appears to estimate achieved error more realistically, producing acceptable estimates with fewer evaluations. CADRE also consumes more C.P.U. time per evaluation than does SQUANK.

Table 1

             SQUANK               CADRE
         ε=10^-3  ε=10^-5    ε=10^-3  ε=10^-5
  F29       T
  F30       T        T          F
  F31       T
  F34       F                   M        M
  F36       F
  F40       F
  F45       F        F
  F46       F        F
  F47       D        D          D        D

  [the placement of some marks within the four columns is uncertain in this copy]

  T - total failure; fails to achieve even the order of magnitude.
  F - failure; does not meet the error requirement, but returns a reasonable estimate.
  M - mitigated failure; the true error does not exceed the estimated error.
  D - failure to be discussed.

Table 1 and the computer output in Appendix 4 indicate that, on the basis of the test functions, CADRE is considerably more reliable than SQUANK. CADRE's apparently unnecessary labours on F30 should be balanced against SQUANK's total failure on F30 - it appears a "cautious" test for first column convergence might be a useful addition to CADRE. Note also SQUANK's total failure on F29 despite the fact that it called the function 601 times. CADRE's ability to recognize functions with algebraic endpoint singularities shows its advantage clearly in the results obtained for the test functions with such singularities. CADRE's failure on F30 and F34
is tempered somewhat by the fact that the estimated error exceeds the requested tolerance, indicating unreliability of the result. The apparent failure of both integrators on F40 was due to the answer provided by Casaletto et al. [3] being incorrect. The correct answer is 1085.2526, so both routines were quite successful.

The failures on F47 bear further discussion. The fact that both routines return identical estimates with no indication of trouble is suspicious. Consider

    F47(x) = 0                if .49 < x < .50,
    F47(x) = -1000(x^2 - x)   otherwise.

The result returned by both routines is correct to seven places for ∫_0^1 -1000(x^2 - x)dx, because neither routine found it necessary to sample the function in the (open) interval (.49, .50). If we define

    G47(x) = 0                if .49 ≤ x ≤ .50,
    G47(x) = -1000(x^2 - x)   otherwise,

then both routines will immediately note the singularity (thus should not be fooled), and ∫_0^1 F47 = ∫_0^1 G47.

              EPS      RESULT     EST. ERROR    EVALUATIONS
  CADRE      10^-3    164.1663    .84 x 10^-3       298
             10^-5    164.1666    .60 x 10^-4       288
  SQUANK     10^-3    153.9716       33.2            29
             10^-5    153.9716       33.2            29

CADRE is quite successful - it stops the second time with an estimated (absolute) error exceeding the requested tolerance because the relative error is already below CADRE's built-in minimum relative tolerance. SQUANK is still not successful (although there is now an indication of unreliability), but as F45 and F46 showed, SQUANK is simply not comfortable with jump discontinuities. From these results we see that, as a rule, CADRE is more reliable than SQUANK, but somewhat slower - again the tradeoff between (iii) and (iv) mentioned in Section 3.3.

3.5 Whither numerical integration?

In his paper [18], Romberg showed that his recursion could also be used to extrapolate the values obtained by the composite midpoint (rectangle) rule,

    U_0^n = U(h) = h Σ_{i=0}^{2^n - 1} f(a + (i + 1/2)h),    h = (b - a)/2^n,

with
    U_k^i = U_{k-1}^{i+1} + (U_{k-1}^{i+1} - U_{k-1}^i) / (4^k - 1)    for k ≥ 1.

This does more than provide an alternative to the trapezoidal table; it gives us something that can be used in concert with it. One important result is that, under fairly general conditions, T_k^i and U_k^i bracket ∫_a^b f(x)dx. It is known that f^(2k+2)(x) being of constant sign over [a,b] is a sufficient condition [10]. Havie [9] suggests using this to provide an error estimate, hence a stopping rule. He recommends accepting T_k^i as the estimate of ∫_a^b f(x)dx if e_k^i = |T_k^i - U_k^i| / 2 is sufficiently small, when (hopefully)

    T_k^i - e_k^i < ∫_a^b f(x)dx < T_k^i + e_k^i.

Since the U_0^i require no additional function values (U_0^i = 2T_0^{i+1} - T_0^i), this estimate costs us no additional function evaluations, but does require the calculation of the U-table. To my knowledge, no adaptive program has been written based on this idea. Dr. G.D. Johnson used the bracketing principle, but in addition to the T- and U-tables, he constructed T*- and U*-tables by accelerating the first columns of the T- and U-tables using Wynn's ε-algorithm [25]. He then accepted the first estimate for which the Havie error estimate was small enough. For the test functions [3], this program was competitive with SQUANK in terms of reliability, function calls needed and C.P.U. time used. It appears that an adaptive program based on this idea might be worthwhile, especially since the ε-algorithm showed potential for handling functions with algebraic and logarithmic singularities.

When we started looking at deBoor's work [5], [6], we were excited about finding a "black box" integrator that worked for a large class of functions, and hoped to find ways of enlarging that class. But we soon became convinced that it is not always desirable to increase the size of the class at the cost of efficiency. We agree with William Squire [20, pp.
125-126] that providing an all purpose integration package "is not a proper utilization of the capabilities of the computer. It is an attempt to accomplish the impossible by brute force". A computing center should provide a package capable of handling most common functions (CADRE or something less ambitious) and packages that are known to be effective for certain classes of integrands. The user should be willing to analyze his integrand sufficiently to be able to determine which routine to use. We feel the computer should be more than a high-speed calculating machine, but we do not think canned routines should take over the user's role of thinking, unless it is extremely cheap in terms of C.P.U. time.

BIBLIOGRAPHY

1. Bauer, F.L., Rutishauser, H. and Stiefel, E., "New Aspects in Numerical Quadrature". Experimental Arithmetic, High Speed Computing and Mathematics, Providence, Rhode Island: American Mathematical Society, 1963, pp. 199-218.

2. Bulirsch, Roland, "Bemerkungen zur Romberg-Integration". Num. Math. 6 (1964), pp. 6-16.

3. Casaletto, J., Pickett, M. and Rice, J., "A Comparison of some Numerical Integration Programs". Signum Newsletter 4 (1969), pp. 30-40.

4. Davis, P.J. and Rabinowitz, P., Numerical Integration, Toronto: Blaisdell, 1967.

5. deBoor, C., "CADRE: An Algorithm for Numerical Quadrature". Mathematical Software (ed. J.R. Rice), New York: Academic Press, 1971, pp. 417-449.

6. deBoor, C., "On Writing an Automatic Integration Algorithm". Mathematical Software (ed. J.R. Rice), New York: Academic Press, 1971, pp. 201-209.

7. Dunkl, C.F., letter dated July 21, 1970.

8. Fox, L., "Romberg Integration for a Class of Singular Integrands". Computer Journal 10 (1967), pp. 87-93.

9. Havie, T., "On a Modification of Romberg's Algorithm". B.I.T. 6 (1966), pp. 24-30.

10. Joyce, D.C., "Survey of Extrapolation Processes in Numerical Analysis". S.I.A.M. Review 13 (1971), pp. 435-490.

11. Krasun, A.M.
and Prager, W., "Remark on Romberg Quadrature". Comm. A.C.M. 8 (1965), pp. 236-237.

12. Lynch, R.E., "Generalized Trapezoid Formulas and Errors in Romberg Quadrature". Blanch Anniversary Volume, Aerospace Research Laboratories, Office of Aerospace Research, U.S.A.F., 1967, pp. 215-229.

13. Lyness, J.N., Algorithm 379, "SQUANK (Simpson Quadrature Used Adaptively - Noise Killed)". Comm. A.C.M. 13 (1970), pp. 260-263.

14. Lyness, J.N. and Ninham, B.W., "Numerical Quadrature and Asymptotic Expansions". Math. of Comp. XXI (1967), pp. 162-178.

15. Malcolm, M., "An Algorithm for Floating-point Accumulation of Sums with Small Relative Error". Stanford Comp. Sci. Dept. Report 163, 1970.

16. Navot, I., "An Extension of the Euler-Maclaurin Summation Formula to Functions with a Branch Singularity". J. Math. and Phys. 40 (1961), pp. 271-276.

17. Navot, I., "A Further Extension of the Euler-Maclaurin Summation Formula". J. Math. and Phys. 41 (1962), pp. 155-163.

18. Romberg, W., "Vereinfachte Numerische Integration". Norske Vid. Selsk. Forh. (Trondheim) 28 (1955), pp. 30-36.

19. Schweikert, W., "A Comparison of Error Improvement Estimates for Adaptive Trapezoid Integration". Comm. A.C.M. 13 (1970), pp. 163-166.

20. Squire, W., Integration for Engineers and Scientists, New York: Elsevier, 1970.

21. Strom, T., "Strict Error Bounds in Romberg Quadrature". B.I.T. 7 (1967), pp. 314-321.

22. University of British Columbia, COSIM: Integration using Simpson's Rule with Error Control, 1969.

23. University of British Columbia, SQUANK: Integration using Simpson's Rule with Refined Error Control, 1970.

24. Waterman, P.C., Yos, J.M. and Abodeely, R.J., "Numerical Integration of Non-Analytic Functions". J. Math. and Phys. 43 (1964), pp. 45-50.

25. Wynn, P., "Acceleration Techniques in Numerical Analysis, with Particular Reference to Problems in one Independent Variable".
Information Processing (ed. C.M. Popplewell), Amsterdam: North-Holland (IFIP), 1962, pp. 149-156.

APPENDIX 1

THE PROGRAM ROMNEV

      FUNCTION ROMNEV (C, B, F, ALPHA, MIN, MX)
C     ROMBERG-NEVILLE INTEGRATION
C     IF ALPHA IS IN THE INTERVAL (1., 1.999), AN ALGEBRAIC
C     SINGULARITY X**(ALPHA-1) IS ASSUMED

[The rest of the listing is largely illegible in this copy. The recoverable parts show that the routine works in REAL*8 with single precision function values, rounds abscissae correctly to seven decimal places with a statement function, tabulates the stepsize-sequence denominators 1, 1, 2, 3, 4, 6, 8, 12, 16, 24, 32, 48, 64, 96, 128, 192, 256, 384 (a Bulirsch-type sequence), holds the constant 2/3 (TWOTH) as a hexadecimal DATA constant, and uses straightforward extrapolation instead of the singular form when ALPHA is within 0.001 of 2. The first three rows of the table are computed explicitly before the main loop.
The continuation of the listing, computing the fourth and subsequent rows, is likewise illegible. Its remnants show the trapezoidal loop over the stepsize sequence (skipping abscissae already used when the subdivision count is divisible by 3), Neville-style extrapolation with denominators of the form (N(k)/N(j))**2 - 1, an optional T-table printout behind C* comment cards, and the message: ROMNEV FAILED TO CONVERGE. ANSWER GIVEN IS APPROXIMATE.]

APPENDIX 2

THE PROGRAM QUADS

      BLOCK DATA
      COMMON /QDS/ ALPHA, QX, MIN, NP
      REAL*8 ALPHA
      DATA MIN, ALPHA / 5, 2.0D0 /
      END
      FUNCTION QUADS (QA, QB, QF, QEPS, QDELTA)
      IMPLICIT REAL*8 (A-H,T-Y)

[The body of the QUADS listing is largely illegible in this copy; its recoverable comments describe the routine as follows.] This function returns the value QUADS, approximating the integral I of QF from QA to QB so that (hopefully) |QUADS - I| < max(QEPS, QDELTA * I*). There is also a common block labelled /QDS/ which lets the user provide:

  REAL*8 ALPHA - to allow the handling of algebraic singularities x**(ALPHA-1) in the integrand;
  REAL*4 QX - to allow the program to return an approximation of the absolute error;
  INTEGER MIN - to allow the user to force the program to execute at least MIN function calls before testing for convergence;
  INTEGER NP - to allow the program to return the number of function calls used to achieve convergence.

QUADS is based on a program by Charles F. Dunkl. A statement function rounds abscissae correctly on the IBM/360, and a test on ALPHA decides whether to generate the singular extrapolation factors FS(J) = 2**GAMMA(J), used as denominators FS(J) - 1, or the regular Romberg values 4**J.
[The remainder of the listing carries out Romberg's procedure recursively: at each of up to fifteen halvings of H it accumulates the next trapezoidal sum (and TABS, the corresponding sum of |QF| values), extrapolates along the current row using the factors FS(J), and then, once at least MIN function calls have been made, applies the stopping rules taken from deBoor's CADRE: accept A(1) if QX = |A(1) - A(2)| < QEPS, or if QX/(DH*TABS) < QDELTA. If convergence is not reached, it prints: QUADS FAILED TO CONVERGE. ANSWER GIVEN IS APPROXIMATE.]

APPENDIX 3

A Simplified Algorithm for CADRE

CADRE returns an approximation to ∫_X^Y f(z)dz, and also returns an approximation of the error and an indication as to how well behaved the function appeared to be. It is also possible for the user to control the messages the program prints, from nothing at all to a complete record of subdivision with accompanying T-tables. The following is a greatly simplified algorithm to show how CADRE handles one subinterval, (A,B).
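The classification at the core of the steps below - reading the character of the integrand from the ratio TT of successive first-column differences - can be sketched as follows. This is a Python paraphrase, not deBoor's code; the thresholds are the ones appearing in the listing, and the driver function is mine:

```python
import math

def classify(tt):
    """CADRE-style decision from TT, the ratio of successive first-column
    differences (T(L-1,1)-T(L-2,1)) / (T(L,1)-T(L-1,1))."""
    if abs(tt - 4.0) <= 0.15:
        return "h2"          # regular behaviour: differences shrink by 4
    if abs(abs(tt) - 2.0) < 0.01:
        return "jump"        # jump discontinuity: differences shrink by 2
    if 0.0 < tt < 4.0:
        return "singular"    # algebraic endpoint singularity, alpha = log2(TT)
    return "subdivide"

def first_column(f, a, b, rows):
    """First column of the T-table: trapezoidal values under repeated halving."""
    h = b - a
    t = [0.5 * h * (f(a) + f(b))]
    for L in range(1, rows):
        h *= 0.5
        mids = sum(f(a + (2 * i + 1) * h) for i in range(2 ** (L - 1)))
        t.append(0.5 * t[-1] + h * mids)
    return t

# f(x) = sqrt(x) = x**(alpha - 1) with alpha = 1.5: the ratio tends to 2**1.5.
t = first_column(math.sqrt, 0.0, 1.0, 10)
tt = (t[-2] - t[-3]) / (t[-1] - t[-2])
# classify(tt) reports "singular", and math.log2(tt) recovers alpha near 1.5
```

The same driver applied to a smooth integrand drives TT toward 4, and applied to a step function toward 2, which is exactly the information steps 14-18 below act upon.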
The array T(I,J) contains the elements of the Romberg T-table as well as appropriate differences and ratios. The array TS(N) saves the function values to avoid re-evaluation. Note that "≈" below denotes a machine-dependent test, meaning "almost machine equivalent".

The algorithm (comments indented):

 1  TT=0.0; STEP=B-A; T(1,1)=0.5*(FA+FB); L=1
        FA=F(A) and FB=F(B) are already available in TS unless (A,B)=(X,Y).
 2  L=L+1
        L is the row of the T-table on which we are currently working.
 3  If we don't need new function values to compute the next trapezoidal sum, go to 5.
        Frequently the appropriate function values are already in TS.
 4  Insert the values of the function at the required abscissa points in the appropriate places in TS.
        Considerable bookkeeping is necessary to shuffle saved values around and insert new ones to facilitate step 5.
 5  Compute the next trapezoidal sum and store it in T(L,1).
        Using T(L-1,1) and selected entries in TS.
 6  Compute the Lth row of the T-table, T(L,2) to T(L,L), with Romberg extrapolation.
 7  The differences of successive entries in this row are now stored in column L: T(1,L), T(2,L), ..., T(L-1,L).
        e.g. T(1,L) = T(L,2) - T(L,1).
 8  If L ≠ 2, go to 11.
 9  If T(2,1) ≈ T(1,1), go to #SL.
        Suspect a straight line.
10  Go to 2.
11  Replace the differences in the (L-1)st column by the ratios T(J,L-1) = T(J,L-1)/T(J,L), unless T(J,L) = 0, in which case T(J,L-1) = 0.
        These ratios are used to approximate R_0 (see Section 2.5).
12  TOLD=TT
13  TT=T(1,L-1), which is essentially the ratio (T(L-1,1)-T(L-2,1)) / (T(L,1)-T(L-1,1)).
14  If |TT - 4.| ≤ .15, go to #H2.
        h^2 convergence suspected.
15  If TT = 0., go to 20.
        This means T(L,1) ≈ T(L-1,1) but the integrand was not a straight line; noise suspected.
16  If ||TT| - 2.| < .01, go to #JD.
        The integrand appears to have a jump discontinuity.
17  If L=3, go to 2.
        Compute another row.
18  If |TT - TOLD|/|TT| < 0.1, go to #DS.
        Successive ratios close; may have a detectable singularity.
19  If L=4, go to 2.
        Try one more row.
20  If the error requirement is met, go to #PN.
        Possible noise.
21  Go to #SD.
        Further subdivision needed.

#SL - possible straight line. We test the function at four "random" points in the interval (A,B). If it still appears to be a straight line, we accept T(2,2)*STEP as the integral over [A,B], and go to the next subinterval to be considered, if any. If it is not a straight line, we go to 2.

#H2 - probable h^2 convergence. Test to see if the error requirement is met; also test for possible noise. May decide to go to either #SD or 2.

#JD - probable jump discontinuity. Subdivision is necessary unless the error condition is met.

#DS - probable detectable singularity. If TT ∈ (0,4), we appear to have an algebraic endpoint singularity, and can extrapolate using the appropriate factors (see Sections 2.2, 2.4). If TT ∉ (0,4), go to #SD.

#PN - probable noise. Accept STEP*T(L,L) and set a flag to indicate noise.

#SD - subdivision required. We subdivide the interval if regular behaviour cannot be detected after a certain depth in the T-table. deBoor adds a nice touch [6]. He looks first at the left-hand interval. If the function can be successfully integrated over this interval without further subdivision, fine. If not, it is marked for subdivision, and the right-hand interval is examined. In this way, none of the unresolved intervals are larger than the one on which we are currently working.

APPENDIX 4

CADRE and SQUANK Results

The fifty test functions of Casaletto, Pickett and Rice [3] were coded as single precision functions with single precision arguments, but all internal calculations were done in double precision and the value returned was rounded correctly to single precision.
In addition, the test functions whose natural interval of integration was not [0,1] were coded so that the interval of integration was [0,1] (but changed for testing deBoor's suggestion (c) in Section 3.3), for ease of coding the calls to CADRE and SQUANK. For ease of reference, the test functions are reproduced here; the computer output gives ∫_0^1 F_i(x)dx for i = 1, 2, ..., 50.

    F1 = 1
    Fj = x·F(j-1) + (-1)^(j+1)·j    for j = 2, 3, ..., 20
    F21 = e^x
    F22 = sin(πx)
    F23 = cos x
    F24 = x/(e^x - 1) for x ≠ 0, 1 otherwise
    F25 = (1 + x^2)^-1
    F26 = 2(2 + sin(10πx))^-1
    F27 = (1 + x^4)^-1
    F28 = (1 + e^x)^-1
    F29 = 4π^2 x sin(60πx) cos(2πx)
    F30 = 4π^2 x sin(60πx) cos(100πx)
    F31 = 4π^2 x sin(60πx/(1 - x)) for x ≠ 1, 0 otherwise [form partly illegible]

The subscripts of the remaining functions are illegible in this copy; they include:

    1.84 cosh(2x - 1) - 2 cos(2x - 1)
    2 - (16x^4 - 32x^3 + 28x^2 - 12x + 2.9)
    1000π^2 (1 - x^2)^(1/2) sin(100πx)
    (1 + x)^-1
    x^(1/2),  x^(1/4),  x^(1/8)
    |x^2 - .25|^(3/2),  |x^2 - .25|^(5/2)
    ⌊10x⌋
    { x for 0 < x < .333;  x + 1 for .333 < x < .667;  x + 2 for .667 < x < 1 }
    F47 = { 0 for .49 < x < .50;  -1000(x^2 - x) otherwise }
    { (x + 2)^-1 for 0 < x < e - 2;  [second branch illegible] for e - 2 < x < 1 }
    10^4 (x - .1)(x - .11)(x - .12)(x - .13)
    sin(100πx)
[The remaining pages of Appendix 4 tabulate the computer output. For each test function they list the true integral and, for each routine at each tolerance (ε = 10^-3 and ε = 10^-5), the integral returned, the estimated error, the true error, the tolerance, the number of function evaluations, and the C.P.U. time, under the headings FUNCTION, TRUE INTEGRAL, INTEGRAL, EST. ERROR, TRUE ERROR, TOL, EVALS, TIME. The tables themselves are illegible in this copy.]