Predictor-Corrector Procedures for Systems of Ordinary Differential Equations. Ramsay Vincent Michel Zahar, 1964. (UBC Theses and Dissertations)

PREDICTOR-CORRECTOR PROCEDURES FOR SYSTEMS OF ORDINARY DIFFERENTIAL EQUATIONS

by

RAMSAY VINCENT MICHEL ZAHAR

B.A.Sc. (Engineering Physics), The University of British Columbia, 1962

A THESIS SUBMITTED IN PARTIAL FULFILMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF ARTS in the Department of MATHEMATICS

We accept this thesis as conforming to the required standard

THE UNIVERSITY OF BRITISH COLUMBIA
April, 1964

In presenting this thesis in partial fulfilment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the Head of my Department or by his representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission.

Department of Mathematics
The University of British Columbia, Vancouver 8, Canada
Date: April 10, 1964

ABSTRACT

Some of the most accurate and economical of the known numerical methods for solving the initial-value problem are of the predictor-corrector type. For systems of equations, the predictor-corrector procedures are defined in the same manner as they are for single equations.

For a given problem and domain of t, a plot of the maximum error in the numerical approximation to x(t) obtained by a predictor-corrector procedure, versus the step-size, can be divided into three general regions: round-off, truncation, and instability. The most practical procedures are stable and have a small truncation error.

The stability of a method depends on the magnitudes of the eigenvalues of a certain matrix that is associated with the matrix

    G = (g_ij) = (∂f_i/∂x_j)

of the system dx/dt = f(t,x), x(t_0) = x_0. When the functions f_i are complicated, predictor-corrector procedures involving two evaluations per step seem to be the most efficient for general-purpose applications.

ACKNOWLEDGEMENTS

I thank Professor T. E. Hull of the Department of Mathematics at the University of Toronto for his valuable theoretical ideas and for his assistance in interpreting the experimental results. I also thank him for the advice and encouragement he has given me throughout the course of my work.

I wish to express my appreciation to Professor R. D. James, Head of the Department of Mathematics at the University of British Columbia, and Professor C. C. Gotlieb, Chairman of the Department of Computer Science at the University of Toronto, for the financial aid they presented to me during my graduate studies.

TABLE OF CONTENTS

INTRODUCTION  1
THEORY  5
ECONOMY CONSIDERATIONS  23
EXPERIMENTAL TECHNIQUES  25
EXPERIMENTAL RESULTS  28
CONCLUSIONS  37
APPENDICES
  I     Graph of |s*| vs. hg  39
  II    The hg-Regions of Stability  40
  III   The Generalized Error Equation  41
  IV    Graph of log_10(Error) vs. log_2(h)  45
  V     Maximum Errors - Problem (A)  46
  VI    Maximum Errors - Problem (B)  47
  VII   Maximum Errors - Problem (C)  48
  VIII  Maximum Deviations in x_3 - Problem (D)  49
  IX    The Atmospheric Reentry Problem  50
BIBLIOGRAPHY  53

INTRODUCTION

In this thesis, we shall consider the initial-value problem

    x' = f(t,x),  x(t_0) = x_0     (1)

where, in general, x and x_0 are N-dimensional vectors.
In fact, (1) represents the system of first order ordinary differential equations

    x_1' = f_1(t, x_1, x_2, ..., x_N)
    x_2' = f_2(t, x_1, x_2, ..., x_N)
    . . . . . . . . . . . . . . . .
    x_N' = f_N(t, x_1, x_2, ..., x_N)

where the values of the x_i, i = 1,2,...,N, are specified for an initial time t_0.

For convenience in expressing the following results, we shall adopt the standard notation of functional analysis. Let S be a complete normed linear space (a Banach space). Let the elements of S be u_1, u_2, .... The norm on S is a real-valued function, whose value at u is denoted by ||u||, and which satisfies

    (a) ||u_1 + u_2|| ≤ ||u_1|| + ||u_2||
    (b) ||u|| ≥ 0, and ||u|| = 0 if and only if u = 0, the zero element of S
    (c) ||αu|| = |α| ||u|| for any complex number α.

S becomes a metric space if we define a distance function between any two points u_1 and u_2 of S as ||u_1 - u_2||.

A sequence of functions u_1, u_2, ... satisfies the Cauchy criterion if

    ||u_n - u_m|| → 0  as  n, m → ∞.

Since S is complete, each such Cauchy sequence possesses a limit u, where u is an element of S. We say that the sequence {u_n} converges to u.

We now let S be the real Euclidean N-dimensional space R_N. If x is an element of R_N, x is the vector with real components x_i, i = 1,2,...,N. R_N is complete (see e.g. [11], p. 96). Later, we will let ||x|| = |x_1| + |x_2| + ... + |x_N|. However, any valid norm may be used in the following discussion.

We introduce the usual notation regarding the signs ≤, <, >, ≥. For example, x ≤ y if x_i ≤ y_i for each i.

Certain existence theorems for (1) exist. The Cauchy-Lipschitz Theorem states that if f(t,x) is continuous and satisfies the Lipschitz condition

    ||f(t,x) - f(t,x̄)|| ≤ L ||x - x̄||

in a region containing (t_0, x_0), for some positive real number L, then there exists a unique solution x(t) to problem (1) over a suitable interval containing t_0. To prove this theorem and other similar theorems (see [9]), one first introduces a method such as the Euler polygonal method to generate approximate solutions. Then one proves that these approximate solutions converge to the solution of (1).

Although a given function f may satisfy the conditions of the above theorem, thus assuring the existence and uniqueness of a solution to (1), it may be difficult or even impossible to represent this solution in closed form (as finite combinations of elementary functions such as polynomials, exponential functions and logarithms, and of indefinite integrals of such functions). For example, it has been proven that the equation

    x' = t + t² + x²,  x(0) = 0

cannot be solved in terms of elementary functions, although its solution does exist and can be tabulated. However, the proof of the existence theorem provides us with a method of constructing approximations to the solution of (1) when it exists. In practice, only some of these methods, particularly the Runge-Kutta and multi-step methods, are useful in numerical calculations.

Consequently, if we wish to generate a numerical approximation to the solution of (1), we must first have a criterion for choosing the numerical method. In general, the required accuracy is prescribed. Therefore, we choose the numerical method that will give this accuracy at a minimum cost.
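The Euler polygonal construction mentioned above is easy to state in computational form. The following sketch (written here in Python purely for illustration; the interval, step-size and step count are arbitrary choices, not values used in this thesis) tabulates an approximate solution of the example x' = t + t² + x², x(0) = 0:

    def euler_polygon(f, t0, x0, h, n_steps):
        # Follow the tangent line over each step of length h (the Euler polygon).
        t, x = t0, x0
        path = [(t, x)]
        for _ in range(n_steps):
            x = x + h * f(t, x)
            t = t + h
            path.append((t, x))
        return path

    # Tabulate the solution of x' = t + t**2 + x**2, x(0) = 0 on [0, 1]
    table = euler_polygon(lambda t, x: t + t**2 + x**2, 0.0, 0.0, 0.01, 100)

Refining h produces the convergent sequence of approximations referred to in the existence proof.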
It is the purpose of this thesis to show, in general, that predictor-corrector methods can be used to generate numerical solutions to (1) of the required accuracy, and in particular, that certain such methods will yield the prescribed accuracy at a minimum cost. The special case N = 1, when (1) simplifies to a single equation, has been investigated thoroughly in [8].

In the next three sections are presented the theoretical results upon which our investigation is based. The reasoning for the general case is quite similar to that for the case N = 1. However, as we shall see, certain generalizations are quite impractical because of their complicated nature. To specify a predictor-corrector method, we must choose a starting procedure, the predictor and corrector formulas, a rule for iterating with the corrector formula, and a step-size. In section 5, we present experimental results for a number of problems. We use a variety of different procedures for each problem. Any restrictions on the chosen ranges of parameters and on the considered formulas are based on the results for the case N = 1.

Throughout this thesis, we shall assume that the function f(t,x) is reasonably complicated. Thus, the cost of any procedure will be directly proportional to the number of times this function is evaluated.

THEORY

In this section, we first define the predictor and corrector formulas and the rules for iteration. Next, we show that the numerical sequence determined by the iteration procedure converges to a limit. We then investigate how closely this limit approximates the actual solution of the system of differential equations. Finally, we obtain specific conditions for the stability of the Adams method.

The predictor formula can be expressed as

    y_n = Σ_{i=1}^{k} a_i* y_{n-i} + h Σ_{i=1}^{k+1} b_i* y'_{n-i}     (2)

and the corrector formula as

    y_n = Σ_{i=1}^{k} a_i y_{n-i} + h Σ_{i=0}^{k} b_i y'_{n-i}     (3)

where h is the step-size, t_n = t_0 + nh, y_{n-i} = y(t_n - ih) and y'_{n-i} = f(t_n - ih, y_{n-i}). (The terminology is the same as in [6].) We emphasize that the y's and the y''s are N-dimensional vectors. We have chosen the unknowns a_i*, b_i*, a_i and b_i to be one-dimensional constants and have thus restricted ourselves to using the same predictor and corrector formulas for each of the N components. Our justification for doing this lies only in the relative simplicity of the resulting analysis.

Assuming that the values of y_{n-i} and y'_{n-i} needed in (2) are known, a predictor-corrector procedure at time t_n is defined by first using (2) to calculate an approximation y_n to x_n. Then the y_n so determined is substituted into f(t,x) to evaluate an approximation to x'_n. (3) is then used to correct, giving a new approximation to x_n. We may then evaluate, obtaining a new y'_n, and then correct, and so on. We shall denote the resulting sequence of values as y_{n,0}, y_{n,1}, ..., y_{n,m}, where y_{n,0} is the approximation to x_n obtained from (2), y_{n,j} is the approximation to x_n obtained from the j-th application of the corrector, and m is the number of iterations. We distinguish between two cases, those that end on a correction and those that end on an evaluation. We denote the former by P(EC)^m and the latter by PE(CE)^m. We understand that whichever case is used, m will be constant throughout the domain of t considered.
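One PE(CE)^m step is easy to exhibit concretely. The sketch below is written in Python; the fourth-order Adams-Bashforth predictor and Adams-Moulton corrector coefficients are standard values used purely as one admissible choice of the a_i*, b_i*, a_i, b_i, and the function and history names are illustrative only:

    def pece_step(f, t_n, h, y_hist, f_hist, m=1):
        # One PE(CE)^m step.  y_hist and f_hist hold y_{n-1},...,y_{n-4} and
        # y'_{n-1},...,y'_{n-4}, newest value first.
        # P: explicit Adams-Bashforth predictor (an instance of formula (2))
        y = y_hist[0] + h/24.0 * (55*f_hist[0] - 59*f_hist[1] + 37*f_hist[2] - 9*f_hist[3])
        for _ in range(m):
            fy = f(t_n, y)                       # E: evaluate
            # C: implicit Adams-Moulton corrector (an instance of formula (3))
            y = y_hist[0] + h/24.0 * (9*fy + 19*f_hist[0] - 5*f_hist[1] + f_hist[2])
        fy = f(t_n, y)                           # final E, so the step ends on an evaluation
        return y, fy

For P(EC)^m the final evaluation is omitted, so the derivative value carried forward into later steps is the one computed from the previous iterate.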
It is worth noting that, under the assumption that f(t,x) is relatively complicated, the cost of a correction will be negligible with respect to the cost of an evaluation. Thus, the method ending on a correction is the most logical to use.

To use any predictor-corrector method on an initial-value problem, the starting values y_0, y_1, ..., y_k must be secured by another method known as the starting procedure. One-step methods such as the Runge-Kutta method and others (see [1], p. 81) are frequently used.

We now ask the following question. For an arbitrarily chosen value y_{n,0}, does the infinite sequence y_{n,j} converge to a unique y_n which satisfies (3) exactly? The answer is given by the following theorem (cf. [10], p. 216), which is a special case of a classical theorem of functional equations (see e.g. [1], p. 38). Before we state and prove the theorem, we note that in (2), y_n does not appear on the right hand side. Therefore, (2) determines y_n explicitly as a function of the y_{n-1}, ..., y_{n-k-1}. Also, using (3), y_{n,j} can be expressed in terms of y_{n,j-1} by

    y_{n,j} = F(y_{n,j-1})     (4)

where

    F(y_{n,j-1}) = Σ_{i=1}^{k} a_i y_{n-i} + h b_0 f(t_n, y_{n,j-1}) + h Σ_{i=1}^{k} b_i y'_{n-i}.

Theorem 1: Let the function F be defined for all y in R_N and suppose F satisfies a Lipschitz condition with constant K < 1. Then the iteration (4) defines a sequence y_{n,0}, y_{n,1}, ... which converges to a unique solution y_n of (3). Also,

    ||y_n - y_{n,m}|| ≤ (K^m / (1 - K)) ||y_{n,1} - y_{n,0}||.

If L is the Lipschitz constant for f(t,x), then a Lipschitz constant K for F is given by K = h b_0 L, and K < 1 for all h < h_0 = 1/(b_0 L).

Now

    ||y_{n,j} - y_{n,j-1}|| = ||F(y_{n,j-1}) - F(y_{n,j-2})|| ≤ K ||y_{n,j-1} - y_{n,j-2}||.

Therefore

    ||y_{n,j} - y_{n,j-1}|| ≤ K^{j-1} ||y_{n,1} - y_{n,0}||

and since

    ||y_{n,ν+μ} - y_{n,ν}|| ≤ ||y_{n,ν+μ} - y_{n,ν+μ-1}|| + ... + ||y_{n,ν+1} - y_{n,ν}||,

then

    ||y_{n,ν+μ} - y_{n,ν}|| ≤ (K^{ν+μ-1} + ... + K^ν) ||y_{n,1} - y_{n,0}|| ≤ (K^ν / (1 - K)) ||y_{n,1} - y_{n,0}||.

Therefore ||y_{n,ν+μ} - y_{n,ν}|| → 0 as ν → ∞. Thus, the sequence y_{n,0}, y_{n,1}, ... is a Cauchy sequence and converges to a limit y_n.

Now, since F satisfies a Lipschitz condition, F is continuous. Therefore,

    y_n = lim_{j→∞} y_{n,j} = lim_{j→∞} F(y_{n,j-1}) = F(lim_{j→∞} y_{n,j-1}) = F(y_n).

That is, y_n satisfies (3) exactly.

Suppose the sequence converges to ȳ_n as well. Then y_n = F(y_n) and ȳ_n = F(ȳ_n), and ||y_n - ȳ_n|| ≤ K ||y_n - ȳ_n||. But since K < 1, this is a contradiction unless ||y_n - ȳ_n|| = 0, which implies that ȳ_n = y_n. Therefore, y_n is unique.

Finally, letting μ → ∞ in the inequality for ||y_{n,ν+μ} - y_{n,ν}||, we get

    ||y_n - y_{n,m}|| ≤ (K^m / (1 - K)) ||y_{n,1} - y_{n,0}||.

This completes the proof of the Theorem.

The inequality in the statement of the theorem is significant because it indicates the existence of an estimate for ||y_n - y_{n,m}||.

Now that the convergence of our iteration process is established, it remains for us to investigate how closely the vector y_{n,m} approximates x_n, the solution of (1) at t_n. To indicate the relation between y_n and x_n, we write

    x_n = Σ_{i=1}^{k} a_i* x_{n-i} + h Σ_{i=1}^{k+1} b_i* x'_{n-i} + T_n*     (5)

where we define the vector T_n* to be the truncation error of the predictor formula.
Similarly, we let

    x_n = Σ_{i=1}^{k} a_i x_{n-i} + h Σ_{i=0}^{k} b_i x'_{n-i} + T_n     (6)

define the truncation error of the corrector formula, T_n.

The y_n are the values we expect to obtain when we perform the numerical calculations. However, the numerical values we achieve are not exact, for they are truncated or rounded to a finite number of significant figures. We let the vectors Z_{n,j} denote the rounded results. Using the previous notation, Z_{n,0} is the rounded result obtained from the predictor formula at t_n, and Z_{n,j}, j = 1,...,m, is the rounded result after the j-th application of the corrector. In the following, we shall consider the procedure P(EC)^m, which turns out to be the more complicated. The case PE(CE)^m can be treated similarly and the results will be given later.

We define the vector r_n*, the round-off error at the predictor, by

    Z_{n,0} = Σ_{i=1}^{k} a_i* Z_{n-i,m} + h Σ_{i=1}^{k+1} b_i* Z'_{n-i,m-1} + r_n*     (7)

and the vector r_{n,j}, the round-off error at the j-th application of the corrector, by

    Z_{n,j} = Σ_{i=1}^{k} a_i Z_{n-i,m} + h b_0 Z'_{n,j-1} + h Σ_{i=1}^{k} b_i Z'_{n-i,m-1} + r_{n,j}.     (8)

We define the propagated error to be the vector e_n = x_n - Z_{n,m}. We define the matrix G by the equation

    f(t,u) - f(t,v) = G(u - v).

Applying the Theorem of the Mean (see e.g. [10], p. 224) to each component f_i of f, we see that

    G = (g_ij) = (∂f_i(t, u_i)/∂x_j)

where the u_i are certain vectors such that u ≤ u_i ≤ v.

We now subtract equation (7) from (5) and equations (8) from (6), obtaining

    e_{n,0} = Σ_{i=1}^{k} a_i* e_{n-i,m} + hG Σ_{i=1}^{k+1} b_i* e_{n-i,m-1} + (T_n* + r_n*)
    e_{n,1} = Σ_{i=1}^{k} a_i e_{n-i,m} + h b_0 G e_{n,0} + hG Σ_{i=1}^{k} b_i e_{n-i,m-1} + (T_n + r_{n,1})
    . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
    e_{n,m} = Σ_{i=1}^{k} a_i e_{n-i,m} + h b_0 G e_{n,m-1} + hG Σ_{i=1}^{k} b_i e_{n-i,m-1} + (T_n + r_{n,m})     (9)

This is a set of (m+1) difference equations in the (m+1) unknowns e_{n,j} = x_n - Z_{n,j}, j = 0,1,...,m. Before solving these equations, we replace the T_n*, r_n*, T_n, r_{n,j} by respective constant values T*, r*, T, r. We also assume that G is the same constant matrix for each equation. It is clear that the e_{n,j} satisfying these new equations will approximate the actual e_{n,j} only if T_n*, r_n*, T_n, r_{n,j} and G change very little over the intervals under consideration (those intervals used when the Theorem of the Mean is applied). For now, we assume that the intervals are small enough so that this is true. It will turn out that the resulting expressions are only slightly dependent on the changes of these variables. Thus, we would expect our resulting expressions to indicate reasonably the way in which the error propagation depends on the choice of procedure. Actually, a more rigorous analysis can be outlined as follows.
If we replace the a_i and b_i in equations (9) by their corresponding absolute values, and assume that each component of the T_n*, T_n, r_n*, r_n is bounded in absolute value by the corresponding component of T̄*, T̄, r̄* and r̄ respectively, we obtain 'dominating' difference equations, the last of which is

    E_{n,m} = Σ_{i=1}^{k} |a_i| E_{n-i,m} + h |b_0| Ḡ E_{n,m-1} + h Ḡ Σ_{i=1}^{k} |b_i| E_{n-i,m-1} + T̄ + r̄

where Ḡ is defined in the Lipschitz sense by

    |f_i(t,x) - f_i(t,x̄)| ≤ Σ_{j=1}^{N} ḡ_ij |x_j - x̄_j|,  i = 1,2,...,N.

Upon solving the dominating difference equations corresponding to (9), we obtain an expression for the E_{n,m}. It is easy to see that the E_{n,m} are bounds for the e_{n,m}. However, these bounds turn out to be ultra-conservative in general (because the b_i are not all positive, for example), so we shall not consider them further.

Now, we are interested in the propagated error e_n = e_{n,m}. Thus, eliminating e_{n,m-1} from the last equation of (9), and then e_{n,m-2} and so on, gives

    e_{n,m} = (I - Θ^m)(I - Θ)^{-1} [ Σ_{i=1}^{k} a_i e_{n-i,m} + hG Σ_{i=1}^{k} b_i e_{n-i,m-1} + (T + r) ]
              + Θ^m [ Σ_{i=1}^{k} a_i* e_{n-i,m} + hG Σ_{i=1}^{k+1} b_i* e_{n-i,m-1} + (T* + r*) ]     (10)

where Θ = h b_0 G and I is the identity matrix. For this expression to be correct, it is necessary and sufficient that all eigenvalues of Θ are less than one in absolute value. This condition is quite similar to the sufficient condition for the convergence of our iterations in Theorem 1. However, this condition on Θ is not sufficient to ensure that e_n is bounded.

If m → ∞, e_{n-i,m-1} → e_{n-i,m} and Θ^m → the zero matrix, so (10) becomes the difference equation of order k for the vectors e_n, ..., e_{n-k},

    e_n = (I - Θ)^{-1} [ Σ_{i=1}^{k} a_i e_{n-i} + hG Σ_{i=1}^{k} b_i e_{n-i} + (T + r) ].     (11)

(11) represents a system of N difference equations for each of the N components of each of the vectors involved. Because G is a matrix, each of the N equations contains, in general, all of these N(k+1) components. The resulting equations are quite complicated and difficult to solve in this form. Even the case N = 1, when (1) and thus (11) become single equations, is not easy. We shall now consider this less general case in detail.

When N = 1, (11) becomes

    e_n - Σ_{i=1}^{k} (a_i + hg b_i) e_{n-i} = (T + r)     (12)

where g = ∂f(t,x)/∂x for some x. This difference equation can be solved by a standard method ([6], p. 209). The solution e_n can be written in the form

    e_n = A_1 s_1^n + ... + A_k s_k^n + ē     (13)

where the A_i are constants. The first k terms are solutions of the homogeneous equation obtained from (12) by putting the right hand side equal to 0, and ē is the particular solution of the non-homogeneous equation. ē is formed by assuming that e_n = e_{n-1} = ... = e_{n-k} = ē, a constant. Thus

    ē = (T + r) / (1 - Σ_{i=1}^{k} (a_i + hg b_i)).

Substituting e_{n-i} = A s^{n-i}, i = 0,...,k, into the homogeneous equation obtained from (12), the results show that the s_i are roots of the polynomial

    C(s) = s^k - Σ_{i=1}^{k} (a_i + hg b_i) s^{k-i}.

We shall assume that the s_i are distinct. If they are not, the resulting discussion, and in particular equation (13), must be modified. However, these modifications are not complicated and are described in [6].

We have written e_n = x_n - Z_{n,m}. Thus, using the Theorem of the Mean,

    e_n' = f(t_n, x_n) - f(t_n, Z_{n,m}) = ḡ e_n

where ḡ = ∂f(t,x̄)/∂x for some x̄, x_n ≤ x̄ ≤ Z_{n,m}. On the basis of this equation, we would expect the error to be proportional to e^{gt}, which is a solution of this equation for constant ḡ = g.
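The roots of C(s) are easily examined numerically. A sketch (Python; the fourth-order Adams-Bashforth coefficients below are used purely as a concrete choice of the a_i and b_i, and the sample values of hg are arbitrary) evaluates the moduli of the roots for a few values of hg:

    import numpy as np

    def roots_of_C(a, b, hg):
        # Roots of C(s) = s^k - sum_i (a_i + hg*b_i) s^(k-i) for a given value of hg.
        k = len(a)
        coeffs = [1.0] + [-(a[i] + hg * b[i]) for i in range(k)]   # descending powers of s
        return np.roots(coeffs)

    # Fourth-order Adams-Bashforth coefficients (an explicit formula, used only as an example)
    a = [1.0, 0.0, 0.0, 0.0]
    b = [55/24, -59/24, 37/24, -9/24]
    for hg in (-0.1, -0.3, -0.5):
        # For g < 0 the method is stable only while this largest modulus stays below 1.
        print(hg, np.abs(roots_of_C(a, b, hg)).max())

The printed values show the extraneous roots leaving the unit disk as hg becomes more negative.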
In fact, as we shall see later, if T = O(h^p), one root of C(s) is then s_1 = e^{hg} + O(h^p). s_2, ..., s_k are extraneous roots and have arisen because we have approximated a first order differential equation with a difference equation of order k.

When m is finite, equation (10) must be modified, for (10) now contains the other unknowns e_{n-i,m-1}. But (10) and the last equation of (9) can be considered as two simultaneous difference equations in the variables e_{n-i,m} and e_{n-i,m-1}, for all i = 0,...,k. To solve these equations, we put

    e_{n,m} = A s^n,  e_{n,m-1} = B s^n

and, as before, the solution e_n will be a linear combination of the n-th powers of the resulting roots s_i, plus a 'particular integral'. After cancellation, the equations become two homogeneous linear equations in A and B. In order that solutions for A and B exist, the determinant of these equations must be 0. This condition yields the required polynomial in s. The s_i are then the (2k+1) roots of the polynomial

    s^{k+1} C(s) + Θ^m [ ... ]     (14)

in which the bracketed term involves the a_i, a_i*, b_i, b_i* and Θ; the polynomial is written out in full in Appendix II. As m → ∞, (k+1) roots of this polynomial approach 0. The other k roots become the roots of C(s).

Before writing down the complete expression for e_n, we shall discuss how the a_i*, a_i, b_i* and b_i are chosen. Closely related with this problem is the concept of stability. Roughly speaking, a method is defined to be stable if the error e_n is insensitive to small changes in the local errors - the errors at each step of the calculation. Now, one term in the expansion of e_n is proportional to the n-th power of the root s_1 that approximates e^{hg} and is to be expected from the differential equation. We want the contributions from the other s_i to be negligible with respect to the contribution from s_1. If g is positive, s_1 approximates e^{hg} > 1. For the term of e_n containing s_1 to be the dominating term, the other s_i must be at least less than s_1 in absolute value. If g is negative, we want the terms of e_n to be bounded. Thus, we define a method to be stable if |s_i| < s_1 for g > 0 and if |s_i| < 1 for g < 0. Stability is defined in this way so that the relative error of a stable method is small in absolute value.

The degree of a predictor or corrector formula is defined to be the largest integer p such that T = O(h^p). Obviously, we would like to choose our constants so that the method has as high a degree as possible. We now return to equation (5), and using the Taylor expansion

    x_{n-i} = x(t_n - ih) = x(t_n) - (ih) x'(t_n) + ((ih)²/2!) x''(t_n) - ...

and a similar one for x'_{n-i}, we expand the right-hand side in powers of h. By equating the coefficients of the same powers of h on each side of the resulting equation, we obtain the relations

    a_1* + a_2* + ... + a_k* = 1
    a_1* + 2 a_2* + ... + k a_k* = b_1* + b_2* + ... + b_{k+1}*
    a_1* + 2² a_2* + ... + k² a_k* = 2 (b_1* + 2 b_2* + ... + (k+1) b_{k+1}*)
    . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
    a_1* + 2^{k+1} a_2* + ... + k^{k+1} a_k* = (k+1) (b_1* + 2^k b_2* + ... + (k+1)^k b_{k+1}*)
    . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .     (15)

We first note that at least the first two of these equations must be satisfied if the true solution x(t) is to satisfy the predictor equation except for terms of o(h) as h → 0. (These two equations are the necessary and sufficient conditions for the consistency of a method.) Since we have
(2k+1) unknowns, it would seem reasonable to solve the first (2k+1) equations of (15) for our unknowns and thus obtain a method with a truncation error of degree 2k+1. However, Dahlquist [2] proved the rather remarkable theorem that the degree of a stable operator of order k cannot exceed (k+2) (or k+3 in an exceptional case which we will not consider). If we use the first (k+2) equations to determine the constants, T_n will be O(h^{k+2}). Now, the determinant of the coefficients of the b_i* is a Vandermonde determinant and therefore is not 0. Thus, there is a large number of stable methods with p = k+2. In practice, the a_i* are chosen first, and then the b_i* are determined in this manner.

The corrector constants are chosen in the same way as the predictor constants. The resulting equations are similar to (15), with b_0 appearing on the right hand side of the second equation. Also, b_{k+1} does not exist. By substituting s_1 = e^{hg} + O(h^{k+2}) into C(s) and using the corrector equations corresponding to equations (15), we see that s_1 is indeed a root of C(s) and thus an approximate root of (14).

We now return to writing down the complete expression for e_n for stable operators. To determine the particular solution of the non-homogeneous equation, we put e_{n-i,m} = ē and e_{n-i,m-1} = ẽ, constants, for all i, and eliminate ẽ from equation (10) and the last equation of (9), finally obtaining an expression for ē. Since we are assuming that the operator is stable, we shall neglect the effects of the roots other than s_1. Also, we neglect the effect of starting errors by assuming that e_0 = 0. The resulting approximate expression for e_n is

    e_n ≈ (s_1^n - 1) [ (T + r) / (hg Σ_{i=0}^{k} b_i) + Θ^{m-1} (1 - Θ)(1 - Θ^m)^{-1} (T* + r*) ].     (16)

Since T* and T are O(h^{k+2}), they are negligible with respect to r* and r for small h. This region of h is called the round-off region. We shall see later that in this region, the error can be minimized by using proper computational techniques. As h becomes large, T* and T become the more important terms. This region of h for stable operators is called the truncation region. For large m, e_n is approximately O(h^{k+1}) in the truncation region.

It is important to determine precisely the conditions for a method to be stable. Dahlquist's theorem provides us with a condition on the degree of a method if it is to be stable. In general, we cannot increase h indefinitely and still maintain stability. For m = ∞, the stability requirement places a condition on the root of C(s) of maximum absolute value, say s*. For given constants a_i, b_i and k, C(s) determines |s*| as a function of hg. An example of a plot of |s*| versus hg is given in Appendix I. For hg > 0, s* = s_1, the expected root, in the region of small hg. However, as hg decreases in the negative direction, s* becomes one of the extraneous roots, |s*| becomes greater than 1, and instability occurs. We say that the hg-region of stability is (d, ∞).
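The remark about the Vandermonde determinant can be made concrete. The following sketch (Python; the choice a_1* = 1 with the remaining a_i* = 0 is the Adams choice introduced in the next paragraphs, used here only as an example) solves the first k+2 equations of (15) for the predictor coefficients b_1*, ..., b_{k+1}*:

    import numpy as np

    def predictor_b_coeffs(a_star):
        # Solve the first k+2 order conditions of (15) for b_1*,...,b_{k+1}*,
        # given chosen a_1*,...,a_k* with sum(a_star) = 1.
        k = len(a_star)
        M = np.array([[q * j**(q - 1) for j in range(1, k + 2)]
                      for q in range(1, k + 2)], dtype=float)
        rhs = np.array([sum((i + 1)**q * a_star[i] for i in range(k))
                        for q in range(1, k + 2)], dtype=float)
        return np.linalg.solve(M, rhs)

    # The Adams-type choice a_1* = 1, a_2* = a_3* = 0 reproduces the Adams-Bashforth values
    print(predictor_b_coeffs([1.0, 0.0, 0.0]) * 24)    # -> [ 55. -59.  37.  -9.]

The coefficient matrix is essentially the Vandermonde matrix referred to above, so the system always has a unique solution.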
A graph such as that in Appendix I is a practical aid in determining stability. During the course of solving a differential equation, g is calculated - after each step if necessary - and h is chosen so that the value hg lies in the region (d, ∞).

For m ≠ ∞ and the method P(EC)^m, the determination of the hg-region of stability is as above, except that the polynomial (14) is used instead of C(s). For the case PE(CE)^m and finite m, the polynomial is even simpler. It results from equation (10) with e_{n-i,m-1} replaced by e_{n-i,m}. It turns out to be

    s C(s) + Θ^m Σ_{i=1}^{k+1} ( a_i - a_i* + Θ a_i* + hg (b_i - b_i* + Θ b_i*) ) s^{k+1-i}.     (17)

The hg-regions of stability for values of k = 1,...,8 and certain values of m, for procedures based on the formulas of Adams type, are shown in Appendix II. The Adams formulas are defined by taking a_1 = a_1* = 1, but otherwise a_i = a_i* = 0. The predictor then becomes the Adams-Bashforth formula and the corrector, the Adams-Moulton.

We note three characteristics of the table in Appendix II. For given m, d increases as k goes from k=2 to k=8. For given k and m, the hg-region for PE(CE)^m contains the hg-region for P(EC)^m and P(EC)^{m+1}. Thirdly, the hg-region for m = ∞ is the largest for each given k.

To conclude this section, we shall investigate the extent to which the preceding results for N = 1 can be generalized. It is shown in Appendix III that e_n can be represented as the first N components of the vector A^n y_0, where A is an Nk by Nk matrix and y_0 is the 'initial-condition' vector. The growth of e_n is determined by the eigenvalues of A. As shown in Appendix III, e^{hλ} + O(h^{k+2}), where λ is any eigenvalue of G, is an eigenvalue of A. These are to be expected as before. Therefore, we define stability in a manner analogous to the case N = 1.

Let Λ_1 = e^{hλ_1} be the eigenvalue of e^{hG} of maximum absolute value (|Λ_1| is called the spectral radius of e^{hG}). Let the other eigenvalues of A be denoted by Λ_i. We say a method is stable if |Λ_i| < |Λ_1| for |Λ_1| > 1 and if |Λ_i| < 1 for |Λ_1| ≤ 1.

The definition of stability is rather arbitrary. Stability should be defined to suit the requirements placed on the numerical solution. Unless we note otherwise, we shall use the above definition, which is suitable for our purposes. If a method satisfies this definition of stability, the relative error of the solution vector of the system of equations, as measured by our norm for example, will remain small. However, the above definition does not ensure the smallness of the relative error of each component of the numerical solution. If we replace Λ_1 in the above definition by the eigenvalue of e^{hG} of minimum absolute value, we obtain a stronger stability condition. This new condition is sufficient - but not necessary, as we shall see later - to ensure the smallness of the relative error of each component of the solution.

We define absolute stability by saying that a method is absolutely stable if the extraneous eigenvalues Λ_i satisfy |Λ_i| < 1. Another quite restrictive definition of stability requires that the Λ_i satisfy |Λ_i| < |Λ_2|, where Λ_2 is the eigenvalue of e^{hG} of minimum absolute value, even if |Λ_2| < 1.

It turns out that for a stable method, an approximate expression for e_n is

    e_n ≈ (S_1^n - I) [ (hG Σ_{i=0}^{k} b_i)^{-1} (T + r) + Θ^{m-1} (I - Θ)(I - Θ^m)^{-1} (T* + r*) ]     (18)

where S_1 = e^{hG} + O(h^{k+2}).
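When g (and hence Θ) is a scalar, a region such as those tabulated in Appendix II can be estimated by a direct scan. A sketch (Python; the k = 2 Adams-Moulton corrector with m = ∞ is a purely illustrative choice, and the scan step is arbitrary) marches hg downward until a root of the characteristic polynomial leaves the unit disk:

    import numpy as np

    def stable_at(hg, a, b):
        # For m = infinity: are all roots of (1 - hg*b_0) s^k - sum_i (a_i + hg*b_i) s^(k-i)
        # inside the unit circle (the stability requirement for g < 0)?
        k = len(a)
        coeffs = [1.0 - hg * b[0]] + [-(a[i] + hg * b[i + 1]) for i in range(k)]
        return np.abs(np.roots(coeffs)).max() < 1.0

    def left_endpoint(a, b, step=0.01):
        # March hg downward from 0 until instability appears, estimating d in (d, infinity).
        hg = 0.0
        while stable_at(hg - step, a, b):
            hg -= step
        return hg

    # k = 2 Adams-Moulton corrector: y_n = y_{n-1} + h*(5*y'_n + 8*y'_{n-1} - y'_{n-2})/12
    a = [1.0, 0.0]
    b = [5/12, 8/12, -1/12]       # b_0, b_1, b_2
    print(left_endpoint(a, b))    # estimate of d for this corrector iterated to convergence

For a system, the analogous scan would require the eigenvalues of the matrix A of Appendix III rather than the roots of a scalar polynomial.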
Noting the similarity between equations (18) and (16), we define the round-off and truncation regions for the general case in the same way as before. The right hand side of (18) provides an estimate for ||e_n||.

For Adams method, the region of stability can be determined if the eigenvalues of A are determined in terms of h. Methods for finding these eigenvalues are given in Appendix III. In general, these eigenvalues are difficult to determine, particularly if the matrix G varies considerably during the course of a calculation. It is quite impractical to calculate the eigenvalues of A with any regularity while the method is used. However, a preliminary study of the system might yield pertinent information on the Λ_i. For instance, if G is a constant matrix over the domain of t under consideration, the Λ_i can be predetermined for a given k and h, giving the conditions on h necessary for stability.

ECONOMY CONSIDERATIONS

In deciding which numerical methods to use for solving (1), the cost of such procedures should be taken into account. We shall compare the accuracy of methods that cost the same. Since we are assuming that f is relatively complicated, the cost of a procedure is proportional to the number of times f is evaluated. Thus, m/h is a reasonable measure of cost for the predictor-corrector procedures.

A typical plot of the relative error that results when Adams method is used, versus h, is given in Appendix IV. In the round-off region, the relative error is quite constant. As h increases through the truncation region, the relative error increases. Finally, instability occurs. Depending on the accuracy required, we would like to choose h so that the method is operating in the extreme left hand side of the truncation region. The problem then becomes one of determining the most practical m. Obviously, we would like to keep m as small as possible.

We assume that T* and T are the dominant terms in (18). It may happen that h is so small that the first term of (18) dominates. In this case, increasing m has little effect in changing e_n. Otherwise, the second term dominates. In this case it is better to decrease h, thus decreasing the magnitude of the terms of Θ^{m-1} and of T*, rather than increasing m. In any case, it is clear that we need consider only small values of m.

Before making a specific choice of m, we should consider the stability question. For the case N = 1 and Adams method, it is clear from the values in Appendix II that the PEC method is rather unstable for general purposes. The stability regions for cases with more evaluations per step than PECEC are not considerably magnified. Bearing in mind that we wish to keep m small, we must therefore choose between the methods PECE and PECEC. The procedure PECE has a larger region of stability than PECEC for each given k. However, it may be that the extra accuracy obtained by the added correction - which costs practically nothing - warrants the use of PECEC over PECE. The stability question for the general case is harder to discuss since we do not have tables such as those in Appendix II. However, we expect stability to behave the same qualitatively as in the simple case.

EXPERIMENTAL TECHNIQUES

Because the round-off error r appears where it does in equation (16), it can be of considerable importance unless its magnitude is kept small.
The use of double-precision arithmetic decreases the magnitudes of the individual round-off errors. However, round-off effects are minimized more simply and more economically by using 'partial double-precision' ([6], p. 94). The values of y_n in (2) and (3) are stored in double-precision and the additions of the terms involving h are done in double-precision. But the terms in (2) and (3) involving h are calculated in single-precision. For instance, if Z'_{n-i} denotes the single-precision rounded value of the number y'_{n-i}, then partial double-precision for the Adams corrector is defined by

    y_n = y_{n-1} + h Σ_{i=0}^{k} b_i Z'_{n-i},

where the second term is left unrounded and is added in its entirety to y_{n-1}. This procedure is successful because the method involves the addition of small terms to relatively larger ones. In fact, h can now be decreased to a considerably lower value than in the single-precision case, without fear of accumulation of round-off error.

A measure of the local truncation error - the truncation error from a single step of a calculation - can be found by experimental methods. First, we write

    T_p = C_p h^{k+2}  and  T_c = C_c h^{k+2}

where T_p and T_c are the truncation errors due to the particular applications of the predictor and corrector respectively at the point t_n, and

    C_p = R* x^{(k+2)} / (k+1)! + O(h)  and  C_c = R x^{(k+2)} / (k+1)! + O(h).

Therefore, neglecting round-off errors,

    (y_p - y_c) = (C_p - C_c) h^{k+2}

where y_p is the y_n calculated from (2) and y_c the value of y_n calculated from (3). Thus,

    ||T_c|| ≈ |R / (R* - R)| ||y_p - y_c||.     (19)

R and R* depend on the a_i and b_i, a_i* and b_i*. For Adams method and k = 6, for example, R/(R*-R) = -1375/38174. It is a simple matter to calculate this approximation to ||T|| at any stage of the procedure. In practice, the maximum local error to be allowed would be prescribed. During a calculation, h would be chosen so that the measure of the truncation error given by (19) is less than the prescribed maximum.

We shall compare the results obtained by the application of various predictor-corrector methods to the popular Runge-Kutta method defined by

    y_{n+1} = y_n + (1/6)(k_1 + 2k_2 + 2k_3 + k_4)

where

    k_1 = h f(t_n, y_n)
    k_2 = h f(t_n + h/2, y_n + k_1/2)
    k_3 = h f(t_n + h/2, y_n + k_2/2)
    k_4 = h f(t_n + h, y_n + k_3).

The truncation error of this method is O(h^5). We note that this method involves four evaluations of f per step and thus costs the same as the PECEC method with a step-size of h/2.

Before concluding this section, we note that in Appendices V, VI, VII and VIII, the step-sizes for the Runge-Kutta procedure are chosen so that the methods in the same row of the tables cost the same. That is, for the PEC table, the Runge-Kutta step-size is equal to four times the predictor-corrector step-size. For the PECE and PECEC tables, the Runge-Kutta step-size is twice the predictor-corrector step-size. In these Appendices, h denotes the predictor-corrector step-size.
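Returning to the error measure (19): in a program the check reduces to comparing a scaled predictor-corrector difference against the prescribed maximum. A minimal sketch (Python; the tolerance is an arbitrary illustrative value, and the ratio is the k = 6 Adams value quoted above):

    import numpy as np

    R_RATIO = 1375.0 / 38174.0          # |R/(R* - R)| for the k = 6 Adams formulas

    def local_error_acceptable(y_predicted, y_corrected, tol=1e-8):
        # Estimate ||T_c|| from the predictor-corrector difference, as in (19),
        # using the norm sum of absolute values adopted earlier.
        estimate = R_RATIO * np.sum(np.abs(y_predicted - y_corrected))
        return estimate <= tol           # if False, h should be reduced before continuing

The predicted and corrected vectors are already available at every step, so the test costs no additional evaluations of f.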
In these Appendices, h denotes the predictor-corrector step-size. - 2 8 -.. EXPERIMENTAL RESULTS We tested the preceding theory on four separate problems. We considered only systems of equations. The f i r s t two problems are simple examples of circular motion [Y]. Pro-blem (A) Is linear whereas (B) i s nonlinear. Each has the same trigonometric solution. For problem (C) we chose a linear system of equations with an exponential^s'olution. The equations in problem (D) are the equations of motion of a vehicle reentering the earth's atmosphere A theoreti-cal solution of the last problem is not known - as is generally the case in actual practice. Therefore, problem (D) was solved under r e a l i s t i c conditions. We restricted ourselves to procedures using formulas of Adams type only. Much of our theory is based on these formulas. Actually, procedures using Adams formulas are as reliable as most of the other procedures and are generally considered to be representative 'general-purpose1 procedures. We considered procedures based on the iteration method P(EC) m for m-1,2,-3 and on PE(GE)m for m=l,2 . In each of these cases, we took k = 4 , 5 > 6 , 7 . Problems (A) and (B) were run over the domain t=0 to t = 107T , problems (C) and (D) (after normalization) over t=0 to t=30 . For 0 - 1 - 7 each m and k we used h = 2 , 2 , „ . „ , 2 . Some of the results for problem (A) are representative of the results for the other problems so we shall discuss (A) in detail. Then we w i l l note where the results from the other problems d i f f e r . =29-Problem (A) Is defined by x{(t) - X2 X l ( o ) = 1 x*(t) = " x l x 2(0) = 0 x^(t) - x4 x 3(0) = 0 xi(t) = -x 3 x 4(0) = 1 . (A) It is easy to see that the solution is given by x-L(t) . = cos t x 2 ( t ) = -sin t x^(t) = sin t x^(t) = cos t The matrix G i s a constant matrix and i t s eigenvalues are i and - i , each repeated. We note that a l l the eigenvalues of e have absolute value equal to one, Thus, i f another eigenvalue of A has absolute value greater than one, instability w i l l occur. We use the norm E » |je|| = |e1| •.+ |e2 | + |e3| + |e^ | , t h where e^ is the error of the i component of the numerical solution, as a measure of the error. The table of maximum errors corresponding to this pro-blem is given in Appendix V, A plot of maximum error versus h is given in Appendix IV for the special case k=6 and -30-the PECEC method. In each Instance, the maximum error occurred very near t=107T We f i r s t note that for each k and h ., the error in the round-off region is approximately constant. In the case k=7 and the PECEC method, the round-off error varies only slightly over the domain h = 2~ 1 0 to h=2~^ . This is a result of the partial double-precision technique. Instability occurs for each method and each value of k „ The transition between the truncation region and the insta b i l i t y region is sharply defined in each case. However, instability arises at different values of h for different methods. For a given k , inst a b i l i t y appears at smaller values of h for the PEC method than for other methods. For the PEC method, the h at which in s t a b i l i t y occurs decreases rather markedly as k increases; however, for methods other than PEC , this phenomenon does not appear. In fact, no indication as to which k is best exists for methods other than PEC , For the methods PECE to' P(EC)^ , the round-off regions and the errors in these regions are very nearly the same. 
Also, the increase of the errors through the trunca-tion regions in these eases behaves approximately the same, although the error rises slightly faster for k=4 than for k=5>6,7 . However, Instability appears for smaller h In the cases PECE and PECEC than in the cases PE(CE) 2 and P(EC) ( i t occurs for slightly smaller h in the case PECEC than in PECE), But since we are interested in using values of h such that the method operates in the l e f t hand - 3 1 -side of the truncation region, the PEGE and PECEC methods are as suitable for this problem as any other method. For methods other than PEC , and each value of k , the error in the truncation region does not increase as quickly as expected. Assuming that the terms involving 9 in equation ( l 8 ) can be neglected, we expect the error In this region to be approximately ©(h^ 4 - 1) . However, since m is small, i t might be that our assumptions about 9 are not s t r i c t l y true. For this problem and a l l methods, the truncation region is not particularly wide. Comparing the predictor-corrector methods PECE and PECEC to the Runge-Kutta method of the same cost (the Runge-Kutta method with twice the step-size), we see f i r s t that in both of their round-off regions, the error is approxi-mately the same. However, the truncation region for the Runge-Kutta method starts at much smaller h than for the other methods. From the value of h at which the trunca-tion region of the Runge-Kutta method starts to the points where Instability of the predictor-corrector methods occurs, the error of the Runge-Kutta method i s considerably higher than that of the predictor-corrector methods. And i t is precisely in this region that we wish to operate our predictor-corrector method, thus obtaining the maximum accuracy for the minimum cost. Problem (B) is x{(t) = x 2 x^O) = 1 x 2 ( t ) = -x^" 3 x 2 ( 0 ) = 0 -32--3 x 3 ( 0 ) = 0 x 4 ( 0 ) = 1 (B) where r + X 2 3 The s o l u t i o n i s the same as i n (A) . Even though (A) and (B) are s i m i l a r problems and have i d e n t i c a l s o l u t i o n s , we cannot expect the experimental r e s u l t s to be the same f o r both problems, f o r the terms i n v o l v i n g r i n (B) a f f e c t the eigenvalues of G . The c h a r a c t e r i s t i c equation f o r G i s not d i f f i c u l t to con-s t r u c t . However, i t depends on x.-^  and x^ and i s q u i t e complicated. The r e s u l t s f o r (B) are tabulated i n Appen-d i x VI. The I n s t a b i l i t y regions f o r methods other than PEC are not w e l l - d e f i n e d f o r problem (B). In f a c t , i t appears as i f i n s t a b i l i t y does not occur f o r these methods., The e r r o r s i n a l l the round-off regions are ap p r o x i -mately the same. The t r u n c a t i o n region f o r the PEC method s t a r t s at lower values of h than f o r other methods, as i s the case w i t h (A). A l s o , f o r a l l given methods other than PEC , the t r u n c a t i o n region f o r k=4 s t a r t s at smaller h than f o r k=5j>6,7 » For the l a t t e r methods, the t r u n c a t i o n regions f o r (B) s t a r t at approximately the same h as those f o r (A). Again, the t r u n c a t i o n e r r o r does not r i s e as f a s t as expected . The p r e d i c t o r - c o r r e c t o r methods used on (B) compare w i t h the Runge-Kutta method i n e s s e n t i a l l y the same way as - 3 3 -b e f o r e . 
F o r problem ( C ) , we chose 1 1 0 0 (c) The s o l u t i o n I s x x ( t ) x 2 ( t ) iCeVe"*) iCe^ e-*) x 4 ( t ) x 3 ( t ) The e i g e n v a l u e s o f G a r e .1 and -1 , each r e p e a t e d . We use the same norm as b e f o r e and we l e t E / 2(e^) be a measure o f the r e l a t i v e e r r o r . The t a b l e o f maximum r e l a t i v e e r r o r s i s g i v e n i n Appendix V I I , F o r each method, the maximum r e l a t i v e e r r o r o c c u r e d a t t=30 , The e i g e n v a l u e s o f f o r t h i s problem a r e e*1 and ""IT. h e , S i n c e e > 1 , we ex p e c t a t l e a s t one component o f the e r r o r t o behave l i k e a p o s i t i v e e x p o n e n t i a l , A l s o , i t i s r e a s o n a b l e t o expect e h t o be the e i g e n v a l u e o f t h e m a t r i x A o f maximum a b s o l u t e v a l u e f o r most methods. Thus, t h e s e methods would be s t a b l e . The t a b l e i m p l i e s t h a t i n s t a b i l i t y does n o t o c c u r f o r methods o t h e r than PEC And"since each "component o f the s o l u t i o n i s p r o p o r t i o n a l t o e^ , each component has a s m a l l r e l a t i v e e r r o r . We note, however, t h a t a s i m p l e t r a n s f o r m a t i o n -34-changes equations (C) into a system in which'one of the com-ponents has a solution equal to e"fc „ By i t s very nature, this component would not have a small relative error. The truncation region for methods with k=4 other than PEC start at smaller h than for methods with k=5,6,7 . The truncation errors f o r k=5,6,7 are similar. The methods PECE and PECEC are considerably better than the corresponding Runge-Kutta method. Finally, problem (D) i s defined by ( C^ S x|(s ) = J ^ " /° xl "" 2 (g^inx^-g^cosx^cosxg-g^cosx^sinxg ) - 2(co Xgcosx^ Ksinx^cosx^+cosx^sinx^cosXg ) CTS -, cosx 0sinx 0sinxi. X 7 S ) = L , o 1 + 3 2 4 2X ; 2m / cosx 0 x ^ c o s x i . p co X / - c o s x i . s i n x i . s i n x n 0 , s l n x 0 c o s x 0 c o s x i , + — — _____ + _ • i =: 1- sinxi. J X, COSX 0 . r~— COSX-, 4' 1 3 V x i 3 C L S „ , 1 . s i , — ^ ' COSX-~ (g^sinx qcosx Q+g slnx Qsinx 0) - --=-__ eosx^sinXo l J ^ H J d v ^ i ' 2 CO XgCOSX^ — — (cosx-,sinxi.-sinx 0sinxi.cosx 0 ) x-^ v 3 4 3 4 2 7 -35-cosx 0eosx 0 x;(s) . . — 1 _ 2 cosx 0sinx 0 X ' ( S ) = . 3 A 5V ' xgcosx^ x^(s) = - sinx 3 , (D) wh ere f> = ^ e 2 R y =• x 6 - R d ( l - f s i n 2 x 4 - ^ - - s l n 2 2 x 4 ) ( x 6 ) 2 2 x 6 »A 83 7^72 C 2 0 ( ~ ) 2 s l n x 4 c o s x 4 (x6^ x6 su = 0 A physical interpretation of the problem and i t s para-meters is given in Appendix IX, The Important variables are x-^  , x^ and x^ , We normalized the problem by putting -5 t = 3s x 10 ^ „ The domain of t allowed the vehicle to make more than one complete 1 skip 1 through the earth 1s outer atmosphere, Each component of the solution to (D) behaved as expected. For a l l methods, a region of h existed in which each com-ponent was constant; these respective constants were the -36-same for a l l methods. These regions were the round-off regions.. Similarly, truncation regions existed. As a measure of the error in each x^ , we used the deviation |x^-x.J where x^ was the solution in the round-off region. The greatest relative deviations were observed for x^ . The results for the other x^ were very similar. The maximum deviations for x^ are given in Appendix VIII, We observe that the PEC method gives results that are s i m i l a r to t h o s e of t h e p r e v i o u s p r o b l e m s . Instability occurs only for the PEC method with k=6,7 . 
The PECE and PECEC methods yield deviations that are as small as the corresponding deviations for the other predictor-corrector methods, and considerably smaller than the corresponding deviations for the Runge-Kutta method.

CONCLUSIONS

In general, we have shown that predictor-corrector methods can be used effectively to generate accurate numerical solutions to systems of ordinary differential equations. We have described the theory and usage of predictor-corrector procedures in detail. We considered methods of the form P(EC)^m and PE(CE)^m for different values of k. We have discussed the concept of stability for single equations and have generalized this concept for systems of equations. We have obtained an approximate expression for the error vector for stable methods and we have discussed the behaviour of the error for varying step-sizes. In particular, we have investigated theoretically the behaviour of the error for Adams method applied to the case N = 1.

For a given method, the error in the truncation region is lowered as k increases. However, the table in Appendix II indicates that if g < 0, instability will occur at smaller h as k becomes larger. On the other hand, for either P(EC)^m or PE(CE)^m and a given k, similar reasoning indicates that instability will occur at larger h as m increases. The experimental results in [8] exhibit this behaviour. Finally, for a given m and k, the method PE(CE)^m is stable over a wider range of h than either P(EC)^m or P(EC)^{m+1}. On the basis of these facts and the additional requirement that m be kept small, we decided that the methods PECE and PECEC with the intermediate values of k = 5,6 are the most efficient for N = 1.

For the general case, the stability conditions are not easy to analyse. However, from the results for our four different and representative problems, we are able to draw certain conclusions. The PEC method yielded the poorest accuracy and was quite unstable, especially for large values of k. The PE(CE)^2 and P(EC)^3 methods were not significantly better than the methods PECE and PECEC for any of the problems. The results from the PECE and PECEC methods were quite similar. However, for k = 4, the truncation region started at lower values of h than for k = 5,6,7. Of the methods considered, the PECE and PECEC methods with k = 5,6,7 are the best for general purposes. For the problems we investigated, these methods gave considerably better results than the Runge-Kutta method of the same cost.

It is possible that significant results may be obtained by investigating methods using larger values of k. The possibility of determining the eigenvalues of the matrix A and controlling stability during the course of a calculation should be investigated.

APPENDIX I

Graph of |s*| vs. hg (figure only).

APPENDIX II

The hg-Regions of Stability

    k    P(EC)^∞   PEC     PECE    P(EC)^2   PE(CE)^2   P(EC)^3
    1    -1.00     -1.37   +0.60   +0.38     ±0.50      -∞
    2    -0.30     -1.70   -1.13   -1.25     -1.00      (-∞)
    3    -0.15     -1.25   -0.87   -1.10     -0.87      (-∞)
    4    -0.       -1.00   -0.62   -0.87     -0.70      -1.80
    5    -0.       -0.70   -0.50   -0.70     -0.55      -1.13
    6    -0.       -0.50   -0.38   -0.50     -0.45      -0.75
    7    -0.       -0.38   -0.25   -0.38     -0.35      -0.50
    8    -0.       -0.30   -0.20   -0.25     -0.25      -0.35

The values of d, where (d, ∞) is the region of stability, are tabulated.
The e q u a t i o n s used a r e p ( E c f : B k + 1 c ( s ) + ^ 1 <Tes k[Ea,s k + 1- 1-(i -e) ^Vs -^ 1 - 1 ] L 1 1 1 1 . r k + l f , c k - i ^ M Q u f , 0 k - i v 1,* k + l - i +hg[_s Z _ b i s +(!-©.)( Z _ a i s ' A - b i s V- 1 * k+l-i k - 1 k Y 1 ^ * k+l-irTI 2_a,s " j L ^ s -s 2 _ b i s ) J r 1 1 1 1 1 1 J = 0 k + 1 P E ( C E ) m s s C ( s ) + 9 m £ (a 1-a*+©a*+hg(b I-b*+©b*))s k = 1 + 1 = 0 m = 00 ? C ( s ) = (l=©)s k =y3 (a,+hgb, ) s k ~ 1 = 0 1 where f o r Adams method, a-^  = a* = 1 and o t h e r w i s e = 0 © = h b Q g . -41-APPENDIX III The Generalized Error Equation For the general case (N/£L ) , the error equation is (11) for m=oo and a more complicated equation determined by (10) and the last equation of (9) for f i n i t e m . We rewrite the homogeneous equation in the form A l e n + k + A 2 e n + k - l + — + A k + l e n = 0 (20) where, for m = oo A 1 = I - hbQG A2 = -I-hb1G k± = -hbi__1G i —3 ? o Q a y ]?C~i"l The A^ are polynomials In G for m/oo The transformation Jr -. = e.. 'nl n yn2 " en+l o o o o o ynk * en+k-l / y n l \ yn - y. n2 \ ynk / yo =1 * k - l / results in the equation yn = A y Q , where / P I ft .. IS A = 1 0 .0 9 I o 0 . . • -. . 0 o o o o o --• \-A- 1A x K l Ak+1 * : 1 A2 4 2 -(assuraing that A1 is nonsingular) and I and 0 are the N by N identity and zero matrices respectively. The growth of e n - which is represented as the f i r s t N components of y n - i s determined by the eigenvalues of A , Por example, i f a l l of the eigenvalues of A are less than one in absolute value, A n —> the zero matrix as n oo . The eigenvalues A. of A satisfy the character-i s t i c equation d e t ( a 1 X k + A 2 A ^ 1 +,,,+ A k + 1 ) - .0 . ( 2 1 ) Returning to equation ( 2 0 ) with the A^ for m=oo "th. we replace e n + ^ with the (n+i) power of the matrix S1 = e h G + 0 ( h k + 2 ) , Using the corrector equations corresponding to ( 1 5 ) and the relation -1 = 1 J we see that satisfies A 1 S 1 + 1 + A 2 s i + i ~ 1 ' + ° « + Ak+i = 0 Therefore, satisfies the scalar equation g(S-^) = 0 where g( A ) = det(A 1 A k +....+ A k + 1 ) (see [ 5 ] , p, 2 2 8 ) , Now, i f T i s a matrix such that T=1S-LT i s the Jordan cononical form of S-^  , the equation T~ 1g(S 1)T = 0 implies that each eigenvalue of S-^ s a t i s f i e s g ( A ) - 0 . That i s , each e i g e n v a l u e o f • i s an e i g e n v a l u e o f A . However, t h e e i g e n v a l u e s o f are o f the form e* 1 7^ 1 + 0 ( h k + 2 / " where A^ i s an e i g e n v a l u e o f G . T h e r e f o r e , E where E i s a c o n s t a n t v e c t o r , s a t i s f i e s (20). The complete s o l u t i o n o f (20) can be w r i t t e n as E p l u s o t h e r terms t h a t depend on A and whose b e h a v i o u r depends on the e i g e n v a l u e s o f A . F o r s t a b l e methods, t h e s o l u t i o n o f (20) i s a p p r o x i m a t e l y E . Under s p e c i a l c i r c u m s t a n c e s , the above a n a l y s i s can be s i m p l i f i e d . We assume t h a t t h e r e e x i s t s a m a t r i x T such t h a t T^GT = D , a d i a g o n a l m a t r i x w i t h the N e i g e n v a l u e s o f G ( p o s s i b l y r e p e a t e d ) as t h e d i a g o n a l e l e m e n t s . We l e t e n + i = ^ v n + l ' i = 0 > • • a n d s u b s t i t u t e t h e s e e x p r e s s i o n s I n t o e q u a t i o n s (20). 
P r e m u l t i p l y i n g (20) by T = 1 and n o t i n g the s p e c i a l form o f t h e A^ , we o b t a i n a system o f N uncoupled d i f f e r e n c e e q u a t i o n s , each o f the components o f v n + i a p p e a r i n g i n o n l y one o f t h e e q u a t i o n s . Each o f t h e s e e q u a t i o n s may be s o l v e d by the u s u a l method. However, P-^  = e n ^ + ©(h^"1"2) i s a s o l u t i o n o f the r e s u l t i n g system o f e q u a t i o n s and thus e h ^ 1 + .©(h^"1"2 ) i s a c h a r a c t e r i s t i c r o o t o f the i ^ h d i f f e r e n c e e q u a t i o n . (A s u f f i c i e n t c o n d i -t i o n f o r the s m a l l n e s s o f the r e l a t i v e . t e r r o r o f each com-ponent o f v n can be found by a p p l y i n g the s t a b i l i t y c o n d i t i o n f o r N=l t o each d i f f e r e n c e e q u a t i o n . ) F o r s t a b l e systems, v n i s a p p r o x i m a t e l y P^ F f o r some v e c t o r F and thus e n i s a p p r o x i m a t e l y TP^ F = E as before,. To w r i t e down th e complete e x p r e s s i o n f o r e n , we p r o --44 ceed as in the ease N=d . We f i r s t find the particular integral of the nonhomogeneous equation. We put e n _ ^ m E and e . m 1 = E , both constant vectors, for a l l i and n-l,m-l ' eliminate H from (10) and the last equation of (9) . We obtain the expression E = ( - n - e M d - e X i - d V ^ a X^bJ) £ i l 2 ± £ i 0 1 - ^ ( i - e J d r S ^ J ^ C T V r * ) . We then assume that e n can be written approximately as S*^  E + E , and, neglecting starting errors by putting e^ = 0 , we find that E = -E . The f i n a l expression for e n is given by (18). To determine the eigenvalues of A , we use equation (21) or other methods that take advantage of the special form of the A i . We note that the above analysis specializes to the analysis for single equations when N=l . APPENDIX IV l o g 1 0 ( E r r o r ) vs. log 2(h) (PECEC, k = 6 , Problem (A)) INSTABILITY REGION > TRUNCATION REGION ROUND-OFF REGION V • - 4 . 0 • 8 . 0 - 6 , 0 - 4 . 0 - 2 . 0 0 . 0 l o g 0 h -46-APFENDIX V Maximum E r r o r s - Problem (A) The u n i t s of E are 1 0 , \ k h \ 4 5 6 7 Runge-Kutta 2l 1.669 2 , 1 0 1 2 , 4 8 8 2 . 1 3 1 1 . 9 5 2 2 X 1 , 6 5 4 2 . 1 1 6 2 . 5 0 3 —. 1 2 . 6 3 6 2 - 4 1 , 6 6 9 2 . 1 4 6 — — 1 7 7 . 9 2 0 2\ 1.714 8 2 5 9 . 9 8 2 — — 2 9 7 8 . 9 5 1 - l | 3 4 4 1 . 2 6 7 — — — 4 7 2 2 3 . 3 8 2 2 - 1 — — — — 6 1 0 2 3 7 . 4 4 4 2 ~ 0 — — — — — 2 -2"-6 1 . 6 5 4 2 , 0 8 6 2 . 4 7 4 2 . 1 6 1 1 . 3 2 6 2X 1 . 6 5 4 2 , 0 8 6 2 , 4 7 4 2.146 1 . 9 5 2 2-l 1.684 2,101 2.488 2 . 1 6 1 1 2 . 6 3 6 2=3 2 . 0 8 6 2,235 2,593 2.310 1 7 7 . 9 2 0 5 6 . 5 0 5 10.304: 7 . 3 0 2 7 . 1 9 7 2 9 7 8 . 9 5 1 2i 2 2° 2 5 4 2 . 7 1 9 5 8 2 , 2 7 8 2 6 6 , 6 9 4 1 7 4 . 4 9 3 4 7 2 2 3 . 3 8 2 137785.740 55772.796 — — 610237.444 -3 2 - 2 2 - l 2 ~ 0 1 . 669 1 , 6 5 4 1 . 6 9 9 2.444 5 2 . 9 4 4 2 2 0 4 , 1 0 5 I I I I 6 5 . 6 6 5 2 , 1 0 1 2 , 1 1 6 2 . 1 1 6 2 , 2 6 5 1 0 . 5 9 5 5 5 3 . 5 5 6 2.474 2,488 2,459 2,563 6.924 2 . 1 7 6 2 . 1 7 6 2 . 1 7 6 2 . 3 3 9 7 . 1 2 3 1 9 8 , 4 6 9 2 2 3 2 4 1 . 9 6 2 1 . 3 2 6 1 . 9 5 2 1 2 , 6 3 6 1 7 7 . 9 2 0 2 9 7 8 , 9 5 1 4 7 2 2 3 . 3 8 2 6 1 0 2 3 7 . 4 4 4 2 " ! 1 . 6 5 4 2 . 1 0 1 2 . 4 7 4 2 . 1 7 6 2 ~ ° 1 . 6 5 4 2 , 1 1 6 2 . 4 7 4 2 , 1 6 1 2 " g 1 . 6 9 9 2 . . O 8 6 2 . 5 0 3 2 . 1 9 0 2~\ 2 . 4 4 4 2 . 2 8 0 2 . 5 9 3 2 . 3 1 0 2~i. 4 9 . 8 7 4 1 0 . 3 1 2 6 . 8 1 0 7 . 0 7 8 2 i 1 8 3 0 . 8 7 6 4 6 2 . 6 8 1 1 7 7 . 9 6 5 1 6 1 . 2 3 1 2 Q 7 3 9 3 9 . 6 5 9 3 1 2 0 0 . 
APPENDIX V

Maximum Errors - Problem (A)

[Table: maximum errors E (in units of a fixed power of 10) for each of the five predictor-corrector procedures studied, with k = 4, 5, 6, 7, and for the Runge-Kutta method, at a sequence of step sizes h equal to successive powers of 2. A dash denotes an error greater than 1. The individual entries are not legible in this copy.]

APPENDIX VI

Maximum Errors - Problem (B)

[Table: maximum errors E for the procedures PEC, PECE, P(EC)^2, PE(CE)^2, and a fifth mode of P(EC) type (heading partly illegible), in the same arrangement as Appendix V (k = 4, 5, 6, 7 and Runge-Kutta; h over successive powers of 2). A dash denotes an error greater than 1.]

APPENDIX VII

Maximum Errors - Problem (C)

[Table: maximum errors for Problem (C), tabulated relative to the growing solution (the tabulated quantity is of the form E/2e^t, in units of a fixed power of 10), in the same arrangement as Appendices V and VI. A dash denotes a relative error greater than 1.]
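The tables above record, for each procedure and each step size, the maximum error observed over the interval of integration. A schematic sketch of such a sweep is given below; it is not taken from the thesis, the test problem x' = -x is a stand-in rather than Problem (A), (B), or (C), the predictor-corrector pair is a standard Adams-Bashforth/Adams-Moulton PECE combination, and Runge-Kutta starting values are assumed.

```python
import numpy as np

# Schematic reproduction of one column of an error table: integrate a stand-in
# test problem with a PECE procedure and record the maximum error for each h.

def rk4_step(f, t, x, h):
    """One classical Runge-Kutta step, used here only to supply starting values."""
    k1 = f(t, x)
    k2 = f(t + h / 2, x + h / 2 * k1)
    k3 = f(t + h / 2, x + h / 2 * k2)
    k4 = f(t + h, x + h * k3)
    return x + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def pece(f, t0, x0, h, n_steps):
    """PECE mode: 4-step Adams-Bashforth predictor, Adams-Moulton corrector."""
    ab = np.array([55.0, -59.0, 37.0, -9.0]) / 24.0           # predictor b*_1..b*_4
    am = np.array([251.0, 646.0, -264.0, 106.0, -19.0]) / 720.0  # corrector b_0..b_4
    ts = [t0 + i * h for i in range(n_steps + 1)]
    xs = [x0]
    for i in range(3):                                          # Runge-Kutta starters
        xs.append(rk4_step(f, ts[i], xs[i], h))
    fs = [f(ts[i], xs[i]) for i in range(4)]
    for n in range(3, n_steps):
        xp = xs[n] + h * sum(ab[j] * fs[n - j] for j in range(4))          # P
        fp = f(ts[n + 1], xp)                                               # E
        xc = xs[n] + h * (am[0] * fp +
                          sum(am[j] * fs[n + 1 - j] for j in range(1, 5)))  # C
        fs.append(f(ts[n + 1], xc))                                         # E
        xs.append(xc)
    return np.array(ts), np.array(xs)

f = lambda t, x: -x                       # stand-in problem, exact solution e^(-t)
for p in range(2, 7):
    h = 2.0 ** (-p)
    n = int(round(10 / h))
    ts, xs = pece(f, 0.0, 1.0, h, n)
    err = np.max(np.abs(xs - np.exp(-ts)))
    print(f"h = 2^-{p}   max error = {err:.3e}")
```

In the truncation region each halving of h should reduce the printed maximum error by roughly a factor of 2^5 for this fifth-order corrector, until round-off (for small h) or instability (for large h) takes over, as in the regions of Appendix IV.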
APPENDIX VIII

Maximum Deviations in x_i - Problem (D)

[Table: maximum deviations, in radians, in one of the angular components x_i of Problem (D), for the same procedures, k values, and step sizes as in Appendices V-VII. A dash denotes an overflow on the I.B.M. 7090.]

APPENDIX IX

The Atmospheric Reentry Problem

The important variables of problem (D) are defined by

x_1 = the square of the velocity of the vehicle, relative to the earth;
x_2 = the azimuthal angle (measured from the north);
x_3 = the flight path angle (the angle of depression of the velocity vector from the "horizontal" through the vehicle);
x_4 = the latitude of the vehicle (positive north);
x_5 = the longitude (measured from Greenwich);
x_6 = the distance of the vehicle from the center of the earth;
s = 3t × 10^5 = the length of the flight path.

In addition,

y = the distance of the vehicle from the earth's surface;
ρ = the density of the atmosphere;
g_s, g_r = the components of the earth's gravitational force in the coordinate system determined by x_4 and x_6, respectively.

These variables are expressed in terms of the following parameters:

m = the mass of the vehicle;
ω = the angular velocity of the earth relative to inertial space;
C_D(α) = the drag coefficient;
C_L(α) = the lift coefficient;
α = the angle of attack of the vehicle above the zero lift line;
S = the reference area of the vehicle;
and the angle of bank of the vehicle.

The constants are defined by

β^{-1} = 23,500 ft.;
ρ_0 = 0.0027 slugs/ft.^3;
R_0 = the equatorial radius of the earth = 20,925,840 ft.;
f = the earth flattening factor = 1/298.28;
GM = the gravitational constant = 1.4076536 × 10^{16} ft.^3/sec.^2;
C_20 = -0.00108248.

We used the realistic values C_D S/m = 0.977, C_L S/2m = 0.489, and a bank angle of π/2 radians.
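As an illustration only, the constants and vehicle parameters above might be collected as follows. The sketch is modern Python, the identifiers are hypothetical, and the exponential density law is an assumption on our part: the appendix gives ρ_0 and the 23,500 ft. length scale, but equations (D) themselves are not reproduced here.

```python
import math

# Hypothetical collection of the constants quoted in Appendix IX.
REENTRY_CONSTANTS = {
    "R0_ft": 20_925_840.0,          # equatorial radius of the earth
    "flattening": 1.0 / 298.28,     # earth flattening factor f
    "GM_ft3_per_s2": 1.4076536e16,  # gravitational constant GM
    "C20": -0.00108248,             # C_20, as given in the appendix
    "rho0_slug_ft3": 0.0027,        # density constant rho_0
    "inv_beta_ft": 23_500.0,        # the 23,500 ft. length scale (1/beta)
    "CD_S_over_m": 0.977,           # drag parameter C_D S/m actually used
    "CL_S_over_2m": 0.489,          # lift parameter C_L S/2m actually used
    "bank_angle_rad": math.pi / 2,  # angle of bank
}

def atmospheric_density(y_ft: float) -> float:
    """Assumed exponential atmosphere rho = rho_0 * exp(-y/23,500); hypothetical helper."""
    c = REENTRY_CONSTANTS
    return c["rho0_slug_ft3"] * math.exp(-y_ft / c["inv_beta_ft"])
```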
As initial conditions, we used

x_1(0) = (25,960 ft./sec.)^2
x_2(0) = π/4 radians
x_3(0) = 0.06 radians
x_4(0) = π/6 radians
x_5(0) = 0 radians (arbitrary)
x_6(0) = (R_0 + 350,000) ft. = 21,275,840 ft.

Using the above definitions and initial conditions, the flight of a vehicle entering the earth's atmosphere at 350,000 ft. above the surface of the earth can be fully described. For the particular values we used, the vehicle descends to approximately 200,000 ft. (at which point x_3 becomes negative), returns to 265,000 ft., and then turns toward the earth again, thus completing one 'skip'. (The above equations (D) must be modified if the vehicle descends below y = 100,000 ft.) At the end of our interval, s = 9 × 10^5 ft., we have

x_1 = (20,812.365 ft./sec.)^2
x_2 = 1.2943885 radians
x_3 = 0.028790246 radians
x_4 = 0.73785486 radians
x_5 = 0.45546708 radians
x_6 = (R_0 + 238,795.24) ft.

Since x_3 > 0, the vehicle's height y is decreasing. These values for x_i were the solutions obtained in the round-off regions of all the predictor-corrector methods and of the Runge-Kutta method; this fact indicates that they are good approximations to the solution of (D).

BIBLIOGRAPHY

1. Collatz, L. The Numerical Treatment of Differential Equations, (1960).
2. Dahlquist, G. Convergence and stability in the numerical integration of ordinary differential equations, Math. Scand., 4 (1956), pp. 33-53.
3. Faddeeva, V.N. Computational Methods of Linear Algebra, (1959).
4. Fine, J. Doctoral Thesis, Aerospace Institute, Toronto.
5. Gantmacher, F.R. Matrix Theory, (1959).
6. Henrici, Peter. Discrete Variable Methods in Ordinary Differential Equations, (1962).
7. Henrici, Peter. Error Propagation for Difference Methods, (1963).
8. Hull, T.E. and Creemer, A.L. Efficiency of predictor-corrector procedures, JACM, 10 (1963), pp. 291-301.
9. Hull, T.E. and Luxemburg, W.A.J. Numerical methods and existence theorems for ordinary differential equations, Numerische Math., 2 (1960), pp. 30-41.
10. Taylor, A.E. Advanced Calculus, (1955).
11. Taylor, A.E. Functional Analysis, (1958).
