UBC Theses and Dissertations
An algorithm for solving index 1 differential algebraic equations in boundary value problems Chan, Pat S. Y. 1989

Full Text
AN ALGORITHM FOR SOLVING INDEX 1 DIFFERENTIAL ALGEBRAIC EQUATIONS IN BOUNDARY VALUE PROBLEMS

By PAT S.Y. CHAN
B.Sc., University of Toronto, Canada, 1987

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF SCIENCE in THE FACULTY OF GRADUATE STUDIES (DEPARTMENT OF COMPUTER SCIENCE)

We accept this thesis as conforming to the required standard

THE UNIVERSITY OF BRITISH COLUMBIA
August 1989
© Pat S.Y. Chan, 1989

In presenting this thesis in partial fulfilment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the head of my department or by his or her representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission. The University of British Columbia, Vancouver, Canada.

Abstract

In this thesis we consider the numerical solution of boundary value problems for differential algebraic equations (DAEs) of index 1. It was shown in Ascher [1], [2] that if symmetric schemes are used for linear index 1 DAEs, the stability is controlled by the stability of an auxiliary (ghost) differential problem which is not necessarily stable even if the given problem is. We examine an algorithm which preserves the stability of the ghost problem by arranging the auxiliary boundary conditions properly. The algorithm has been implemented in a code (DAESOLVE) and tested on numerous examples. This code has been compared with a code which chooses the boundary conditions arbitrarily, showing favourable results.
Contents

Abstract
Contents
Acknowledgement
1 Introduction
2 Background Material For BVP
  2.1 Projecting Boundary Conditions
3 Background Material For DAE
  3.1 Classification of DAEs
  3.2 Symmetric Schemes for DAEs in BVPs
    3.2.1 Analysis
4 Algorithm (linear index 1 DAEs in BVPs)
  4.1 Details of Step 1
  4.2 Details of Step 2
  4.3 Details of Step 3
  4.4 Details of Step 4
5 Numerical Examples
  5.1 Example A
  5.2 Example B
  5.3 Example C
6 Conclusion and Future Research
Bibliography

Acknowledgement

First, I would like to extend my special thanks to my supervisor Dr. Uri Ascher for suggesting my thesis topic, for the many helpful "remote" e-mail discussions, and for the financial support. Thanks also go to Dr. Feng Gao, who acts as the second reader of my thesis. Thanks to all of my friends in the department and in my church. Last but not least, my deepest thanks go to my parents, brother, sisters, brother-in-law, and sister-in-law, for their endless help, support and love, without which none of this would have been possible. I dedicate this thesis to my whole family.

Chapter 1: Introduction

In recent years there has been a lot of interest in the numerical treatment of differential algebraic equations (DAEs). DAEs occur in some modelling problems in which the state variables (representing the state of the system) are related by both differential equations and algebraic constraints (e.g. the simulation of electrical networks, the solution of certain equations in fluid dynamics, and the simulation of mechanical systems). They are often characterized by their index (see the discussion in Section 3.1). In this thesis, we present and study an algorithm used to solve linear boundary value problems (BVPs) for DAEs of (global) index 1. In general, a system of ordinary differential equations (ODEs) can be written as

    F(x, x', t) = 0,   a ≤ t ≤ b.                         (1.1)

Here x ∈ R^n and x' ∈ R^n.
A special case is when (1.1) is linear:

    E(t) x' = A(t) x + q(t).                              (1.2)

If E(t) is non-singular, then (1.2) can be written as an explicit ODE

    x' = E^{-1}(t) A(t) x + E^{-1}(t) q(t),   a ≤ t ≤ b.

Likewise, (1.1) can be converted, at least in principle, to an explicit ODE form when J(t) is nonsingular on a ≤ t ≤ b, where

    J(t) = ∂F(x(t), x'(t), t) / ∂x'.                      (1.3)

However this conversion may not always be simple or possible in practice. Equation (1.1) is related to (1.2) in the sense that at each iteration of a quasilinearization of (1.1) we have a linear DAE of the form (1.2) with E(t) = J(t). When E(t) is singular, the differential equations are mixed together with algebraic constraints. This is the so-called DAE. A simple remark is that any ODE is a DAE, but not every DAE is an ODE. A simple example is as follows [11]:

Example 1.1

    x1 = g(t)
    x1' − x2 = 0
    x2' − x3 = 0

where a ≤ t ≤ b. Obviously the solution is x1 = g(t), x2 = g'(t) and x3 = g''(t). However, if initial values are given at x(a) = [x1(a), x2(a), x3(a)], then one has to make sure that x(a) is compatible with [g(a), g'(a), g''(a)], or else there is no solution. This simple example shows how easily one can get into trouble when one does not look at DAEs carefully.

Much effort has been put into studying DAEs for initial value problems (IVPs); see for instance [11], [7], [17] and [12]. Gear [7] was the first to propose using BDF (Backward Differentiation Formulas) methods to solve DAEs. Marz [15] shows that BDF methods converge if J(t) of (1.3) has constant rank and the index is one. It is proved in [20] that if a k-step, constant stepsize BDF method with k < 7 is applied to a linear constant coefficient DAE (with index m), the solution is O(h^k) accurate globally after a maximum of (m − 1)k + 1 steps. But BDF methods may fail when the step size is not constant [10].
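The reduction of (1.2) to an explicit ODE when E(t) is nonsingular can be sketched numerically. This is a minimal illustration with made-up matrices E, A and data x, q (none of them come from the thesis), solving E x' = A x + q for x' at a fixed t:

```python
# Sketch: when E is nonsingular, x' = E^{-1}(A x + q).
# E, A, x, q are illustrative 2x2 data, not from the thesis.

def solve2(M, b):
    """Solve a 2x2 linear system M v = b by elimination with partial pivoting."""
    (a11, a12), (a21, a22) = M
    if abs(a11) < abs(a21):                 # pivot: put the larger entry on top
        a11, a12, a21, a22 = a21, a22, a11, a12
        b = [b[1], b[0]]
    m = a21 / a11
    a22 -= m * a12
    r2 = b[1] - m * b[0]
    v2 = r2 / a22
    v1 = (b[0] - a12 * v2) / a11
    return [v1, v2]

E = [[2.0, 0.0], [0.0, 1.0]]                # nonsingular E(t) at some fixed t
A = [[0.0, 1.0], [-1.0, 0.0]]
x = [1.0, 0.0]
q = [0.0, 0.0]
rhs = [A[i][0]*x[0] + A[i][1]*x[1] + q[i] for i in range(2)]
xprime = solve2(E, rhs)                     # x' = E^{-1}(A x + q)
print(xprime)                               # [0.0, -1.0]
```

When E(t) is singular, no such solve exists and the first row degenerates into an algebraic constraint, which is exactly the DAE situation discussed above.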
Further, and more importantly, when BDF methods are applied to more general higher index problems, instability can occur unless certain restrictive conditions hold [19], [4], [1], [9]. Petzold [18] studies order and stability properties for implicit Runge-Kutta methods applied to index 1 linear constant coefficient DAEs. There is a reduction of order under some conditions. This order reduction effect also occurs for some of the implicit Runge-Kutta methods in the stiff ODE literature. Kvaerno [13] has extended Petzold's work on implicit Runge-Kutta methods to index 1 problems in which J(t) has constant rank. Campbell [5] gives a very interesting algorithm applied to general linear DAEs (1.2). However this method involves differentiating the coefficient matrices and is therefore impractical for certain problems.

The existing theory is well-developed for index 1 DAEs in IVPs and a general purpose code, DASSL (by Petzold [16]), is available. But less work has been done on BVPs. The theory is less satisfactory, and no general purpose code has been developed for even index 1 problems, let alone for higher index ones. It is, therefore, the purpose of this thesis to present an algorithm to solve linear index 1 BVPs using symmetric schemes. Symmetric schemes are used because, as is well known in the numerical BVP literature, they weakly preserve dichotomy, whereas one-sided schemes do not (see, e.g., [3]). The analysis is based on Ascher [1], [2]. It was shown there that if symmetric schemes are used for linear index 1 DAEs, the stability is controlled by the stability of an auxiliary (ghost) ODE problem which is not necessarily stable even if the given problem is. The algorithm we study attempts to preserve the stability of the ghost problem by arranging the auxiliary boundary conditions properly.

In Chapter 2 we provide the background material necessary for understanding the relevant theory of BVPs.
The ideas of well-conditioning and dichotomy are introduced. Chapter 2 also discusses using eigenvalues to approximate kinematic eigenvalues. This discussion is important since it constitutes a crucial step in the proposed algorithm of this thesis. In Chapter 3 we discuss the theory for using symmetric schemes in solving linear index 1 DAEs. In Chapter 4 we give the details of the proposed algorithm. In Chapter 5 experimental results are presented based on the algorithm in Chapter 4, and in the last chapter we draw conclusions and suggest potential future improvements.

Chapter 2: Background Material For BVP

In this chapter we study the concept of well-conditioning for linear BVPs. The concept of dichotomy is introduced as it plays a crucial role in the study. In particular, the relationship between dichotomy and kinematic eigenvalues (and also the Lyapunov equation) is investigated. Since it is very difficult to find the kinematic eigenvalues in practice, the usual eigenvalue analysis is used to approximate them. A crucial step of the algorithm proposed in Chapter 4 uses eigenvalue analysis to study the behaviour of the difference scheme, which involves a ghost problem. If the ghost problem is found not to be well-conditioned, the proposed algorithm will quit. Therefore it is very important to investigate how good this approximation is.

Consider the following BVP:

    y' = A(t) y + q(t),   a ≤ t ≤ b,                      (2.1)
    B_a y(a) + B_b y(b) = β,                              (2.2)

where y(t), q(t) ∈ R^n, and A(t) ∈ R^{n×n}. Intuitively the BVP (2.1), (2.2) is well-conditioned if a small change in q(t) and β produces a "small" change in the solution y. In contrast, it is ill-conditioned if a small change produces a "large" change in y. It makes little sense to solve a very ill-conditioned problem because, starting with inexact data, the solution will be arbitrarily inexact no matter how it is computed.
(Remark: an algorithm is stable if it yields a solution that is the exact solution of a slightly perturbed problem.) It is the purpose of this chapter to further quantify and characterize this important idea. Assume that

    max(||B_a||, ||B_b||) = 1.

The norm || · || used in this chapter is assumed to be a Hölder norm. The induced matrix norm will also be denoted by || · ||. Furthermore, assume that the BVP (2.1), (2.2) has a unique solution y(t). This solution can be written as

    y(t) = Y(t) B^{-1} β + ∫_a^b G(t, s) q(s) ds,         (2.3)

    B = B_a Y(a) + B_b Y(b),                              (2.4)

    G(t, s) = Y(t) B^{-1} B_a Y(a) Y^{-1}(s),   s ≤ t,
    G(t, s) = −Y(t) B^{-1} B_b Y(b) Y^{-1}(s),  s > t,    (2.5)

where Y(t) is a fundamental solution and G(t, s) is the Green's function. Taking norms in (2.3) we obtain

    ||y||_∞ ≤ κ1 ||β||_∞ + κ2 ||q||_p,                    (2.6)

where

    κ1 = ||Y B^{-1}||_∞,
    κ2 = sup_t ( ∫_a^b ||G(t, s)||^q ds )^{1/q},   1/p + 1/q = 1.

The conditioning constant is defined as

    κ = max{κ1, κ2}.

Equation (2.6) becomes

    ||y||_∞ ≤ κ [ ||β||_∞ + ||q||_p ].                    (2.7)

One can easily see from (2.7) that κ provides a bound on the effect of perturbations of β and q, i.e. it measures how sensitive the change of y(t) is with respect to perturbations of β and q(s). The linear BVP (2.1), (2.2) is said to be well-conditioned if κ is a constant of moderate size. One can show that κ1 ≤ 2κ2 (see [3], Chapter 3). Actually, for a well-conditioned BVP, the ODE must have a dichotomy [3].

Definition 2.1 (Dichotomy) Let Y(t) be the fundamental solution of

    y' = A(t) y.                                          (2.8)

The ODE (2.8) has an exponential dichotomy if there exist a constant orthogonal projection matrix P ∈ R^{n×n} of rank p, 0 ≤ p ≤ n, and positive constants K, λ, μ with K of "moderate" size, such that

    ||Y(t) P Y^{-1}(s)|| ≤ K e^{−λ(t−s)}   for t ≥ s,     (2.9)
    ||Y(t) (I − P) Y^{-1}(s)|| ≤ K e^{−μ(s−t)}   for s ≥ t, (2.10)

for a ≤ t, s ≤ b. It has an ordinary dichotomy if λ = 0 or μ = 0 in (2.9) or (2.10). It is said to be dichotomic if it has either an ordinary dichotomy or an exponential dichotomy.
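The link between dichotomy, the placement of boundary conditions, and the size of the conditioning constant can be illustrated numerically. The following is my own minimal sketch (not from the thesis), for the constant-coefficient system y' = diag(−L, L) y on [0, 1], whose fundamental solution is Y(t) = diag(e^{−Lt}, e^{Lt}): a κ1-like bound ||Y(t) B^{-1}|| stays moderate when the decaying mode is pinned at the left end and the growing mode at the right, and explodes when the choice is reversed.

```python
import math

# Illustrative sketch: y' = diag(-L, L) y on [0, 1], Y(t) = diag(e^{-Lt}, e^{Lt}).
# Compare a kappa_1-like bound max_t ||Y(t) B^{-1}|| for two separated BC choices.
L, a, b = 25.0, 0.0, 1.0

def kappa1(Ba, Bb):
    # B = Ba Y(a) + Bb Y(b); diagonal here since Ba, Bb, Y are all diagonal.
    B = [Ba[i][i] * math.exp((-L if i == 0 else L) * a) +
         Bb[i][i] * math.exp((-L if i == 0 else L) * b) for i in range(2)]
    worst = 0.0
    for k in range(101):                      # sample max_t of ||Y(t) B^{-1}||
        t = a + (b - a) * k / 100
        Y = [math.exp(-L * t), math.exp(L * t)]
        worst = max(worst, max(abs(Y[i] / B[i]) for i in range(2)))
    return worst

good = kappa1([[1, 0], [0, 0]], [[0, 0], [0, 1]])  # decaying mode fixed at a, growing at b
bad  = kappa1([[0, 0], [0, 1]], [[1, 0], [0, 0]])  # reversed placement
print(good, bad)                                   # moderate vs. ~e^L
```

With the reversed placement the bound grows like e^L, i.e. the discrete analogue of placing a BC against the dichotomy.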
Moreover, the ideas of well-conditioning, dichotomy and boundary conditions are closely related to each other. A particularly simple, but important, case is when the boundary conditions (2.2) are separated, i.e.

    B_a = [B_a1; 0],   B_b = [0; B_b2],                   (2.11)
    B_a1 y(a) = β0,   B_b2 y(b) = β1,                     (2.12)

where B_a1 ∈ R^{m×n} and B_b2 ∈ R^{(n−m)×n}. If the BVP is well-conditioned, then it must have a dichotomy. The BC B_a1 y(a) = β0 and B_b2 y(b) = β1 must control the nonincreasing modes and the nondecreasing modes, respectively. This idea is stated formally in the following theorem.

Theorem 2.1 Suppose the BVP (2.1) with separated BC (2.12) is well-conditioned. Let Y(t) be any fundamental solution, B be as defined in (2.4), I_p be a p × p identity matrix, and P be a projection matrix where

    P := B_a Y(a) B^{-1} = [[I_p, 0], [0, 0]].

Then
(a) m = p;
(b) range(B_a1^T)⊥ ∩ range(Y(a) P) = {0}, i.e., no nonzero vector in range(Y(a) P) is orthogonal to range(B_a1^T);
(c) range(B_b2^T)⊥ ∩ range(Y(b) (I − P)) = {0};
(d) if the system has a dichotomy with (b − a) min{λ, μ} → ∞, then the rank of the projection matrix must be m.

Proof: see [3], Chapter 3, Theorem 107.

This theorem implies that the number of BC's at t = a must be at least the number of rapidly decreasing modes. Likewise the number of BC's at t = b must be at least the number of rapidly increasing modes. One of the crucial steps in the proposed algorithm in Chapter 4 is based on this theorem, especially (b) and (c). However it is impractical to use Definition 2.1 to check for dichotomy and then conclude well-conditioning, since we would have to compute a fundamental solution. It is natural to ask if we can characterize the ideas of dichotomy and stability directly in terms of A(t). Consider the ODE (2.8). Suppose that there exists a differentiable transformation matrix Q(t), with ||Q(t)|| ||Q^{-1}(s)|| uniformly bounded for t ≥ s, such that V(t) is upper-triangular, where

    A(t) = Q(t) V(t) Q^{-1}(t) + Q'(t) Q^{-1}(t).         (2.13)
Let

    w(t) = Q^{-1}(t) y(t).                                (2.14)

Substituting (2.14) into (2.8) (note: y' = Q'(t) w(t) + Q(t) w'(t)), we obtain a new ODE

    w' = V(t) w.                                          (2.15)

Let W(t) be the fundamental solution for (2.15) satisfying W(a) = I,

    W =: [[W11, W12], [W21, W22]],                        (2.16)

where W11 ∈ R^{p×p}, W12 ∈ R^{p×(n−p)}, W21 ∈ R^{(n−p)×p}, and W22 ∈ R^{(n−p)×(n−p)}. Then one can show that W(t) = Q^{-1}(t) Y(t) and that W(t) is block upper-triangular, i.e. W21 = 0. Moreover, both W11 and W22 are nonsingular.

Definition 2.2 (Kinematic Eigenvalues) The systems (2.8) and (2.15) are kinematically similar. The diagonal elements of V(t) are called the kinematic eigenvalues corresponding to Q(t).

Equation (2.13) is the famous Lyapunov equation. The following theorem relates kinematic eigenvalues with dichotomy.

Theorem 2.2 Consider the ODE (2.8). Suppose that there exists a transformation matrix Q(t) satisfying (2.13), (2.14) and (2.15). Denote the kinematic eigenvalues by λ1, λ2, ..., λn. Then (2.8) has a dichotomy iff there exist positive constants c and λ such that for p kinematic eigenvalues

    Re( ∫_s^t λ_i(τ) dτ ) ≤ −λ(t − s)   if t − s ≥ c,     (2.17)

and for the other n − p kinematic eigenvalues

    Re( ∫_s^t λ_i(τ) dτ ) ≥ λ(t − s)   if t − s ≥ c.      (2.18)

Proof: see [3], Chapter 3, Theorem 101.

Equations (2.17) and (2.18) imply that there are p nonincreasing modes and n − p nondecreasing modes. The transformation matrix Q(t) actually decouples the system (2.8) into two kinds of modes, namely nonincreasing modes and nondecreasing modes. The diagonal elements of V(t) (i.e. the kinematic eigenvalues) can be used to determine if the system has a dichotomy. Let

    V(t) = Q^T(t) A(t) Q(t),                              (2.19)

where V(t) is upper triangular, and its diagonal elements are the eigenvalues of A(t). A simple observation on (2.13) is that Q'(t) = 0 if A(t) is a constant matrix. The eigenvalues are then the same as the kinematic eigenvalues. One could then ask if it is always possible to use eigenvalues to approximate the kinematic eigenvalues for a general A(t). The answer is negative. We demonstrate this by a simple example.

Example 2.1

    A(t) = [[β cos(2αt), β sin(2αt) + α], [β sin(2αt) − α, −β cos(2αt)]].   (2.20)

With the transformation matrix

    Q(t) = [[cos(αt), sin(αt)], [−sin(αt), cos(αt)]]

we obtain the kinematic eigenvalues

    V = [[β, 0], [0, −β]].                                (2.21)

However a straightforward computation shows that the eigenvalues of (2.20) are

    ± sqrt(β² − α²).                                      (2.22)

The eigenvalues are only close to the kinematic eigenvalues when |α| is small compared to |β|. The example is pathological because a large |α| indicates that the modes rotate fast; this is rare in practice. It should be noted that often it is the number of nonincreasing modes and the number of nondecreasing modes we are interested in, and not their exact magnitudes. Even though eigenvalue analysis may fail, it is the only simple tool we have to approximate the kinematic eigenvalues. Besides, if A(t) does not change rapidly, it is often good enough.

2.1 Projecting Boundary Conditions

In this section we make use of the material presented in this chapter to answer a question we face in Chapter 4 (when performing the projection in Section 4.2). The question is as follows. Consider the BVP (2.1) with separated BC (2.12). Let V, Q, W and w be defined as in (2.13), (2.14), (2.15) and (2.16). Assuming that the ODE has a dichotomy, there are n⁻ nonincreasing modes and n⁺ nondecreasing modes (n⁻ + n⁺ = n). Furthermore, the real parts of the kinematic eigenvalues of V are arranged from large negative to large positive. Suppose that we are given n_a BC's at t = a and n_b BC's at t = b, with n_a ≥ n⁻, n_b ≥ n⁺ and n_a + n_b ≥ n. The BC are consistent, i.e. there is a unique solution. However the numerical algorithm used only requires n BC's; we have more BC's than we need. How should we choose n BC's from these n_a + n_b BC's so as to obtain a well-conditioned BVP, if possible?
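Before turning to that question, the rotation example above (Example 2.1) can be checked numerically. The sketch below is my own reconstruction: it builds A(t) from the Lyapunov relation A = Q V Q^{-1} + Q' Q^{-1} with Q(t) a rotation by αt and V = diag(β, −β), so the signs of the off-diagonal α terms depend on the rotation convention and may differ from (2.20) by a transposition. The eigenvalues of A come out as ±sqrt(β² − α²), far from the kinematic eigenvalues ±β when |α| is comparable to |β|:

```python
import math

# Rebuild the rotation example: A = Q V Q^{-1} + Q' Q^{-1},
# Q(t) = rotation by alpha*t, V = diag(beta, -beta) (kinematic eigenvalues).
alpha, beta, t = 3.0, 5.0, 0.7
c, s = math.cos(alpha * t), math.sin(alpha * t)
Q    = [[c, -s], [s,  c]]
Qp   = [[-alpha * s, -alpha * c], [alpha * c, -alpha * s]]   # Q'(t)
Qinv = [[c, s], [-s, c]]                                     # Q^T = Q^{-1}
V    = [[beta, 0.0], [0.0, -beta]]

def mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

QVQ = mul(mul(Q, V), Qinv)
QpQ = mul(Qp, Qinv)
A = [[QVQ[i][j] + QpQ[i][j] for j in range(2)] for i in range(2)]

# trace(A) = 0, so the eigenvalues of this 2x2 A are +-sqrt(-det A)
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
lam = math.sqrt(-det)
print(lam, math.sqrt(beta**2 - alpha**2))   # eigenvalue vs. sqrt(beta^2 - alpha^2)
```

Here the matrix eigenvalues are ±4 while the kinematic eigenvalues are ±5, illustrating the gap the text describes.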
From Theorem 2.1, we know that there should be n⁻ BC's at t = a and n⁺ BC's at t = b. This implies that n⁻ BC's and n⁺ BC's should be chosen from B_a1 and from B_b2, respectively (cf. (2.12)). Denote these n⁻ BC's at t = a and n⁺ BC's at t = b by

    B̄_a1 y(a) = β̄0,   B̄_b2 y(b) = β̄1.                     (2.23)

Then

    B̂_a1 Q^{-1}(a) y(a) = β̄0,   B̂_b2 Q^{-1}(b) y(b) = β̄1,
    B̂_a1 := B̄_a1 Q(a) =: [B̂¹_a1, B̂²_a1],
    B̂_b2 := B̄_b2 Q(b) =: [B̂¹_b2, B̂²_b2],                  (2.24)

where B̄_a1 ∈ R^{n⁻×n}, B̄_b2 ∈ R^{n⁺×n}, B̂¹_a1 ∈ R^{n⁻×n⁻}, and B̂²_b2 ∈ R^{n⁺×n⁺}. Under what conditions will the BVP (2.1), (2.23) be well-conditioned?

Let P be the projection matrix defined as in Theorem 2.1. By part (b) of Theorem 2.1, we must have range(B̄_a1^T)⊥ ∩ range(Y(a) P) = {0}. Let the vector z ∈ range(Y(a) P), i.e. z = Y(a)[c, 0]^T for some c ∈ R^{n⁻}. If also z ∈ range(B̄_a1^T)⊥, then for any x ∈ R^{n⁻}

    x^T B̄_a1 Y(a) [c, 0]^T = 0.

So, using W(t) = Q^{-1}(t) Y(t),

    x^T B̄_a1 Q(a) W(a) [c, 0]^T = 0
    x^T [B̂¹_a1, B̂²_a1] W(a) [c, 0]^T = 0
    x^T B̂¹_a1 W11(a) c = 0.

If B̂¹_a1 is nonsingular, then B̂¹_a1 W11(a) is also nonsingular, since W11 is nonsingular. This implies that c = 0. It means that there does not exist z ≠ 0 such that z ∈ range(Y(a) P) and also z ∈ range(B̄_a1^T)⊥. Hence part (b) of Theorem 2.1 is satisfied. Therefore, among the n_a BC's given by the user at t = a, we should choose a combination of n⁻ of them such that B̂¹_a1 is nonsingular.

By part (c) of Theorem 2.1, we also must have range(B̄_b2^T)⊥ ∩ range(Y(b)(I − P)) = {0}. Using the same technique, let the vector z ∈ range(Y(b)(I − P)), i.e. z = Y(b)[0, c]^T for some c ∈ R^{n⁺}. If also z ∈ range(B̄_b2^T)⊥, then for any x ∈ R^{n⁺}

    x^T B̄_b2 Y(b) [0, c]^T = 0.

So, using W(t) = Q^{-1}(t) Y(t),

    x^T B̄_b2 Q(b) W(b) [0, c]^T = 0
    x^T [B̂¹_b2, B̂²_b2] W(b) [0, c]^T = 0
    x^T (B̂¹_b2 W12(b) + B̂²_b2 W22(b)) c = 0.

If B̂²_b2 is nonsingular and B̂¹_b2 = 0, then (B̂¹_b2 W12(b) + B̂²_b2 W22(b)) is nonsingular, since W22 is nonsingular. This implies that c = 0. Hence part (c) of Theorem 2.1 is satisfied.
However, the nonsingularity of B̂²_b2 in itself is not sufficient to guarantee that (B̂¹_b2 W12(b) + B̂²_b2 W22(b)) is nonsingular (so this case is a bit different from the previous one). In practice, one may replace Q^{-1} by the transpose of the orthogonal transformation of (2.19). The n⁻ BC's at t = a and the n⁺ BC's at t = b are chosen such that B̂¹_a1 and B̂²_b2 are nonsingular. Of course, B̂¹_a1 and B̂²_b2 should not be ill-conditioned either. Even though these two conditions may not guarantee that part (c) of Theorem 2.1 holds, they are often good enough in practice.

Chapter 3: Background Material For DAE

In this chapter we review some fundamental concepts of DAEs. In Section 3.1 we discuss the classification of DAEs. In Section 3.2 we consider the use of symmetric schemes for linear index 1 DAEs. It will be shown that the stability of the discretization is controlled by a ghost problem.

3.1 Classification of DAEs

Not all DAEs of the form (1.1) can be solved. Some of them may have no solution or infinitely many solutions. (Of course, this depends on the BC too.) Actually, it is not reasonable to try to directly solve numerically those DAEs with either no solution or infinitely many solutions. Therefore we need a definition to classify the solvability of DAEs. If a DAE is solvable, we further categorize it based on its structure. The latter is the notion of index. The following definition is due to Campbell [14].

Definition 3.1 (Solvability) The DAE (1.1) is solvable on a subset Ω of R^{n+1} if there is an r-parameter family of solutions x(t, c) (defined for t ∈ R, c ∈ R^r) such that (i) (t, x(t, c)) ∈ Ω, and (ii) if x(t) is any solution whose graph lies in Ω, then x = x(t, c) for some c, and (iii) the graph of x(t, c) is an (r+1)-dimensional manifold.

We generally take Ω = [t0, T] × R^n. The definition guarantees that there are solutions, and that they are uniquely defined by their value at t = t0.
Having defined solvability of a DAE, we can further categorize its structure with a quantity called its index. This notion of index has been defined in many ways, as a local index and a global index. We are primarily interested in the global index in this thesis, and often we just write "index" for "global index". The definition given below is due to Gear [8].

Definition 3.2 (Index) The global index, m, of a solvable DAE (1.1) with F sufficiently smooth is defined as follows:

m = 0: if J(t) (as defined in (1.3)) is nonsingular;

m > 0: otherwise, consider the system of equations

    F(x, x', t) = 0
    (d/dt) F(x, x', t) = 0
    ...
    (d^s/dt^s) F(x, x', t) = 0

as a system of equations with dependent variables x', x^(2), ..., x^(s+1) and with independent variables x and t. The global index, m, is defined as the smallest s such that it is possible to solve for x' in terms of x and t.

It may suffice to give some simple examples here.

Example 3.1 Any explicit ODE is a DAE with index 0.

Example 3.2 Example 1.1 is an index 3 DAE:

    x1 = g(t)
    x1' − x2 = 0
    x2' − x3 = 0.

Differentiating the first equation gives x1' = g', hence x2 = g'; differentiating again gives x3 = g'', and once more gives x3' = g^(3). It implies that

    [x1', x2', x3'] = [x2, x3, g^(3)].

Since three steps of differentiation are required, the DAE is of index 3.

It should be mentioned that, in principle, it is possible to use analytical differentiation techniques to rewrite a DAE from a higher index to a lower index. Intuitively, by differentiating the algebraic equations, we can solve for x' in terms of x and t. However, each differentiation will introduce additional constants of integration. Gear [8] discusses an interesting algorithm for reducing higher index problems to lower index ones without introducing any constants of integration. In practice, such a reduction may not be feasible, but it helps to understand the underlying analytical structure of the problem.
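The index-reduction idea behind Example 3.2 can be mimicked numerically: with x1 = g(t) prescribed, successive differentiation of the constraint recovers x2 = g'(t) and x3 = g''(t). The sketch below (my own illustration, with finite differences standing in for the analytic differentiation) uses g = sin:

```python
import math

# Example 3.2, numerically: differentiate the constraint x1 = g(t) to get
# x2 = g'(t), and differentiate again to get x3 = g''(t).
g = math.sin
t, h = 1.0, 1e-5

def ddt(f, t, h):
    return (f(t + h) - f(t - h)) / (2 * h)      # central difference

x1 = g(t)
x2 = ddt(g, t, h)                               # one differentiation:  x2 = g'
x3 = ddt(lambda u: ddt(g, u, h), t, h)          # two differentiations: x3 = g''
print(x2 - math.cos(t), x3 + math.sin(t))       # both residuals near 0
```

Each numerical differentiation loses accuracy, which mirrors the analytic warning above: index reduction is conceptually simple but numerically delicate.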
Sometimes it may be useful to look at some special forms of (1.1). A DAE is in semi-explicit form if

    x1' = f(x1, x2, t),                                   (3.1)
    0 = g(x1, x2, t).                                     (3.2)

It is in "triangular" form if

    x1' = f(x1, x2, t),                                   (3.3)
    0 = g(x1, t).                                         (3.4)

It is in linear time-varying form if it can be written as (1.2). When both E(t) and A(t) of (1.2) are constant matrices, we have the linear constant coefficient form

    E x' = A x + f(t).                                    (3.5)

3.2 Symmetric Schemes for DAEs in BVPs

In this section we study in detail the stability issues arising when using symmetric schemes for DAEs in BVPs. Ascher in [1] and [2] shows that there is a potential risk in using symmetric schemes on general linear index 1 DAEs (1.2). If the DAE is not in semi-explicit form, then the stability of the discretization is controlled by a ghost problem. Therefore the stability constant of the discretization method depends on the conditioning constant of this ghost problem. For the numerical solution of index 1 IVPs there is a better alternative, namely BDF methods. However symmetric schemes are more appropriate for BVPs, since a well-conditioned problem may have both fast nonincreasing and fast nondecreasing modes. Symmetric schemes preserve dichotomy in a weak form (see [3], Ch. 3, 10) and BDF methods do not. Let us study a simple example to show the potential danger of using symmetric schemes.

Example 3.3 Consider a general linear DAE of the form (1.2) where

    E(t) = [[0, 0], [−1, t]],
    A(t) = [[−β, βt + 1], [1, −(t + 1)]],
    q(t) = [cos(t), 0]^T,                                 (3.6)

where β is a parameter, 0 ≤ t ≤ 1, and x1(0) = −1. The exact solution is

    x1 = −(1 + βt) e^{−t} − t cos(t),
    x2 = −β e^{−t} − cos(t).

We apply the midpoint scheme

    E(t_{i+1/2}) (x_{i+1} − x_i)/h_i = A(t_{i+1/2}) (x_i + x_{i+1})/2 + q(t_{i+1/2}),   1 ≤ i ≤ N,   (3.7)

on a mesh

    π: 0 = t_1 < t_2 < ... < t_N < t_{N+1} = 1,           (3.8)

with h_i := t_{i+1} − t_i, h := max_{1≤i≤N} h_i, and t_{i+1/2} := t_i + h_i/2. Here we choose a uniform mesh (h = 1/N). In order to use the above midpoint scheme, we need another BC.
Consider the following two cases:

    Case A:  x2(0) = −β − 1
    Case B:  −β x1(1) + (1 + β) x2(1) = −cos(1)

Table 3.1 and Table 3.2 show the errors and the rates of convergence of x for Case A and Case B, respectively. β is chosen to be 10.0. The errors computed are the maximum absolute errors. One can clearly see that the errors in Table 3.1 are very large, while the errors in Table 3.2 are relatively small. The large error is not due to roundoff error accumulation, as the computed rates of convergence indicate. One can see easily from these two tables that if the extra BC is not posed properly, large errors can occur.

Table 3.1

    h         error1     error2     rate1      rate2
    0.500-01  0.156+04   0.156+04
    0.250-01  0.354+03   0.354+03   0.214+01   0.214+01
    0.125-01  0.863+02   0.863+02   0.203+01   0.203+01
    0.625-02  0.214+02   0.214+02   0.201+01   0.201+01
    0.313-02  0.535+01   0.535+01   0.195+01   0.195+01

Table 3.2

    h         error1     error2     rate1      rate2
    0.500-01  0.263-01   0.615-01
    0.250-01  0.666-02   0.155-01   0.198+01   0.199+01
    0.125-01  0.167-02   0.388-02   0.200+01   0.200+01
    0.625-02  0.432-03   0.971-03   0.195+01   0.200+01
    0.313-02  0.111-03   0.243-03   0.196+01   0.200+01

3.2.1 Analysis

In this section we recall an analysis [2] of the stability issue of using symmetric schemes for a general index 1 linear DAE (1.2). Given a DAE (1.2), assume that E(t) can be written as

    E(t) = S(t) [[0, 0], [0, I]] T^{-1}(t),               (3.9)

where both S(t) and T(t) are smooth for all t, a ≤ t ≤ b. With the transformation

    [y; z] = x̄ = T^{-1} x,                                (3.10)

(1.2) can be transformed to semi-explicit form:

    0 = U11(t) y + U12(t) z + g¹(t),                      (3.11)
    z' = U21(t) y + U22(t) z + g²(t),                     (3.12)

where

    U = [[U11, U12], [U21, U22]] = S^{-1} A T − [[0, 0], [0, I]] T^{-1} T'.   (3.13)

In (3.11) there are n_y equations and in (3.12) there are n_z equations, n_y + n_z = n. The general linear DAE (1.2) has index 1 if U11(t) is nonsingular for all t ∈ [a, b].
Using (3.11) we then have

    y(t) = −[U11(t)]^{-1} [U12(t) z(t) + g¹(t)].          (3.14)

By substituting (3.14) into (3.12), we obtain an explicit ODE in z only. Therefore we actually require only n_z BC's for z. However, unless a transformation like (3.10) is used, we must require another n_y BC's when applying a numerical method to solve (1.2), so as to fully specify the BC's required for x. If a midpoint scheme is used for (1.2), then we are actually approximating y(t) in a wrong space, in the sense that we force y(t) to be as smooth as z(t). A result of this is that the stability of the scheme depends on the stability of a ghost ODE

    ȳ' = −M(t) ȳ,                                         (3.15)

where

    M = [U11]^{-1} (S^{-1} A T')₁₁ ∈ R^{n_y × n_y}        (3.16)

(if U11 = I then simply M = (S^{-1} A T')₁₁); see [1] and [2] for details. The new function ȳ comes from the discretization of y. Though the DAE may be well-conditioned (which we will assume), the stability constant of the discretization, which is controlled by the conditioning constant of the above ghost ODE subject to the BC used, may not be small at all. If the BC's for y are not properly posed with respect to the ghost problem, instability will arise. One simple observation is that it is better to first transform the DAE (1.2) to the semi-explicit form (3.11), (3.12), because M = 0 in that case (as T' = 0). However this explicit transformation may not be available or feasible in practice.

Using Example 3.3, one can show that with the transformation

    S = [[1, 0], [0, 1]],   T = [[t, −1], [1, 0]],

the ghost problem is ȳ' = β ȳ, i.e. M = −β. Obviously the symmetric scheme will have a large stability constant if β is large and positive and the BC for ȳ is put at the left-hand side (see Table 3.1), but it has a moderate stability constant if the BC for ȳ is put at the right-hand side (see Table 3.2). In the next chapter we give an algorithm to arrange the BC's for y automatically (in fact, the BC used for Table 3.2 has been generated by the proposed algorithm).
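The qualitative behaviour of Example 3.3 and Tables 3.1–3.2 can be reproduced with a short script. The matrices below follow one consistent reading of the partly garbled (3.6) (E = [[0,0],[−1,t]], A = [[−β, βt+1],[1, −(t+1)]], q = [cos t, 0], verified against the printed exact solution), so treat this as an illustrative sketch rather than the thesis code:

```python
import math

beta = 10.0

def E(t): return [[0.0, 0.0], [-1.0, t]]
def A(t): return [[-beta, beta*t + 1.0], [1.0, -(t + 1.0)]]
def q(t): return [math.cos(t), 0.0]

def exact(t):
    return [-(1.0 + beta*t)*math.exp(-t) - t*math.cos(t),
            -beta*math.exp(-t) - math.cos(t)]

def gauss_solve(M, b):
    """Dense Gaussian elimination with partial pivoting."""
    n = len(M)
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n):
                M[r][c] -= f * M[col][c]
            b[r] -= f * b[col]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (b[r] - sum(M[r][c]*x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def midpoint_solve(N, right_bc):
    """Midpoint scheme (3.7) on a uniform mesh; returns max abs error."""
    h = 1.0 / N
    n = 2 * (N + 1)
    M = [[0.0]*n for _ in range(n)]
    rhs = [0.0]*n
    for i in range(N):                       # interior midpoint equations
        tm = (i + 0.5) * h
        Em, Am, qm = E(tm), A(tm), q(tm)
        for r in range(2):
            for c in range(2):
                M[2*i + r][2*i + c]       = -Em[r][c]/h - Am[r][c]/2.0
                M[2*i + r][2*(i+1) + c]   =  Em[r][c]/h - Am[r][c]/2.0
            rhs[2*i + r] = qm[r]
    M[2*N][0] = 1.0                          # BC: x1(0) = -1
    rhs[2*N] = -1.0
    if right_bc:                             # Case B: extra BC at t = 1
        M[2*N + 1][2*N]     = -beta
        M[2*N + 1][2*N + 1] = 1.0 + beta
        rhs[2*N + 1] = -math.cos(1.0)
    else:                                    # Case A: x2(0) = -beta - 1
        M[2*N + 1][1] = 1.0
        rhs[2*N + 1] = -beta - 1.0
    x = gauss_solve(M, rhs)
    return max(abs(x[2*i + c] - exact(i*h)[c])
               for i in range(N + 1) for c in range(2))

errA = midpoint_solve(20, right_bc=False)
errB = midpoint_solve(20, right_bc=True)
print(errA, errB)   # Case A error is orders of magnitude larger than Case B
```

Both cases converge at second order, but the Case A error constant carries the e^{β} growth of the ghost problem, which is exactly the effect the tables display.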
Remarks: The decomposition (3.9) is not unique. Given T and S satisfying (3.9), we can generally write

    S̄(t) = S(t) [[F_S(t), 0], [R_S(t), G(t)]],            (3.17)
    T̄(t) = T(t) [[F_T(t), R_T(t)], [0, G(t)]].            (3.18)

Here F_S, F_T ∈ R^{n_y × n_y} and G(t) ∈ R^{n_z × n_z} are sufficiently smooth nonsingular scaling matrices. The rectangular matrices R_T(t) ∈ R^{n_y × n_z} and R_S(t) ∈ R^{n_z × n_y} can be chosen to obtain a certain special structure of U in (3.13) (see [1]). With the above transformation matrices T̄(t) and S̄(t), we can show that the ghost ODE is

    ȳ' = −M̄(t) ȳ,                                         (3.19)
    M̄ = F_T^{-1} M F_T + F_T^{-1} F_T',                   (3.20)

where M is defined in (3.16). The change from the old to the new ghost ODE depends only on F_T. If the fundamental solution of the original ghost ODE (3.15) is Y_old, then the fundamental solution of the new ghost ODE (3.19) is Y_new = F_T^{-1} Y_old. Let us define a transformation Q(t) such that

    −M = Q V Q^{-1} + Q' Q^{-1},                          (3.21)

where V is upper-triangular and so its diagonal elements are the kinematic eigenvalues of the ghost ODE. Then we can show that

    −M̄ = Q̄ V Q̄^{-1} + Q̄' Q̄^{-1},                          (3.22)

where

    Q̄ = F_T^{-1} Q.                                       (3.23)

This means that the kinematic eigenvalues are the same no matter how T is changed. This is very important for the question of whether scaling can affect the magnitude and sign of the kinematic eigenvalues; the answer is that it cannot. Of course, this conclusion is based on the assumption that F_T is smooth and nonsingular within the interval of integration.

Chapter 4: Algorithm (linear index 1 DAEs in BVPs)

In the last chapter we have seen the danger of using symmetric schemes in solving linear index 1 DAEs. The problem is that the algebraic part, y, is approximated in the wrong space, and the ghost problem generated may not have a small conditioning constant. If the boundary conditions with respect to the algebraic part, y, are not posed properly, large stability constants may occur. In this chapter we propose an algorithm to arrange the boundary conditions for the algebraic part so as to preserve well-conditioning, so long as the ghost ODE has a dichotomy. The main skeleton of the algorithm is presented first, followed by details of each step. Consider the DAE (1.2) subject to the consistent separated BC

    B_a x(a) = β0,   B_b x(b) = β1.                       (4.1)
In this chapter we propose an algorithm to arrange the boundary conditions for the algebraic part so as to preserve the well-conditioning, so long as the ghost ODE has a dichotomy. The main skeleton of the algorithm is presented first, followed by details of each step. Consider the DAE (1.2) subject to the consistent separated BC Bax(a) = p0 Bbx{b) = px (4.1) 28 CHAPTER 4. ALGORITHM (LINEAR INDEX 1 DAES IN BVPS) 29 A L G O R I T H M Step 1: Find a T and a X' satisfying (3.9) at the two interval ends. Using T we may now isolate the solution components y and z, and form UU, U12, g1 at each interval end. Step 2: Using the relation (3.14), obtain from (4.1) a set of BC's for z alone. Denote them by Baz(a) = 0o Bbz(b) = Pi (4.2) If there are more than nx BC's, then they may be projected appropriately. That is, substitute (3.14) into (3.12) (only at t = a, b), and obtain an explicit differential equation on z alone, denote it as z' = H(t)z + h(t) (4.3) Form the matrices H(a) and H(b), and analyze the eigenvalues of H using the QR algorithm. Let V(t) = QT(t)H{t)Q{t) at t = a, b where Q is orthogonal and V is upper triangular with the eigenvalues arranged in increasing order of real parts, from large negative to large positive. Let (nj ) be the number of eigenvalues of H(a) which have a large negative (positive) real part, and similarly define , njj" for H(b), If na + njj" > nz then (a) no projection is performed, and just pick arbitrary BC's for t = a, and nz — BC's for t = 6; else (b) write the first part of (4.2) as follows: (c) set fc:=max(riQ ,n^); (d) pick k linearly independent BC's from Ba, denote these BC's as Ba 8aQTz(a) = 0o . 
B̄_a Qᵀ z(a) = β̄_0,    B̄_a := [B̄_a1, B̄_a2]                           (4.5)

where B̄_a1 ∈ R^{k×k} and B̄_a2 ∈ R^{k×(nz−k)};
(e) set the BC (4.5) at t = a;
(f) perform the same process for t = b: pick nz − k linearly independent BC's from B̂_b, and denote these BC's as B̄_b:

B̄_b Qᵀ z(b) = β̄_1,    B̄_b := [B̄_b1, B̄_b2]                           (4.6)

where B̄_b1 ∈ R^{(nz−k)×k} and B̄_b2 ∈ R^{(nz−k)×(nz−k)};
(h) set the BC (4.6) at t = b;
End if

Step 3: Form the matrices M(a) and M(b) by (3.16), and analyze the eigenvalues using the QR algorithm. Let V(t) = Qᵀ(t) M(t) Q(t) at t = a, b, where Q is orthogonal and V is upper triangular with the eigenvalues arranged in increasing order of real parts, from large negative to large positive. Let n_a⁻ (n_a⁺) be the number of eigenvalues of M(a) which have a large negative (positive) real part, and similarly define n_b⁻, n_b⁺ for M(b).

If n_a⁻ + n_b⁺ > ny or n_a⁺ + n_b⁻ > ny then
(a) another method (e.g. transforming explicitly to (3.11), (3.12) everywhere first) should be used, because there may not be a dichotomy for y;
else
(b) set k := max(n_a⁻, n_b⁻);
(c) set the last ny − k components of Qᵀ y at t = a according to (3.14);
(d) set the first k components of Qᵀ y at t = b according to (3.14);
(e) transform back to obtain BC for x(a) and x(b).
End if

Step 4: Solve the discretized equations (3.7) with the obtained BC from Step 2 and Step 3.

This completes the specification of the BC for the application of the midpoint scheme (3.7).

4.1 Details of Step 1

Step 1 is performed by the singular value decomposition (SVD). Let E(a) = UΣVᵀ, where Σ is a diagonal matrix whose elements are the singular values of E(a). Denote those singular values which are above a threshold t_0 > 0 by σ_1, σ_2, …, σ_nz. In general, nz ≤ n, and nz is the dimension of the differential part of the DAE. Then set S(a) = UΣ̄ and T⁻¹ = Vᵀ, where Σ̄ is the n × n diagonal matrix with diagonal elements [σ_1, σ_2, …, σ_nz, 1, …, 1]. Likewise perform the same steps for E(b).
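Step 1 can be sketched in a few lines of NumPy. This is an illustration, not the thesis's Fortran code; note also that the SVD orders singular values decreasingly, so the identity block of the middle factor appears first here, whereas the block ordering of (3.9) is obtained by a permutation of the columns.

```python
import numpy as np

def split_E(E, tol=1e-10):
    """SVD-based splitting E = S * D * T^{-1}, D = diag(1,...,1,0,...,0)."""
    U, s, Vt = np.linalg.svd(E)
    nz = int(np.sum(s > tol))                     # dimension of the differential part
    sig_bar = np.concatenate([s[:nz], np.ones(E.shape[0] - nz)])
    S = U @ np.diag(sig_bar)                      # S(a) = U * Sigma_bar
    Tinv = Vt                                     # T^{-1} = V^T, so T is orthogonal
    D = np.diag([1.0] * nz + [0.0] * (E.shape[0] - nz))
    return S, Tinv, D, nz

# E(a) for a DAE whose differential part has dimension 1
a = 1.0
E = np.array([[0.0, 0.0, 0.0],
              [0.0, 0.0, 0.0],
              [-a, 2.0, -1.0 / a]])
S, Tinv, D, nz = split_E(E)
assert nz == 1
assert np.allclose(S @ D @ Tinv, E)               # the decomposition reproduces E
```

The threshold `tol` plays the role of t_0 above; in a robust code it would be scaled by the norm of E.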
This is a key step of the whole algorithm, and therefore has to be carried out correctly. We have remarked at the end of Chapter 3 that this decomposition is not unique (cf. (3.17)). Even though the non-uniqueness does not affect the kinematic eigenvalues, it affects the eigenvalues in the actual implementation. We will discuss this in detail in Section 4.3.

The derivative of T (i.e. T') is approximated (e.g. at t = a) by

T'(a) ≈ (T(a + h) − T(a)) / h

T(a + h) and T(a) are obtained by applying the SVD to E(a + h) and E(a) respectively. Of course, h has to be small enough; in the actual implementation it is chosen to be the square root of the machine epsilon. But we know that T is not unique, and therefore the two transformations computed at a and a + h may not belong to the same T(t). That is, performing the SVD on E(a) obtains T_1(a) and performing the SVD on E(a + h) obtains T_2(a + h), where T_1(t) may be different from T_2(t). It does not make sense to approximate T' using the above formula if T_1(t) ≠ T_2(t).

Even though h is chosen to be very small in the actual implementation, the computed T is still only unique up to the sign and permutation of its first ny columns. This is because, when we look at (3.9),

[ 0  0 ] [ T⁻¹_11  T⁻¹_12 ]   [   0       0    ]
[ 0  I ] [ T⁻¹_21  T⁻¹_22 ] = [ T⁻¹_21  T⁻¹_22 ]

One can easily see that E in (3.9) does not depend on the first ny rows of T⁻¹, but the last n − ny rows of it are unique. This implies that the first ny columns of the computed T are not unique (since the computed T is orthogonal, by the SVD). The problem is resolved by comparing and rearranging the first ny columns of T_1(a) and T_2(a + h) so that T_1(t) is the same as T_2(t). Given the ith column of T_1(a), look for a column among the first ny columns of T_2(a + h) such that these two columns are very close to each other. Swap this located column with the ith column of T_2(a + h).
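The compare-and-swap just described can be sketched as follows (a NumPy illustration; the tolerances `eps1`, `eps2` correspond to the ε₁, ε₂ of the algorithm detailed below):

```python
import numpy as np

def match_columns(T1, T2, ny, eps1=0.3, eps2=0.3):
    """Align the (non-unique) first ny columns of T2 with those of T1,
    fixing sign flips and permutations."""
    T2 = T2.copy()
    for i in range(ny):
        t1 = T1[:, i]
        j = int(np.argmax(np.abs(t1)))            # position of the largest entry
        for k in range(i, ny):
            sgn = 1.0 if t1[j] * T2[j, k] >= 0 else -1.0
            cand = sgn * T2[:, k]                 # sign-corrected candidate column
            if (abs(abs(t1[j]) - abs(T2[j, k])) <= eps1
                    and np.dot(t1 - cand, t1 - cand) <= eps2):
                T2[:, k] = T2[:, i]               # swap columns i and k,
                T2[:, i] = cand                   # keeping the corrected sign
                break
        else:
            raise RuntimeError("failed to match column %d" % i)
    return T2

# T1 orthogonal; T2 is T1 with its first two columns permuted and one sign flipped
T1, _ = np.linalg.qr(np.array([[1.0, 2.0, 0.0],
                               [0.0, 1.0, 3.0],
                               [1.0, 0.0, 1.0]]))
T2 = T1.copy()
T2[:, [0, 1]] = T2[:, [1, 0]]
T2[:, 0] *= -1.0
assert np.allclose(match_columns(T1, T2, ny=2), T1)
```

Because the columns of an orthogonal matrix are mutually orthonormal, a wrong pairing leaves a residual of size about 2 in condition (2), so moderate tolerances suffice.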
By doing so, hopefully T_1(t) = T_2(t). The following describes the algorithm in detail.

For i = 1 to ny
  - pick the ith column of T_1(a); call it t1_i;
  - find the maximum element (in absolute value) of t1_i, say the jth element; denote it as t1_{j,i};
  - compare |t1_{j,i}| with the jth elements of column i to column ny of T_2(a + h):
    If there exists a column k such that
      (1) | |t1_{j,i}| − |t2_{j,k}| | ≤ ε₁  and
      (2) (t1_i − sign(t1_{j,i}) sign(t2_{j,k}) t2_k)ᵀ (t1_i − sign(t1_{j,i}) sign(t2_{j,k}) t2_k) ≤ ε₂ :
    then
      t2_k ← sign(t1_{j,i}) sign(t2_{j,k}) t2_k;
      swap the kth column of T_2(a + h) with the ith column of T_2(a + h);
    Else
      stop the whole algorithm (because we have failed to compute T properly);
    End if
End For

The two parameters ε₁ and ε₂ are small positive numbers. Condition (2) guarantees that the square sum of the residual of t1_i − t2_k is less than some small number. However, t1_i may equal ±t2_k, and therefore we have to consider the sign of t2_k. Similarly, perform the same steps for T at t = b. This simple algorithm has performed very well in practice, for all the test cases we have tried.

The theory behind this simple algorithm is as follows [6]. Denote the ith column of T(t) as T⁽ⁱ⁾(t). Since we are using the SVD, T(t) is an orthogonal matrix, and ‖T⁽ⁱ⁾(a) − T⁽ʲ⁾(a)‖ = √2, i ≠ j. By the triangle inequality,

√2 = ‖T⁽ⁱ⁾(a) − T⁽ʲ⁾(a)‖
   = ‖T⁽ⁱ⁾(a) − T⁽ʲ⁾(a + h) − T⁽ʲ⁾(a) + T⁽ʲ⁾(a + h)‖
   ≤ ‖T⁽ⁱ⁾(a) − T⁽ʲ⁾(a + h)‖ + ‖T⁽ʲ⁾(a + h) − T⁽ʲ⁾(a)‖

so that

√2 − ‖T⁽ʲ⁾(a + h) − T⁽ʲ⁾(a)‖ ≤ ‖T⁽ⁱ⁾(a) − T⁽ʲ⁾(a + h)‖,    i ≠ j

Since T(t) is continuous, ‖T⁽ʲ⁾(a) − T⁽ʲ⁾(a + h)‖ → 0 as h → 0. Therefore, for h small enough,

1 < ‖T⁽ⁱ⁾(a) − T⁽ʲ⁾(a + h)‖,    i ≠ j

This implies that among all the columns of T_2(a + h) there exists only one which is close to the ith column of T_1(a), 1 ≤ i ≤ n, and explains why the algorithm performs so well in practice.

4.2 Details of Step 2

Equation (4.2) can be obtained in the following way. Suppose that there are n_a BC's at t = a and n_b BC's at t = b. Using (3.14), we have (e.g.
at t = a)

B_a x(a) = B_a T(a) [ y(a) ; z(a) ]
         = B_a T(a) ( [ −[U11]⁻¹ U12(a) ; I ] z(a) + [ −[U11]⁻¹ g1(a) ; 0 ] )
         = β_0

Obviously, in (4.2),

B̂_a = B_a T(a) [ −[U11]⁻¹ U12(a) ; I ],
β̂_0 = β_0 + B_a T(a) [ [U11]⁻¹ g1(a) ; 0 ]

Likewise, the same process is performed on B_b.

However, sometimes the user may supply more than nz BC's (i.e. n_a + n_b > nz), even though only nz of them are required to specify the BC's for the whole problem. In this case the BC's have to be projected properly; the technique used to perform the projection here has been explained in Section 2.1. B̄_a1 in (4.5) and B̄_b2 in (4.6) should be nonsingular. The BC's at t = a have to control the decreasing modes of z, and the ones at t = b have to control the increasing modes.

By substituting (3.14) into (3.12) (for the endpoints only), we obtain a set of explicit differential equations on z alone,

z' = {U22 − U21 [U11]⁻¹ U12} z + {g2 − U21 [U11]⁻¹ g1}

Therefore in (4.3),

H = U22 − U21 [U11]⁻¹ U12,    h = g2 − U21 [U11]⁻¹ g1

Step 2(d) and Step 2(e) can be performed in a very simple way. Suppose that w(a) = Qᵀ z(a); then (4.4) can be written as

B̂_a w(a) = β̂_0                                                      (4.7)

Let us write B̂_a as

B̂_a := [B̂_a1, B̂_a2]

where B̂_a1 ∈ R^{n_a×k} and B̂_a2 ∈ R^{n_a×(nz−k)}. Assume that B̂_a1 has rank at least k, where k is as defined in Step 2(c). Compute the Gaussian elimination of (4.7) with partial pivoting. Use an array, say PIVOT, to keep track of which rows have been swapped: if, for instance, row i and row j have been swapped, then PIVOT(i) = j and PIVOT(j) = i. (A QR decomposition is not used because it is more expensive.) Let us say that, after the Gaussian elimination, (4.7) becomes

B̃_a w(a) = β̃_0,    B̃_a := [ B̃_a11  B̃_a12 ; B̃_a21  B̃_a22 ],    β̃_0 = [ β̃_01 ; β̃_02 ]      (4.8)

where B̃_a11 ∈ R^{k×k}, B̃_a22 ∈ R^{(n_a−k)×(nz−k)}, β̃_01 ∈ R^k and β̃_02 ∈ R^{n_a−k}. Then B̄_a and β̄_0 in (4.5) can be chosen in the following way: row i of B̄_a is the same as row PIVOT(i) of B̂_a in (4.7), and β̄_0(i) = β̂_0(PIVOT(i)), 1 ≤ i ≤ k (k as defined in Step 2(c)).
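The row-selection step can be sketched as follows. The function below is an illustration of the PIVOT bookkeeping just described, not DAESOLVE's own routine: it runs partial-pivoting elimination on the first k columns and returns the indices of the rows of the original matrix that were chosen as pivots.

```python
import numpy as np

def select_rows(B, k):
    """Pick k row indices of B (via Gaussian elimination with partial
    pivoting on the first k columns) whose leading k x k block is
    nonsingular."""
    A = B.astype(float).copy()
    piv = list(range(A.shape[0]))
    for col in range(k):
        p = col + int(np.argmax(np.abs(A[col:, col])))   # partial pivoting
        A[[col, p]] = A[[p, col]]
        piv[col], piv[p] = piv[p], piv[col]
        for r in range(col + 1, A.shape[0]):             # eliminate below the pivot
            A[r, col:] -= (A[r, col] / A[col, col]) * A[col, col:]
    return piv[:k]

# three candidate BC rows; rows 1 and 2 give a nonsingular leading block
B_hat = np.array([[0.0, 0.0, 1.0],
                  [1.0, 0.0, 0.0],
                  [0.0, 2.0, 0.0]])
rows = select_rows(B_hat, k=2)
B_bar = B_hat[rows]                        # rows of the *original* matrix are kept
assert abs(np.linalg.det(B_bar[:, :2])) > 1e-12
```

Note that, as in the text, the eliminated matrix is discarded and the untouched rows of B̂_a are reassembled, since the elimination itself does not preserve conditioning.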
The rows of B̂_a in (4.7) are used for B̄_a, instead of the rows in B̃_a, because Gaussian elimination does not preserve conditioning. Using the notation of (4.5), we know that B̄_a1 is nonsingular because B̃_a11 is upper triangular with nonzero diagonal. Step 2(f) and 2(h) can be performed in the same way.

4.3 Details of Step 3

This is another key step of the whole algorithm. By studying the ghost problem and rearranging the BC's for y, hopefully large stability constants may be prevented. However, there is a major assumption in this step: that the eigenvalue analysis does represent the kinematic eigenvalue analysis. When we look at the ghost problem (3.15), ideally we should find a transformation Q such that

V = Q⁻¹(−M)Q − Q⁻¹Q'

where V is upper triangular and the diagonal elements are the kinematic eigenvalues. This is the famous Lyapunov equation (see Definition 2.2). But in actual practice we use the eigenvalue analysis (though it may be misleading, as shown in Chapter 2) because it is expensive (or even infeasible) to find the kinematic eigenvalues. The eigenvalues are the same as the kinematic eigenvalues in general only when M (hence Q) is a constant matrix, and they approximate the kinematic eigenvalues well only when the second term of the Lyapunov equation is small compared with the first. In other words, when M(t) has eigenvalues with large real part (the only case where this matters), its variation in t must be slow compared to the size of such eigenvalues; then the eigenvalue analysis approximates the Lyapunov equation well. The decoupling can be done by the transformation Qᵀy.

In the last remark of Chapter 3 we showed that the non-uniqueness of T(t) and S(t) does not affect the kinematic eigenvalues of the ghost ODE. But does the non-uniqueness affect the eigenvalues of M? The answer is positive. Rewriting (3.21) and (3.22) using (3.23), we obtain
V = Q⁻¹(−M)Q − Q⁻¹Q'                                                (4.9)

V = Q̃⁻¹(−M̃)Q̃ − [ Q⁻¹Q' − Q⁻¹ F_T' F_T⁻¹ Q ]                         (4.10)

Of course, if F_T = Q, then the second term of the above equation disappears. Even if ‖Q⁻¹Q'‖ is small, ‖F_T' F_T⁻¹‖ may not be. Similarly, ‖Q⁻¹Q'‖ may be large, but ‖Q⁻¹Q' − Q⁻¹ F_T' F_T⁻¹ Q‖ may be small. One may think of trying to find an F_T such that the second term is small, but it is hard to control this term in an actual implementation.

There are two more practical issues:

1. How does one know that the eigenvalue analysis is accurate enough?
2. If the eigenvalue analysis is insufficient, what should one do?

We attempt to answer the first question, and discuss the second. We can approximate Q and Q' using the following formulas:

V(t) = Q₁ᵀ M(t) Q₁,
V(t + h) = Q₂ᵀ M(t + h) Q₂,
Q ≈ Q₁,    Q' ≈ (Q₂ − Q₁)/h

where Q₁, Q₂ are obtained by the QR algorithm. Substitute Q and Q' into the Lyapunov equation to see if V is close to upper triangular. Of course, if M is a constant matrix, this check works well. In practice, however, it does not always: even if the elements of the strictly lower triangular part of the approximated V are small, this does not guarantee that the eigenvalue analysis is accurate enough. But if they are large, it may be a good indicator that the eigenvalue analysis is not sufficient. Another difficulty is the expense involved: in order to obtain Q₁ we require an eigenvalue analysis of M(t), and in order to obtain Q₂ we require another eigenvalue analysis of M(t + h). Performing an eigenvalue analysis can be very expensive.

The second question is much harder to answer. As far as we know, the eigenvalue analysis is the only tool we have to approximate the kinematic eigenvalues efficiently. One may think of using Riccati-like methods.
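As an aside on the first question, the Q, Q' approximation and the upper-triangularity check can be sketched as follows. This is an illustration only: a symmetric test matrix M(t) is assumed so that `numpy.linalg.eigh` supplies the orthogonal Q (the thesis uses the QR algorithm on a general M), and the eigenvector columns at t and t + h are sign-aligned, echoing the column matching of Section 4.1.

```python
import numpy as np

def M(t):
    # a slowly varying symmetric test matrix (an assumption for this sketch)
    return np.array([[10.0 + t, 1.0],
                     [1.0, -2.0 + 0.1 * t]])

t, h = 0.0, 1e-6
lam1, Q1 = np.linalg.eigh(M(t))
lam2, Q2 = np.linalg.eigh(M(t + h))
for i in range(Q1.shape[1]):               # fix arbitrary eigenvector signs
    if np.dot(Q1[:, i], Q2[:, i]) < 0:
        Q2[:, i] *= -1.0
Qdot = (Q2 - Q1) / h                       # finite-difference approximation of Q'
V = Q1.T @ (-M(t)) @ Q1 - Q1.T @ Qdot      # approximate Lyapunov right-hand side

# the diagonal of V carries (minus) the eigenvalues; a small strictly lower
# part suggests the eigenvalue analysis is trustworthy for this M(t)
assert np.allclose(np.diag(V), -lam1, atol=0.2)
assert np.max(np.abs(np.tril(V, -1))) < 0.5
```

For this M the lower-triangular residue is tiny because the eigenvalue gap is large compared with the variation of M in t, which is exactly the situation described above.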
However, these methods require the user to supply the number of nonincreasing modes and the number of nondecreasing modes, which the user usually does not have. We cannot use the result from the eigenvalue analysis for this purpose, because we do not know whether it is correct. Another problem is that such methods are quite inefficient. On the other hand, if the eigenvalue analysis fails, the worst that can happen is large truncation errors. These can be detected by error estimation in a code, so one can switch to another method if the errors are too large.

This step also includes a check for a dichotomy of the ghost problem (n_a⁻ + n_b⁺ ≤ ny and n_a⁺ + n_b⁻ ≤ ny). If this condition is not satisfied, then it simply means that a dichotomy does not exist, and we know that a necessary condition for the well-conditioning of a BVP is the existence of a dichotomy. The intuitive meaning of these two conditions is that the total number of rapidly increasing modes and rapidly decreasing modes should be no more than ny; if it is more than that, one of the modes changes from one type to the other.

The last part of this step is transforming back to obtain BC on x(a) and x(b). This can be done quite straightforwardly. We have nz BC's from Step 2 and ny BC's from this step. Then we have (e.g. at t = a)

B̄_a [ y(a) ; z(a) ] = β    ⟹    B̄_a T⁻¹(a) x(a) = β

4.4 Details of Step 4

The last step is just solving the problem with the BC's configured.

Chapter 5

Numerical Examples

The algorithm proposed in Chapter 4 has been implemented in a code, DAESOLVE. The code uses the symmetric midpoint scheme. DAESOLVE has been tested on numerous index 1 DAEs, and in this chapter we examine three representative examples. Each example illustrates a certain aspect of the algorithm proposed in Chapter 4. By looking at these three examples, the performance of the algorithm will be demonstrated.
The first two examples illustrate the importance of arranging the boundary conditions for the ghost ODE and of performing the projection. The third one illustrates how the non-uniqueness of the decomposition of E(t) (see the remark at the end of Chapter 3) affects the eigenvalue analysis (as mentioned in Section 4.1 and Section 4.3), and as a result fails to arrange the boundary conditions for the algebraic part of the DAE properly.

For the following examples the code DAESOLVE uses a uniform mesh (h = 1/N) (see (3.7) and (3.8)). The errors computed are the maximum absolute errors. All computations were performed on a Sun 3 running a UNIX f77 compiler in double precision.

5.1 Example A

This example demonstrates that the algorithm works well in a case when the eigenvalue analysis of the ghost matrix M (3.15) approximates the kinematic eigenvalues properly, even if M is not a constant matrix. Using the notation of (1.2), the DAE is as follows:

        [ 0    0    0   ]          [ t(1+2β)+(1−β)   −(1+2β)    0       ]
E(t) =  [ 0    0    0   ],   A(t) = [ t(1−β)−2β       t(1+2β)   −(1+2β)  ],
        [ −t   2   −1/t ]          [ 0                t         −1       ]

q(t) = [ (β−1)e^{−t/β},   t(1+2β) + (2β − (1−β)t)e^{−t/β},   t − 1/t ]ᵀ

where β is a parameter, x1(1) = e^{−1/β}, and 1 ≤ t ≤ 1.1. The exact solution is:

x1 = e^{−t/β},    x2 = t e^{−t/β},    x3 = t² e^{−t/β} + t

The transformation matrices S and T can be chosen as follows: S = I and

     [ 1    −t       0 ]            [ 1    −t    1   ]
T =  [ t    1−t²     1 ],    T⁻¹ =  [ 0    −1    1/t ],    det(T) = −t
     [ t²   2t−t³    t ]            [ −t   2    −1/t ]

Then

y1 = t + e^{−t/β},    y2 = 1,    z = −1

and

     [ −t    (t+1)(1/β − t) + 1 ]
M =  [ 1     1/β − (t+1)        ]

With the following transformation we can find the kinematic eigenvalues of M, V = T̄⁻¹(−M T̄ − T̄'):

      [ t+1   1 ]          [ 1/β    1  ]
T̄ =   [ 1     0 ],    V =  [ 0     −1  ]

Therefore the kinematic eigenvalues of M are 1/β and −1. The stability constant is very large when β is close to 0 and the BC is not properly posed. Actually, the index of this DAE is close to 2 when β is close to 0.
This can easily be seen when we look at U11 of (3.13):

       [ 1−β          −t(1−β) − (1+2β) ]
U11 =  [ t(1−β)−2β    −t²(1−β) − t     ],    det(U11) = −2β(1+2β)

If β = 0, then det(U11) = 0. Note that M(t) and T(t) are not constant. However, a simple computation shows that the eigenvalues of M are

λ_{1,2} = [ 1 ± √(1 + 2β + 8β²t²) ] / (2β)  ⟶  1/β  and  −0.5  as β → 0      (5.1)

The large eigenvalue is very close to the large kinematic eigenvalue. This implies that the algorithm should perform well, and the experimental results show that this is true.

In order to use the numerical scheme, two other BC's are required. Suppose that these two BC's are as follows:

x2(1) = e^{−1/β},    x3(1) = e^{−1/β} + 1

and β is chosen to be −0.01. Table 5.1a shows the numerical results when the DAE is posed as an IVP using the BC's above (i.e. the algorithm proposed in Chapter 4 is not used). Table 5.1b shows the numerical results when the code DAESOLVE is used.

Table 5.1a
h          error1     error2     error3     rate1      rate2      rate3
0.200-01   0.204+04   0.433+04   0.704+04
0.100-01   0.701+03   0.151+04   0.245+04   0.152+01   0.152+01   0.152+01
0.500-02   0.229+03   0.486+03   0.792+03   0.163+01   0.163+01   0.163+01
0.250-02   0.622+02   0.132+03   0.215+03   0.188+01   0.188+01   0.188+01
0.125-02   0.159+02   0.337+02   0.549+02   0.197+01   0.197+01   0.197+01
0.625-03   0.400+01   0.847+01   0.138+02   0.199+01   0.199+01   0.199+01

Table 5.1b
h          error1     error2     error3     rate1      rate2      rate3
0.200-01   0.135+00   0.275+00   0.444+00
0.100-01   0.412-01   0.838-01   0.135+00   0.171+01   0.171+01   0.172+01
0.500-02   0.111-01   0.226-01   0.360-01   0.189+01   0.189+01   0.190+01
0.250-02   0.284-02   0.578-02   0.917-02   0.197+01   0.197+01   0.197+01
0.125-02   0.714-03   0.145-02   0.230-02   0.199+01   0.199+01   0.199+01
0.625-03   0.179-03   0.364-03   0.576-03   0.200+01   0.200+01   0.200+01

Actually, the computed eigenvalues for Table 5.1b are

−101.6 and −0.39 at t = 1
−101.6 and −0.42 at t = 1.1

These values are close to the ones in (5.1). For Table 5.1b, the algorithm allocates two BC's at t = 1 and one at t = 1.1.
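The "rate" columns of the tables are observed convergence orders, rate = log₂(e(2h)/e(h)), and the midpoint scheme is second order. A short sketch recomputing them from the error1 column of Table 5.1b:

```python
import math

# error1 column of Table 5.1b (h is halved between successive rows)
errors = [0.135, 0.0412, 0.0111, 0.00284, 0.000714, 0.000179]
rates = [math.log2(errors[i - 1] / errors[i]) for i in range(1, len(errors))]
print([round(r, 2) for r in rates])        # approaches 2, the order of the scheme

assert abs(rates[-1] - 2.0) < 0.05
assert all(r2 > r1 for r1, r2 in zip(rates, rates[1:]))   # monotone approach to 2
```

The same computation applied to Table 5.1a gives the same orders, but on top of errors that are four to five orders of magnitude larger.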
5.2 Example B

This example demonstrates the importance of projecting the boundary conditions for the differential part of the DAE (i.e. Step 2 of the proposed algorithm). Since the user sometimes provides more boundary conditions than the algorithm requires, the projection algorithm in Section 4.2 must choose an appropriate combination from those available. Using the notation of (1.2), consider

        [ 0            0 ]          [ β    1−β(t+1) ]
E(t) =  [ −(t+1)⁻¹     1 ],   A =   [ 0    1        ],   q(t) = [ 0,  (t+1)⁻¹ − 2β − βt ]ᵀ

with β > 0, 0 ≤ t ≤ 1 and

x1(1) = −2 + 4β                                                     (5.2)

The exact solution is:

x1 = −(t+1) + β(t+1)²,    x2 = β(t+1)

However, the user may think that one boundary condition is not enough, because the BVP has two variables, namely x1 and x2. Therefore the user may also specify one more BC; let us say, for instance, that this BC is

x1(0) = −1 + β                                                      (5.3)

and β is chosen to be 10.

When we look at the proposed algorithm in Chapter 4, we know that Step 3 (see Section 4.3) generates BC's for the algebraic part of the DAE without using any of the given BC's (4.1). However, the algorithm requires the BC for the differential part to be specified. In the above example the dimension of the differential part is one, which implies that only one BC is required for the differential part. But there are two BC's that the algorithm could use. Which one should be chosen?

We consider two cases. The first is that no projection is performed (i.e. Step 2 of the proposed algorithm in Chapter 4 is omitted) and the algorithm randomly picks (5.3) as the BC for the differential part; Table 5.2a shows the experimental results of this case. The second case is that the projection step (see Section 4.2) is performed; Table 5.2b shows the experimental results of this case. One can easily see that the errors recorded in Table 5.2b are much better than those in Table 5.2a.
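The system of this example is small enough to check directly against the stated exact solution. A sketch (the coefficient functions below are restated from the display above, so treat their exact form as an assumption of this illustration; x' is taken by a centered difference):

```python
import numpy as np

beta = 10.0
E = lambda t: np.array([[0.0, 0.0],
                        [-1.0 / (t + 1.0), 1.0]])
A = lambda t: np.array([[beta, 1.0 - beta * (t + 1.0)],
                        [0.0, 1.0]])
q = lambda t: np.array([0.0, 1.0 / (t + 1.0) - 2.0 * beta - beta * t])
x = lambda t: np.array([-(t + 1.0) + beta * (t + 1.0) ** 2,   # exact solution
                        beta * (t + 1.0)])

for t in (0.0, 0.3, 0.7, 1.0):
    h = 1e-6
    xp = (x(t + h) - x(t - h)) / (2.0 * h)     # centered difference for x'
    res = E(t) @ xp - A(t) @ x(t) - q(t)
    assert np.max(np.abs(res)) < 1e-6          # E x' = A x + q holds
```

The first row of the residual is the algebraic constraint (the first row of E is zero), the second is the single differential equation.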
Table 5.2a
h          error1     error2     rate1      rate2
0.500-01   0.555+10   0.294+10
0.250-01   0.177+10   0.933+09   0.165+01   0.166+01
0.125-01   0.506+09   0.267+09   0.180+01   0.181+01
0.625-02   0.132+09   0.694+08   0.194+01   0.194+01

Table 5.2b
h          error1     error2     rate1      rate2
0.500-01   0.108+00   0.561-01
0.250-01   0.299-01   0.152-01   0.185+01   0.189+01
0.125-01   0.772-02   0.388-02   0.195+01   0.197+01
0.625-02   0.195-02   0.975-03   0.199+01   0.199+01

The reason can be seen as follows. Using the transformation matrices

S = I,    T = [ t+1   0 ;  1   1 ],    T⁻¹ = [ (t+1)⁻¹   0 ;  −(t+1)⁻¹   1 ],    det(T) = t + 1

the linear DAE can be reduced to the semi-explicit form

y = [β(t+1) − 1] z,
z' = [1 + (t+1)⁻¹] y + z + q2

where q2 = (t+1)⁻¹ − 2β − βt. Substituting y into the ODE for z,

z' = (1 − z) q2                                                     (5.4)

If β > 1/2, then (5.4) has a growing mode, and therefore the boundary condition for the differential part should be specified at t = 1. This explains the poor results in Table 5.2a. The results in Table 5.2b are much better because the projection algorithm studies the sign and magnitude of q2 in (5.4), and decides to put the boundary condition of the differential part at t = 1.

It should be noted that the phenomenon demonstrated in this example has little to do with the use of the midpoint scheme. Rather, it points out the general danger present when randomly picking boundary conditions in a consistent, overdetermined problem, regardless of the numerical discretization.

5.3 Example C

This example demonstrates that the eigenvalue analysis for the ghost ODE can be sensitive to the choice of T. Since T is not unique, a different T may generate a different ghost problem, and hence produce different eigenvalues (see Section 4.3 for details). Using the notation of (1.2), consider
        [ −t   2   −1/t ]          [ βt       −β           0        ]
E(t) =  [ 0    0    0   ],   A(t) = [ 2βt−2    −(α²+β²)t    (α²+β²)  ],
        [ 0    0    0   ]          [ 0        −2β          0        ]

q(t) = [ −2α,   −α(α²+β²)t² − 2α(βt−1)e^{−t},   2αβt e^{−t} ]ᵀ

with β < 0, α > 0, 1 ≤ t ≤ 2, and x1(1) = αe^{−1}. The exact solution is:

x1 = αe^{−t},    x2 = αt e^{−t},    x3 = αt²(1 + e^{−t})

If we choose the following S and T,

     [ α     β        0 ]         [ 0 0 1 ]           [ 1/α   −β/α   β/(αt) ]
T =  [ αt    βt−1     1 ],   S =  [ 0 1 0 ],   T⁻¹ =  [ 0     1      −1/t   ],   det(T) = αt
     [ αt²   βt²−2t   t ]         [ 1 0 0 ]           [ −t    2      −1/t   ]

then

y1 = βt + e^{−t},    y2 = −αt,    z = −αt,    M = [ β   α ; −α   β ]

Since M is a constant matrix, the kinematic eigenvalues are the same as the eigenvalues, namely β ± αi. Because β is negative, the two boundary conditions for the algebraic part y should be put at t = 2, and because α > 0, the boundary condition for the differential part z should be put at t = 1. That is, there should be one boundary condition at the left end point and two boundary conditions at the right end point.

However, the matrix function M of the ghost problem can be quite different if another T is chosen. For instance, using the above S with

          [ 1     0     0  ]                [ 1    0    0   ]
T_new =   [ 0     1     0  ],    T_new⁻¹ =  [ 0    1    0   ],
          [ −t²   2t   −t  ]                [ −t   2   −1/t ]

then

          [ (α²+β²)t           −(α²+β²)        ]
M_new =   [ (α²+β²)t² − 2βt    −(α²+β²)t + 2β  ]

and the eigenvalues of M_new are 0 and 2β. But we know that the kinematic eigenvalues are β ± αi, so the eigenvalues are very different from the kinematic eigenvalues. This implies that the algorithm only performs well when T is properly chosen. The following demonstrates this phenomenon.

In order to use the numerical scheme, two other BC's are required. Suppose that these two BC's are as follows:

x2(1) = αe^{−1},    x3(1) = α(1 + e^{−1})

Table 5.3a shows the numerical results of running DAESOLVE with β = −5 and α = 15. The kinematic eigenvalues are then −5 ± 15i. The eigenvalues computed are

−5.5 + 4.7i and −5.5 − 4.7i at t = 1
−5.4 + 7.0i and −5.4 − 7.0i at t = 2

The real parts of the eigenvalues are close to the real parts of the kinematic eigenvalues, which explains the good results of Table 5.3a.
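The contrast between the two ghost matrices can be verified numerically; a small sketch (M and M_new restated from the displays above, with β = −10, α = 6 as in the second run below):

```python
import numpy as np

alpha, beta = 6.0, -10.0
s = alpha ** 2 + beta ** 2

# ghost matrix from the well-chosen T: constant, eigenvalues beta +/- alpha*i
M = np.array([[beta, alpha],
              [-alpha, beta]])
ev = np.sort_complex(np.linalg.eigvals(M))
assert np.allclose(ev, [beta - 1j * alpha, beta + 1j * alpha])

# ghost matrix from T_new: eigenvalues 0 and 2*beta, for every t
for t in (1.0, 1.5, 2.0):
    Mnew = np.array([[s * t, -s],
                     [s * t ** 2 - 2 * beta * t, -s * t + 2 * beta]])
    ev2 = np.sort(np.linalg.eigvals(Mnew).real)
    assert np.allclose(ev2, [2 * beta, 0.0], atol=1e-6)
```

That M_new has trace 2β and determinant 0 for every t makes the eigenvalues 0 and 2β exactly, even though the kinematic eigenvalues are β ± αi.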
Table 5.3b shows the numerical results of running DAESOLVE with β = −10 and α = 6. The kinematic eigenvalues are then −10 ± 6i, but the eigenvalues computed are

−19.4 and −1.6 at t = 1
−18.2 and −2.5 at t = 2

The eigenvalues are very different from the kinematic eigenvalues. Assuming that the eigenvalues −1.6 and −2.5 are not considered large, the algorithm proposed in Chapter 4 will set two boundary conditions at t = 1 and one at t = 2. However, we know that there should be one boundary condition at t = 1 and two at t = 2. This explains the poor results of Table 5.3b.

Table 5.3a
h          error1     error2     error3     rate1      rate2      rate3
0.500-01   0.131+01   0.194+01   0.396+01
0.250-01   0.385+00   0.626+00   0.128+01   0.177+01   0.163+01   0.163+01
0.125-01   0.996-01   0.167+00   0.341+00   0.195+01   0.191+01   0.191+01
0.625-02   0.251-01   0.424-01   0.866-01   0.199+01   0.198+01   0.198+01
0.313-02   0.209-02   0.361-02   0.753-02   0.200+01   0.194+01   0.194+01

Table 5.3b
h          error1     error2     error3     rate1      rate2      rate3
0.500-01   0.149+03   0.281+03   0.529+03
0.250-01   0.553+02   0.105+03   0.200+03   0.143+01   0.142+01   0.140+01
0.125-01   0.282+02   0.539+02   0.103+03   0.970+00   0.964+00   0.957+00
0.625-02   0.934+01   0.179+02   0.342+02   0.159+01   0.159+01   0.159+01
0.313-02   0.253+01   0.485+01   0.928+01   0.188+01   0.188+01   0.188+01

Though this example only shows the eigenvalue analysis failing to approximate the kinematic eigenvalues of the ghost matrix M, a similar failure is also expected when using the eigenvalues to approximate the kinematic eigenvalues in the process of projection (see Section 4.2).

Chapter 6

Conclusion and Future Research

In this thesis we have presented an algorithm to solve linear boundary value problems for DAEs of index 1 with separated boundary conditions. The algorithm arranges the boundary conditions for the algebraic part and rearranges the boundary conditions for the differential part of the DAE.
By doing so, we hope that the conditioning constants of the ODEs (from the differential part and from the additional ghost ODE generated by the symmetric discretization of the algebraic part) will be of moderate size. These features have been implemented in a code, DAESOLVE.

DAESOLVE has been tested on numerous examples, and the results have been compared with a code which chooses the BC arbitrarily. The experimental results are promising: DAESOLVE outperforms the other code in most cases, and performs as well as it in the remaining cases.

However, DAESOLVE has two weaknesses which may sometimes cause it to fail to perform better than with arbitrary BC. The first is that its performance is critically affected by the ability of the eigenvalues to approximate the kinematic eigenvalues; it is well known in the BVP literature that this approximation may fail in some pathological cases. The second, which is more serious, comes from the non-uniqueness of the decomposition of E(t). The non-uniqueness of T(t) creates an important problem: it may affect the eigenvalue analysis (see Section 4.3). In theory, the non-uniqueness of T(t) could have an unpredictable effect on the eigenvalue analysis (cf. (4.10)): the magnitude of ‖Q⁻¹Q' − Q⁻¹F_T'F_T⁻¹Q‖ could be either large or small depending on F_T. The problem lies in this very uncertainty, which is hard to control in practice.

Like most works, this thesis suggests more problems than it solves. The most obvious are the ones mentioned above. A simple extension is to allow DAESOLVE to handle non-separated boundary conditions. The whole algorithm proposed in Chapter 4, except for the step of projection, could be applied to the case of non-separated boundary conditions; it is a non-trivial question to ask how non-separated boundary conditions could be projected.
Ascher in [1] has suggested one way to do so, but it cannot be used easily in practice. Last, but not least, is to apply the algorithm proposed in this thesis to non-linear problems.

Bibliography

[1] U. Ascher, "On numerical differential algebraic problems with application to semiconductor device simulation," to appear in SIAM J. Numer. Anal., 1989.

[2] U. Ascher, "Symmetric schemes and differential-algebraic equations," to appear in SIAM J. Sci. Stat. Comput., 1989.

[3] U. Ascher, R. Mattheij & R.D. Russell, "Numerical solution of boundary value problems for ordinary differential equations," Prentice-Hall, 1988.

[4] K.E. Brenan & B.E. Engquist, "Backward differentiation approximations of nonlinear differential/algebraic systems," revised manuscript, 1987.

[5] S. Campbell, "The numerical solutions of higher index linear time varying singular systems of differential equations," SIAM J. Sci. Stat. Comput., v. 6, pp. 334-347, 1985.

[6] F. Gao, private communication.

[7] C.W. Gear, "Simultaneous numerical solution of differential-algebraic equations," IEEE Trans. Circuit Theory, CT-18, 1971, pp. 89-95.

[8] C.W. Gear, "Differential-algebraic equation index transformation," SIAM J. Sci. Stat. Comput., 1989.

[9] C.W. Gear, G.K. Gupta & B. Leimkuhler, "Automatic integration of Euler-Lagrange equations with constraints," J. Computational & Applied Math., 12-13, pp. 77-90, 1985.

[10] C.W. Gear, H.H. Hsu & L. Petzold, "Differential/algebraic equations revisited," Proc. Numerical Methods for Solving Stiff IVPs, Oberwolfach, W. Germany, June 28 - July 4, 1981.

[11] C.W. Gear & L.R. Petzold, "ODE methods for the solution of differential/algebraic systems," SIAM J. Numer. Anal., v. 21, 1984, pp. 716-728.

[12] E. Griepentrog & R. März, "Differential-algebraic equations and their numerical treatment," Teubner-Texte zur Mathematik Band 88, Leipzig, 1986.

[13] A.
Kvaerno, "Order conditions for Runge-Kutta methods applied to differential/algebraic systems of index 1," Math. of Computation, 1987.

[14] B.J. Leimkuhler, "Approximation methods for the consistent initialization of differential-algebraic equations," Ph.D. Thesis, Report No. 88-1450, Aug. 1988, U. of Illinois at Urbana-Champaign, Dept. of Computer Science.

[15] R. März, "On difference and shooting methods for boundary value problems in differential-algebraic equations," ZAMM, 11, pp. 463-473, 1984.

[16] L.R. Petzold, "A description of DASSL: a differential/algebraic system solver," Proceedings of IMACS World Congress, Montreal, Canada, 1982.

[17] L.R. Petzold, "Differential/algebraic equations are not ODE's," SIAM J. Sci. Stat. Comput., v. 3, 1982, pp. 367-384.

[18] L.R. Petzold, "Order results for implicit Runge-Kutta methods applied to differential/algebraic systems," SIAM J. Numer. Anal., v. 4, 1986, pp. 837-852.

[19] L.R. Petzold & P. Lotstedt, "Numerical solution of nonlinear differential equations with algebraic constraints II: practical implications," SIAM J. Sci. Stat. Comput., v. 7, 1986, pp. 720-733.

[20] R.F. Sincovec, B. Dembart, M.A. Erisman, A.M. Erisman, J.W. Manke & E.L. Yip, "Solvability of large scale descriptor systems," Boeing Computer Services Company report, BOE Contract No. ET-78-C-01-2876.
