"Science, Faculty of"@en . "Computer Science, Department of"@en . "DSpace"@en . "UBCV"@en . "Chan, Pat S. Y."@en . "2010-08-16T03:22:22Z"@en . "1989"@en . "Master of Science - MSc"@en . "University of British Columbia"@en . "In this thesis we consider the numerical solution of boundary value problems for differential algebraic equations (DAEs) of index 1. It was shown in Ascher [1], [2], that if symmetric schemes are used for linear index 1 DAEs, the stability is controlled by the stability of an auxiliary (ghost) differential problem which is not necessarily stable even if the given problem is. We examine an algorithm which preserves the stability of the ghost problem by arranging the auxiliary boundary conditions properly. The algorithm has been implemented in a code (DAESOLVE) and tested on numerous examples. This code has been compared with a code which chooses the boundary conditions arbitrarily, showing favourable results."@en . "https://circle.library.ubc.ca/rest/handle/2429/27404?expand=metadata"@en . "AN ALGORITHM FOR SOLVING INDEX 1 DIFFERENTIAL ALGEBRAIC EQUATIONS IN BOUNDARY VALUE PROBLEMS By PAT S.Y. CHAN B.Sc, University of Toronto, Canada, 1987 A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF SCIENCE in THE FACULTY OF GRADUATE STUDIES (DEPARTMENT OF COMPUTER SCIENCE) We accept this thesis as conforming to the required standard THE UNIVERSITY OF BRITISH COLUMBIA August 1989 \u00C2\u00A9 Pat S.Y. Chan, 1989 In presenting this thesis in partial fulfilment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the head of my department or by his or her representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission. The University of British Columbia Vancouver, Canada Department DE-6 (2/88) Abstract In this thesis we consider the numerical solution of boundary value problems for differential algebraic equations (DAEs) of index 1. It was shown in Ascher [1], [2], that if symmetric schemes are used for linear index 1 DAEs, the stability is controlled by the stability of an auxiliary (ghost) differential problem which is not necessarily stable even if the given problem is. We examine an algorithm which preserves the stability of the ghost problem by arranging the auxiliary boundary conditions properly. The algorithm has been implemented in a code (DAESOLVE) and tested on numerous examples. This code has been compared with a code which chooses the boundary conditions arbitrarily, showing favourable results. Contents Abstract i Contents ii Acknowledgement iii 1 Introduction 1 2 Background Material For BVP 6 2.1 Projecting Boundary Conditions 13 3 Background Material For DAE 17 3.1 Classification of DAEs 17 3.2 Symmetric Schemes for DAEs in BVPs 20 3.2.1 Analysis 23 4 Algorithm (linear index 1 DAEs in BVPs) 28 4.1 Details of Step 1 30 4.2 Details of Step 2 33 4.3 Details of Step 3 35 4.4 Details of Step 4 39 5 Numerical Examples 40 5.1 Example A 41 5.2 Example B 43 5.3 Example C 46 6 Conclusion and Future Research 51 Bibliography 53 ii Acknowledgement First, I would like to extend my special thanks to my supervisor Dr. Uri Ascher for suggesting my thesis topic, for the many helpful \"remote\" e-mail discussions, and for the financial support. 
Thanks also go to Dr. Feng Gao who acts as the second reader of my thesis. Thanks to the all of my friends in the department and in my church. Last but not the least, my deepest thanks go to my parents, brother, sisters, brother-in-law, and sister-in-law, for their endless help, support and love without which none of this would have been possible. I dedicate this thesis to my whole family. iii Chapter 1 Introduction In recent years there has been a lot of interest in numerical treatment of differential algebraic equations (DAEs). DAEs occur in some modelling problems in which the state variables (representing the state of the system) are related by both differential equations and algebraic constraints (e.g. the simulation of electrical networks, the solution of certain equations in fluid dynamics, and the simulation of mechanical systems). They are often characterized by their index (see discussion in Section 3.1). In this thesis, we present and study an algorithm used to solve linear boundary value problems (BVPs) for DAEs of (global) index 1. In general, a system of ordinary differential equations (ODE) can be written as F(x,x',t) = 0 (1.1) a s (2.9) CHAPTER 2. BACKGROUND MATERIAL FOR BVP 9 | \Y {^PY-1 (a) 11 < Ke-^->) fors oo, then the rank of the projection matrix must be m. Proof: (see [3], Chapter 3, Theorem 107.). This theorem implies that the number of BC's at t = a must be at least the number of rapidly decreasing modes. Likewise the number of BC's at t = b must be at least the number of rapidly increasing modes. One of the crucial steps in the proposed algorithm in Chapter 4 is based on this theorem, especially (b) and (c). However it is impractical to use Definition (2.1) to check for dichotomy and then conclude well-conditioning, since we have to compute a fundamental solution. It is natural to ask if we can characterize the ideas of dichotomy and stability involving A(t). Consider the ODE (2.8). Suppose that there exists a transformation matrix Q(t), Q(t) is differentiable and ||Q(0IIII2-1(S)II 1S uniformly bounded for t > s, such that V(t) is upper-triangular, where (2.13) CHAPTER 2. BACKGROUND MATERIAL FOR BVP 11 Let w(t) = Q-1(t)y(t) (2.14) Substituting (2.14) into (2.8) (note: y' = Q'(t)w(t) + Q(t)w'(t)), we obtain a new ODE w' = V(t)w (2.15) Let W(t) be the fundamental solution for (2.15) satisfying W(a) = I Wn W12 \" W:= \u00E2\u0080\u009Er 0 1 ;;r00 (2.16) where Wn \u00E2\u0082\u00AC KpXp, Wu \u00E2\u0082\u00AC Kpxn, W21 G n ^ x p , and W22 \u00E2\u0082\u00AC Then one can show that W(t) = Q~lY(t) and W(t) is block upper-triangular, i.e. W21 = 0. Moreover, both Wu and VK22 are nonsingular. Definition 2.2 (Kinematic Eigenvalues) The system (2.8) and (2.15) are kinemat-ically similar. The diagonal elements ofV(t) are called the kinematic eigenvalues corresponding to Q(t). Equation (2.13) is the famous Lyapunov equation. The following theorem relates kine-matic eigenvalues with dichotomy. Theorem 2.2 Consider the ODE (2.8). Suppose that there exists a transformation matrix Q(t) satisfying (2.13), (2.14) and (2.15). Denote the kinematic eigenvalues by Ai, A 2 , . . . , A n . Then (2.8) has a dichotomy iff there exist positive constants c and A such CHAPTER 2. BACKGROUND MATERIAL FOR BVP 12 that for p kinematic eigenvalues ? Xi{r)dr) < -X(t - s) ift-s> c (2.17) and for the other n \u00E2\u0080\u0094 p kinematic eigenvalues Mil A,-(r)rfr) >X(t-s) ift-s>c (2.18) Proof: (see [3] , Chapter 3, Theorem 101). 
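This characterization can be checked numerically when a candidate transformation Q(t) is available. The following sketch (Python; not part of the thesis code, and the function names, the central-difference step h, and the tolerance are illustrative assumptions) forms V(t) = Q^{-1}(t)(A(t)Q(t) - Q'(t)) with a finite-difference approximation of Q', reads the kinematic eigenvalues off the diagonal, and counts the nonincreasing and nondecreasing modes; the size of the strictly lower triangle of V indicates how well Q actually decouples the system. This is essentially the check revisited in Section 4.3, where Q and Q' are approximated at t and t + h.

```python
# Minimal sketch (assumed interfaces, not the thesis code): estimate kinematic
# eigenvalues from a candidate transformation Q(t).  With w = Q^{-1}(t) y, the
# ODE y' = A(t) y becomes w' = V(t) w, where V = Q^{-1}(A Q - Q').
import numpy as np

def kinematic_eigenvalues(A, Q, t, h=1e-6):
    """Return diag(V(t)) and the norm of the strictly lower triangle of V(t).

    A, Q : callables returning n x n arrays at time t (Q assumed invertible).
    h    : step for the central-difference approximation of Q'(t).
    """
    Qt = Q(t)
    dQ = (Q(t + h) - Q(t - h)) / (2.0 * h)      # finite-difference Q'
    V = np.linalg.solve(Qt, A(t) @ Qt - dQ)     # V = Q^{-1}(A Q - Q')
    lower = np.linalg.norm(np.tril(V, k=-1))    # ~0 if Q really decouples the modes
    return np.diag(V), lower

def dichotomy_split(kin_eigs, tol=0.0):
    """Count nonincreasing (Re < -tol) and nondecreasing (Re > tol) modes."""
    re = np.real(kin_eigs)
    return int(np.sum(re < -tol)), int(np.sum(re > tol))
```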
Equations (2.17) and (2.18) imply that there are p nonincreasing modes and n \u00E2\u0080\u0094 p non-decreasing modes. The transformation matrix Q(t) actually decouples the system (2.8) into two modes, namely nonincreasing modes and nondecreasing modes. The diagonal elements of V(t) (i.e. the kinematic values) can be used to determine if the system has a dichotomy. Let where V(t) is upper triangular, and its diagonal elements are the eigenvalues of A(t). A simple observation on (2.13) is that Q'(t) = 0 if A(t) is a constant matrix. The eigenvalues are the same in this case as the kinematic eigenvalues. One could then ask if it is always possible to use eigenvalues to approximate the kinematic eigenvalues for a general A(t). The answer is negative. We demonstrate this by a simple example. Example 2.1 V(t) = QT(t)A(t)Q(t) (2.19) A(t) = /3cos(2ai) /3sm(2at) + a P sin(2oi) - a -/? cos(2ai) (2.20) CHAPTER 2. BACKGROUND MATERIAL FOR BVP 13 With the following transformation matrix Q(t) we obtain the kinematic eigenvalues cos(otf) sin(crf) \u00E2\u0080\u0094 sin(crf) cos(at) V = P 0 0 -0 (2.21) However a straightforward computation shows that the eigenvalues of (2.20) are \u00C2\u00B1 - c? (2.22) The eigenvalues are only close to the kinematic eigenvalues when \a\ is small compared to \(3\. The example is pathological because a large \a\ indicates that modes rotate fast. This is rare in practice. It should be noted that often it is the number of nonincreasing modes and the number of nondecreasing modes we are interested in, and not the exact magnitude. Even though eigenvalue analysis may fail, it is the only simple tool we have to approximate the kinematic eigenvalues. Besides if A(t) does not change rapidly it is often good enough. 2.1 Projecting Boundary Conditions In this section we make use of the material presented in this chapter to answer a question we face in Chapter 4 (when performing the projection in Section 4.2). The question is as follows: Consider the BVP (2.1) with separated BC (2.12). Let V, Q, W and w be defined as in (2.13), (2.14), (2.15) and (2.16). Assuming that the ODE has a dichotomy, there are n~ nonincreasing modes and n+ nondecreasing modes (n~ + n+ = n). Furthermore, CHAPTER 2. BACKGROUND MATERIAL FOR BVP 14 the real parts of kinematic eigenvalues of V are arranged from large negative to large positive. Suppose that we are given na BC's at t = a and nb BC's at t = b, with na > n~, nb > n+ and na + nb > n. The BC are consistent, i.e. there is a unique solution. However the numerical algorithm used only requires n BC's. We have more BC's than we need. How should we choose n BC's from these na + nb BC's so as to obtain a well-conditioned BVP if possible? From Theorem 2.1, we know that there should be n~ BC's at t = a and n+ BC's at t = b. It implies that n~ BC's and n+ BC's should be chosen from Bai and from B^, respectively (cf. (2.12)). Denote these n~ BC's at t = a and n+ BC's at t = b as Baly(a) = i% Bb2y(b) = p1 (2.23) Then BaXQ-\a)y{a) = k Bb2 Q\"1 (6)2/(6) = p\ . . Bal:=BalQ(a)=:[BlaUB2al} Bh2 := Bb2Q(b) =: [B\2, Bfa { ' ) where Bal \u00E2\u0082\u00AC ftn\"Xn, Bb2 6 7 ln + X n, B\x \u00E2\u0082\u00AC Tln~Xn~, and B\2 \u00E2\u0082\u00AC 7T+ X n +. Under what conditions will the BVP (2.1), (2.23) be well-conditioned? Let P be the projection matrix defined as in Theorem 2.1. By part (b) of Theorem 2.1, we must have rang e(Bj1)x n range(Y(a)P) = {0}. Let the vector z \u00E2\u0082\u00AC range(Y(a)P), i.e. z = Y(a)[c,0]T for some c G H.n~. 
If also z \u00C2\u00A3 rangelB^)1-, then for any x \u00E2\u0082\u00AC Hn~ xTBalY(a)[c,0]T = 0 CHAPTER 2. BACKGROUND MATERIAL FOR BVP 15 So using W(t) = Q-\t)Y(t) xTBalQ-1(a)Y(a)[c,0]T = 0 *r[^i,^Wa)[c,0]r = 0 xTB\lWl\a)c = 0 \iB\x is nonsingular, then B\lWn(a) is also nonsingular, since Wn is nonsingular. This implies that c = 0. It means that there does not exist a z ^ 0 such that z G range(Y(a)P) and also z G range[B^L. Hence part (b) of Theorem 2.1 is satisfied. Therefore among the na BC's given by the user at t = a, we should choose a combination of n~ of them such that B\x is nonsingular. By part (c) of Theorem 2.1, we also must have that range{Bj2)L D range(Y(b)(I \u00E2\u0080\u0094 P)) = {0}. Using the same technique, let the vector z G range{Y(b){I \u00E2\u0080\u0094 P)), i.e. z = Y(b)[0, c]T for some c G TZn+. If also z \u00C2\u00A3 range^B^)1, then for any x G lZn* xTBb2Y(b)[0,c]T = 0 So using W{t) = Q-ityYit) xTBb2Q-1(b)Y(a)[Q,c]T = 0 xT[%2,Bl2]W{b)[Q,cf = 0 xT(Bl2W12(b) + B2b2W22(b))c = 0 CHAPTER 2. BACKGROUND MATERIAL FOR BVP 16 If B22 is nonsingular and B]2 \u00E2\u0080\u0094 0, then (B\2W12(b) + B^2W22(b)) is nonsingular, since W22 is nonsingular. This implies that c = 0. Hence part (c) of Theorem 2.1 is satisfied. However, the nonsingularity of B22 in itself is not sufficient to guarantee that (Bl2W12(b)+ B22W22(b)) is nonsingular (so it is a bit different from the previous case). In practice, one may replace Q\"1 by Q'1 (see (2.19)). The n~ BC's at t = a and the n+ BC's at t = b are chosen such that B^ and B22 are nonsingular. Of course, B^ and B22 should not be ill-conditioned too. Even though these two conditions may not guarantee that part (c) of Theorem 2.1 holds, it is often good enough in practice. Chapter 3 Background Material For D A E In this chapter we review some fundamental concepts of DAE. In Section 3.1 we discuss the classification of DAEs. In Section 3.2 we consider the use of symmetric schemes for linear index 1 DAE. It will be shown that the stability of the discretization is controlled by a ghost problem. 3.1 Classification of DAEs Not all DAEs of the form (1.1) can be solved. Some of them may have no solution or infinitely many solutions. (Of course, it depends on the BC too). Actually it is not reasonable to try to directly solve those DAEs with either no solution or infinitely many solutions numerically. Therefore we need a definition to classify the solvability of DAEs. If a DAE is solvable, we further categorize it based on its structure. The latter is the notion of index. The following definition is due to Campbell, [14] 17 CHAPTER 3. BACKGROUND MATERIAL FOR DAE 18 Definition 3.1 (Solvability) The DAE (1.1) is solvable on a subset ft of1ln+l if there is an r-parameter family of solutions x(t,c) (defined for t \u00E2\u0082\u00AC H, c \u00E2\u0082\u00AC W) such that (i) (t,x(t,c)) 6 ft, and (ii) if x(t) is any solution whose graph lies in ft, then x = x(t,c) for some c, and (iii) the graph x(t,c) is an (r-hl)-dimensional manifold. We generally take ft = [t0,T] x Tln. The definition guarantees that there are solutions, and that they are uniquely defined by their value at t = t0. Having defined solvability of a DAE, we could further categorize its structure with a quantity called its index. This notion of index has been defined in many ways, for a local index and a global index. We are primarily interested in the global index in this thesis and often we just write \"index\" for \"global index\". The definition given below is due to Gear, [8]. 
Definition 3.2 (Index) The global index, m, of a solvable DAE (1.1) with F sufficiently smooth is defined as follows: m = 0: if J(t) (as defined in (1.3)) is nonsingular. m > 0.' otherwise, consider the system of equations F{x,x',t) = 0 dF dF (2, dF , OF n at ox' ox ot 02F 8F {3) dt2 dx CHAPTER 3. BACKGROUND MATERIAL FOR DAE 19 daF OF f , + 1 v at' dx> + ~u as a system of equations with dependent variables x',x^2\ ... , x(4+1) and with independent variables x and t. The global index, m, is defined as the smallest s such that it is possible to solve for x' in terms of x and t. It may suffice to give some simple examples in here. Example 3.1 Any explicit ODE is a DAE with index 0. Example 3.2 Example (1.1) is an index 3 DAE. x\ = g x[ \u00E2\u0080\u0094 X? x'2 \u00E2\u0080\u0094 X 3 \u00C2\u00ABx- j \u00E2\u0080\u0094 \j \ \u00E2\u0080\u0094 2 \u00E2\u0080\u0094 3 43) = 9(3) It implies that [x'^ x^ , x'3] = [x2,x3,p^]. Since three steps of differentiation are required, it is of index 3. It should be mentioned that, in principle, it is possible to use analytical differentiation techniques to rewrite a DAE from a higher index to a lower index. Intuitively, it is that by differentiating the algebraic equations, we can solve for x' in terms of x and t. However, each differentiation will introduce additional constants of integration. Gear [8] discusses CHAPTER 3. BACKGROUND MATERIAL FOR DAE 20 an interesting algorithm for reducing higher index problems to lower index ones without introducing any constants of integration. In practice, such a reduction may not be feasible. But it helps to understand the underlying analytical structure of the problem. Sometimes it may be useful to look at some special forms of (1.1). A DAE is in semi-explicit form if x\ = f(x1,x2,t) (3.1) 0 = g(xux2it) (3.2) It is in \"triangular\" form if x'i = f{xi,x3,t) (3.3) 0 = g(xltt) (3.4) It is in linear time-varying form if it can be written as (1.2). When both E[t) and A(t) of (1.2) are constant matrices, then we have a linear constant coefficient form. Ex' = Ax + f(t) (3.5) 3.2 Symmetric Schemes for DAEs in BVPs In this section we study in detail the stability issues arising when using symmetric schemes for DAEs in BVPs. Ascher in [1] and [2] shows that there is a potential risk in using sym-metric schemes on general linear index 1 DAEs (1.2). If the DAE is not in semi-explicit CHAPTER 3. BACKGROUND MATERIAL FOR DAE 21 form, then the stability of the discretization is controlled by a ghost problem. Therefore the stability constant of the discretization method depends on the conditioning constant of this ghost problem. For the numerical solution of index 1 IVPs, there is a better alternative, namely BDF methods. However symmetric schemes are more appropriate for BVPs, since a well-conditioned problem may have both fast nonincreasing and fast nondecreasing modes. Symmetric schemes preserve dichotomy in a weak form (see [3], Ch.3,10) and BDF methods do not. Let us study a simple example to show the potential danger of using symmetric schemes. Example 3.3 Consider a general linear DAE of the form (1.2) where E{t) = 0 0 - X t A(t) = ~P Pt + 1 1 - ( t + 1 ) cos(t) 0 (3.6) where (3 is a parameter, 0 < t < 1, and xi(Q) = \u00E2\u0080\u00941. 
The exact solution is xx = -(1 + f3t)e-* - tcos{t) x2 = -Pe~l - cos(t) We apply the midpoint scheme E ( t i + 1 / 2 ) ^ - = A(ti+1/3)3*?a- + q(ti+1/2), l nz then (a) no projection is performed, and just pick arbitrary BC's for t = a, and nz \u00E2\u0080\u0094 BC's for t = 6; else (b) write the first part of (4.2) as follows: (c) set fc:=max(riQ ,n^); (d) pick k linearly independent BC's from Ba, denote these BC's as Ba 8aQTz(a) = 0o . v fl.:=[fl.i,fi\u00C2\u00AB] ( 4 - 5 ) where Bal \u00C2\u00A3 Tlkxk, and 5O 2 \u00E2\u0082\u00AC \u00E2\u0080\u00A2 R k x ( \" - - k ) (e) set the BC (4.5) at t = o; (f) perform the same process for pick n2 \u00E2\u0080\u0094 k linearly independent BC's from Bb, denote these BC's as Bb, BbQTz(b) = p1 Bb:=[BbuBb2] (4-6) where Bbl 6 ft(\".-*)x\ and Bb2 g ft(n'-fe)x(\"\u00C2\u00AB-*) (h) set the BC (4.6) at t = b; End if CHAPTER 4. ALGORITHM (LINEAR INDEX 1 DAES IN BVPS) 30 Step 3: Form the matrices M(a) and M(b) by (3.16), and analyze the eigenvalues using the QR algorithm. Let V(t) = QT(t)M(t)Q(t) at t = a, b where Q is orthogonal and V is upper triangular with the eigenvalues arranged in increasing order of real parts, from large negative to large positive. Let ria (uj) be the number of eigenvalues of M{a) which have a large negative (positive) real part, and similarly define n\u00C2\u00A3\", njj\" for M(6). If no + n J > ny or n\u00C2\u00A3 + > nv then (a) another method (e.g. transforming explicitly to (3.11), (3.12) everywhere first) should be used; (because there may not be a dichotomy for y) else (b) set k:=max(rio ,n^); (c) set the last nv \u00E2\u0080\u0094 k components of QTy at t = a according to (3.14); (d) set the first k components of QTy at t \u00E2\u0080\u0094 b according to (3.14); (e) transform back to obtain BC for x(a) and x(b). end if Step 4: Solve the discretized equations (3.7) with the obtained BC from Step 2 and Step 3. This completes specification of the BC for the application of the midpoint scheme (3.7). 4.1 Details of Step 1 Step 1 is performed by the singular value decomposition (SVD). Let E(a) = UY>VT, where S is a diagonal matrix and its elements are the singular values of E(a). De-note these singular values, which are above a threshold t0 > 0, by o~i, cr2, \u00E2\u0080\u00A2 \u00E2\u0080\u00A2 \u00E2\u0080\u00A2 ? \u00C2\u00B0n,- In general, nz ^ n and nz is the dimension of the differential part of the DAE. Then set S(a) = t7S and T~x = VT, where S is the n x n diagonal matrix with diagonal elements [oi, 0\"2, . . . , crnz, 1,..., 1]. Likewise perform the same steps for E(b). This is a key step of the whole algorithm, and therefore has to be carried out correctly. We have remarked at the end of Chapter 3 that this decomposition is not unique (cf. (3.17)). Even though the non-uniqueness will not affect the kinematic eigenvalues, CHAPTER 4. ALGORITHM (LINEAR INDEX 1 DAES IN BVPS) 31 it affects the eigenvalues in the actual implementation. We will discuss this in detail in Section 4.3. The derivative of T (i.e. 7\") is approximated (e.g. at t = a) by T'(a) = T { a + k ) ~ T { a ) h T(a + h) and T(a) are obtained by applying SVD to E(a + h) and E(a) respectively. Of course, h has to be small enough. In the actual implementation, it is chosen to be the square root of the machine epsilon. But we know that T is not unique, and therefore the two transformations computed at a and a + h may not belong to the same T(x). 
That is, performing SVD at E(a) obtains Ti(a) and performing SVD at E(a + h) obtains T2(a + h), where T\(x) may be different from T2{x). It does not make sense to approximate T' using the above formula if T\(x) ^ T2(x). Even though h is chosen to be very small in the actual implementation, the T computed is still not unique up to the sign and permutation of the first ny columns of T. It is because when we look at (3.9), 0 0 0 / T-1 = 71.11 71.12 inv inv 7-121 7122 inv inv 0 0 0 I 0 0 7121 7122 inv inv One can easily see that E in (3.9) does not depend on the first ny rows of T-1, but the last n \u00E2\u0080\u0094 ny rows of it are unique. It implies that the first ny columns of T computed are not unique (since the T computed is orthogonal, by SVD). This problem is resolved by comparing and rearranging the first ny columns of Ti{a) and T2(a + h) so that Ti(x) is the same as T2(x). Given the ith column of T\(a), CHAPTER 4. ALGORITHM (LINEAR INDEX 1 DAES IN BVPS) 32 look for a column in the first ny columns of T2(a + h) such that these two columns are very close to each other. Swap this located column with the ith column of T2(a + h). By doing so, hopefully Ti(x) = T2(x). The following describes the algorithm in detail. For i = 1 to ny - pick the ith column T\ (a), call it tlf, - find the maximum element (in absolute value) of say the jth element, denote it as 41^ ; - compare 0. Therefore l<\\T\u00C2\u00AE{a)-T\u00C2\u00AE{a + h)\\ It implies that for all columns in T2(a + h), there exists only one of them which is close to the ith column of 7i(a), 1 < i < n. This explains why the algorithm performs so well in practice. 4.2 Details of Step 2 Equation (4.2) can be obtained in the following way. Suppose that there are na BC's at t = a and n& BC's at t = b. Using (3.14), we have (e.g. at t = a) BaT(a) BaT Obviously, in (4.2) -[UxxYxUX2z{a) z(a) Bax(a) = BaT{a) + y z -[fu]-y( nz), even though only nz of them are required to specify the BC's for the whole problem. In this case, the BC's have to be projected properly. The technique used to perform projection in here has been explained in Section 2.1. 6ai in (4.5) and 6b2 in (4.6) should be nonsingular. The BC's at t = a have to control the decreasing modes of z and the ones at t = b have to control the increasing modes. By substituting (3.14) into (3.12) (for the endpoints only), we obtain a set of explicit differential equations on z alone, z' = {U22 - U2l[Un}-xUX2}z + {g2 - tf21[*7\"]-y} Therefore in (4.3), H = {U22 - U21[Ull]-lU12} h = {g2- U2l[Un)-xg1} Step 2(d) and Step 2(e) can be performed in a very simple way. Suppose that w(a) \u00E2\u0080\u0094 QTz(a), then (4.4) can be written as Baw{a) = & (4.7) Let us write Ba as Ba := [Bai,Ba2] where Bal \u00E2\u0082\u00AC 7ln\u00C2\u00B0x* and Ba2 G nnaX(n*~kl Assume that Bal has a row rank of at least k, where k is as defined in Step 2(c). Compute the Gaussian Elimination for (4.7) with partial pivoting. Use an array, say PIVOT, to keep track of which rows have been CHAPTER 4. ALGORITHM (LINEAR INDEX 1 DAES IN BVPS) 35 swapped. For instance, row i and row j have been swapped. Then PIVOT(i) = j and PIVOT(j) = i. (A QR decomposition is not used because it is more expensive). 
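To make this selection concrete, here is a small sketch of the pivoting-based choice of boundary condition rows (Python; not the DAESOLVE source, and the function name and singularity tolerance are assumptions). Elimination with partial pivoting is applied only to a working copy in order to decide which rows to keep, while the rows actually returned are taken from the original matrix, for the conditioning reason discussed below.

```python
# Minimal sketch: from the n_a transformed boundary conditions  Ba w(a) = beta,
# pick k rows whose leading k x k block is nonsingular, using Gaussian
# elimination with partial pivoting on a copy and a PIVOT-style bookkeeping
# array to record which original rows were chosen.
import numpy as np

def select_bc_rows(Ba, beta, k, tol=1e-12):
    """Return k rows of (Ba, beta) whose leading k x k block is nonsingular."""
    Ba = np.asarray(Ba, dtype=float)
    work = Ba.copy()                  # eliminated copy, used only to find pivots
    na = work.shape[0]
    pivot = np.arange(na)             # tracks the row swaps (the PIVOT array)
    for j in range(k):
        p = j + np.argmax(np.abs(work[j:, j]))   # partial pivoting in column j
        if np.abs(work[p, j]) < tol:
            raise ValueError("fewer than k independent BC rows in these columns")
        work[[j, p]] = work[[p, j]]
        pivot[[j, p]] = pivot[[p, j]]
        # eliminate below the pivot, only to expose the next pivot
        mult = work[j + 1:, j] / work[j, j]
        work[j + 1:] -= np.outer(mult, work[j])
    rows = pivot[:k]                  # indices of the chosen original rows
    return Ba[rows], np.asarray(beta, dtype=float)[rows]
```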
Let say, after the Gaussian Elimination, (4.7) becomes Baw(a) = /3Q Ba := Ball Bai2 Ba2i Baii Poi PQ2 (4.8) where Ball \u00E2\u0082\u00AC 7lfcxfc, Ba22 \u00E2\u0082\u00AC ft(\u00C2\u00BB--*)x(\u00C2\u00AB.-*), f301 e Kk and $m 6 H^~kl Then Ba and fi0 in (4.5) can be chosen in the following way: Row i of Sa is the same as row PIVOT(i) of Ba in (4.7), and #>(i) = $0(PIVOT(i)), 1 < * < k (k as defined in Step 2(c)). The rows of Ba in (4.7) are used for Ba, instead of the rows in Ba, because Gaussian Elimination does not preserve conditioning. Using the notation of (4.5) we know that Bai is nonsingular because \u00C2\u00A3?all is upper triangular. Step 2(f) and 2(h) can be performed in the same way. 4.3 Details of Step 3 This is another key step of the whole algorithm. By studying the ghost problem and rearranging the BC's for y, hopefully large stability constants may be prevented. However there is a major assumption in this step, and it is that the eigenvalue analysis does represent the kinematic eigenvalue analysis. When we look at the ghost problem (3.15), ideally we should find a transformation Q such that V = Q-\-M)Q-Q-lQ' CHAPTER 4. ALGORITHM (LINEAR INDEX 1 DAES IN BVPS) 36 where V is upper-triangular and the diagonal elements are the kinematic eigenvalues. This is the famous Lyapunov Equation (see Definition 2.2). But in actual practice, we use the eigenvalue analysis (though it may be misleading as shown in Chapter 2) because it is expensive (or even infeasible) to find the kinematic eigenvalues. The eigenvalues are the same as the kinematic eigenvalues in general only when M (hence Q) is a constant matrix, and they approximate the kinematic eigenvalues well only when the second term of Lyapunov Equation is relatively small (i.e. is small when compared with the In other words, it is that when M(t) has eigenvalues with large real part (the only case where this matters), its variation in t is slow compared to the size of such eigenvalues. Then the eigenvalue analysis approximates Lyapunov's equation well. The decoupling can be done by the transformation QTy. In the last remark of Chapter 3, we show that the non-uniqueness of T(t) and S(t) does not affect the kinematic eigenvalues of the ghost ODE. But does the non-uniqueness affect the eigenvalues of Ml The answer is positive. Rewriting (3.21) and (3.22) using (3.23), we obtain first term). V = Q-\-M)Q-Q-1Q' V = Q~l(-M)Q - QrxQ! (4.9) Q-\-M)Q - [Q~lQ' - Q-lFTF^1 Q] (4.10) CHAPTER 4. ALGORITHM (LINEAR INDEX 1 DAES IN BVPS) 37 Of course, if FT = Q, then the second term of the above equation will disappear. Even if ||2-1<2'|| is small, Hi^Ff11| may not be. Similarly, HG^Q'H may be large, but ||Q-1Q' \u00E2\u0080\u0094 Q~lFTFjl Q\\ may be small. One may think of trying to find an FT such that the second term is small. But it is hard to control this term in actual implementation. There are two more practical issues: 1. How does one know that the eigenvalue analysis is accurate enough? 2. If the eigenvalue analysis is insufficient, what should one do? We attempt to answer the first question, and discuss the second. We can approximate Q and Q' using the following formulas: V(t) = QxxM{t)Qx V(t + h) = Q^M(t + h)Q2 Q ~ Qi 0! \u00C2\u00AB (Q.2-Qi)/h where Qx, Q2 are obtained by the QR algorithm. Substitute Q and Q' into the Lyapunov Equation to see if V is close to upper-triangular. Of course, if M is a constant matrix, this algorithm works well. In practice, however, this simple algorithm does not always work well. 
Even if the elements of the strictly lower triangular part of the approximated V are small, it does not guarantee that the eigenvalue analysis is accurate enough. But CHAPTER 4. ALGORITHM (LINEAR INDEX 1 DAES IN BVPS) 38 if they are large, it may be a good indicator that the eigenvalue analysis may not be sufficient. Another difficulty is the expense involved: in order to obtain Qlt we require an eigenvalue analysis of M(t), and in order to obtain Q2, we require another eigenvalue analysis of M(t + h). Performing an eigenvalue analysis could be very expensive. The second question is much harder to answer. As far as we know, eigenvalue analysis is the only tool we have to approximate the kinematic eigenvalues efficiently. One may think of using Riccati-like methods. However, these methods require the user to supply the number of nonincreasing modes and the number of non decreasing modes, which the user usually does not have. We cannot use the result from the eigenvalue analysis because we do not know if it is correct or not. Another problem is that it is quite inefficient. On the other hand, if the eigenvalue analysis fails, the worst that could happen is large truncation errors. This could be detected by error estimation in a code. So one can switch to another method if the errors are too large. This step also includes a check for dichotomy of the ghost problem (UQ + nf > ny and TIQ+TII > ny). If this condition is not satisfied then it simply means that a dichotomy does not exist. We know that a necessary condition for well-conditioning for BVP is the existence of dichotomy. The intuitive meaning of these two conditions is that the total number of rapidly increasing modes and rapidly decreasing modes should be no more than ny. If it is more than that, it means that one of the modes changes from one type to the other. CHAPTER 4. ALGORITHM (LINEAR INDEX 1 DAES IN BVPS) 39 The last part of this step is transforming back to obtain BC on x(a) and x(b). This can be done quite straightforwardly. We have nz BC's from Step 2, and ny BC's from this step. Then we have (e.g. at t = a) Ba y(a) z(a) = P BaT-\a)x{a) = 0 4.4 Details of Step 4 The last step is just solving the problem with the BC's configured. Chapter 5 Numerical Examples The algorithm proposed in Chapter 4 has been implemented in a code, DAESOLVE. The code uses the symmetric midpoint scheme. DAESOLVE has been tested on numerous index 1 DAEs, and in this chapter we examine three representative examples. Each example illustrates certain aspect of the algorithm proposed in Chapter 4. By looking at these three examples, the performance of the algorithm will then be demonstrated. The first two examples illustrate the importance of arranging the boundary conditions for the ghost ODE and performing the projection. The third one illustrates how the non-uniqueness of the decomposition of E(t) (see the remark at the end of Chapter 3) affects the eigenvalue analysis (as mentioned in Section 4.1 and Section 4.3), and as a result fails to arrange the boundary conditions for the algebraic part of the DAE properly. For the following examples the code DAESOLVE uses a uniform mesh (h = l/N) (see (3.7) and (3.8)). The errors computed are the maximum absolute errors. All com-putations were performed on a Sun 3 running a UNIX f77 compiler in double precision. 40 CHAPTER 5. NUMERICAL EXAMPLES 41 5.1 Example A 0 0 0 E = 0 0 0 A = -t 2 . 
This example demonstrates that the algorithm works well in a case when the eigenvalue analysis of the ghost matrix M (3.15) approximates the kinematic eigenvalues properly, even if M is not a constant matrix. Using the notation of (1.2), the DAE is as follows: t(l + 2/3) + (1 - p) -(1 + 2/3) 0 t(l -(3)-2/3 t(l + 2/3) -(1 + 2/3) 0 t -1 (/?-l)e-7/?, t(l + 2p) + (2/9 - (1 - fit)*'*/?, <-!/*] fl is a parameter, xi(l) = e~l/fl, and 1 < t < 1.1. The exact solution is: xx = e-'/fl, x2 = te-t/P, x3 = t2e-*/P + t The transformation matrices S and T can be chosen as follows: S = I and T = Then 1 -t 0 ' t 1 - t* 1 T . <2 2t- <3 * . 3/1 t + e~ 7/3 ' V2 = 1 z - 1 1 -t 1 0 - 1 1/t -t 2 -1/t det(T) = - t M - t (t + l)(l//3 - t) +1 1 1//3 - (i + 1) With the following transformation we can find the kinematic eigenvalues of M, V T~\MT - T). T * + l 1 1 0 V = IIP 1 0 -1 Therefore the kinematic eigenvalues of M are l/P and \u00E2\u0080\u00941. The stability constant is very large when P is close to 0 and the BC is not properly posed. Actually the index of this CHAPTER 5. NUMERICAL EXAMPLES 42 DAE is close to 2 when P is close to 0. This can easily be seen when we look at Un of (3.13) U n 1-P -'(I-/?)-(1 + 2(3) t(l -p)-2p -t2(l -p)-t det(Uu) = -20(1 + 2P) If P = 0, then det(Uu) = 0. Note that M(t) and T(i) are not constant. However, a simple computation shows that the eigenvalues of M are X(l-0 + y/l+2ft + &fP) L 2/3 (i - p - vT+W+W) i_ -0.5 when P \u00E2\u0080\u0094> 0 (5.1) The large eigenvalue is very close to the large kinematic eigenvalue. This implies that the algorithm should perform well. The experimental results do show that this is true. In order to use the numerical scheme, two other BC's are required. Suppose that these two BC's are as follows: xa(l) = e-1/P, x3(l) = e\"1//? + 1 and p is chosen to be \u00E2\u0080\u00940.01. Table 5.1a shows the numerical results when the DAE is posed as an IVP using the BC's above (i.e. the algorithm proposed in Chapter 4 is not used). Table 5.1b shows the numerical results when the code DAESOLVE is used. CHAPTER 5. NUMERICAL EXAMPLES 43 Table 5.1a h error1 error2 error3 ratel rate2 rate3 0.200-01 0.204+04 0.433+04 0.704+04 0.100-01 0.701+03 0.151+04 0.245+04 0.152+01 0.152+01 0.152+01 0.500-02 0.229+03 0.486+03 0.792+03 0.163+01 0.163+01 0.163+01 0.250-02 0.622+02 0.132+03 0.215+03 0.188+01 0.188+01 0.188+01 0.125-02 0.159+02 0.337+02 0.549+02 0.197+01 0.197+01 0.197+01 0.625-03 0.400+01 0.847+01 0.138+02 0.199+01 0.199+01 0.199+01 Table 5.1b h error1 error2 error3 ratel rate2 rate3 0.200-01 0.135+00 0.275+00 0.444+00 0.100-01 0.412-01 0.838-01 0.135+00 0.171+01 0.171+01 0.172+01 0.500-02 0.111-01 0.226-01 0.360-01 0.189+01 0.189+01 0.190+01 0.250-02 0.284-02 0.578-02 0.917-02 0.197+01 0.197+01 0.197+01 0.125-02 0.714-03 0.145-02 0.230-02 0.199+01 0.199+01 0.199+01 0.625-03 0.179-03 0.364-03 0.576-03 0.200+01 0.200+01 0.200+01 Actually the computed eigenvalues for Table 5.1b are -101.6 and -0.39 at t=l -101.6 and -0.42 at t=l.l These values are close to the ones in (5.1). For Table 5.1b, the algorithm allocates two BC's at t = 1 and one at t = 1.1. 5.2 Example B This example is used to demonstrate the importance of projecting the boundary condi-tions for the differential part of the DAE (i.e. Step 2 of the proposed algorithm). Since sometimes the user provides more boundary conditions than the algorithm requires, the CHAPTER 5. NUMERICAL EXAMPLES 44 projection algorithm in Section 4.2 will choose an appropriate combination from those available. 
Using the notation of (1.2), consider E = P > 0, 0 < t < 1 and The exact solution is: 0 0 -(2 + 1)\"1 1 A = P l-P(t + l) 0 1 T <1 = 0, (/ + l)\"1 - 2P - pt xi(l) = -2 + 4/5 (5.2) xx = -(t+l)+P(t + l)2, x2 = p(t + l) However the user may think that one boundary condition is not enough, because the BVP has two variables, namely X\ and x2. Therefore the user may also specify one more BC. Let say, for instance, this BC is as follows: Xl(0) = -l + p (5.3) and P is chosen to be 10. When we look at the proposed algorithm in Chapter 4, we know that Step 3 (see Section 4.3) generates BC's for the algebraic part of the DAE without using any given BC's (4.1). However, the algorithm requires the BC for the differential part to be spec-ified. Using the above example, the dimension of the differential part is one, it implies CHAPTER 5. NUMERICAL EXAMPLES 45 that only one BC is required for the differential part. But there are two BC's that the algorithm could use. Which one should be chosen? We consider two cases. The first one is that no projection is performed (i.e. Step 2 of the proposed algorithm in Chapter 4 is omitted) and the algorithm randomly picks (5.3) as the BC for the differential part. Table 5.2a shows the experimental results of this case. The second case is that the projection step (See Section 4.2) is performed. Table 5.2b shows the experimental results of this case. One can easily see that the errors recorded in Table 5.2b are much better than those in Table 5.2a. Table 5.2a h errorl error2 ratel rate2 0.500-01 0.555+10 0.294+10 0.250-01 0.177+10 0.933+09 0.165+01 0.166+01 0.125-01 0.506+09 0.267+09 0.180+01 0.181+01 0.625-02 0.132+09 0.694+08 0.194+01 0.194+01 Table 5.2b h errorl error2 ratel rate2 0.500-01 0.108+00 0.561-01 0.250-01 0.299-01 0.152-01 0.185+01 0.189+01 0.125-01 0.772-02 0.388-02 0.195+01 0.197+01 0.625-02 0.195-02 0.975-03 0.199+01 0.199+01 The reason can be seen as follows: Using the following transformation matrices S = I T = ' (t + i) 0 ' T - l = 1 1 (t + I ) - 1 0 - ( t+1) - 1 1 det(T) = t + 1 CHAPTER 5. NUMERICAL EXAMPLES 46 the linear DAE can be reduced to a semi-explicit form t/ = [ / ? ( * + ! ) - ! ] * , z' = [l + (t + l)-1]y + z + q2 where q2 = (t + l )- 1 \u00E2\u0080\u0094 2/3 \u00E2\u0080\u0094 flt. Substituting y into the ODE for z z' = (l- z)q2 (5.4) If /? > 1/2, then (5.4) will have a growing mode, and therefore the boundary condition for the differential part should be specified at t = 1. This explains the poor results in Table 5.2a. The results in Table 5.2b are much better because the projection algorithm studies the sign and magnitude of q2 in (5.4), and decides to put the boundary condition of the differential part at t = 1. It should be noted that the phenomenon demonstrated in this example has little to do with the use of the mid-point scheme. Rather, it points out the general danger present when randomly picking boundary conditions in a consistent, overdetermined problem, regardless of the numerical discretization. This example is used to demonstrate that the eigenvalue analysis for the ghost ODE could be sensitive to the choice of T. Since T is not unique, a different T may generate a different ghost problem, and hence produce different eigenvalues. (See Section 4.3 for 5.3 Example C CHAPTER 5. NUMERICAL EXAMPLES 47 details). Using the notation of (1.2), consider -t 2 -1/t ' fit E = 0 0 0 A = 2fit-2 0 0 0 _ (a2 + fi2)t2 -2 fit -fi 0 (a 2 +/? 
2 ) t (a2+fi2) -2a, - a ( a 2 + 02)22 - 2a(/3t - l)e~\ 2a0te-* ] 0 < 0, a > 0, 1 < t < 2, and x1(l) = ae - l The exact solution is: Xi = ae~\ x2 = ate *, cc3 = c*i 2 (l + e _ t ) If we choose the following S and T T = a /3 0 at /3t - 1 1 at2 fit2 - 2t t S = 0 0 1 0 1 0 1 0 0 1/a -fila fil at 0 1 -1/t - t 2 -1/t det(T) = at then /9t + e - ' V2 = -at 2 -at M _ f fi a Since M is a constant matrix, the kinematic eigenvalues are the same as the eigenval-ues, namely fldzai. Because /? is negative, the two boundary conditions for the algebraic part y should be put at t = 2, and also because a > 0, the boundary condition for the differential part z should be put at t = 1. That is, there should be one boundary condition at the left end point, and two boundary conditions at the right end point. CHAPTER 5. NUMERICAL EXAMPLES 48 However the matrix function M of the ghost problem could be quite different if another T is chosen. For instance, using the above S with 1 0 0 1 0 0 Tnew \u00E2\u0080\u0094 0 1 0 T-l _ 0 1 0 -t 2 2t -t. -t 2 -1/t _ then Mnew \u00E2\u0080\u0094 (a 2 + p2)t - ( a 2 + fl2) (a2 + p2)t2 - 2/3t - ( a 2 + (32)t + 2(l and the eigenvalues of M n e w are 0 and 2/5. But, we know that the kinematic eigenvalues are /3\u00C2\u00B1m. Thus, the eigenvalues are very different from the kinematic eigenvalues. This implies that the algorithm only performs well when T is properly chosen. The following will demonstrate this phenomenon. In order to use the numerical scheme, two other BC's are required. Suppose that these two BC's are as follows: x2{l) = ae~\ xz(l) = a(l + e\"1) Table 5.3a shows the numerical results of running DAESOLVE with /? = \u00E2\u0080\u0094 5 and a = 15. The kinematic eigenvalues are then \u00E2\u0080\u00945 \u00C2\u00B1 15i. The eigenvalues computed are -5.5 + 4.7* and -5.5 - 4.7* at t = 1 -5.4 + 7.0* and -5.4 - 7.0* at t = 2 The real parts of the eigenvalues are close to the real parts of the kinematic eigenvalues, which explains the good results of Table 5.3a. CHAPTER 5. NUMERICAL EXAMPLES 49 Table 5.3b shows the numerical results of running DAESOLVE with /? = -10 and o: = 6. The kinematic eigenvalues are then \u00E2\u0080\u009410 \u00C2\u00B1 6i. But the eigenvalues computed are -19.4 and -1.6 at t = 1 -18.2 and -2.5 at t = 2 The eigenvalues are very different from the kinematic eigenvalues. Assuming that the eigenvalues \u00E2\u0080\u00941.6 and \u00E2\u0080\u00942.5 are not considered as large, the algorithm proposed in Chapter 4 will then set two boundary conditions at t = 1 and one at t = 2. However we know that there should be one boundary condition at t = 1 and two at t = 2. This explains the poor results of Table 5.3b. 
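The failure just described lies in the boundary-condition allocation of Step 3, which can be summarized by the following sketch (Python; not the DAESOLVE source, and the function name and the threshold `big` separating "large" eigenvalues are illustrative assumptions). It counts the eigenvalues of the ghost matrix M with strongly negative and strongly positive real parts at each endpoint, performs the dichotomy check, and then places k = max(n_a^-, n_b^-) boundary conditions for y at t = b and the remaining n_y - k at t = a, as in Step 3. With the spurious eigenvalue -1.6 classified as "not large", k comes out one too small, which is exactly the misallocation seen here.

```python
# Minimal sketch of the Step 3 decision: allocate the boundary conditions for
# the algebraic part y between the two endpoints from the eigenvalues of the
# ghost matrices M(a) and M(b).
import numpy as np

def allocate_ghost_bc(Ma, Mb, ny, big=1.0):
    """Return (#BCs for y at t=a, #BCs for y at t=b).

    Ma, Mb : ghost matrices M(a), M(b), each ny x ny.
    big    : threshold for a "large" real part (problem dependent).
    """
    re_a = np.real(np.linalg.eigvals(Ma))
    re_b = np.real(np.linalg.eigvals(Mb))
    na_minus, na_plus = int(np.sum(re_a < -big)), int(np.sum(re_a > big))
    nb_minus, nb_plus = int(np.sum(re_b < -big)), int(np.sum(re_b > big))
    # dichotomy check of Step 3: too many fast modes means no dichotomy for y
    if na_minus + na_plus > ny or nb_minus + nb_plus > ny:
        raise RuntimeError("no dichotomy detected; fall back to another method")
    # components tied to large negative eigenvalues of M are set at t = b
    # (the ghost ODE involves -M); the remaining ny - k components at t = a
    k = max(na_minus, nb_minus)
    return ny - k, k
```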
Table 5.3a

    h          error1     error2     error3     rate1      rate2      rate3
    0.500-01   0.131+01   0.194+01   0.396+01
    0.250-01   0.385+00   0.626+00   0.128+01   0.177+01   0.163+01   0.163+01
    0.125-01   0.996-01   0.167+00   0.341+00   0.195+01   0.191+01   0.191+01
    0.625-02   0.251-01   0.424-01   0.866-01   0.199+01   0.198+01   0.198+01
    0.313-02   0.209-02   0.361-02   0.753-02   0.200+01   0.194+01   0.194+01

Table 5.3b

    h          error1     error2     error3     rate1      rate2      rate3
    0.500-01   0.149+03   0.281+03   0.529+03
    0.250-01   0.553+02   0.105+03   0.200+03   0.143+01   0.142+01   0.140+01
    0.125-01   0.282+02   0.539+02   0.103+03   0.970+00   0.964+00   0.957+00
    0.625-02   0.934+01   0.179+02   0.342+02   0.159+01   0.159+01   0.159+01
    0.313-02   0.253+01   0.485+01   0.928+01   0.188+01   0.188+01   0.188+01

Though this example only shows that the eigenvalue analysis fails to approximate the kinematic eigenvalues of the ghost matrix M, a similar failure is also expected when the eigenvalues are used to approximate the kinematic eigenvalues in the projection process (see Section 4.2).

Chapter 6

Conclusion and Future Research

In this thesis we have presented an algorithm to solve linear boundary value problems for DAEs of index 1 with separated boundary conditions. The algorithm arranges the boundary conditions for the algebraic part and rearranges the boundary conditions for the differential part of the DAE. By doing so, we hope that the conditioning constants of the ODEs (from the differential part and from the additional ghost ODE generated by the symmetric discretization of the algebraic part) will be of moderate size. These features have been implemented in a code, DAESOLVE.

DAESOLVE has been tested on numerous examples, and the results have been compared with a code which chooses the BC arbitrarily. The experimental results are promising. DAESOLVE outperforms the other code in most cases, and performs as well as the other one in the remaining cases.

However, DAESOLVE has two weaknesses which may sometimes cause it to fail to perform better than with arbitrary BC. The first is that its performance is critically affected by how well the eigenvalues approximate the kinematic eigenvalues. It is well known in the BVP literature that this approximation may fail in some pathological cases. The second, which is more serious, comes from the non-uniqueness of the decomposition of E(t). The non-uniqueness of T(t) creates an important problem: it may affect the eigenvalue analysis (see Section 4.3). In theory, the non-uniqueness of T(t) could have an unpredictable effect on the eigenvalue analysis (cf. (4.10)): the magnitude of ||Q^{-1}Q' - Q^{-1}F_T'F_T^{-1}Q|| could be either large or small depending on F_T. The difficulty lies in this uncertainty; it is very hard to control what happens in practice.

Like most works, this thesis suggests more problems than it solves. The most obvious are the ones mentioned above. A simple extension is to allow DAESOLVE to handle non-separated boundary conditions. The whole algorithm proposed in Chapter 4, except for the projection step, could be applied to the case of non-separated boundary conditions. It is a non-trivial question how non-separated boundary conditions could be projected; Ascher [1] has suggested one way to do so, but it cannot be used easily in practice. Last, but not least, is to apply the algorithm proposed in this thesis to non-linear problems.

Bibliography
Ascher, \"On numerical differential algebraic problems with application to semi-conductor device simulation,\" To appear in SIAM J. Numer. Anal, 1989. [2] U. Ascher, \"Symmetric schemes and differential-algebraic equations,\" To appear in SIAM J. Sci. Stat. Comput, 1989. [3] U. Ascher, R. Mattheij h R.D. Russel, \"Numerical solution of boundary value prob-lems for ordinary differential equations,\" Prentice-Hall, 1988. [4] K.E. Brenan & B.E. Engquist, \"Backward differentiation approximations of nonlin-ear differential/algebraic systems,\" revised manuscript, 1987. [5] S. Campbell, \"The numerical solutions of higher index linear time varying singular systems of differential equations,\" SIAM J. Sci. Stat. Comput., v. 6, pp. 334-347, 1985. [6] F. Gao, Private communication. 53 BIBLIOGRAPHY 54 [7] C.W. Gear, \"Simultaneous numerical solution of differential-algebraic equations,\" IEEE Trans. Circuit Theory, CT-18, 1971, pp. 89-95. [8] C.W. Gear, \"Differential-Algebraic equation index transformation,\" SIAM J. Sci. Stat. Comput., v. 6, pp. 334-347, 1989. [9] C.W. Gear, G.K. Gupta & B. Leimkuhler, \"Automatic integration of Euler-Lagrange equations with constraints,\" J. Computational & Applied Math., 12-13, pp. 77-90, 1985. [10] C.W. Gear & H.H. Hsu & L. Petzold, \"Differential/algebraic equations revisited,\" Proc. numerical methods for solving stiff I VPs, Oberwolfach, W. Germany, June 28 - July 4, 1981. [11] C.W. Gear & L.R. Petzold, \"ODE methods for the solution of differential/algebraic systems,\" SIAM J. Numer. Anal, v. 21, 1984, pp. 716-728. [12] E. Griepentrog &; R. Marz, \"Differential-algebraic equations and their numerical treatment,\" Teubner-Texte zur Mathematik Baud 88, Leipzig 1986. [13] A. Kvaerno, \"Order conditions for Runge-Kutta methods applied to differential/ algebraic systems of index 1,\" Math, of Computation, 1987. BIBLIOGRAPHY 55 [14] B.J. Leimkuhler, \"Approximation methods for the consistent initialization of differential-algebraic equations,\" Ph.D. Thesis, Report No. 88-1450, Aug. 1988, U. of Illinois at Urbana-Champaign, Dept. of Computer Science. [15] R. Marz, \"On difference and shooting methods for boundary value problems in differential-algebraic equations,\" Zamm, 11, pp. 463-473, 1984. [16] L.R. Petzold, \"A description of DASSL: A differential/ algebraic system solver,\" Proceedings of IMACS World Congress, Montreal, Canada, 1982. [17] L.R. Petzold, \"Differential/Algebraic equations are not ODE's,\" SIAM J. Sci. Stat. Stat. Comput., v. 3, 1982, pp. 367-384. [18] L.R. Petzold, \"Order results for implicit Runge-Kutta methods applied to differen-tial/algebraic systems,\" SIAM J. Num. Ana., v. 4, 1986, pp. 837-852. [19] L.R. Petzold k P. Lotsedt, \"Numerical solution of nonlinear differential equations with algebraic constaints II: Practical implication,\" SIAM J. Sci. Stat. Comput., v. 7, 1986, pp. 720-733. [20] R.F. Sincovec, B. Dembart, M.A. Erisman, A.M. Erisman, J.W. Manke k EX. Yip, \"Solvability of large scale descriptor systems,\" Boeing Computer Services Company, report, BOE Contract No. ET-78-C-01-2876. "@en . "Thesis/Dissertation"@en . "10.14288/1.0051912"@en . "eng"@en . "Computer Science"@en . "Vancouver : University of British Columbia Library"@en . "University of British Columbia"@en . "For non-commercial purposes only, such as research, private study and education. Additional conditions apply, see Terms of Use https://open.library.ubc.ca/terms_of_use."@en . "Graduate"@en . 
"An algorithm for solving index 1 differential algebraic equations in boundary value problems"@en . "Text"@en . "http://hdl.handle.net/2429/27404"@en .