The Application of Symmetry Methods and Conservation Laws to Ordinary Differential Equations and a Linear Wave Equation

by Jeremy G. Hoskins

B.Sc., The University of British Columbia, 2011

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF SCIENCE in The Faculty of Graduate Studies (Mathematics)

THE UNIVERSITY OF BRITISH COLUMBIA (Vancouver)

October 2012

© Jeremy G. Hoskins 2012

Abstract

Symmetry analysis and conservation laws are widely used in analyzing and solving differential equations. Conservation laws are also called first integrals when dealing with ordinary differential equations (ODEs). In this thesis, the complementary nature of these two approaches is explored; specifically, the use of symmetries to find integrating factors and, conversely, the use of conservation laws to seek new symmetries. In Part 1, building upon results in [3] and [10], it is shown that a higher-order symmetry of an ODE induces a point symmetry of the corresponding integrating factor determining equations (IFDE), and an explicit expression for this induced symmetry is obtained. Secondly, it is shown that the converse also holds for a special class of Lie point symmetries of the IFDE; namely, all Lie point symmetries of the IFDE which are of the form

\[
\xi(x, Y, Y_1, \ldots, Y_{n-1})\frac{\partial}{\partial x}
+ \eta(x, Y, Y_1, \ldots, Y_{n-1})\frac{\partial}{\partial Y}
+ \sum_{i=1}^{n-1} \eta^{i}(x, Y, Y_1, \ldots, Y_{n-1})\frac{\partial}{\partial Y_i}
+ \Lambda\, h(x, Y, Y_1, \ldots, Y_{n-1})\frac{\partial}{\partial \Lambda}
\qquad (1)
\]

project onto point symmetries of the original scalar ODE.

In Part 2, the use of conservation laws to find non-local symmetries is shown for a linear one-dimensional wave equation in a two-layered medium with a smooth transition layer. The resulting analytic solutions are then studied in order to investigate the effect of the transmission and reflection of energy between the two media. It is found that the reflection and transmission coefficients depend on the ratio of the wave speeds in the two media as well as the ratio of the characteristic length of the incoming signal to the width of the transition layer. Approximations of the dependence of the reflection and transmission coefficients on these two parameters are also presented, obtained via numerical experiments performed using both the analytic solution and a finite element method.

Table of Contents

Abstract
Table of Contents
List of Tables
List of Figures
Acknowledgements
Dedication
1 Preface
I First Integral Symmetries
2 First Integrals of Ordinary Differential Equations
2.1 Introduction
2.2 First Integrals
2.3 Theoretical Background
2.3.1 Direct Method
2.4 Motivation and Aims
2.4.1 Symmetries Map First Integrals into other First Integrals
2.4.2 Symmetry Ansatzes and Invariant Solutions
2.4.3 Aims
3 Results
3.1 Introduction
3.2 First-Order ODEs
3.3 Second-Order ODEs
3.3.1 Scalar Integrating Factor Equations
3.3.2 System Integrating Factor Equations
3.3.3 Comparison of the Two Approaches
3.4 General ODEs
3.4.1 Overview
3.4.2 First Integral Determining Equation
3.4.3 First Integral Symmetries
3.4.4 Multiplier Determining Equations
3.4.5 Induced Multiplier Symmetries
3.4.6 Connection between the System and Scalar Multiplier Determining Equations
4 Conclusions and Further Work
II Analysis of an Exact Solution of a Wave Equation
5 The Wave Equation
5.1 Introduction
5.2 The Wave Equation and its Applications
5.3 Conservation of Energy
5.4 Reflection and Transmission Coefficients
6 Solutions of a Family of Wave Equations
6.1 Introduction
6.2 Symmetry Classification of (5.2)
6.3 Formulation of a Nonlocally Related System
6.4 Symmetry Analysis of a Nonlocally Related System
6.5 Formulation of an Analytic Solution
7 Numerical Implementation of the Analytic Solution
7.1 Introduction
7.2 Details of the Implementation
7.3 Results
8 Numerical Approximation to the Solution
8.1 Introduction
8.2 Details of the Implementation
8.2.1 Weak Formulation
8.2.2 Discretization
8.3 Results
9 Conclusions and Further Work
Bibliography

List of Tables

6.1 Group classification of the linear wave equation (5.2) appearing in [5], based on [6].

List of Figures

6.1 A solution of (6.11) for c_1 = 1/2, c_2 = 1, and λ = 0.8, obtained using a Runge-Kutta method [22] implemented in Matlab™.
6.2 A plot of U(x(y)) given by (6.32) with N = 1 and y† = e^{−7.5}, obtained using a finite number of terms of the series solution (6.28).
7.1 A plot of the first mode u_0(y, t) given in (6.29) when λ = 1, c_2 = 2, and t = −8.
7.2 A plot of the first mode u_0(y, t) given in (6.29) when λ = 1, c_2 = 2, and t = 0.
7.3 A plot of the first mode u_0(y, t) given in (6.29) when λ = 1, c_2 = 2, and t = 8.
7.4 The first mode u_0(y, t) plotted against log(y) when λ = 1, c_2 = 2, and t = 10.
7.5 The first mode u_0(y, t) plotted against log(π − y) when λ = 1, c_2 = 2, and t = 10.
7.6 The dependence of the transmission coefficient on the ratio of the asymptotic wave speeds for the first mode u_0(y, t) given in (6.29) when λ = 1. The red and green points were calculated from the two wave pulses created by the collision of the two incident wave pulses. The blue points represent the transmission coefficient for a sharp transition with the same wave speed ratio.
7.7 A plot of the transmission coefficients for a smooth transition with λ = 1 (red) and the transmission coefficients for a sharp transition (green), where the initial conditions are given by (6.32) with N = 1 and y† = e^{−7.5}, and γ = c_2/c_1 is the ratio of the asymptotic wave speeds.
8.1 A numerical approximation to a solution of the wave equation (5.2) with wave speed satisfying (6.11) for c_1 = 1, c_2 = 2, λ = 1 and initial conditions given by (6.32) with N = 1 and y† = e^{−7.5}, obtained using a finite element method implemented in Matlab™. Blue: t = 0, green: t = 0.8, red: t = 3.4. A plot of the wave speed is given in (6.1).
8.2 A numerical approximation to a solution of the wave equation (8.1) with wave speed satisfying (6.11) for c_1 = 1/2, c_2 = 1, and λ = 2.3 at three different times: t = 5 (blue), t = 23 (green), and t = 27 (red). The approximation was obtained using a finite element method implemented in Matlab™. The corresponding wave speed as a function of x is given in (6.1).
8.3 A plot of the smooth transmission coefficients T(κ) when c_1 = 1/2 and c_2 = 1, with Fresnel transmission coefficient T_F = 8/9.

Acknowledgements

The author would like to thank his supervisors, Dr. George Bluman and Dr. Alexei Cheviakov, for their guidance. He would also like to thank Dr. Dominik Schoetzau for his assistance.

Dedication

To my parents.

Chapter 1

Preface

In the study of ordinary and partial differential equations (ODEs and PDEs, respectively), symmetry analysis and conservation law methods are particularly effective techniques which enable one to find solutions to differential equations arising in many areas of science. In this thesis the complementary nature of these two methods is explored: both the use of symmetries to find integrating factors and the use of conservation laws to find symmetries.

Part 1 begins with a discussion of the direct method for finding integrating factors, which arise as solutions of an over-determined PDE system called the scalar integrating factor determining equations (IFDE). In general, solving the scalar IFDE directly is not possible and one instead seeks particular solutions. A result from [10] is presented in Chapter 2 which shows that every contact symmetry of a scalar ODE induces a point symmetry of its corresponding IFDE. This is useful in two ways. Firstly, given a contact symmetry of the scalar ODE one can seek solutions of the IFDE which are invariant under the induced point symmetry.
Secondly, given an integrating factor one can look at its image under a symmetry; mapping in this way can yield one or more new integrating factors.

In Chapter 3 the correspondence between contact symmetries of a scalar ODE and its associated scalar IFDE is extended to the case of higher-order symmetries of the scalar ODE. In particular, it is shown that every higher-order symmetry of the scalar ODE induces a point symmetry of its associated IFDE. Conversely, it is shown that if a transformation is a Lie point symmetry of the scalar IFDE of a particular form (commonly used when seeking symmetries of a PDE system) then that point symmetry must project onto a higher-order symmetry of the original scalar ODE.

Part 2 demonstrates the systematic use of conservation laws to find symmetries of a PDE which cannot be obtained using the classical methods of Lie. In particular, it is shown that for a certain linear one-dimensional wave equation one can find two such symmetries, called non-local symmetries, which allow one to find new analytic solutions to this equation [7]. The wave speed for this specific wave equation corresponds to two homogeneous media separated by a smooth transition region. In physical applications the fraction of an incident wave's energy which passes through the transition region is particularly important. Chapter 5 introduces the particular form of the wave equation under consideration, as well as its applications, and formalizes the definition of energy and the fraction of transmitted energy. In Chapter 6 non-local symmetry methods are outlined and applied to the wave equation, enabling one to find an analytic solution for a four-parameter family of wave speeds. The analysis of the resulting analytic solution is given in Chapter 7 and compared with the results of a finite element method approach in Chapter 8.

Part I

First Integral Symmetries

Chapter 2

First Integrals of Ordinary Differential Equations

2.1 Introduction

In this chapter, the use of first integrals for solving scalar ordinary differential equations (ODEs) is discussed. In particular, the direct method is introduced, which allows one to seek integrating factors systematically by solving an over-determined system of partial differential equations (PDEs) called the integrating factor determining equations (IFDE). Often it is not possible to obtain general solutions of these equations by directly solving the IFDE, and one must instead seek particular solutions. In this chapter two methods, based on symmetries, are presented which can be used to seek first integrals of scalar ODEs systematically.

2.2 First Integrals

The mathematical modelling of physical phenomena frequently generates differential equations together with conserved quantities (first integrals, in the case of ordinary differential equations) such as mass, energy, momentum and charge. Numerically, first integrals are used in constructing finite element methods as well as in checking the accuracy of numerical schemes. Recently there have been great advances in the development of systematic methods for finding first integrals. Using first integrals one can obtain analytic solutions to many otherwise-intractable differential equations.

The present work focuses on nth-order ordinary differential equations (ODEs) of the form

\[
y^{(n)} - f(x, y, y', \ldots, y^{(n-1)}) = 0. \qquad (2.1)
\]

In this case, a first integral enables the reduction of (2.1) to an (n−1)th-order ODE.

Definition 2.1. A function $\phi(x, Y, Y_1, \ldots
, Y_{n-1})$ yields a first integral of (2.1) if, for arbitrary functions $\boldsymbol{Y} = (Y, Y_1, \ldots, Y_{n-1})$, there exists $\Lambda = \Lambda(x, \boldsymbol{Y})$ such that

\[
\Lambda(x, \boldsymbol{Y})\,[Y_n - f(x, \boldsymbol{Y})] = \mathrm{D}\phi(x, \boldsymbol{Y}). \qquad (2.2)
\]

In (2.2), $\mathrm{D} = \frac{\partial}{\partial x} + Y_1\frac{\partial}{\partial Y} + \cdots + Y_n\frac{\partial}{\partial Y_{n-1}}$ is the total derivative operator. The function $\Lambda$ is called an integrating factor of (2.1), and $\phi(x, y, y', \ldots, y^{(n-1)}) = \mathrm{const}$ is the corresponding first integral. Unless otherwise stated, $Y, \ldots, Y_{n-1}$ are assumed to be arbitrary, while $y, y', \ldots, y^{(n-1)}$ will denote a solution of (2.1) and its derivatives.

Example 2.2. As an example, consider

\[
y'' - y = 0. \qquad (2.3)
\]

Two distinct integrating factors of (2.3) are $\Lambda_1(x, Y, Y_1) = e^{x}$ and $\Lambda_2(x, Y, Y_1) = e^{-x}$. The multiplication of (2.3) by $\Lambda_1$ and replacement of $y$ and its derivatives by arbitrary functions $Y$, $Y_1$, $Y_2$ gives

\[
e^{x}(Y_2 - Y) = \mathrm{D}[e^{x}(Y_1 - Y)].
\]

It follows that $\phi_1(x, y, y') = e^{x}(y' - y) = a$ is a first integral of (2.3). Using $\Lambda_2$ yields another first integral, $\phi_2(x, y, y') = e^{-x}(y' + y) = b$. Solving the two first integral equations for $y$, one finds the general solution of (2.3):

\[
y = \frac{b}{2}\,e^{x} - \frac{a}{2}\,e^{-x}. \qquad (2.4)
\]

Using a Lie point symmetry to reduce (2.1) results in an (n−1)th-order equation

\[
v^{(n-1)} - g(u, v, v', \ldots, v^{(n-2)}) = 0, \qquad (2.5)
\]

where $v^{(i)} = \frac{d^i v}{du^i}$, $u = u(x, y)$, and $v = v(x, y, y')$ has an essential dependence on $y'$. Since $v$ depends on $y'$, writing (2.5) in terms of $(x, y, y', \ldots, y^{(n)})$ yields an nth-order ODE; in terms of the variables $(x, y, y', \ldots, y^{(n)})$ no reduction has taken place. Reducing via a first integral, by contrast, yields an (n−1)th-order ODE depending upon $x$, $y$, the derivatives of $y$ to order $n-1$, and an arbitrary constant $\lambda$. The fact that first integrals allow reduction in terms of the original variables aids significantly in subsequent analysis and further reduction.

First integrals are found systematically using integrating factors, which satisfy an over-determined linear PDE system called the integrating factor determining equations (IFDE). Given an integrating factor, calculating a first integral and reducing (2.1) is then straightforward. Due to the nature of the set of solutions of the IFDE, discussed further in Section 2.3.1, obtaining a general solution of the IFDE by directly solving the over-determined PDE system is seldom possible. Instead, one seeks particular solutions of the IFDE, since any non-zero integrating factor allows one to reduce (2.1) to an (n−1)th-order ODE. If $n$ functionally-independent integrating factors are known, a general solution of (2.1) can then be determined.

To obtain particular solutions of an IFDE, one often considers a particular ansatz for the integrating factors, such as $\Lambda(x, \boldsymbol{Y}) = \tilde{\Lambda}(x, Y, Y_1, \ldots, Y_k)$ where $k < n$. Alternatively, one can employ symmetry methods to find integrating factors in the following ways:

• given a symmetry of the IFDE of (2.1), one can look for solutions invariant under that symmetry [10]; this reduces the equations, simplifying the task of finding particular solutions.

• if a non-zero integrating factor is known, a symmetry of the IFDE will map it to one or more possibly distinct integrating factors [10].

The use of symmetries of the IFDE thus presents a systematic way to search for integrating factors of the ODE (2.1). The above methods are outlined in greater detail in Sections 2.4.1 and 2.4.2, respectively.

In order to use symmetry methods one must first know at least one symmetry of the IFDE.
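As an aside, identities such as (2.2) are easy to check mechanically. The following minimal sketch (an added illustration, not part of the original thesis) uses Python with SymPy to verify both integrating factor/first integral pairs of Example 2.2; once $Y$, $Y_1$, $Y_2$ are taken as $Y(x)$, $Y'(x)$, $Y''(x)$, the operator $\mathrm{D}$ reduces to $d/dx$.

```python
# Check Definition 2.1 for Example 2.2 (y'' - y = 0): verify that
#   Lambda * (Y2 - Y) = D[phi]
# holds identically for an arbitrary function Y(x).
import sympy as sp

x = sp.symbols('x')
Y = sp.Function('Y')(x)          # arbitrary function Y(x)
Y1, Y2 = Y.diff(x), Y.diff(x, 2)

pairs = [
    (sp.exp(x),  sp.exp(x)  * (Y1 - Y)),   # Lambda_1, phi_1
    (sp.exp(-x), sp.exp(-x) * (Y1 + Y)),   # Lambda_2, phi_2
]

for Lam, phi in pairs:
    # Each difference simplifies to 0, confirming (2.2) for this pair.
    print(sp.simplify(Lam * (Y2 - Y) - phi.diff(x)))
```

Both printed values are zero, confirming that $\Lambda_1$ and $\Lambda_2$ are integrating factors with corresponding first integrals $\phi_1$ and $\phi_2$.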
The easiest way to find symmetries of the IFDE is to seek contact symmetries of (2.1), since every contact symmetry of (2.1) in- duces a point symmetry of the IFDE [10]. There are many examples for which this technique has proven effective in obtaining integrating factors and corresponding first integrals. In this work it will be shown that the converse of the previous result also holds for a certain class of point symmetries of the IFDE. In particular, all point symmetries of the IFDE which are of the form X = ξ(x, Y ) ∂ ∂x + η(x, Y ) ∂ ∂Y + n−1∑ j=1 ηj(x, Y ) ∂ ∂Yj + Λh(x, Y ) ∂ ∂Λ project onto higher-order symmetries of the scalar ODE (2.1). 7 2.3. Theoretical Background 2.3 Theoretical Background 2.3.1 Direct Method Recent progress in the study of first integrals has resulted from the direct method [3]. Analogous to symmetry analysis, the direct method produces an over-determined linear PDE system which the integrating factors must satisfy. Only ODEs will be considered here, though the method generalizes to PDEs. The aim of the direct method is to find functions Λ(x, Y ) for which there exists φ(x, Y ) such that the pair (Λ, φ) satisfies (2.2). Necessary and suffi- cient conditions for Λ to be an integrating factor are obtained by applying the Euler operator. Definition 2.3. [4] The Euler operator E is the linear differential operator defined by E = ∂ ∂Y + ∞∑ i=1 (−1)iDi ∂ ∂Yi . One then has the following result. Theorem 2.4. [3] Given a differentiable function g(x, Y, . . . , Yk), the fol- lowing statements are equivalent when either one holds: • Eg(x, Y, Y1 . . . , Yk) ≡ 0 • there exists a function h(x, Y, Y1 . . . , Yk−1) such that g(x, Y, Y1, . . . , Yk) = Dh(x, Y, Y1, . . . , Yk−1). Corollary 2.5. Λ(x, Y ) is an integrating factor of (2.1) if and only if E [Λ(x, Y ) {Yn − f(x, Y )}] ≡ 0. (2.6) Proof. Applying the Euler operator to both sides of (2.2) the result follows immediately from Theorem 2.4. 8 2.3. Theoretical Background Along with Y, Y1, . . . , Yn−1 the left-hand side of (2.6) also depends on Yn, . . . , Y2n−1; introduced by the repeated differentiation associated with E. Since Λ does not depend on Yn, . . . , Y2n−1, the nature of the Euler operator implies that the left-hand side of (2.6) must be a polynomial in Yn, . . . , Y2n−1. Let Λ denote Λ and all its partial derivatives with respect to (x, Y ). The left-hand side of (2.6) can then be re-written as E [Λ(x, Y ) {Yn − f(x, Y )}] = ∑ i1,i2,...,in ai1i2...in(x, Y ,Λ)Y i1 n Y i2 n+1 . . . Y in 2n−1, (2.7) where the ai1...in(x, Y ,Λ) are linear in Λ and its derivatives. Since each Yi is arbitrary, and (2.6) vanishes for all (Y, . . . , Y2n−1), it follows that each coefficient ai1...in(x, Y ,Λ) must be identically zero. The resulting system, called the IFDE, is given by ai1...in(x, Y ,Λ) = 0, for all i1, . . . , in. (2.8) The IFDE is a linear system of PDEs with dependent variable Λ and inde- pendent variables (x, Y ). Example 2.6. Consider the second-order ODE y′′ = 0. (2.9) After multiplying (2.9) by Λ = Λ(x, Y, Y1) and applying the Euler operator one obtains ΛY Y2 −D (ΛY1Y2) + D2Λ = 0. (2.10) Expanding (2.10) one finds that ΛY Y2 − [Y3ΛY1 + Y2 (ΛxY1 + Y1ΛY Y1 + Y2ΛY1Y1)] + [Λxx + 2ΛxY Y1+ + 2ΛxY1Y2 + 2ΛY Y1Y1Y2 + Y 2 2 ΛY1Y1 + Y 2 1 ΛY Y + Y3ΛY1 + Y2ΛY ] = 0. (2.11) 9 2.3. Theoretical Background The cancellation of terms in (2.11) produces ( Λxx + 2Y1ΛxY + Y 21 ΛY Y ) + Y2 (2ΛY + Y1ΛY Y1 + ΛxY1) = 0. (2.12) (2.12) is a polynomial in Y2, each coefficient of which must be identically zero. 
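This splitting can also be reproduced mechanically. The following minimal sketch (an added illustration, not part of the thesis) uses Python with SymPy to apply the Euler operator to Λ(x, Y, Y₁)Y₂, treating Y, Y₁, Y₂, … as independent jet variables; collecting the result in Y₂ recovers (2.12) and the coefficients derived next.

```python
# Reproduce the splitting of (2.12): apply the Euler operator to Lambda*Y2
# (the multiplier form of y'' = 0) and read off the coefficients of powers of Y2.
import sympy as sp

x, Y, Y1, Y2, Y3, Y4 = sp.symbols('x Y Y1 Y2 Y3 Y4')
jet = [Y, Y1, Y2, Y3, Y4]

def D(expr):
    """Total derivative: d/dx with each jet variable Y_i treated as a
    function of x whose derivative is Y_{i+1}."""
    return sp.diff(expr, x) + sum(sp.diff(expr, jet[i]) * jet[i + 1]
                                  for i in range(len(jet) - 1))

Lam = sp.Function('Lambda')(x, Y, Y1)   # unknown integrating factor Lambda(x, Y, Y1)
g = Lam * Y2                            # Lambda * (Y2 - f) with f = 0

# Euler operator: E g = g_Y - D(g_{Y1}) + D^2(g_{Y2}); higher terms vanish here.
Eg = sp.expand(sp.diff(g, Y) - D(sp.diff(g, Y1)) + D(D(sp.diff(g, Y2))))

print(sp.simplify(Eg.coeff(Y3, 1)))   # 0: the Y3 terms cancel, as in (2.11)
print(Eg.coeff(Y2, 0))                # coefficient of Y2^0: left-hand side of (2.13a)
print(sp.simplify(Eg.coeff(Y2, 1)))   # coefficient of Y2^1: left-hand side of (2.13b)
```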
Splitting (2.12) gives Λxx + 2Y1ΛxY + Y 21 ΛY Y = 0, (2.13a) 2ΛY + Y1ΛY Y1 + ΛxY1 = 0. (2.13b) To solve (2.13) let r = Y1x−Y and Λ(x, Y, Y1) = G(x, r, Y1). Then (2.13a) becomes ∂2G ∂x2 = 0. (2.14) It follows that G(x, r, Y1) = F1(r, Y1) + xF2(r, Y1). (2.15) The substitution of (2.15) into (2.13b) yields ∂F1(r, Y1) ∂r − ∂F2(r, Y1) ∂Y1 = 0. (2.16) From (2.16) it follows immediately that there exists a function H(r, Y1) such that F1 = ∂H∂Y1 and F2 = ∂H ∂r . Thus Λ(x, Y, Y1) = x ∂H(r, Y1) ∂r ∣∣∣∣ r=Y1x−Y + ∂H ∂Y1 ∣∣∣∣ r=Y1x−Y (2.17) It is then easily shown that (2.13) has the following general solution: Λ = ∂ ∂Y1 H(xY1 − Y, Y1), where H is any suitably differentiable function. 10 2.4. Motivation and Aims Dimension of the Space of Integrating Factors Finding a general solution to the IFDE can be difficult due to the size of its solution set, as demonstrated by the following result. Theorem 2.7. Let φ1, . . . , φk be functions of (x, Y ) yielding first integrals of (2.1). Then if h(φ1, . . . , φk) is any differentiable function, µ(x, Y ) = ∂φi ∂Yn−1 ∂h(φ1, . . . , φk) ∂φi is an integrating factor of (2.1). Proof. From (2.2), it is clear that ∂φi∂Yn−1 = Λi is an integrating factor of (2.1) for i = 1, . . . , k. Applying D to h and assuming summation over repeated indices gives dh dx = ∂h ∂φi dφi dx = ∂h ∂φi Λi [Yn − f(x, Y )] = µ(x, Y ) [Yn − f(x, Y )] . (2.18) It follows from (2.2) that µ(x, Y ) is an integrating factor with corresponding first integral h(φ1, . . . , φk) = c. Though the solutions to (2.1) and the corresponding integrating factor determining equations are of the same cardinality, the set of solutions to the latter is an infinite-dimensional space. In contrast, each solution of (2.1) can be represented by a point in Rn. 2.4 Motivation and Aims 2.4.1 Symmetries Map First Integrals into other First Integrals One important way of finding first integrals is by using symmetries of (2.1). In particular, as the next theorem shows, each contact symmetry of the 11 2.4. Motivation and Aims original ODE (2.1) induces a symmetry of both its first integrals and its integrating factors. Theorem [10] 2.8. Suppose (2.1) is invariant under a contact transforma- tion given by x∗ = x∗(x, y, y′), y∗ = y∗(x, y, y′), y∗1 = y ∗ 1(x, y, y ′). (2.19) In addition, let Λ(x, Y ) be an integrating factor of (2.1). If φ(x, y, y′, . . . , y(n)) = const is the corresponding first integral then φ̃(x, y, y′, . . . , y(n−1)) = φ ( x∗(x, y, y′), y∗0(x, y, y ′), . . . , y∗n−1(x, y) ) = const (2.20) is also a first integral, where Y ∗i = DY ∗i−1 Dx∗ for i = 2, . . . , n−1. The integrating factor corresponding to the first integral (2.20) is Λ̃ = [Dx∗]A(x, Y ) Λ(x∗(x, Y, Y1), Y ∗(x, Y )), (2.21) where A(x, Y ) is defined implicitly by Y ∗n − f(x∗, Y ∗) = A(x, Y )[Yn − f(x, Y )]. (2.22) Proof. If (2.19) represents a symmetry of (2.1), it must leave invariant the solution set of (2.1) [10]. It follows that there exists A(x, Y ) such that Y ∗n − f(x∗, Y ∗) = A(x, Y )[Yn − f(x, Y )]. (2.23) Let D∗ = ∂ ∂x∗ + Y ∗1 ∂ ∂Y ∗ + . . .+ Y ∗n ∂ ∂Y ∗n−1 , 12 2.4. Motivation and Aims denote the total derivative operator with respect to x∗. Since D and D∗ are total derivative operators they must satisfy the chain rule D∗ = [Dx∗]−1D. (2.24) By replacing (x, Y ) with (x∗, Y ∗) in (2.2) and using D∗ in place of D it is clear that Λ(x∗, Y ∗)[Y ∗n − f(x∗, Y ∗)] = D∗φ(x∗, Y ∗). (2.25) Applying (2.24) and (2.23) to (2.25) gives Λ(x∗, Y ∗)A(x, Y )[Yn − f(x, Y )] = [Dx∗]−1Dφ(x∗(x, Y ), Y ∗(x, Y )). 
(2.26) After multiplying (2.26) by Dx∗, the resulting expression is of the form (2.2) with first integral φ̃(x, y, y′, . . . , y(n−1)) = φ ( x∗(x, y, y′), y∗(x, y, y′, . . . , y(n−1)) ) = c (2.27) and corresponding integrating factor Λ̃ = [Dx∗]A(x, Y ) Λ(x∗(x, Y, Y1), Y ∗(x, Y )). (2.28) In the preceding theorem, no assumptions apart from differentiability are made about the nature of the symmetry (2.19). Thus, this theorem holds even when (2.19) is not a Lie symmetry of (2.1). Example 2.9. Consider the equation y′′ − y = 0. (2.29) In Example 2.2, it was shown that φ1(x, y, y′) = ex(y′ − y) = a is a first integral of (2.29) with correspond integrating factor ex. By inspection, it is 13 2.4. Motivation and Aims also clear that (2.29) has the discrete symmetry: x∗ = −x, y∗ = y. (2.30) Under transformation (2.30), φ1 becomes φ̃1(x, y, y′) = φ1(−x, Y,−Y1) = −e−x(Y + Y1), which yields the other first integral e−x(y′ + y) = b from Example 2.2. Similarly, since A(x, Y ) = 1 and Dx∗ = −1, (2.28) gives the integrating factor Λ̃(x, Y, Y1) = −e−x, which is equivalent to Λ2 in Example 2.2. Focusing on Lie contact symmetries of (2.1), the results of Theorem 2.8 can be extended, allowing one to obtain possibly more than one new first integral per symmetry. Theorem 2.10. [10] Suppose (2.1) admits the following Lie contact sym- metry with parameter  : x∗ = x∗(x, y, y′; ), Y ∗ = Y ∗(x, y, y′; ), Y ∗1 = Y ∗ 1 (x, y, y ′; ). (2.31) As in Theorem 2.8, let φ(x, y) = const be a first integral of (2.1) with corre- sponding integrating factor Λ(x, Y ). Furthermore, let the infinitesimal gen- erator of transformation (2.31) be denoted by X = ξ(x, y, y′) ∂ ∂x + η(x, y, y′) ∂ ∂y + η1(x, y, y′) ∂ ∂y′ , (2.32) and its prolongations by X(1),X(2), . . . ,X(n−1). It then follows that there ex- ists an infinite sequence of first integrals and integrating factors of (2.1) 14 2.4. Motivation and Aims given by φ̂i(x, y) = 1 i! [X(n−1)]iφ(x, y), Λ̂i(x, Y ) = ∑ m+p+l=i 1 m! p! l! D[(X(n−1))mx] [( ∂p ∂p ∂Y ∗n ∂Yn )∣∣∣∣ =0 ] [(X(n−1))lΛ(x, Y )]. (2.33) Each level set of φ̂i generates a first-integral when restricted to solutions of (2.1). For (2.1) there cannot be more than n independent first integrals and thus the above method will generate at most n− 1 new first integrals. Proof. Before proceeding further, it is necessary to derive an explicit expres- sion for A(x, Y ; ) which is defined implicitly in Theorem 2.8 by [Y ∗n − f(x∗, Y ∗; )] = A(x, Y ; )[Yn − f(x, Y )]. (2.34) Note that A(x, Y ; ) does not depend on Yn. In addition, for a contact sym- metry, Y ∗, . . . , Y ∗n−1 are independent of Yn, and Y ∗n is linear in Yn. This follows directly from the prolongation formulae [3]. Differentiating (2.34) with respect to Yn gives ∂Y ∗n (x, Y ; ) ∂Yn = A(x, Y ; ). (2.35) Expanding A(x, Y ; ) as a Taylor series in  yields A(x, Y ; ) = ∞∑ k=0 k k! ∂kA ∂k ∣∣∣∣ =0 . (2.36) By Theorem 2.8, if (2.1) admits the Lie contact symmetry (2.31) and has an integrating factor Λ(x, Y ) with a corresponding first integral φ(x, y, y′, . . . , y(n−1)) = const, 15 2.4. Motivation and Aims then the following is also an integrating factor of (2.1) for all values of  : Λ̃(x, Y ) = [Dx∗]A(x, Y ; ) Λ(x∗(x, Y ; ), Y ∗(x, Y ; )). (2.37) The first integral associated with the integrating factor (2.37) is φ̃(x, y, y′ . . . , y(n−1)) = φ ( x∗(x, y, y′; ), Y ∗(x, y, y′, . . . , y(n); ) ) = const. 
(2.38) Using a result from [3], if F (x, Y ) is any sufficiently smooth function then F (x∗, Y ∗) = eX (n−1) F (x, Y ), (2.39) where X is the infinitesimal generator associated with transformation (2.31) and X(n−1) is its (n− 1)th prolongation. Using (2.36), and (2.39) it follows that φ̃ = ∞∑ l=0 l l! X(n−1)φ, Λ̃ = D [ ∞∑ i=0 i i! Xix ] ∞∑ j=0 j j! ( ∂j ∂j ∂Y ∗n ∂Yn )∣∣∣∣ =0 ∞∑ k=0 k k! ( X(n−1) ) Λ(x, Y ) = ∞∑ i=0 i ∑ m+p+l=i 1 m! p! l! D[(X(n−1))mx] [( ∂p ∂p ∂Y ∗n ∂Yn )∣∣∣∣ =0 ] [(X(n−1))lΛ(x, Y )] (2.40) Equating powers of  on either side of the equation Λ̃[Yn − f(x, Y )] = Dφ̃ (2.41) yields Λ̃i[Yn − f(x, Y )] = Dφ̃, i = 0, . . . (2.42) where φi and Λi are given by (2.33). It follows from (2.42) that Λ̃i(x, Y ) and φ̃i(x, y, y′, . . . , y(n)) = c are integrating factors and first integrals, re- 16 2.4. Motivation and Aims spectively, of (2.1). 2.4.2 Symmetry Ansatzes and Invariant Solutions One does not always have a first integral or, equivalently, an integrating factor from which to generate others. Moreover, even if an integrating factor is known, it is not necessarily true that using symmetries as outlined in Theorems 2.8, and 2.10 will generate enough first integrals to reduce (2.1) the n times required to obtain a general solution. It is therefore necessary to have a method for finding integrating factors directly from the IFDE. It was proved in Section 2.4.1 that every contact symmetry of (2.1) gives rise to a transformation mapping each integrating factor into another, pos- sibly distinct, integrating factor. It follows that symmetries of (2.1) must induce symmetries of the corresponding IFDE. From Theorem 2.10, given a symmetry of (2.1) with infinitesimal generator X = ξ ∂ ∂x + η ∂ ∂Y + ηi ∂ ∂Yi , (2.43) the following formula provides the action of the induced symmetry on (x, Y ,Λ) [3]: X̂ = ξ ∂ ∂x + η ∂ ∂Y + ηi ∂ ∂Yi + ( ∂ηn−1 ∂Yn−1 ) Λ ∂ ∂Λ . (2.44) Solutions of the IFDE invariant under (2.44) are then sought using the method of differential invariants [3]. 2.4.3 Aims This work will extend the above results to higher-order symmetries of (2.1), potentially increasing the number of integrating factors which can be gen- erated by both of the methods outlined above. It will also demonstrate the completeness of the ODE symmetry method for finding symmetries of the corresponding IFDE by showing that all point symmetries of the IFDE project onto symmetries of (2.1). There is thus no need to generate and 17 2.4. Motivation and Aims solve the symmetry determining equations of the IFDE to ensure that all useful symmetries have been found. 18 Chapter 3 Results 3.1 Introduction In this chapter connections between the local symmetries of (2.1) and those of the IFDE of (2.1) are discussed. In particular, it is shown that all contact symmetries of (2.1) induce point symmetries of the corresponding IFDE and all higher-order symmetries of (2.1) induce higher-order symmetries of the IFDE. Conversely, it is also proved that every point symmetry of an IFDE which is of a certain form projects onto a local symmetry of (2.1). In Sections 3.2 and 3.3 the relationship between the symmetries of an ODE and its corresponding IFDE is proved for first and second order ODEs by directly comparing the symmetry determining equations of the IFDE with those of (2.1). Finally, in Section 3.4 the result is proved for a general ODE using first integrals as an intermediary between (2.1) and its corresponding IFDE. 3.2 First-Order ODEs Consider the first-order ODE y′ = f(x, y). 
(3.1) The following theorem shows that every symmetry of (3.1) must induce a symmetry of the corresponding IFDE and, conversely, every symmetry of the IFDE must project onto a symmetry of (3.1). 19 3.2. First-Order ODEs Theorem 3.1. Every symmetry of the integrating factor determining equa- tions corresponding to (3.1) of the form X = ξ(x, Y ) ∂ ∂x + η(x, Y ) ∂ ∂Y + Λh(x, Y ) ∂ ∂Λ projects onto a point symmetry of (3.1). Moreover, every point symmetry of (3.1) induces a point symmetry of its IFDE. Proof. It follows from the direct method that the IFDE of (3.1) is Λx(x, Y ) + f(x, Y )ΛY (x, Y ) + Λ(x, Y )fY (x, Y ) = 0. (3.2) Consider the one-parameter Lie group of transformations of (x, Y,Λ) with infinitesimal generator X = ξ(x, Y ) ∂ ∂x + η(x, Y ) ∂ ∂Y + Λh(x, Y ) ∂ ∂Λ . (3.3) The transformation (3.3) is a symmetry of (3.2) if and only if [4] ξxf − ηx + ξfx + f2ξY + ηfY − fηY = 0, (3.4a) f hY + ξfxY + fξY fY + ηfY Y + hx + ξxfY = 0. (3.4b) Consider the projection of X onto a vector field over (x, Y ), Xp = ξ(x, Y ) ∂ ∂x + η(x, Y ) ∂ ∂Y . (3.5) This corresponds to a symmetry of (3.1) if and only if [3] −ξfx − ηfY + ηx + ηY f − fξx − ξY f2 = 0. (3.6) Comparing (3.6) with (3.4a) it follows that if X has coefficients satisfying (3.4) then Xp has coefficients satisfying (3.6). To prove the converse it is necessary to show that if Xp satisfies (3.6) then there exists an h(x, Y ) such that X satisfies (3.4). Since (3.4a) is 20 3.3. Second-Order ODEs identical to (3.6) it is automatically satisfied. The remaining equation (3.4b) is equivalent to f hY + hx = −(ξfxY + ξxfY + ηfY Y + fξY fY ), (3.7) where f, ξ, η are known. By differentiating (3.4a) with respect to Y and using (3.7), one finds that fηY Y + ηxY = (ξfx)Y + ηfY Y + (fξx)Y + (ξY f2)Y = ξfxY + ξxfY + ηfY Y + ξY ffY + ξY fx + ξxY f + fY fξY + ξY Y f2 = ξfxY + ξxfY + ηfY Y + ξY ffY + (ξY f)x + f(ξY f)Y = −(hx + fhY ) + (ξY f)x + f(ξY f)Y . (3.8) Finally, it follows from (3.8) that if Xp is a symmetry of (3.1) then there exists an h such that X is a symmetry of (3.4), where h = −(ηY − fξY ). Thus every point symmetry of (3.1) induces a symmetry of (3.2), and the proof is complete. 3.3 Second-Order ODEs The analysis of the second-order ODE y′′ − f(x, y, y′) = 0, (3.9) follows a method similar to that used for first-order ODEs. One can system- atically find the symmetry determining equations of (3.9) and its correspond- ing IFDE. By direct comparison one can show that all contact symmetries of (3.9) induce point symmetries of the IFDE and that all point symmetries of the IFDE project onto contact symmetries of (3.9). These results are presented in Section 3.3.1. 21 3.3. Second-Order ODEs For ODEs which are second-order and higher there is another approach; instead of using the original ODE one generates an equivalent first-order ODE system by treating each derivative of y as an independent variable and augmenting contact conditions through the replacement of y by y, . . . , yn−1. The equivalent first-order ODE system of the scalar ODE (2.1) is y′i = yi+1, i = 0, . . . , n− 2, y′n−1 = f(x, y, . . . , yn−1). (3.10) where y0 = y. The IFDE of (3.10) can then be found and the symmetries analyzed. The results obtained by this method are shown in Section 3.3.2 and are compared with those of the scalar ODE approach in Section 3.3.3. 3.3.1 Scalar Integrating Factor Equations Before introducing the main theorem of this section, the following two results are needed. Lemma 3.2. 
A one-parameter Lie group of transformations is a contact symmetry of (3.9) if and only if its corresponding infinitesimal generator X = ξ(x, Y, Y1) ∂ ∂x + η(x, Y, Y1) ∂ ∂Y + η(1)(x, Y, Y1) ∂ ∂Y1 (3.11) satisfies the following conditions: 1. η(1) = Dη − Y1Dξ (3.12) 2. Let η(2) = Dη(1) − Y2Dξ. Then( η(2) − ξfx − ηfY − η(1)fY1 )∣∣∣ Y2=f(x,Y,Y1) = 0 (3.13) 3. ηY1 = Y1ξY1 1. (3.14) 1More generally, if ξ = ξ(x, Y, Y1, . . . , Yk) then invertibility of the corresponding trans- formation on (x, Y, Y1, . . . , Yk) requires that η (k−1) Yk = YkξYk . 22 3.3. Second-Order ODEs Proof. The proof follows immediately from classical symmetry results. To- gether, conditions 1 and 2 are necessary and sufficient for (3.11) to be a local symmetry of (3.9): condition 1 is the contact condition required to preserve the geometric meaning of derivatives while condition 2 is the infinitesimal criterion [3] for (3.9) to be invariant under (3.11). Condition 3 is required to ensure that (3.11) is a contact symmetry and thus invertible on (x, Y, Y1). This is true if and only if η(1)Y2 ≡ 0. A straightforward calculation shows that this is equivalent to ηY1 − Y1ξY1 ≡ 0. Lemma 3.3. The infinitesimal generator XΛ = ξ(x, Y, Y1) ∂ ∂x + η(x, Y, Y1) ∂ ∂Y + η1(x, Y, Y1) ∂ ∂Y1 + Λh(x, Y, Y1) ∂ ∂Λ (3.15) is a point symmetry of the IFDE of (3.9) if and only if ξ, η, η1, and h satisfy the system ηY1 = Y1ξY1 , (3.16a) ξx = −2ξY Y1 − η1Y1 + ηY , (3.16b) ηx = −Y 21 ξY − η1Y1Y1 + η1, (3.16c) fxξY1 = −f2ηY1 + η1x + Y 21 η1Y + 2fη1Y1Y1+ (3.16d) + Y 21 ξY f − η1fY1Y1 − ηfY Y1 − ηY Y1f, hxξY 2 1 = −f2η2Y1 + Y 21 η1Y ηY1 − ξY 21 η1xY1 + η1xY1ηY1+ (3.16e) + Y 21 ξY fηY1 − η1fY1Y1ηY1 − ηfY Y1ηY1 − ηY Y1fηY1+ + 2fη1Y1Y1ηY1 − ξY 21 fη1Y1Y1 − 2ξY 21 ξY f − ξY 21 ηY Y1f, hY Y1 = −Y1η1Y Y1 + fY ηY1 + fηY Y1 , (3.16f) hY1Y 2 1 = fηY1Y1Y1 − Y 21 η1Y1Y1 − fηY1 + Y1ηY1fY1 . (3.16g) Note that no relationship between ξ, η, and η1 is assumed when formulating (3.16). Proof. The IFDE of (3.9), generated systematically using the direct method, 23 3.3. Second-Order ODEs are ΛY Y1Y1 + 2ΛY + ΛxY1 + (Λf)Y1Y1 = 0, (3.17a) − (fΛ)Y + (Λf)xY1 + (Λf)Y Y1Y1 + Λxx + ΛY Y Y 21 + 2Y1ΛxY = 0. (3.17b) Note that the IFDE (3.17) is now an over-determined system. When con- sidering such a system the given equations do not necessarily represent the simplest differential relations describing solutions of (3.17). Using methods of differential algebra implemented in MapleTM it is possible to write (3.17) in a form which eliminates this issue. Making use of the GeM [12] software package for Maple, one can then obtain the symmetry determining equations for point symmetries (3.15) of (3.17), given by (3.16). One then has the following theorem. Theorem 3.4. Every point symmetry of (3.17) which is of the form X = ξ(x, Y, Y1) ∂ ∂x + η(x, Y, Y1) ∂ ∂Y + η1(x, Y, Y1) ∂ ∂Y1 + Λh(x, Y, Y1) ∂ ∂Λ (3.18) projects onto a contact symmetry of (3.9). Conversely, every contact sym- metry of (3.9) induces a point symmetry of (3.17). Proof. First, suppose that X = ξ(x, Y, Y1) ∂ ∂x + η(x, Y, Y1) ∂ ∂Y + η1(x, Y, Y1) ∂ ∂Y1 + Λh(x, Y, Y1) ∂ ∂Λ (3.19) is a symmetry of the IFDE (3.17) where no relation is assumed to exist between η and η1. From Lemma 3.3 it follows that ξ, η, η1, and h satisfy (3.16). 24 3.3. Second-Order ODEs Solving for η1Y1 in (3.16b) and substituting it into (3.16c), one finds that η1 = ηx + Y1ηY − Y1ξx − Y 21 ξY . (3.20) Together (3.16a) and (3.20) yield η1 = Dη − Y1Dξ. 
(3.21) The substitution of (3.16a) and (3.16b) into (3.16d) gives η1x + η 1 Y Y1 + η 1 Y1f − f(ξx + Y1ξY + fξY1) = fxξ + fY η + fY1η1 (3.22) which is equivalent to ( Dη1 − Y2Dξ − (fxξ + fY η + fY1η1) )∣∣ Y2=f(x,Y,Y1) = 0. (3.23) It follows from (3.16a), (3.23), (3.21) and Lemma 3.2 that the infinitesimal generator Xp = ξ(x, Y, Y1) ∂ ∂x + η(x, Y, Y1) ∂ ∂Y + η1(x, Y, Y1) ∂ ∂Y1 obtained by projecting X onto a transformation of (x, Y, Y1) is a contact symmetry of (3.9). To prove the converse one must show that given a contact symmetry of (3.9) with first prolongation X(1)p there exists some function h(x, Y, Y1) such that X = X(1)p + Λh(x, Y, Y1) ∂ ∂Λ (3.24) is a symmetry of (3.17). From the above calculations, given a symmetry Xp of (3.9), ξ, η, and η1 clearly satisfy(3.16a)-(3.16d). One must then show that there exists a function h(x, Y, Y1) such that ξ, η, η1 and h satisfy the remaining equations 25 3.3. Second-Order ODEs (3.16e)-(3.16g). Integrating (3.16f) and (3.16g) one obtains h(x, Y, Y1) = −η1Y1 + (fξY1) + p(x, Y1), h(x, Y, Y1) = −η1Y1 + (fξY1) + q(x, Y ), (3.25) respectively, which implies that h(x, Y, Y1) = −η1Y1 + (fξY1) + g(x), (3.26) where g(x) is an arbitrary function of x. After substituting (3.16a)-(3.16d) into (3.16e) one finds that hx = −η1xY1 + fxξY1 + fξxY1 . (3.27) The substitution of (3.26) into (3.27) implies that g′(x) = 0 and thus that h = −η1Y1 + fξY1 + C, (3.28) the infinitesimal generator X given by (3.24) yields a point symmetry of the IFDE (3.17). 3.3.2 System Integrating Factor Equations Consider the first-order ODE system y′ = y1, y′1 = f(x, y, y1) (3.29) which is equivalent to (3.9). Letting D̃ = ∂ ∂x + Y1 ∂ ∂Y + f(x, Y, Y1) ∂ ∂Y1 (3.30) denote the total derivative operator D restricted to solutions of (3.9), the following result can be proved. Lemma 3.5. A Lie group of point transformations acting on (x, Y, Y1) with 26 3.3. Second-Order ODEs infinitesimal generator X = ξ(x, Y, Y1) ∂ ∂x + η(x, Y, Y1) ∂ ∂Y + η1(x, Y, Y1) ∂ ∂Y1 (3.31) is a point symmetry of the first-order ODE system (3.29) if and only if ξ, η, and η1 satisfy the following conditions: 1. D̃η − Y1D̃ξ = η1 2. D̃η1 − fD̃ξ = fxξ + fY η + fY1η1. where D̃ = ∂∂x + Y1 ∂ ∂Y + f(x, Y, Y1) ∂ ∂Y1 is the total derivative operator re- stricted to solutions of (3.29). Proof. Denote the first prolongation of (3.31) by X(1) = X+ η(1) ∂ ∂Y 1 + η(1)1 ∂ ∂Y 11 (3.32) where Y 1, Y 11 are arbitrary functions corresponding to y ′ and y′1, respectively. Using the prolongation formula [4], one finds that η(1) = ηx + Y 1ηY + Y 11 ηY1 − Y 1(ξx + Y 1ξY + Y 11 ξY1), η (1) 1 = η 1 x + Y 1η1Y + Y 1 1 η 1 Y1 − Y 11 (ξx + Y 1ξY + Y 11 ξY1). (3.33) The infinitesimal criteria for (3.31) to be a point symmetry of (3.29) are( η(1) − η1 )∣∣∣ Y1=Y 1, Y 11 =f(x,Y,Y1) = 0( η (1) 1 − fxξ − fY η − fY1η1 )∣∣∣ Y1=Y 1, Y 11 =f(x,Y,Y1) = 0. (3.34) Substituting (3.33) into (3.34) and replacing (Y 1, Y 11 ) with (Y1, f(x, Y, Y1)) one obtains η1 = D̃η − Y1D̃ξ, D̃η1 − fD̃ξ = fxξ + fY η + fY1η1 (3.35) as required. 27 3.3. Second-Order ODEs In Lemma 3.5 notice that the closure condition (3.14) is absent. This is explained by the following result. Lemma 3.6. Every higher-order symmetry of (3.9) corresponds to a point symmetry of (3.29). Proof. Every higher-order symmetry of (3.9) in evolutionary form is equiv- alent to a symmetry with an infinitesimal generator of the form X̂ = η̃(x, Y, Y1) ∂ ∂Y . (3.36) To illustrate this, suppose that η = η(x, Y, Y1, . . .). On solutions of (3.9), η(x, Y, Y1, . . . , Yk) = η(x, Y, Y1, f, D̃f, . . .) 
which is a function only of x, Y, Y1. Thus, there exists a function η̃ such that the action of X = η(x, Y, Y1, . . .) ∂ ∂Y on solutions of (3.9) is equivalent to that of (3.36). Each symmetry (3.36) then corresponds to a point symmetry of (3.29) given by X = η̃(x, Y, Y1) ∂ ∂Y + D̃η̃ ∂ ∂Y1 . For ODE systems such as (3.29) a modification of the direct method out- lined in Section 2.3.1 is required. For a system, the single integrating factor Λ is replaced by a set of multipliers Λi. In particular, Λ1(x, Y, Y1)[Y 1 − Y1] + Λ2(x, Y, Y1)[Y 11 − f(x, Y, Y1)] = Dφ(x, Y, Y1) (3.37) 28 3.3. Second-Order ODEs is used in place of (2.1), where D = ∂ ∂x + Y 1 ∂ ∂Y + Y 11 ∂ ∂Y1 + . . . Defining EY = ∂ ∂Y −D ∂ ∂Y 1 and EY1 = ∂ ∂Y1 −D ∂ ∂Y 11 , (3.38) it can be shown that Λ1,Λ2 are multipliers of (3.29) if and only if [4] EY {Λ1(x, Y, Y1)[Y 1 − Y1] + Λ2(x, Y, Y1)[Y 11 − f(x, Y, Y1)]} ≡ 0, EY1{Λ1(x, Y, Y1)[Y 1 − Y1] + Λ2(x, Y, Y1)[Y 11 − f(x, Y, Y1)]} ≡ 0. (3.39) Expanding (3.39) one obtains (Λ1)Y [Y 1 − Y1] + (Λ2)Y [Y 11 − f(x, Y, Y1)]−DΛ1 − fY Λ2 = 0, (Λ1)Y1 [Y 1 − Y1] + (Λ2)Y1 [Y 11 − f(x, Y, Y1)]− Λ1 −DΛ2 − fY1Λ2 = 0. (3.40) Since Λ1 and Λ2 are independent of Y 1 and Y 11 , (3.40) splits to give (Λ2)Y − (Λ1)Y1 = 0, − Y1(Λ1)Y − f(Λ2)Y − (Λ1)x − fY Λ2 = 0, − Y1(Λ1)Y1 − f(Λ2)Y1 − Λ1 − (Λ2)x − fY1Λ2 = 0, (3.41) which are the multiplier determining equations of (3.29). Lemma 3.7. The infinitesimal generator XΛ =ξ(x, Y, Y1) ∂ ∂x + η(x, Y, Y1) ∂ ∂Y + η1 ∂ ∂Y1 + [Λ1h1(x, Y, Y1)+ + Λ2g1(x, Y, Y1)] ∂ ∂Λ1 + [Λ1h2(x, Y, Y1) + Λ2g2(x, Y, Y1)] ∂ ∂Λ2 (3.42) is a point symmetry of the multiplier determining equations of (3.29) if and 29 3.3. Second-Order ODEs only if ξ, η, η1, h1, h2, g1 and g2 satisfy the system ηx = ξxY1 − fηY1 + Y1ξY1f + (Y1)2ξY − Y1ηY + η1, (3.43a) ξfx = Y1(η1)Y − Y1ξY f + f(η1)Y1 − η1fY1 − ξxf − f2ξY1+ (3.43b) + (η1)x − ηfY , h2 = −ηY1 + ξY1Y1, (3.43c) g2 = h1 + ηY − (η1)Y1 + ξY1f − ξY Y1, (3.43d) g1 = −(η1)Y + ξY f, (3.43e) (h1)Y1 = −ηY Y1 + ξY Y1Y1 + ξY , (3.43f) (h1)Y = −ηY Y + ξY Y Y1, (3.43g) (h1)x = −(η1)Y − Y1ξY1fY + fηY Y1 − Y1ξY Y1f − (Y1)2ξY Y + (3.43h) + Y1ηY Y + fY ηY1 . Proof. The multiplier determining equations of (3.29), generated systemat- ically using the direct method are (Λ2)Y − (Λ1)Y1 = 0, (3.44a) (Λ1)Y Y1 + (Λ1)x + (fΛ2)Y = 0, (3.44b) (Λ1)Y1Y1 + Λ1 + (Λ2)x + (fΛ2)Y1 = 0. (3.44c) The GeM [12] software package for MapleTM was used to find the symme- try determining equations for point symmetries (3.42) of the system multi- plier determining equations (3.44), producing (3.43). The following theorem connects the results of Lemmas 3.5 and 3.7. 30 3.3. Second-Order ODEs Theorem 3.8. Every point symmetry of (3.44) which is of the form X =ξ(x, Y, Y1) ∂ ∂x + η(x, Y, Y1) ∂ ∂Y + η1(x, Y, Y1) ∂ ∂Y1 + [Λ1h1(x, Y, Y1)+ + Λ2g1(x, Y, Y1)] ∂ ∂Λ1 + [Λ1h2(x, Y, Y1) + Λ2g2(x, Y, Y1)] ∂ ∂Λ2 (3.45) projects onto a point symmetry of (3.29). Conversely, every point symmetry of (3.29) induces a point symmetry of (3.44). Proof. Let X be a point symmetry of (3.44) which is of the form (3.45). From Lemma 3.7 it follows that ξ, η, η1, h1, h2, g1, and g2 satisfy (3.43). Solving for η1 in (3.43a), one obtains η1 = ηx + ηY Y1 + ηY1f − Y1(ξx + Y1ξY + fξY1) (3.46) which can be written as η1 = D̃η − D̃ξ. (3.47) Rearranging (3.43b), one obtains (η1)x+ (η1)Y Y1 + (η1)Y1f −f(ξx+Y1ξY +fξY1) = ξfx+ηfY +η1fY1 (3.48) which is equivalent to D̃η − D̃ξ = ξfx + ηfY + η1fY1 . 
(3.49) It follows immediately from (3.47), (3.49) and Lemma 3.5 that the in- finitesimal generator Xp, found by projecting X, given by (3.45) onto a transformation of (x, Y, Y1), is a point symmetry of (3.29). To establish the converse result, let the infinitesimal generator Xp = ξ(x, Y, Y1) ∂ ∂x + η(x, Y, Y1) ∂ ∂Y + η1(x, Y, Y1) ∂ ∂Y1 (3.50) 31 3.3. Second-Order ODEs be a point symmetry of (3.29). Then it must be shown that there exist functions h1, h2, g1, g2 such that X = Xp + [h1Λ1 + g1Λ2] ∂ ∂Λ1 + [h2Λ1 + g2Λ2] ∂ ∂Λ2 (3.51) is a point symmetry of (3.44). Clearly, if (3.50) is a point symmetry of (3.29) then ξ, η, η1 satisfy (3.43a) and (3.43b). The substitution of (3.43a) into (3.43h), after some simplifica- tion, yields (h1)x = −ηxY + Y1ξxY . (3.52) Integrating (3.43f), (3.43g) and (3.52) one obtains h1 = −ηY + ξY Y1 + C. (3.53) From (3.43c)-(3.43e) and (3.53), the functions h2, g1 and g2 can be obtained, giving h1 = −ηY + ξY Y1 + C, h2 = −ηY1 + ξY1Y1, g1 = −(η1)Y + ξY f, g2 = (−η1)Y + ξY1f + C. (3.54) If h1, h2, g1 and g2 are of the form (3.54) then it follows that X, given by (3.50) is a point symmetry of (3.44) and the proof is complete. 3.3.3 Comparison of the Two Approaches In order to compare the results of Section 3.3.1 and Section 3.3.2, it is useful to establish a connection between solutions of the IFDE (3.44) for the ODE system (3.29) and solutions of the IFDE (3.17) for the corresponding scalar ODE (3.9). This is given by the following theorem. 32 3.3. Second-Order ODEs Theorem 3.9. Every solution of the IFDE (3.44) for the ODE system (3.29) yields a unique solution of the IFDE (3.17) for the corresponding scalar ODE (3.9). Conversely, every solution of the IFDE (3.17) for the scalar ODE (3.9) yields a unique solution of the IFDE (3.44) for the ODE system (3.29). Proof. Suppose (Λ1,Λ2) solves the IFDE (3.44). Solving (3.44) for Λ1, one obtains Λ1 = − [(Λ2)x + (Λ2)Y Y1 + (Λ2f)Y1 ] . (3.55) The substitution of (3.55) into (3.44b) yields −(fΛ2)Y +(fΛ2)xY1 +(fΛ2)Y Y1Y1 +(Λ2)xx+(Λ2)Y Y (Y1)2 +2Y1(Λ2)xY = 0. (3.56) Now differentiating (3.55) with respect to Y1 and using (3.44a) to eliminate (Λ1)Y1 one finds that (Λ2)Y Y1Y1 + 2(Λ2)Y + (Λ2)xY1 + (Λ2f)Y1Y1 = 0. (3.57) Together (3.55) and (3.57) are equivalent to (3.17) and thus Λ = Λ2 is a solution of the IFDE (3.17). Conversely, suppose Λ is a solution of the IFDE (3.17). Let µ = −[Λx + ΛY Y1 + (Λf)Y1 ]. (3.58) Differentiating (3.58) with respect to Y1, one obtains µY1 = −ΛxY1 − ΛY − Y1ΛY Y1 − (Λf)Y1Y1 . (3.59) Subtracting ΛY from both sides of (3.17a) and substituting the resulting expression into (3.59) one finds that µY1 = ΛY . (3.60) 33 3.3. Second-Order ODEs From (3.58) and (3.60) it follows immediately that µ+ Λx + µY1Y1 + (Λf)Y1 = 0. (3.61) Finally, (3.17b) may be re-written as [Λx + Y1ΛY + (Λf)Y1 ]x + Y1[Λx + ΛY Y1 + (Λf)Y1 ]Y − (fΛ)Y = 0. (3.62) The substitution of (3.58) into (3.62) yields µY Y1 + µx + (Λf)Y = 0. (3.63) Together (3.63), (3.61) and (3.60) imply that (µ,Λ) is a solution (3.44) where (Λ1,Λ2) = (µ,Λ). By construction, µ is clearly unique, completing the proof. Theorem 3.9 implies that every local symmetry of the second-order scalar IFDE (3.17) corresponds to a local symmetry of the corresponding first-order system IFDE (3.44). In particular, if X = ξ ∂ ∂x + η ∂ ∂Y + η1 ∂ ∂Y1 + [Λ1h1 + Λ2h1] ∂ ∂Λ1 + [h2Λ1 + g2Λ2] ∂ ∂Λ2 (3.64) is a point symmetry of (3.44) then X′ = ξ ∂ ∂x + η ∂ ∂Y + η1 ∂ ∂Y1 + [h2(Λx + ΛY Y1 + (Λf)Y1) + g2Λ] ∂ ∂Λ (3.65) is a symmetry of (3.17) obtained by substituting (3.58) into (3.64). 
The symmetry (3.65) is a point symmetry of (3.17) only if the coefficient of ∂∂Λ is independent of the derivatives of Λ which implies that h2 ≡ 0. This is true if and only if ηY1 = Y1ξY1 in which case (3.65) is identical to the form found in Theorem 3.4. Thus, the scalar method is a special case of the system method obtained by requiring that h2 ≡ 0. 34 3.4. General ODEs 3.4 General ODEs 3.4.1 Overview To generalize the results of the previous sections to ODEs of arbitrary order a new method is necessary due to the difficulty associated with analyzing the corresponding IFDE directly. Instead of the direct approach employed in previous sections, one considers transformations which leave invariant the set of first integrals of (2.1) (first integral symmetries); using them as an intermediary between (2.1) and its corresponding IFDE. In Section 3.4.2 necessary and sufficient conditions are formulated for a function to be a first integral of (2.1), yielding a first-order PDE called the first integral determining equation. The connection between the point symmetries of the first integral determining equation and the higher-order symmetries of (2.1) is proved in Section 3.4.3. In Section 3.4.4, the multiplier determining equations of the first-order ODE system (3.10) are derived from the first integral determining equation; and in Section 3.4.5 the connection between the point symmetries of the multiplier determining equations which are of the form ξ(x, Y ) ∂ ∂x + η(x, Y ) ∂ ∂Y + n−1∑ i=1 ηi(x, Y ) ∂ ∂Yi + n−1∑ i,j=0 αji (x, Y )µ i ∂ ∂µj (3.66) and the point symmetries of the first integral determining equation of (2.1) is established. Finally, in Section 3.4.6 the symmetries of the multiplier determining equations are connected with those of the IFDE for the scalar ODE (2.1). 3.4.2 First Integral Determining Equation By definition, a first integral of (2.1) is a function φ(x, Y, Y1, . . . , Yn−1) such that Dφ(x, Y, Y1, . . . , Yn−1)|Yn=f(x,Y,Y1,...,Yn−1) = 0, (3.67) 35 3.4. General ODEs where D = ∂ ∂x + Y1 ∂ ∂Y + Y2 ∂ ∂Y1 + . . .+ Yn ∂ ∂Yn−1 + . . . (3.68) The restriction of the total derivative operator to solutions of (2.1) yields the new operator D̃ = ∂ ∂x + Y1 ∂ ∂Y + Y2 ∂ ∂Y1 + . . .+ Yn−1 ∂ ∂Yn−2 + f(x, Y, Y1, . . . , Yn−1) ∂ ∂Yn−1 , (3.69) which enables one to re-write (3.67) as D̃φ = 0. (3.70) Associated with (3.70) are the characteristic equations [3] dx 1 = dY Y1 = dY1 Y2 = . . . = dYn−2 Yn−1 = dYn−1 f = dφ 0 . (3.71) Example 3.10. Consider the equation y′′ = 0. (3.72) From the general form of the first integral determining equation given in (3.70), the first integral determining equation of the ODE (3.72) is φx + Y1φY + 0 · φY1 = 0. (3.73) Note that in this case one can easily solve (3.73) using the method of char- acteristics to obtain φ(x, Y, Y1) = g(Y1, xY1 − Y ) (3.74) where g is an arbitrary function of its arguments. Let s(x, Y, Y1) = Y1 and z(x, Y, Y1) = x− Y Y1. Differentiating φ(x, Y, Y1), it follows that Dφ(x, Y, Y1) = ∂g(s, z) ∂s (Y2) + ∂g(s, z) ∂z (xY2 + Y1 − Y1) (3.75) 36 3.4. General ODEs and hence φ(x, Y, Y1) is a first integral of (3.72) with corresponding integrat- ing factor µ = ∂g∂s + x ∂g ∂z . Moreover, since (3.72) is a second-order ODE and φ is an arbitrary function of two variables s(x, Y, Y1) and z(x, Y, Y1) which are functionally-independent, it follows that all first integrals of the ODE (3.73) must satisfy (3.73). Example 3.11. As a second example, consider the Blasius equation y′′′ + 1 2 yy′′ = 0. 
(3.76) From the first integral determining equation (3.70) for an nth order ODE in solved form, the first integral determining equation of the ODE (3.76) is φx + Y1φY + Y2φY1 − 1 2 Y Y2 = 0. (3.77) Note that unlike in the previous example one cannot easily solve this first integral determining equation since doing so would be equivalent to finding the general solution of the Blasius equation. 3.4.3 First Integral Symmetries Defining Y = (Y, Y1, . . . , Yn−1), one can prove the following theorem. Theorem 3.12. Consider a one-parameter () Lie group of point transfor- mations x∗ = x∗(x, Y ; ), Y ∗ = Y ∗(x, Y ; ), φ∗ = φ∗(x, Y , φ; ). (3.78) The point transformation (3.78) is a point symmetry of (3.70) if and only if it projects onto a higher-order symmetry of (2.1). Furthermore, each higher- order symmetry of (2.1) induces a point symmetry of (3.70). Proof. A transformation (3.78) when extended to act on the space of first partial derivatives of φ is a symmetry of (3.70) if and only if it leaves invariant 37 3.4. General ODEs the characteristic equations (3.71). In particular, this implies that dx∗ 1 = dY ∗ Y ∗1 = dY ∗1 Y ∗2 = . . . = dY ∗n−2 Y ∗n−1 = dY ∗n−1 f(x∗, Y ∗) , (3.79) which are necessary and sufficient conditions for the projection of (3.78) onto a transformation of (x, Y, Y1, . . . , Yn−1)-space to be a symmetry of (2.1). To prove the second part of the theorem, let Xp = ξ(x, Y ) ∂ ∂x + η(x, Y ) ∂ ∂Y + ηi(x, Y ) ∂ ∂Yi (3.80) be the infinitesimal generator of a local symmetry of (2.1), assuming sum- mation over repeated indices. One must show that there exists a function β(x, Y , φ) such that X = ξ(x, Y ) ∂ ∂x + η(x, Y ) ∂ ∂Y + ηi(x, Y ) ∂ ∂Yi + β(x, Y , φ) ∂ ∂φ (3.81) is a point symmetry of (3.70). Let Y−1 = x, Y0 = Y and ~Y = (Y−1, Y0, . . . , Yn−1). In addition, let Di = ∂ ∂Yi + φYi ∂ ∂φ + . . . (3.82) be the total derivative operator with respect to Yi, i = −1, . . . , n − 1. Ex- tending the generator (3.81) via the contact conditions to act on partial derivatives of φ, one obtains the first prolongation of (3.81) given by X(1) = X+ β(1)i (x, Y , φ, φx, φY , φY1 , . . . , φYn−1) ∂ ∂φYi (3.83) where β (1) i = Diβ −Di(ηj)φYj . (3.84) 38 3.4. General ODEs The infinitesimal criterion for (3.81) to be a point symmetry of (3.70) is n−2∑ 0 ηi+1φYi + n−1∑ −1 fYiη iφYn−1 + n−2∑ 0 Yi+1β (1) i + fβ (1) n−1 + β (1) −1 = 0. (3.85) Defining (w−1, . . . , wn−1) = w = D̃~Y = (1, Y1, . . . , Yn−2, f(x, Y )), (3.86) one can re-write (3.85) as ηjDj(wi)φYi + w iβ (1) i = 0. (3.87) The substitution of (3.84) into (3.87) yields [ηjDj(wi)− wjDj(ηi)]φYi + wiDiβ = 0. (3.88) Using the fact that Diβ = ∂β∂Yi + ∂β ∂φφYi , it follows that [ηiDi(wj)− wiDi(ηj)]φYj + wi ∂β ∂Yi + ∂β ∂φ wiφYi = 0. (3.89) The last term of (3.89) vanishes on solutions of (3.70) since wiφYi = D̃(Yi)φYi = φx + Y1φY + Y2φY1 + . . .+ Yn−1φYn−2 + fφYn−1 = D̃φ. (3.90) Using (3.70) one can write φx = − ∑n−1 0 φYj D̃Yj = − ∑n−1 0 φYjw j which, upon substitution into (3.90), gives n−1∑ i=−1  n−1∑ j=0 [ ηiDi(wj)− wiDi(ηj) + wjwiDi(ξ) ] φYj + w i ∂β ∂Yi  = 0. (3.91) Since ξ, η, η1, . . . , ηn−1, β, and w do not depend on the partial derivatives of φ, each coefficient of φYj must vanish independently of the others allowing 39 3.4. General ODEs one to split (3.91) to obtain D̃β(x, Y , φ) = 0. (3.92a) ηiDi(wj)− wiDi(ηj) + wjwiDi(ξ) = 0, j = 0, . . . , n− 1, i = −1 . . . n− 1. (3.92b) One can re-write (3.92b) as Xp(D̃Yj) = D̃(ηj)− D̃(Yj)D̃(ξ), j = 0, . . . , n− 1 (3.93) which is true if and only if Xp is a higher-order symmetry of (2.1). 
Then, choosing β ≡ 0 it is clear that the coefficients of X satisfy (3.92a) and (3.92b) and hence that X is a symmetry of (3.70). Example 3.13. Consider the equation y′′ = 0. (3.94) The transformation of (x, Y, Y1)-space Xp = ξ(x, Y, Y1) ∂ ∂x + η(x, Y, Y1) ∂ ∂Y + η1(x, Y, Y1) ∂ ∂Y1 (3.95) is a symmetry of (3.94) if and only if ξ, η, and η1 satisfy the symmetry determining equations η1 = D̃η − Y1D̃ξ, (3.96a) D̃η1 − 0 · D̃ξ = 0 (3.96b) where D̃ = ∂∂x + Y1 ∂ ∂Y + 0 · ∂∂Y1 is the total derivative operator restricted to solutions of the ODE (3.94). From Example 3.10, the corresponding first integral determining equation is φx + Y1φY + 0 · φY1 = 0. (3.97) 40 3.4. General ODEs Using MapleTM one can show that the point transformation of (x, Y, Y1, φ)- space with infinitesimal operator X = ξ(x, Y, Y1) ∂ ∂x + η(x, Y, Y1) ∂ ∂Y + η1(x, Y, Y1) ∂ ∂Y1 + β(x, Y, Y1, φ) ∂ ∂φ (3.98) is a point symmetry of (3.97) if and only if the coefficients ξ, η, η1, and β satisfy the symmetry determining equations η1 = (ηx + Y1ηY )− Y1(ξx + Y1ξY ), (3.99a) η1x + Y1η 1 Y = 0, (3.99b) βx + Y1βY = 0. (3.99c) Comparing (3.99a) and (3.99b) with (3.96a) and (3.96b), respectively, it follows that every point symmetry (3.78) of the first integral determining equation (3.73) projects onto a higher-order symmetry Xp of the original ODE (3.72). Conversely, each higher-order symmetry Xp of the ODE (3.94) induces a point symmetry X of the first integral determining equations where β can be chosen to be identically zero. Example 3.14. As a second example, consider the Blasius equation y′′′ + 1 2 yy′′ = 0. (3.100) The transformation of (x, Y, Y1, Y2)-space Xp = ξ(x, Y, Y1, Y2) ∂ ∂x + η(x, Y, Y1, Y2) ∂ ∂Y + η1(x, Y, Y1, Y2) ∂ ∂Y1 + +η2(x, Y, Y1, Y2) ∂ ∂Y2 (3.101) is a higher-order symmetry of (3.100) if and only if ξ, η, η1, and η2 satisfy 41 3.4. General ODEs the symmetry determining equations η1 = D̃η − Y1D̃ξ, (3.102a) η2 = D̃η2 − Y2D̃ξ, (3.102b) D̃η2 + 1 2 Y Y2 · D̃ξ + 12[Y2η + Y η 2] = 0, (3.102c) where D̃ = ∂∂x + Y1 ∂ ∂Y + Y2 ∂ ∂Y1 − 12Y Y2 ∂∂Y2 is the total derivative operator restricted to solutions of (3.100). From Example 3.11 the corresponding first integral determining equation is φx + Y1φY + Y2φY1 − 1 2 Y Y2φY2 = 0. (3.103) Using the GeM software package, one finds that the point transformation of (x, Y, Y1, Y2, φ)-space with infinitesimal operator X =ξ(x, Y, Y1, Y2) ∂ ∂x + η(x, Y, Y1, Y2) ∂ ∂Y + η1(x, Y, Y1, Y2) ∂ ∂Y1 + + η2(x, Y, Y1, Y2) ∂ ∂Y2 + β(x, Y, Y1, Y2, φ) ∂ ∂φ (3.104) is a symmetry of (3.103) if and only if the coefficients ξ, η, η1, η2, and β satisfy the determining equations η1 = (ηx + Y1ηY + Y2ηY1 − 1 2 Y Y2ηY2)− (3.105a) − Y1(ξx + Y1ξY + Y2ξY1 − 1 2 Y Y2ξY2), η2 = (η1x + Y1η 1 Y + Y2η 1 Y1 − 1 2 Y Y2η 1 Y2)− (3.105b) − Y2(ξx + Y1ξY + Y2ξY1 − 1 2 Y Y2ξY2), 0 = (η2x + Y1η 2 Y + Y2η 2 Y1 − 1 2 Y Y2η 2 Y2)− (3.105c) − Y2(ξx + Y1ξY + Y2ξY1 − 1 2 Y Y2ξY2) + 1 2 Y2η + 1 2 Y η2, βx + Y1βY + Y2βY1 − 1 2 Y Y2βY2 = 0. (3.105d) 42 3.4. General ODEs Comparing (3.105a)-(3.105c) with (3.102a)-(3.102c), respectively, it fol- lows that every symmetry X of the first integral determining equation (3.103) projects onto a symmetry Xp of the original ODE (3.100). Conversely, each higher-order symmetry Xp of the ODE (3.103) induces a point symmetry X of the first integral determining equations where β can be chosen to be identically zero. 
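Checks of this kind are straightforward to script. The following minimal sketch (an added illustration, not part of the thesis) uses Python with SymPy to verify that the well-known scaling symmetry of the Blasius equation, with ξ = x and η = −Y, satisfies the determining equations of Example 3.14; here η¹ is computed from (3.102a) and η² from the corresponding relation η² = D̃η¹ − Y₂D̃ξ (note that (3.102b) should read D̃η¹ rather than D̃η²).

```python
# Verify the determining equations (3.102) for the Blasius equation (3.100)
# using its scaling symmetry xi = x, eta = -Y (x* = e^eps x, y* = e^-eps y).
import sympy as sp

x, Y, Y1, Y2 = sp.symbols('x Y Y1 Y2')

def Dt(expr):
    """Total derivative restricted to solutions of y''' = -(1/2) y y''."""
    return (sp.diff(expr, x) + Y1 * sp.diff(expr, Y)
            + Y2 * sp.diff(expr, Y1)
            - sp.Rational(1, 2) * Y * Y2 * sp.diff(expr, Y2))

xi, eta = x, -Y
eta1 = Dt(eta) - Y1 * Dt(xi)          # (3.102a)
eta2 = Dt(eta1) - Y2 * Dt(xi)         # prolongation to Y2

# (3.102c): this expression must vanish identically.
lhs = (Dt(eta2) + sp.Rational(1, 2) * Y * Y2 * Dt(xi)
       + sp.Rational(1, 2) * (Y2 * eta + Y * eta2))
print(eta1, eta2)                     # -2*Y1, -3*Y2
print(sp.expand(lhs))                 # 0
```

The printed prolonged coefficients η¹ = −2Y₁ and η² = −3Y₂ match the scaling weights of y′ and y″, and the final output is zero, as required by (3.102c).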
3.4.4 Multiplier Determining Equations To connect the point symmetries of (3.70) with the point symmetries of the multiplier determining equations of the ODE system (3.10), one first seeks a mapping between solutions of the first integral determining equation and the corresponding system multiplier determining equations. This can be done by direct construction of the system multiplier determining equations from the first integral determining equation. To accomplish this, first let (µ0, . . . , µn−1) = µ = (φY , . . . , φYn−1). (3.106) Later it will be shown that µ corresponds to a set of multipliers of the ODE system (3.10). In order for the function φ(x, Y ) to exist, it must satisfy the integrability conditions D−1φYi = Diφx, i = 0, . . . , n− 1, DjφYi = DiφYj , i, j = 0, . . . , n− 1, (3.107) where Di is given by (3.82) for i = −1, . . . , n− 1, and Y0 = Y. Furthermore, if φ(x, Y ) is a solution of (3.70) then DiD̃φ = 0, i = 0, . . . , n− 1. (3.108) 43 3.4. General ODEs The substitution of (3.106) into (3.107) and (3.108) yields µiYj = µ j Yi , i, j = 0, . . . , n− 1, D̃µ0 = −µn−1fY , D̃µi = −µi−1 − µn−1fYi , i = 1, . . . , n− 1. (3.109) In the following results it is shown that a set of functions (µ0, . . . , µn−1) solves (3.109) if and only if (µ0, . . . , µn−1) is a set of multipliers of the ODE system (3.10). As a consequence, (3.109) are the multiplier determining equations of the ODE system (3.10), also called the system multiplier deter- mining equations. Lemma 3.15. A function φ(x, Y, Y1, . . . , Yn−1) corresponds to a first inte- gral of (2.1) if and only if it corresponds to a first integral of (3.10) with corresponding multipliers µ0 = ∂φ∂Y , µ 1 = ∂φ∂Y1 , . . . , µ n−1 = ∂φ∂Yn−1 . Proof. Suppose the function φ(x, Y, Y1, . . . , Yn−1) corresponds to a first in- tegral of (2.1). It follows immediately from (3.69) and (3.70) that − (φY Y1 + φY1Y2 + . . .+ φYn−2Yn−1 + φYn−1f) = φx. (3.110) As in Section 3.3.2, let Y 1j be an arbitrary function corresponding to y ′ j , where y and its derivatives have been replaced with the independent func- tions y, y1, . . . , yn−1. The total derivative operator D then becomes D′ = ∂ ∂x + Y 1 ∂ ∂Y + Y 11 ∂ ∂Y1 + . . .+ Y 1n−1 ∂ ∂Yn−1 . (3.111) The addition of D′φ− ∂φ∂x to both sides of (3.110) gives ∂φ ∂Y ( Y 1 − D̃Y ) + n−1∑ 1 ∂φ ∂Yi ( Y 1i − D̃Yi ) = D′φ. (3.112) Thus φ(x, Y, Y1, . . . , Yn−1) corresponds to a first integral of (3.10) with as- sociated multipliers given by φY , φY1 , . . ., φYn−1 . 44 3.4. General ODEs Conversely, if φ(x, Y, Y1, . . . , Yn−1) corresponds to a first integral of (3.10) then it must satisfy (3.112). The subtraction of D′φ − φx from both sides of (3.112) yields D̃φ = 0. Thus, φ(x, Y, Y1, . . . , Yn−1) is a solution of (3.70) and hence is a first integral of (2.1). Theorem 3.16. The function µ(x, Y, Y1, . . . , Yn−1) = (µ0(x, Y, Y1, . . . , Yn−1), . . . , µn−1(x, Y, Y1, . . . , Yn−1)) satisfies the system multiplier determining equations (3.109) if and only if it corresponds to a set of multipliers of the first order ODE system (3.10). Proof. Suppose µ(x, Y, Y1, . . . , Yn−1) is a set of multipliers of the first-order ODE system (3.10) with corresponding first integral φ(x, Y, Y1, . . . , Yn−1). From the proof of Lemma 3.15 it follows that φ(x, Y, Y1, . . . , Yn−1) satisfies (3.70) with µ0 = φY , µ1 = φY1 , . . . , µ n−1 = φYn−1 . It is then clear from the construction of (3.109) that µ is a solution. Conversely, suppose µ satisfies the system multiplier determining equa- tions (3.109) and let σ = −Y1µ0 − . . 
.− Yn−1µn−2 − fµn−1. (3.113) In addition, define the function φ(x, Y, Y1, . . . , Yn−1) implicitly by φx = σ, φY = µ0, φYi = µ i, i = 1, . . . , n− 1. (3.114) 45 3.4. General ODEs Note that if µ satisfies (3.109) and σ satisfies (3.114) then the integrability conditions of φ are satisfied. In particular, using (3.113) one can see that (φx)Y = σY = µ0x = (φY )x, (φx)Yi = σYi = µ i x = (φYi)x, i = 1, . . . , n− 1, (φY )Yi = µ 0 Yi = µ i Y = (φYi)Y , i = 1, . . . , n− 1, (φYj )Yi = µ j Yi = µiYj = (φYj )Yi , i, j = 1, . . . , n− 1. (3.115) Hence φ(x, Y, Y1, . . . , Yn−1) exists and is unique up to a constant. The substitution of (3.114) into (3.113) yields D̃φ = 0, (3.116) which proves that φ(x, Y, Y1, . . . , Yn−1) corresponds to a first integral of (2.1). Hence, from the proof of Lemma 3.15 it follows that µ is a set of multipliers of (3.10) as required. From Theorem 3.16 it follows immediately that equations (3.109) are the multiplier determining equations for the first order ODE system (3.10). Example 3.17. Consider the ODE y′′ = 0, (3.117) which is equivalent to the first-order ODE system y′ = y1, y′1 = 0. (3.118) Using the direct method one finds that the multiplier determining equations 46 3.4. General ODEs of the ODE system (3.118) are µ0Y1 = µ 1 Y , µ0x + Y1µ 0 Y = 0, µ1x + Y1µ 1 Y + µ 0 = 0. (3.119) From Example 3.10 the first integral determining equation of (3.117) is φx + Y1φY = 0. (3.120) The integrability conditions of φ imply that ∂φY ∂Y1 = ∂φY1 ∂Y (3.121) while differentiation of (3.120) with respect to Y and Y1, respectively, gives φxY + Y1φY Y = 0, φxY1 + φY + Y1φY Y1 = 0. (3.122) Letting µ0 = φY and µ1 = φY1 in (3.121) and (3.122) one obtains the system multiplier determining equations µ0Y1 = µ 1 Y , µ0x + Y1µ 0 Y = 0, µ1x + Y1µ 1 Y + µ 0 = 0, (3.123) which are identical to those obtained via the direct method, given in (3.119). Example 3.18. Consider the ODE y′′′ + 1 2 yy′′ = 0 (3.124) 47 3.4. General ODEs which is equivalent to the first-order ODE system y′ = y1, y′1 = y2, y′2 = − 1 2 yy2. (3.125) The direct method yields the multiplier determining equations of the ODE system (3.125) µ0Y1 = µ 1 Y , µ0Y2 = µ 2 Y , µ1Y2 = µ 2 Y1 , µ0x + Y1µ 0 Y + Y2µ 1 Y − 1 2 Y Y2µ 2 Y = 1 2 Y2µ 2, µ1x + Y1µ 0 Y1 + Y2µ 1 Y1 − 1 2 Y Y2µ 2 Y1 = −µ0, µ2x + Y1µ 0 Y2 + Y2µ 1 Y2 − 1 2 Y Y2µ 2 Y2 = −µ1 + 1 2 Y µ2. (3.126) From Example 3.11, the first integral determining equation of (3.124) is φx + Y1φY + Y2φY1 − 1 2 Y Y2φY2 = 0. (3.127) The integrability conditions of φ imply that ∂φY ∂Y1 = ∂φY1 ∂Y , ∂φY ∂Y2 = ∂φY2 ∂Y , ∂φY1 ∂Y2 = ∂φY2 ∂Y1 (3.128) while differentiation of (3.127) with respect to Y, Y1, and Y2, respectively, gives φxY + Y1φY Y + Y2φY Y1 − 1 2 Y Y2φY Y2 = 1 2 Y2φY2 , φxY1 + Y1φY Y1 + Y2φY1Y1 − 1 2 Y Y2φY1Y2 = −φY , φxY2 + Y1φY Y2 + Y2φY1Y2 − 1 2 Y Y2φY2Y2 = −φY1 + 1 2 Y φY2 . (3.129) 48 3.4. General ODEs Letting µ0 = φY , µ1 = φY1 , and µ 2 = φY2 in (3.128) and (3.129) one obtains the system multiplier determining equations µ0Y1 = µ 1 Y , µ0Y2 = µ 2 Y , µ1Y2 = µ 2 Y1 , µ0x + Y1µ 0 Y + Y2µ 1 Y − 1 2 Y Y2µ 2 Y = 1 2 Y2µ 2, µ1x + Y1µ 0 Y1 + Y2µ 1 Y1 − 1 2 Y Y2µ 2 Y1 = −µ0, µ2x + Y1µ 0 Y2 + Y2µ 1 Y2 − 1 2 Y Y2µ 2 Y2 = −µ1 + 1 2 Y µ2. (3.130) which are equivalent to those obtained via the direct method, given in (3.126). 3.4.5 Induced Multiplier Symmetries Using the results of the previous section, one can prove the following theo- rem. Theorem 3.19. 
Every point symmetry of the first integral determining equation (3.70) which is of the form ξ(x, Y ) ∂ ∂x + η(x, Y ) ∂ ∂Y + n−1∑ i=1 ηi(x, Y ) ∂ ∂Yi + β(x, Y , φ) ∂ ∂φ induces a point symmetry of the multiplier determining equations (3.109) for the ODE system (3.10). Conversely, if X = ξ(x, Y ) ∂ ∂x + ηi(x, Y ) ∂ ∂Yi + [αij(x, Y )µ j ] ∂ ∂µi is a point symmetry of (3.109) then it is induced by a point symmetry of (3.70). Proof. To connect the point symmetries of the first integral determining equation (3.70) with the point symmetries of the multiplier determining 49 3.4. General ODEs equations (3.109), first note that by Theorem 3.16, if (µ0, . . . , µn−1) is a solution of (3.109) then there exists a function φ(x, Y ), unique up to a constant, satisfying both (3.70) and (φY , . . . , φYn−1) = µ = (µ 0, . . . , µn−1). (3.131) Moreover, if φ(x, Y, Y1, . . . , Yn−1) satisfies (3.70) then µ(x, Y, Y1, . . . , Yn−1) defined by (3.131) is a solution of the multiplier determining equations (3.109). Thus there exists a non-invertible mapping which takes each so- lution of the first integral determining equation (3.70) into a solution of the multiplier determining equations (3.109) and which takes each solution of (3.109) into solutions of (3.70). It follows immediately that if X = ξ(x, Y ) ∂ ∂x + η(x, Y ) ∂ ∂Y + ηi(x, Y ) ∂ ∂Yi + β(x, Y , φ) ∂ ∂φ (3.132) is a point symmetry of (3.70) then it corresponds to a symmetry X′ = ξ(x, Y ) ∂ ∂x +η(x, Y ) ∂ ∂Y +ηi(x, Y ) ∂ ∂Yi + n−1∑ i=0 β1i (x, Y , φ, µ 0, . . . , µn−1) ∂ ∂µi (3.133) of (3.109). Using (3.131) one can calculate the functions β1i by prolonging (3.132) to an extended point transformation X(1) acting on (x, Y , φ, φx, φY , φY1 , . . . , φYn−1)-space. In particular, X(1) =ξ(x, Y ) ∂ ∂x + η(x, Y ) ∂ ∂Y + ηi(x, Y ) ∂ ∂Yi + β(x, Y , φ) ∂ ∂φ + + β(1)i (x, Y , φ, φx, φY0 , φY1 , . . . , φYn−1) ∂ ∂Yi (3.134) where β(1)i is given by the prolongation formula (3.84). One can eliminate φx from β (1) i by using (3.70) to write φx = −φY Y1 − φY1Y2 − . . .− φYn−2Y1 − φYn−1f(x, Y ). (3.135) The substitution of (3.131) and (3.135) into the prolongation formula (3.84) 50 3.4. General ODEs yields β1i = β (1) i (x, Y , φ, φx, µ 0, . . . , µn−1) ∣∣∣ φx=−µ0Y1−...µn−2Yn−1−µn−1f(x,Y ) (3.136) where i = 0, . . . , n− 1 and once again Y0 = Y. The symmetry (3.133) is a point symmetry of the multiplier determining equations (3.109) if and only if each function β1i (x, Y , φ, µ) does not depend explicitly on φ [4]. From the proof of Theorem 3.12, if X is a point sym- metry of (3.70) induced by a higher-order symmetry of (2.1) then one can choose β(x, Y , φ) ≡ 0, in which case ∂β1i∂φ ≡ 0. Consequently, from (3.84) one sees that (3.133) is a local symmetry of (3.109) and thus each higher-order symmetry of (2.1) induces a point symmetry of (3.109). Now consider the converse. In order to determine sufficient conditions for the converse to hold, suppose X = ξ(x, Y ) ∂ ∂x + ηi(x, Y ) ∂ ∂Yi + [αij(x, Y )µ j ] ∂ ∂µi (3.137) is a point symmetry of (3.109) such that ξ, η0, . . . , ηn−1 are not all identically zero. Now consider the variable σ = −µiD̃(Yi) as defined by (3.113). It follows from (3.113) that Xσ = − n−1∑ i=0 µiX[D̃Yi]− n−1∑ i,j=0 αijµ jD̃(Yi) = n−1∑ i=0 ai(x, Y )µi (3.138) for some functions ai(x, Y ), i = 0, . . . , n− 1. One can then write the point symmetry (3.137) of the multiplier determining equations (3.109) as a point transformation acting on (x, Y , µ0, . . . 
, µn−1, σ)-space given by the infinites- imal generator X̃ = X+ aiµi ∂ ∂σ = ξ(x, Y ) ∂ ∂x + ηi(x, Y ) ∂ ∂Yi + [αij(x, Y )µ j ] ∂ ∂µi + aiµi ∂ ∂σ . (3.139) 51 3.4. General ODEs Recall that if µ = (µ0, . . . , µn−1) satisfies (3.109), and σ is given by (3.113) then there exists a function φ(x, Y ), unique up to a constant, which satisfies (3.114) and (3.70). From (3.114), the point transformation (3.139) corre- sponds to a point symmetry of the first integral determining equations (3.70) X′ = ξ(x, Y ) ∂ ∂x + ηi(x, Y ) ∂ ∂Yi + β(x, Y )φ ∂ ∂φ , (3.140) if and only if the coefficients of ∂∂φx , ∂ ∂φY , ∂∂φY1 , . . . , ∂∂φYn−1 in the first prolon- gation X′(1) of (3.140) are equal to the coefficients of ∂∂σ , ∂ ∂µ0 , ∂ ∂µ1 . . . , ∂ ∂µn−1 in X̃ defined by (3.139) when (φx, φY , φY1 , . . . , φYn−1) = (σ, µ0, . . . , µn−1) and D̃φ = 0. Defining D†x = ∂ ∂x + ∂φ ∂x ∂ ∂φ + ∂µi ∂x ∂ ∂µi , D†0 = ∂ ∂Y + ∂φ ∂Y ∂ ∂φ + ∂µi ∂Y ∂ ∂µi , D†j = ∂ ∂Yj + ∂φ ∂Yj ∂ ∂φ + ∂µi ∂Yj ∂ ∂µi j = 1, . . . , n− 1, (3.141) the first prolongation of (3.140) is X′(1) = X′ + ρ ∂ ∂φx + n−1∑ i=0 β (1) i ∂ ∂φYi (3.142) where ρ = D†x[β(x, Y )φ]− φxD†x(ξ)− n−1∑ j=0 φYjD † x(η j), β (1) i = D † i [β(x, Y )φ]− φxD†i (ξ)− n−1∑ j=0 φYjD † i (η j), i = 0, . . . , n− 1 (3.143) 52 3.4. General ODEs and Y0 = Y. The substitution of (3.113) and (3.114) into (3.143) yields ρ = ∂β(x, Y ) ∂x φ− n−1∑ i=0 β(x, Y )µiD̃(Yi)− n−1∑ j=0 µj [D†x(η j)−D†x(ξ)D̃(Yj)], β (1) i = ∂β(x, Y ) ∂Yi φ+ β(x, Y )µi − n−1∑ j=0 µj [D†i (η j)−D†i (ξ)D̃(Yj)], (3.144) where i = 0, . . . , n − 1. Equating the coefficients of ∂∂σ , ∂∂µ0 , . . . , ∂∂µn−1 in (3.142) and (3.139) when (φx, φY , φY1 , . . . , φYn−1) = (σ, µ 0, µ1, . . . , µn−1), one obtains n−1∑ k=0 akµ k = ∂β(x, Y ) ∂x φ− n−1∑ i=0 β(x, Y )µiD̃(Yi)− n−1∑ j=0 µj [D†x(η j)−D†x(ξ)D̃(Yj)], n−1∑ k=0 αikµ k = ∂β(x, Y ) ∂Yi φ+ β(x, Y )µi − n−1∑ j=0 µj [D†i (η j)−D†i (ξ)D̃(Yj)], (3.145) where i = 0, . . . , n− 1. The left-hand sides of (3.145) are independent of φ, implying that ∂β ∂x = 0, ∂β ∂Y = 0, ∂β ∂Yi = 0, i = 1, . . . , n− 1, (3.146) 53 3.4. General ODEs and hence β(x, Y ) = c, a constant. It then follows from (3.145) that cσ = −c n−1∑ j=0 µjD̃Yj = n−1∑ j=0 [aj + D†x(η j)−D†x(ξ)D̃Yj ]µj , cµi = n−1∑ j=0 [αij + D † i (η j)−D†i (ξ)D̃(Yj)]µj , i = 0, . . . , n− 1. (3.147) Since the functions αij(x, Y ), η j(x, Y ) and ξ(x, Y ) are independent of µ, the coefficients of each µi in (3.147) must vanish independently, and hence aj + D†x(η j)− [D†x(ξ)− c]D̃Yj = 0, αij + D † i (η j)−D†i (ξ)D̃Yj = cδji , i, j = 0, . . . , n− 1, (3.148) where δji is 1 if i = j and 0 otherwise. Therefore, if the coefficients of the point transformation (3.139) satisfy (3.148) then it corresponds to a point symmetry (3.140) of the first integral determining equation (3.70). Now suppose that (3.137) is a point symmetry of the system multiplier determining equations (3.109). Note in particular that the system multiplier determining equations (3.109) contain the subsystem µiYj = µ j Yi , i, j = 0, . . . , n− 1, i 6= j. (3.149) Let γ (1)i j = D † j(α i kµ k)−D†j(ηk)µik −D†j(ξ)µix (3.150) be the coefficient of ∂ ∂µiYj in the first prolongation X(1) of (3.137). Hence it follows from (3.149) that if (3.137) is a point symmetry of (3.109) then γ (1)i j = γ (1)j i i, j = 0, . . . , n− 1, i 6= j (3.151) on solutions of (3.109). 54 3.4. General ODEs Next let g0 = −µn−1D†0(f), gi = −µi−1 − µn−1D†i (f), i = 1, . . . , n− 1. (3.152) It follows from the system multiplier determining equations (3.109) that µix = −µiYkD̃(Yk) + gi, i = 0, . . . , n− 1. 
(3.153) The substitution of (3.153) into (3.150) yields γ (1)i j = −D†j(ξ)gi + µkD†j(αik) +αikµkYj − [D†j(ηk)−D†j(ξ)D̃(Yk)]µiYk . (3.154) Using (3.149) to replace µjYk with µ k Yj , one obtains γ (1)i j = −D†j(ξ)gi + µkD†j(αik) + αikµjYk − [D † j(η k)−D†j(ξ)D̃(Yi)]µiYk . (3.155) Substituting (3.155) into (3.151), one finds after rearranging terms in the resulting expression that [D†j(α i k)−D†i (αjk)]µk = n−1∑ k=0 [αjk + D † j(η k)−D†j(ξ)D̃Yk]µiYk− − n−1∑ k=0 [αik + D † i (η k)−D†i (ξ)D̃Yk]µjYk + D † j(ξ)gi −D†i (ξ)gj (3.156) where i, j = 0, . . . , n− 1 and i 6= j. Then, using the fact that µiYj = µ j Yi , one can re-write (3.156) as [D†j(α i k)−D†i (αjk)]µk = ∑ k 6=j [αjk + D † j(η k)−D†j(ξ)D̃Yk]µiYk− − ∑ k 6=i [αik + D † i (η k)−D†i (ξ)D̃Yk]µjYk + D † j(ξ)gi −D†i (ξ)gj+ + { αjj + D † j(η j)−D†j(ξ)D̃(Yj)− [ αii + D † i (η i)−D†i (ξ)D̃(Yi) ]} µiYj (3.157) 55 3.4. General ODEs for all i, j = 0, . . . , n− 1, i 6= j. Equating the coefficients of {µl}n−1l=0 and {µlYk}n−1l,k=0 in (3.157) to zero one obtains αjk + D † j(η k)−D†j(ξ)D̃Yk ≡ 0, j, k = 0, . . . , n− 1, j 6= k, (3.158a) αjj + D † j(η j)−D†j(ξ) = αii + D†i (ηi)−D†i (ξ), i, j = 0, . . . , n− 1, (3.158b) D†j(α i k)−D†i (αjk) = 0, k 6= (i− 1), (j − 1), (n− 1), (3.158c) D†j(α i i−1)−D†i (αji−1) = −D†j(ξ), i > 0, (3.158d) D†j(α i j−1)−D†i (αjj−1) = D†i (ξ), j > 0, (3.158e) D†j(α i n−1)−D†i (αjn−1) = D†j(f)Di(ξ)−D†i (f)Dj(ξ), (3.158f) The right-hand sides of (3.158c)-(3.158f) are found by substituting the func- tions gi and gj defined by (3.152) into (3.157) and equating the coefficients of µ0, . . . , µn−1 to zero. Defining α̃jk = −D†j(ηk) + D†j(ξ)D̃Yk and κjk = αjk − α̃jk it follows immedi- ately from (3.158a) that κjk = 0, k 6= j (3.159) and from (3.158b) that κii = κ j j , i, j = 0, . . . , n− 1. (3.160) Thus there exists a function q(x, Y ) such that q(x, Y ) = κii, i = 0, . . . , n− 1. The substitution of αjk = −D†j(ηk) + D†j(ξ)D̃(Yk) + q(x, Y )δkj (3.161) 56 3.4. General ODEs into (3.158c)-(3.158f) yields D†iq(x, Y ) = 0, i = 0, . . . , n− 1. (3.162) To see this, suppose i < (n − 1) and let j 6= (i + 1). Then, choosing k = i, the substitution of (3.161) into (3.158c) gives 0 = D†j(α i i)−D†(αji ) = D†j [−D†i (ηi) + D†i (ξ)D̃(Yi) + q(x, Y )]−D†i [−D†j(ηi) + D†j(ξ)D̃(Yi)] = D†j [q(x, Y )] + D † i (ξ)D † j(Yi+1)−D†j(ξ)D†i (Yi+1) = D†j [q(x, Y )] + D † i (ξ) ∂Yi+1 ∂Yj −D†j(ξ) ∂Yi+1 ∂Yi = D†j [q(x, Y )] (3.163) provided that j 6= (i+1). Since (3.163) holds for all j = 0, . . . , n−1, provided that i is chosen so that i 6= (j − 1), (n− 1), it is clear that D†j [q(x, Y )] = 0, j = 0, . . . , n− 1. (3.164) Hence there exists a function h(x) such that q(x, Y ) = h(x) and from (3.161) one sees that αjk + D † j(η k)−D†j(ξ)D̃Yk = δkj h(x). (3.165) Defining σ = −µiD̃Yi as in (3.113), it follows from the system multiplier determining equations (3.109) that σY = −D†0 n−1∑ j=0 (µjD̃Yj)  = µ0x, σYi = −D†i n−1∑ j=0 (µjD̃Yj)  = µix, i = 1, . . . , n− 1. (3.166) 57 3.4. General ODEs Furthermore, Xσ = −D̃(Yi)X[µi]− µiX[D̃Yi] = ak(x, Y )µk for some functions ak(x, Y ), k = 0, . . . , n − 1. If (3.166) is invariant under the symmetry X given by (3.137) then, using an argument identical to the one used for (3.151), one obtains [D†j(ak)−D†x(αjk)]µk = [αjk + D†j(ηk)−D†j(ξ)D̃Yk]σYk− − [ak + D†x(ηk)−D†x(ξ)D̃Yk]µjYk + D † j(ξ)ĝ −D†x(ξ)gj , (3.167) on solutions of (3.109) and (3.113) where j = 0, . . . , n−1 and ĝ = −fxµn−1. 
The substitution of (3.165) and (3.113) into (3.167) yields [D†j(ak)−D†x(αjk)]µk = −[ak + D†x(ηk) + h(x)D̃Yk −D†x(ξ)D̃Yk]µjYk+ + D†j(ξ)ĝ −D†x(ξ)gj − h(x)µkD†j [D̃(Yk)]. (3.168) Splitting (3.168) by setting to zero the coefficients of the µi and their partial derivatives, one obtains ak = −D†x(ηk) + D†x(ξ)D̃Yk − h(x)D̃Yk, (3.169a) D†j(ak)−Dx(αjk) = −h(x)D†j(D̃Yk), k 6= (j − 1), (n− 1), (3.169b) D†j(aj−1)−D†x(αjj−1) = D†x(ξ)− h(x)D†j(D̃Yj−1), j > 0, (3.169c) D†j(an−1)−D†x(αjn−1) = D†j(f)D†x(ξ)−D†x(f)D†j(ξ)− h(x)D†j(f), (3.169d) j = 0, . . . , (n− 1). 58 3.4. General ODEs Hence if j = 0, . . . , n − 2, the substitution of (3.169a) and (3.165) into (3.169b) yields −h(x)D†j [D̃(Yj)] = D†j(aj)−D†x(αjj) = D†j [−D†x(ηj) + D†x(ξ)D̃(Yj)− h(x)D̃Yj ]− −D†x[−D†j(ηj) + D†j(ξ)D̃(Yj) + h(x)] = −h(x)D†j [D̃(Yj)] + D†x(ξ)D†j(Yj+1)− −D†j(ξ)D†x(Yj+1)−D†x[h(x)] = h′(x)− h(x)D†j [D̃(Yj)] (3.170) Comparing the left- and right-hand sides of (3.170) it follows that h′(x) = 0. Similarly, if j = n− 1 one obtains D†n−1(f)D † x(ξ)−D†x(f)D†n−1(ξ)− h(x)D†n−1(f) = D†n−1(an−1)−D†x(αn−1n−1) = D†n−1 { −D†x(ηn−1) + [D†x(ξ)− h(x)]D̃(Yn−1) } − −D†x[−D†n−1(ηn−1) + D†n−1(ξ)D̃(Yn−1) + h(x)] = −h(x)D†n−1(f) + D†x(ξ)D†n−1(f)−D†n−1(ξ)D†x(f)− −D†xh(x), (3.171) which after simplification yields h′(x) = 0 and hence h(x) = c. Thus, if the point transformation (3.137) is a symmetry of the multiplier determining equations (3.109) then the coefficients must satisfy ak = −D†x(ηk) + [D†x(ξ)− c]D̃(Yk), k = 0, . . . , n− 1, αjk = −D†j(ηk) + D†j(ξ)D̃(Yk) + cδjk, j, k = 0, . . . , n− 1. (3.172) The system (3.172) is identical to (3.148) and thus each point symmetry (3.137) of the multiplier determining equations (3.109) is induced by a point symmetry (3.140) of the first integral determining equation (3.70). Finally, since every point symmetry of (3.70) projects onto a point symmetry of 59 3.4. General ODEs (3.10), all point symmetries (3.137) of the multiplier determining equations (3.109) must also project onto point symmetries of the ODE system (3.10). Moreover, since the multiplier determining equations (3.109) of the ODE system (3.10) form a first-order linear system, one can prove directly that all local symmetries of (3.109) are induced by point symmetries of the ODE system (3.10). Theorem 3.20. All local symmetries of the system multiplier determining equations (3.109) are induced by point symmetries of the ODE system (3.10) and thus by higher-order symmetries of the scalar ODE (2.1). Proof. First, note that if µ solves the system multiplier determining equa- tions (3.109) then D̃µ0 = µn−1fy (3.173) which has the corresponding characteristic equation [3] dx 1 = dY Y1 = dY1 Y2 = . . . = dYn−2 Yn−1 = dYn−1 f = dµ0 µn−1fy . (3.174) In order for a symmetry to leave invariant (3.109), it must also leave invariant (3.174) when µ satisfies (3.109). In particular, if x∗ = x∗(x, Y ), Y ∗ = Y ∗(x, Y ), µ∗ = µ∗(x, Y , µ, µ x , µ Y , . . .) (3.175) is a local symmetry of (3.109) where µ Y = { ∂µi ∂Yj }n−1 i,j=0 , then dx∗ 1 = dY ∗ Y ∗1 = dY ∗1 Y ∗2 = . . . = dY ∗n−2 Y ∗n−1 = dY ∗n−1 f(x∗, Y ∗) . (3.176) Note that (3.176) are necessary and sufficient conditions for a transformation of (x, Y )-space to be a point symmetry of (3.10). Hence any local symmetry 60 3.4. General ODEs of (3.109) must be induced by a point symmetry of the ODE system (3.10), and the proof is complete. Example 3.21. 
Consider the ODE y′′ = 0 (3.177) which in Example 3.17 was shown to have corresponding system multiplier determining equations µ0Y1 = µ 1 Y , µ0x + Y1µ 0 Y = 0, µ1x + Y1µ 1 Y + µ 0 = 0. (3.178) Using the GeM package [12] for MapleTM, one can show that the transfor- mation of (x, Y, Y1, µ0, µ1)-space X =ξ(x, Y, Y1) ∂ ∂x + η(x, Y, Y1) ∂ ∂Y + η1(x, Y, Y1) ∂ ∂Y1 + + [f0(x, Y, Y1)µ0 + f1(x, Y, Y1)µ1] ∂ ∂µ0 + + [g0(x, Y, Y1)µ0 + g1(x, Y, Y1)µ1] ∂ ∂µ1 (3.179) is a point symmetry of (3.178) if and only if the coefficients ξ, η, η1, f0, f1, g0, and g1 satisfy the corresponding symmetry determining equations η1 = ηx + Y1ηY − Y1(ξx + Y1ξY ), (3.180a) η1x + Y1η 1 Y = 0, (3.180b) f0 = −(ηY − Y1ξY ) + c, (3.180c) f1 = −η1Y , (3.180d) g0 = −(ηY1 − Y1ξY1), (3.180e) g1 = −η1Y1 + c. (3.180f) 61 3.4. General ODEs Comparing (3.180a) and (3.180b) with the symmetry determining equations of the ODE (3.177) given by (3.96a) and (3.96b) in Example 3.13, it fol- lows that all point symmetries (3.179) of the system multiplier determining equations (3.178) project onto higher-order symmetries of the corresponding ODE (3.177). Conversely, each higher-order symmetry Xp = ξ(x, Y, Y1) ∂ ∂x + η(x, Y, Y1) ∂ ∂Y + η1(x, Y, Y1) ∂ ∂Y1 of the ODE (3.177) induces a point symmetry (3.179) of the system multi- plier determining equations (3.178) with g0, g1, f0, and f1 given by (3.180c)- (3.180f), which agrees with the general result given in (3.172) where n = 2 and f(x, Y, Y1) ≡ 0. Example 3.22. Consider the ODE y′′′ + 1 2 yy′′ = 0 (3.181) which in Example 3.18 was shown to have corresponding system multiplier determining equations µ0Y1 = µ 1 Y , µ0Y2 = µ 2 Y , µ1Y2 = µ 2 Y1 µ0x + Y1µ 0 Y + Y2µ 1 Y − 1 2 Y Y2µ 2 Y = 1 2 Y2µ 2, µ1x + Y1µ 0 Y1 + Y2µ 1 Y1 − 1 2 Y Y2µ 2 Y1 = −µ0, µ2x + Y1µ 0 Y2 + Y2µ 1 Y2 − 1 2 Y Y2µ 2 Y2 = −µ1 + 1 2 Y µ2. (3.182) Using the GeM package [12] with MapleTM, one can show that the transfor- 62 3.4. General ODEs mation of (x, Y, Y1, Y2, µ0, µ1, µ2)-space X =ξ(x, Y, Y1, Y2) ∂ ∂x + η(x, Y, Y1, Y2) ∂ ∂Y + η1(x, Y, Y1, Y2) ∂ ∂Y1 + + [f0(x, Y, Y1, Y2)µ0 + f1(x, Y, Y1, Y2)µ1 + f2(x, Y, Y1, Y2)µ2] ∂ ∂µ0 + + [g0(x, Y, Y1, Y2)µ0 + g1(x, Y, Y1, Y2)µ1] + g2(x, Y, Y1, Y2)µ2] ∂ ∂µ1 + + [h0(x, Y, Y1, Y2)µ0 + h1(x, Y, Y1, Y2)µ1] + h2(x, Y, Y1, Y2)µ2] ∂ ∂µ2 (3.183) is a point symmetry of (3.182) if and only if the coefficients ξ, η, η1, f0, f1, f2, g0, g1, g2, h0, h1, and h2 satisfy the corresponding symmetry determining 63 3.4. General ODEs equations η1 = ηx + Y1ηY + Y2ηY1 − 1 2 Y Y2ηY2− − Y1(ξx + Y1ξY + Y2ξY1 − 1 2 Y Y2ξY2), (3.184a) η2 = η1x + Y1η 1 Y + Y2η 1 Y1 − 1 2 Y Y2η 1 Y2− − Y2(ξx + Y1ξY + Y2ξY1 − 1 2 Y Y2ξY2), (3.184b) η2x + Y1η 2 Y + Y2η 2 Y1 − 1 2 Y Y2η 2 Y2+ 1 2 Y Y2(ξx + Y1ξY + Y2ξY1 − 1 2 Y Y2ξY2) + 1 2 Y2η + 1 2 Y η2 = 0, (3.184c) f0 = −ηY + Y1ξY + c, (3.184d) f1 = −η1Y + Y2ξY , (3.184e) f2 = −η2Y − 1 2 Y Y2ξY , (3.184f) g0 = −ηY1 + Y1ξY1 , (3.184g) g1 = −η1Y1 + Y2ξY1 , (3.184h) g2 = −η2Y1 − 1 2 Y Y2ξY1 , (3.184i) h0 = −ηY2 + Y1ξY2 , (3.184j) h1 = −η1Y2 + Y2ξY2 , (3.184k) h2 = −η2Y1 − 1 2 Y Y2ξY2 + c. (3.184l) Comparing (3.184a)-(3.184c) with the symmetry determining equations of the ODE (3.181) given by (3.102a)-(3.102c) in Example 3.14, it follows that all point symmetries (3.183) of the system multiplier determining equations (3.182) project onto higher-order symmetries of the corresponding ODE (3.181). Conversely, each higher-order symmetry Xp = ξ(x, Y, Y1, Y2) ∂ ∂x + η(x, Y, Y1, Y2) ∂ ∂Y + η1(x, Y, Y1, Y2) ∂ ∂Y1 + + η2(x, Y, Y1, Y2) ∂ ∂Y2 64 3.4. 
General ODEs of the ODE (3.181) induces a point symmetry (3.183) of the system multi- plier determining equations (3.182) with f0, f1, f2, g0, g1, g2, h0, h1, and h2 given by (3.184d)-(3.184l). 3.4.6 Connection between the System and Scalar Multiplier Determining Equations The IFDE of the scalar ODE (2.1) may be constructed from the system multiplier determining equations (3.109) in the following way. Using the multiplier determining equations (3.109) of the ODE system (3.10), the dependent variables µn−2, . . . , µ0 can be written as functions of the independent variables (x, Y ), µn−1 and its derivatives. Specifically, one has the recurrence relation µi−1 = −D̃µi − µn−1fYi , i = 1, . . . , n− 1 (3.185) where the total derivative D̃ is D̃ = D†x + Y1D † 0 + n−2∑ k=1 Yk+1D † k + f(x, Y )D † n−1, (3.186) and D†x, D†0, . . . ,D † n−1 are given by (3.141). Defining Λ = µn−1, the substitution of {µj(x, Y ,Λ,Λx,ΛY , . . .)}n−20 into the remaining equations of the system multiplier determining equations (3.109) yields a corresponding PDE system Si[x, Y ,Λ,Λx,ΛY , . . .] = 0, i = 1, . . . , [n(n− 1) + 1]. (3.187) The following theorem proves that the resulting system (3.187) is the IFDE of (2.1). Theorem 3.23. A function Λ(x, Y ) is a solution of (3.187) if and only if it is an integrating factor of (2.1). Consequently, the system (3.187) obtained 65 3.4. General ODEs algorithmically from the system multiplier determining equations (3.109) is the IFDE of the scalar ODE (2.1). Proof. Let Λ be an integrating factor of the scalar ODE (2.1). From Theo- rem 3.16 and Lemma 3.15 there exist µ0, . . ., µn−2 such that (µ0, . . . , µn−2,Λ) solves the system multiplier determining equations (3.109). By construction, it follows immediately that Λ satisfies (3.187). Conversely, suppose Λ solves (3.187) and define µ0(x, Y ,Λ,Λx,ΛY , . . .), . . . , µn−2(x, Y ,Λ,Λx,ΛY , . . .) through (3.185) with µn−1 = Λ. By the con- struction of (3.187), the functions (µ0, . . . , µn−2, µn−1) = (µ0(x, Y ,Λ), . . . , µn−2(x, Y ,Λ,Λx,ΛY , . . .),Λ) satisfy the system multiplier determining equations (3.109) and thus from Theorem 3.16 correspond to a set of multipliers of the ODE system (3.10). Finally, it follows from the proof of Lemma 3.15 that Λ is an integrating factor of the scalar ODE (2.1), and the proof is complete. Example 3.24. Consider the ODE y′′ = 0. (3.188) From Example 3.17, the system multiplier determining equations are µ0Y1 = µ 1 Y , (3.189a) µ0x + Y1µ 0 Y = 0, (3.189b) µ1x + Y1µ 1 Y + µ 0 = 0. (3.189c) Solving for µ0 in (3.189c) one obtains µ0 = −µ1x − Y1µ1Y , which upon sub- stitution into (3.189a) and (3.189b) yields − µ1xY1 − µ1Y − Y1µ1Y Y1 = µ1Y , (3.190a) − µ1xx − Y1µ1xY − Y1(µ1xY + Y1µ1Y Y ) = 0. (3.190b) 66 3.4. General ODEs The system (3.190) is clearly equivalent to the IFDE of (3.24), given by (2.13) in Example 2.6, when µ1 = Λ. Example 3.25. Consider the ODE y′′′ + 1 2 yy′′ = 0. 
(3.191) Using Maple and the direct method one can obtain the IFDE of (3.191) Λxxx =− 2Y Y1Y2ΛY Y1 + 3 4 Y Y1Y 2 2 ΛY2Y2 − 2Y2Λ− 2Y Y2ΛY − 3Y1Y2ΛY1+ + Y1Λx + Y 21 ΛY − 5 2 Y 22 ΛY2 + 3 2 Y Y1Y2ΛY2 − Y 2Y2ΛY1+ 1 2 Y 3Y2ΛY2 + 5 8 Y 3Y 22 ΛY2Y2 + 6Y1Y2ΛY Y − 9 4 Y 2Y 22 ΛY1Y2+ + 2Y Y 22 ΛY1Y1 − 3 2 Y1Y 2 2 ΛY1Y2 + 1 2 Y Λxx − 3Y1ΛxxY − 3Y 21 ΛxY Y + + 6Y2ΛxY + 3Y 22 ΛxY1Y1 − Y 31 ΛY Y Y + 9Y 22 ΛY Y1 + 2Y 32 ΛY1Y1Y1− − 3 2 Y Y 22 ΛY Y2 + 1 2 Y Y 21 ΛY Y − 2Y Y2ΛxY1 + 1 8 Y 3Y 32 ΛY2Y2Y2− − 3 4 Y 2Y 32 ΛY1Y2Y2 + Y Y1ΛxY + 3Y1Y 2 2 ΛY Y1Y1 , ΛxxY1 =− 3ΛxY + Λ− 2Y2ΛxY1Y1 + 1 2 Y Y1ΛY Y1 − 2Y1Y2ΛY Y1Y1+ + 1 2 Y Y2ΛY Y2 + 1 2 Y1Y2ΛY1Y2 + 1 4 Y 2Y 22 ΛY1Y2Y2 − Y Y2ΛY1Y1+ + 1 2 Y 2Y2ΛY1Y2 − 3Y1ΛY Y + 1 2 Y ΛxY1 + Y ΛY + 1 2 Y1ΛY1− − Y 21 ΛY Y Y1 + Y2ΛY2 − 4Y2ΛY Y1 − Y 22 ΛY1Y1Y1 − 2Y1ΛxY Y1 , ΛxY2 =− 2ΛY1 + Y ΛY2 − Y1ΛY Y2 − Y2ΛY1Y2 + 1 2 Y Y2ΛY2Y2 . (3.192) Additionally, from Example 3.18 the system multiplier determining equa- 67 3.4. General ODEs tions are µ0Y1 = µ 1 Y , (3.193a) µ0Y2 = µ 2 Y , (3.193b) µ1Y2 = µ 2 Y1 , (3.193c) µ0x + Y1µ 0 Y + Y2µ 1 Y − 1 2 Y Y2µ 2 Y = 1 2 Y2µ 2, (3.193d) µ1x + Y1µ 0 Y1 + Y2µ 1 Y1 − 1 2 Y Y2µ 2 Y1 = −µ0, (3.193e) µ2x + Y1µ 0 Y2 + Y2µ 1 Y2 − 1 2 Y Y2µ 2 Y2 = −µ1 + 1 2 Y µ2. (3.193f) Solving for µ1 in (3.193f), after replacing µ0Y2 and µ 1 Y2 with µ2Y and µ 2 Y1 , respectively, and using (3.193b) and (3.193c), one obtains µ1 = −µ2x − Y1µ2Y − Y2µ2Y1 + 1 2 Y Y2µ 2 Y2 + 1 2 Y µ2. (3.194) One can then solve for µ0 in terms of µ1, which yields µ0 = −µ1x − Y1µ1Y − Y2µ1Y1 + 1 2 Y Y2µ 1 Y2 . (3.195) The substitution of (3.194) and (3.195) into (3.193a)-(3.193d) yields the 68 3.4. General ODEs following system after simplification with Maple: µ2xxx =− 2Y Y1Y2µ2Y Y1 + 3 4 Y Y1Y 2 2 µ 2 Y2Y2 − 2Y2µ2 − 2Y Y2µ2Y − 3Y1Y2µ2Y1+ + Y1µ2x + Y 2 1 µ 2 Y − 5 2 Y 22 µ 2 Y2 + 3 2 Y Y1Y2µ 2 Y2 − Y 2Y2µ2Y1+ + 5 8 Y 3Y 22 µ 2 Y2Y2 + 6Y1Y2µ 2 Y Y − 9 4 Y 2Y 22 µ 2 Y1Y2 + 2Y Y 2 2 µ 2 Y1Y1− − 3 2 Y1Y 2 2 µ 2 Y1Y2 + 1 2 Y µ2xx − 3Y1µ2xxY − 3Y 21 µ2xY Y + 6Y2µ2xY + + 3Y 22 µ 2 xY1Y1 − Y 31 µ2Y Y Y + 9Y 22 µ2Y Y1 + 2Y 32 µ2Y1Y1Y1 − 3 2 Y Y 22 µ 2 Y Y2+ + 1 2 Y Y 21 µ 2 Y Y − 2Y Y2µ2xY1 + 1 8 Y 3Y 32 µ 2 Y2Y2Y2 − 3 4 Y 2Y 32 µ 2 Y1Y2Y2+ + Y Y1µ2xY + 3Y1Y 2 2 µ 2 Y Y1Y1 + 1 2 Y 3Y2µ 2 Y2 , (3.196a) µ2xxY1 =− 3µ2xY + µ2 − 2Y2µ2xY1Y1 + 1 2 Y Y1µ 2 Y Y1 − 2Y1Y2µ2Y Y1Y1+ + 1 2 Y Y2µ 2 Y Y2 + 1 2 Y1Y2µ 2 Y1Y2 + 1 4 Y 2Y 22 µ 2 Y1Y2Y2 − Y Y2µ2Y1Y1+ + 1 2 Y 2Y2µ 2 Y1Y2 − 3Y1µ2Y Y + 1 2 Y µ2xY1 + Y µ 2 Y + 1 2 Y1µ 2 Y1− − Y2µ2Y2 − 4Y2µ2Y Y1 − Y 22 µ2Y1Y1Y1 − 2Y1µ2xY Y1 − Y 21 µ2Y Y Y1 , (3.196b) µ2xY2 =− 2µ2Y1 + Y µ2Y2 − Y1µ2Y Y2 − Y2µ2Y1Y2 + 1 2 Y Y2µ 2 Y2Y2 . (3.196c) which is clearly equivalent to the IFDE (3.193) when µ2 = Λ. In the proof of Theorem 3.23 it was shown that the mapping Λ 7→ (µ0(x, Y ,Λ,Λx,ΛY , . . .), . . . , µn−2(x, Y ,Λ,Λx,ΛY , . . .),Λ), (3.197) which takes each solution of the IFDE (3.187) into a solution of the sys- tem multiplier determining equations (3.109), is invertible; thus implying that all local symmetries of the IFDE (3.187) correspond to local symme- tries of the system multiplier determining equations (3.109) and conversely, all local symmetries of the system multiplier determining equations (3.109) 69 3.4. General ODEs correspond to local symmetries of the IFDE (3.187) [4]. Hence, from Theo- rem 3.20 it follows that that all local symmetries of the IFDE (3.187) must project onto point symmetries of (3.10). In particular, one has the following result. Theorem 3.26. 
A point symmetry X = ξ(x, Y ) ∂ ∂x + η(x, Y ) ∂ ∂Y + ηi(x, Y ) ∂ ∂Y i + n−1∑ i,j=0 [αijµ j ] ∂ ∂µi (3.198) of the system multiplier determining equations (3.109) yields a point sym- metry X† = ξ(x, Y ) ∂ ∂x + η(x, Y ) ∂ ∂Y + ηi(x, Y ) ∂ ∂Y i + s(x, Y )Λ ∂ ∂Λ (3.199) of the IFDE (3.187) for some s(x, Y ) if and only if αn−1j ≡ 0, j = 0, . . . , n− 2. Moreover, every point symmetry (3.199) of the IFDE (3.187) yields a point symmetry of the system multiplier determining equations (3.109). Proof. Suppose X = ξ ∂ ∂x + η ∂ ∂Y + ηi ∂ ∂Y i + n−1∑ i,j=0 [αijµ j ] ∂ ∂µi (3.200) is a point symmetry of the system multiplier detemining equations (3.109). Let ∂kΛ denote the kth order partial derivatives of Λ with respect to x, Y, Y1, . . . , Yn−1. Then, from the construction of the invertible mapping in the remarks following the proof of Theorem 3.23, X must correspond to a 70 3.4. General ODEs higher-order symmetry of the scalar IFDE (3.108) given by X† = ξ ∂ ∂x + η ∂ ∂Y + ηi ∂ ∂Y i + +  n−2∑ j=0 [αn−1j (x, Y )µ j(x, Y ,Λ, ∂Λ, . . . , ∂nΛ)] + αn−1n−1(x, Y )Λ  ∂∂Λ (3.201) where the functions {µi(x, Y ,Λ, ∂Λ, . . . , ∂nΛ)}n−20 are defined recursively by (3.185) with µn−1 = Λ. Since µi(x, Y ,Λ, ∂Λ, . . . , ∂nΛ) has an essential dependence on the [(n−1)−i]th order derivatives of Λ and no dependence on higher-order derivatives of Λ, it is clear that (3.201) is a point symmetry of (3.187) if and only if αn−1j (x, Y ) ≡ 0 for all j = 0, . . . , n− 2. If αn−1j (x, Y ) 6= 0 for some j ∈ {0, . . . , n − 2} then it follows that X†, defined by (3.201), is a higher-order symmetry of the scalar IFDE. Now it is shown that every point symmetry X† = ξ(x, Y ) ∂ ∂x + η(x, Y ) ∂ ∂Y + n−1∑ i=0 ηi(x, Y ) ∂ ∂Yi + s(x, Y )Λ ∂ ∂Λ (3.202) of the scalar IFDE (3.187) induces a point symmetry X = ξ(x, Y ) ∂ ∂x +η(x, Y ) ∂ ∂Y + n−1∑ i=1 ηi(x, Y ) ∂ ∂Yi + n−1∑ i,j=0 αij(x, Y )µ j ∂ ∂µi (3.203) of the corresponding scalar multiplier determining equations (3.109) for some functions αij(x, Y ). Suppose (3.202) is a point symmetry of the scalar IFDE (3.187). Then, using the invertible mapping constructed in the remarks fol- lowing the proof of Theorem 3.23, one can construct the corresponding local symmetry of the system multiplier determining equations (3.109) in the fol- lowing way. First let X†(n) denote the prolongation of X† (3.202) to a vector field acting on (x, Y ,Λ, ∂Λ, . . . , ∂nΛ)-space, and let µ0(x, Y ,Λ, ∂Λ, . . . , ∂nΛ), . . . , µn−1(x, Y ,Λ, ∂Λ, . . . , ∂nΛ) be the image of Λ under the invertible map- 71 3.4. General ODEs ping (3.197). Specifically, the functions µ0, . . . , µn−1 are defined recursively using (3.185) as follows µn−1(x, Y ,Λ, ∂Λ, . . . , ∂nΛ) = Λ, µn−2(x, Y ,Λ, ∂Λ, . . . , ∂nΛ) = −D̃Λ− ΛfYn−1 , µn−3(x, Y ,Λ, ∂Λ, . . . , ∂nΛ) = −D̃[−D̃Λ− ΛfYn−1 ]− ΛfYn−2 , ... µ0(x, Y ,Λ, ∂Λ, . . . , ∂nΛ) = −D̃[µ1(x, Y ,Λ, ∂Λ, . . . , ∂nΛ)]− ΛfY1 , (3.204) where D̃ is defined by (3.186). Through (3.204) one can extend X†(n) to act on (x, Y ,Λ, ∂Λ, . . . , ∂nΛ, µ0, . . . , µn−1)-space, producing X†(n) + n−1∑ i=0 X†(n)[µi(x, Y , ∂Λ, . . . , ∂nΛ)] ∂ ∂µi . (3.205) It is important to note that if Λ is a solution of the scalar IFDE (3.187), then the functions µ0, . . . , µn−1 defined by (3.204) solve the system multiplier de- termining equations (3.109). Moreover, for each solution (µ0, . . . , µn−1) of the system multiplier determining equations (3.109), there exists a unique Λ such that (3.204) holds and Λ satisfies the scalar IFDE (3.187). 
Hence, the point transformation with infinitesimal generator (3.205) must map each so- lution of the system multiplier determining equations (3.109) into another, possibly distinct, solution and thus corresponds to a symmetry of the sys- tem multiplier determining equations (3.109). Substituting Λ = µn−1 one obtains a corresponding transformation of (x, Y , µ0, . . . , µn−1)-space by pro- 72 3.4. General ODEs jection. By construction the resulting transformation X = ξ(x, Y ) ∂ ∂x + η(x, Y ) ∂ ∂Y + n−1∑ i=0 ηi(x, Y ) ∂ ∂Yi + + n−1∑ i=0 { X†(n)[µi(x, Y , ∂Λ, . . . , ∂nΛ)] }∣∣∣ Λ=µn−1, ∂Λ=∂µn−1, ..., ∂nΛ=∂nµn−1 ∂ ∂µi , (3.206) must also be a symmetry of the system multiplier determining equations (3.109). The transformation X defined by (3.206) is a point transformation of (x, Y , µ0, . . . , µn−1)-space if and only if the coefficients{ X†(n)[µi(x, Y , ∂Λ, . . . , ∂nΛ)] }∣∣∣ Λ=µn−1, ∂Λ=∂µn−1, ..., ∂nΛ=∂nµn−1 (3.207) where i = 0, . . . , n − 1, can be made independent of the components of ∂µn−1, . . . , ∂nµn−1 through substitution of the system multiplier determin- ing equations (3.109). Since the system multiplier determining equations (3.109) are linear in µ0, . . . , µn−1, the transformation X defined by (3.206) is a point symmetry of the system multiplier determining equations (3.109) if and only if it is of the form (3.203). Hence, one must show that there exist functions αij(x, Y ), i, j = 0, . . . , n− 1, such that X[µi] = n−1∑ j=0 αij(x, Y )µ j , i = 0, . . . , n− 1 (3.208) on solutions of the system multiplier determining equations (3.109). This is shown inductively as follows. 73 3.4. General ODEs First note that since µn−1(x, Y ,Λ, ∂Λ, . . . , ∂nΛ) = Λ it follows immedi- ately that Xµn−1 = [ X†(n)µn−1(x, Y ,Λ) ]∣∣∣ Λ=µn−1, ∂Λ=∂µn−1, ..., ∂nΛ=∂nµn−1 = [s(x, Y )Λ]|Λ=µn−1, ∂Λ=∂µn−1, ..., ∂nΛ=∂nµn−1 = s(x, Y )µn−1. (3.209) Then, fixing 0 ≤ i < n − 1, suppose there exist functions αi+1j (x, Y ) such that X†(n)[µi+1(x, Y ,Λ, ∂Λ, . . . , ∂nΛ)] = n−1∑ j=0 αi+1j (x, Y )µ j(x, Y ,Λ, ∂Λ, . . . , ∂nΛ) (3.210) on solutions of the scalar IFDE (3.187). From (3.206) it follows that Xµi+1 = n−1∑ j=0 αi+1j (x, Y )µ j (3.211) on solutions of the system multiplier determining equations (3.109). The substitution of µn−1 = Λ into the equation (3.185) yields µi(x, Y ,Λ, ∂Λ, ∂nΛ) = −D̃ [µi+1(x, Y ,Λ, ∂Λ, ∂nΛ)]− ΛfYi+1 (3.212) and hence after applying X†(n) to both sides of (3.212) one finds X†(n) [ µi(x, Y ,Λ, ∂Λ, ∂nΛ) ] = −X†(n) [ D̃µi+1(x, Y ,Λ, ∂Λ, ∂nΛ) ] − − X†(n)[ΛfYi+1 ]. (3.213) 74 3.4. General ODEs The application of X†(n) to the second term on the right-hand side of (3.212) yields X†(n)[ΛfYi+1 ] = fYi+1s(x, Y )Λ + Λ fxYi+1ξ + fY Yi+1η + n−1∑ j=1 fYjYi+1η j  (3.214) from which it follows that{ X†(n)[ΛfYi+1 ] }∣∣∣ Λ=µn−1, ∂Λ=∂µn−1, ..., ∂nΛ=∂nµn−1 = µn−1 fYi+1s(x, Y ) + fxYi+1ξ + fY Yi+1η + n−1∑ j=1 fYjYi+1η j  . (3.215) Before proceeding further it is necessary to derive an expression for how the functions µi+1x (x, Y ,Λ, ∂Λ, . . . , ∂ nΛ), µi+1Y (x, Y ,Λ, ∂Λ, . . . , ∂ nΛ), µi+1Y1 (x, Y ,Λ, ∂Λ, . . . , ∂ nΛ), . . . , µi+1Yn−1(x, Y ,Λ, ∂Λ, . . . , ∂ nΛ), transform un- der the action of X†(n). Let x∗ = x∗(x, Y ; ), Y ∗ = Y ∗(x, Y ; ), Λ∗ = Λ∗(x, Y ,Λ; ), (∂Λ)∗ = ∂Λ∗(x, Y ,Λ, ∂Λ; ), ... (∂nΛ)∗ = ∂nΛ∗(x, Y ,Λ, ∂Λ, . . . , ∂nΛ; ), (3.216) be the one-parameter Lie-group of transformations with infinitesimal oper- ator X†(n). If (µi+1)∗ = µi+1(x∗, Y ∗,Λ∗, (∂Λ)∗, . . . , (∂nΛ)∗) 75 3.4. General ODEs is the image of µi+1(x, Y ,Λ, ∂Λ, . . . 
, ∂nΛ) under the transformation (3.216) then the contact conditions imply that d(µi+1)∗ = (µi+1x ) ∗dx∗, d(µi+1)∗ = (µi+1Y ) ∗dY ∗, d(µi+1)∗ = (µi+1Yj ) ∗dY ∗j , j = 1, . . . , n− 1. (3.217) Re-writing the contact conditions (3.217) in terms of the infinitesimal gen- erator X†(n) [3], one obtains X†(n)[µi+1x ] = D†x[X† (n) (µi+1)]− ξxµi+1x − ηxµi+1Y − n−1∑ j=1 ηjxµ i+1 Yj , X†(n)[µi+1Y ] = D † 0[X †(n)(µi+1)]− ξY µi+1x − ηY µi+1Y − n−1∑ j=1 ηjY µ i+1 Yj , X†(n)[µi+1Yk ] = D † k[X †(n)(µi+1)]− ξYkµi+1x − ηYkµi+1Y − n−1∑ j=1 ηjYkµ i+1 Yj , k = 1, . . . , n− 1 (3.218) where D†x, D†0, . . . , D † n−1 are given by (3.141). Noting that D̃µ i+1 = µi+1x + Y1µ i+1 Y + ∑n−1 j=1 µ i+1 j D̃Yj it follows that X†(n)[D̃µi+1] = X†(n) µi+1x + Y1µi+1Y + n−1∑ j=1 µi+1j D̃Yj  (3.219a) = X†(n)(µi+1x ) + Y1X†(n)(µi+1Y ) + n−1∑ j=1 [D̃(Yj)]X† (n) (µi+1Yj ) + (3.219b) + µi+1Y η1 + n−2∑ j=1 ηj+1µi+1Yj + µ i+1 Yn−1 ( ξfx + ηfY + n−1∑ k=1 fYkη k ) . 76 3.4. General ODEs Using (3.218) one can re-write the first term on the right-hand side of (3.219b) to giveX†(n)(µi+1x ) + Y1X†(n)(µi+1Y ) + n−1∑ j=1 [D̃(Yj)]X† (n) (µi+1Yj )  = D†x[X†(n)(µi+1)]− ξxµi+1x − ηxµi+1Y − n−1∑ j=1 ηjxµ i+1 Yj + Y1 D†0[X†(n)(µi+1)]− ξY µi+1x − ηY µi+1Y − n−1∑ j=1 ηjY µ i+1 Yj + n−1∑ k=1 [D̃(Yk)] D†k[X†(n)(µi+1)]− ξYkµi+1x − ηYkµi+1Y − n−1∑ j=1 ηjYkµ i+1 Yj  . (3.220) Changing the order of summation in (3.220), one obtainsX†(n)(µi+1x ) + Y1X†(n)(µi+1Y ) + n−1∑ j=1 [D̃(Yj)]X† (n) (µi+1Yj )  = ( D†x[X† (n) (µi+1)] + Y1D † 0[X †(n)(µi+1)] + n−1∑ k=1 [D̃(Yk)]D † k[X †(n)(µi+1)] ) − − ( ξxµ i+1 x + Y1ξY µ i+1 x + n−1∑ k=1 [D̃(Yk)]ξYkµ i+1 x ) − − ( ηxµ i+1 Y + Y1ηY µ i+1 Y + n−1∑ k=1 [D̃(Yk)]ηYkµ i+1 Y ) − − n−1∑ j=1 ( ηjxµ i+1 Yj + Y1η j Y µ i+1 Yj + n−1∑ k=1 [D̃(Yk)]η j Yk µi+1Yj ) . (3.221) 77 3.4. General ODEs It follows from the definition of D̃ given in (3.186) that D̃ [ X†(n)(µi+1) ] = ( D†x[X† (n) (µi+1)] + Y1D † 0[X †(n)(µi+1)]+ + n−1∑ k=1 [D̃(Yk)]D † k[X †(n)(µi+1)] ) , µi+1x D̃(ξ) = ( ξxµ i+1 x + Y1ξY µ i+1 x + n−1∑ k=1 [D̃(Yk)]ξYkµ i+1 x ) , µi+1Y D̃(η) = ( ηxµ i+1 Y + Y1ηY µ i+1 Y + n−1∑ k=1 [D̃(Yk)]ηYkµ i+1 Y ) , µi+1Yj D̃(η j) = ( ηjxµ i+1 Yj + Y1η j Y µ i+1 Yj + n−1∑ k=1 [D̃(Yk)]η j Yk µi+1Yj ) , j = 1, . . . , n− 1, (3.222) and thusX†(n)(µi+1x ) + Y1X†(n)(µi+1Y ) + n−1∑ j=1 [D̃(Yj)]X† (n) (µi+1Yj )  = D̃ [X†(n)(µi+1)] − µi+1x D̃(ξ)− µi+1Y D̃(η)− n−1∑ j=1 µi+1Yj D̃(η j). (3.223) Using (3.210) the first term on the right-hand side of (3.223) becomes D̃ [ X†(n)(µi+1) ] = D̃ n−1∑ j=0 αi+1j (x, Y )µ j  = n−1∑ j=0 [ µjD̃ ( αi+1j (x, Y ) )] + n−1∑ j=0 [ αi+1j (x, Y )D̃ ( µj )] . (3.224) 78 3.4. General ODEs To simplify the second term on the right-hand side of (3.224) note that from (3.185) when µn−1 = Λ one has that D̃µ0 = −ΛfY , D̃µj = −µj−1 − ΛfYj , j = 1, . . . , n− 1. (3.225) Thus, it follows that n−1∑ j=0 [ αi+1j (x, Y )D̃ ( µj )] = − n−1∑ j=1 αi+1j (x, Y )µ j−1 − n−1∑ j=0 αi+1j (x, Y )ΛfYj (3.226) which, upon substitution into the second term on the right-hand side of (3.224) yields D̃ [ X†(n)(µi+1) ] = n−1∑ j=0 [ µjD̃ ( αi+1j (x, Y ) )] − − n−1∑ j=1 αi+1j (x, Y )µ j−1 − n−1∑ j=0 αi+1j (x, Y )ΛfYj . (3.227) Substituting (3.227) into (3.223) one obtainsX†(n)(µi+1x ) + Y1X†(n)(µi+1Y ) + n−1∑ j=1 [D̃(Yj)]X† (n) (µi+1Yj )  = n−1∑ j=0 [ µjD̃ ( αi+1j (x, Y ) )] − n−1∑ j=1 αi+1j (x, Y )µ j−1 − n−1∑ j=0 αi+1j (x, Y )ΛfYj− − µi+1x D̃(ξ)− µi+1Y D̃(η)− n−1∑ j=1 µi+1Yj D̃(η j). (3.228) One can then use the fact that µi+1x = −Y1µi+1Y − n−1∑ j=1 [D̃(Yj)]µi+1Yj − ΛfYi+1 , 79 3.4. 
General ODEs which follows from the system multiplier determining equations (3.109), to write (3.228) asX†(n)(µi+1x ) + Y1X†(n)(µi+1Y ) + n−1∑ j=1 [D̃(Yj)]X† (n) (µi+1Yj )  = n−1∑ j=0 [ µjD̃ ( αi+1j (x, Y ) )] − n−1∑ j=1 αi+1j (x, Y )µ j−1 − n−1∑ j=0 αi+1j (x, Y )ΛfYj− − µi+1Y [D̃(η)− D̃(Y )D̃(ξ)]− n−1∑ j=1 µi+1Yj [D̃(η j)− D̃(Yj)D̃(ξ)]. (3.229) The substitution of (3.229) into the first term on the right-hand side of (3.219b) yields X†(n)[D̃µi+1] = n−1∑ j=0 [ µjD̃ ( αi+1j (x, Y ) )] − n−1∑ j=1 αi+1j (x, Y )µ j−1− − n−1∑ j=0 αi+1j (x, Y )ΛfYj − µi+1Y [D̃(η)− D̃(Y )D̃(ξ)]− − n−1∑ j=1 µi+1Yj [D̃(η j)− D̃(Yj)D̃(ξ)]+ + µi+1Y η1 + n−2∑ j=1 ηj+1µi+1Yj + µ i+1 Yn−1 ( ξfx + ηfY + n−1∑ k=1 fYkη k ) . (3.230) 80 3.4. General ODEs The terms on the right-hand side of (3.230) can be re-ordered to give X†(n)[D̃µi+1] = n−1∑ j=0 [ µjD̃ ( αi+1j (x, Y ) )] − n−1∑ j=1 αi+1j (x, Y )µ j−1− − n−1∑ j=0 αi+1j (x, Y )ΛfYj + µ i+1 Y { η1 − [D̃(η)− D̃(Y )D̃(ξ)] } − − n−2∑ j=1 µi+1Yj { ηj+1 − [D̃(ηj)− D̃(Yj)D̃(ξ)] } + + µi+1Yn−1 ( ξfx + ηfY + n−1∑ k=1 fYkη k − [ D̃(ηn−1)− D̃(Yn−1)D̃(ξ) ]) . (3.231) In the remarks following Theorem 3.23 it was shown that all local symme- tries of the scalar IFDE (3.187) must project onto higher-order symmetries of the scalar ODE (2.1). In particular, since Xp = ξ(x, Y ) ∂ ∂x + η(x, Y ) ∂ ∂Y + n−1∑ j=1 ηj(x, Y ) ∂ ∂Yj is the projection of X (3.198) onto a transformation of (x, Y, Y1, . . . , Yn−1)- space, its coefficients ξ, η, η1, . . . , ηn−1 satisfy the symmetry determining equations for the scalar ODE (2.1) given by η1 = D̃(η)− Y1D̃ξ, ηj+1 = D̃(ηj)− D̃(Yj)D̃ξ, j = 1, . . . , n− 2, fxξ + fY η + n−1∑ j=1 fYjη j = D̃(ηn−1)− D̃(Yn−1)D̃ξ. (3.232) 81 3.4. General ODEs The substitution of (3.232) into the right-hand side of (3.231) yields X†(n)[D̃µi+1] = n−1∑ j=0 [ µjD̃ ( αi+1j (x, Y ) )] − n−1∑ j=1 αi+1j (x, Y )µ j−1− − n−1∑ j=0 αi+1j (x, Y )ΛfYj . (3.233) Hence the substitution of (3.233) and (3.214) into the right-hand side of (3.213) gives X†(n)[µi(x, Y ,Λ, ∂Λ, . . . , ∂Λ)] = − n−1∑ j=0 [ µjD̃ ( αi+1j (x, Y ) )] + + n−1∑ j=1 αi+1j (x, Y )µ j−1 + n−1∑ j=0 αi+1j (x, Y )ΛfYj − fYi+1s(x, Y )Λ− − Λ fxYi+1ξ + fY Yi+1η + n−1∑ j=1 fYjYi+1η j  . (3.234) Finally, from (3.234) it follows that X[µi] = { X†(n)[µi(x, Y ,Λ, ∂Λ, . . . , ∂nΛ)] }∣∣∣ Λ=µn−1,∂Λ=∂µn−1,...,∂nΛ=∂nµn−1 = − n−2∑ j=0 [ αi+1j+1(x, Y )− D̃ ( αi+1j (x, Y ) )] µj+ + −D̃ (αi+1n−1(x, Y ))+ n−1∑ j=0 αi+1j (x, Y )fYj − fYi+1s(x, Y ) − − fxYi+1ξ + fY Yi+1η + n−1∑ j=1 fYjYi+1η j µn−1 (3.235) on solutions of the system multiplier determining equations (3.109) and is of the required form (3.208). Thus if there exist functions αi+1j (x, Y ) such 82 3.4. General ODEs that X[µi+1] = n−1∑ j=0 αi+1j (x, Y )µ j then there exist functions αij(x, Y ) such that X[µi] = n−1∑ j=0 αij(x, Y )µ j on solutions of the system multiplier determining equations (3.109). Noting that X[µn−1] = s(x, Y )µn−1 it follows by induction that X[µi] = n−1∑ j=0 αij(x, Y )µ j , i = 0, . . . , n− 1 and hence that all point symmetries of the scalar IFDE (3.187) correspond to point symmetries of the system multiplier determining equations (3.109). Example 3.27. In Example 2.6, it was shown that the ODE y′′ = 0 (3.236) has the corresponding scalar IFDE Λxx + 2Y1ΛxY + Y 21 ΛY Y = 0, 2ΛY + Y1ΛY Y1 + ΛxY1 = 0. 
(3.237) Using the GeM package for MapleTM one finds that the point transformation of (x, Y, Y1,Λ)-space X = ξ(x, Y, Y1) ∂ ∂x + η(x, Y, Y1) ∂ ∂Y + η1(x, Y, Y1) ∂ ∂Y1 + s(x, Y, Y1)Λ ∂ ∂Λ (3.238) is a symmetry of the scalar IFDE (3.237) if and only if its coefficients ξ, η, η1, 83 3.4. General ODEs and s satisfy the symmetry determining equations ηY1 = ξY1Y1, (3.239a) η1 = ηx + Y1ηY − Y1(ξx + Y1ξY ), (3.239b) η1x + Y1η 1 Y = 0, (3.239c) s = −η1Y1 + c. (3.239d) Hence, from (3.239a)-(3.239c) it follows that all symmetries of the scalar IFDE (3.237) project onto contact symmetries of the ODE (3.236). Con- versely, all contact symmetries Xp = ξ(x, Y, Y1) ∂ ∂x + η(x, Y, Y1) ∂ ∂Y + η1(x, Y, Y1) ∂ ∂Y1 yield a point symmetry (3.238) of the scalar IFDE (3.237) where s = −η1Y1 . In particular, the scalar IFDE (3.237) admits the point symmetry X† = x2 ∂ ∂x + xY ∂ ∂Y + (Y − xY1) ∂ ∂Y1 + xΛ ∂ ∂Λ . (3.240) Let µ1(x, Y, Y1,Λ) = Λ and µ0(x, Y, Y1,Λ,Λx,ΛY ) = −Λx − Y1ΛY . The first prolongation of X† (3.240) is given by X†(1) =X† + [−Λxx− Λ− Y ΛY + Y1ΛY1 ] ∂ ∂Λx − − ΛY1 ∂ ∂ΛY + 2xΛY1 ∂ ∂ΛY1 and hence X†(1)µ0(x, Y, Y1,Λ, ∂Λ) = [Λxx−Λ +Y ΛY −Y1ΛY1 +Y1ΛY1 − (Y − xY1)ΛY ] 84 3.4. General ODEs which can be simplified to yield X†(1)µ0(x, Y, Y1,Λ,Λx,ΛY ) = −Λ− x[−Λx − Y1ΛY ]. Thus, the point symmetry (3.240) of the scalar IFDE (3.237) induces a corresponding point symmetry of the system multiplier determining equa- tions given by X = x2 ∂ ∂x + xY ∂ ∂Y + (Y − xY1) ∂ ∂Y1 − [xµ0 + µ1] ∂ ∂µ0 + xµ1 ∂ ∂µ1 . (3.241) In order to use X† to find particular solutions of the scalar IFDE (3.237) one first determines differential invariants arising as constants of integration of dx x2 = dY xY = dY1 Y − xY1 = dΛ xΛ + r (3.242) where the additional parameter r is introduced because the scalar IFDE (3.237) is linear and thus has a scaling symmetry. Integrating (3.242) one obtains s = Y x , t = Y1x− Y, Λ = −r 2x + C3. (3.243) Letting C3 = f(s, t) and substituting Λ(x, Y, Y1) = −r2x+f(s(x, Y, Y1), t(x, Y, Y1)) into the scalar IFDE (3.237) one obtains r = 0, f(s, t) = cs t2 + h(t), (3.244) where c is an arbitrary constant and h is an arbitrary function of t. Thus Λ(x, Y, Y1) = cY (Y − xY1)2 + xh(Y − xY1) 85 3.4. General ODEs are integrating factors of the scalar ODE (3.236) yielding the two function- ally independent first integrals φ1(x, y, y′) = y′ = a, φ2(x, y, y′) = y − xy′ = b. Example 3.28. As a second example consider the second-order nonlinear ODE [9] 2 ( K(y)y′ )′ + xy′ = 0 (3.245) arising in the study of plasma physics [1] and polymer science [15]. Adapting a result from [8], one can show that if K(y) = 1 1 + y2 , then (3.245) admits the contact symmetry X = ( 2K(y)y′ + xy ) ∂ ∂x − (1 + y2) ∂ ∂y − 3yy′ ∂ ∂y′ . (3.246) One can then find the induced point symmetry of the system multiplier determining equations X′ = (2K(Y )Y1 + xY ) ∂ ∂x − (1 + Y 2) ∂ ∂Y − 3Y Y1 ∂ ∂Y1 + + (α00µ 0 + α01µ 1) ∂ ∂µ0 + (α10µ 0 + α11µ 1) ∂ ∂µ1 (3.247) where the coefficients α00(x, Y, Y1), α 0 1(x, Y, Y1), α 1 0(x, Y, Y1), and α 1 1(x, Y, Y1) are given by the equations (3.172) when n = 2 and f(x, Y, Y1) = −(1 + Y 2) 2 ( −2Y Y 21 (1 + Y 2)2 + xY1 ) . 86 3.4. General ODEs Explicitly, one finds that X′ = (2K(Y )Y1 + xY ) ∂ ∂x − (1 + Y 2) ∂ ∂Y − 3Y Y1 ∂ ∂Y1 + + [( 2Y − 4Y Y 2 1 (1 + Y 2)2 + xY1 + c ) µ0+ + ( 3Y1 − 2K ′(Y )Y 21 + xY1 2K(Y ) (2K ′(Y )Y1 + x) ) µ1 ] ∂ ∂µ0 + + [ 2Y1 1 + Y 2 µ0 + (3Y − 2Y 21 K ′(Y )− xY1 + c)µ1 ] ∂ ∂µ1 , (3.248) where c is a constant. 
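Before integrating the characteristic system below, it is worth confirming independently that (3.246) is indeed a contact symmetry of (3.245) when K(y) = 1/(1 + y²). The following sympy sketch performs such a check (it is not the GeM computation; the solved form y'' = f(x, y, y') is re-derived from the ODE inside the script rather than copied from the display above, and p denotes y'):

import sympy as sp

x, y, p, ypp = sp.symbols('x y p ypp')    # p stands for y'
K = 1/(1 + y**2)

# Solved form y'' = f(x, y, p), obtained directly from 2*(K(y)*y')' + x*y' = 0.
f = sp.solve(sp.Eq(2*(sp.diff(K, y)*p**2 + K*ypp) + x*p, 0), ypp)[0]
f = sp.simplify(f)

# Total derivative restricted to solutions of the ODE.
Dt = lambda F: sp.diff(F, x) + p*sp.diff(F, y) + f*sp.diff(F, p)

# Coefficients of the contact symmetry (3.246).
xi   = 2*K*p + x*y
eta  = -(1 + y**2)
eta1 = -3*y*p

# Contact condition: eta1 = Dt(eta) - p*Dt(xi) on solutions.
contact = sp.simplify(eta1 - (Dt(eta) - p*Dt(xi)))

# Linearized symmetry condition for y'' = f:
#   Dt(eta1) - f*Dt(xi) = xi*f_x + eta*f_y + eta1*f_p  on solutions.
determining = sp.simplify(Dt(eta1) - f*Dt(xi)
                          - (xi*sp.diff(f, x) + eta*sp.diff(f, y) + eta1*sp.diff(f, p)))

print(contact, determining)               # both residuals simplify to 0

Both residuals vanish, so (3.246) satisfies the contact condition and the symmetry determining equation on solutions of (3.245), as claimed.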
Using MapleTM, one can find differential invariants arising as constants of integration of the system dx 2K(Y )Y1 + xY = dY −(1 + Y 2) = dY1 −3Y Y1 = dµ0[( 2Y − 4Y Y 21 (1+Y 2)2 + xY1 + c ) µ0 + ( 3Y1 − 2K ′(Y )Y 21 +xY1 2K(Y ) (2K ′(Y )Y1 + x) ) µ1 ] = dµ1[ 2Y1 1+Y 2 µ0 + (3Y − 2Y 21 K ′(Y )− xY1 + c)µ1 ] . (3.249) Integration of (3.249) yields C1 = Y1 (1 + Y 2) 3 2 , C2 = √ 1 + Y 2x+ 2Y Y1 (1 + Y 2) 3 2 , µ1 = ec arctanY (1 + Y 2) 3 2 [C3Y + C4] , (3.250) and an additional equation for µ0 in terms of c, C3, C4, x, Y, and Y1. From Theorem 3.23, if {µ0(x, Y, Y1), µ1(x, Y, Y1)} is a set of multipliers of the sys- tem multiplier determining equations corresponding to the ODE (3.245), then Λ = µ1 is a solution of the corresponding scalar integrating fac- 87 3.4. General ODEs tor determining equations. Letting s(x, Y, Y1) = Y1 (1+Y 2) 3 2 , t(x, Y, Y1) = √ 1 + Y 2x + 2Y Y1 (1+Y 2) 3 2 , C3 = f(s, t), and C4 = g(s, t) one obtains the fol- lowing ansatz for the integrating factors of (3.249) Λ = ec arctanY [Y f(s(x, Y, Y1), t(x, Y, Y1)) + g(s(x, Y, Y1), t(x, Y, Y1))] (1 + Y 2) 3 2 . (3.251) After substituting (3.251) into the scalar integrating factor determining equations of (3.245) one can solve the resulting system for f(s, t), g(s, t) and c to obtain c = 0, f(s, t) = ∂h ∂t , g(s, t) = ∂h ∂s , h(s, t) = k + r ( s2e 1 2 (s2+t2) ) , (3.252) which yields the integrating factor Λ(x, Y, Y1) = r ( s2(x, Y, Y1)e 1 2 (s2(x,Y,Y1)+t2(x,Y,Y1)) ) e 1 2 (s2(x,Y,Y1)+t2(x,Y,Y1)) (1 + Y 2) 3 2 × × [Y t(x, Y, Y1)s2(x, Y, Y1) + s(x, Y, Y1) (2 + s2(x, Y, Y1))] (3.253) where s(x, Y, Y1) = Y1 (1+Y 2) 3 2 and t(x, Y, Y1) = √ 1 + Y 2x+ 2Y Y1 (1+Y 2) 3 2 . 88 Chapter 4 Conclusions and Further Work In Part 1 of this thesis the relationship between the symmetries of a scalar ODE in solved form and the symmetries of its corresponding integrating factor determining equations (IFDE) was examined. In particular it was shown that every higher-order symmetry of the scalar ODE induces a Lie point symmetry of the corresponding IFDE. An explicit formula for this induced point symmetry was obtained, generalizing a result from [3] where a formula was derived for the symmetries of the IFDE induced by Lie point symmetries of the original scalar ODE. Using this formula an integrating factor was found for the scalar ODE 2 ( y′ 1 + y2 )′ + xy′ = 0 (4.1) which is invariant under the point symmetry of the IFDE induced by the contact symmetry of (4.1) X = ( 2 y′ 1 + y2 + xy ) ∂ ∂x − (1 + y2) ∂ ∂y − 3yy′ ∂ ∂y′ . (4.2) Conversely, it was shown that all point symmetries of the IFDE which are of the form ξ(x, Y ) ∂ ∂x + η(x, Y ) ∂ ∂Y + n−1∑ j=1 ηj(x, Y ) ∂ ∂Yj + Λh(x, Y ) ∂ ∂Λ (4.3) must project onto higher-order symmetries of the corresponding scalar ODE. 89 Chapter 4. Conclusions and Further Work Hence the symmetry ansatz approach in which one uses higher-order sym- metries of the scalar ODE to find point symmetries of the corresponding IFDE is equivalent to analyzing the symmetry determining equations of the IFDE for point symmetries of the form (4.3). In light of the connection demonstrated between symmetries of a scalar ODE and its corresponding integrating factor determining equations, there are many ways in which to proceed in future research. Firstly, one could use the formula derived here to find new integrating factors for scalar ODEs which admit only contact and higher-order symmetries, potentially allow- ing one to find new general solutions to ODEs. 
Secondly, one could seek to extend the results presented here to either include all point symmetries of the IFDE or to show that all such point symmetries, except those cor- responding to the addition of a particular solution, can be written in the form (4.3). If this were true, then it would imply that one does not miss useful point symmetries of the IFDE by considering only those point sym- metries which are induced by higher-order symmetries of the corresponding scalar ODE. Finally, it would be interesting to see if a similar result occurs for partial differential equations (PDEs). Namely, do there exist PDEs for which the corresponding multiplier determining equations admit point sym- metries which are not scalings of the independent variables, the addition of a particular solution, or induced by a higher-order symmetry of the original PDE? 90 Part II Analysis of an Exact Solution of a Wave Equation 91 Chapter 5 The Wave Equation 5.1 Introduction In this chapter, the wave equation and its applications are discussed. Clas- sical results for the one-dimensional wave equation are presented in Section 5.2 along with the role of the wave equation in seismology, acoustics, elastic- ity theory, solid mechanics and electromagnetism. In Section 5.3 the energy of the wave equation is precisely defined and a proof of the conservation of total energy is given. Finally, in Section 5.4, by defining the reflection and transmission coefficients the transfer of energy in two-layered media is quantified and the Fresnel equations are presented. 5.2 The Wave Equation and its Applications The wave equation utt(~x, t) = c2(~x, t, u)∇2u(~x, t) (5.1) is an important tool for modelling physical problems in acoustics, electro- magnetics, fluid mechanics and seismology in the absence of dissipative and driving forces. Here subscripts denote partial differentiation, t rep- resents time, and ~x = (x1, . . . , xm) is the position in space. The function c(~x, t, u), called the wave speed, encodes the fundamental properties of the medium through which disturbances propagate. In this work, only linear one-dimensional wave equations with time-invariant media will be consid- ered; though using non-local symmetries the solutions found in this case can be used to find solutions of certain non-linear wave equations [5]. With 92 5.2. The Wave Equation and its Applications these simplifying assumptions, (5.1) becomes utt = c2(x)uxx. (5.2) A great deal of insight can be gained by considering the case where the wave speed is a constant, c. The following classical result is due to d’Alembert [14]: Theorem 5.1. Consider the initial value problem utt = c2uxx, x ∈ (−∞,∞) u(x, 0) = f(x), ut(x, 0) = g(x) (5.3) where the functions f(x) and g(x) are twice differentiable. The solution of (5.3) is given by ũ(x, t) = 1 2 [f(x− ct) + f(x+ ct)] + 1 2c ∫ x+ct x−ct g(s)ds. (5.4) Proof. Let ũ(x, t) be defined as in (5.4). Evaluating ũ(x, 0) one obtains ũ(x, 0) = 1 2 [f(x) + f(x)] + 1 2c ∫ x x g(s)ds = f(x). (5.5) The differentiation of ũ with respect to t gives ũt(x, t) = 1 2 [−cf(x− ct) + cf(x+ ct)] + 1 2c [cg(x+ ct)− (−cg(x− ct))] and hence ũt(x, 0) = 1 2 [−cf(x) + cf(x)] + 1 2c [cg(x) + cg(x)] = g(x). (5.6) Together, (5.6) and (5.5) show that ũ satisfies the initial conditions of (5.3). 93 5.2. 
The Wave Equation and its Applications The differentiation of ũ twice with respect to x gives ũxx(x, t) = 1 2 [f ′′(x− ct) + f ′′(x+ ct)] + 1 2c [g′(x+ ct)− g′(x− ct)] (5.7) and twice with respect to t gives ũtt(x, t) = 1 2 [(−c)2f ′′(x−ct)+c2f ′′(x+ct)]+ 1 2c [c2g′(x+ct)−(−c)2g′(x−ct)], (5.8) from which it follows immediately that ũtt = c2ũxx (5.9) which completes the proof. From Theorem 5.1 it follows that any initial disturbance will propagate outwards with speed c. Moreover, if ut(x, 0) ≡ 0, then the shape of the initial disturbance will match the shapes of the leftward and rightward propagating waves. In geology, particularly seismology, the wave equation (5.2) is used to model the propagation of earthquakes through the crust and upper mantle [24]. It is especially important in the oil and mining industries where the velocity profile is recovered from measurements of the propagation time and strength. The starting point for this analysis is the seismic wave equation [24]: ρ~utt = ∇λ(∇·~u)+∇µ · [∇~u+(∇~u)T ]+(λ+2µ)∇∇·~u−µ∇×∇×~u, (5.10) where ~u is the velocity of a seismic disturbance, ρ is the density of the medium, µ(x) is the viscosity, and λ(x) is the Lamé parameter. Considering only one-dimensional waves, (5.10) reduces to {[λ(x) + 2µ(x)]ux}x = utt. (5.11) 94 5.2. The Wave Equation and its Applications In Section 6.3 it will be shown that each solution of (5.11) corresponds to a solution of (5.2) where c(x) = √ λ(x) + 2µ(x) [6]. Conversely, each solution of (5.2) yields a solution of (5.11). Similarly, in elasticity theory [19] the linear wave equation can be used to model the behaviour of small elastic oscillations in a solid body. In particular, considering the one-dimensional problem, if u(x, t) represents the longitudinal displacement of the material from its equilibrium position then utt = [ E(1− σ) ρ(1 + σ)(1− 2σ) ] uxx (5.12) where the density (ρ), Young’s modulus (E) and Poisson’s ratio (σ) are assumed to be constant throughout the medium. The equation (5.12) is clearly of the form (5.2) where c = √ E(1− σ) ρ(1 + σ)(1− 2σ) . An identical expression can be derived for the transverse displacement from equilibrium, though in this case the wave speed is c = √ E 2ρ(1 + σ) . The wave equation (5.2) also arises in the study of electromagnetism. Consider Maxwell’s equations in a linear material in the absence of free charge and free current [16]: ∇ · (E) = 0, (5.13a) ∇ ·B = 0, (5.13b) ∇×E = −∂B ∂t , (5.13c) ∇× B µ = ∂E ∂t , (5.13d) 95 5.2. The Wave Equation and its Applications where µ is the permeability of the material, and  is the permittivity. In many materials, µ can be treated as constant [16]. In addition, it will be assumed that  depends only on ~x and that ∇ is perpendicular to E. Taking the cross product of (5.13c) and using the identity [16] ∇× (∇×E) = ∇(∇ ·E)−∇2E, (5.14) one obtains ∇(∇ ·E)−∇2E = −∂(∇×B) ∂t . (5.15) Substituting (5.13d) into (5.15) and using the fact that µ is a constant one finds that ∇(∇ ·E)−∇2E = −µ∂ 2E ∂t2 . (5.16) Finally, (5.13a) gives ∇ ·E +∇ ·E = 0 (5.17) where by assumption the second term of (5.17) is zero. Using (5.17) in (5.16) it follows that ∂2E ∂t2 = 1 µ ∇2E, (5.18) which when restricted to one dimension is of the form (5.2) with c(x) = 1√ µ(x) . (5.19) Alternatively, suppose that E = E(x, t)ŷ, µ is a constant, and  = (x). It follows from (5.13c) that B = B(x, t)ẑ for some function B(x, t). 
Defining ψ(x, t) implicitly by ψt(x, t) = E(x, t), ψx(x, t) = −B(x, t), (5.20) from (5.13c) it is clear that ψtx = ψxt, while the equations (5.13a), (5.13b) are automatically satisfied by the initial assumptions. Finally, (5.13d) be- 96 5.3. Conservation of Energy comes ψxx(x, t) µ =  ψtt(x, t). (5.21) Thus, ψ(x, t) must also satisfy the wave equation (5.2) with wave speed (5.19). When considering the applications of the wave equation (5.2) the following definition is useful. Definition 5.2. A wave pulse is a compactly-supported disturbance prop- agating through a medium in a uniform direction. In physical applications, the behaviour of an incoming wave pulse crossing the interface between two media is particularly important, especially when considering the optical properties of materials [16], electromagnetic waves in plasmas [11], and the propagation of seismic waves between different types of rock within the Earth [24]. In this situation, only a fraction of the wave is transmitted into the new medium; the rest being reflected backwards. When the transition is sharp, ie. there is an abrupt transition between the two homogeneous media, the behaviour of an incident wave pulse is governed by the well-known Fresnel equations [16]. In this limiting case, the fraction of energy transmitted between media is independent of the shape of the incoming wave pulse, depending solely upon the ratio of the wave speeds in the two media [16]. In subsequent chapters, these results are extended to a four-parameter family of smooth wave speeds with transition regions of non-zero width. 5.3 Conservation of Energy In order to proceed further it is necessary to give a definition of energy in the context of (5.2) that has the following features: • corresponds to the definition of energy used in the physical applications • is a conserved quantity on solutions of (5.2). 97 5.3. Conservation of Energy In electromagnetism, the energy density of an electromagnetic field is given by [16] U = 1 2 [ E2 + B2 µ ] . (5.22) In terms of ψ defined in (5.20) it becomes U = 1 2µ [ µψ2t + ψ 2 x ] = 1 2µ [ ψ2t c2(x) + ψ2x ] . (5.23) The factor 1µ is a constant and can be eliminated by rescaling U. The energy functional, E , can then be defined as follows. Definition 5.3. Let u(x, t) be a solution of (5.2), t > 0 and Ω ⊂ (−∞,∞). Then E(u; Ω, t) = 1 2 ∫ Ω ( u2t c2(x) + u2x ) dx (5.24) is the energy of u(x, t) in Ω at time t, and φ(u; ∂Ω, t) = (uxut)|∂Ω (5.25) is the flux through the boundary of Ω (denoted by ∂Ω) at time t. In order to show that (5.24) is conserved, first note that (5.2) can be re-written in the form of a conservation law [14]( ut c2(x) ) t + (−ux)x = 0. (5.26) The multiplication of both sides of (5.26) by ut gives ut ( ut c2(x) ) t = ut(ux)x. (5.27) 98 5.4. Reflection and Transmission Coefficients The left-hand side of (5.27) is the derivative of 12( ut c2(x) )2 with respect to time and, since ut(ux)x = (utux)x − uxutx, it follows that 1 2 ( u2t c2(x) ) t = (utux)x − uxutx. (5.28) The last term on the right-hand side of (5.28) is equal to −12 (u 2 x)t implying that 1 2 ( u2t c2(x) + u2x ) t = (utux)x. (5.29) The integration of both sides of (5.29) with respect to x over a region Ω with boundary ∂Ω yields d dt 1 2 ∫ Ω ( u2t c2(x) + u2x ) dx = (utux)|∂Ω . (5.30) From (5.30) it follows immediately that if u is a solution of (5.2) which is bounded and compactly-supported in x for all t, then d dt E(u;R, t) = d dt ∫ ∞ −∞ ( u2t c2(x) + u2x ) dx = lim x→∞(utux)− limx→−∞(utux) = 0. 
(5.31) The requirement that u has compact support can be relaxed provided that for all t, u and its first partial derivatives decay sufficiently rapidly as x → ±∞. By (5.31), though the energy in a finite region is not necessarily conserved, the total energy of the entire system is constant on solutions of (5.2). 5.4 Reflection and Transmission Coefficients When considering the propagation of waves in a two-layered medium, the definition of energy (5.24) provides a means of quantifying the reflection and transmission occurring in the transition between the layers via reflection and transmission coefficients. Definition 5.4. The reflection coefficient, R, is the fraction of the total 99 5.4. Reflection and Transmission Coefficients energy of an incoming wave pulse incident upon a transition region separat- ing two homogeneous media that returns to, or remains within, the original media. Consider the case where the initial wave pulse originates to the left of the transition region which has width 2l and is centered about x = 0. It follows that R = lim t→∞ E(u; (−∞,−l], t) E(u; (−∞,−l], 0) . (5.32) The transmission coefficient, T, is the total energy of an incoming wave, incident upon a transition region separating two homogeneous media, that is transmitted into the second medium. If the initial wave pulse is located to the left of the transition region (of width 2δ) and is moving towards it at t = 0, then the transmission coefficient is given by T = lim t→∞ E(u; [l,∞), t) E(u; (−l,−δ], 0) . (5.33) Analogous expressions can be derived in the case where the incident wave pulse is originally on the right side of the transition region. When the transition region is sharp, ie. δ → 0, one can find an explicit formula for R and T which holds for any shape of the incoming wave pulse. Theorem 5.5. [16] Suppose one has a sharp transition, centered at x = 0, separating two media: medium 1 with wave speed c1 and medium 2 with wave speed c2. In addition, suppose that the incident wave pulse starts entirely within medium 1. Then, the reflection and transmission coefficients are given by the Fresnel equations R = ( γ − 1 γ + 1 )2 , T = 4γ (1 + γ)2 , (5.34) where γ = c2c1 . 100 Chapter 6 Solutions of a Family of Wave Equations 6.1 Introduction In this chapter the wave equation (5.2) is analyzed using symmetry methods. A symmetry group classification of (5.2) is presented in Section 6.2, yielding functions c(x) for which (5.2) has additional symmetries. In Section 6.3, two new systems are given which are related to (5.2) via a non-invertible mapping. The point symmetries of one of these systems are found in Section 6.4 and used in Section 6.5 to find useful solutions of the wave equation (5.2) for a four-parameter family of wave speeds. 6.2 Symmetry Classification of (5.2) Symmetry methods provide a systematic way to seek particular solutions of PDEs such as (5.2). In particular, given a Lie group of point transfor- mations admitted by (5.2) one seeks corresponding invariant solutions by solving a reduced equation with fewer independent variables. For an arbi- trary wave speed c(x), (5.2) has two point symmetries [6]: translation in t; and scaling in u. Unfortunately, these symmetries are insufficient to reduce (5.2) to obtain physically-relevant solutions; other symmetries are required. It is thus necessary to determine those c(x) for which one can determine additional symmetries for (5.2) so that useful solutions can be found. 
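As a quick check of the Fresnel formulas (5.34), the sketch below evaluates R and T for a given pair of asymptotic wave speeds and confirms that they sum to one, as required by conservation of energy. The particular values c1 = 1/2 and c2 = 1 are chosen only because they reappear in the numerical experiments of later chapters.

```python
def fresnel_coefficients(c1, c2):
    """Reflection and transmission coefficients (5.34) for a sharp interface,
    expressed through the wave speed ratio gamma = c2 / c1."""
    gamma = c2 / c1
    R = ((gamma - 1.0) / (gamma + 1.0)) ** 2
    T = 4.0 * gamma / (1.0 + gamma) ** 2
    return R, T

R, T = fresnel_coefficients(0.5, 1.0)
assert abs(R + T - 1.0) < 1e-14      # energy is conserved
print(R, T)                          # 1/9 and 8/9 for gamma = 2
```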
Performing a group classification of the wave equation (5.2) allows one to determine the forms of the wave speed c(x), arising as solutions of a 101 6.2. Symmetry Classification of (5.2) Form of c(x) Symmetries Arbitrary X1 = ∂∂t , X2 = u ∂ ∂u , X3 = g(x, t) ∂∂u where gtt = c 2(x)gxx [(Bx2 + C) exp (A ∫ (Bx2 + C)−1dx)]−1, X1, X2, X3, A, B, C are constants X4 = 12u(A+ 2Bx) ∂ ∂u + (Bx 2 + C) ∂∂x −At ∂∂t X5 = 12u(A+ 2Bx)t ∂ ∂u + (Bx 2 + C)t ∂∂x +[−12At2 + ∫ c2(x)(Bx2 + C) dx] ∂∂t x−C , C 6= 0, 1, 2 X1, X2, X3, X6 = x ∂∂x + (1 + C)t ∂∂t X7 = −12C t u ∂∂u + xt ∂∂x + [x 2+2C 1+C + 1 2(1 + C)t 2] ∂∂t x−2 An infinite number x−1 X1, X2, X3, X6, X8 = 12ut ∂ ∂u + xt ∂ ∂x + log(x) ∂ ∂t ex X1, X2, X3, X9 = ∂∂x + t ∂ ∂t , X10 = −12ut ∂∂u + t ∂∂x + 12 [e2x + t2] ∂∂t c(x) satisfies (α′ +Hα)′ = σ2c2(x)α, X1, X2, X3 where σ = const 6= 0, H(x) = c′(x)/c(x), K11,12 = e±σt[−12uαH ∂∂u + α ∂∂x ± σ−1(α′ +Hα) ∂∂t ] α2(x) = [H2(x)− 2H ′(x)]−1 Table 6.1: Group classification of the linear wave equation (5.2) appearing in [5] based on [6]. set of ODEs found systematically from the original PDE, for which (5.2) admits extra point symmetries in addition to the two given above. A point symmetry group classification was performed on the wave equation (5.2) in [6], the results of which are presented in Table 6.1. After performing a symmetry group classification, one must then ensure that the resulting forms of the wave speed c(x) are physically meaningful. Specifically, the wave speed must be bounded since, from the special theory of relativity, the speed of energy and matter cannot exceed that of light in a vacuum, 3.00× 108ms−1. Thus, in electromagnetism, seismology and acous- tics, unbounded wave speeds are not useful. Similarly, physical situations where the wave speed is, or asymptotically approaches, zero are rare since at such points the energy density goes to infinity. 102 6.3. Formulation of a Nonlocally Related System Considering the wave speeds given in Table 6.1 one can show that all of them either represent non-physical situations or yield wave equations which admit point symmetries that have non-physical invariant solutions. Thus the application of Lie symmetry methods does not provide new solutions of (5.2) for physically relevant wave speeds. 6.3 Formulation of a Nonlocally Related System Nonlocal symmetry analysis presents an extension to the classical Lie symmetry methods, yielding new analytic solutions to many equations, and in particular for (5.2). Definition 6.1. [2] A symmetry of a PDE is called nonlocal if the coef- ficients of its corresponding infinitesimal generator do not depend solely upon the independent variables, dependent variables, and a finite number of derivatives of its dependent variables. Though Lie’s method for finding local symmetries does not directly gen- eralize to nonlocal symmetries, systematic methods for seeking nonlocal symmetries systematically do still exist. In particular, such nonlocal sym- metries often arise as point symmetries of other PDE systems which are nonlocally related to the original PDE. Definition 6.2. [4] Two PDE systems are nonlocally related if there exists a non-invertible mapping that takes each solution of one system into one or more solutions of the other, and vice versa. Non-locally related systems of a given PDE can be found systematically by using conservation laws to generate corresponding potential systems [4]. To illustrate this process, note that (5.2) has the conservation law (ux)x + ( − ut c2(x) ) t = 0. (6.1) 103 6.3. 
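Before the potential system is constructed, it is worth verifying that the divergence expression (6.1) vanishes precisely on solutions of (5.2). A minimal symbolic check, written here with sympy as an illustration only, is:

```python
import sympy as sp

x, t = sp.symbols('x t')
u = sp.Function('u')(x, t)
c = sp.Function('c')(x)           # time-invariant wave speed

# Divergence form (6.1): D_x(u_x) + D_t(-u_t / c^2)
divergence = sp.diff(sp.diff(u, x), x) + sp.diff(-sp.diff(u, t) / c**2, t)

# The expression equals (c^2 u_xx - u_tt)/c^2, so it vanishes exactly when (5.2) holds.
residual = sp.simplify(divergence - (c**2 * sp.diff(u, x, 2) - sp.diff(u, t, 2)) / c**2)
print(residual)   # 0
```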
Formulation of a Nonlocally Related System Defining v(x, t) implicitly by vt = ux, vx = ut c2(x) . (6.2) one can form a potential system by augmenting (5.2) with (6.2). In this case, however, (5.2) is a differential consequence of (6.2) obtained from the integrability condition vtx = vxt, (6.3) and can therefore be omitted. One then has the following result. Lemma 6.3. Each solution of the wave equation (5.2) yields a solution of the potential system (6.2) and a solution of vtt = (c2(x)vx)x. (6.4) Conversely, every solution of (6.2) or (6.4) yields a solution of the wave equation (5.2). Proof. Let u(x, t) be a solution of (5.2). One must show that there exists a function v(x, t) such that u(x, t) and v(x, t) together satisfy (6.2). The function v(x, t) exists if and only if v(x, t) satisfies (vt)x = (vx)t. (6.5) From (6.2) it follows that v(x, t) satisfies (6.5) if and only if u(x, t) satisfies (5.2), which is true by assumption. For a given u(x, t) the corresponding v(x, t) is defined up to a constant and therefore the mapping between solu- tions of (5.2) and (6.2) is non-invertible. Conversely, suppose (u, v) solves (6.2). One must then show that u(x, t) solves (5.2). Since v(x, t) exists it must satisfy the integrability condition 104 6.4. Symmetry Analysis of a Nonlocally Related System (6.5). From (6.2) and (6.5) it follow that (ux)x = ( ut c2(x) ) t , (6.6) and thus u(x, t) is a solution of (5.2) as required. Furthermore, if (u, v) solves (6.2) it follows immediately that u(x, t) must satisfy the integrability condition (ut)x = (ux)t. (6.7) Substituting (6.2) into (6.7) one finds that (c2(x)vx)x = vtt. (6.8) Thus every solution u(x, t) of (5.2) yields a solution of (6.4) which is defined up to a constant. Conversely, suppose v(x, t) satisfies (6.4). Then, for a given v(x, t) there exists a function u(x, t) such that together (u, v) satisfies (6.2) if and only if (6.7) holds. Using (6.2), (6.7) is satisfied if and only if v(x, t) is a solution of (6.4). Hence every solution of (6.4) yields a solution of (6.2) and thus of (5.2). Note that for a given v(x, t) the corresponding u(x, t) is unique up to a constant and thus the mapping is non-invertible. 6.4 Symmetry Analysis of a Nonlocally Related System Performing a group classification of (6.2) [6], one finds that if c(x) satisfies the ODE [ cc′ ( c c′ )′′]′ = 0, (6.9) 105 6.4. Symmetry Analysis of a Nonlocally Related System then (6.2) admits two extra symmetries in addition to the two which are visible by inspection: X1 = ∂ ∂t , and X2 = u ∂ ∂u + v ∂ ∂v . (6.10) In [6], the fourth-order ODE (6.9) has a reduction to the first-order ODE c′(x) = λ ν sin ( ν log( c c1 ) ) , (6.11) where λ, ν, c1 > 0. If c(x) solves (6.11) it follows that if c(x) = c1 and c(x) = c2 = c1e pi ν then c′(x) = 0. Since c′(x) is positive when c(x) ∈ (c1, c2) it fol- lows that if there exists x∗ such that c(x∗) ∈ (c1, c2) then limx→−∞ c(x) = c1 and limx→∞ c(x) = c2 since c(x) is monotonically increasing [7]. Without loss of generality one can center the transition region about x = 0, in which case c(0) = c1e pi 2ν and c′(0) = λν . Since c ′(x) ≤ c′(0) = λν , it is clear that λ determines the sharpness of the transition between c1 and c2. A representa- tive solution of (6.11) is shown in Figure 6.1, and was obtained numerically using a Runge-Kutta method [22] implemented in MatlabTM [20]. Note that if 0 < c(x)c1 − 1 << 1 then, using a Taylor expansion, one finds that c′(x) = λ ν sin(ν log( c(x) c1 )) = λ ν [ ν ( c(x) c1 − 1 ) + O ( ( c(x) c1 − 1)2 )] . 
(6.12) Considering only the leading-order term of the series in (6.12) one obtains the approximation c′(x) ≈ λc(x)− c1 c1 , (6.13) which can be solved analytically to obtain c(x) ≈ Ae λc1 x + c1. (6.14) 106 6.4. Symmetry Analysis of a Nonlocally Related System Similarly, when 0 < c2c1 − c(x) c1 << 1 one finds that c′(x) = λ ν sin(ν log( c(x) c1 )) = λ ν [ sin(pi) + cos(pi) νc1 c2 ( c(x) c1 − c2 c1 ) + O (( c2 c1 − c(x) c1 )2)] (6.15) and thus that c(x) ≈ c2 −Be −λx c2 . (6.16) −8 −6 −4 −2 0 2 4 6 8 100.5 0.55 0.6 0.65 0.7 0.75 0.8 0.85 0.9 0.95 1 x c( x) Student Version of MATLAB Figure 6.1: A solution of (6.11) for c1 = 12 , c2 = 1, and λ = 0.8 obtained using a Runge-Kutta method [22] implemented in MatlabTM. 107 6.4. Symmetry Analysis of a Nonlocally Related System In order to find the point symmetries of (6.2) for wave speeds satisfying (6.9) consider the point transformation of (x, t, u, v)-space represented by the infinitesimal generator X = ξ(x, t) ∂ ∂x + τ(x, t) ∂ ∂t + [f(x, t)u+ g(x, t)v] ∂ ∂u + + [h(x, t)u+ k(x, t)v] ∂ ∂v . (6.17) It can be shown that (6.17) is a point symmetry of (6.2) if and only if the functions ξ, τ, f, g, h, and k satisfy the symmetry determining equations [7] ht − fx = 0, (6.18a) kt − gx = 0, (6.18b) k − τt − f + ξx = 0, (6.18c) c′ξ + c[τt − ξx] = 0, (6.18d) c2hx − ft = 0, (6.18e) c2kx − gt = 0, (6.18f) c2τx − ξt = 0, (6.18g) c2h− g = 0. (6.18h) 108 6.5. Formulation of an Analytic Solution Adapting a result from [6], one can solve the symmetry determining equa- tions (6.18a)-(6.18h) to obtain the following point symmetries of (6.2): Xr = ∂ ∂t (6.19a) Xs = u ∂ ∂u + v ∂ ∂v (6.19b) Xp = eλt [ λH ∂ ∂x + (H ′ − 1) ∂ ∂t + [(1− 1 2 H ′)λu− λ 2 2 Hv] ] ∂ ∂u − (6.19c) − eλt ( λ2u 2cc′ + λ 2 H ′v ) ∂ ∂v (6.19d) Xq = e−λt ( λH ∂ ∂x − (H ′ − 1) ∂ ∂t + [(1− 1 2 H ′)λu+ λ2 2 Hv] ) ∂ ∂u − (6.19e) − eλt ( −λ 2u 2cc′ + λ 2 H ′v ) ∂ ∂v (6.19f) where H(x) = c ′(x) c(x) . 6.5 Formulation of an Analytic Solution To find solutions of the system (6.2) which are invariant under the point symmetry 1λ (Xp + Xq) one first seeks corresponding differential invariants arising as constants of integration of the equations dx 2β′(t)H(x) = dt 2β(t)[H ′(x)− 1] = du [β′(t)(2−H ′(x)) + s]u− β′′(t)H(x)v = = dv [−β′(t)H ′(x) + s]v − uβ′′(t)c(x)c′(x) = d. (6.20) 109 6.5. Formulation of an Analytic Solution where β(t) = 1λ sinλt. The integration of (6.20) yields the differential invari- ants [7] z = sinhλt sin y, f2 = [ u√ c(x) + √ c(x)v ]2 c′(x)e2s[λβ(t) cos y + β′(t)] , g2 = [ u√ c(x) −√c(x)v]2 c′e2s[λβ(t) cos y − β′(t)] , E = 2+ 1 ν √ γ arctan (√ γ cot y β′(t) ) , (6.21) where y = ν log ( c(x) c1 ) and γ = 4pq. Without loss of generality, one can choose p = q using the invariance of (6.2) under translation in t. Solving (6.21) for u and v and letting f and g depend on z, one finds that u = 1 2 [c(x)c′(x)] 1 2 e−s[λβ(t) cos y + β′(t)] 1 2 f(z)+ + 1 2 [c(x)c′(x)] 1 2 es[λβ(t) cos y − β′(t)] 12 g(z), v = 1 2 [ c′(x) c(x) ] 1 2 es[λβ(t) cos y + β′(t)] 1 2 f(z)− − 1 2 [ c′(x) c(x) ] 1 2 es[λβ(t) cos y − β′(t)] 12 g(z). (6.22) The substitution of (6.22) into (6.2), after simplifying, gives [7] f ′′(z) + 3z z2 + 1 f ′(z) + 1 z2 + 1 [ 1 +R2 − zσ + σ 2 z2 + 1 ] f = 0, (6.23a) g = √ z2 + 1 R [ f ′ + z + σ z2 + 1 f ] (6.23b) where R = 12ν , and σ = −s 2ν √ γ . 110 6.5. Formulation of an Analytic Solution The equations (6.23a) and (6.23b) are solved in [7] when σ = 0, yielding f(z; ζ) = A√ z2 + 1 cos(Rψ(z) + ζ), g(z; ζ) = A√ z2 + 1 sin(Rψ(z) + ζ). 
(6.24) where ψ(z) = log(z + √ z2 + 1) and ζ is an arbitrary constant. Furthermore, from [18] and [7] if fn(z), gn(z) denotes the solution of (6.23a) and (6.23b) when σ = −ın, then solutions of fn(z; ζ) and gn(z; ζ) are given by the following recurrence relation Re fn+1 (z ; ζ) Im fn+1 (z ; ζ) Re gn+1 (z ; ζ) Im gn+1 (z ; ζ)  = − √ (n+ 1 2 )2 +R2 ·N(ϕ, βn)  Re fn(z ; ζ) Im fn(z ; ζ) Re gn(z ; ζ) Im gn(z ; ζ)  (6.25) where N(ϕ, βn) is the orthonormal matrix N(ϕ, βn) =  cosϕ cosβn sinϕ cosβn − sinβn 0 − sinϕ cosβn cosϕ cosβn 0 − sinβn sinβn 0 cosϕ cosβn − sinϕ cosβn 0 sinβn sinϕ cosβn cosϕ cosβn  , (6.26) βn = arctan ( 2R 2n+1 ) and ϕ = arccotz. Since (6.23) is linear one can rescale fn(z; ζ) and gn(z; ζ) so that Re fn+1 (z ; ζ) Im fn+1 (z ; ζ) Re gn+1 (z ; ζ) Im gn+1 (z ; ζ)  = N(ϕ, βn)  Re fn(z ; ζ) Im fn(z ; ζ) Re gn(z ; ζ) Im gn(z ; ζ)  . (6.27) 111 6.5. Formulation of an Analytic Solution Finally, since (6.2) is a linear system one can use superposition to build solutions of the form u(x, t) = ∞∑ m=−∞ Amum(x, t), v(x, t) = ∞∑ m=−∞ Amvm(x, t), (6.28) where um(x, t; ζ2m) =e−2mı arctan[cot ysech t][c(x) sin y] 1 2× × { [coshλt+ sinhλt cos y] 1 2 f2m(z; ζ2m)+ +[coshλt− sinhλt cos y] 12 g2m(z; ζ2m) } , vm(x, t; ζ2m) =e−2mı arctan[cot ysech(t)] [ sin y c(x) ] 1 2 ×{ [coshλt+ sinhλt cos y] 1 2 f2m(z; ζ2m)− −[coshλt− sinhλt cos y] 12 g2m(z; ζ2m) } , (6.29) and the ζ2m are arbitrary constants. At t = 0, (6.28) becomes u(x, 0) [c(x) sin(y)] 1 2 = ∞∑ −∞ (−1)mAm[f2m(0, ζ2m) + g2m(0, ζ2m)]e2ımy, v(x, 0)c 1 2 (x) [sin(y)] 1 2 = ∞∑ −∞ (−1)mAm[f2m(0, ζ2m)− g2m(0, ζ2m)]e2ımy, (6.30) and thus if U(x) = u(x, 0) and V (x) = v(x, 0) then, since y ∈ [0, pi], it follows from classical Fourier analysis [23] that (−1)mAm[f2m(0, ζ2m) + g2m(0, ζ2m)] = Bm 112 6.5. Formulation of an Analytic Solution and (−1)mAm[f2m(0, ζ2m)− g2m(0, ζ2m)] = Cm where Bm = 1 pi ∫ pi 0 U(x(y)) [c(x(y)) sin(y)] 1 2 e−2mıydy, Cm = 1 pi ∫ pi 0 V (x(y))c 1 2 (x(y)) [sin(y)] 1 2 e−2mıydy. (6.31) Let α be a positive constant, κ = α4N+1 , and y † = −arccot(κ). Then for u(x(y), 0) = U(x(y)) ( sin y sin y† )2N+ 1 2 e α(y−y†) 2 and v(x(y), 0) = V (x(y)) = 0, (6.32) (6.31) can be integrated analytically to give Cm = 0 and [7] Bm = (2N)!A(κ, 2N) pi (ebpi − 1) b+ 2mı b2 + 4m2 N∏ k=1 b2 + 4k2 − 4m2 + 4bmı (b2 + 4k2 − 4m2)2 + 16b2m2 (6.33) where κ = α4N+1 , y † = −arccot(κ), b = 12 [(4N + 1)κ− 1ν ] and A(κ, 2N) = [sin y†eκy†]−(2N+ 1 2 ). In Chapter 7 the solutions of (6.2) with initial conditions u(x, 0) = U(x), v(x, 0) = V (x) given by (6.32) are analyzed in greater detail. A plot of U(x(y)) as a function of y is shown in Figure 6.2. Since x = −∞ corresponds to y = 0, if the initial data is located far to the left of the transition region then when it is plotted against y the initial data will appear as a narrow peak near y = 0. The narrow width of the peak of the initial data means that many terms will be required to approximate the exact series solution (6.28) accurately. 113 6.5. Formulation of an Analytic Solution -0.2 0 0.2 0.4 0.6 0.8 1 0 0.5 1 1.5 2 2.5 3 3.5 U( x( y) ) y "grid0002.fast" using (($1)):($2) Figure 6.2: A plot of U(x(y)) given by (6.32) with N = 1 and y† = e−7.5 obtained using a finite number of terms of the series solution (6.28). 114 Chapter 7 Numerical Implementation of the Analytic Solution 7.1 Introduction In Chapter 6 useful solutions of (5.2) were found for wave speeds satisfying the ODE (6.9). 
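The series solutions above are built from the family of eigenfunctions generated by the recurrence (6.27). The following sketch shows a single application of that recurrence, starting from the σ = 0 solution (6.24); the value of R, the phase ζ, and the z-grid are illustrative assumptions, and the code is only a sketch of the construction, not the implementation used in this thesis.

```python
import numpy as np

R_param, zeta = 0.5, 0.0                       # illustrative values of R = 1/(2 nu) and the phase zeta
z = np.linspace(-5.0, 5.0, 401)
psi = np.log(z + np.sqrt(z**2 + 1.0))          # psi(z) = arcsinh(z)

# sigma = 0 solution (6.24), taken (up to scaling) as the starting member of the family
f = (np.cos(R_param * psi + zeta) / np.sqrt(z**2 + 1.0)).astype(complex)
g = (np.sin(R_param * psi + zeta) / np.sqrt(z**2 + 1.0)).astype(complex)

phi = np.arctan2(1.0, z)                       # phi = arccot(z), applied pointwise along the grid

def recurrence_step(f, g, n):
    """One application of the rescaled recurrence (6.27): rotate the vector
    (Re f, Im f, Re g, Im g) by the orthonormal matrix N(phi, beta_n) of (6.26)."""
    beta = np.arctan(2.0 * R_param / (2 * n + 1))
    cb, sb, cp, sp_ = np.cos(beta), np.sin(beta), np.cos(phi), np.sin(phi)
    f_re = cb * (cp * f.real + sp_ * f.imag) - sb * g.real
    f_im = cb * (-sp_ * f.real + cp * f.imag) - sb * g.imag
    g_re = sb * f.real + cb * (cp * g.real - sp_ * g.imag)
    g_im = sb * f.imag + cb * (sp_ * g.real + cp * g.imag)
    return f_re + 1j * f_im, g_re + 1j * g_im

f1, g1 = recurrence_step(f, g, n=0)            # next members of the family
```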
The resulting solutions are given in terms of the eigenfunc- tions f2m and g2m where m = 0, 1, 2, . . . The nature of the analytic solution obtained makes direct analysis of its behaviour difficult. In particular, since one cannot in general obtain a closed form expression for the solution, it can only be approximated by considering only a finite number of terms. Moreover, the solution is given in terms of the variable y satisfying dy dx = ν d dx log ( c(x) c1 ) = λ c sin y (7.1) which cannot be written explicitly as a function of x; thus, evaluating each term of the series in (6.28) requires numerical quadrature. Finally, the physically-relevant initial conditions correspond to the situation where the initial wave pulse is localized to a small region on the y-axis near zero. In such circumstances the corresponding Fourier series (6.30) will converge slowly, requiring many terms of (6.28) to obtain an accurate approximation. It is therefore necessary to use a computational approach in conjunction with the analytic solution (6.28) to examine the properties of the general solution of (6.2) obtained in Chapter 6. To estimate the reflection and transmission coefficients for solutions of (5.2) a Fortran program was constructed which approximated (6.28) using 115 7.2. Details of the Implementation a truncated series. Details of the algorithms used are provided in Section 7.2 and the results are given in Section 7.3. 7.2 Details of the Implementation In order to ensure the algorithm developed is both accurate and efficient, the solution is approximated along two lines in the xt-plane: 1. (x, 0) where x ∈ (−∞,∞) 2. (0, t) where t ≥ 0. The first line represents the initial data and is used to calculate the total energy of the system as well as to determine the number of terms required to obtain reasonable convergence. Since energy is conserved, to estimate the total energy one need only approximate it at t = 0, which from (5.24) is E(u;R, 0) = 1 2 ∫ ∞ −∞ [ u2t c2(x) + u2x ] dx. (7.2) If u(x, 0) = f(x) and ut(x, 0) = g(x) the energy at t = 0 is given by E(u;R, 0) = 1 2 ∫ ∞ −∞ [ (g(x))2 c2(x) + ( f ′(x) )2] dx. (7.3) Alternatively, it follows from (6.2) that if v(x, 0) = h(x), and u(x, 0) = f(x) then ut(x, 0) = g(x) where h′(x) = g(x) c2(x) . Hence the initial energy in terms of f(x) and h(x) is E(u;R, 0) = 1 2 ∫ ∞ −∞ [ (h′(x))2c2(x) + (f ′(x))2 ] dx, (7.4) which can be approximated using standard numerical techniques. In this work, the derivatives were approximated to first order using finite differences and the integration was done using the trapezoid rule [22]. 116 7.2. Details of the Implementation The second line corresponds to the values of u and v at x = 0, when t > 0. Using this line it is possible to determine the amount of energy from the initial wave pulse which is transmitted into the new medium. Using con- servation of energy, the reflection coefficient can then be calculated. From (5.30) it follows that d dt E(u; (−∞, 0], t) = (utux)|0−∞ . (7.5) If the initial data is compactly-supported then, since the wave speed is bounded, it follows that lim x→−∞ut = 0 = limx→−∞ux for all t. The fundamental theorem of calculus then implies that E(u; (−∞, 0], tf )− E(u; (−∞, 0], 0) = ∫ tf 0 ut(0, t)ux(0, t) dt (7.6) and thus the net change of energy in the region (−∞, 0] is given by ∆E = lim tf→∞ E(u; (−∞, 0], tf )− E(u; (−∞, 0], 0) = ∫ ∞ 0 ut(0, t)ux(0, t) dt. (7.7) The substitution of (6.2) into (7.7) yields ∆E = ∫ ∞ 0 ut(0, t)vt(0, t) dt. 
(7.8) If the wave pulse is entirely within (−∞, 0] at t = 0 then ∆E is the total energy which crosses the transition region and therefore, by definition, the transmitted energy. It follows that T = ∆E E(u; (−∞, 0], 0) and R = 1− T. (7.9) 117 7.3. Results The line x = 0 is chosen to simplify the terms of the series expansion. In particular, since y = pi2 when x = 0, um(0, t; ζ2m) = c 1 2 (0)[cosh(λt)] 1 2 [f2m(sinh(λt); ζ2m) + g2m(sinh(λt); ζ2m)] vm(0, t; ζ2m) = c −1 2 (0)[cosh(λt)] 1 2 [f2m(sinh(λt); ζ2m)− g2m(sinh(λt); ζ2m)]. (7.10) The solution restricted to x = 0 is u(0, t)√ c(0) = cosh 1 2 λt ∞∑ −∞ Am[f2m(sinhλt; ζ2m) + g2m(sinhλt; ζ2m)], √ c(0)v(0, t) = cosh 1 2 λt ∞∑ −∞ Bm[f2m(sinhλt; ζ2m)− g2m(sinhλt; ζ2m)]. (7.11) One can approximate the functions u(0, t) and v(0, t) by requiring that |m| ≤ M. Denoting these truncations by ũ(t) and ṽ(t) one can then discretize time, letting tk = (∆t)k with t0 = 0. Approximating the derivative to second-order by ũt(tk) = ũ(tk+1)− ũ(tk−1) 2∆t + O((∆t)2), (7.12) one can then integrate (7.8) numerically using the trapezoid rule. 7.3 Results In order to understand the nature of the solutions of (6.2) it is useful to consider first the behaviour of the modes given by (6.29). In particular, the form of the first mode, corresponding to m = 0, can be analyzed directly using MapleTM [21]. It consists of two pulses originating on either side of the origin which collide in the center of the transition region at t = 0, as shown in Figures 7.1-7.3. In order to visualize the first mode, the variable y = ν log ( c(x) c1 ) is used in place of x, where y → 0 corresponds to x→ −∞ and y → pi corresponds to x → ∞. In addition, using (6.14) one can show that 118 7.3. Results for y sufficiently small, log(y) ∝ x. Plots of u against log(y) can therefore be used to approximate the shape of u(x, t) up to a scaling of x by a constant factor. Similarly, if (pi − y) is sufficiently small, log(pi − y) ∝ x and thus for large x the shape of u(log(pi−y), t) will match that of u(x, t) up to a scaling of x. Plots of these two limits are shown in Figures 7.4 and 7.5. Since the two wave pulses overlap when t ≥ 0, calculating the reflection and transmission coefficients directly from the two pulses produced by the collision is not straightforward due to the fact that the energy functional (5.24) is non- linear; thus if two wave pulses have overlapping support, constructive and destructive interference will occur. Figure 7.1: A plot of the first mode u0(y, t) given in (6.29) when λ = 1, c2 = 2, and t = −8. 119 7.3. Results Figure 7.2: A plot of the first mode u0(y, t) given in (6.29) when λ = 1, c2 = 2, and t = 0. Figure 7.3: A plot of the first mode u0(y, t) given in (6.29) when λ = 1, c2 = 2, and t = 8. 120 7.3. Results Figure 7.4: The first mode u0(y, t) plotted against log(y) when λ = 1, c2 = 2, and t = 10. Figure 7.5: The first mode u0(y, t) plotted against log(pi − y) when λ = 1, c2 = 2, and t = 10. 121 7.3. Results Though for the first mode u0(x, t) one cannot calculate the reflection and transmission coefficients directly due to interference between the two incoming pulses, one can estimate the coefficients indirectly by examining the amplitudes of the wave pulses before and after entering the transition region. 
Comparing the amplitudes found analytically with those expected for a sharp transition enables one to calculate an effective wave speed ratio: the value of c2c1 necessary to produce the observed ratio of amplitudes between incoming and outgoing waves for a sharp transition. Using this effective wave speed ratio one can then calculate the reflection and transmission coefficients necessary for a sharp transition to give rise to the observed amplitude ratios. The results of this analysis for various values of γ = c2c1 are shown in Figure 7.6. 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 1 2 3 4 5 6 7 8 Tr an sm iss ion C oe ffic ien t a "veff.txt" using ($1):(f($2)) "veff.txt" using ($1):(f($3)) "veff.txt" using ($1):(f($1)) Figure 7.6: The dependence of the transmission coefficient on the ratio of the asymptotic wave speeds for the first mode u0(y, t) given in (6.29) when λ = 1. The red and green points were calculated from the two wave pulses created by the collision of the two incident wave pulses. The blue points represent the transmission coefficient for a sharp transition with the same wave speed ratio. 122 7.3. Results Additionally, it was found that varying λ for fixed c1 and c2 produced no change in the observed transmission and reflection coefficients. Note that in this situation the width of the initial wave pulses is also changing. In particular, it can be shown that for the fundamental mode, m = 0, the initial pulse width is inversely proportional to λ. Thus, as the transition is made sharper the incoming wave pulse is also compressed by the same factor. The lack of change in the transmission coefficients when λ is varied with c1 and c2 fixed suggests that the important parameter for determining the amount of reflection or transmission is the ratio δl where δ is the width of the incoming wave pulse and l is the width of the transition region which is of the order ν(c2−c1) λ . This observation is confirmed by the following proposition. Proposition 7.1. Consider the initial value problem (IVP) utt = c2(x)uxx, −∞ < x <∞, t ≥ 0, u(x, 0) = f( x δ ), ut(x, 0) = ∂ ∂t ∣∣∣∣ t=0 f ( x− tc1 δ ) , (7.13) with wave speed c(x) satisfying c′(x) = λ ν sin ( ν log ( c c1 )) , (7.14) and where f(xδ ) is a compactly-supported function of width δ located far to the left of the transition region. If f(x), c1 and c2 are fixed then the reflection and transmission coefficients depend solely on the parameter κ = (c2−c1)νλδ which represents the ratio between the width of the transition region and the width of the incoming pulse. 123 7.3. Results Proof. Let γ = c2c1 , l = (c2−c1)ν λ , τ = ν(γ−1) λ , κ = l δ and X = x l , S = t τ , C(X) = c(X(x)) c1 , U(X,S) = u(x(X,S), t(X,S)). (7.15) It can then be shown that if u(x, t) satisfies IVP (7.13), then U(X,S) satisfies USS(X,S) = C2(X)UXX(X,S), −∞ < X <∞, S ≥ 0, U(X, 0) = f(κX), US(X, 0) = ∂ ∂S ∣∣∣∣ S=0 f (κ(X − T )) , (7.16) where C ′(X) = (γ − 1) sin [ν logC(X)] . One can express the energy of a solution u(x, t) in terms of U, X, and T in the following way E(u; (−∞,−l], t1) = 12 ∫ −l −∞ [ u2t c2(x) + u2x ] dx = λ 2ν(c2 − c1) ∫ −1 −∞ [ U2S(X, t1 τ ) C2(X) + U2X(X, t1 τ ) ] dX, E(u; [l,∞), t1) = 12 ∫ ∞ l [ u2t c2(x) + u2x ] dx = λ 2ν(c2 − c1) ∫ ∞ 1 [ U2S(X, t1 τ ) C2(X) + U2X(X, t1 τ ) ] dX, E(u; (−∞,∞), t1) = 12 ∫ ∞ −∞ [ u2t c2(x) + u2x ] dx t = λ 2ν(c2 − c1) ∫ ∞ −∞ [ U2S(X, t1 τ ) C2(X) + U2X(X, t1 τ ) ] dX. (7.17) 124 7.3. 
Results Note that for fixed c1 and c2 the function U(X,S) does not depend explicitly on the width of the transition region l or the width of the initial pulse δ, but only on their ratio κ. Using the definition of the reflection and transmission coefficients R and T given in (5.32) and (5.33) it follows that R = lim t1→∞ ∫ −1 −∞ [ U2S(X, t1 τ ) C2(X) + U2X(X, t1 τ ) ] dX ∫∞ −∞ [ U2S(X, t1 τ ) C2(X) + U2X(X, t1 τ ) ] dX = lim S→∞ ∫ −1 −∞ [ U2S(X,S) C2(X) + U2X(X,S) ] dX∫∞ −∞ [ U2S(X,S) C2(X) + U2X(X,S) ] dX , T = lim t1→∞ ∫∞ 1 [ U2S(X, t1 τ ) C2(X) + U2X(X, t1 τ ) ] dX ∫∞ −∞ [ U2S(X, t1 τ ) C2(X) + U2X(X, t1 τ ) ] dX = lim S→∞ ∫∞ 1 [ U2S(X,S) C2(X) + U2X(X,S) ] dX∫∞ −∞ [ U2S(X,S) C2(X) + U2X(X,S) ] dX , (7.18) and thus that the reflection and transmission coefficients depend solely on the parameter κ = lδ rather than on l and δ independently. The method outlined in Section 7.2 was implemented for the initial condi- tions u(x, 0) = U(x(y)), v(x, 0) = V (x(y)) given in (6.32). It is important to note that since the initial data was given in terms of y and not x, varying c2c1 in this case changes the initial conditions. Thus the numerical experiment performed corresponds to the situation where both the speed ratio and the initial data are being changed simultaneously. Sample results of these exper- iments are shown in Figure 7.7. Plots of the solution as a function of time were approximated using the finite element method outlined in Chapter 8 and are presented in Figure 8.1. 125 7.3. Results 0.4 0.5 0.6 0.7 0.8 0.9 1 1 2 3 4 5 6 7 8 Tr an sm iss ion C oe ffic ien t a "analy.txt" using (f($1)):(-$2) "analy.txt" using (f($1)):(g(f($1)))Figure 7.7: A plot of the transmission coefficients for a smooth transition with λ = 1 (red) and the transmission coefficients for a sharp transition (green) where the initial conditions are given by (6.32) with N = 1 and y† = e−7.5 where γ = c2c1 is the ratio of the asymptotic wave speeds. 126 Chapter 8 Numerical Approximation to the Solution 8.1 Introduction Consider the initial value problem utt = c2(x)uxx, −∞ < x <∞, t > 0 u(x, 0) = f(x), ut(x, 0) = g(x), (8.1) with wave speed c(x) satisfying (6.9). In order to approximate the solutions of (8.1) numerically, one must work on a finite domain. Given the exponen- tial convergence of c(x) to c1 and c2 as x goes to −∞ and ∞, respectively, for large and small x one can approximate the solution of (8.1) by the con- stant wave speed solution (5.4). Letting l denote the width of the transition region, which is of the order ν(c2−c1)λ , choose xa, and xb such that xa l << −1, and xbl >> 1. One can split the x-axis into the following five regions: 1. x << xa, 2. x = O(xa), xa ≥ x, 3. xa ≤ x ≤ xb, 4. x = O(xb), x ≥ xb, 5. x >> xb. 127 8.2. Details of the Implementation One then wishes to find an approximation to the solution of (8.1) in the case where a rightward-moving wave pulse originates in Region 1, passes through Region 2 and reaches the transition region. The transmitted wave pulse continues on through Regions 4 and 5 while the reflected wave pulse returns through Region 2 to Region 1. Due to the exponential convergence of c(x) to its limiting values as x → ±∞, in Region 1 and Region 5 the maximum value of |c′| can be made ar- bitrarily small by a suitable choice of xa and xb, in which case the constant wave speed solution (5.4) will be a good approximation to the solution in these regions. 
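Both the analytic approach of Chapter 7 and the finite element method described below require the wave speed profile c(x) on the truncated domain [xa, xb]. A minimal sketch of how the ODE (6.11) might be tabulated with an off-the-shelf integrator (scipy's RK45 here, standing in for the Runge-Kutta code used in this thesis; the parameter values are illustrative assumptions) is:

```python
import numpy as np
from scipy.integrate import solve_ivp

c1, c2, lam = 0.5, 1.0, 1.0                  # asymptotic speeds and sharpness parameter (illustrative)
nu = np.pi / np.log(c2 / c1)                 # chosen so that c2 = c1 * exp(pi / nu)

def dcdx(x, c):
    # ODE (6.11): c'(x) = (lambda / nu) * sin(nu * log(c / c1))
    return lam / nu * np.sin(nu * np.log(c / c1))

# Centre the transition at x = 0, where c(0) = c1 * exp(pi / (2 nu)) = sqrt(c1 * c2),
# and integrate outwards to the edges of the truncated domain [xa, xb].
xa, xb = -40.0, 40.0
x = np.linspace(xa, xb, 2001)
right = solve_ivp(dcdx, (0.0, xb), [np.sqrt(c1 * c2)], t_eval=x[x >= 0.0], rtol=1e-10, atol=1e-12)
left = solve_ivp(dcdx, (0.0, xa), [np.sqrt(c1 * c2)], t_eval=x[x < 0.0][::-1], rtol=1e-10, atol=1e-12)
c_profile = np.concatenate([left.y[0][::-1], right.y[0]])   # c(x) on the grid: near c1 at xa, near c2 at xb
```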
The solution in Region 3 can then be found using a finite element method and matching with the asymptotic solutions in Regions 2 and 4. Choosing the initial datum to be a wave pulse in Region 1 facili- tates the matching of asymptotic and numerical solutions since in this case, for sufficiently large time, it is expected that the reflected and transmitted pulses will lie entirely within Regions 1 and 5, respectively. This ensures that accurate results can be obtained using only a finite number of time steps for the numerical solver. In order to measure the reflection and transmission coefficients the initial wave was chosen to be a Gaussian function centered well outside the transi- tion region. The initial energy of the wave pulse was calculated from (5.24) both analytically and numerically. A finite element scheme was then run until the support of u lay entirely within the Regions 2 and 4. The energy in each region was then calculated numerically and compared to the initial energy. The finite element method used is outlined in Section 8.2 and the results obtained are given in Section 8.3. 8.2 Details of the Implementation Before formulating a finite element method to approximate the solution of (8.1) in Region 3, first notice that since the domain [xa, xb] is finite, letting 128 8.2. Details of the Implementation X = x−xaxb−xa one obtains the following approximate initial value problem utt(X, t) = [c†(x)]2uXX(x, t), X ∈ (0, 1), t ∈ (0, tf ), u(X, 0) = f †(X), X ∈ (0, 1), uX(X, 0) = g†(X), X ∈ (0, 1), uX(0, t) = uX(1, t) = 0, (8.2) where c†(X) = c(x(X))xb−xa , f †(X) = U(x(X)), g†(X) = [xb − xa]g(x(X)). The boundary conditions are arbitrary, provided the wave pulse does not reach X = 0 or X = 1. This restriction can be enforced by an appropriate choice of the initial conditions and final time tf . 8.2.1 Weak Formulation In order to obtain a weak formulation of the initial value problem (8.2), first let u(X, t) be a solution of (8.2) which, for fixed t, is in H1(Ω) where Ω = (0, 1) and H1(Ω) is the space of square-integrable functions on Ω whose derivatives are also square-integrable [14]. Multiplying 8.2 by a test function w(X) ∈ H1(Ω) and integrating over Ω, one obtains 0 = ∫ 1 0 uttw [c†(X)]2 dX − ∫ 1 0 uXXwdX = ∫ 1 0 uttw [c†(X)]2 dX + ∫ 1 0 uXwXdX − (uXw)|10 = ∫ 1 0 uttw [c†(X)]2 dX + ∫ 1 0 uXwXdX. (8.3) where integration by parts, and the boundary conditions in (8.2) have been used to simplify the integrals. Since w(X) is independent of time t, for u(X, t) ∈ H1(Ω× (0, tf )) and w(X) ∈ H1(Ω) one may take the time deriva- tive outside the integral. One then has the following weak formulation of (8.2). Definition 8.1. The weak formulation of the initial value problem (8.2) is to find u(X, t) ∈ H1(Ω × (0, tf )) such that for all w(X) ∈ H1(Ω), and all 129 8.2. Details of the Implementation t ∈ (0, tf ): ∫ 1 0 uttv [c†(X)]2 dX + ∫ 1 0 uXwXdX = 0 (8.4) with u(X, 0) = f †(X) and ut(X, 0) = g†(X). The existence, uniqueness and stability of solutions of the weak formu- lation of (8.2) are a direct consequence of a classical result presented in [14]. 8.2.2 Discretization Solving the weak formulation exactly is equivalent to solving (8.2) and thus to proceed further one must typically employ a method, such as the finite element method, to approximate its solutions. 
The finite element method approximates solutions of the weak formulation by restricting w(X) ans u(X, t), where t is fixed, to a finite-dimensional subspace of the infinite- dimensional space of square-integrable functions on (0, 1), reducing the prob- lem to an algebraic one. Following a method similar to that given in [13], let x0, . . . , xN be uniformly- spaced points in [0, 1] with X0 = 0 and XN = 1 and let V N+1 be the set of square-integrable functions which are linear on each interval Ki = [xi−1, xi]. In addition, let ϕi(X) i = 0, . . . , N be the square-integrable functions linear on each interval Ki such that ϕi(Xj) = δij . Since the functions in V N+1 are completely defined by their values at x0, . . . , xN , it is clear that the functions {ϕi(x)}N0 form a basis of V N+1. The initial condition f † is replaced by the vector f which has components (f)i = ∫ 1 0 f †(X)φi(X)dX, (8.5) with g similarly defined. For a fixed t, one then approximates the solution u(X, t) and the test function w(X) by ũ(X; t), w̃(X) ∈ V N+1 ⊂ H1(Ω). 130 8.2. Details of the Implementation Using the hat functions as a basis, one can write ũ(X, t) = N∑ 0 uj(t)φj(X), and w̃(X) = N∑ j=0 wjφ j(X). (8.6) Working in V N+1 the problem is then to find u = (u1(t), . . . , uN (t)) ∈[ H1([0, T ]) ]N such that for all w = (w1, . . . , wN ) ∈ RN ∑ j,k u′′j (t)wk ∫ 1 0 φj(X)φk(X) [c†(X)]2 dX + ∑ j,k uj(t)wk ∫ 1 0 dφj(X) dX dφk(X) dX dX = 0. (8.7) Defining Mi,j = ∫ 1 0 φj(X)φk(X) [c†(X)]2 dX and Ai,j = ∫ 1 0 dφj(X) dX dφk(x) dX dX, (8.8) one finds that M d2u dt2 +Au = 0, (8.9) where u(0) = f and dudt = g. From Taylor’s theorem it follows that utt = u((k + 1)∆t)− 2u(k∆t) + u((k − 1)∆t) (∆t)2 + O ( (∆t)2 ) . (8.10) Substituting (8.10) into (8.9) and approximating u(k∆t) by uk, one obtains M(uk+1 − 2uk + uk−1) + (∆t)2Auk = 0 (8.11) when k > 0. If k = 0, then a slight modification is required, since u−1 is not given. Noting that the initial conditions give ut(0) = g and that Taylor’s theorem implies that ut(x, 0) = u(∆t)− u(−∆t) 2∆t + O ( (∆t)2 ) 131 8.3. Results it follows that u−1 ≈ u(−∆t) ≈ u(∆t)− 2(∆t)v0 ≈ u1 − 2(∆t)v0. (8.12) One can then use (8.11) with u−1 = u1 − 2∆tv0. (8.13) Solving for uk+1 in (8.11) one finds the explicit second-order scheme uk+1 = −M−1(∆t)2Auk + 2uk − uk−1, (8.14) where u0 = u0 and u−1 are defined in (8.5) and (8.13), respectively. In order for this numerical scheme to be stable an additional Courant- Friedrichs- Lewy (CFL) condition must be imposed. Since the wave speed is bounded from above and below by c1 and c2, respectively, the CFL condition requires that [17] ∆t ≤ h(xb − xa) c2 . (8.15) One can show that this CFL condition is necessary and sufficient for stability of the numerical scheme, by adapting a well-known method from finite difference methods [25]. 8.3 Results The method outlined in Section 8.2 was implemented in MatlabTM [20]. To accurately approximate the wave speed c(x), a fourth-order Runge-Kutta method was used with the step-size chosen sufficiently small to ensure that the error in c(x) was negligible relative to the error introduced by discretiza- tion in space and time. The entries of the matrix A were calculated analyt- ically while those of M were calculated from (8.8) using the trapezoid rule [22]. When calculating the energy, ut(x, t) and ux(x, t) were approximated 132 8.3. Results by ut(mh, (∆t)k) ≈ u k+1 m − uk−1m 2∆t and ux(mh, (∆t)k) ≈ ukm+1 − ukm−1 2h (8.16) which have local truncation errors of O((∆t)2) and O(h2) respectively. 
A plot of the finite element solution for the initial conditions (6.32) is shown in Fig. 8.1. At t = 0, the solution is a wave pulse centered to the left of the origin. As time progresses, it splits into two pulses travelling in opposite directions. The rightward moving pulse eventually hits the the transition region where it separates into a reflected and transmitted pulse. It was found that the reflection and transmission coefficients obtained using this method agreed with those found using the exact solution in Chapter 7. 133 8.3. Results Figure 8.1: A numerical approximation to a solution of the wave equation (5.2) with wave speed satisfying (6.11) for c1 = 1, c2 = 2, λ = 1 and initial conditions given by (6.32) where N = 1 and y† = e−7.5, obtained using a finite element method implemented in MatlabTM. Blue: t = 0, green: t = 0.8, red: t = 3.4. A plot of the wave speed is given in (6.1). 134 8.3. Results The reflection and transmission coefficients of the system were tested using the finite element solver for Gaussian wave pulses u(x, 0) = U0e (x−x0)2 2δ2 ut(x, 0) = 0, (8.17) where x0 was chosen to lie outside of the transition region. The parameters δ, λ, and c2 were varied to determine the effect of each one on the reflection and transmission coefficients. The behaviour of the incoming pulse for one set of parameters is shown at three different times in Figure 8.2. Figure 8.2: A numerical approximation to a solution of the wave equation (8.1) with wave speed satisfying (6.11) for c1 = 12 , c2 = 1, and λ = 2.3 at three different times: t = 5 (blue), t = 23 (green), and t = 27 (red). The approximation was obtained using a finite element method implemented in MatlabTM. The corresponding wave speed as a function of x is given in (6.1). 135 8.3. Results The dependence of the reflection and transmission coefficients on the pa- rameter κ = (c2−c1)νλδ , defined to be the ratio of the transition region width to the width of the incoming pulse, was examined by fixing c1 and c2 while varying κ. From these numerical experiments it was found that as κ was made arbitrarily small the reflection and transmission coefficients converged to those given by the Fresnel equations (5.34) in the case of a sharp transi- tion (having zero width). In particular, it was found that the transmission coefficient converged to the classical result from above with the correspond- ing reflection coefficient converging from below. Typical results of the nu- merical simulations showing the convergence of the transmission coefficients to the Fresnel value with decreasing κ are plotted in Figure 8.3. The dif- ference between the experimental and Fresnel reflection coefficients must satisfy (R(λ)− RF ) = −(T (λ)− TF ) since TF + RF = 1 = T (λ) + R(λ) by conservation of energy. 0 20 40 60 80 100 120 140 160 0.82 0.84 0.86 0.88 0.9 0.92 0.94 0.96 0.98 1 1.02 kappa T ra n s m is s io n C o e ff ic ie n t Figure 8.3: A plot of the smooth transmission coefficients T (κ) when c1 = 12 , and c2 = 1 with Fresnel transmission coefficient TF = 89 . 136 Chapter 9 Conclusions and Further Work In this part of the thesis, solutions to the one-dimensional linear wave equa- tion (5.2) in a two-layered medium with a smooth transition were analyzed both numerically and analytically. In particular, the four-parameter family of wave speeds given by solutions of the ODE (6.11) was used. 
These wave speeds were considered both because one could find an analytic solution to (5.2) when c(x) satisfied (6.11) and because of their relevance to problems in electromagnetism and seismology. In these applications one is particularly interested in the situation where a wave pulse originating in one medium reaches the transition layer and splits into a reflected and transmitted pulse travelling in opposite directions. One can then calculate the reflection and transmission coefficients which represent the fraction of the energy of the original pulse which is reflected and transmitted, respectively. When the transition region is of zero width, the Fresnel equations [16] provide an analytic expression for the reflection and transmission coeffi- cients, given in (5.34), which is independent of the shape of the incoming wave. Through the numerical and analytic methods it was found that, for a smooth transition, the coefficients were no longer independent of the in- coming pulse. Rather than depending just on the speed ratio γ = c2c1 , the reflection and transmission coefficients also depended upon the ratio of the width of the transition region to the width of the incoming pulse. This ratio, l δ , is proportional to the unitless quantity κ = ν(c2−c1) δλ . 137 Chapter 9. Conclusions and Further Work As expected, when κ → 0, and the transition region becomes small rel- ative to the width of the incoming pulse, the transmission and reflection coefficients converge to those given in the Fresnel equations from above and below, respectively. Conversely, if κ → ∞, and thus the transition is flat relative to the length-scale δ, the transmission coefficient approaches 1 while the reflection coefficient approaches 0, as to be expected. In the course of the analysis, it was found that the two methods described in Chapters 7 and 8 each have their own distinct advantages and disadvan- tages. For the finite element method, in order to compare the solution at t = 0 and a later time t = tf , it is necessary to accurately approximate the solution for all intervening times. Hence, one must approximate the solu- tion over the rectangle [−40, 40]× [0, tf ] in the xt-plane. For the numerical approximation of the analytic method it is only necessary to determine the solution along two lines, x = 0 and t = 0. This difference is important when a high level of accuracy is required over large distances or times. In addi- tion, when the transition region is very small or the incoming pulse is very narrow, the finite element method formulated in Chapter 8 becomes less accurate due to the discretization of the time derivatives. Conversely, the terms of the analytic solution converge most rapidly for narrow pulses near or within the transition region. To model the behaviour of large pulses far away from the origin requires a large number of terms, making the solution inaccurate due to the accumulation of rounding errors. To extend this work an obvious approach would be to try to establish the dependence of the transmission and reflection coefficients on the parameter κ directly from the analytic solution presented in Chapter 7. In such a case, one would expect to obtain an asymptotic series in κ whose first term matches the coefficient given by the Fresnel equations. Finally, one could also apply the method of non-local symmetries, and the approaches outlined in this thesis, to different wave speeds by using the non-local symmetries found in [5] and searching for additional ones. 138 Bibliography [1] J. Berryman and C. 
Holland. Nonlinear diffusion problem arising in plasma physics. Physical Review Letters, 40:1720–1722, 1978.
[2] G.W. Bluman. Use and construction of potential symmetries. Mathematical and Computer Modelling, 8:1–14, 1993.
[3] G.W. Bluman and S.C. Anco. Symmetry and Integration Methods for Differential Equations. Springer, 2002.
[4] G.W. Bluman, S.C. Anco, and A.F. Cheviakov. Applications of Symmetry Methods to Partial Differential Equations. Springer, 2010.
[5] G.W. Bluman and A.F. Cheviakov. Nonlocally related systems, linearization and nonlocal symmetries for the nonlinear wave equation. Journal of Mathematical Analysis and Applications, 333:93–111, 2007.
[6] G.W. Bluman and S. Kumei. On invariance properties of the wave equation. Journal of Mathematical Physics, 28:307–318, 1987.
[7] G.W. Bluman and S. Kumei. Exact solutions for wave equations of two-layered media with smooth transition. Journal of Mathematical Physics, 29:86–96, 1988.
[8] G.W. Bluman and S. Kumei. Symmetries and Differential Equations. Springer, 1989.
[9] G.W. Bluman and G.J. Reid. New symmetries for ordinary differential equations. IMA Journal of Applied Mathematics, 40:87–94, 1988.
[10] G.W. Bluman, Temuerchaolu, and S.C. Anco. New conservation laws obtained directly from symmetry action on a known conservation law. Journal of Mathematical Analysis and Applications, 322:233–250, 2006.
[11] B. Chakraborty. Electromagnetic wave propagation through slowly varying plasma. Journal of Mathematical Physics, 11:2570, 1970.
[12] A.F. Cheviakov. GeM symbolic software package. http://math.usask.ca/~shevyakov/gem/, 2009.
[13] S.H. Christiansen. Foundations of finite element methods for wave equations of Maxwell type. In E. Quak and T. Soomere, editors, Applied Wave Mathematics, pages 335–393. Springer, 2009.
[14] L.C. Evans. Partial Differential Equations. American Mathematical Society, 2nd edition, 2002.
[15] H. Fujita. The exact pattern of a concentration-dependent diffusion in a semi-infinite medium. Textile Research Journal, 22:757–827, 1952.
[16] D.J. Griffiths. Introduction to Electrodynamics. Prentice-Hall, Inc., 3rd edition, 1999.
[17] P. Holy and J. Rodriguez. Optimized higher order time discretization of second order hyperbolic problems: construction and numerical study. Journal of Computational and Applied Mathematics, 234:1953–1961, 2010.
[18] L. Infeld and T.E. Hull. The factorization method. Reviews of Modern Physics, 23:21–68, 1951.
[19] L.D. Landau and E.M. Lifshitz. Theory of Elasticity. Pergamon Press, 2nd English edition, 1970.
[20] MATLAB version 7.14.0 (R2012a). The MathWorks Inc., Natick, Massachusetts, 2012.
[21] M.B. Monagan, K.O. Geddes, K.M. Heal, G. Labahn, S.M. Vorkoetter, J. McCarron, and P. DeMarco. Maple 14 Programming Guide. Maplesoft, Waterloo, ON, Canada, 2010.
[22] A. Ralston and P. Rabinowitz. A First Course in Numerical Analysis. McGraw-Hill, 2nd edition, 1978.
[23] W. Rudin. Principles of Mathematical Analysis. McGraw-Hill, 3rd edition, 1976.
[24] P.M. Shearer. Introduction to Seismology. Cambridge University Press, 2nd edition, 2009.
[25] J.C. Strikwerda. Finite Difference Schemes and Partial Differential Equations. SIAM, 2004.