The Application of Symmetry Methods and Conservation Laws to Ordinary Differential Equations and a Linear Wave Equation

by

Jeremy G. Hoskins

B.Sc., The University of British Columbia, 2011

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF SCIENCE in The Faculty of Graduate Studies (Mathematics)

THE UNIVERSITY OF BRITISH COLUMBIA (Vancouver)

October 2012

© Jeremy G. Hoskins 2012

Abstract

Symmetry analysis and conservation laws are widely used in analyzing and solving differential equations. Conservation laws are also called first integrals when dealing with ordinary differential equations (ODEs). In this thesis, the complementary nature of these two approaches is explored; specifically, the use of symmetries to find integrating factors and, conversely, the use of conservation laws to seek new symmetries.

In Part 1, building upon results in [3] and [10], it is shown that a higher-order symmetry of an ODE induces a point symmetry of the corresponding integrating factor determining equations (IFDE), and an explicit expression for this induced symmetry is obtained. Secondly, it is shown that the converse also holds for a special class of Lie point symmetries of the IFDE; namely, all Lie point symmetries of the IFDE which are of the form

ξ(x, Y, Y1, …, Yn−1) ∂/∂x + η(x, Y, Y1, …, Yn−1) ∂/∂Y + Σ_{i=1}^{n−1} η^i(x, Y, Y1, …, Yn−1) ∂/∂Yi + Λ h(x, Y, Y1, …, Yn−1) ∂/∂Λ    (1)

project onto point symmetries of the original scalar ODE.

In Part 2, the use of conservation laws to find non-local symmetries is shown for a linear one-dimensional wave equation in a two-layered medium with a smooth transition layer. The resulting analytic solutions are then studied in order to investigate the effect of the transmission and reflection of energy between the two media. It is found that the reflection and transmission coefficients depend on the ratio of the wave speeds in the two media as well as the ratio of the characteristic length of the incoming signal to the width of the transition layer. Approximations of the dependence of the reflection and transmission coefficients on these two parameters are also presented, obtained via numerical experiments performed using both the analytic solution and a finite element method.

Table of Contents

Abstract
Table of Contents
List of Tables
List of Figures
Acknowledgements
Dedication
1 Preface

Part I: First Integral Symmetries
2 First Integrals of Ordinary Differential Equations
  2.1 Introduction
  2.2 First Integrals
  2.3 Theoretical Background
    2.3.1 Direct Method
  2.4 Motivation and Aims
    2.4.1 Symmetries Map First Integrals into other First Integrals
    2.4.2 Symmetry Ansatzes and Invariant Solutions
    2.4.3 Aims
3 Results
  3.1 Introduction
  3.2 First-Order ODEs
  3.3 Second-Order ODEs
    3.3.1 Scalar Integrating Factor Equations
    3.3.2 System Integrating Factor Equations
    3.3.3 Comparison of the Two Approaches
  3.4 General ODEs
    3.4.1 Overview
    3.4.2 First Integral Determining Equation
    3.4.3 First Integral Symmetries
    3.4.4 Multiplier Determining Equations
    3.4.5 Induced Multiplier Symmetries
    3.4.6 Connection between the System and Scalar Multiplier Determining Equations
4 Conclusions and Further Work

Part II: Analysis of an Exact Solution of a Wave Equation
5 The Wave Equation
  5.1 Introduction
  5.2 The Wave Equation and its Applications
  5.3 Conservation of Energy
  5.4 Reflection and Transmission Coefficients
6 Solutions of a Family of Wave Equations
  6.1 Introduction
  6.2 Symmetry Classification of (5.2)
  6.3 Formulation of a Nonlocally Related System
  6.4 Symmetry Analysis of a Nonlocally Related System
  6.5 Formulation of an Analytic Solution
7 Numerical Implementation of the Analytic Solution
  7.1 Introduction
  7.2 Details of the Implementation
  7.3 Results
8 Numerical Approximation to the Solution
  8.1 Introduction
  8.2 Details of the Implementation
    8.2.1 Weak Formulation
    8.2.2 Discretization
  8.3 Results
9 Conclusions and Further Work
Bibliography

List of Tables

6.1 Group classification of the linear wave equation (5.2) appearing in [5], based on [6].

List of Figures

6.1 A solution of (6.11) for c1 = 1/2, c2 = 1, and λ = 0.8, obtained using a Runge–Kutta method [22] implemented in Matlab.
6.2 A plot of U(x(y)) given by (6.32) with N = 1 and y† = e^(−7.5), obtained using a finite number of terms of the series solution (6.28).
7.1 A plot of the first mode u0(y, t) given in (6.29) when λ = 1, c2 = 2, and t = −8.
7.2 A plot of the first mode u0(y, t) given in (6.29) when λ = 1, c2 = 2, and t = 0.
7.3 A plot of the first mode u0(y, t) given in (6.29) when λ = 1, c2 = 2, and t = 8.
7.4 The first mode u0(y, t) plotted against log(y) when λ = 1, c2 = 2, and t = 10.
7.5 The first mode u0(y, t) plotted against log(π − y) when λ = 1, c2 = 2, and t = 10.
7.6 The dependence of the transmission coefficient on the ratio of the asymptotic wave speeds for the first mode u0(y, t) given in (6.29) when λ = 1. The red and green points were calculated from the two wave pulses created by the collision of the two incident wave pulses. The blue points represent the transmission coefficient for a sharp transition with the same wave speed ratio.
7.7 A plot of the transmission coefficients for a smooth transition with λ = 1 (red) and the transmission coefficients for a sharp transition (green), where the initial conditions are given by (6.32) with N = 1 and y† = e^(−7.5), and γ = c2/c1 is the ratio of the asymptotic wave speeds.
8.1 A numerical approximation to a solution of the wave equation (5.2) with wave speed satisfying (6.11) for c1 = 1, c2 = 2, λ = 1 and initial conditions given by (6.32), where N = 1 and y† = e^(−7.5), obtained using a finite element method implemented in Matlab. Blue: t = 0, green: t = 0.8, red: t = 3.4. A plot of the wave speed is given in (6.1).
8.2 A numerical approximation to a solution of the wave equation (8.1) with wave speed satisfying (6.11) for c1 = 1/2, c2 = 1, and λ = 2.3 at three different times: t = 5 (blue), t = 23 (green), and t = 27 (red). The approximation was obtained using a finite element method implemented in Matlab. The corresponding wave speed as a function of x is given in (6.1).
8.3 A plot of the smooth transmission coefficients T(κ) when c1 = 1/2 and c2 = 1, with Fresnel transmission coefficient TF = 8/9.

Acknowledgements

The author would like to thank his supervisors, Dr. George Bluman and Dr. Alexei Cheviakov, for their guidance. He would also like to thank Dr. Dominik Schoetzau for his assistance.

Dedication

To my parents.

Chapter 1

Preface

In the study of ordinary and partial differential equations (ODEs and PDEs, respectively), symmetry analysis and conservation law methods are particularly effective techniques which enable one to find solutions to differential equations arising in many areas of science. In this thesis the complementary nature of these two methods is explored: both the use of symmetries to find integrating factors and the use of conservation laws to find symmetries.

Part 1 begins with a discussion of the direct method for finding integrating factors, which arise as solutions of an over-determined PDE system called the scalar integrating factor determining equations (IFDE). In general, solving the scalar IFDE directly is not possible and one instead seeks particular solutions.
A result from [10] is presented in Chapter 2 which shows that every contact symmetry of a scalar ODE induces a point symmetry of its corresponding IFDE. This is useful in two ways. Firstly, given a contact symmetry of the scalar ODE one can seek solutions of the IFDE which are invariant under the induced point symmetry. Secondly, given an integrating factor one can look at its image under a symmetry. Mapping in this way can yield more than one new integrating factor. In Chapter 3 the correspondence between contact symmetries of a scalar ODE and its associated scalar IFDE is extended to the case of higher-order symmetries of the scalar ODE. In particular, it is shown that every higherorder symmetry of the scalar ODE induces a point symmetry of its associated IFDE. Conversely, it is shown that if a transformation is a Lie point symmetry of the scalar IFDE of a particular form (commonly used when seeking symmetries of a PDE system) then that point symmetry must project onto 1  Chapter 1. Preface a higher-order symmetry of the original scalar ODE. Part 2 demonstrates the use of conservation laws to find symmetries of a PDE systematically which cannot be obtained using the classical methods of Lie. In particular, it is shown that for a certain linear one-dimensional equation one can find two such symmetries, called non-local symmetries, which allow one to find new analytic solutions to this equation [7]. The wave speed for this specific wave equation corresponds to two homogeneous media separated by a smooth transition region. In physical applications the fraction of an incident wave’s energy which passes through the transition region is particularly important. Chapter 5 introduces the particular form of the wave equation under consideration, as well as its applications, and formalizes the definition of energy and the fraction of transmitted energy. In Chapter 6 non-local symmetry methods are outlined and applied to the wave equation enabling one to find an analytic solution for a four-parameter family of wave speeds. The analysis of the resulting analytic solution is given in Chapter 7 and compared with the results of a finite element method approach in Chapter 8.  2  Part I First Integral Symmetries  3  Chapter 2  First Integrals of Ordinary Differential Equations 2.1  Introduction  In this chapter, the use of first integrals for solving scalar ordinary differential equations (ODEs) is discussed. In particular, the direct method is introduced which allows one to seek integrating factors systematically by solving an over-determined system of partial differential equations (PDEs) called the integrating factor determining equations (IFDE). Often it is not possible to obtain general solutions of these equations by directly solving the IFDEs and one must instead seek particular solutions. In this chapter two methods, based on symmetries, are presented which can be used to seek first integrals of scalar ODEs systematically.  2.2  First Integrals  The mathematical modelling of physical phenomena frequently generates differential equations using conserved quantities (first integrals for ordinary differential equations) such as mass, energy, momentum and charge. Numerically, first integrals are used in generating finite element methods as well as to check the accuracy of numerical schemes. Recently there have been great advances in the development of systematic methods for finding first integrals. 
Using first integrals one can obtain analytic solutions to many otherwise-intractable differential equations.

The present work focuses on nth-order ordinary differential equations (ODEs) of the form

y^(n) − f(x, y, y′, …, y^(n−1)) = 0.    (2.1)

In this case, a first integral enables the reduction of (2.1) to an (n − 1)th-order ODE.

Definition 2.1. A function φ(x, Y, Y1, …, Yn−1) yields a first integral of (2.1) if, for arbitrary functions Y = {Y, Y1, …, Yn−1}, there exists Λ = Λ(x, Y) such that

Λ(x, Y)[Yn − f(x, Y)] = Dφ(x, Y).    (2.2)

In (2.2), D = ∂/∂x + Y1 ∂/∂Y + … + Yn ∂/∂Yn−1 is the total derivative operator. The function Λ is called an integrating factor of (2.1), and φ(x, y, y′, …, y^(n−1)) = const is the corresponding first integral. Unless otherwise stated, Y, …, Yn−1 are assumed to be arbitrary, while y, y′, …, y^(n−1) will denote a solution of (2.1) and its derivatives.

Example 2.2. As an example, consider

y′′ − y = 0.    (2.3)

Two distinct integrating factors of (2.3) are

Λ1(x, Y, Y1) = e^x  and  Λ2(x, Y, Y1) = e^(−x).

The multiplication of (2.3) by Λ1 and replacement of y and y′ by arbitrary functions Y and Y1 gives

e^x (Y2 − Y) = D[e^x (Y1 − Y)].

It follows that φ1(x, y, y′) = e^x (y′ − y) = a is a first integral of (2.3). Using Λ2 yields another first integral φ2(x, y, y′) = e^(−x) (y′ + y) = b. Solving for y in the two first integral equations, one finds the general solution of (2.3):

y = (b/2) e^x − (a/2) e^(−x).    (2.4)

Using a Lie point symmetry to reduce (2.1) results in an (n − 1)th-order equation

v^(n−1) − g(u, v, v′, …, v^(n−2)) = 0,    (2.5)

where v^(i) = d^i v/du^i, u = u(x, y), and v = v(x, y, y′) has an essential dependence on y′. Since v depends on y′, writing (2.5) in terms of (x, y, y′, …, y^(n)) yields an nth-order ODE. In terms of the variables (x, y, y′, …, y^(n)) no reduction has taken place. Reducing via a first integral instead yields an (n − 1)th-order ODE depending upon x, y, the derivatives of y to order n − 1, and an arbitrary constant λ. The fact that first integrals allow reduction in terms of the original variables aids significantly in subsequent analysis and further reduction.

First integrals are found systematically using integrating factors, which satisfy an over-determined linear PDE system called the integrating factor determining equations (IFDE). Given an integrating factor, calculating a first integral and reducing (2.1) is then straightforward. Due to the nature of the set of solutions of the IFDE, discussed further in Section 2.3.1, obtaining a general solution of the IFDE by directly solving the over-determined PDE system is seldom possible. Instead, one seeks particular solutions of the IFDE, since any non-zero integrating factor allows one to reduce (2.1) to an (n − 1)th-order ODE. If n functionally-independent integrating factors are known, a general solution of (2.1) can then be determined.

To obtain particular solutions of an IFDE, one often considers a particular ansatz for the integrating factors, such as Λ(x, Y) = Λ̃(x, Y, Y1, …, Yk) where k < n. Alternatively, one can employ symmetry methods to find integrating factors in the following ways:

• given a symmetry of the IFDE of (2.1), one can look for solutions invariant under that symmetry [10]. This reduces the equations, simplifying the task of finding particular solutions.
• if a non-zero integrating factor is known, a symmetry of the IFDE will map it to one or more possibly distinct integrating factors [10].

The use of symmetries of the IFDE thus presents a systematic way to search for integrating factors of the ODE (2.1). The above methods are outlined in greater detail in Sections 2.4.1 and 2.4.2, respectively. In order to use symmetry methods one must first know at least one symmetry of the IFDE. The easiest way to find symmetries of the IFDE is to seek contact symmetries of (2.1), since every contact symmetry of (2.1) induces a point symmetry of the IFDE [10]. There are many examples for which this technique has proven effective in obtaining integrating factors and corresponding first integrals.

In this work it will be shown that the converse of the previous result also holds for a certain class of point symmetries of the IFDE. In particular, all point symmetries of the IFDE which are of the form

X = ξ(x, Y) ∂/∂x + η(x, Y) ∂/∂Y + Σ_{j=1}^{n−1} η^j(x, Y) ∂/∂Yj + Λ h(x, Y) ∂/∂Λ

project onto higher-order symmetries of the scalar ODE (2.1).

2.3 Theoretical Background

2.3.1 Direct Method

Recent progress in the study of first integrals has resulted from the direct method [3]. Analogous to symmetry analysis, the direct method produces an over-determined linear PDE system which the integrating factors must satisfy. Only ODEs will be considered here, though the method generalizes to PDEs. The aim of the direct method is to find functions Λ(x, Y) for which there exists φ(x, Y) such that the pair (Λ, φ) satisfies (2.2). Necessary and sufficient conditions for Λ to be an integrating factor are obtained by applying the Euler operator.

Definition 2.3. [4] The Euler operator E is the linear differential operator defined by

E = ∂/∂Y + Σ_{i=1}^{∞} (−1)^i D^i ∂/∂Yi.

One then has the following result.

Theorem 2.4. [3] For a differentiable function g(x, Y, …, Yk), the following two statements are equivalent:

• E g(x, Y, Y1, …, Yk) ≡ 0;
• there exists a function h(x, Y, Y1, …, Yk−1) such that g(x, Y, Y1, …, Yk) = Dh(x, Y, Y1, …, Yk−1).

Corollary 2.5. Λ(x, Y) is an integrating factor of (2.1) if and only if

E[Λ(x, Y){Yn − f(x, Y)}] ≡ 0.    (2.6)

Proof. Applying the Euler operator to both sides of (2.2), the result follows immediately from Theorem 2.4.

Along with Y, Y1, …, Yn−1, the left-hand side of (2.6) also depends on Yn, …, Y2n−1, introduced by the repeated differentiation associated with E. Since Λ does not depend on Yn, …, Y2n−1, the nature of the Euler operator implies that the left-hand side of (2.6) must be a polynomial in Yn, …, Y2n−1. Let Λ̄ denote Λ together with all its partial derivatives with respect to (x, Y). The left-hand side of (2.6) can then be re-written as

E[Λ(x, Y){Yn − f(x, Y)}] = Σ_{i1, i2, …, in} a_{i1 i2 … in}(x, Y, Λ̄) Yn^{i1} Yn+1^{i2} ⋯ Y2n−1^{in},    (2.7)

where the a_{i1 … in}(x, Y, Λ̄) are linear in Λ and its derivatives. Since each Yi is arbitrary, and (2.6) vanishes for all (Y, …, Y2n−1), it follows that each coefficient a_{i1 … in}(x, Y, Λ̄) must be identically zero. The resulting system, called the IFDE, is given by

a_{i1 … in}(x, Y, Λ̄) = 0,  for all i1, …, in.    (2.8)

The IFDE is a linear system of PDEs with dependent variable Λ and independent variables (x, Y).

Example 2.6. Consider the second-order ODE

y′′ = 0.
(2.9)  After multiplying (2.9) by Λ = Λ(x, Y, Y1 ) and applying the Euler operator one obtains ΛY Y2 − D (ΛY1 Y2 ) + D2 Λ = 0.  (2.10)  Expanding (2.10) one finds that ΛY Y2 − [Y3 ΛY1 + Y2 (ΛxY1 + Y1 ΛY Y1 + Y2 ΛY1 Y1 )] + [Λxx + 2ΛxY Y1 + + 2ΛxY1 Y2 + 2ΛY Y1 Y1 Y2 + Y22 ΛY1 Y1 + Y12 ΛY Y + Y3 ΛY1 + Y2 ΛY = 0. (2.11)  9  2.3. Theoretical Background The cancellation of terms in (2.11) produces Λxx + 2Y1 ΛxY + Y12 ΛY Y + Y2 (2ΛY + Y1 ΛY Y1 + ΛxY1 ) = 0.  (2.12)  (2.12) is a polynomial in Y2 , each coefficient of which must be identically zero. Splitting (2.12) gives Λxx + 2Y1 ΛxY + Y12 ΛY Y = 0,  (2.13a)  2ΛY + Y1 ΛY Y1 + ΛxY1 = 0.  (2.13b)  To solve (2.13) let r = Y1 x−Y and Λ(x, Y, Y1 ) = G(x, r, Y1 ). Then (2.13a) becomes  ∂2G = 0. ∂x2  (2.14)  G(x, r, Y1 ) = F1 (r, Y1 ) + xF2 (r, Y1 ).  (2.15)  It follows that  The substitution of (2.15) into (2.13b) yields ∂F1 (r, Y1 ) ∂F2 (r, Y1 ) − = 0. ∂r ∂Y1  (2.16)  From (2.16) it follows immediately that there exists a function H(r, Y1 ) such that F1 =  ∂H ∂Y1  and F2 =  ∂H ∂r .  Λ(x, Y, Y1 ) = x  Thus  ∂H(r, Y1 ) ∂r  + r=Y1 x−Y  ∂H ∂Y1  (2.17) r=Y1 x−Y  It is then easily shown that (2.13) has the following general solution: Λ=  ∂ H(xY1 − Y, Y1 ), ∂Y1  where H is any suitably differentiable function.  10  2.4. Motivation and Aims Dimension of the Space of Integrating Factors Finding a general solution to the IFDE can be difficult due to the size of its solution set, as demonstrated by the following result. Theorem 2.7. Let φ1 , . . . , φk be functions of (x, Y ) yielding first integrals of (2.1). Then if h(φ1 , . . . , φk ) is any differentiable function, µ(x, Y ) =  ∂φi ∂h(φ1 , . . . , φk ) ∂Yn−1 ∂φi  is an integrating factor of (2.1). Proof. From (2.2), it is clear that  ∂φi ∂Yn−1  = Λi is an integrating factor of (2.1)  for i = 1, . . . , k. Applying D to h and assuming summation over repeated indices gives dh ∂h dφi = dx ∂φi dx ∂h Λi [Yn − f (x, Y )] = ∂φi  (2.18)  = µ(x, Y ) [Yn − f (x, Y )] . It follows from (2.2) that µ(x, Y ) is an integrating factor with corresponding first integral h(φ1 , . . . , φk ) = c. Though the solutions to (2.1) and the corresponding integrating factor determining equations are of the same cardinality, the set of solutions to the latter is an infinite-dimensional space. In contrast, each solution of (2.1) can be represented by a point in Rn .  2.4 2.4.1  Motivation and Aims Symmetries Map First Integrals into other First Integrals  One important way of finding first integrals is by using symmetries of (2.1). In particular, as the next theorem shows, each contact symmetry of the 11  2.4. Motivation and Aims original ODE (2.1) induces a symmetry of both its first integrals and its integrating factors. Theorem [10] 2.8. Suppose (2.1) is invariant under a contact transformation given by x∗ = x∗ (x, y, y ), y ∗ = y ∗ (x, y, y ),  (2.19)  y1∗ = y1∗ (x, y, y ). In addition, let Λ(x, Y ) be an integrating factor of (2.1). If φ(x, y, y , . . . , y (n) ) = const is the corresponding first integral then ˜ y, y , . . . , y (n−1) ) = φ x∗ (x, y, y ), y ∗ (x, y, y ), . . . , y ∗ (x, y) φ(x, 0 n−1  (2.20)  = const is also a first integral, where Yi∗ =  ∗ DYi−1 Dx∗  for i = 2, . . . , n−1. The integrating  factor corresponding to the first integral (2.20) is ˜ = [Dx∗ ]A(x, Y ) Λ(x∗ (x, Y, Y1 ), Y ∗ (x, Y )), Λ  (2.21)  where A(x, Y ) is defined implicitly by Yn∗ − f (x∗ , Y ∗ ) = A(x, Y )[Yn − f (x, Y )].  (2.22)  Proof. If (2.19) represents a symmetry of (2.1), it must leave invariant the solution set of (2.1) [10]. 
It follows that there exists A(x, Y ) such that Yn∗ − f (x∗ , Y ∗ ) = A(x, Y )[Yn − f (x, Y )]. Let D∗ =  (2.23)  ∂ ∂ ∂ + Y1∗ + . . . + Yn∗ ∗ , ∗ ∗ ∂x ∂Y ∂Yn−1  12  2.4. Motivation and Aims denote the total derivative operator with respect to x∗ . Since D and D∗ are total derivative operators they must satisfy the chain rule D∗ = [Dx∗ ]−1 D.  (2.24)  By replacing (x, Y ) with (x∗ , Y ∗ ) in (2.2) and using D∗ in place of D it is clear that Λ(x∗ , Y ∗ )[Yn∗ − f (x∗ , Y ∗ )] = D∗ φ(x∗ , Y ∗ ).  (2.25)  Applying (2.24) and (2.23) to (2.25) gives Λ(x∗ , Y ∗ )A(x, Y )[Yn − f (x, Y )] = [Dx∗ ]−1 Dφ(x∗ (x, Y ), Y ∗ (x, Y )). (2.26) After multiplying (2.26) by Dx∗ , the resulting expression is of the form (2.2) with first integral ˜ y, y , . . . , y (n−1) ) = φ x∗ (x, y, y ), y ∗ (x, y, y , . . . , y (n−1) ) = c (2.27) φ(x, and corresponding integrating factor ˜ = [Dx∗ ]A(x, Y ) Λ(x∗ (x, Y, Y1 ), Y ∗ (x, Y )). Λ  (2.28)  In the preceding theorem, no assumptions apart from differentiability are made about the nature of the symmetry (2.19). Thus, this theorem holds even when (2.19) is not a Lie symmetry of (2.1). Example 2.9. Consider the equation y − y = 0.  (2.29)  In Example 2.2, it was shown that φ1 (x, y, y ) = ex (y − y) = a is a first integral of (2.29) with correspond integrating factor ex . By inspection, it is  13  2.4. Motivation and Aims also clear that (2.29) has the discrete symmetry: x∗ = −x,  (2.30)  y ∗ = y. Under transformation (2.30), φ1 becomes φ˜1 (x, y, y ) = φ1 (−x, Y, −Y1 ) = −e−x (Y + Y1 ),  which yields the other first integral e−x (y + y) = b from Example 2.2. Similarly, since A(x, Y ) = 1 and Dx∗ = −1, (2.28) gives the integrating factor ˜ Λ(x, Y, Y1 ) = −e−x , which is equivalent to Λ2 in Example 2.2. Focusing on Lie contact symmetries of (2.1), the results of Theorem 2.8 can be extended, allowing one to obtain possibly more than one new first integral per symmetry. Theorem 2.10. [10] Suppose (2.1) admits the following Lie contact symmetry with parameter  : x∗ = x∗ (x, y, y ; ), Y ∗ = Y ∗ (x, y, y ; ), Y1∗  =  Y1∗ (x, y, y  (2.31)  ; ).  As in Theorem 2.8, let φ(x, y) = const be a first integral of (2.1) with corresponding integrating factor Λ(x, Y ). Furthermore, let the infinitesimal generator of transformation (2.31) be denoted by X = ξ(x, y, y )  ∂ ∂ ∂ + η(x, y, y ) + η 1 (x, y, y ) , ∂x ∂y ∂y  (2.32)  and its prolongations by X(1) , X(2) , . . . , X(n−1) . It then follows that there exists an infinite sequence of first integrals and integrating factors of (2.1) 14  2.4. Motivation and Aims given by 1 φˆi (x, y) = [X(n−1) ]i φ(x, y), i! 1 ˆ i (x, Y ) = Λ D[(X(n−1) )m x] m! p! l! m+p+l=i  ∂ p ∂Yn∗ ∂ p ∂Yn  [(X(n−1) )l Λ(x, Y )]. =0  (2.33) Each level set of φˆi generates a first-integral when restricted to solutions of (2.1). For (2.1) there cannot be more than n independent first integrals and thus the above method will generate at most n − 1 new first integrals. Proof. Before proceeding further, it is necessary to derive an explicit expression for A(x, Y ; ) which is defined implicitly in Theorem 2.8 by [Yn∗ − f (x∗ , Y ∗ ; )] = A(x, Y ; )[Yn − f (x, Y )].  (2.34)  Note that A(x, Y ; ) does not depend on Yn . In addition, for a contact sym∗ metry, Y ∗ , . . . , Yn−1 are independent of Yn , and Yn∗ is linear in Yn . This  follows directly from the prolongation formulae [3]. Differentiating (2.34) with respect to Yn gives ∂Yn∗ (x, Y ; ) = A(x, Y ; ). ∂Yn Expanding A(x, Y ; ) as a Taylor series in ∞  A(x, Y ; ) = k=0  (2.35)  yields  k  ∂kA k! ∂ k  .  
(2.36)  =0  By Theorem 2.8, if (2.1) admits the Lie contact symmetry (2.31) and has an integrating factor Λ(x, Y ) with a corresponding first integral φ(x, y, y , . . . , y (n−1) ) = const,  15  2.4. Motivation and Aims then the following is also an integrating factor of (2.1) for all values of ˜ Λ(x, Y ) = [Dx∗ ]A(x, Y ; ) Λ(x∗ (x, Y ; ), Y ∗ (x, Y ; )).  :  (2.37)  The first integral associated with the integrating factor (2.37) is ˜ y, y . . . , y (n−1) ) = φ x∗ (x, y, y ; ), Y ∗ (x, y, y , . . . , y (n) ; ) φ(x, (2.38) = const. Using a result from [3], if F (x, Y ) is any sufficiently smooth function then F (x∗ , Y ∗ ) = e  X(n−1)  F (x, Y ),  (2.39)  where X is the infinitesimal generator associated with transformation (2.31) and X(n−1) is its (n − 1)th prolongation. Using (2.36), and (2.39) it follows that ∞  φ˜ = l=0  l  l!  X(n−1) φ,  ∞  ˜ =D Λ i=0  i  i!  ∞ i  Xx  ∞ i  = i=0  m+p+l=i  j=0  j  j!  ∂ j ∂Yn∗ ∂ j ∂Yn  1 D[(X(n−1) )m x] m! p! l!  ∞ =0 k=0  k  k!  X(n−1) Λ(x, Y )  ∂ p ∂Yn∗ ∂ p ∂Yn  [(X(n−1) )l Λ(x, Y )] =0  (2.40) Equating powers of  on either side of the equation ˜ n − f (x, Y )] = Dφ˜ Λ[Y  (2.41)  yields ˜ ˜ i [Yn − f (x, Y )] = Dφ, Λ  i = 0, . . .  (2.42)  ˜ i (x, Y ) where φi and Λi are given by (2.33). It follows from (2.42) that Λ and φ˜i (x, y, y , . . . , y (n) ) = c are integrating factors and first integrals, re16  2.4. Motivation and Aims spectively, of (2.1).  2.4.2  Symmetry Ansatzes and Invariant Solutions  One does not always have a first integral or, equivalently, an integrating factor from which to generate others. Moreover, even if an integrating factor is known, it is not necessarily true that using symmetries as outlined in Theorems 2.8, and 2.10 will generate enough first integrals to reduce (2.1) the n times required to obtain a general solution. It is therefore necessary to have a method for finding integrating factors directly from the IFDE. It was proved in Section 2.4.1 that every contact symmetry of (2.1) gives rise to a transformation mapping each integrating factor into another, possibly distinct, integrating factor. It follows that symmetries of (2.1) must induce symmetries of the corresponding IFDE. From Theorem 2.10, given a symmetry of (2.1) with infinitesimal generator X=ξ  ∂ ∂ ∂ +η + ηi , ∂x ∂Y ∂Yi  (2.43)  the following formula provides the action of the induced symmetry on (x, Y , Λ) [3]: ˆ = ξ ∂ + η ∂ + ηi ∂ + X ∂x ∂Y ∂Yi  ∂η n−1 ∂Yn−1  Λ  ∂ . ∂Λ  (2.44)  Solutions of the IFDE invariant under (2.44) are then sought using the method of differential invariants [3].  2.4.3  Aims  This work will extend the above results to higher-order symmetries of (2.1), potentially increasing the number of integrating factors which can be generated by both of the methods outlined above. It will also demonstrate the completeness of the ODE symmetry method for finding symmetries of the corresponding IFDE by showing that all point symmetries of the IFDE project onto symmetries of (2.1). There is thus no need to generate and  17  2.4. Motivation and Aims solve the symmetry determining equations of the IFDE to ensure that all useful symmetries have been found.  18  Chapter 3  Results 3.1  Introduction  In this chapter connections between the local symmetries of (2.1) and those of the IFDE of (2.1) are discussed. In particular, it is shown that all contact symmetries of (2.1) induce point symmetries of the corresponding IFDE and all higher-order symmetries of (2.1) induce higher-order symmetries of the IFDE. 
Conversely, it is also proved that every point symmetry of an IFDE which is of a certain form projects onto a local symmetry of (2.1). In Sections 3.2 and 3.3 the relationship between the symmetries of an ODE and its corresponding IFDE is proved for first and second order ODEs by directly comparing the symmetry determining equations of the IFDE with those of (2.1). Finally, in Section 3.4 the result is proved for a general ODE using first integrals as an intermediary between (2.1) and its corresponding IFDE.  3.2  First-Order ODEs  Consider the first-order ODE y = f (x, y).  (3.1)  The following theorem shows that every symmetry of (3.1) must induce a symmetry of the corresponding IFDE and, conversely, every symmetry of the IFDE must project onto a symmetry of (3.1).  19  3.2. First-Order ODEs Theorem 3.1. Every symmetry of the integrating factor determining equations corresponding to (3.1) of the form X = ξ(x, Y )  ∂ ∂ ∂ + η(x, Y ) + Λh(x, Y ) ∂x ∂Y ∂Λ  projects onto a point symmetry of (3.1). Moreover, every point symmetry of (3.1) induces a point symmetry of its IFDE. Proof. It follows from the direct method that the IFDE of (3.1) is Λx (x, Y ) + f (x, Y )ΛY (x, Y ) + Λ(x, Y )fY (x, Y ) = 0.  (3.2)  Consider the one-parameter Lie group of transformations of (x, Y, Λ) with infinitesimal generator X = ξ(x, Y )  ∂ ∂ ∂ + η(x, Y ) + Λh(x, Y ) . ∂x ∂Y ∂Λ  (3.3)  The transformation (3.3) is a symmetry of (3.2) if and only if [4] ξx f − ηx + ξfx + f 2 ξY + ηfY − f ηY = 0,  (3.4a)  f hY + ξfxY + f ξY fY + ηfY Y + hx + ξx fY = 0.  (3.4b)  Consider the projection of X onto a vector field over (x, Y ), Xp = ξ(x, Y )  ∂ ∂ + η(x, Y ) . ∂x ∂Y  (3.5)  This corresponds to a symmetry of (3.1) if and only if [3] −ξfx − ηfY + ηx + ηY f − f ξx − ξY f 2 = 0.  (3.6)  Comparing (3.6) with (3.4a) it follows that if X has coefficients satisfying (3.4) then Xp has coefficients satisfying (3.6). To prove the converse it is necessary to show that if Xp satisfies (3.6) then there exists an h(x, Y ) such that X satisfies (3.4). Since (3.4a) is 20  3.3. Second-Order ODEs identical to (3.6) it is automatically satisfied. The remaining equation (3.4b) is equivalent to f hY + hx = −(ξfxY + ξx fY + ηfY Y + f ξY fY ),  (3.7)  where f, ξ, η are known. By differentiating (3.4a) with respect to Y and using (3.7), one finds that f ηY Y + ηxY = (ξfx )Y + ηfY Y + (f ξx )Y + (ξY f 2 )Y = ξfxY + ξx fY + ηfY Y + ξY f fY + ξY fx + ξxY f + fY f ξY + ξY Y f 2 = ξfxY + ξx fY + ηfY Y + ξY f fY + (ξY f )x + f (ξY f )Y = −(hx + f hY ) + (ξY f )x + f (ξY f )Y . (3.8) Finally, it follows from (3.8) that if Xp is a symmetry of (3.1) then there exists an h such that X is a symmetry of (3.4), where h = −(ηY − f ξY ). Thus every point symmetry of (3.1) induces a symmetry of (3.2), and the proof is complete.  3.3  Second-Order ODEs  The analysis of the second-order ODE y − f (x, y, y ) = 0,  (3.9)  follows a method similar to that used for first-order ODEs. One can systematically find the symmetry determining equations of (3.9) and its corresponding IFDE. By direct comparison one can show that all contact symmetries of (3.9) induce point symmetries of the IFDE and that all point symmetries of the IFDE project onto contact symmetries of (3.9). These results are presented in Section 3.3.1. 21  3.3. 
Second-Order ODEs For ODEs which are second-order and higher there is another approach; instead of using the original ODE one generates an equivalent first-order ODE system by treating each derivative of y as an independent variable and augmenting contact conditions through the replacement of y by y, . . . , yn−1 . The equivalent first-order ODE system of the scalar ODE (2.1) is yi = yi+1 ,  i = 0, . . . , n − 2,  (3.10)  yn−1 = f (x, y, . . . , yn−1 ). where y0 = y. The IFDE of (3.10) can then be found and the symmetries analyzed. The results obtained by this method are shown in Section 3.3.2 and are compared with those of the scalar ODE approach in Section 3.3.3.  3.3.1  Scalar Integrating Factor Equations  Before introducing the main theorem of this section, the following two results are needed. Lemma 3.2. A one-parameter Lie group of transformations is a contact symmetry of (3.9) if and only if its corresponding infinitesimal generator X = ξ(x, Y, Y1 )  ∂ ∂ ∂ + η(x, Y, Y1 ) + η (1) (x, Y, Y1 ) ∂x ∂Y ∂Y1  (3.11)  satisfies the following conditions: 1. η (1) = Dη − Y1 Dξ  (3.12)  2. Let η (2) = Dη (1) − Y2 Dξ. Then η (2) − ξfx − ηfY − η (1) fY1  Y2 =f (x,Y,Y1 )  =0  (3.13)  3. ηY1 = Y1 ξY1 1 .  (3.14)  1  More generally, if ξ = ξ(x, Y, Y1 , . . . , Yk ) then invertibility of the corresponding trans(k−1) formation on (x, Y, Y1 , . . . , Yk ) requires that ηYk = Yk ξYk .  22  3.3. Second-Order ODEs Proof. The proof follows immediately from classical symmetry results. Together, conditions 1 and 2 are necessary and sufficient for (3.11) to be a local symmetry of (3.9): condition 1 is the contact condition required to preserve the geometric meaning of derivatives while condition 2 is the infinitesimal criterion [3] for (3.9) to be invariant under (3.11). Condition 3 is required to ensure that (3.11) is a contact symmetry and thus invertible on (x, Y, Y1 ). (1)  This is true if and only if ηY2 ≡ 0. A straightforward calculation shows that this is equivalent to ηY1 − Y1 ξY1 ≡ 0.  Lemma 3.3. The infinitesimal generator XΛ = ξ(x, Y, Y1 )  ∂ ∂ ∂ ∂ + η(x, Y, Y1 ) + η 1 (x, Y, Y1 ) + Λh(x, Y, Y1 ) ∂x ∂Y ∂Y1 ∂Λ (3.15)  is a point symmetry of the IFDE of (3.9) if and only if ξ, η, η 1 , and h satisfy the system ηY1 = Y1 ξY1 ,  (3.16a)  ξx = −2ξY Y1 − ηY11 + ηY ,  (3.16b)  ηx = −Y12 ξY − ηY11 Y1 + η 1 , 2  fx ξY1 = −f ηY1 + +  Y12 ξY f  ηx1  +  Y12 ηY1  (3.16c) +  2f ηY11 Y1 +  (3.16d)  1  − η fY1 Y1 − ηfY Y1 − ηY Y1 f,  1 hx ξY12 = −f 2 ηY21 + Y12 ηY1 ηY1 − ξY12 ηxY + ηx1 Y1 ηY1 + 1  (3.16e)  + Y12 ξY f ηY1 − η 1 fY1 Y1 ηY1 − ηfY Y1 ηY1 − ηY Y1 f ηY1 + + 2f ηY11 Y1 ηY1 − ξY12 f ηY11 Y1 − 2ξY12 ξY f − ξY12 ηY Y1 f, hY Y1 = −Y1 ηY1 Y1 + fY ηY1 + f ηY Y1 , hY1 Y12 = f ηY1 Y1 Y1 − Y12 ηY11 Y1 − f ηY1 + Y1 ηY1 fY1 .  (3.16f) (3.16g)  Note that no relationship between ξ, η, and η 1 is assumed when formulating (3.16). Proof. The IFDE of (3.9), generated systematically using the direct method, 23  3.3. Second-Order ODEs are ΛY Y1 Y1 + 2ΛY + ΛxY1 + (Λf )Y1 Y1 = 0, − (f Λ)Y + (Λf )xY1 + (Λf )Y Y1 Y1 + Λxx +  (3.17a) ΛY Y Y12  + 2Y1 ΛxY = 0. (3.17b)  Note that the IFDE (3.17) is now an over-determined system. When considering such a system the given equations do not necessarily represent the simplest differential relations describing solutions of (3.17). Using methods of differential algebra implemented in MapleTM it is possible to write (3.17) in a form which eliminates this issue. 
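The same split can also be checked symbolically; the following is a minimal sketch in Python with SymPy (an illustrative cross-check only, not the Maple/GeM computation used here, and the symbol names are assumptions for the sketch). It applies the Euler operator of Definition 2.3 to Λ(x, Y, Y1)[Y2 − f(x, Y, Y1)] and collects powers of Y2, recovering (3.17a) as the coefficient of Y2 and (3.17b) as the Y2-free remainder.

```python
import sympy as sp

# x, Y, Y1 are the independent variables of the IFDE; Y2, Y3 arise from total differentiation
x, Y, Y1, Y2, Y3 = sp.symbols('x Y Y1 Y2 Y3')
Lam = sp.Function('Lambda')(x, Y, Y1)   # integrating factor Lambda(x, Y, Y1)
f = sp.Function('f')(x, Y, Y1)          # right-hand side of y'' = f(x, y, y')

def D(g):
    # formal total derivative D = d/dx + Y1 d/dY + Y2 d/dY1 + Y3 d/dY2
    return sp.diff(g, x) + Y1*sp.diff(g, Y) + Y2*sp.diff(g, Y1) + Y3*sp.diff(g, Y2)

G = Lam*(Y2 - f)

# Euler operator truncated after the d/dY2 term, since G involves no higher Yi:
# E[G] = G_Y - D(G_Y1) + D(D(G_Y2))
EG = sp.expand(sp.diff(G, Y) - D(sp.diff(G, Y1)) + D(D(sp.diff(G, Y2))))

print(sp.simplify(EG.coeff(Y3)))      # 0: the Y3 terms cancel
print(sp.simplify(EG.coeff(Y2, 2)))   # 0: the Y2^2 terms cancel
print(sp.expand(EG.coeff(Y2, 1)))     # left-hand side of (3.17a)
print(sp.expand(EG.coeff(Y2, 0)))     # left-hand side of (3.17b)
```

Splitting the single condition E[Λ(Y2 − f)] ≡ 0 with respect to the arbitrary function Y2 is precisely what produces the over-determined pair (3.17a)-(3.17b).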
Making use of the GeM [12] software package for Maple, one can then obtain the symmetry determining equations for point symmetries (3.15) of (3.17), given by (3.16).  One then has the following theorem. Theorem 3.4. Every point symmetry of (3.17) which is of the form X = ξ(x, Y, Y1 )  ∂ ∂ ∂ ∂ + η(x, Y, Y1 ) + η 1 (x, Y, Y1 ) + Λh(x, Y, Y1 ) ∂x ∂Y ∂Y1 ∂Λ (3.18)  projects onto a contact symmetry of (3.9). Conversely, every contact symmetry of (3.9) induces a point symmetry of (3.17). Proof. First, suppose that X = ξ(x, Y, Y1 )  ∂ ∂ ∂ ∂ + η(x, Y, Y1 ) + η 1 (x, Y, Y1 ) + Λh(x, Y, Y1 ) ∂x ∂Y ∂Y1 ∂Λ (3.19)  is a symmetry of the IFDE (3.17) where no relation is assumed to exist between η and η 1 . From Lemma 3.3 it follows that ξ, η, η 1 , and h satisfy (3.16).  24  3.3. Second-Order ODEs Solving for ηY11 in (3.16b) and substituting it into (3.16c), one finds that η 1 = ηx + Y1 ηY − Y1 ξx − Y12 ξY .  (3.20)  Together (3.16a) and (3.20) yield η 1 = Dη − Y1 Dξ.  (3.21)  The substitution of (3.16a) and (3.16b) into (3.16d) gives ηx1 + ηY1 Y1 + ηY11 f − f (ξx + Y1 ξY + f ξY1 ) = fx ξ + fY η + fY1 η 1  (3.22)  which is equivalent to Dη 1 − Y2 Dξ − (fx ξ + fY η + fY1 η 1 )  Y2 =f (x,Y,Y1 )  = 0.  (3.23)  It follows from (3.16a), (3.23), (3.21) and Lemma 3.2 that the infinitesimal generator Xp = ξ(x, Y, Y1 )  ∂ ∂ ∂ + η(x, Y, Y1 ) + η 1 (x, Y, Y1 ) ∂x ∂Y ∂Y1  obtained by projecting X onto a transformation of (x, Y, Y1 ) is a contact symmetry of (3.9). To prove the converse one must show that given a contact symmetry of (1)  (3.9) with first prolongation Xp there exists some function h(x, Y, Y1 ) such that X = X(1) p + Λh(x, Y, Y1 )  ∂ ∂Λ  (3.24)  is a symmetry of (3.17). From the above calculations, given a symmetry Xp of (3.9), ξ, η, and η1  clearly satisfy(3.16a)-(3.16d). One must then show that there exists a  function h(x, Y, Y1 ) such that ξ, η, η 1 and h satisfy the remaining equations  25  3.3. Second-Order ODEs (3.16e)-(3.16g). Integrating (3.16f) and (3.16g) one obtains h(x, Y, Y1 ) = −ηY11 + (f ξY1 ) + p(x, Y1 ), h(x, Y, Y1 ) = −ηY11 + (f ξY1 ) + q(x, Y ),  (3.25)  respectively, which implies that h(x, Y, Y1 ) = −ηY11 + (f ξY1 ) + g(x),  (3.26)  where g(x) is an arbitrary function of x. After substituting (3.16a)-(3.16d) into (3.16e) one finds that 1 hx = −ηxY + fx ξY1 + f ξxY1 . 1  (3.27)  The substitution of (3.26) into (3.27) implies that g (x) = 0 and thus that h = −ηY11 + f ξY1 + C,  (3.28)  the infinitesimal generator X given by (3.24) yields a point symmetry of the IFDE (3.17).  3.3.2  System Integrating Factor Equations  Consider the first-order ODE system y = y1 ,  (3.29)  y1 = f (x, y, y1 ) which is equivalent to (3.9). Letting ˜ = ∂ + Y1 ∂ + f (x, Y, Y1 ) ∂ D ∂x ∂Y ∂Y1  (3.30)  denote the total derivative operator D restricted to solutions of (3.9), the following result can be proved. Lemma 3.5. A Lie group of point transformations acting on (x, Y, Y1 ) with 26  3.3. Second-Order ODEs infinitesimal generator X = ξ(x, Y, Y1 )  ∂ ∂ ∂ + η(x, Y, Y1 ) + η1 (x, Y, Y1 ) ∂x ∂Y ∂Y1  (3.31)  is a point symmetry of the first-order ODE system (3.29) if and only if ξ, η, and η 1 satisfy the following conditions: ˜ − Y1 Dξ ˜ = η1 1. Dη ˜ 1 − f Dξ ˜ = fx ξ + fY η + fY η1 . 2. Dη 1 ˜ = where D  ∂ ∂x  ∂ + Y1 ∂Y + f (x, Y, Y1 ) ∂Y∂ 1 is the total derivative operator re-  stricted to solutions of (3.29). Proof. Denote the first prolongation of (3.31) by X(1) = X + η (1)  ∂ (1) ∂ + η1 1 ∂Y ∂Y11  (3.32)  where Y 1 , Y11 are arbitrary functions corresponding to y and y1 , respectively. 
Using the prolongation formula [4], one finds that η (1) = ηx + Y 1 ηY + Y11 ηY1 − Y 1 (ξx + Y 1 ξY + Y11 ξY1 ), (1)  η1 = ηx1 + Y 1 ηY1 + Y11 ηY11 − Y11 (ξx + Y 1 ξY + Y11 ξY1 ).  (3.33)  The infinitesimal criteria for (3.31) to be a point symmetry of (3.29) are η (1) − η1  Y1 =Y 1 , Y11 =f (x,Y,Y1 )  =0  (1)  η1 − fx ξ − fY η − fY1 η1  Y1 =Y 1 , Y11 =f (x,Y,Y1 )  (3.34) = 0.  Substituting (3.33) into (3.34) and replacing (Y 1 , Y11 ) with (Y1 , f (x, Y, Y1 )) one obtains ˜ − Y1 Dξ, ˜ η1 = Dη ˜ 1 − f Dξ ˜ = fx ξ + fY η + fY η1 Dη 1  (3.35)  as required. 27  3.3. Second-Order ODEs  In Lemma 3.5 notice that the closure condition (3.14) is absent. This is explained by the following result. Lemma 3.6. Every higher-order symmetry of (3.9) corresponds to a point symmetry of (3.29). Proof. Every higher-order symmetry of (3.9) in evolutionary form is equivalent to a symmetry with an infinitesimal generator of the form ˆ = η˜(x, Y, Y1 ) ∂ . X ∂Y  (3.36)  To illustrate this, suppose that η = η(x, Y, Y1 , . . .). On solutions of (3.9), ˜ . . .) η(x, Y, Y1 , . . . , Yk ) = η(x, Y, Y1 , f, Df, which is a function only of x, Y, Y1 . Thus, there exists a function η˜ such that the action of X = η(x, Y, Y1 , . . .)  ∂ ∂Y  on solutions of (3.9) is equivalent to that of (3.36). Each symmetry (3.36) then corresponds to a point symmetry of (3.29) given by X = η˜(x, Y, Y1 )  ∂ ˜η ∂ . + D˜ ∂Y ∂Y1  For ODE systems such as (3.29) a modification of the direct method outlined in Section 2.3.1 is required. For a system, the single integrating factor Λ is replaced by a set of multipliers Λi . In particular, Λ1 (x, Y, Y1 )[Y 1 − Y1 ] + Λ2 (x, Y, Y1 )[Y11 − f (x, Y, Y1 )] = Dφ(x, Y, Y1 ) (3.37)  28  3.3. Second-Order ODEs is used in place of (2.1), where D=  ∂ ∂ ∂ +Y1 + Y11 + ... ∂x ∂Y ∂Y1  Defining EY =  ∂ ∂ −D 1 ∂Y ∂Y  and  EY1 =  ∂ ∂ − D 1, ∂Y1 ∂Y1  (3.38)  it can be shown that Λ1 , Λ2 are multipliers of (3.29) if and only if [4] EY {Λ1 (x, Y, Y1 )[Y 1 − Y1 ] + Λ2 (x, Y, Y1 )[Y11 − f (x, Y, Y1 )]} ≡ 0, EY1 {Λ1 (x, Y, Y1 )[Y 1 − Y1 ] + Λ2 (x, Y, Y1 )[Y11 − f (x, Y, Y1 )]} ≡ 0.  (3.39)  Expanding (3.39) one obtains (Λ1 )Y [Y 1 − Y1 ] + (Λ2 )Y [Y11 − f (x, Y, Y1 )] − DΛ1 − fY Λ2 = 0, (Λ1 )Y1 [Y 1 − Y1 ] + (Λ2 )Y1 [Y11 − f (x, Y, Y1 )] − Λ1 − DΛ2 − fY1 Λ2 = 0. (3.40) Since Λ1 and Λ2 are independent of Y 1 and Y11 , (3.40) splits to give (Λ2 )Y − (Λ1 )Y1 = 0, − Y1 (Λ1 )Y − f (Λ2 )Y − (Λ1 )x − fY Λ2 = 0,  (3.41)  − Y1 (Λ1 )Y1 − f (Λ2 )Y1 − Λ1 − (Λ2 )x − fY1 Λ2 = 0, which are the multiplier determining equations of (3.29). Lemma 3.7. The infinitesimal generator ∂ ∂ ∂ + η(x, Y, Y1 ) + η1 + [Λ1 h1 (x, Y, Y1 )+ ∂x ∂Y ∂Y1 ∂ ∂ + Λ2 g1 (x, Y, Y1 )] + [Λ1 h2 (x, Y, Y1 ) + Λ2 g2 (x, Y, Y1 )] ∂Λ1 ∂Λ2  XΛ =ξ(x, Y, Y1 )  (3.42)  is a point symmetry of the multiplier determining equations of (3.29) if and  29  3.3. Second-Order ODEs only if ξ, η, η1 , h1 , h2 , g1 and g2 satisfy the system ηx = ξx Y1 − f ηY1 + Y1 ξY1 f + (Y1 )2 ξY − Y1 ηY + η1 ,  (3.43a) 2  ξfx = Y1 (η1 )Y − Y1 ξY f + f (η1 )Y1 − η1 fY1 − ξx f − f ξY1 +  (3.43b)  + (η1 )x − ηfY , h2 = −ηY1 + ξY1 Y1 ,  (3.43c)  g2 = h1 + ηY − (η1 )Y1 + ξY1 f − ξY Y1 ,  (3.43d)  g1 = −(η1 )Y + ξY f,  (3.43e)  (h1 )Y1 = −ηY Y1 + ξY Y1 Y1 + ξY ,  (3.43f)  (h1 )Y = −ηY Y + ξY Y Y1 ,  (3.43g) 2  (h1 )x = −(η1 )Y − Y1 ξY1 fY + f ηY Y1 − Y1 ξY Y1 f − (Y1 ) ξY Y +  (3.43h)  + Y1 ηY Y + fY ηY1 . Proof. 
The multiplier determining equations of (3.29), generated systematically using the direct method are (Λ2 )Y − (Λ1 )Y1 = 0,  (3.44a)  (Λ1 )Y Y1 + (Λ1 )x + (f Λ2 )Y = 0,  (3.44b)  (Λ1 )Y1 Y1 + Λ1 + (Λ2 )x + (f Λ2 )Y1 = 0.  (3.44c)  The GeM [12] software package for MapleTM was used to find the symmetry determining equations for point symmetries (3.42) of the system multiplier determining equations (3.44), producing (3.43).  The following theorem connects the results of Lemmas 3.5 and 3.7.  30  3.3. Second-Order ODEs Theorem 3.8. Every point symmetry of (3.44) which is of the form ∂ ∂ ∂ + η(x, Y, Y1 ) + η 1 (x, Y, Y1 ) + [Λ1 h1 (x, Y, Y1 )+ ∂x ∂Y ∂Y1 ∂ ∂ + Λ2 g1 (x, Y, Y1 )] + [Λ1 h2 (x, Y, Y1 ) + Λ2 g2 (x, Y, Y1 )] ∂Λ1 ∂Λ2 (3.45)  X =ξ(x, Y, Y1 )  projects onto a point symmetry of (3.29). Conversely, every point symmetry of (3.29) induces a point symmetry of (3.44). Proof. Let X be a point symmetry of (3.44) which is of the form (3.45). From Lemma 3.7 it follows that ξ, η, η1 , h1 , h2 , g1 , and g2 satisfy (3.43). Solving for η1 in (3.43a), one obtains η1 = ηx + ηY Y1 + ηY1 f − Y1 (ξx + Y1 ξY + f ξY1 )  (3.46)  which can be written as ˜ − Dξ. ˜ η1 = Dη  (3.47)  Rearranging (3.43b), one obtains (η1 )x + (η1 )Y Y1 + (η1 )Y1 f − f (ξx + Y1 ξY + f ξY1 ) = ξfx + ηfY + η1 fY1 (3.48) which is equivalent to ˜ − Dξ ˜ = ξfx + ηfY + η1 fY . Dη 1  (3.49)  It follows immediately from (3.47), (3.49) and Lemma 3.5 that the infinitesimal generator Xp , found by projecting X, given by (3.45) onto a transformation of (x, Y, Y1 ), is a point symmetry of (3.29). To establish the converse result, let the infinitesimal generator Xp = ξ(x, Y, Y1 )  ∂ ∂ ∂ + η(x, Y, Y1 ) + η1 (x, Y, Y1 ) ∂x ∂Y ∂Y1  (3.50) 31  3.3. Second-Order ODEs be a point symmetry of (3.29). Then it must be shown that there exist functions h1 , h2 , g1 , g2 such that X = Xp + [h1 Λ1 + g1 Λ2 ]  ∂ ∂ + [h2 Λ1 + g2 Λ2 ] ∂Λ1 ∂Λ2  (3.51)  is a point symmetry of (3.44). Clearly, if (3.50) is a point symmetry of (3.29) then ξ, η, η1 satisfy (3.43a) and (3.43b). The substitution of (3.43a) into (3.43h), after some simplification, yields (h1 )x = −ηxY + Y1 ξxY .  (3.52)  Integrating (3.43f), (3.43g) and (3.52) one obtains h1 = −ηY + ξY Y1 + C.  (3.53)  From (3.43c)-(3.43e) and (3.53), the functions h2 , g1 and g2 can be obtained, giving h1 = −ηY + ξY Y1 + C, h2 = −ηY1 + ξY1 Y1 , g1 = −(η1 )Y + ξY f,  (3.54)  g2 = (−η1 )Y + ξY1 f + C. If h1 , h2 , g1 and g2 are of the form (3.54) then it follows that X, given by (3.50) is a point symmetry of (3.44) and the proof is complete.  3.3.3  Comparison of the Two Approaches  In order to compare the results of Section 3.3.1 and Section 3.3.2, it is useful to establish a connection between solutions of the IFDE (3.44) for the ODE system (3.29) and solutions of the IFDE (3.17) for the corresponding scalar ODE (3.9). This is given by the following theorem.  32  3.3. Second-Order ODEs Theorem 3.9. Every solution of the IFDE (3.44) for the ODE system (3.29) yields a unique solution of the IFDE (3.17) for the corresponding scalar ODE (3.9). Conversely, every solution of the IFDE (3.17) for the scalar ODE (3.9) yields a unique solution of the IFDE (3.44) for the ODE system (3.29). Proof. Suppose (Λ1 , Λ2 ) solves the IFDE (3.44). Solving (3.44) for Λ1 , one obtains Λ1 = − [(Λ2 )x + (Λ2 )Y Y1 + (Λ2 f )Y1 ] .  (3.55)  The substitution of (3.55) into (3.44b) yields −(f Λ2 )Y + (f Λ2 )xY1 + (f Λ2 )Y Y1 Y1 + (Λ2 )xx + (Λ2 )Y Y (Y1 )2 + 2Y1 (Λ2 )xY = 0. 
(3.56) Now differentiating (3.55) with respect to Y1 and using (3.44a) to eliminate (Λ1 )Y1 one finds that (Λ2 )Y Y1 Y1 + 2(Λ2 )Y + (Λ2 )xY1 + (Λ2 f )Y1 Y1 = 0.  (3.57)  Together (3.55) and (3.57) are equivalent to (3.17) and thus Λ = Λ2 is a solution of the IFDE (3.17). Conversely, suppose Λ is a solution of the IFDE (3.17). Let µ = −[Λx + ΛY Y1 + (Λf )Y1 ].  (3.58)  Differentiating (3.58) with respect to Y1 , one obtains µY1 = −ΛxY1 − ΛY − Y1 ΛY Y1 − (Λf )Y1 Y1 .  (3.59)  Subtracting ΛY from both sides of (3.17a) and substituting the resulting expression into (3.59) one finds that µY1 = ΛY .  (3.60)  33  3.3. Second-Order ODEs From (3.58) and (3.60) it follows immediately that µ + Λx + µY1 Y1 + (Λf )Y1 = 0.  (3.61)  Finally, (3.17b) may be re-written as [Λx + Y1 ΛY + (Λf )Y1 ]x + Y1 [Λx + ΛY Y1 + (Λf )Y1 ]Y − (f Λ)Y = 0. (3.62) The substitution of (3.58) into (3.62) yields µY Y1 + µx + (Λf )Y = 0.  (3.63)  Together (3.63), (3.61) and (3.60) imply that (µ, Λ) is a solution (3.44) where (Λ1 , Λ2 ) = (µ, Λ). By construction, µ is clearly unique, completing the proof.  Theorem 3.9 implies that every local symmetry of the second-order scalar IFDE (3.17) corresponds to a local symmetry of the corresponding first-order system IFDE (3.44). In particular, if X=ξ  ∂ ∂ ∂ ∂ ∂ +η + η1 + [Λ1 h1 + Λ2 h1 ] + [h2 Λ1 + g2 Λ2 ] (3.64) ∂x ∂Y ∂Y1 ∂Λ1 ∂Λ2  is a point symmetry of (3.44) then X =ξ  ∂ ∂ ∂ ∂ +η + η1 + [h2 (Λx + ΛY Y1 + (Λf )Y1 ) + g2 Λ] ∂x ∂Y ∂Y1 ∂Λ  (3.65)  is a symmetry of (3.17) obtained by substituting (3.58) into (3.64). The symmetry (3.65) is a point symmetry of (3.17) only if the coefficient of  ∂ ∂Λ  is  independent of the derivatives of Λ which implies that h2 ≡ 0. This is true if and only if ηY1 = Y1 ξY1 in which case (3.65) is identical to the form found in Theorem 3.4. Thus, the scalar method is a special case of the system method obtained by requiring that h2 ≡ 0.  34  3.4. General ODEs  3.4 3.4.1  General ODEs Overview  To generalize the results of the previous sections to ODEs of arbitrary order a new method is necessary due to the difficulty associated with analyzing the corresponding IFDE directly. Instead of the direct approach employed in previous sections, one considers transformations which leave invariant the set of first integrals of (2.1) (first integral symmetries); using them as an intermediary between (2.1) and its corresponding IFDE. In Section 3.4.2 necessary and sufficient conditions are formulated for a function to be a first integral of (2.1), yielding a first-order PDE called the first integral determining equation. The connection between the point symmetries of the first integral determining equation and the higher-order symmetries of (2.1) is proved in Section 3.4.3. In Section 3.4.4, the multiplier determining equations of the first-order ODE system (3.10) are derived from the first integral determining equation; and in Section 3.4.5 the connection between the point symmetries of the multiplier determining equations which are of the form ∂ ∂ ξ(x, Y ) + η(x, Y ) + ∂x ∂Y  n−1  ∂ η (x, Y ) + ∂Yi  n−1  αij (x, Y )µi  i  i=1  i,j=0  ∂ ∂µj  (3.66)  and the point symmetries of the first integral determining equation of (2.1) is established. Finally, in Section 3.4.6 the symmetries of the multiplier determining equations are connected with those of the IFDE for the scalar ODE (2.1).  3.4.2  First Integral Determining Equation  By definition, a first integral of (2.1) is a function φ(x, Y, Y1 , . . . , Yn−1 ) such that Dφ(x, Y, Y1 , . . . 
, Yn−1 )|Yn =f (x,Y,Y1 ,...,Yn−1 ) = 0,  (3.67)  35  3.4. General ODEs where D=  ∂ ∂ ∂ ∂ + . . . + Yn + ... + Y1 + Y2 ∂x ∂Y ∂Y1 ∂Yn−1  (3.68)  The restriction of the total derivative operator to solutions of (2.1) yields the new operator ˜ = ∂ + Y1 ∂ + Y2 ∂ + . . . + Yn−1 ∂ + f (x, Y, Y1 , . . . , Yn−1 ) ∂ , D ∂x ∂Y ∂Y1 ∂Yn−2 ∂Yn−1 (3.69) which enables one to re-write (3.67) as ˜ = 0. Dφ  (3.70)  Associated with (3.70) are the characteristic equations [3] dx dY dY1 dYn−2 dYn−1 dφ = = = ... = = = . 1 Y1 Y2 Yn−1 f 0  (3.71)  Example 3.10. Consider the equation y = 0.  (3.72)  From the general form of the first integral determining equation given in (3.70), the first integral determining equation of the ODE (3.72) is φx + Y1 φY + 0 · φY1 = 0.  (3.73)  Note that in this case one can easily solve (3.73) using the method of characteristics to obtain φ(x, Y, Y1 ) = g(Y1 , xY1 − Y )  (3.74)  where g is an arbitrary function of its arguments. Let s(x, Y, Y1 ) = Y1 and z(x, Y, Y1 ) = x − Y Y1 . Differentiating φ(x, Y, Y1 ), it follows that Dφ(x, Y, Y1 ) =  ∂g(s, z) ∂g(s, z) (Y2 ) + (xY2 + Y1 − Y1 ) ∂s ∂z  (3.75)  36  3.4. General ODEs and hence φ(x, Y, Y1 ) is a first integral of (3.72) with corresponding integrating factor µ =  ∂g ∂s  + x ∂g ∂z . Moreover, since (3.72) is a second-order ODE and  φ is an arbitrary function of two variables s(x, Y, Y1 ) and z(x, Y, Y1 ) which are functionally-independent, it follows that all first integrals of the ODE (3.73) must satisfy (3.73). Example 3.11. As a second example, consider the Blasius equation 1 y + yy = 0. 2  (3.76)  From the first integral determining equation (3.70) for an nth order ODE in solved form, the first integral determining equation of the ODE (3.76) is 1 φx + Y1 φY + Y2 φY1 − Y Y2 = 0. 2  (3.77)  Note that unlike in the previous example one cannot easily solve this first integral determining equation since doing so would be equivalent to finding the general solution of the Blasius equation.  3.4.3  First Integral Symmetries  Defining Y = (Y, Y1 , . . . , Yn−1 ), one can prove the following theorem. Theorem 3.12. Consider a one-parameter ( ) Lie group of point transformations x∗ = x∗ (x, Y ; ), Y ∗ = Y ∗ (x, Y ; ),  (3.78)  φ∗ = φ∗ (x, Y , φ; ). The point transformation (3.78) is a point symmetry of (3.70) if and only if it projects onto a higher-order symmetry of (2.1). Furthermore, each higherorder symmetry of (2.1) induces a point symmetry of (3.70). Proof. A transformation (3.78) when extended to act on the space of first partial derivatives of φ is a symmetry of (3.70) if and only if it leaves invariant 37  3.4. General ODEs the characteristic equations (3.71). In particular, this implies that ∗ dY ∗ dYn−1 dx∗ dY ∗ dY ∗ = ∗ = ∗1 = . . . = ∗n−2 = , 1 Y1 Y2 Yn−1 f (x∗ , Y ∗ )  (3.79)  which are necessary and sufficient conditions for the projection of (3.78) onto a transformation of (x, Y, Y1 , . . . , Yn−1 )-space to be a symmetry of (2.1). To prove the second part of the theorem, let Xp = ξ(x, Y )  ∂ ∂ ∂ + η(x, Y ) + η i (x, Y ) ∂x ∂Y ∂Yi  (3.80)  be the infinitesimal generator of a local symmetry of (2.1), assuming summation over repeated indices. One must show that there exists a function β(x, Y , φ) such that X = ξ(x, Y )  ∂ ∂ ∂ ∂ + η(x, Y ) + η i (x, Y ) + β(x, Y , φ) ∂x ∂Y ∂Yi ∂φ  (3.81)  is a point symmetry of (3.70). Let Y−1 = x, Y0 = Y and Y = (Y−1 , Y0 , . . . , Yn−1 ). In addition, let Di =  ∂ ∂ + φ Yi + ... ∂Yi ∂φ  (3.82)  be the total derivative operator with respect to Yi , i = −1, . . . , n − 1. 
Extending the generator (3.81) via the contact conditions to act on partial derivatives of φ, one obtains the first prolongation of (3.81) given by (1)  X(1) = X + βi (x, Y , φ, φx , φY , φY1 , . . . , φYn−1 )  ∂ ∂φYi  (3.83)  where (1)  βi  = Di β − Di (η j )φYj .  (3.84)  38  3.4. General ODEs The infinitesimal criterion for (3.81) to be a point symmetry of (3.70) is n−2  n−2  n−1  η i+1 φYi + −1  0  (1)  fYi η i φYn−1 +  Yi+1 βi  (1)  (1)  + f βn−1 + β−1 = 0.  (3.85)  0  Defining ˜ Y = (1, Y1 , . . . , Yn−2 , f (x, Y )), (w−1 , . . . , wn−1 ) = w = D  (3.86)  one can re-write (3.85) as (1)  η j Dj (wi )φYi + wi βi  = 0.  (3.87)  The substitution of (3.84) into (3.87) yields [η j Dj (wi ) − wj Dj (η i )]φYi + wi Di β = 0. Using the fact that Di β =  ∂β ∂Yi  +  ∂β ∂φ φYi ,  (3.88)  it follows that  [η i Di (wj ) − wi Di (η j )]φYj + wi  ∂β ∂β i + w φYi = 0. ∂Yi ∂φ  (3.89)  The last term of (3.89) vanishes on solutions of (3.70) since ˜ i )φY = φx + Y1 φY + Y2 φY + . . . + Yn−1 φY ˜ wi φYi = D(Y 1 n−2 + f φYn−1 = Dφ. i (3.90) Using (3.70) one can write φx = −  n−1 ˜ j φYj DY 0  =−  n−1 φ Yj w j 0  which,  upon substitution into (3.90), gives n−1   n−1  i=−1    j=0  η i Di (wj ) − wi Di (η j ) + wj wi Di (ξ) φYj + wi   ∂β  ∂Yi   = 0. (3.91)  Since ξ, η, η 1 , . . . , η n−1 , β, and w do not depend on the partial derivatives of φ, each coefficient of φYj must vanish independently of the others allowing  39  3.4. General ODEs one to split (3.91) to obtain ˜ Dβ(x, Y , φ) = 0. i  j  i  (3.92a) j  j  i  η Di (w ) − w Di (η ) + w w Di (ξ) = 0,  j = 0, . . . , n − 1, i = −1 . . . n − 1. (3.92b)  One can re-write (3.92b) as ˜ j ) = D(η ˜ j ) − D(Y ˜ j )D(ξ), ˜ Xp (DY  j = 0, . . . , n − 1  (3.93)  which is true if and only if Xp is a higher-order symmetry of (2.1). Then, choosing β ≡ 0 it is clear that the coefficients of X satisfy (3.92a) and (3.92b) and hence that X is a symmetry of (3.70).  Example 3.13. Consider the equation y = 0.  (3.94)  The transformation of (x, Y, Y1 )-space Xp = ξ(x, Y, Y1 )  ∂ ∂ ∂ + η(x, Y, Y1 ) + η 1 (x, Y, Y1 ) ∂x ∂Y ∂Y1  (3.95)  is a symmetry of (3.94) if and only if ξ, η, and η 1 satisfy the symmetry determining equations  ˜ = where D  ∂ ∂x  ∂ + Y1 ∂Y +0·  ˜ − Y1 Dξ, ˜ η 1 = Dη  (3.96a)  ˜ 1 − 0 · Dξ ˜ =0 Dη  (3.96b)  ∂ ∂Y1  is the total derivative operator restricted to  solutions of the ODE (3.94). From Example 3.10, the corresponding first integral determining equation is φx + Y1 φY + 0 · φY1 = 0.  (3.97)  40  3.4. General ODEs Using MapleTM one can show that the point transformation of (x, Y, Y1 , φ)space with infinitesimal operator X = ξ(x, Y, Y1 )  ∂ ∂ ∂ ∂ + β(x, Y, Y1 , φ) + η(x, Y, Y1 ) + η 1 (x, Y, Y1 ) ∂x ∂Y ∂Y1 ∂φ (3.98)  is a point symmetry of (3.97) if and only if the coefficients ξ, η, η 1 , and β satisfy the symmetry determining equations η 1 = (ηx + Y1 ηY ) − Y1 (ξx + Y1 ξY ),  (3.99a)  ηx1 + Y1 ηY1 = 0,  (3.99b)  βx + Y1 βY = 0.  (3.99c)  Comparing (3.99a) and (3.99b) with (3.96a) and (3.96b), respectively, it follows that every point symmetry (3.78) of the first integral determining equation (3.73) projects onto a higher-order symmetry Xp of the original ODE (3.72). Conversely, each higher-order symmetry Xp of the ODE (3.94) induces a point symmetry X of the first integral determining equations where β can be chosen to be identically zero. Example 3.14. As a second example, consider the Blasius equation 1 y + yy = 0. 
2  (3.100)  The transformation of (x, Y, Y1 , Y2 )-space ∂ ∂ ∂ + η(x, Y, Y1 , Y2 ) + η 1 (x, Y, Y1 , Y2 ) + ∂x ∂Y ∂Y1 (3.101) ∂ +η 2 (x, Y, Y1 , Y2 ) ∂Y2  Xp = ξ(x, Y, Y1 , Y2 )  is a higher-order symmetry of (3.100) if and only if ξ, η, η 1 , and η 2 satisfy  41  3.4. General ODEs the symmetry determining equations ˜ ˜ − Y1 Dξ, η 1 = Dη  (3.102a)  ˜ ˜ − Y2 Dξ, η = Dη ˜ 2 + 1 Y Y2 · Dξ ˜ + 1 [Y2 η + Y η 2 ] = 0, Dη 2 2  (3.102b)  2  ˜ = where D  ∂ ∂x  2  (3.102c)  ∂ + Y1 ∂Y + Y2 ∂Y∂ 1 − 21 Y Y2 ∂Y∂ 2 is the total derivative operator  restricted to solutions of (3.100). From Example 3.11 the corresponding first integral determining equation is 1 φx + Y1 φY + Y2 φY1 − Y Y2 φY2 = 0. 2  (3.103)  Using the GeM software package, one finds that the point transformation of (x, Y, Y1 , Y2 , φ)-space with infinitesimal operator ∂ ∂ ∂ + η(x, Y, Y1 , Y2 ) + η 1 (x, Y, Y1 , Y2 ) + ∂x ∂Y ∂Y1 (3.104) ∂ ∂ 2 + η (x, Y, Y1 , Y2 ) + β(x, Y, Y1 , Y2 , φ) ∂Y2 ∂φ  X =ξ(x, Y, Y1 , Y2 )  is a symmetry of (3.103) if and only if the coefficients ξ, η, η 1 , η 2 , and β satisfy the determining equations 1 η 1 = (ηx + Y1 ηY + Y2 ηY1 − Y Y2 ηY2 )− 2  (3.105a)  1 − Y1 (ξx + Y1 ξY + Y2 ξY1 − Y Y2 ξY2 ), 2 1 η 2 = (ηx1 + Y1 ηY1 + Y2 ηY11 − Y Y2 ηY12 )− (3.105b) 2 1 − Y2 (ξx + Y1 ξY + Y2 ξY1 − Y Y2 ξY2 ), 2 1 0 = (ηx2 + Y1 ηY2 + Y2 ηY21 − Y Y2 ηY22 )− (3.105c) 2 1 1 1 − Y2 (ξx + Y1 ξY + Y2 ξY1 − Y Y2 ξY2 ) + Y2 η + Y η 2 , 2 2 2 1 βx + Y1 βY + Y2 βY1 − Y Y2 βY2 = 0. (3.105d) 2  42  3.4. General ODEs Comparing (3.105a)-(3.105c) with (3.102a)-(3.102c), respectively, it follows that every symmetry X of the first integral determining equation (3.103) projects onto a symmetry Xp of the original ODE (3.100). Conversely, each higher-order symmetry Xp of the ODE (3.103) induces a point symmetry X of the first integral determining equations where β can be chosen to be identically zero.  3.4.4  Multiplier Determining Equations  To connect the point symmetries of (3.70) with the point symmetries of the multiplier determining equations of the ODE system (3.10), one first seeks a mapping between solutions of the first integral determining equation and the corresponding system multiplier determining equations. This can be done by direct construction of the system multiplier determining equations from the first integral determining equation. To accomplish this, first let (µ0 , . . . , µn−1 ) = µ = (φY , . . . , φYn−1 ).  (3.106)  Later it will be shown that µ corresponds to a set of multipliers of the ODE system (3.10). In order for the function φ(x, Y ) to exist, it must satisfy the integrability conditions i = 0, . . . , n − 1,  D−1 φYi = Di φx ,  i, j = 0, . . . , n − 1,  Dj φYi = Di φYj ,  (3.107)  where Di is given by (3.82) for i = −1, . . . , n − 1, and Y0 = Y. Furthermore, if φ(x, Y ) is a solution of (3.70) then ˜ = 0, Di Dφ  i = 0, . . . , n − 1.  (3.108)  43  3.4. General ODEs The substitution of (3.106) into (3.107) and (3.108) yields µiYj = µjYi ,  i, j = 0, . . . , n − 1,  ˜ 0 = −µn−1 fY , Dµ  (3.109)  ˜ i = −µi−1 − µn−1 fY , Dµ i  i = 1, . . . , n − 1.  In the following results it is shown that a set of functions (µ0 , . . . , µn−1 ) solves (3.109) if and only if (µ0 , . . . , µn−1 ) is a set of multipliers of the ODE system (3.10). As a consequence, (3.109) are the multiplier determining equations of the ODE system (3.10), also called the system multiplier determining equations. Lemma 3.15. A function φ(x, Y, Y1 , . . . 
, Yn−1 ) corresponds to a first integral of (2.1) if and only if it corresponds to a first integral of (3.10) with corresponding multipliers µ0 =  ∂φ ∂Y ,  µ1 =  ∂φ ∂Y1 ,  . . . , µn−1 =  ∂φ ∂Yn−1 .  Proof. Suppose the function φ(x, Y, Y1 , . . . , Yn−1 ) corresponds to a first integral of (2.1). It follows immediately from (3.69) and (3.70) that − φY Y1 + φY1 Y2 + . . . + φYn−2 Yn−1 + φYn−1 f = φx .  (3.110)  As in Section 3.3.2, let Yj1 be an arbitrary function corresponding to yj , where y and its derivatives have been replaced with the independent functions y, y1 , . . . , yn−1 . The total derivative operator D then becomes D =  ∂ ∂ ∂ ∂ 1 +Y1 + Y11 + . . . + Yn−1 . ∂x ∂Y ∂Y1 ∂Yn−1  The addition of D φ −  ∂φ ∂x  (3.111)  to both sides of (3.110) gives  ∂φ ˜ Y 1 − DY ∂Y  n−1  + 1  ∂φ ˜ i = D φ. Y 1 − DY ∂Yi i  (3.112)  Thus φ(x, Y, Y1 , . . . , Yn−1 ) corresponds to a first integral of (3.10) with associated multipliers given by φY , φY1 , . . ., φYn−1 .  44  3.4. General ODEs Conversely, if φ(x, Y, Y1 , . . . , Yn−1 ) corresponds to a first integral of (3.10) then it must satisfy (3.112). The subtraction of D φ − φx from both sides ˜ = 0. Thus, φ(x, Y, Y1 , . . . , Yn−1 ) is a solution of (3.70) of (3.112) yields Dφ and hence is a first integral of (2.1).  Theorem 3.16. The function µ(x, Y, Y1 , . . . , Yn−1 ) = (µ0 (x, Y, Y1 , . . . , Yn−1 ), . . . , µn−1 (x, Y, Y1 , . . . , Yn−1 )) satisfies the system multiplier determining equations (3.109) if and only if it corresponds to a set of multipliers of the first order ODE system (3.10). Proof. Suppose µ(x, Y, Y1 , . . . , Yn−1 ) is a set of multipliers of the first-order ODE system (3.10) with corresponding first integral φ(x, Y, Y1 , . . . , Yn−1 ). From the proof of Lemma 3.15 it follows that φ(x, Y, Y1 , . . . , Yn−1 ) satisfies (3.70) with µ0 = φY , µ1 = φY1 , . . . , µn−1 = φYn−1 . It is then clear from the construction of (3.109) that µ is a solution. Conversely, suppose µ satisfies the system multiplier determining equations (3.109) and let σ = −Y1 µ0 − . . . − Yn−1 µn−2 − f µn−1 .  (3.113)  In addition, define the function φ(x, Y, Y1 , . . . , Yn−1 ) implicitly by φx = σ, φ Y = µ0 , φ Yi = µ i ,  (3.114) i = 1, . . . , n − 1.  45  3.4. General ODEs Note that if µ satisfies (3.109) and σ satisfies (3.114) then the integrability conditions of φ are satisfied. In particular, using (3.113) one can see that (φx )Y = σY = µ0x = (φY )x , (φx )Yi = σYi = µix = (φYi )x , (φY )Yi = µ0Yi = µiY = (φYi )Y ,  i = 1, . . . , n − 1, i = 1, . . . , n − 1,  (φYj )Yi = µjYi = µiYj = (φYj )Yi ,  (3.115)  i, j = 1, . . . , n − 1.  Hence φ(x, Y, Y1 , . . . , Yn−1 ) exists and is unique up to a constant. The substitution of (3.114) into (3.113) yields ˜ = 0, Dφ  (3.116)  which proves that φ(x, Y, Y1 , . . . , Yn−1 ) corresponds to a first integral of (2.1). Hence, from the proof of Lemma 3.15 it follows that µ is a set of multipliers of (3.10) as required.  From Theorem 3.16 it follows immediately that equations (3.109) are the multiplier determining equations for the first order ODE system (3.10). Example 3.17. Consider the ODE y = 0,  (3.117)  which is equivalent to the first-order ODE system y = y1 ,  (3.118)  y1 = 0. Using the direct method one finds that the multiplier determining equations  46  3.4. General ODEs of the ODE system (3.118) are µ0Y1 = µ1Y , µ0x + Y1 µ0Y = 0,  (3.119)  µ1x + Y1 µ1Y + µ0 = 0. From Example 3.10 the first integral determining equation of (3.117) is φx + Y1 φY = 0.  
(3.120)  The integrability conditions of φ imply that ∂φY ∂φY1 = ∂Y1 ∂Y  (3.121)  while differentiation of (3.120) with respect to Y and Y1 , respectively, gives φxY + Y1 φY Y = 0,  (3.122)  φxY1 + φY + Y1 φY Y1 = 0. Letting µ0 = φY and µ1 = φY1 in (3.121) and (3.122) one obtains the system multiplier determining equations µ0Y1 = µ1Y , µ0x + Y1 µ0Y = 0,  (3.123)  µ1x + Y1 µ1Y + µ0 = 0, which are identical to those obtained via the direct method, given in (3.119). Example 3.18. Consider the ODE 1 y + yy = 0 2  (3.124)  47  3.4. General ODEs which is equivalent to the first-order ODE system y = y1 , y1 = y2 , 1 y2 = − yy2 . 2  (3.125)  The direct method yields the multiplier determining equations of the ODE system (3.125) µ0Y1 = µ1Y , µ0Y2 = µ2Y , µ1Y2 = µ2Y1 , 1 1 µ0x + Y1 µ0Y + Y2 µ1Y − Y Y2 µ2Y = Y2 µ2 , 2 2 1 1 0 2 1 µx + Y1 µY1 + Y2 µY1 − Y Y2 µY1 = −µ0 , 2 1 1 µ2x + Y1 µ0Y2 + Y2 µ1Y2 − Y Y2 µ2Y2 = −µ1 + Y µ2 . 2 2  (3.126)  From Example 3.11, the first integral determining equation of (3.124) is 1 φx + Y1 φY + Y2 φY1 − Y Y2 φY2 = 0. 2  (3.127)  The integrability conditions of φ imply that ∂φY ∂φY1 = , ∂Y1 ∂Y  ∂φY ∂φY2 = , ∂Y2 ∂Y  ∂φY1 ∂φY2 = ∂Y2 ∂Y1  (3.128)  while differentiation of (3.127) with respect to Y, Y1 , and Y2 , respectively, gives 1 1 φxY + Y1 φY Y + Y2 φY Y1 − Y Y2 φY Y2 = Y2 φY2 , 2 2 1 φxY1 + Y1 φY Y1 + Y2 φY1 Y1 − Y Y2 φY1 Y2 = −φY , 2 1 1 φxY2 + Y1 φY Y2 + Y2 φY1 Y2 − Y Y2 φY2 Y2 = −φY1 + Y φY2 . 2 2  (3.129)  48  3.4. General ODEs Letting µ0 = φY , µ1 = φY1 , and µ2 = φY2 in (3.128) and (3.129) one obtains the system multiplier determining equations µ0Y1 = µ1Y , µ0Y2 = µ2Y , µ1Y2 = µ2Y1 , 1 1 µ0x + Y1 µ0Y + Y2 µ1Y − Y Y2 µ2Y = Y2 µ2 , 2 2 1 2 1 0 1 µx + Y1 µY1 + Y2 µY1 − Y Y2 µY1 = −µ0 , 2 1 1 µ2x + Y1 µ0Y2 + Y2 µ1Y2 − Y Y2 µ2Y2 = −µ1 + Y µ2 . 2 2  (3.130)  which are equivalent to those obtained via the direct method, given in (3.126).  3.4.5  Induced Multiplier Symmetries  Using the results of the previous section, one can prove the following theorem. Theorem 3.19. Every point symmetry of the first integral determining equation (3.70) which is of the form ∂ ∂ ξ(x, Y ) + η(x, Y ) + ∂x ∂Y  n−1  η i (x, Y ) i=1  ∂ ∂ + β(x, Y , φ) ∂Yi ∂φ  induces a point symmetry of the multiplier determining equations (3.109) for the ODE system (3.10). Conversely, if X = ξ(x, Y )  ∂ ∂ ∂ + η i (x, Y ) + [αji (x, Y )µj ] i ∂x ∂Yi ∂µ  is a point symmetry of (3.109) then it is induced by a point symmetry of (3.70). Proof. To connect the point symmetries of the first integral determining equation (3.70) with the point symmetries of the multiplier determining 49  3.4. General ODEs equations (3.109), first note that by Theorem 3.16, if (µ0 , . . . , µn−1 ) is a solution of (3.109) then there exists a function φ(x, Y ), unique up to a constant, satisfying both (3.70) and (φY , . . . , φYn−1 ) = µ = (µ0 , . . . , µn−1 ).  (3.131)  Moreover, if φ(x, Y, Y1 , . . . , Yn−1 ) satisfies (3.70) then µ(x, Y, Y1 , . . . , Yn−1 ) defined by (3.131) is a solution of the multiplier determining equations (3.109). Thus there exists a non-invertible mapping which takes each solution of the first integral determining equation (3.70) into a solution of the multiplier determining equations (3.109) and which takes each solution of (3.109) into solutions of (3.70). 
It follows immediately that if X = ξ(x, Y )  ∂ ∂ ∂ ∂ + η(x, Y ) + η i (x, Y ) + β(x, Y , φ) ∂x ∂Y ∂Yi ∂φ  (3.132)  is a point symmetry of (3.70) then it corresponds to a symmetry X = ξ(x, Y )  ∂ ∂ ∂ +η(x, Y ) +η i (x, Y ) + ∂x ∂Y ∂Yi  n−1  ∂ ∂µi (3.133)  βi1 (x, Y , φ, µ0 , . . . , µn−1 ) i=0  of (3.109). Using (3.131) one can calculate the functions βi1 by prolonging (3.132) to an extended point transformation X(1) acting on (x, Y , φ, φx , φY , φY1 , . . . , φYn−1 )-space. In particular, ∂ ∂ ∂ ∂ + η(x, Y ) + η i (x, Y ) + β(x, Y , φ) + ∂x ∂Y ∂Yi ∂φ ∂ (1) + βi (x, Y , φ, φx , φY0 , φY1 , . . . , φYn−1 ) ∂Yi  X(1) =ξ(x, Y )  (1)  where βi φx from  (3.134)  is given by the prolongation formula (3.84). One can eliminate  (1) βi  by using (3.70) to write  φx = −φY Y1 − φY1 Y2 − . . . − φYn−2 Y1 − φYn−1 f (x, Y ).  (3.135)  The substitution of (3.131) and (3.135) into the prolongation formula (3.84) 50  3.4. General ODEs yields (1)  βi1 = βi (x, Y , φ, φx , µ0 , . . . , µn−1 )  φx =−µ0 Y1 −...µn−2 Yn−1 −µn−1 f (x,Y )  (3.136) where i = 0, . . . , n − 1 and once again Y0 = Y. The symmetry (3.133) is a point symmetry of the multiplier determining equations (3.109) if and only if each function βi1 (x, Y , φ, µ) does not depend explicitly on φ [4]. From the proof of Theorem 3.12, if X is a point symmetry of (3.70) induced by a higher-order symmetry of (2.1) then one can choose β(x, Y , φ) ≡ 0, in which case  ∂βi1 ∂φ  ≡ 0. Consequently, from (3.84) one  sees that (3.133) is a local symmetry of (3.109) and thus each higher-order symmetry of (2.1) induces a point symmetry of (3.109). Now consider the converse. In order to determine sufficient conditions for the converse to hold, suppose X = ξ(x, Y )  ∂ ∂ ∂ + η i (x, Y ) + [αji (x, Y )µj ] i ∂x ∂Yi ∂µ  (3.137)  is a point symmetry of (3.109) such that ξ, η 0 , . . . , η n−1 are not all identically ˜ i ) as defined by (3.113). It zero. Now consider the variable σ = −µi D(Y follows from (3.113) that n−1  n−1  ˜ i] − µ X[DY  Xσ = − i=0  n−1  ˜ i) αji µj D(Y  i  i,j=0  ai (x, Y )µi  =  (3.138)  i=0  for some functions ai (x, Y ), i = 0, . . . , n − 1. One can then write the point symmetry (3.137) of the multiplier determining equations (3.109) as a point transformation acting on (x, Y , µ0 , . . . , µn−1 , σ)-space given by the infinitesimal generator ˜ = X + ai µi ∂ X ∂σ ∂ ∂ ∂ ∂ = ξ(x, Y ) + η i (x, Y ) + [αji (x, Y )µj ] i + ai µi . ∂x ∂Yi ∂µ ∂σ  (3.139)  51  3.4. General ODEs Recall that if µ = (µ0 , . . . , µn−1 ) satisfies (3.109), and σ is given by (3.113) then there exists a function φ(x, Y ), unique up to a constant, which satisfies (3.114) and (3.70). From (3.114), the point transformation (3.139) corresponds to a point symmetry of the first integral determining equations (3.70) X = ξ(x, Y )  ∂ ∂ ∂ + β(x, Y )φ , + η i (x, Y ) ∂x ∂Yi ∂φ ∂ ∂ ∂φx , ∂φY  (3.140)  ∂ ∂ ∂φY1 , . . . , ∂φYn−1 in the first prolon∂ gation X (1) of (3.140) are equal to the coefficients of ∂σ , ∂µ∂ 0 , ∂µ∂ 1 . . . , ∂µ∂n−1 ˜ defined by (3.139) when (φx , φY , φY , . . . , φY ) = (σ, µ0 , . . . , µn−1 ) in X 1 n−1  if and only if the coefficients of  ,  ˜ = 0. and Dφ Defining ∂φ ∂ ∂µi ∂ ∂ + + , ∂x ∂x ∂φ ∂x ∂µi ∂ ∂φ ∂ ∂µi ∂ D†0 = + + , ∂Y ∂Y ∂φ ∂Y ∂µi ∂ ∂φ ∂ ∂µi ∂ D†j = + + ∂Yj ∂Yj ∂φ ∂Yj ∂µi D†x =  (3.141) j = 1, . . . , n − 1,  the first prolongation of (3.140) is  X  (1)  ∂ =X +ρ + ∂φx  n−1 (1)  βi i=0  ∂ ∂φYi  (3.142)  where n−1  ρ = D†x [β(x, Y )φ] − φx D†x (ξ) −  φYj D†x (η j ), j=0  (3.143)  n−1 (1)  βi  = D†i [β(x, Y )φ] − φx D†i (ξ) −  φYj D†i (η j ),  i = 0, . . . 
, n − 1  j=0  52  3.4. General ODEs and Y0 = Y. The substitution of (3.113) and (3.114) into (3.143) yields ρ= (1)  βi  =  ∂β(x, Y ) φ− ∂x  n−1  n−1  ˜ j )], µj [D†x (η j ) − D†x (ξ)D(Y  ˜ i) − β(x, Y )µi D(Y j=0  i=0  ∂β(x, Y ) φ + β(x, Y )µi − ∂Yi  n−1  ˜ j )], µj [D†i (η j ) − D†i (ξ)D(Y j=0  (3.144) where i = 0, . . . , n − 1. Equating the coefficients of  ∂ ∂ ∂σ , ∂µ0 ,  . . . , ∂µ∂n−1 in  (3.142) and (3.139) when (φx , φY , φY1 , . . . , φYn−1 ) = (σ, µ0 , µ1 , . . . , µn−1 ), one obtains n−1  a k µk = k=0 n−1  αki µk = k=0  ∂β(x, Y ) φ− ∂x  n−1  n−1  ˜ j )], µj [D†x (η j ) − D†x (ξ)D(Y  ˜ i) − β(x, Y )µi D(Y i=0  j=0  ∂β(x, Y ) φ + β(x, Y )µi − ∂Yi  n−1  ˜ j )], µj [D†i (η j ) − D†i (ξ)D(Y j=0  (3.145) where i = 0, . . . , n − 1. The left-hand sides of (3.145) are independent of φ, implying that ∂β = 0, ∂x ∂β = 0, ∂Y ∂β = 0, ∂Yi  (3.146) i = 1, . . . , n − 1,  53  3.4. General ODEs and hence β(x, Y ) = c, a constant. It then follows from (3.145) that n−1  n−1  ˜ j ]µj , [aj + D†x (η j ) − D†x (ξ)DY  ˜ j= µj DY  cσ = −c j=0  j=0  (3.147)  n−1  ˜ j )]µj , [αji + D†i (η j ) − D†i (ξ)D(Y  cµi =  i = 0, . . . , n − 1.  j=0  Since the functions αji (x, Y ), η j (x, Y ) and ξ(x, Y ) are independent of µ, the coefficients of each µi in (3.147) must vanish independently, and hence ˜ j = 0, aj + D†x (η j ) − [D†x (ξ) − c]DY ˜ j = cδ j , αji + D†i (η j ) − D†i (ξ)DY i  i, j = 0, . . . , n − 1,  (3.148)  where δij is 1 if i = j and 0 otherwise. Therefore, if the coefficients of the point transformation (3.139) satisfy (3.148) then it corresponds to a point symmetry (3.140) of the first integral determining equation (3.70). Now suppose that (3.137) is a point symmetry of the system multiplier determining equations (3.109). Note in particular that the system multiplier determining equations (3.109) contain the subsystem µiYj = µjYi ,  i, j = 0, . . . , n − 1, i = j.  (3.149)  Let (1)i  γj  = D†j (αki µk ) − D†j (η k )µik − D†j (ξ)µix  ∂ ∂µiY  be the coefficient of  (3.150)  in the first prolongation X(1) of (3.137). Hence it  j  follows from (3.149) that if (3.137) is a point symmetry of (3.109) then (1)i  γj  (1)j  = γi  i, j = 0, . . . , n − 1, i = j  (3.151)  on solutions of (3.109).  54  3.4. General ODEs Next let g0 = −µn−1 D†0 (f ), gi = −µi−1 − µn−1 D†i (f ),  i = 1, . . . , n − 1.  (3.152)  It follows from the system multiplier determining equations (3.109) that ˜ k ) + gi , µix = −µiYk D(Y  i = 0, . . . , n − 1.  (3.153)  The substitution of (3.153) into (3.150) yields (1)i  γj  ˜ k )]µiY . (3.154) = −D†j (ξ)gi + µk D†j (αki ) + αki µkYj − [D†j (η k ) − D†j (ξ)D(Y k  Using (3.149) to replace µjYk with µkYj , one obtains (1)i  γj  ˜ i )]µi . (3.155) = −D†j (ξ)gi + µk D†j (αki ) + αki µjYk − [D†j (η k ) − D†j (ξ)D(Y Yk  Substituting (3.155) into (3.151), one finds after rearranging terms in the resulting expression that n−1  [D†j (αki ) − D†i (αkj )]µk =  ˜ k ]µi − [αkj + D†j (η k ) − D†j (ξ)DY Yk k=0  (3.156)  n−1  [αki  −  +  D†i (η k )  −  ˜ k ]µj D†i (ξ)DY Yk  +  D†j (ξ)gi  −  D†i (ξ)gj  k=0  where i, j = 0, . . . , n − 1 and i = j. Then, using the fact that µiYj = µjYi , one can re-write (3.156) as [D†j (αki ) − D†i (αkj )]µk =  ˜ k ]µiY − [αkj + D†j (η k ) − D†j (ξ)DY k k=j  [αki  −  +  D†i (η k )  ˜ k ]µj + D† (ξ)gi − D† (ξ)gj + − D†i (ξ)DY j i Yk  k=i  ˜ j ) − αi + D† (η i ) − D† (ξ)D(Y ˜ i) + αjj + D†j (η j ) − D†j (ξ)D(Y i i i  µiYj (3.157)  55  3.4. General ODEs for all i, j = 0, . . . , n − 1, i = j. 
n−1 l Equating the coefficients of {µl }n−1 l=0 and {µYk }l,k=0 in (3.157) to zero one  obtains ˜ k ≡ 0, αkj + D†j (η k ) − D†j (ξ)DY  j, k = 0, . . . , n − 1,  j = k,  (3.158a)  αjj + D†j (η j ) − D†j (ξ) = αii + D†i (η i ) − D†i (ξ), i, j = 0, . . . , n D†j (αki ) − D†i (αkj ) = 0, k = (i − 1), (j − 1), (n − 1), j i D†j (αi−1 ) − D†i (αi−1 ) = −D†j (ξ), i > 0, j i D†j (αj−1 ) − D†i (αj−1 ) = D†i (ξ), j > 0, j i ) = D†j (f )Di (ξ) − D†i (f )Dj (ξ), D†j (αn−1 ) − D†i (αn−1  − 1, (3.158b) (3.158c) (3.158d) (3.158e) (3.158f)  The right-hand sides of (3.158c)-(3.158f) are found by substituting the functions gi and gj defined by (3.152) into (3.157) and equating the coefficients of µ0 , . . . , µn−1 to zero. ˜ k and κj = αj − α ˜ kj it follows immediDefining α ˜ kj = −D†j (η k ) + D†j (ξ)DY k k ately from (3.158a) that κjk = 0,  k=j  (3.159)  i, j = 0, . . . , n − 1.  (3.160)  and from (3.158b) that κii = κjj ,  Thus there exists a function q(x, Y ) such that q(x, Y ) = κii , i = 0, . . . , n − 1. The substitution of ˜ k ) + q(x, Y )δ k αkj = −D†j (η k ) + D†j (ξ)D(Y j  (3.161)  56  3.4. General ODEs into (3.158c)-(3.158f) yields D†i q(x, Y ) = 0,  i = 0, . . . , n − 1.  (3.162)  To see this, suppose i < (n − 1) and let j = (i + 1). Then, choosing k = i, the substitution of (3.161) into (3.158c) gives 0 = D†j (αii ) − D† (αij ) ˜ i ) + q(x, Y )] − D† [−D† (η i ) + D† (ξ)D(Y ˜ i )] = D†j [−D†i (η i ) + D†i (ξ)D(Y i j j = D†j [q(x, Y )] + D†i (ξ)D†j (Yi+1 ) − D†j (ξ)D†i (Yi+1 ) = D†j [q(x, Y )] + D†i (ξ)  ∂Yi+1 ∂Yi+1 − D†j (ξ) ∂Yj ∂Yi  = D†j [q(x, Y )] (3.163) provided that j = (i+1). Since (3.163) holds for all j = 0, . . . , n−1, provided that i is chosen so that i = (j − 1), (n − 1), it is clear that D†j [q(x, Y )] = 0,  j = 0, . . . , n − 1.  (3.164)  Hence there exists a function h(x) such that q(x, Y ) = h(x) and from (3.161) one sees that ˜ k = δjk h(x). αkj + D†j (η k ) − D†j (ξ)DY  (3.165)  ˜ i as in (3.113), it follows from the system multiplier Defining σ = −µi DY determining equations (3.109) that     n−1  σY = −D†0   ˜ j )  = µ0 , (µj DY x j=0    n−1  σYi = −D†i     (3.166)  ˜ j ) = µix , i = 1, . . . , n − 1. (µj DY j=0  57  3.4. General ODEs Furthermore, ˜ i )X[µi ] − µi X[DY ˜ i ] = ak (x, Y )µk Xσ = −D(Y for some functions ak (x, Y ), k = 0, . . . , n − 1. If (3.166) is invariant under the symmetry X given by (3.137) then, using an argument identical to the one used for (3.151), one obtains ˜ k ]σY − [D†j (ak ) − D†x (αkj )]µk = [αkj + D†j (η k ) − D†j (ξ)DY k ˜ k ]µj + D† (ξ)ˆ − [ak + D†x (η k ) − D†x (ξ)DY g − D†x (ξ)gj , j Yk (3.167) on solutions of (3.109) and (3.113) where j = 0, . . . , n − 1 and gˆ = −fx µn−1 . The substitution of (3.165) and (3.113) into (3.167) yields ˜ k − D† (ξ)DY ˜ k ]µj + [D†j (ak ) − D†x (αkj )]µk = −[ak + D†x (η k ) + h(x)DY x Yk ˜ k )]. + D†j (ξ)ˆ g − D†x (ξ)gj − h(x)µk D†j [D(Y (3.168) Splitting (3.168) by setting to zero the coefficients of the µi and their partial derivatives, one obtains ˜ k − h(x)DY ˜ k, ak = −D†x (η k ) + D†x (ξ)DY  (3.169a)  ˜ k ), k = (j − 1), (n − D†j (ak ) − Dx (αkj ) = −h(x)D†j (DY j ˜ j−1 ), j > D†j (aj−1 ) − D†x (αj−1 ) = D†x (ξ) − h(x)D†j (DY  1),  (3.169b)  0,  (3.169c)  j D†j (an−1 ) − D†x (αn−1 ) = D†j (f )D†x (ξ) − D†x (f )D†j (ξ) − h(x)D†j (f ), (3.169d)  j = 0, . . . , (n − 1).  58  3.4. General ODEs Hence if j = 0, . . . 
, n − 2, the substitution of (3.169a) and (3.165) into (3.169b) yields ˜ j )] = D† (aj ) − D†x (αj ) −h(x)D†j [D(Y j j ˜ j ) − h(x)DY ˜ j ]− = D†j [−D†x (η j ) + D†x (ξ)D(Y ˜ j ) + h(x)] − D†x [−D†j (η j ) + D†j (ξ)D(Y ˜ j )] + D†x (ξ)D† (Yj+1 )− = −h(x)D†j [D(Y j  (3.170)  − D†j (ξ)D†x (Yj+1 ) − D†x [h(x)] ˜ j )] = h (x) − h(x)D†j [D(Y Comparing the left- and right-hand sides of (3.170) it follows that h (x) = 0. Similarly, if j = n − 1 one obtains n−1 ) D†n−1 (f )D†x (ξ)−D†x (f )D†n−1 (ξ) − h(x)D†n−1 (f ) = D†n−1 (an−1 ) − D†x (αn−1  ˜ n−1 ) − = D†n−1 −D†x (η n−1 ) + [D†x (ξ) − h(x)]D(Y ˜ n−1 ) + h(x)] − D†x [−D†n−1 (η n−1 ) + D†n−1 (ξ)D(Y = −h(x)D†n−1 (f ) + D†x (ξ)D†n−1 (f ) − D†n−1 (ξ)D†x (f )− − D†x h(x), (3.171) which after simplification yields h (x) = 0 and hence h(x) = c. Thus, if the point transformation (3.137) is a symmetry of the multiplier determining equations (3.109) then the coefficients must satisfy ˜ k ), ak = −D†x (η k ) + [D†x (ξ) − c]D(Y αkj  =  −D†j (η k )  +  ˜ k) D†j (ξ)D(Y  +  cδkj ,  k = 0, . . . , n − 1, j, k = 0, . . . , n − 1.  (3.172)  The system (3.172) is identical to (3.148) and thus each point symmetry (3.137) of the multiplier determining equations (3.109) is induced by a point symmetry (3.140) of the first integral determining equation (3.70). Finally, since every point symmetry of (3.70) projects onto a point symmetry of  59  3.4. General ODEs (3.10), all point symmetries (3.137) of the multiplier determining equations (3.109) must also project onto point symmetries of the ODE system (3.10).  Moreover, since the multiplier determining equations (3.109) of the ODE system (3.10) form a first-order linear system, one can prove directly that all local symmetries of (3.109) are induced by point symmetries of the ODE system (3.10). Theorem 3.20. All local symmetries of the system multiplier determining equations (3.109) are induced by point symmetries of the ODE system (3.10) and thus by higher-order symmetries of the scalar ODE (2.1). Proof. First, note that if µ solves the system multiplier determining equations (3.109) then ˜ 0 = µn−1 fy Dµ  (3.173)  which has the corresponding characteristic equation [3] dY dY1 dYn−2 dYn−1 dµ0 dx = = = ... = = = n−1 . 1 Y1 Y2 Yn−1 f µ fy  (3.174)  In order for a symmetry to leave invariant (3.109), it must also leave invariant (3.174) when µ satisfies (3.109). In particular, if x∗ = x∗ (x, Y ), Y ∗ = Y ∗ (x, Y ),  (3.175)  µ∗ = µ∗ (x, Y , µ, µx , µY , . . .) is a local symmetry of (3.109) where µY =  ∂µi ∂Yj  n−1 i,j=0  , then  ∗ dY ∗ dYn−1 dx∗ dY ∗ dY ∗ = ∗ = ∗1 = . . . = ∗n−2 = . 1 Y1 Y2 Yn−1 f (x∗ , Y ∗ )  (3.176)  Note that (3.176) are necessary and sufficient conditions for a transformation of (x, Y )-space to be a point symmetry of (3.10). Hence any local symmetry 60  3.4. General ODEs of (3.109) must be induced by a point symmetry of the ODE system (3.10), and the proof is complete.  Example 3.21. Consider the ODE y =0  (3.177)  which in Example 3.17 was shown to have corresponding system multiplier determining equations µ0Y1 = µ1Y , µ0x + Y1 µ0Y = 0,  (3.178)  µ1x + Y1 µ1Y + µ0 = 0. 
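Before analyzing the point symmetries of this system, a direct check is instructive. By Lemma 3.15, the first integrals Y1 and xY1 − Y of (3.177) yield the multiplier pairs (µ0, µ1) = (0, 1) and (µ0, µ1) = (−1, x), and each pair must satisfy (3.178). The sketch below verifies this symbolically; it is an illustration only, assuming the Python library SymPy rather than the GeM/Maple tools used for the symmetry computations that follow.

# Check that two known multiplier pairs of y'' = 0 satisfy the system (3.178).
import sympy as sp

x, Y, Y1 = sp.symbols('x Y Y1')

def residuals(mu0, mu1):
    return [
        sp.diff(mu0, Y1) - sp.diff(mu1, Y),            # mu0_Y1 = mu1_Y
        sp.diff(mu0, x) + Y1 * sp.diff(mu0, Y),        # mu0_x + Y1 mu0_Y = 0
        sp.diff(mu1, x) + Y1 * sp.diff(mu1, Y) + mu0,  # mu1_x + Y1 mu1_Y + mu0 = 0
    ]

pairs = [(sp.Integer(0), sp.Integer(1)),   # multipliers of the first integral Y1
         (sp.Integer(-1), x)]              # multipliers of the first integral x*Y1 - Y
for mu0, mu1 in pairs:
    assert all(sp.simplify(r) == 0 for r in residuals(mu0, mu1))
print('Both multiplier pairs satisfy (3.178).')
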
Using the GeM package [12] for MapleTM , one can show that the transformation of (x, Y, Y1 , µ0 , µ1 )-space ∂ ∂ ∂ + η(x, Y, Y1 ) + η 1 (x, Y, Y1 ) + ∂x ∂Y ∂Y1 ∂ + [f0 (x, Y, Y1 )µ0 + f1 (x, Y, Y1 )µ1 ] 0 + ∂µ ∂ + [g0 (x, Y, Y1 )µ0 + g1 (x, Y, Y1 )µ1 ] 1 ∂µ  X =ξ(x, Y, Y1 )  (3.179)  is a point symmetry of (3.178) if and only if the coefficients ξ, η, η 1 , f0 , f1 , g0 , and g1 satisfy the corresponding symmetry determining equations η 1 = ηx + Y1 ηY − Y1 (ξx + Y1 ξY ),  (3.180a)  ηx1  (3.180b)  +  Y1 ηY1  = 0,  f0 = −(ηY − Y1 ξY ) + c,  (3.180c)  f1 = −ηY1 ,  (3.180d)  g0 = −(ηY1 − Y1 ξY1 ),  (3.180e)  g1 = −ηY11 + c.  (3.180f)  61  3.4. General ODEs Comparing (3.180a) and (3.180b) with the symmetry determining equations of the ODE (3.177) given by (3.96a) and (3.96b) in Example 3.13, it follows that all point symmetries (3.179) of the system multiplier determining equations (3.178) project onto higher-order symmetries of the corresponding ODE (3.177). Conversely, each higher-order symmetry Xp = ξ(x, Y, Y1 )  ∂ ∂ ∂ + η(x, Y, Y1 ) + η 1 (x, Y, Y1 ) ∂x ∂Y ∂Y1  of the ODE (3.177) induces a point symmetry (3.179) of the system multiplier determining equations (3.178) with g0 , g1 , f0 , and f1 given by (3.180c)(3.180f), which agrees with the general result given in (3.172) where n = 2 and f (x, Y, Y1 ) ≡ 0. Example 3.22. Consider the ODE 1 y + yy = 0 2  (3.181)  which in Example 3.18 was shown to have corresponding system multiplier determining equations µ0Y1 = µ1Y , µ0Y2 = µ2Y , µ1Y2 = µ2Y1 1 1 µ0x + Y1 µ0Y + Y2 µ1Y − Y Y2 µ2Y = Y2 µ2 , 2 2 1 1 0 1 2 µx + Y1 µY1 + Y2 µY1 − Y Y2 µY1 = −µ0 , 2 1 1 µ2x + Y1 µ0Y2 + Y2 µ1Y2 − Y Y2 µ2Y2 = −µ1 + Y µ2 . 2 2  (3.182)  Using the GeM package [12] with MapleTM , one can show that the transfor-  62  3.4. General ODEs mation of (x, Y, Y1 , Y2 , µ0 , µ1 , µ2 )-space X =ξ(x, Y, Y1 , Y2 )  ∂ ∂ ∂ + η(x, Y, Y1 , Y2 ) + η 1 (x, Y, Y1 , Y2 ) + ∂x ∂Y ∂Y1  ∂ + ∂µ0 ∂ + [g0 (x, Y, Y1 , Y2 )µ0 + g1 (x, Y, Y1 , Y2 )µ1 ] + g2 (x, Y, Y1 , Y2 )µ2 ] 1 + ∂µ ∂ + [h0 (x, Y, Y1 , Y2 )µ0 + h1 (x, Y, Y1 , Y2 )µ1 ] + h2 (x, Y, Y1 , Y2 )µ2 ] 2 ∂µ (3.183) + [f0 (x, Y, Y1 , Y2 )µ0 + f1 (x, Y, Y1 , Y2 )µ1 + f2 (x, Y, Y1 , Y2 )µ2 ]  is a point symmetry of (3.182) if and only if the coefficients ξ, η, η 1 , f0 , f1 , f2 , g0 , g1 , g2 , h0 , h1 , and h2 satisfy the corresponding symmetry determining  63  3.4. General ODEs equations 1 η 1 = ηx + Y1 ηY + Y2 ηY1 − Y Y2 ηY2 − 2 1 − Y1 (ξx + Y1 ξY + Y2 ξY1 − Y Y2 ξY2 ), 2 1 η 2 = ηx1 + Y1 ηY1 + Y2 ηY11 − Y Y2 ηY12 − 2 1 − Y2 (ξx + Y1 ξY + Y2 ξY1 − Y Y2 ξY2 ), 2 1 ηx2 + Y1 ηY2 + Y2 ηY21 − Y Y2 ηY22 + 2 1 1 1 1 Y Y2 (ξx + Y1 ξY + Y2 ξY1 − Y Y2 ξY2 ) + Y2 η + Y η 2 = 0, 2 2 2 2 f0 = −ηY + Y1 ξY + c, f1 = −ηY1 + Y2 ξY , 1 f2 = −ηY2 − Y Y2 ξY , 2 g0 = −ηY1 + Y1 ξY1 ,  (3.184a)  (3.184b)  (3.184c) (3.184d) (3.184e) (3.184f) (3.184g)  g1 = −ηY11 + Y2 ξY1 , 1 g2 = −ηY21 − Y Y2 ξY1 , 2 h0 = −ηY2 + Y1 ξY2 ,  (3.184h)  h1 = −ηY12 + Y2 ξY2 , 1 h2 = −ηY21 − Y Y2 ξY2 + c. 2  (3.184k)  (3.184i) (3.184j)  (3.184l)  Comparing (3.184a)-(3.184c) with the symmetry determining equations of the ODE (3.181) given by (3.102a)-(3.102c) in Example 3.14, it follows that all point symmetries (3.183) of the system multiplier determining equations (3.182) project onto higher-order symmetries of the corresponding ODE (3.181). Conversely, each higher-order symmetry ∂ ∂ ∂ + η(x, Y, Y1 , Y2 ) + η 1 (x, Y, Y1 , Y2 ) + ∂x ∂Y ∂Y1 ∂ + η 2 (x, Y, Y1 , Y2 ) ∂Y2  Xp = ξ(x, Y, Y1 , Y2 )  64  3.4. 
General ODEs of the ODE (3.181) induces a point symmetry (3.183) of the system multiplier determining equations (3.182) with f0 , f1 , f2 , g0 , g1 , g2 , h0 , h1 , and h2 given by (3.184d)-(3.184l).  3.4.6  Connection between the System and Scalar Multiplier Determining Equations  The IFDE of the scalar ODE (2.1) may be constructed from the system multiplier determining equations (3.109) in the following way. Using the multiplier determining equations (3.109) of the ODE system (3.10), the dependent variables µn−2 , . . . , µ0 can be written as functions of the independent variables (x, Y ), µn−1 and its derivatives. Specifically, one has the recurrence relation ˜ i − µn−1 fY , µi−1 = −Dµ i  i = 1, . . . , n − 1  (3.185)  ˜ is where the total derivative D n−2  Yk+1 D†k + f (x, Y )D†n−1 ,  ˜ = D†x + Y1 D† + D 0  (3.186)  k=1  and D†x , D†0 , . . . , D†n−1 are given by (3.141). Defining Λ = µn−1 , the substitution of {µj (x, Y , Λ, Λx , ΛY , . . .)}n−2 into 0 the remaining equations of the system multiplier determining equations (3.109) yields a corresponding PDE system S i [x, Y , Λ, Λx , ΛY , . . .] = 0,  i = 1, . . . , [n(n − 1) + 1].  (3.187)  The following theorem proves that the resulting system (3.187) is the IFDE of (2.1). Theorem 3.23. A function Λ(x, Y ) is a solution of (3.187) if and only if it is an integrating factor of (2.1). Consequently, the system (3.187) obtained  65  3.4. General ODEs algorithmically from the system multiplier determining equations (3.109) is the IFDE of the scalar ODE (2.1). Proof. Let Λ be an integrating factor of the scalar ODE (2.1). From Theorem 3.16 and Lemma 3.15 there exist µ0 , . . ., µn−2 such that (µ0 , . . . , µn−2 , Λ) solves the system multiplier determining equations (3.109). By construction, it follows immediately that Λ satisfies (3.187). Conversely, suppose Λ solves (3.187) and define µ0 (x, Y , Λ, Λx , ΛY , . . .), . . . , µn−2 (x, Y , Λ, Λx , ΛY , . . .) through (3.185) with µn−1 = Λ. By the construction of (3.187), the functions (µ0 , . . . , µn−2 , µn−1 ) = (µ0 (x, Y , Λ), . . . , µn−2 (x, Y , Λ, Λx , ΛY , . . .), Λ) satisfy the system multiplier determining equations (3.109) and thus from Theorem 3.16 correspond to a set of multipliers of the ODE system (3.10). Finally, it follows from the proof of Lemma 3.15 that Λ is an integrating factor of the scalar ODE (2.1), and the proof is complete.  Example 3.24. Consider the ODE y = 0.  (3.188)  From Example 3.17, the system multiplier determining equations are µ0Y1 = µ1Y ,  (3.189a)  µ0x + Y1 µ0Y = 0,  (3.189b)  µ1x + Y1 µ1Y + µ0 = 0.  (3.189c)  Solving for µ0 in (3.189c) one obtains µ0 = −µ1x − Y1 µ1Y , which upon substitution into (3.189a) and (3.189b) yields − µ1xY1 − µ1Y − Y1 µ1Y Y1 = µ1Y ,  (3.190a)  − µ1xx − Y1 µ1xY − Y1 (µ1xY + Y1 µ1Y Y ) = 0.  (3.190b) 66  3.4. General ODEs The system (3.190) is clearly equivalent to the IFDE of (3.24), given by (2.13) in Example 2.6, when µ1 = Λ. Example 3.25. Consider the ODE 1 y + yy = 0. 
2  (3.191)  Using Maple and the direct method one can obtain the IFDE of (3.191) 3 Λxxx = − 2Y Y1 Y2 ΛY Y1 + Y Y1 Y22 ΛY2 Y2 − 2Y2 Λ − 2Y Y2 ΛY − 3Y1 Y2 ΛY1 + 4 5 3 + Y1 Λx + Y12 ΛY − Y22 ΛY2 + Y Y1 Y2 ΛY2 − Y 2 Y2 ΛY1 + 2 2 5 3 2 9 1 3 Y Y2 ΛY2 + Y Y2 ΛY2 Y2 + 6Y1 Y2 ΛY Y − Y 2 Y22 ΛY1 Y2 + 2 8 4 1 3 + 2Y Y22 ΛY1 Y1 − Y1 Y22 ΛY1 Y2 + Y Λxx − 3Y1 ΛxxY − 3Y12 ΛxY Y + 2 2 2 3 + 6Y2 ΛxY + 3Y2 ΛxY1 Y1 − Y1 ΛY Y Y + 9Y22 ΛY Y1 + 2Y23 ΛY1 Y1 Y1 − 1 1 3 − Y Y22 ΛY Y2 + Y Y12 ΛY Y − 2Y Y2 ΛxY1 + Y 3 Y23 ΛY2 Y2 Y2 − 2 2 8 3 − Y 2 Y23 ΛY1 Y2 Y2 + Y Y1 ΛxY + 3Y1 Y22 ΛY Y1 Y1 , 4 1 ΛxxY1 = − 3ΛxY + Λ − 2Y2 ΛxY1 Y1 + Y Y1 ΛY Y1 − 2Y1 Y2 ΛY Y1 Y1 + 2 1 1 1 + Y Y2 ΛY Y2 + Y1 Y2 ΛY1 Y2 + Y 2 Y22 ΛY1 Y2 Y2 − Y Y2 ΛY1 Y1 + 2 2 4 1 2 1 1 + Y Y2 ΛY1 Y2 − 3Y1 ΛY Y + Y ΛxY1 + Y ΛY + Y1 ΛY1 − 2 2 2 − Y12 ΛY Y Y1 + Y2 ΛY2 − 4Y2 ΛY Y1 − Y22 ΛY1 Y1 Y1 − 2Y1 ΛxY Y1 , 1 ΛxY2 = − 2ΛY1 + Y ΛY2 − Y1 ΛY Y2 − Y2 ΛY1 Y2 + Y Y2 ΛY2 Y2 . 2 (3.192) Additionally, from Example 3.18 the system multiplier determining equa-  67  3.4. General ODEs tions are µ0Y1 = µ1Y ,  (3.193a)  µ0Y2  µ2Y ,  (3.193b)  µ1Y2 = µ2Y1 ,  (3.193c)  =  1 1 µ0x + Y1 µ0Y + Y2 µ1Y − Y Y2 µ2Y = Y2 µ2 , 2 2 1 1 0 1 2 µx + Y1 µY1 + Y2 µY1 − Y Y2 µY1 = −µ0 , 2 1 1 µ2x + Y1 µ0Y2 + Y2 µ1Y2 − Y Y2 µ2Y2 = −µ1 + Y µ2 . 2 2  (3.193d) (3.193e) (3.193f)  Solving for µ1 in (3.193f), after replacing µ0Y2 and µ1Y2 with µ2Y and µ2Y1 , respectively, and using (3.193b) and (3.193c), one obtains 1 1 µ1 = −µ2x − Y1 µ2Y − Y2 µ2Y1 + Y Y2 µ2Y2 + Y µ2 . 2 2  (3.194)  One can then solve for µ0 in terms of µ1 , which yields 1 µ0 = −µ1x − Y1 µ1Y − Y2 µ1Y1 + Y Y2 µ1Y2 . 2  (3.195)  The substitution of (3.194) and (3.195) into (3.193a)-(3.193d) yields the  68  3.4. General ODEs following system after simplification with Maple: 3 µ2xxx = − 2Y Y1 Y2 µ2Y Y1 + Y Y1 Y22 µ2Y2 Y2 − 2Y2 µ2 − 2Y Y2 µ2Y − 3Y1 Y2 µ2Y1 + 4 3 5 2 2 2 + Y1 µx + Y1 µY − Y22 µ2Y2 + Y Y1 Y2 µ2Y2 − Y 2 Y2 µ2Y1 + 2 2 5 3 2 2 9 + Y Y2 µY2 Y2 + 6Y1 Y2 µ2Y Y − Y 2 Y22 µ2Y1 Y2 + 2Y Y22 µ2Y1 Y1 − 8 4 1 3 2 2 2 − Y1 Y2 µY1 Y2 + Y µxx − 3Y1 µ2xxY − 3Y12 µ2xY Y + 6Y2 µ2xY + 2 2 3 + 3Y22 µ2xY1 Y1 − Y13 µ2Y Y Y + 9Y22 µ2Y Y1 + 2Y23 µ2Y1 Y1 Y1 − Y Y22 µ2Y Y2 + 2 1 3 3 2 3 2 3 2 1 2 2 2 + Y Y1 µY Y − 2Y Y2 µxY1 + Y Y2 µY2 Y2 Y2 − Y Y2 µY1 Y2 Y2 + 2 8 4 1 + Y Y1 µ2xY + 3Y1 Y22 µ2Y Y1 Y1 + Y 3 Y2 µ2Y2 , (3.196a) 2 1 µ2xxY1 = − 3µ2xY + µ2 − 2Y2 µ2xY1 Y1 + Y Y1 µ2Y Y1 − 2Y1 Y2 µ2Y Y1 Y1 + 2 1 1 1 2 2 + Y Y2 µY Y2 + Y1 Y2 µY1 Y2 + Y 2 Y22 µ2Y1 Y2 Y2 − Y Y2 µ2Y1 Y1 + 2 2 4 1 1 1 + Y 2 Y2 µ2Y1 Y2 − 3Y1 µ2Y Y + Y µ2xY1 + Y µ2Y + Y1 µ2Y1 − 2 2 2 2 2 2 2 2 − Y2 µY2 − 4Y2 µY Y1 − Y2 µY1 Y1 Y1 − 2Y1 µxY Y1 − Y12 µ2Y Y Y1 , (3.196b) 1 µ2xY2 = − 2µ2Y1 + Y µ2Y2 − Y1 µ2Y Y2 − Y2 µ2Y1 Y2 + Y Y2 µ2Y2 Y2 . 2  (3.196c)  which is clearly equivalent to the IFDE (3.193) when µ2 = Λ. In the proof of Theorem 3.23 it was shown that the mapping Λ → (µ0 (x, Y , Λ, Λx , ΛY , . . .), . . . , µn−2 (x, Y , Λ, Λx , ΛY , . . .), Λ),  (3.197)  which takes each solution of the IFDE (3.187) into a solution of the system multiplier determining equations (3.109), is invertible; thus implying that all local symmetries of the IFDE (3.187) correspond to local symmetries of the system multiplier determining equations (3.109) and conversely, all local symmetries of the system multiplier determining equations (3.109)  69  3.4. General ODEs correspond to local symmetries of the IFDE (3.187) [4]. Hence, from Theorem 3.20 it follows that that all local symmetries of the IFDE (3.187) must project onto point symmetries of (3.10). In particular, one has the following result. 
Theorem 3.26. A point symmetry ∂ ∂ ∂ X = ξ(x, Y ) + η(x, Y ) + η i (x, Y ) i + ∂x ∂Y ∂Y  n−1  [αji µj ] i,j=0  ∂ ∂µi  (3.198)  of the system multiplier determining equations (3.109) yields a point symmetry X† = ξ(x, Y )  ∂ ∂ ∂ ∂ + η(x, Y ) + η i (x, Y ) i + s(x, Y )Λ ∂x ∂Y ∂Y ∂Λ  (3.199)  of the IFDE (3.187) for some s(x, Y ) if and only if αjn−1 ≡ 0,  j =  0, . . . , n − 2. Moreover, every point symmetry (3.199) of the IFDE (3.187) yields a point symmetry of the system multiplier determining equations (3.109). Proof. Suppose ∂ ∂ ∂ X=ξ +η + ηi + ∂x ∂Y ∂Y i  n−1  [αji µj ] i,j=0  ∂ ∂µi  (3.200)  is a point symmetry of the system multiplier detemining equations (3.109). Let ∂ k Λ denote the kth order partial derivatives of Λ with respect to x, Y, Y1 , . . . , Yn−1 . Then, from the construction of the invertible mapping in the remarks following the proof of Theorem 3.23, X must correspond to a  70  3.4. General ODEs higher-order symmetry of the scalar IFDE (3.108) given by ∂ ∂ ∂ +η + ηi + ∂x ∂Y ∂Y i   n−2  ∂ n−1 n−1 j n + [αj (x, Y )µ (x, Y , Λ, ∂Λ, . . . , ∂ Λ)] + αn−1 (x, Y )Λ   ∂Λ  X† = ξ  j=0  (3.201) where the functions {µi (x, Y , Λ, ∂Λ, . . . , ∂ n Λ)}0n−2 are defined recursively by (3.185) with µn−1 = Λ. Since µi (x, Y , Λ, ∂Λ, . . . , ∂ n Λ) has an essential dependence on the [(n−1)−i]th order derivatives of Λ and no dependence on higher-order derivatives of Λ, it is clear that (3.201) is a point symmetry of (3.187) if and only if αjn−1 (x, Y ) ≡ 0 for all j = 0, . . . , n − 2. If αjn−1 (x, Y ) = 0 for some j ∈ {0, . . . , n − 2} then it follows that X† , defined by (3.201), is a higher-order symmetry of the scalar IFDE. Now it is shown that every point symmetry ∂ ∂ + η(x, Y ) + X = ξ(x, Y ) ∂x ∂Y  n−1  †  η i (x, Y ) i=0  ∂ ∂ + s(x, Y )Λ ∂Yi ∂Λ  (3.202)  of the scalar IFDE (3.187) induces a point symmetry X = ξ(x, Y )  ∂ ∂ +η(x, Y ) + ∂x ∂Y  n−1  η i (x, Y ) i=1  ∂ + ∂Yi  n−1  αji (x, Y )µj i,j=0  ∂ (3.203) ∂µi  of the corresponding scalar multiplier determining equations (3.109) for some functions αji (x, Y ). Suppose (3.202) is a point symmetry of the scalar IFDE (3.187). Then, using the invertible mapping constructed in the remarks following the proof of Theorem 3.23, one can construct the corresponding local symmetry of the system multiplier determining equations (3.109) in the following way. First let X†  (n)  denote the prolongation of X† (3.202) to a vector  field acting on (x, Y , Λ, ∂Λ, . . . , ∂ n Λ)-space, and let µ0 (x, Y , Λ, ∂Λ, . . . , ∂ n Λ), . . . , µn−1 (x, Y , Λ, ∂Λ, . . . , ∂ n Λ) be the image of Λ under the invertible map-  71  3.4. General ODEs ping (3.197). Specifically, the functions µ0 , . . . , µn−1 are defined recursively using (3.185) as follows µn−1 (x, Y , Λ, ∂Λ, . . . , ∂ n Λ) = Λ, ˜ − ΛfY , µn−2 (x, Y , Λ, ∂Λ, . . . , ∂ n Λ) = −DΛ n−1 ˜ DΛ ˜ − ΛfY ] − ΛfY , µn−3 (x, Y , Λ, ∂Λ, . . . , ∂ n Λ) = −D[− n−2 n−1 .. . ˜ 1 (x, Y , Λ, ∂Λ, . . . , ∂ n Λ)] − ΛfY , µ0 (x, Y , Λ, ∂Λ, . . . , ∂ n Λ) = −D[µ 1 (3.204) ˜ is defined by (3.186). Through (3.204) one can extend X† (n) to act where D on (x, Y , Λ, ∂Λ, . . . , ∂ n Λ, µ0 , . . . , µn−1 )-space, producing  X  † (n)  n−1  X†  + i=0  (n)  [µi (x, Y , ∂Λ, . . . , ∂ n Λ)]  ∂ . ∂µi  (3.205)  It is important to note that if Λ is a solution of the scalar IFDE (3.187), then the functions µ0 , . . . , µn−1 defined by (3.204) solve the system multiplier determining equations (3.109). Moreover, for each solution (µ0 , . . . 
, µn−1 ) of the system multiplier determining equations (3.109), there exists a unique Λ such that (3.204) holds and Λ satisfies the scalar IFDE (3.187). Hence, the point transformation with infinitesimal generator (3.205) must map each solution of the system multiplier determining equations (3.109) into another, possibly distinct, solution and thus corresponds to a symmetry of the system multiplier determining equations (3.109). Substituting Λ = µn−1 one obtains a corresponding transformation of (x, Y , µ0 , . . . , µn−1 )-space by pro-  72  3.4. General ODEs jection. By construction the resulting transformation X = ξ(x, Y )  ∂ ∂ + η(x, Y ) + ∂x ∂Y  n−1  X†  +  (n)  n−1  η i (x, Y ) i=0  [µi (x, Y , ∂Λ, . . . , ∂ n Λ)]  i=0  ∂ + ∂Yi  Λ=µn−1 , ∂Λ=∂µn−1 , ..., ∂ n Λ=∂ n µn−1  ∂ , ∂µi  (3.206) must also be a symmetry of the system multiplier determining equations (3.109). The transformation X defined by (3.206) is a point transformation of (x, Y , µ0 , . . . , µn−1 )-space if and only if the coefficients X†  (n)  [µi (x, Y , ∂Λ, . . . , ∂ n Λ)]  Λ=µn−1 , ∂Λ=∂µn−1 , ..., ∂ n Λ=∂ n µn−1  (3.207)  where i = 0, . . . , n − 1, can be made independent of the components of ∂µn−1 , . . . , ∂ n µn−1 through substitution of the system multiplier determining equations (3.109). Since the system multiplier determining equations (3.109) are linear in µ0 , . . . , µn−1 , the transformation X defined by (3.206) is a point symmetry of the system multiplier determining equations (3.109) if and only if it is of the form (3.203). Hence, one must show that there exist functions αji (x, Y ), i, j = 0, . . . , n − 1, such that n−1 i  αji (x, Y )µj ,  X[µ ] =  i = 0, . . . , n − 1  (3.208)  j=0  on solutions of the system multiplier determining equations (3.109). This is shown inductively as follows.  73  3.4. General ODEs First note that since µn−1 (x, Y , Λ, ∂Λ, . . . , ∂ n Λ) = Λ it follows immediately that Xµn−1 = X†  (n) n−1  µ  (x, Y , Λ)  Λ=µn−1 , ∂Λ=∂µn−1 , ..., ∂ n Λ=∂ n µn−1  = [s(x, Y )Λ]|Λ=µn−1 , ∂Λ=∂µn−1 , ..., ∂ n Λ=∂ n µn−1  (3.209)  = s(x, Y )µn−1 . Then, fixing 0 ≤ i < n − 1, suppose there exist functions αji+1 (x, Y ) such that X  † (n)  n−1  [µ  i+1  αji+1 (x, Y )µj (x, Y , Λ, ∂Λ, . . . , ∂ n Λ)  n  (x, Y , Λ, ∂Λ, . . . , ∂ Λ)] = j=0  (3.210) on solutions of the scalar IFDE (3.187). From (3.206) it follows that n−1  αji+1 (x, Y )µj  Xµi+1 =  (3.211)  j=0  on solutions of the system multiplier determining equations (3.109). The substitution of µn−1 = Λ into the equation (3.185) yields ˜ µi+1 (x, Y , Λ, ∂Λ, ∂ n Λ) − ΛfY µi (x, Y , Λ, ∂Λ, ∂ n Λ) = −D i+1 and hence after applying X† X†  (n)  (n)  (3.212)  to both sides of (3.212) one finds  µi (x, Y , Λ, ∂Λ, ∂ n Λ) = −X†  (n)  − X†  ˜ i+1 (x, Y , Λ, ∂Λ, ∂ n Λ) − Dµ (n)  (3.213) [ΛfYi+1 ].  74  3.4. General ODEs The application of X†  (n)  to the second term on the right-hand side of  (3.212) yields  X†  (n)    n−1  fYj Yi+1 η j   [ΛfYi+1 ] = fYi+1 s(x, Y )Λ + Λ fxYi+1 ξ + fY Yi+1 η + j=1  (3.214) from which it follows that X†  (n)  [ΛfYi+1 ]  µ  Λ=µn−1 , ∂Λ=∂µn−1 , ..., ∂ n Λ=∂ n µn−1  =   n−1  n−1   fYi+1 s(x, Y ) + fxYi+1 ξ + fY Yi+1 η +  fYj Yi+1 η  j  (3.215) .  j=1  Before proceeding further it is necessary to derive an expression for how i+1 n n the functions µi+1 x (x, Y , Λ, ∂Λ, . . . , ∂ Λ), µY (x, Y , Λ, ∂Λ, . . . , ∂ Λ), i+1 n n µi+1 Y1 (x, Y , Λ, ∂Λ, . . . , ∂ Λ), . . . , µYn−1 (x, Y , Λ, ∂Λ, . . . , ∂ Λ), transform un-  der the action of X†  (n)  . 
Let x∗ = x∗ (x, Y ; ),  Y ∗ = Y ∗ (x, Y ; ), Λ∗ = Λ∗ (x, Y , Λ; ), (∂Λ)∗ = ∂Λ∗ (x, Y , Λ, ∂Λ; ), .. .  (3.216)  (∂ n Λ)∗ = ∂ n Λ∗ (x, Y , Λ, ∂Λ, . . . , ∂ n Λ; ), be the one-parameter Lie-group of transformations with infinitesimal operator X†  (n)  . If (µi+1 )∗ = µi+1 (x∗ , Y ∗ , Λ∗ , (∂Λ)∗ , . . . , (∂ n Λ)∗ )  75  3.4. General ODEs is the image of µi+1 (x, Y , Λ, ∂Λ, . . . , ∂ n Λ) under the transformation (3.216) then the contact conditions imply that ∗ ∗ d(µi+1 )∗ = (µi+1 x ) dx , ∗ ∗ d(µi+1 )∗ = (µi+1 Y ) dY , ∗ ∗ d(µi+1 )∗ = (µi+1 Yj ) dYj ,  (3.217) j = 1, . . . , n − 1.  Re-writing the contact conditions (3.217) in terms of the infinitesimal generator X† X†  (n)  (n)  [3], one obtains  † † [µi+1 x ] = Dx [X  (n)  n−1  ηxj µi+1 Yj ,  i+1 (µi+1 )] − ξx µi+1 x − η x µY − j=1  X†  (n)  † † [µi+1 Y ] = D0 [X  (n)  n−1  ηYj µi+1 Yj ,  i+1 (µi+1 )] − ξY µi+1 x − η Y µY − j=1  X†  (n)  † † [µi+1 Yk ] = Dk [X  (n)  n−1  ηYj k µi+1 Yj ,  i+1 (µi+1 )] − ξYk µi+1 x − η Yk µ Y − j=1  k = 1, . . . , n − 1 (3.218) ˜ i+1 = µi+1 + where D†x , D†0 , . . . , D†n−1 are given by (3.141). Noting that Dµ x n−1 i+1 ˜ i+1 Y1 µY + j=1 µj DYj it follows that  X†  (n)  ˜ i+1 ] = X† [Dµ  (n)    n−1  ˜  µi+1 j DYj  i+1 µi+1 x + Y1 µ Y +  (3.219a)  j=1   = X  † (n)  (µi+1 x ) + Y1 X  † (n)  n−1  ˜ j )]X [D(Y  (µi+1 Y )+   † (n)   (µi+1 Yj ) +  j=1    n−2  1 + µi+1 Y η +  n−1 i+1 η j+1 µi+1 Yj + µYn−1  j=1  (3.219b)  fYk η k  .  ξfx + ηfY + k=1  76  3.4. General ODEs Using (3.218) one can re-write the first term on the right-hand side of (3.219b) to give  X  † (n)  (µi+1 x ) + Y1 X  † (n)    n−1  ˜ j )]X [D(Y  (µi+1 Y )+  † (n)   (µi+1 Yj ) =  j=1   i+1 D†x [X† (n) (µi+1 )] − ξx µi+1 x − η x µY −    n−1  + ηxj µi+1 Yj j=1   Y1 D†0 [X†  (n)    n−1  + ηYj µi+1 Yj  i+1 (µi+1 )] − ξY µi+1 x − η Y µY − j=1  n−1    ˜ k )] D† [X† [D(Y k  (n)    n−1  . ηYj k µi+1 Yj  i+1 (µi+1 )] − ξYk µi+1 x − ηYk µY − j=1  k=1  (3.220) Changing the order of summation in (3.220), one obtains  † (n) i+1 X† (n) (µi+1 (µY ) + x ) + Y1 X    n−1  ˜ j )]X† [D(Y  (n)   (µi+1 Yj ) =  j=1 (n) D†x [X† (µi+1 )]  +  (n) Y1 D†0 [X† (µi+1 )]  n−1  ˜ k )]D† [X† [D(Y k  +  (n)  (µi+1 )] −  k=1 n−1  −  ˜ k )]ξY µi+1 [D(Y k x  i+1 ξx µi+1 x + Y1 ξY µx +  −  k=1 n−1  −  ˜ k )]ηY µi+1 [D(Y k Y  i+1 ηx µi+1 Y + Y1 ηY µY +  −  k=1 n−1  n−1  ηxj µi+1 Yj  − j=1  +  Y1 ηYj µi+1 Yj  ˜ k )]η j µi+1 [D(Y Yk Yj  +  .  k=1  (3.221)  77  3.4. General ODEs ˜ given in (3.186) that It follows from the definition of D ˜ X† (n) (µi+1 ) = D†x [X† (n) (µi+1 )] + Y1 D† [X† (n) (µi+1 )]+ D 0 n−1  ˜ k )]D† [X† [D(Y k  +  (n)  (µi+1 )] ,  k=1 n−1  ˜ µi+1 x D(ξ) =  ˜ k )]ξY µi+1 [D(Y k x  i+1 ξx µi+1 x + Y1 ξY µx +  ,  k=1 n−1  ˜ µi+1 Y D(η) =  i+1 ηx µi+1 Y + Y1 ηY µY +  (3.222) ˜ k )]ηY µi+1 [D(Y k Y  ,  ˜ k )]η j µi+1 [D(Y Yk Yj  ,  k=1 n−1  ˜ j µi+1 Yj D(η ) =  j i+1 ηxj µi+1 Yj + Y1 ηY µYj + k=1  j = 1, . . . , n − 1, and thus  † (n) i+1 X† (n) (µi+1 (µY ) + x ) + Y1 X    n−1  ˜ j )]X† [D(Y  (n)   ˜ † (µi+1 Yj ) = D X  (n)  (µi+1 )  j=1 n−1  ˜ j µi+1 Yj D(η ).  i+1 ˜ ˜ − µi+1 x D(ξ) − µY D(η) − j=1  (3.223) Using (3.210) the first term on the right-hand side of (3.223) becomes     ˜ X† (n) (µi+1 ) = D ˜ D  αji+1 (x, Y )µj   n−1 j=0  n−1  n−1  ˜ αi+1 (x, Y ) µ D j j  = j=0  ˜ µj αji+1 (x, Y )D  +  .  j=0  (3.224)  78  3.4. General ODEs To simplify the second term on the right-hand side of (3.224) note that from (3.185) when µn−1 = Λ one has that ˜ 0 = −ΛfY , Dµ  (3.225)  ˜ j = −µj−1 − ΛfY , j = 1, . . . , n − 1. 
Dµ j Thus, it follows that n−1  n−1  ˜ µj αji+1 (x, Y )D j=0  n−1  αji+1 (x, Y )µj−1 −  =− j=1  αji+1 (x, Y )ΛfYj j=0  (3.226) which, upon substitution into the second term on the right-hand side of (3.224) yields ˜ X† (n) (µi+1 ) = D  n−1  ˜ αi+1 (x, Y ) µj D j  −  j=0 n−1  −  (3.227)  n−1  αji+1 (x, Y  )µ  j−1  αji+1 (x, Y  −  j=1  )ΛfYj .  j=0  Substituting (3.227) into (3.223) one obtains  † (n) i+1 X† (n) (µi+1 (µY ) + x ) + Y1 X    n−1  ˜ j )]X† [D(Y  (n)   (µi+1 Yj ) =  j=1 n−1  n−1  ˜ αi+1 (x, Y ) µj D j j=0  n−1  αji+1 (x, Y )µj−1 −  − j=1  αji+1 (x, Y )ΛfYj − j=0  n−1 i+1 ˜ ˜ − µi+1 x D(ξ) − µY D(η) −  ˜ j µi+1 Yj D(η ). j=1  (3.228) One can then use the fact that n−1  µi+1 x  =  −Y1 µi+1 Y  ˜ j )]µi+1 − ΛfY , [D(Y i+1 Yj  − j=1  79  3.4. General ODEs which follows from the system multiplier determining equations (3.109), to write (3.228) as  X  † (n)  † (n)  (µi+1 x ) + Y1 X    n−1  ˜ j )]X [D(Y  (µi+1 Y )+  † (n)   (µi+1 Yj ) =  j=1 n−1  n−1  ˜ αi+1 (x, Y ) µj D j  n−1  αji+1 (x, Y )µj−1 −  − j=1  j=0  αji+1 (x, Y )ΛfYj − j=0  n−1  ˜ ˜ ˜ − µi+1 Y [D(η) − D(Y )D(ξ)] −  ˜ j ˜ ˜ µi+1 Yj [D(η ) − D(Yj )D(ξ)]. j=1  (3.229) The substitution of (3.229) into the first term on the right-hand side of (3.219b) yields X†  (n)  n−1  n−1  ˜ αi+1 (x, Y ) µj D j  ˜ i+1 ] = [Dµ  αji+1 (x, Y )µj−1 −  − j=1  j=0 n−1  ˜ ˜ ˜ αji+1 (x, Y )ΛfYj − µi+1 Y [D(η) − D(Y )D(ξ)]−  − j=0  n−1  ˜ j ˜ ˜ µi+1 Yj [D(η ) − D(Yj )D(ξ)]+  − j=1    n−2  n−1 i+1 η j+1 µi+1 Yj + µYn−1  1 + µi+1 Y η + j=1   fYk η k  .  ξfx + ηfY + k=1  (3.230)  80  3.4. General ODEs The terms on the right-hand side of (3.230) can be re-ordered to give X†  (n)  n−1  ˜ i+1 ] = [Dµ  n−1  ˜ αi+1 (x, Y ) µj D j  αji+1 (x, Y )µj−1 −  −  j=0  j=1  n−1  ˜ ˜ )D(ξ)] ˜ αji+1 (x, Y )ΛfYj + µi+1 η 1 − [D(η) − D(Y − Y  − j=0 n−2  ˜ j ) − D(Y ˜ j )D(ξ)] ˜ µi+1 η j+1 − [D(η + Yj  − j=1  n−1  + µi+1 Yn−1  ˜ n−1 ) − D(Y ˜ n−1 )D(ξ) ˜ fYk η k − D(η  ξfx + ηfY +  .  k=1  (3.231) In the remarks following Theorem 3.23 it was shown that all local symmetries of the scalar IFDE (3.187) must project onto higher-order symmetries of the scalar ODE (2.1). In particular, since Xp = ξ(x, Y )  ∂ ∂ + η(x, Y ) + ∂x ∂Y  n−1  η j (x, Y ) j=1  ∂ ∂Yj  is the projection of X (3.198) onto a transformation of (x, Y, Y1 , . . . , Yn−1 )space, its coefficients ξ, η, η 1 , . . . , η n−1 satisfy the symmetry determining equations for the scalar ODE (2.1) given by ˜ ˜ η 1 = D(η) − Y1 Dξ, ˜ j ) − D(Y ˜ j )Dξ, ˜ η j+1 = D(η  j = 1, . . . , n − 2,  n−1  ˜ fYj η = D(η j  fx ξ + fY η +  n−1  (3.232)  ˜ n−1 )Dξ. ˜ ) − D(Y  j=1  81  3.4. General ODEs The substitution of (3.232) into the right-hand side of (3.231) yields X†  (n)  n−1  ˜ i+1 ] = [Dµ  n−1  ˜ αi+1 (x, Y ) µj D j  αji+1 (x, Y )µj−1 −  −  j=0  j=1  (3.233)  n−1  αji+1 (x, Y )ΛfYj .  − j=0  Hence the substitution of (3.233) and (3.214) into the right-hand side of (3.213) gives X†  (n)  n−1  ˜ αi+1 (x, Y ) µj D j  [µi (x, Y , Λ, ∂Λ, . . . , ∂Λ)] = −  +  j=0 n−1  n−1  αji+1 (x, Y )µj−1 +  + j=1  αji+1 (x, Y )ΛfYj − fYi+1 s(x, Y )Λ−  (3.234)  j=0      n−1  fYj Yi+1 η j  .  − Λ fxYi+1 ξ + fY Yi+1 η + j=1  Finally, from (3.234) it follows that X[µi ] =  X†  (n)  [µi (x, Y , Λ, ∂Λ, . . . 
, ∂ n Λ)]  Λ=µn−1 ,∂Λ=∂µn−1 ,...,∂ n Λ=∂ n µn−1  n−2 i+1 ˜ αi+1 (x, Y ) αj+1 (x, Y ) − D j  =−  µj +  j=0   n−1  i+1 ˜ + −D αn−1 (x, Y ) + αji+1 (x, Y )fYj − fYi+1 s(x, Y ) −  j=0   n−1  − fxYi+1 ξ + fY Yi+1 η + fYj Yi+1 η j  µn−1  j=1  (3.235) on solutions of the system multiplier determining equations (3.109) and is of the required form (3.208). Thus if there exist functions αji+1 (x, Y ) such  82  3.4. General ODEs that  n−1  αji+1 (x, Y )µj  X[µi+1 ] = j=0  then there exist functions αji (x, Y ) such that n−1 i  αji (x, Y )µj  X[µ ] = j=0  on solutions of the system multiplier determining equations (3.109). Noting that X[µn−1 ] = s(x, Y )µn−1 it follows by induction that n−1  X[µi ] =  αji (x, Y )µj , i = 0, . . . , n − 1 j=0  and hence that all point symmetries of the scalar IFDE (3.187) correspond to point symmetries of the system multiplier determining equations (3.109).  Example 3.27. In Example 2.6, it was shown that the ODE y =0  (3.236)  has the corresponding scalar IFDE Λxx + 2Y1 ΛxY + Y12 ΛY Y = 0,  (3.237)  2ΛY + Y1 ΛY Y1 + ΛxY1 = 0. Using the GeM package for MapleTM one finds that the point transformation of (x, Y, Y1 , Λ)-space X = ξ(x, Y, Y1 )  ∂ ∂ ∂ ∂ + η(x, Y, Y1 ) + η 1 (x, Y, Y1 ) + s(x, Y, Y1 )Λ ∂x ∂Y ∂Y1 ∂Λ (3.238)  is a symmetry of the scalar IFDE (3.237) if and only if its coefficients ξ, η, η 1 ,  83  3.4. General ODEs and s satisfy the symmetry determining equations ηY1 = ξY1 Y1 ,  (3.239a)  1  η = ηx + Y1 ηY − Y1 (ξx + Y1 ξY ),  (3.239b)  ηx1 + Y1 ηY1 = 0,  (3.239c)  s = −ηY11 + c.  (3.239d)  Hence, from (3.239a)-(3.239c) it follows that all symmetries of the scalar IFDE (3.237) project onto contact symmetries of the ODE (3.236). Conversely, all contact symmetries Xp = ξ(x, Y, Y1 )  ∂ ∂ ∂ + η(x, Y, Y1 ) + η 1 (x, Y, Y1 ) ∂x ∂Y ∂Y1  yield a point symmetry (3.238) of the scalar IFDE (3.237) where s = −ηY11 . In particular, the scalar IFDE (3.237) admits the point symmetry X† = x2  ∂ ∂ ∂ ∂ + xY + (Y − xY1 ) . + xΛ ∂x ∂Y ∂Y1 ∂Λ  (3.240)  Let µ1 (x, Y, Y1 , Λ) = Λ and µ0 (x, Y, Y1 , Λ, Λx , ΛY ) = −Λx − Y1 ΛY . The first prolongation of X† (3.240) is given by X†  (1)  =X† + [−Λx x − Λ − Y ΛY + Y1 ΛY1 ] − Λ Y1  ∂ − ∂Λx  ∂ ∂ + 2xΛY1 ∂ΛY ∂ΛY1  and hence X†  (1) 0  µ (x, Y, Y1 , Λ, ∂Λ) = [Λx x − Λ + Y ΛY − Y1 ΛY1 + Y1 ΛY1 − (Y − xY1 )ΛY ]  84  3.4. General ODEs which can be simplified to yield X†  (1) 0  µ (x, Y, Y1 , Λ, Λx , ΛY ) = −Λ − x[−Λx − Y1 ΛY ].  Thus, the point symmetry (3.240) of the scalar IFDE (3.237) induces a corresponding point symmetry of the system multiplier determining equations given by X = x2  ∂ ∂ ∂ ∂ ∂ − [xµ0 + µ1 ] 0 + xµ1 1 . (3.241) + xY + (Y − xY1 ) ∂x ∂Y ∂Y1 ∂µ ∂µ  In order to use X† to find particular solutions of the scalar IFDE (3.237) one first determines differential invariants arising as constants of integration of  dx dY dY1 dΛ = = = 2 x xY Y − xY1 xΛ + r  (3.242)  where the additional parameter r is introduced because the scalar IFDE (3.237) is linear and thus has a scaling symmetry. Integrating (3.242) one obtains Y , x t = Y1 x − Y, −r Λ= + C3 . 2x s=  Letting C3 = f (s, t) and substituting Λ(x, Y, Y1 ) =  (3.243)  −r 2x +f (s(x, Y, Y1 ), t(x, Y, Y1 ))  into the scalar IFDE (3.237) one obtains r = 0, cs f (s, t) = 2 + h(t), t  (3.244)  where c is an arbitrary constant and h is an arbitrary function of t. Thus Λ(x, Y, Y1 ) =  cY + xh(Y − xY1 ) (Y − xY1 )2  85  3.4. 
General ODEs are integrating factors of the scalar ODE (3.236) yielding the two functionally independent first integrals φ1 (x, y, y ) = y = a,  φ2 (x, y, y ) = y − xy = b.  Example 3.28. As a second example consider the second-order nonlinear ODE [9] 2 K(y)y  + xy = 0  (3.245)  arising in the study of plasma physics [1] and polymer science [15]. Adapting a result from [8], one can show that if K(y) =  1 , 1 + y2  then (3.245) admits the contact symmetry X = 2K(y)y + xy  ∂ ∂ ∂ − (1 + y 2 ) − 3yy . ∂x ∂y ∂y  (3.246)  One can then find the induced point symmetry of the system multiplier determining equations ∂ ∂ ∂ − (1 + Y 2 ) − 3Y Y1 + ∂x ∂Y ∂Y1 ∂ ∂ + (α00 µ0 + α10 µ1 ) 0 + (α01 µ0 + α11 µ1 ) 1 ∂µ ∂µ  X = (2K(Y )Y1 + xY )  (3.247)  where the coefficients α00 (x, Y, Y1 ), α10 (x, Y, Y1 ), α01 (x, Y, Y1 ), and α11 (x, Y, Y1 ) are given by the equations (3.172) when n = 2 and f (x, Y, Y1 ) = −  (1 + Y 2 ) 2  −2Y Y12 + xY1 . (1 + Y 2 )2  86  3.4. General ODEs Explicitly, one finds that X = (2K(Y )Y1 + xY )  ∂ ∂ ∂ + − (1 + Y 2 ) − 3Y Y1 ∂x ∂Y ∂Y1  4Y Y12 + xY1 + c µ0 + (1 + Y 2 )2 ∂ 2K (Y )Y12 + xY1 (2K (Y )Y1 + x) µ1 + + 3Y1 − 2K(Y ) ∂µ0 2Y1 0 ∂ + µ + (3Y − 2Y12 K (Y ) − xY1 + c)µ1 , 2 1+Y ∂µ1 +  2Y −  (3.248)  where c is a constant. Using MapleTM , one can find differential invariants arising as constants of integration of the system dY dY1 dx = = = 2 2K(Y )Y1 + xY −(1 + Y ) −3Y Y1 dµ0 2Y − =  4Y Y12 (1+Y 2 )2  + xY1 + c µ0 + 3Y1 −  2K (Y )Y12 +xY1 (2K 2K(Y )  dµ1 2Y1 µ0 1+Y 2  + (3Y − 2Y12 K (Y ) − xY1 + c)µ1  (Y )Y1 + x) µ1  . (3.249)  Integration of (3.249) yields C1 = C2 = µ1 =  Y1 3  (1 + Y 2 ) 2  ,  1 + Y 2x + ec arctan Y 3  (1 + Y 2 ) 2  2Y Y1 3  (1 + Y 2 ) 2  ,  (3.250)  [C3 Y + C4 ] ,  and an additional equation for µ0 in terms of c, C3 , C4 , x, Y, and Y1 . From Theorem 3.23, if {µ0 (x, Y, Y1 ), µ1 (x, Y, Y1 )} is a set of multipliers of the system multiplier determining equations corresponding to the ODE (3.245), then Λ = µ1 is a solution of the corresponding scalar integrating fac-  87  3.4. General ODEs Y1 tor determining equations. Letting s(x, Y, Y1 ) = 3 , t(x, Y, Y1 ) = 2) 2 (1+Y √ 1 + Y 2 x + 2Y Y1 3 , C3 = f (s, t), and C4 = g(s, t) one obtains the fol(1+Y 2 ) 2  lowing ansatz for the integrating factors of (3.249) Λ=  ec arctan Y [Y f (s(x, Y, Y1 ), t(x, Y, Y1 )) + g(s(x, Y, Y1 ), t(x, Y, Y1 ))] 3  (1 + Y 2 ) 2  .  (3.251) After substituting (3.251) into the scalar integrating factor determining equations of (3.245) one can solve the resulting system for f (s, t), g(s, t) and c to obtain c = 0, ∂h , ∂t ∂h g(s, t) = , ∂s  f (s, t) =  (3.252) 1  h(s, t) = k + r s2 e 2 (s  2 +t2 )  ,  which yields the integrating factor 1  r s2 (x, Y, Y1 )e 2 (s Λ(x, Y, Y1 ) =  2 (x,Y,Y  2 1 )+t (x,Y,Y1 ))  1  e 2 (s  2 (x,Y,Y  2 1 )+t (x,Y,Y1 ))  ×  3  (1 + Y 2 ) 2 × Y t(x, Y, Y1 )s2 (x, Y, Y1 ) + s(x, Y, Y1 ) 2 + s2 (x, Y, Y1 )  (3.253) where s(x, Y, Y1 ) =  Y1 (1+Y  3 2) 2  and t(x, Y, Y1 ) =  √  1 + Y 2x +  2Y Y1  3  (1+Y 2 ) 2  .  88  Chapter 4  Conclusions and Further Work In Part 1 of this thesis the relationship between the symmetries of a scalar ODE in solved form and the symmetries of its corresponding integrating factor determining equations (IFDE) was examined. In particular it was shown that every higher-order symmetry of the scalar ODE induces a Lie point symmetry of the corresponding IFDE. 
An explicit formula for this induced point symmetry was obtained, generalizing a result from [3] where a formula was derived for the symmetries of the IFDE induced by Lie point symmetries of the original scalar ODE. Using this formula an integrating factor was found for the scalar ODE 2  y 1 + y2  + xy = 0  (4.1)  which is invariant under the point symmetry of the IFDE induced by the contact symmetry of (4.1) X=  2  y + xy 1 + y2  ∂ ∂ ∂ − (1 + y 2 ) − 3yy . ∂x ∂y ∂y  (4.2)  Conversely, it was shown that all point symmetries of the IFDE which are of the form ∂ ∂ ξ(x, Y ) + η(x, Y ) + ∂x ∂Y  n−1  η j (x, Y ) j=1  ∂ ∂ + Λh(x, Y ) ∂Yj ∂Λ  (4.3)  must project onto higher-order symmetries of the corresponding scalar ODE.  89  Chapter 4. Conclusions and Further Work Hence the symmetry ansatz approach in which one uses higher-order symmetries of the scalar ODE to find point symmetries of the corresponding IFDE is equivalent to analyzing the symmetry determining equations of the IFDE for point symmetries of the form (4.3). In light of the connection demonstrated between symmetries of a scalar ODE and its corresponding integrating factor determining equations, there are many ways in which to proceed in future research. Firstly, one could use the formula derived here to find new integrating factors for scalar ODEs which admit only contact and higher-order symmetries, potentially allowing one to find new general solutions to ODEs. Secondly, one could seek to extend the results presented here to either include all point symmetries of the IFDE or to show that all such point symmetries, except those corresponding to the addition of a particular solution, can be written in the form (4.3). If this were true, then it would imply that one does not miss useful point symmetries of the IFDE by considering only those point symmetries which are induced by higher-order symmetries of the corresponding scalar ODE. Finally, it would be interesting to see if a similar result occurs for partial differential equations (PDEs). Namely, do there exist PDEs for which the corresponding multiplier determining equations admit point symmetries which are not scalings of the independent variables, the addition of a particular solution, or induced by a higher-order symmetry of the original PDE?  90  Part II Analysis of an Exact Solution of a Wave Equation  91  Chapter 5  The Wave Equation 5.1  Introduction  In this chapter, the wave equation and its applications are discussed. Classical results for the one-dimensional wave equation are presented in Section 5.2 along with the role of the wave equation in seismology, acoustics, elasticity theory, solid mechanics and electromagnetism. In Section 5.3 the energy of the wave equation is precisely defined and a proof of the conservation of total energy is given. Finally, in Section 5.4, by defining the reflection and transmission coefficients the transfer of energy in two-layered media is quantified and the Fresnel equations are presented.  5.2  The Wave Equation and its Applications  The wave equation utt (x, t) = c2 (x, t, u)∇2 u(x, t)  (5.1)  is an important tool for modelling physical problems in acoustics, electromagnetics, fluid mechanics and seismology in the absence of dissipative and driving forces. Here subscripts denote partial differentiation, t represents time, and x = (x1 , . . . , xm ) is the position in space. The function c(x, t, u), called the wave speed, encodes the fundamental properties of the medium through which disturbances propagate. 
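To make this concrete, the following minimal numerical sketch propagates a localized pulse through a one-dimensional medium whose wave speed varies smoothly between two constant values, so that the effect of the wave speed profile on the disturbance can be observed directly. It is an illustration only: the explicit centred-difference scheme, the NumPy library, and the particular speed profile are assumptions made for this sketch and are not part of the analysis developed in later chapters.

# One-dimensional pulse in a medium with a smoothly varying wave speed c(x);
# explicit centred-difference (leapfrog) scheme, fixed ends, zero initial velocity.
import numpy as np

L, N = 10.0, 1001
x = np.linspace(0.0, L, N)
dx = x[1] - x[0]
c = 1.0 + 0.5 * (1.0 + np.tanh((x - 0.6 * L) / 0.2))  # illustrative two-speed profile
dt = 0.5 * dx / c.max()                                # CFL-stable time step

u_prev = np.exp(-((x - 0.3 * L) / 0.3) ** 2)           # initial pulse, u_t(x,0) = 0
u = u_prev.copy()                                      # first time step
u[1:-1] += 0.5 * (dt * c[1:-1] / dx) ** 2 * (u_prev[2:] - 2 * u_prev[1:-1] + u_prev[:-2])

for _ in range(1600):
    u_next = np.empty_like(u)
    u_next[1:-1] = (2 * u[1:-1] - u_prev[1:-1]
                    + (dt * c[1:-1] / dx) ** 2 * (u[2:] - 2 * u[1:-1] + u[:-2]))
    u_next[0] = u_next[-1] = 0.0                       # fixed ends
    u_prev, u = u, u_next
# u now contains the partially reflected and partially transmitted pulse.
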
In this work, only linear one-dimensional wave equations with time-invariant media will be considered; though using non-local symmetries the solutions found in this case can be used to find solutions of certain non-linear wave equations [5]. With  92  5.2. The Wave Equation and its Applications these simplifying assumptions, (5.1) becomes utt = c2 (x)uxx .  (5.2)  A great deal of insight can be gained by considering the case where the wave speed is a constant, c. The following classical result is due to d’Alembert [14]: Theorem 5.1. Consider the initial value problem utt = c2 uxx ,  x ∈ (−∞, ∞) (5.3)  u(x, 0) = f (x), ut (x, 0) = g(x)  where the functions f (x) and g(x) are twice differentiable. The solution of (5.3) is given by 1 1 u ˜(x, t) = [f (x − ct) + f (x + ct)] + 2 2c  x+ct  g(s)ds.  (5.4)  x−ct  Proof. Let u ˜(x, t) be defined as in (5.4). Evaluating u ˜(x, 0) one obtains 1 1 u ˜(x, 0) = [f (x) + f (x)] + 2 2c  x  g(s)ds = f (x).  (5.5)  x  The differentiation of u ˜ with respect to t gives 1 1 u ˜t (x, t) = [−cf (x − ct) + cf (x + ct)] + [cg(x + ct) − (−cg(x − ct))] 2 2c and hence u ˜t (x, 0) =  1 1 [−cf (x) + cf (x)] + [cg(x) + cg(x)] = g(x). 2 2c  (5.6)  Together, (5.6) and (5.5) show that u ˜ satisfies the initial conditions of (5.3).  93  5.2. The Wave Equation and its Applications The differentiation of u ˜ twice with respect to x gives 1 1 u ˜xx (x, t) = [f (x − ct) + f (x + ct)] + [g (x + ct) − g (x − ct)] 2 2c  (5.7)  and twice with respect to t gives 1 1 u ˜tt (x, t) = [(−c)2 f (x−ct)+c2 f (x+ct)]+ [c2 g (x+ct)−(−c)2 g (x−ct)], 2 2c (5.8) from which it follows immediately that u ˜tt = c2 u ˜xx  (5.9)  which completes the proof. From Theorem 5.1 it follows that any initial disturbance will propagate outwards with speed c. Moreover, if ut (x, 0) ≡ 0, then the shape of the initial disturbance will match the shapes of the leftward and rightward propagating waves. In geology, particularly seismology, the wave equation (5.2) is used to model the propagation of earthquakes through the crust and upper mantle [24]. It is especially important in the oil and mining industries where the velocity profile is recovered from measurements of the propagation time and strength. The starting point for this analysis is the seismic wave equation [24]: ρutt = ∇λ(∇ · u) + ∇µ · [∇u + (∇u)T ] + (λ + 2µ)∇∇ · u − µ∇ × ∇ × u, (5.10) where u is the velocity of a seismic disturbance, ρ is the density of the medium, µ(x) is the viscosity, and λ(x) is the Lam´e parameter. Considering only one-dimensional waves, (5.10) reduces to {[λ(x) + 2µ(x)]ux }x = utt .  (5.11)  94  5.2. The Wave Equation and its Applications In Section 6.3 it will be shown that each solution of (5.11) corresponds to a solution of (5.2) where c(x) =  λ(x) + 2µ(x) [6]. Conversely, each solution  of (5.2) yields a solution of (5.11). Similarly, in elasticity theory [19] the linear wave equation can be used to model the behaviour of small elastic oscillations in a solid body. In particular, considering the one-dimensional problem, if u(x, t) represents the longitudinal displacement of the material from its equilibrium position then utt =  E(1 − σ) uxx ρ(1 + σ)(1 − 2σ)  (5.12)  where the density (ρ), Young’s modulus (E) and Poisson’s ratio (σ) are assumed to be constant throughout the medium. The equation (5.12) is clearly of the form (5.2) where  c=  E(1 − σ) . ρ(1 + σ)(1 − 2σ)  An identical expression can be derived for the transverse displacement from equilibrium, though in this case the wave speed is c=  E . 
2ρ(1 + σ)  The wave equation (5.2) also arises in the study of electromagnetism. Consider Maxwell’s equations in a linear material in the absence of free charge and free current [16]: ∇ · ( E) = 0,  (5.13a)  ∇ · B = 0,  (5.13b)  ∂B , ∂t B ∂ E ∇× = , µ ∂t ∇×E=−  (5.13c) (5.13d)  95  5.2. The Wave Equation and its Applications where µ is the permeability of the material, and  is the permittivity. In  many materials, µ can be treated as constant [16]. In addition, it will be assumed that depends only on x and that ∇ is perpendicular to E. Taking the cross product of (5.13c) and using the identity [16] ∇ × (∇ × E) = ∇(∇ · E) − ∇2 E,  (5.14)  ∂(∇ × B) . ∂t  (5.15)  one obtains ∇(∇ · E) − ∇2 E = −  Substituting (5.13d) into (5.15) and using the fact that µ is a constant one finds that ∇(∇ · E) − ∇2 E = −µ  ∂2E . ∂t2  (5.16)  Finally, (5.13a) gives ∇·E+∇ ·E=0  (5.17)  where by assumption the second term of (5.17) is zero. Using (5.17) in (5.16) it follows that  1 2 ∂2E = ∇ E, 2 ∂t µ  (5.18)  which when restricted to one dimension is of the form (5.2) with c(x) =  1 µ (x)  .  (5.19)  Alternatively, suppose that E = E(x, t)ˆ y , µ is a constant, and = (x). It follows from (5.13c) that B = B(x, t)ˆ z for some function B(x, t). Defining ψ(x, t) implicitly by ψt (x, t) = E(x, t), ψx (x, t) = −B(x, t),  (5.20)  from (5.13c) it is clear that ψtx = ψxt , while the equations (5.13a), (5.13b) are automatically satisfied by the initial assumptions. Finally, (5.13d) be-  96  5.3. Conservation of Energy comes  ψxx (x, t) = ψtt (x, t). µ  (5.21)  Thus, ψ(x, t) must also satisfy the wave equation (5.2) with wave speed (5.19). When considering the applications of the wave equation (5.2) the following definition is useful. Definition 5.2. A wave pulse is a compactly-supported disturbance propagating through a medium in a uniform direction. In physical applications, the behaviour of an incoming wave pulse crossing the interface between two media is particularly important, especially when considering the optical properties of materials [16], electromagnetic waves in plasmas [11], and the propagation of seismic waves between different types of rock within the Earth [24]. In this situation, only a fraction of the wave is transmitted into the new medium; the rest being reflected backwards. When the transition is sharp, ie. there is an abrupt transition between the two homogeneous media, the behaviour of an incident wave pulse is governed by the well-known Fresnel equations [16]. In this limiting case, the fraction of energy transmitted between media is independent of the shape of the incoming wave pulse, depending solely upon the ratio of the wave speeds in the two media [16]. In subsequent chapters, these results are extended to a four-parameter family of smooth wave speeds with transition regions of non-zero width.  5.3  Conservation of Energy  In order to proceed further it is necessary to give a definition of energy in the context of (5.2) that has the following features: • corresponds to the definition of energy used in the physical applications • is a conserved quantity on solutions of (5.2).  97  5.3. Conservation of Energy In electromagnetism, the energy density of an electromagnetic field is given by [16] U=  1 2  E2 +  B2 . µ  (5.22)  In terms of ψ defined in (5.20) it becomes U= The factor  1 µ  1 2µ  µ ψt2 + ψx2 =  1 ψt2 + ψx2 . 2µ c2 (x)  (5.23)  is a constant and can be eliminated by rescaling U. The energy  functional, E, can then be defined as follows. Definition 5.3. 
Let u(x, t) be a solution of (5.2), t > 0 and Ω ⊂ (−∞, ∞). Then E(u; Ω, t) =  1 2  u2t + u2x dx c2 (x)  Ω  (5.24)  is the energy of u(x, t) in Ω at time t, and φ(u; ∂Ω, t) = (ux ut )|∂Ω  (5.25)  is the flux through the boundary of Ω (denoted by ∂Ω) at time t.  In order to show that (5.24) is conserved, first note that (5.2) can be re-written in the form of a conservation law [14] ut 2 c (x)  + (−ux )x = 0.  (5.26)  t  The multiplication of both sides of (5.26) by ut gives ut  ut 2 c (x)  = ut (ux )x .  (5.27)  t  98  5.4. Reflection and Transmission Coefficients t The left-hand side of (5.27) is the derivative of 12 ( c2u(x) )2 with respect to time  and, since ut (ux )x = (ut ux )x − ux utx , it follows that u2t c2 (x)  1 2  = (ut ux )x − ux utx .  The last term on the right-hand side of (5.28) is equal to that  u2t + u2x c2 (x)  1 2  (5.28)  t −1 2 2 (ux )t  implying  = (ut ux )x .  (5.29)  t  The integration of both sides of (5.29) with respect to x over a region Ω with boundary ∂Ω yields d1 dt 2  Ω  u2t + u2x dx = (ut ux )|∂Ω . c2 (x)  (5.30)  From (5.30) it follows immediately that if u is a solution of (5.2) which is bounded and compactly-supported in x for all t, then d d E(u; R, t) = dt dt  ∞ −∞  u2t + u2x dx = lim (ut ux ) − lim (ut ux ) = 0. x→∞ x→−∞ c2 (x) (5.31)  The requirement that u has compact support can be relaxed provided that for all t, u and its first partial derivatives decay sufficiently rapidly as x → ±∞. By (5.31), though the energy in a finite region is not necessarily conserved, the total energy of the entire system is constant on solutions of (5.2).  5.4  Reflection and Transmission Coefficients  When considering the propagation of waves in a two-layered medium, the definition of energy (5.24) provides a means of quantifying the reflection and transmission occurring in the transition between the layers via reflection and transmission coefficients. Definition 5.4. The reflection coefficient, R, is the fraction of the total 99  5.4. Reflection and Transmission Coefficients energy of an incoming wave pulse incident upon a transition region separating two homogeneous media that returns to, or remains within, the original media. Consider the case where the initial wave pulse originates to the left of the transition region which has width 2l and is centered about x = 0. It follows that R = lim  t→∞  E(u; (−∞, −l], t) . E(u; (−∞, −l], 0)  (5.32)  The transmission coefficient, T, is the total energy of an incoming wave, incident upon a transition region separating two homogeneous media, that is transmitted into the second medium. If the initial wave pulse is located to the left of the transition region (of width 2δ) and is moving towards it at t = 0, then the transmission coefficient is given by T = lim  t→∞  E(u; [l, ∞), t) . E(u; (−l, −δ], 0)  (5.33)  Analogous expressions can be derived in the case where the incident wave pulse is originally on the right side of the transition region. When the transition region is sharp, ie. δ → 0, one can find an explicit formula for R and T which holds for any shape of the incoming wave pulse. Theorem 5.5. [16] Suppose one has a sharp transition, centered at x = 0, separating two media: medium 1 with wave speed c1 and medium 2 with wave speed c2 . In addition, suppose that the incident wave pulse starts entirely within medium 1. Then, the reflection and transmission coefficients are given by the Fresnel equations γ−1 2 , γ+1 4γ T = , (1 + γ)2  R=  where γ =  (5.34)  c2 c1 .  
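For concreteness, the Fresnel coefficients (5.34) are easily tabulated. The following minimal MATLAB sketch (the range of speed ratios is an arbitrary illustrative choice, not taken from the thesis) evaluates R and T and confirms that they sum to one, as required by conservation of energy:

    % Fresnel coefficients (5.34) for a sharp transition, as functions of
    % the wave speed ratio gamma = c2/c1; the range of gamma is illustrative.
    gamma = linspace(1, 8, 200);
    R = ((gamma - 1)./(gamma + 1)).^2;   % reflected fraction of the energy
    T = 4*gamma ./ (1 + gamma).^2;       % transmitted fraction of the energy
    max(abs(R + T - 1))                  % check: R + T = 1 (conservation of energy)
    plot(gamma, T), xlabel('\gamma = c_2/c_1'), ylabel('T')

In later chapters these sharp-transition values serve as the limiting reference against which the smooth-transition coefficients are compared.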
100  Chapter 6  Solutions of a Family of Wave Equations 6.1  Introduction  In this chapter the wave equation (5.2) is analyzed using symmetry methods. A symmetry group classification of (5.2) is presented in Section 6.2, yielding functions c(x) for which (5.2) has additional symmetries. In Section 6.3, two new systems are given which are related to (5.2) via a non-invertible mapping. The point symmetries of one of these systems are found in Section 6.4 and used in Section 6.5 to find useful solutions of the wave equation (5.2) for a four-parameter family of wave speeds.  6.2  Symmetry Classification of (5.2)  Symmetry methods provide a systematic way to seek particular solutions of PDEs such as (5.2). In particular, given a Lie group of point transformations admitted by (5.2) one seeks corresponding invariant solutions by solving a reduced equation with fewer independent variables. For an arbitrary wave speed c(x), (5.2) has two point symmetries [6]: translation in t; and scaling in u. Unfortunately, these symmetries are insufficient to reduce (5.2) to obtain physically-relevant solutions; other symmetries are required. It is thus necessary to determine those c(x) for which one can determine additional symmetries for (5.2) so that useful solutions can be found. Performing a group classification of the wave equation (5.2) allows one to determine the forms of the wave speed c(x), arising as solutions of a 101  6.2. Symmetry Classification of (5.2) Form of c(x) Arbitrary [(Bx2 + C) exp (A (Bx2 + C)−1 dx)]−1 , A, B, C are constants  x−C , C = 0, 1, 2 x−2 x−1 ex c(x) satisfies (α + Hα) = σ 2 c2 (x)α, where σ = const = 0, H(x) = c (x)/c(x), α2 (x) = [H 2 (x) − 2H (x)]−1  Symmetries ∂ ∂ X1 = ∂t , X2 = u ∂u , ∂ X3 = g(x, t) ∂u where gtt = c2 (x)gxx X1 , X2 , X3 , ∂ ∂ ∂ X4 = 12 u(A + 2Bx) ∂u + (Bx2 + C) ∂x − At ∂t 1 ∂ ∂ 2 X5 = 2 u(A + 2Bx)t ∂u + (Bx + C)t ∂x ∂ +[− 12 At2 + c2 (x)(Bx2 + C) dx] ∂t ∂ ∂ X1 , X2 , X3 , X6 = x ∂x + (1 + C)t ∂t 2+2C ∂ ∂ ∂ X7 = − 12 C t u ∂u + xt ∂x + [ x1+C + 21 (1 + C)t2 ] ∂t An infinite number ∂ ∂ ∂ X1 , X2 , X3 , X6 , X8 = 12 ut ∂u + xt ∂x + log(x) ∂t ∂ ∂ X1 , X2 , X3 , X9 = ∂x + t ∂t , ∂ ∂ 1 2x ∂ 1 X10 = − 2 ut ∂u + t ∂x + 2 [e + t2 ] ∂t X1 , X2 , X3 ∂ ∂ ∂ K11,12 = e±σt [− 12 uαH ∂u + α ∂x ± σ −1 (α + Hα) ∂t ]  Table 6.1: Group classification of the linear wave equation (5.2) appearing in [5] based on [6].  set of ODEs found systematically from the original PDE, for which (5.2) admits extra point symmetries in addition to the two given above. A point symmetry group classification was performed on the wave equation (5.2) in [6], the results of which are presented in Table 6.1. After performing a symmetry group classification, one must then ensure that the resulting forms of the wave speed c(x) are physically meaningful. Specifically, the wave speed must be bounded since, from the special theory of relativity, the speed of energy and matter cannot exceed that of light in a vacuum, 3.00 × 108 ms−1 . Thus, in electromagnetism, seismology and acoustics, unbounded wave speeds are not useful. Similarly, physical situations where the wave speed is, or asymptotically approaches, zero are rare since at such points the energy density goes to infinity.  102  6.3. Formulation of a Nonlocally Related System Considering the wave speeds given in Table 6.1 one can show that all of them either represent non-physical situations or yield wave equations which admit point symmetries that have non-physical invariant solutions. 
Thus the application of Lie symmetry methods does not provide new solutions of (5.2) for physically relevant wave speeds.  6.3  Formulation of a Nonlocally Related System  Nonlocal symmetry analysis presents an extension to the classical Lie symmetry methods, yielding new analytic solutions to many equations, and in particular for (5.2). Definition 6.1. [2] A symmetry of a PDE is called nonlocal if the coefficients of its corresponding infinitesimal generator do not depend solely upon the independent variables, dependent variables, and a finite number of derivatives of its dependent variables. Though Lie’s method for finding local symmetries does not directly generalize to nonlocal symmetries, systematic methods for seeking nonlocal symmetries systematically do still exist. In particular, such nonlocal symmetries often arise as point symmetries of other PDE systems which are nonlocally related to the original PDE. Definition 6.2. [4] Two PDE systems are nonlocally related if there exists a non-invertible mapping that takes each solution of one system into one or more solutions of the other, and vice versa. Non-locally related systems of a given PDE can be found systematically by using conservation laws to generate corresponding potential systems [4]. To illustrate this process, note that (5.2) has the conservation law (ux )x + −  ut 2 c (x)  = 0.  (6.1)  t  103  6.3. Formulation of a Nonlocally Related System Defining v(x, t) implicitly by vt = ux , ut vx = 2 . c (x)  (6.2)  one can form a potential system by augmenting (5.2) with (6.2). In this case, however, (5.2) is a differential consequence of (6.2) obtained from the integrability condition vtx = vxt ,  (6.3)  and can therefore be omitted. One then has the following result. Lemma 6.3. Each solution of the wave equation (5.2) yields a solution of the potential system (6.2) and a solution of vtt = (c2 (x)vx )x .  (6.4)  Conversely, every solution of (6.2) or (6.4) yields a solution of the wave equation (5.2). Proof. Let u(x, t) be a solution of (5.2). One must show that there exists a function v(x, t) such that u(x, t) and v(x, t) together satisfy (6.2). The function v(x, t) exists if and only if v(x, t) satisfies (vt )x = (vx )t .  (6.5)  From (6.2) it follows that v(x, t) satisfies (6.5) if and only if u(x, t) satisfies (5.2), which is true by assumption. For a given u(x, t) the corresponding v(x, t) is defined up to a constant and therefore the mapping between solutions of (5.2) and (6.2) is non-invertible. Conversely, suppose (u, v) solves (6.2). One must then show that u(x, t) solves (5.2). Since v(x, t) exists it must satisfy the integrability condition 104  6.4. Symmetry Analysis of a Nonlocally Related System (6.5). From (6.2) and (6.5) it follow that (ux )x =  ut c2 (x)  ,  (6.6)  t  and thus u(x, t) is a solution of (5.2) as required. Furthermore, if (u, v) solves (6.2) it follows immediately that u(x, t) must satisfy the integrability condition (ut )x = (ux )t .  (6.7)  Substituting (6.2) into (6.7) one finds that (c2 (x)vx )x = vtt .  (6.8)  Thus every solution u(x, t) of (5.2) yields a solution of (6.4) which is defined up to a constant. Conversely, suppose v(x, t) satisfies (6.4). Then, for a given v(x, t) there exists a function u(x, t) such that together (u, v) satisfies (6.2) if and only if (6.7) holds. Using (6.2), (6.7) is satisfied if and only if v(x, t) is a solution of (6.4). Hence every solution of (6.4) yields a solution of (6.2) and thus of (5.2). 
Note that for a given v(x, t) the corresponding u(x, t) is unique up to a constant and thus the mapping is non-invertible.  6.4  Symmetry Analysis of a Nonlocally Related System  Performing a group classification of (6.2) [6], one finds that if c(x) satisfies the ODE cc  c c  = 0,  (6.9)  105  6.4. Symmetry Analysis of a Nonlocally Related System then (6.2) admits two extra symmetries in addition to the two which are visible by inspection: X1 =  ∂ , ∂t  and X2 = u  ∂ ∂ +v . ∂u ∂v  (6.10)  In [6], the fourth-order ODE (6.9) has a reduction to the first-order ODE c (x) =  λ c sin ν log( ) , ν c1  (6.11)  where λ, ν, c1 > 0. If c(x) solves (6.11) it follows that if c(x) = c1 and c(x) = π  c2 = c1 e ν then c (x) = 0. Since c (x) is positive when c(x) ∈ (c1 , c2 ) it follows that if there exists x∗ such that c(x∗ ) ∈ (c1 , c2 ) then limx→−∞ c(x) = c1 and limx→∞ c(x) = c2 since c(x) is monotonically increasing [7]. Without loss of generality one can center the transition region about x = 0, in which π  case c(0) = c1 e 2ν and c (0) =  λ ν.  Since c (x) ≤ c (0) =  λ ν,  it is clear that λ  determines the sharpness of the transition between c1 and c2 . A representative solution of (6.11) is shown in Figure 6.1, and was obtained numerically using a Runge-Kutta method [22] implemented in MatlabTM [20]. Note that if 0 <  c(x) c1  − 1 << 1 then, using a Taylor expansion, one finds  that c (x) =  c(x) λ λ sin(ν log( ν )) = ν c1 ν  c(x) c(x) −1 +O ( − 1)2 c1 c1  . (6.12)  Considering only the leading-order term of the series in (6.12) one obtains the approximation c (x) ≈ λ  c(x) − c1 , c1  (6.13)  which can be solved analytically to obtain λ  x  c(x) ≈ Ae c1 + c1 .  (6.14)  106  6.4. Symmetry Analysis of a Nonlocally Related System Similarly, when 0 < c (x) = =  c2 c1  −  c(x) c1  << 1 one finds that  λ c(x) )) sin(ν log( ν c1 λ νc1 sin(π) + cos(π) ν c2  c(x) c2 − c1 c1  +O  c2 c(x) − c1 c1  2  (6.15) and thus that c(x) ≈ c2 − Be  −λx c2  .  (6.16)  1 0.95 0.9 0.85  c(x)  0.8 0.75 0.7 0.65 0.6 0.55 0.5  −8  −6  −4  −2  0 x  2  4  6  8  10  Figure 6.1: A solution of (6.11) for c1 = 21 , c2 = 1, and λ = 0.8 obtained using a Runge-Kutta method [22] implemented in MatlabTM .  107 Student Version of MATLAB  6.4. Symmetry Analysis of a Nonlocally Related System In order to find the point symmetries of (6.2) for wave speeds satisfying (6.9) consider the point transformation of (x, t, u, v)-space represented by the infinitesimal generator X = ξ(x, t)  ∂ ∂ ∂ + τ (x, t) + [f (x, t)u + g(x, t)v] + ∂x ∂t ∂u ∂ + [h(x, t)u + k(x, t)v] . ∂v  (6.17)  It can be shown that (6.17) is a point symmetry of (6.2) if and only if the functions ξ, τ, f, g, h, and k satisfy the symmetry determining equations [7]  ht − fx = 0,  (6.18a)  kt − gx = 0,  (6.18b)  k − τt − f + ξx = 0,  (6.18c)  c ξ + c[τt − ξx ] = 0,  (6.18d)  2  c hx − ft = 0,  (6.18e)  c2 kx − gt = 0,  (6.18f)  c2 τx − ξt = 0,  (6.18g)  c2 h − g = 0.  (6.18h)  108  6.5. Formulation of an Analytic Solution Adapting a result from [6], one can solve the symmetry determining equations (6.18a)-(6.18h) to obtain the following point symmetries of (6.2): ∂ ∂t ∂ ∂ Xs = u +v ∂u ∂v ∂ ∂ 1 λ2 ∂ + (H − 1) + [(1 − H )λu − Hv] − Xp = eλt λH ∂x ∂t 2 2 ∂u λ2 u λ ∂ + Hv − eλt 2cc 2 ∂v ∂ ∂ 1 λ2 ∂ Xq = e−λt λH − (H − 1) + [(1 − H )λu + Hv] − ∂x ∂t 2 2 ∂u ∂ λ2 u λ + Hv − eλt − 2cc 2 ∂v  Xr =  where H(x) =  6.5  (6.19a) (6.19b) (6.19c) (6.19d) (6.19e) (6.19f)  c (x) c(x) .  
Formulation of an Analytic Solution  To find solutions of the system (6.2) which are invariant under the point symmetry  1 λ  (Xp + Xq ) one first seeks corresponding differential invariants  arising as constants of integration of the equations dx dt du = = = 2β (t)H(x) 2β(t)[H (x) − 1] [β (t)(2 − H (x)) + s]u − β (t)H(x)v dv = =d . uβ (t) [−β (t)H (x) + s]v − c(x)c (x) (6.20)  109  6.5. Formulation of an Analytic Solution where β(t) =  1 λ  sin λt. The integration of (6.20) yields the differential invari-  ants [7] z = sinh λt sin y, 2  √u f2 =  c(x)  c  (x)e2s  +  c(x)v  [λβ(t) cos y + β (t)]  , (6.21)  2  √u g2 =  c(x)  −  c(x)v  , [λβ(t) cos y − β (t)] √ γ cot y 1 E = 2 + √ arctan ν γ β (t) where y = ν log  c(x) c1  c  e2s  ,  and γ = 4pq. Without loss of generality, one can  choose p = q using the invariance of (6.2) under translation in t. Solving (6.21) for u and v and letting f and g depend on z, one finds that 1 1 1 u = [c(x)c (x)] 2 e−s [λβ(t) cos y + β (t)] 2 f (z)+ 2 1 1 1 + [c(x)c (x)] 2 es [λβ(t) cos y − β (t)] 2 g(z), 2 1 1 c (x) 1 s v= [ ] 2 e [λβ(t) cos y + β (t)] 2 f (z)− 2 c(x)  1 c (x) − 2 c(x)  1 2  (6.22)  1  es [λβ(t) cos y − β (t)] 2 g(z).  The substitution of (6.22) into (6.2), after simplifying, gives [7] 3z 1 zσ + σ 2 f (z) + 2 f (z) + 2 1 + R2 − 2 f = 0, z +1 z +1 z +1 √ z2 + 1 z+σ g= f + 2 f R z +1 where R =  1 2ν ,  and σ =  (6.23a) (6.23b)  −s √ 2ν γ .  110  6.5. Formulation of an Analytic Solution The equations (6.23a) and (6.23b) are solved in [7] when σ = 0, yielding A cos(Rψ(z) + ζ), z2 + 1 A sin(Rψ(z) + ζ). g(z; ζ) = √ 2 z +1  f (z; ζ) = √  where ψ(z) = log(z +  √  (6.24)  z 2 + 1) and ζ is an arbitrary constant.  Furthermore, from [18] and [7] if fn (z), gn (z) denotes the solution of (6.23a) and (6.23b) when σ = −ın, then solutions of fn (z; ζ) and gn (z; ζ) are given by the following recurrence relation   Re fn+1 (z ; ζ)    Im fn+1 (z ; ζ)   Re g n+1 (z ; ζ)  Im gn+1 (z ; ζ)      Re fn (z ; ζ)       = − (n + 1 )2 + R2 · N (ϕ, βn )  Im fn (z ; ζ)  Re g (z ; ζ)  2 n   Im gn (z ; ζ)      (6.25)    where N (ϕ, βn ) is the orthonormal matrix   cos ϕ cos βn  sin ϕ cos βn  − sin βn  0      − sin ϕ cos βn cos ϕ cos βn  0 − sin β n , N (ϕ, βn ) =   sin βn 0 cos ϕ cos βn − sin ϕ cos βn    0 βn = arctan  2R 2n+1  sin βn  sin ϕ cos βn  cos ϕ cos βn (6.26)  and ϕ = arccotz. Since (6.23) is linear one can rescale  fn (z; ζ) and gn (z; ζ) so that   Re fn+1 (z ; ζ)    Im fn+1 (z ; ζ)   Re g n+1 (z ; ζ)  Im gn+1 (z ; ζ)      Re fn (z ; ζ)       = N (ϕ, βn )  Im fn (z ; ζ)   Re g (z ; ζ) n   Im gn (z ; ζ)     .    (6.27)  111  6.5. Formulation of an Analytic Solution Finally, since (6.2) is a linear system one can use superposition to build solutions of the form ∞  u(x, t) =  Am um (x, t), m=−∞ ∞  v(x, t) =  (6.28) Am vm (x, t),  m=−∞  where 1  um (x, t; ζ2m ) =e−2mı arctan[cot ysech t] [c(x) sin y] 2 × 1  × [cosh λt + sinh λt cos y] 2 f2m (z; ζ2m )+ 1  +[cosh λt − sinh λt cos y] 2 g2m (z; ζ2m ) , −2mı arctan[cot ysech(t)]  vm (x, t; ζ2m ) =e  sin y c(x)  1 2  (6.29) ×  1  [cosh λt + sinh λt cos y] 2 f2m (z; ζ2m )− 1  −[cosh λt − sinh λt cos y] 2 g2m (z; ζ2m ) , and the ζ2m are arbitrary constants. 
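Before specializing (6.28) to t = 0, it may help to indicate how the eigenfunctions f2m and g2m entering (6.29) can be generated in practice. The following MATLAB sketch starts from the σ = 0 solution (6.24) and repeatedly applies the rescaled recurrence (6.27) with the orthonormal matrix (6.26); the grid z, the normalization A = 1, the branch of arccot, and the function name modes_fg are illustrative assumptions rather than the author's implementation.

    function [f, g] = modes_fg(z, nu, m, zeta)
    % Sketch: build f_{2m}(z) and g_{2m}(z) from the sigma = 0 solution (6.24)
    % by 2m applications of the rescaled recurrence (6.27) with matrix (6.26).
    R   = 1/(2*nu);
    psi = log(z + sqrt(z.^2 + 1));             % psi(z) of (6.24)
    fr  = cos(R*psi + zeta)./sqrt(z.^2 + 1);   % Re f_0, taking A = 1
    fi  = zeros(size(z));                      % Im f_0
    gr  = sin(R*psi + zeta)./sqrt(z.^2 + 1);   % Re g_0
    gi  = zeros(size(z));                      % Im g_0
    phi = acot(z);                             % phi = arccot z (MATLAB branch)
    for n = 0:(2*m - 1)
        b  = atan(2*R/(2*n + 1));              % beta_n of (6.26)
        cb = cos(b);  sb = sin(b);
        cp = cos(phi); sp = sin(phi);
        frn =  cp.*cb.*fr + sp.*cb.*fi - sb.*gr;            % row 1 of (6.26)
        fin = -sp.*cb.*fr + cp.*cb.*fi - sb.*gi;             % row 2
        grn =  sb.*fr + cp.*cb.*gr - sp.*cb.*gi;             % row 3
        gin =  sb.*fi + sp.*cb.*gr + cp.*cb.*gi;             % row 4
        fr = frn; fi = fin; gr = grn; gi = gin;
    end
    f = fr + 1i*fi;
    g = gr + 1i*gi;
    end

Since N(ϕ, βn) is orthonormal, each step of the loop preserves the pointwise norm of the vector of real and imaginary parts, which makes the recurrence numerically stable.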
At t = 0, (6.28) becomes ∞  u(x, 0) 1  1 2  v(x, 0)c (x) 1  [sin(y)] 2  (−1)m Am [f2m (0, ζ2m ) + g2m (0, ζ2m )]e2ımy ,  =  [c(x) sin(y)] 2  −∞ ∞  (6.30) m  (−1) Am [f2m (0, ζ2m ) − g2m (0, ζ2m )]e  =  2ımy  ,  −∞  and thus if U (x) = u(x, 0) and V (x) = v(x, 0) then, since y ∈ [0, π], it follows from classical Fourier analysis [23] that (−1)m Am [f2m (0, ζ2m ) + g2m (0, ζ2m )] = Bm  112  6.5. Formulation of an Analytic Solution and (−1)m Am [f2m (0, ζ2m ) − g2m (0, ζ2m )] = Cm where Bm = Cm  1 π  1 = π  π  U (x(y)) 1  π  u(x(y), 0) = U (x(y))  −2mıy  e  dy.  [sin(y)] 2  Let α be a positive constant, κ = sin y sin y †  (6.31)  1  V (x(y))c 2 (x(y)) 1  0  e−2mıy dy,  [c(x(y)) sin(y)] 2  0  2N + 12  e  α 4N +1 , α(y−y † ) 2  and y † = −arccot(κ). Then for  and v(x(y), 0) = V (x(y)) = 0, (6.32)  (6.31) can be integrated analytically to give Cm = 0 and [7] Bm  (2N )!A(κ, 2N ) bπ b + 2mı = (e − 1) 2 π b + 4m2  where κ =  α 4N +1 ,  N  b2 + 4k 2 − 4m2 + 4bmı (b2 + 4k 2 − 4m2 )2 + 16b2 m2 k=1 (6.33)  y † = −arccot(κ), b = 12 [(4N + 1)κ − ν1 ] and 1  A(κ, 2N ) = [sin y † eκy† ]−(2N + 2 ) . In Chapter 7 the solutions of (6.2) with initial conditions u(x, 0) = U (x), v(x, 0) = V (x) given by (6.32) are analyzed in greater detail. A plot of U (x(y)) as a function of y is shown in Figure 6.2. Since x = −∞ corresponds to y = 0, if the initial data is located far to the left of the transition region then when it is plotted against y the initial data will appear as a narrow peak near y = 0. The narrow width of the peak of the initial data means that many terms will be required to approximate the exact series solution (6.28) accurately.  113  6.5. Formulation of an Analytic Solution  1  0.8  U(x(y))  0.6  0.4  0.2  0  -0.2 0  0.5  1  1.5  2  2.5  3  3.5  y "grid0002.fast" using (($1)):($2)  Figure 6.2: A plot of U (x(y)) given by (6.32) with N = 1 and y † = e−7.5 obtained using a finite number of terms of the series solution (6.28).  114  Chapter 7  Numerical Implementation of the Analytic Solution 7.1  Introduction  In Chapter 6 useful solutions of (5.2) were found for wave speeds satisfying the ODE (6.9). The resulting solutions are given in terms of the eigenfunctions f2m and g2m where m = 0, 1, 2, . . . The nature of the analytic solution obtained makes direct analysis of its behaviour difficult. In particular, since one cannot in general obtain a closed form expression for the solution, it can only be approximated by considering only a finite number of terms. Moreover, the solution is given in terms of the variable y satisfying d dy =ν log dx dx  c(x) c1  =  λ sin y c  (7.1)  which cannot be written explicitly as a function of x; thus, evaluating each term of the series in (6.28) requires numerical quadrature. Finally, the physically-relevant initial conditions correspond to the situation where the initial wave pulse is localized to a small region on the y-axis near zero. In such circumstances the corresponding Fourier series (6.30) will converge slowly, requiring many terms of (6.28) to obtain an accurate approximation. It is therefore necessary to use a computational approach in conjunction with the analytic solution (6.28) to examine the properties of the general solution of (6.2) obtained in Chapter 6. To estimate the reflection and transmission coefficients for solutions of (5.2) a Fortran program was constructed which approximated (6.28) using 115  7.2. Details of the Implementation a truncated series. 
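As a minimal illustration of this quadrature, the coefficients Bm in (6.31) might be approximated by the trapezoid rule in y as in the following MATLAB sketch; the function handles U_of_y and c_of_y (returning U(x(y)) and c(x(y))), the grid size, and the truncation order are illustrative assumptions, not the Fortran implementation described above.

    Nq  = 4000;                              % number of quadrature points (illustrative)
    y   = linspace(1e-6, pi - 1e-6, Nq);     % stay inside (0, pi)
    Mtr = 60;                                % truncation order |m| <= Mtr (illustrative)
    B   = zeros(2*Mtr + 1, 1);
    for m = -Mtr:Mtr
        integrand = U_of_y(y) ./ sqrt(c_of_y(y) .* sin(y)) .* exp(-2i*m*y);
        B(m + Mtr + 1) = trapz(y, integrand) / pi;   % B_m of (6.31)
    end

For initial data concentrated near y = 0, the coefficients decay slowly with m, so Mtr must be taken large before the truncated series (6.28) resolves the pulse, consistent with the remarks above.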
Details of the algorithms used are provided in Section 7.2 and the results are given in Section 7.3.  7.2  Details of the Implementation  In order to ensure the algorithm developed is both accurate and efficient, the solution is approximated along two lines in the xt-plane: 1. (x, 0) where x ∈ (−∞, ∞) 2. (0, t) where t ≥ 0. The first line represents the initial data and is used to calculate the total energy of the system as well as to determine the number of terms required to obtain reasonable convergence. Since energy is conserved, to estimate the total energy one need only approximate it at t = 0, which from (5.24) is E(u; R, 0) =  ∞  1 2  −∞  u2t + u2x dx. c2 (x)  (7.2)  If u(x, 0) = f (x) and ut (x, 0) = g(x) the energy at t = 0 is given by E(u; R, 0) =  1 2  ∞ −∞  (g(x))2 + f (x) c2 (x)  2  dx.  (7.3)  Alternatively, it follows from (6.2) that if v(x, 0) = h(x), and u(x, 0) = f (x) then ut (x, 0) = g(x) where h (x) =  g(x) . c2 (x)  Hence the initial energy in  terms of f (x) and h(x) is E(u; R, 0) =  1 2  ∞  (h (x))2 c2 (x) + (f (x))2 dx,  (7.4)  −∞  which can be approximated using standard numerical techniques. In this work, the derivatives were approximated to first order using finite differences and the integration was done using the trapezoid rule [22].  116  7.2. Details of the Implementation The second line corresponds to the values of u and v at x = 0, when t > 0. Using this line it is possible to determine the amount of energy from the initial wave pulse which is transmitted into the new medium. Using conservation of energy, the reflection coefficient can then be calculated. From (5.30) it follows that d E(u; (−∞, 0], t) = (ut ux )|0−∞ . dt  (7.5)  If the initial data is compactly-supported then, since the wave speed is bounded, it follows that lim ut = 0 = lim ux  x→−∞  x→−∞  for all t. The fundamental theorem of calculus then implies that tf  E(u; (−∞, 0], tf ) − E(u; (−∞, 0], 0) =  ut (0, t)ux (0, t) dt  (7.6)  0  and thus the net change of energy in the region (−∞, 0] is given by ∞  ∆E = lim E(u; (−∞, 0], tf ) − E(u; (−∞, 0], 0) = tf →∞  ut (0, t)ux (0, t) dt. 0  (7.7) The substitution of (6.2) into (7.7) yields ∞  ∆E =  ut (0, t)vt (0, t) dt.  (7.8)  0  If the wave pulse is entirely within (−∞, 0] at t = 0 then ∆E is the total energy which crosses the transition region and therefore, by definition, the transmitted energy. It follows that T =  ∆E E(u; (−∞, 0], 0)  and R = 1 − T.  (7.9)  117  7.3. Results The line x = 0 is chosen to simplify the terms of the series expansion. In particular, since y =  π 2  when x = 0,  1  1  um (0, t; ζ2m ) = c 2 (0)[cosh(λt)] 2 [f2m (sinh(λt); ζ2m ) + g2m (sinh(λt); ζ2m )] vm (0, t; ζ2m ) = c  −1 2  1  (0)[cosh(λt)] 2 [f2m (sinh(λt); ζ2m ) − g2m (sinh(λt); ζ2m )]. (7.10)  The solution restricted to x = 0 is u(0, t) c(0)  ∞ 1  = cosh 2 λt  Am [f2m (sinh λt; ζ2m ) + g2m (sinh λt; ζ2m )], −∞ ∞  1  Bm [f2m (sinh λt; ζ2m ) − g2m (sinh λt; ζ2m )].  c(0)v(0, t) = cosh 2 λt −∞  (7.11) One can approximate the functions u(0, t) and v(0, t) by requiring that |m| ≤ M. Denoting these truncations by u ˜(t) and v˜(t) one can then discretize time, letting tk = (∆t)k with t0 = 0. Approximating the derivative to second-order by u ˜t (tk ) =  u ˜(tk+1 ) − u ˜(tk−1 ) + O((∆t)2 ), 2∆t  (7.12)  one can then integrate (7.8) numerically using the trapezoid rule.  7.3  Results  In order to understand the nature of the solutions of (6.2) it is useful to consider first the behaviour of the modes given by (6.29). 
In particular, the form of the first mode, corresponding to m = 0, can be analyzed directly using MapleTM [21]. It consists of two pulses originating on either side of the origin which collide in the center of the transition region at t = 0, as shown in Figures 7.1-7.3. In order to visualize the first mode, the variable y = ν log  c(x) c1  is used in place of x, where y → 0 corresponds to x → −∞ and  y → π corresponds to x → ∞. In addition, using (6.14) one can show that 118  7.3. Results for y sufficiently small, log(y) ∝ x. Plots of u against log(y) can therefore be used to approximate the shape of u(x, t) up to a scaling of x by a constant factor. Similarly, if (π − y) is sufficiently small, log(π − y) ∝ x and thus for large x the shape of u(log(π − y), t) will match that of u(x, t) up to a scaling of x. Plots of these two limits are shown in Figures 7.4 and 7.5. Since the two wave pulses overlap when t ≥ 0, calculating the reflection and transmission coefficients directly from the two pulses produced by the collision is not straightforward due to the fact that the energy functional (5.24) is nonlinear; thus if two wave pulses have overlapping support, constructive and destructive interference will occur.  Figure 7.1: A plot of the first mode u0 (y, t) given in (6.29) when λ = 1, c2 = 2, and t = −8.  119  7.3. Results  Figure 7.2: A plot of the first mode u0 (y, t) given in (6.29) when λ = 1, c2 = 2, and t = 0.  Figure 7.3: A plot of the first mode u0 (y, t) given in (6.29) when λ = 1, c2 = 2, and t = 8.  120  7.3. Results  Figure 7.4: The first mode u0 (y, t) plotted against log(y) when λ = 1, c2 = 2, and t = 10.  Figure 7.5: The first mode u0 (y, t) plotted against log(π − y) when λ = 1, c2 = 2, and t = 10.  121  7.3. Results Though for the first mode u0 (x, t) one cannot calculate the reflection and transmission coefficients directly due to interference between the two incoming pulses, one can estimate the coefficients indirectly by examining the amplitudes of the wave pulses before and after entering the transition region. Comparing the amplitudes found analytically with those expected for a sharp transition enables one to calculate an effective wave speed ratio: the value of  c2 c1  necessary to produce the observed ratio of amplitudes between  incoming and outgoing waves for a sharp transition. Using this effective wave speed ratio one can then calculate the reflection and transmission coefficients necessary for a sharp transition to give rise to the observed amplitude ratios. The results of this analysis for various values of γ =  c2 c1  are shown in Figure  7.6.  1  Transmission Coefficient  0.9  0.8  0.7  0.6  0.5  0.4  0.3 1  2  3  "veff.txt" using ($1):(f($2))  4  5  6  7  8  "veff.txt" using ($1):(f($1))  "veff.txt" using ($1):(f($3))  Figure 7.6: The dependence of the transmission coefficient on the ratio of the asymptotic wave speeds for the first mode u0 (y, t) given in (6.29) when λ = 1. The red and green points were calculated from the two wave pulses created by the collision of the two incident wave pulses. The blue points represent the transmission coefficient for a sharp transition with the same wave speed ratio.  122  7.3. Results Additionally, it was found that varying λ for fixed c1 and c2 produced no change in the observed transmission and reflection coefficients. Note that in this situation the width of the initial wave pulses is also changing. 
In particular, it can be shown that for the fundamental mode, m = 0, the initial pulse width is inversely proportional to λ. Thus, as the transition is made sharper the incoming wave pulse is also compressed by the same factor. The lack of change in the transmission coefficients when λ is varied with c1 and c2 fixed suggests that the important parameter for determining the amount of reflection or transmission is the ratio  δ l  where δ is the width of the incoming  wave pulse and l is the width of the transition region which is of the order ν(c2 −c1 ) . λ  This observation is confirmed by the following proposition.  Proposition 7.1. Consider the initial value problem (IVP) utt = c2 (x)uxx , −∞ < x < ∞, t ≥ 0, x u(x, 0) = f ( ), δ x − ct1 ∂ f ut (x, 0) = , ∂t t=0 δ  (7.13)  with wave speed c(x) satisfying c (x) =  λ sin ν log ν  c c1  ,  (7.14)  and where f ( xδ ) is a compactly-supported function of width δ located far to the left of the transition region. If f (x), c1 and c2 are fixed then the reflection and transmission coefficients depend solely on the parameter κ =  (c2 −c1 )ν λδ  which represents the ratio between the width of the transition region and the width of the incoming pulse.  123  7.3. Results Proof. Let γ =  c2 c1 ,  l=  (c2 −c1 )ν , λ  ν(γ−1) , λ  κ=  l δ  and  x , l t S= , τ c(X(x)) , C(X) = c1                      τ=  X=  (7.15)  U (X, S) = u(x(X, S), t(X, S)).  It can then be shown that if u(x, t) satisfies IVP (7.13), then U (X, S) satisfies  USS (X, S) = C 2 (X)UXX (X, S),  −∞ < X < ∞, S ≥ 0,  U (X, 0) = f (κX), ∂ US (X, 0) = ∂S  (7.16) f (κ(X − T )) ,  S=0  where C (X) = (γ − 1) sin [ν log C(X)] . One can express the energy of a solution u(x, t) in terms of U, X, and T in the following way E(u; (−∞, −l], t1 ) = = E(u; [l, ∞), t1 ) = = E(u; (−∞, ∞), t1 ) = t=  1 2  −l −∞  u2t + u2x dx c2 (x)  λ 2ν(c2 − c1 ) 1 2  ∞ l  1 2  −∞  −∞  u2t c2 (x)  λ 2ν(c2 − c1 ) ∞  −1  + u2x dx ∞ 1  u2t c2 (x)  λ 2ν(c2 − c1 )  US2 (X, tτ1 ) t1 2 + UX (X, ) dX, 2 C (X) τ  US2 (X, tτ1 ) t1 2 + UX (X, ) dX, 2 C (X) τ  + u2x dx ∞ −∞  US2 (X, tτ1 ) t1 2 + UX (X, ) dX. 2 C (X) τ (7.17)  124  7.3. Results Note that for fixed c1 and c2 the function U (X, S) does not depend explicitly on the width of the transition region l or the width of the initial pulse δ, but only on their ratio κ. Using the definition of the reflection and transmission coefficients R and T given in (5.32) and (5.33) it follows that US2 (X, τ1 ) C 2 (X)  ∞ −∞  US2 (X, τ1 ) C 2 (X)  −1 −∞  US2 (X,S) C 2 (X)  2 (X, S) dX + UX  ∞ −∞  US2 (X,S) C 2 (X)  2 (X, S) dX + UX  ∞ 1  US2 (X, τ1 ) C 2 (X)  ∞ −∞  US2 (X, τ1 ) C 2 (X)  R = lim  t1 →∞  = lim  S→∞  T = lim  t1 →∞  = lim  S→∞  t  −1 −∞  t  2 (X, t1 ) dX + UX τ 2 (X, t1 ) dX + UX τ  t  t  +  2 (X, t1 ) UX τ  , (7.18)  dX  2 (X, t1 ) dX + UX τ  ∞ 1  US2 (X,S) C 2 (X)  2 (X, S) dX + UX  ∞ −∞  US2 (X,S) C 2 (X)  2 (X, S) dX + UX  ,  and thus that the reflection and transmission coefficients depend solely on the parameter κ =  l δ  rather than on l and δ independently.  The method outlined in Section 7.2 was implemented for the initial conditions u(x, 0) = U (x(y)), v(x, 0) = V (x(y)) given in (6.32). It is important to note that since the initial data was given in terms of y and not x, varying  c2 c1  in this case changes the initial conditions. Thus the numerical experiment performed corresponds to the situation where both the speed ratio and the initial data are being changed simultaneously. Sample results of these experiments are shown in Figure 7.7. 
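For reference, the energy bookkeeping of Section 7.2 used in these experiments reduces to a few lines of MATLAB. In the sketch below the vectors u0 and v0 are assumed to hold the truncated-series values of u(0, t_k) and v(0, t_k) on a uniform time grid with step dt, and E0 is assumed to be the initial energy computed from (7.4); all names are illustrative.

    ut = (u0(3:end) - u0(1:end-2)) / (2*dt);   % centered differences, cf. (7.12)
    vt = (v0(3:end) - v0(1:end-2)) / (2*dt);
    t_mid = dt * (1:numel(ut));                % interior time levels t_1, ..., t_{K-1}
    dE = trapz(t_mid, ut .* vt);               % transmitted energy, eq. (7.8)
    T  = dE / E0;                              % transmission coefficient (7.9)
    R  = 1 - T;                                % reflection coefficient (7.9)

The final time must be taken large enough that the transmitted pulse has passed x = 0 entirely, otherwise the integral in (7.8) is truncated prematurely.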
Plots of the solution as a function of time were approximated using the finite element method outlined in Chapter 8 and are presented in Figure 8.1.  125  7.3. Results  1  Transmission Coefficient  0.9  0.8  0.7  0.6  0.5  0.4 1  2  3  4  5  6  7  8  "analy.txt" using (f($1)):(-$2) "analy.txt" using (f($1)):(g(f($1)))for a smooth transition Figure 7.7: A plot of the transmission coefficients with λ = 1 (red) and the transmission coefficients for a sharp transition (green) where the initial conditions are given by (6.32) with N = 1 and y † = e−7.5 where γ = cc12 is the ratio of the asymptotic wave speeds.  126  Chapter 8  Numerical Approximation to the Solution 8.1  Introduction  Consider the initial value problem utt = c2 (x)uxx ,  −∞ < x < ∞, t > 0 (8.1)  u(x, 0) = f (x), ut (x, 0) = g(x),  with wave speed c(x) satisfying (6.9). In order to approximate the solutions of (8.1) numerically, one must work on a finite domain. Given the exponential convergence of c(x) to c1 and c2 as x goes to −∞ and ∞, respectively, for large and small x one can approximate the solution of (8.1) by the constant wave speed solution (5.4). Letting l denote the width of the transition region, which is of the order and  xb l  ν(c2 −c1 ) , λ  choose xa , and xb such that  xa l  << −1,  >> 1. One can split the x-axis into the following five regions:  1. x << xa , 2. x = O(xa ), xa ≥ x, 3. xa ≤ x ≤ xb , 4. x = O(xb ), x ≥ xb , 5. x >> xb .  127  8.2. Details of the Implementation One then wishes to find an approximation to the solution of (8.1) in the case where a rightward-moving wave pulse originates in Region 1, passes through Region 2 and reaches the transition region. The transmitted wave pulse continues on through Regions 4 and 5 while the reflected wave pulse returns through Region 2 to Region 1. Due to the exponential convergence of c(x) to its limiting values as x → ±∞, in Region 1 and Region 5 the maximum value of |c | can be made arbitrarily small by a suitable choice of xa and xb , in which case the constant wave speed solution (5.4) will be a good approximation to the solution in these regions. The solution in Region 3 can then be found using a finite element method and matching with the asymptotic solutions in Regions 2 and 4. Choosing the initial datum to be a wave pulse in Region 1 facilitates the matching of asymptotic and numerical solutions since in this case, for sufficiently large time, it is expected that the reflected and transmitted pulses will lie entirely within Regions 1 and 5, respectively. This ensures that accurate results can be obtained using only a finite number of time steps for the numerical solver. In order to measure the reflection and transmission coefficients the initial wave was chosen to be a Gaussian function centered well outside the transition region. The initial energy of the wave pulse was calculated from (5.24) both analytically and numerically. A finite element scheme was then run until the support of u lay entirely within the Regions 2 and 4. The energy in each region was then calculated numerically and compared to the initial energy. The finite element method used is outlined in Section 8.2 and the results obtained are given in Section 8.3.  8.2  Details of the Implementation  Before formulating a finite element method to approximate the solution of (8.1) in Region 3, first notice that since the domain [xa , xb ] is finite, letting  128  8.2. 
Details of the Implementation X=  x−xa xb −xa  one obtains the following approximate initial value problem utt (X, t) = [c† (x)]2 uXX (x, t), u(X, 0) = f † (X), †  uX (X, 0) = g (X),  X ∈ (0, 1), t ∈ (0, tf ),  X ∈ (0, 1), X ∈ (0, 1),  (8.2)  uX (0, t) = uX (1, t) = 0, where c† (X) =  c(x(X)) xb −xa ,  f † (X) = U (x(X)), g † (X) = [xb − xa ]g(x(X)). The  boundary conditions are arbitrary, provided the wave pulse does not reach X = 0 or X = 1. This restriction can be enforced by an appropriate choice of the initial conditions and final time tf .  8.2.1  Weak Formulation  In order to obtain a weak formulation of the initial value problem (8.2), first let u(X, t) be a solution of (8.2) which, for fixed t, is in H 1 (Ω) where Ω = (0, 1) and H 1 (Ω) is the space of square-integrable functions on Ω whose derivatives are also square-integrable [14]. Multiplying 8.2 by a test function w(X) ∈ H 1 (Ω) and integrating over Ω, one obtains 1  0= 0 1  = 0 1  = 0  utt w dX − † [c (X)]2 utt w dX + † [c (X)]2 utt w dX + † [c (X)]2  1  uXX wdX 0 1 0  uX wX dX − (uX w)|10  (8.3)  1  uX wX dX. 0  where integration by parts, and the boundary conditions in (8.2) have been used to simplify the integrals. Since w(X) is independent of time t, for u(X, t) ∈ H 1 (Ω × (0, tf )) and w(X) ∈ H 1 (Ω) one may take the time derivative outside the integral. One then has the following weak formulation of (8.2). Definition 8.1. The weak formulation of the initial value problem (8.2) is to find u(X, t) ∈ H 1 (Ω × (0, tf )) such that for all w(X) ∈ H 1 (Ω), and all 129  8.2. Details of the Implementation t ∈ (0, tf ): 1 0  utt v dX + † [c (X)]2  1  uX wX dX = 0  (8.4)  0  with u(X, 0) = f † (X) and ut (X, 0) = g † (X). The existence, uniqueness and stability of solutions of the weak formulation of (8.2) are a direct consequence of a classical result presented in [14].  8.2.2  Discretization  Solving the weak formulation exactly is equivalent to solving (8.2) and thus to proceed further one must typically employ a method, such as the finite element method, to approximate its solutions. The finite element method approximates solutions of the weak formulation by restricting w(X) ans u(X, t), where t is fixed, to a finite-dimensional subspace of the infinitedimensional space of square-integrable functions on (0, 1), reducing the problem to an algebraic one. Following a method similar to that given in [13], let x0 , . . . , xN be uniformlyspaced points in [0, 1] with X0 = 0 and XN = 1 and let V N +1 be the set of square-integrable functions which are linear on each interval K i = [xi−1 , xi ]. In addition, let ϕi (X) i = 0, . . . , N be the square-integrable functions linear on each interval K i such that ϕi (Xj ) = δji . Since the functions in V N +1 are completely defined by their values at x0 , . . . , xN , it is clear that the functions N +1 . The initial condition f † is replaced by the {ϕi (x)}N 0 form a basis of V  vector f which has components 1  (f )i =  f † (X)φi (X)dX,  (8.5)  0  with g similarly defined. For a fixed t, one then approximates the solution u(X, t) and the test function w(X) by u ˜(X; t), w(X) ˜ ∈ V N +1 ⊂ H 1 (Ω).  130  8.2. Details of the Implementation Using the hat functions as a basis, one can write N  N  uj (t)φj (X),  u ˜(X, t) =  wj φj (X).  and w(X) ˜ =  0  (8.6)  j=0  Working in V N +1 the problem is then to find u = (u1 (t), . . . , uN (t)) ∈ H 1 ([0, T ])  N  such that for all w = (w1 , . . . 
, wN ) ∈ RN 1  uj (t) wk 0  j,k  φj (X)φk (X) dX + [c† (X)]2  1  uj (t) wk 0  j,k  dφj (X) dφk (X) dX = 0. dX dX (8.7)  Defining 1  Mi,j = 0  φj (X)φk (X) dX [c† (X)]2  one finds that M where u(0) = f and  du dt  1  and Ai,j = 0  dφj (X) dφk (x) dX, dX dX  d2 u + Au = 0, dt2  (8.8)  (8.9)  = g.  From Taylor’s theorem it follows that utt =  u((k + 1)∆t) − 2u(k∆t) + u((k − 1)∆t) + O (∆t)2 . (∆t)2  (8.10)  Substituting (8.10) into (8.9) and approximating u(k∆t) by uk , one obtains M (uk+1 − 2uk + uk−1 ) + (∆t)2 Auk = 0  (8.11)  when k > 0. If k = 0, then a slight modification is required, since u−1 is not given. Noting that the initial conditions give ut (0) = g and that Taylor’s theorem implies that ut (x, 0) =  u(∆t) − u(−∆t) + O (∆t)2 2∆t  131  8.3. Results it follows that u−1 ≈ u(−∆t) ≈ u(∆t) − 2(∆t)v 0 ≈ u1 − 2(∆t)v 0 .  (8.12)  One can then use (8.11) with u−1 = u1 − 2∆tv 0 .  (8.13)  Solving for uk+1 in (8.11) one finds the explicit second-order scheme uk+1 = −M −1 (∆t)2 Auk + 2uk − uk−1 ,  (8.14)  where u0 = u0 and u−1 are defined in (8.5) and (8.13), respectively. In order for this numerical scheme to be stable an additional Courant- FriedrichsLewy (CFL) condition must be imposed. Since the wave speed is bounded from above and below by c1 and c2 , respectively, the CFL condition requires that [17] ∆t ≤  h(xb − xa ) . c2  (8.15)  One can show that this CFL condition is necessary and sufficient for stability of the numerical scheme, by adapting a well-known method from finite difference methods [25].  8.3  Results  The method outlined in Section 8.2 was implemented in MatlabTM [20]. To accurately approximate the wave speed c(x), a fourth-order Runge-Kutta method was used with the step-size chosen sufficiently small to ensure that the error in c(x) was negligible relative to the error introduced by discretization in space and time. The entries of the matrix A were calculated analytically while those of M were calculated from (8.8) using the trapezoid rule [22]. When calculating the energy, ut (x, t) and ux (x, t) were approximated  132  8.3. Results by ut (mh, (∆t)k) ≈  k−1 uk+1 m − um 2∆t  and ux (mh, (∆t)k) ≈  ukm+1 − ukm−1 2h (8.16)  which have local truncation errors of O((∆t)2 ) and O(h2 ) respectively. A plot of the finite element solution for the initial conditions (6.32) is shown in Fig. 8.1. At t = 0, the solution is a wave pulse centered to the left of the origin. As time progresses, it splits into two pulses travelling in opposite directions. The rightward moving pulse eventually hits the the transition region where it separates into a reflected and transmitted pulse. It was found that the reflection and transmission coefficients obtained using this method agreed with those found using the exact solution in Chapter 7.  133  8.3. Results  Figure 8.1: A numerical approximation to a solution of the wave equation (5.2) with wave speed satisfying (6.11) for c1 = 1, c2 = 2, λ = 1 and initial conditions given by (6.32) where N = 1 and y † = e−7.5 , obtained using a finite element method implemented in MatlabTM . Blue: t = 0, green: t = 0.8, red: t = 3.4. A plot of the wave speed is given in (6.1).  134  8.3. Results The reflection and transmission coefficients of the system were tested using the finite element solver for Gaussian wave pulses u(x, 0) = U0 e  (x−x0 )2 2δ 2  (8.17)  ut (x, 0) = 0, where x0 was chosen to lie outside of the transition region. The parameters δ, λ, and c2 were varied to determine the effect of each one on the reflection and transmission coefficients. 
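To fix ideas, the time-stepping portion of this solver can be sketched in a few lines of MATLAB. The mass and stiffness matrices Mmat and Amat are assumed to have been assembled as in (8.8), with c(x) obtained beforehand by integrating (6.11) numerically as described above; the initial vector is taken here as nodal values of the Gaussian pulse (8.17), written with the conventional negative exponent, rather than as the projection (8.5); and all variable names and parameter values (xa, xb, x0, delta, U0, c2, nsteps) are illustrative assumptions.

    Nx = 2000;  h = 1/Nx;                       % number of elements and spacing in X
    X  = linspace(0, 1, Nx + 1)';               % rescaled nodes on [0, 1]
    x  = xa + (xb - xa)*X;                      % physical coordinates
    u0 = U0 * exp(-(x - x0).^2 / (2*delta^2));  % nodal values of the Gaussian pulse (8.17)
    v0 = zeros(Nx + 1, 1);                      % u_t(x, 0) = 0
    dt = 0.9 * h*(xb - xa) / c2;                % CFL condition (8.15), with a safety margin
    MA = Mmat \ Amat;                           % M^{-1} A, formed once for the sketch
    uprev = u0;
    ucur  = u0 + dt*v0 - 0.5*dt^2*(MA*u0);      % starting step from (8.11) and (8.13)
    for k = 1:nsteps
        unew  = 2*ucur - uprev - dt^2*(MA*ucur);   % explicit update (8.14)
        uprev = ucur;
        ucur  = unew;
    end

In a production code one would solve with Mmat at each step rather than forming M^{-1}A explicitly, but the sketch reflects the explicit second-order structure of (8.14).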
The behaviour of the incoming pulse for one set of parameters is shown at three different times in Figure 8.2.  Figure 8.2: A numerical approximation to a solution of the wave equation (8.1) with wave speed satisfying (6.11) for c1 = 12 , c2 = 1, and λ = 2.3 at three different times: t = 5 (blue), t = 23 (green), and t = 27 (red). The approximation was obtained using a finite element method implemented in MatlabTM . The corresponding wave speed as a function of x is given in (6.1).  135  8.3. Results The dependence of the reflection and transmission coefficients on the parameter κ =  (c2 −c1 )ν , λδ  defined to be the ratio of the transition region width  to the width of the incoming pulse, was examined by fixing c1 and c2 while varying κ. From these numerical experiments it was found that as κ was made arbitrarily small the reflection and transmission coefficients converged to those given by the Fresnel equations (5.34) in the case of a sharp transition (having zero width). In particular, it was found that the transmission coefficient converged to the classical result from above with the corresponding reflection coefficient converging from below. Typical results of the numerical simulations showing the convergence of the transmission coefficients to the Fresnel value with decreasing κ are plotted in Figure 8.3. The difference between the experimental and Fresnel reflection coefficients must satisfy (R(λ) − RF ) = −(T (λ) − TF ) since TF + RF = 1 = T (λ) + R(λ) by conservation of energy. 1.02  Transmission Coefficient  1 0.98 0.96 0.94 0.92 0.9 0.88 0.86 0.84 0.82 0  20  40  60  80  100  120  140  kappa  Figure 8.3: A plot of the smooth transmission coefficients T (κ) when c1 = 21 , and c2 = 1 with Fresnel transmission coefficient TF = 89 .  136  160  Chapter 9  Conclusions and Further Work In this part of the thesis, solutions to the one-dimensional linear wave equation (5.2) in a two-layered medium with a smooth transition were analyzed both numerically and analytically. In particular, the four-parameter family of wave speeds given by solutions of the ODE (6.11) was used. These wave speeds were considered both because one could find an analytic solution to (5.2) when c(x) satisfied (6.11) and because of their relevance to problems in electromagnetism and seismology. In these applications one is particularly interested in the situation where a wave pulse originating in one medium reaches the transition layer and splits into a reflected and transmitted pulse travelling in opposite directions. One can then calculate the reflection and transmission coefficients which represent the fraction of the energy of the original pulse which is reflected and transmitted, respectively. When the transition region is of zero width, the Fresnel equations [16] provide an analytic expression for the reflection and transmission coefficients, given in (5.34), which is independent of the shape of the incoming wave. Through the numerical and analytic methods it was found that, for a smooth transition, the coefficients were no longer independent of the incoming pulse. Rather than depending just on the speed ratio γ =  c2 c1 ,  the  reflection and transmission coefficients also depended upon the ratio of the width of the transition region to the width of the incoming pulse. This ratio, l δ,  is proportional to the unitless quantity κ =  ν(c2 −c1 ) . δλ  137  Chapter 9. 
Conclusions and Further Work As expected, when κ → 0, and the transition region becomes small relative to the width of the incoming pulse, the transmission and reflection coefficients converge to those given in the Fresnel equations from above and below, respectively. Conversely, if κ → ∞, and thus the transition is flat relative to the length-scale δ, the transmission coefficient approaches 1 while the reflection coefficient approaches 0, as to be expected. In the course of the analysis, it was found that the two methods described in Chapters 7 and 8 each have their own distinct advantages and disadvantages. For the finite element method, in order to compare the solution at t = 0 and a later time t = tf , it is necessary to accurately approximate the solution for all intervening times. Hence, one must approximate the solution over the rectangle [−40, 40] × [0, tf ] in the xt-plane. For the numerical approximation of the analytic method it is only necessary to determine the solution along two lines, x = 0 and t = 0. This difference is important when a high level of accuracy is required over large distances or times. In addition, when the transition region is very small or the incoming pulse is very narrow, the finite element method formulated in Chapter 8 becomes less accurate due to the discretization of the time derivatives. Conversely, the terms of the analytic solution converge most rapidly for narrow pulses near or within the transition region. To model the behaviour of large pulses far away from the origin requires a large number of terms, making the solution inaccurate due to the accumulation of rounding errors. To extend this work an obvious approach would be to try to establish the dependence of the transmission and reflection coefficients on the parameter κ directly from the analytic solution presented in Chapter 7. In such a case, one would expect to obtain an asymptotic series in κ whose first term matches the coefficient given by the Fresnel equations. Finally, one could also apply the method of non-local symmetries, and the approaches outlined in this thesis, to different wave speeds by using the non-local symmetries found in [5] and searching for additional ones.  138  Bibliography [1] J. Berryman and C. Holland. Nonlinear diffusion problem arising in plasma physics. Physical Review Letters, 40:1720–1722, 1978. [2] G.W. Bluman. Use and construction of potential symmetries. Mathematical and Computer Modelling, 8:1–14, 1993. [3] G.W. Bluman and S. C. Anco. Symmetry and Integration Methods for Differential Equations. Springer, 2002. [4] G.W. Bluman, S.C. Anco, and A.F. Cheviakov. Applications of Symmetry Methods to Partial Differential Equations. Springer, 2010. [5] G.W. Bluman and A. F. Cheviakov. Nonlocally related systems, linearization and nonlocal symmetries for the nonlinear wave equation. Journal of Mathematical Analysis and Applications, 333:93–111, 2007. [6] G.W. Bluman and S. Kumei. On invariance properties of the wave equation. Journal of Mathematical Physics, 28:307–318, 1987. [7] G.W. Bluman and S. Kumei. Exact solutions for wave equations of two-layered media with smooth transition. Journal of Mathematical Physics, 29:86–96, 1988. [8] G.W. Bluman and S. Kumei. Symmetries and Differential Equations. Springer, 1989. [9] G.W. Bluman and G.J. Reid. New symmetries for ordinary differential equations. IMA Journal of Applied Mathematics, 40:87–94, 1988.  139  Bibliography [10] G.W. Bluman, Temuerchaolu, and S. Anco. 
New conservation laws obtained directly from symmetry action on a known conservation law. Journal of Mathematical Analysis and Applications, 322:233–250, 2006.
[11] B. Chakraborty. Electromagnetic wave propagation through slowly varying plasma. Journal of Mathematical Physics, 11:2570, 1970.
[12] A. Cheviakov. GeM symbolic software package. http://math.usask.ca/~shevyakov/gem/, 2009.
[13] S.H. Christiansen. Foundations of finite element methods for wave equations of Maxwell type. In E. Quak and T. Soomere, editors, Applied Wave Mathematics, pages 335–393. Springer, 2009.
[14] L.C. Evans. Partial Differential Equations. American Mathematical Society, 2nd edition, 2002.
[15] H. Fujita. The exact pattern of a concentration-dependent diffusion in a semi-infinite medium. Textile Research Journal, 22:757–827, 1952.
[16] D.J. Griffiths. Introduction to Electrodynamics. Prentice-Hall, Inc., 3rd edition, 1999.
[17] P. Joly and J. Rodriguez. Optimized higher order time discretization of second order hyperbolic problems: Construction and numerical study. Journal of Computational and Applied Mathematics, 234:1953–1961, 2010.
[18] L. Infeld and T.E. Hull. The factorization method. Reviews of Modern Physics, 23:21–68, 1951.
[19] L.D. Landau and E.M. Lifshitz. Theory of Elasticity. Pergamon Press, 2nd English edition, 1970.
[20] MATLAB version 7.14.0 (R2012a). The MathWorks Inc., Natick, Massachusetts, 2012.
[21] M.B. Monagan, K.O. Geddes, K.M. Heal, G. Labahn, S.M. Vorkoetter, J. McCarron, and P. DeMarco. Maple 14 Programming Guide. Maplesoft, Waterloo, ON, Canada, 2010.
[22] A. Ralston and P. Rabinowitz. A First Course in Numerical Analysis. McGraw-Hill, 2nd edition, 1978.
[23] W. Rudin. Principles of Mathematical Analysis. McGraw-Hill, 3rd edition, 1976.
[24] P.M. Shearer. Introduction to Seismology. Cambridge University Press, 2nd edition, 2009.
[25] J.C. Strikwerda. Finite Difference Schemes and Partial Differential Equations. SIAM, 2004.
