EXTENSION OF LIE'S ALGORITHM; A POTENTIAL SYMMETRIES CLASSIFICATION OF PDES

By Patrick Robert Doran-Wu
B. Eng. (Electrical/Electronic), University of Western Australia
M. Sc. (Mathematics), University of Oxford, U.K.

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY in THE FACULTY OF GRADUATE STUDIES, INSTITUTE OF APPLIED MATHEMATICS

We accept this thesis as conforming to the required standard

THE UNIVERSITY OF BRITISH COLUMBIA
October 1996
© Patrick Robert Doran-Wu, 1996

In presenting this thesis in partial fulfillment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the head of my department or by his or her representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission.

Institute of Applied Mathematics
The University of British Columbia
2075 Wesbrook Place
Vancouver, Canada V6T 1Z1

Abstract

Symmetries of a system of differential equations are transformations which leave invariant the family of solutions of the system. Infinitesimal Lie symmetries of locally solvable analytic differential equations can be found by using Lie's algorithm. We extend Lie's algorithm to one which can calculate infinitesimal Lie symmetries of analytic systems of differential equations which are not locally solvable.

Local infinitesimal symmetries of differential equations are flows of vector fields which depend on local properties of solutions and have been extensively calculated and applied. In contrast, infinitesimal nonlocal symmetries, which are flows of vector fields depending on nonlocal properties of solutions, have only recently been introduced. Using our extension of Lie's symmetry algorithm, we study the infinitesimal nonlocal symmetries of potential type introduced by Bluman, Kumei and Reid. We give verifiable criteria for useful potential systems and give a complete potential symmetries analysis for a class of nonlinear diffusion equations. We also find large classes of higher order scalar and systems of partial differential equations admitting potential symmetries.

Table of Contents

Abstract
List of Figures
Acknowledgement
1 Introduction
  1.1 Brief Historical Background
  1.2 Symmetries of Differential Equations
  1.3 Potential Systems
  1.4 A New Symmetry Algorithm For Systems of PDEs
  1.5 Outline of Thesis
    1.5.1 Specific Guide to Thesis
2 Lie's Algorithm
  2.1 Transformation Groups
    2.1.1 Flows of Vector Fields
    2.1.2 Lie Groups
    2.1.3 Lie Algebras
    2.1.4 Symmetries of Algebraic Equations
  2.2 Differential Equations
    2.2.1 Jet Space
    2.2.2 Extended Transformations in n-Jet Space
    2.2.3 Lie Groups of Extended Point Transformations
  2.3 Symmetries of Differential Equations
3 Extension of Lie's Algorithm for Systems of PDEs
  3.1 Formal Theory of Integrability
    3.1.1 Standard Form
    3.1.2 Formal Power Series Solution
  3.2 Local Solvability
  3.3 A New Symmetry Algorithm
4 Potential Systems of a Given System of PDEs
  4.1 Construction and Properties of Potential Systems
    4.1.1 Potential Symmetries and Their Applications
  4.2 Conservation Laws and Useful Potential Systems
    4.2.1 Conservation Laws
    4.2.2 Useful Potential Systems
  4.3 Potential Symmetry Analysis of the Nonlinear Diffusion Equation
    4.3.1 First Generation Potential Systems
    4.3.2 Second Generation Potential Systems I
    4.3.3 Second Generation Potential Systems II
    4.3.4 Third Generation Potential Systems
    4.3.5 Symmetry Classification of Potential Systems
  4.4 Linearizing Factors
    4.4.1 Examples of Linearizations
5 A Potential Symmetries Classification of PDEs
  5.1 Necessary Conditions for the Admission of Potential Symmetries
    5.1.1 A Class of Third Order Scalar PDEs
    5.1.2 A Class of Scalar PDEs of Order Greater than Two
  5.2 Higher Order Scalar PDEs Admitting Potential Symmetries
  5.3 Higher Order Systems of PDEs Admitting Potential Symmetries
6 Conclusions and Further Work
  6.1 Conclusions
  6.2 Further Work; PDEs with Three Independent Variables
Bibliography
Appendices
A Algorithms for Obtaining Prolonged Standard Forms
  A.1 Orthonomic Form
  A.2 Simplified Orthonomic Form
  A.3 Standard Form
  A.4 Prolonged Standard Form
B Symmetries of Locus and Analytic Locus
C Proof of Correctness
D General Potential Systems Construction
E Frechet Formulation for Point Symmetries

List of Figures

3.3 A binary tree of case splittings
4.4 Potential systems tree for the nonlinear diffusion equation; K(u) arbitrary
4.5 Potential systems tree for the nonlinear diffusion equation; K(u) = u^(-2)

Acknowledgement

I wish to thank my supervisor Dr G.W. Bluman for suggesting that I study potential symmetries of PDEs, for his expert guidance, and his support during my PhD degree. I especially thank him for his speedy reading of my thesis drafts and for his insightful criticisms.

My thanks also go to Dr G.J. Reid for introducing me to the Formal Theory of Integrability and to the world of symbolic programming. When I first arrived at U.B.C. he spent many tireless hours with me in the computer lab explaining the various features of his wonderful standard form package. I also thank him for reading my thesis drafts and for his helpful suggestions.

This thesis would not be possible without the constant support and encouragement of my wife Helen. If it were not for her, I would still be a frustrated computer programmer. Instead, I am now not only a Mathematician, but the father of two wonderful girls. To my daughters Rebecca and Connie, thank you for many hours of fun and delight.

I am forever indebted to my parents, Elizabeth and Philip. Their decision to migrate to Australia from Myanmar (formerly Burma) has given me and my siblings opportunities that would never have been possible otherwise. They have instilled in me the respect for good education and the drive for personal excellence.

I would also like to thank my fellow graduate students for their companionship over the past four years. The many jokes shared in the IAM lab and the lunch time card games have helped me keep sane. Special thanks go to Jennifer Enns, Pete Newbury and Jeff Orchard who helped me print out my thesis remotely while I was starting my post-doctoral position in Australia.
Funding for much of my stay in Canada was provided by the University Graduate Fellowship (U.B.C) , the Teaching and Research Assistantships and by my parents. This thesis was typeset using IATgX. viii Chapter 1 Introduction Examples of differential equations abound in all areas of science. They arise whenever one attempts to model any process (abstract or physical) that depends on continuously varying quantities and the rates at which they change. Once a differential equation has been obtained that adequately models the problem at hand, the question of existence and uniqueness of solutions for a given set of data (initial or boundary) must then be addressed. One may then be interested in obtaining numerical, asymptotic or exact analytical solutions. There are many branches of mathematics which deal with the above mentioned problems. Broadly speaking, they come under the general heading of analysis. However, the most general techniques for finding exact solutions of differential equations come from the branch of math-ematics called Lie group analysis of differential equations and its generalizations. This subject lies at the crossroads of analysis and differential geometry, and was originated by its founder, Sophus Lie." The research presented here lies in this field of mathematics. We first give a general overview of the historical and conceptual background to this thesis. 1.1 Brief Historical Background Marius Sophus Lie was born in Nordfjordeid, a town in the western part of Norway, on December 17, 1842. His school teacher [6] was Ludvig Sylow, who inspired him on the works of N.H. Abel and E. Galois. Abel showed that it was impossible to solve polynomial equations of order greater than four by radical expressions. Galois subsequently gave an elegant theory for this by considering symmetries of these equations. Lie and Sylow collaborated on a careful editing 1 Chapter 1. Introduction 2 of Abel's completed works. Lie had a close friendship with F. Klein that led to a long fruitful collaboration. Klein, who published his Erlangen program in 1872 (see [33, 27]), developed the thesis that geometry was the study of invariants of group actions on geometric objects. Lie's starting point had been in modern geometry, after coming across the works of J.V. Poncelet and J. Pliicker, and soon turned his attention to the geometry of differential equations with all his genius. Lie formally defined and initiated the mathematical study of continuous groups of transfor-mations, now called Lie groups. He showed how these led to a symmetry theory of differential equations, in the same spirit as the Galois theory for polynomial equations. Of the many achievements of Lie, of particular relevance to us is his algorithm on the explicit determina-tion of the continuous group of transformations (the admitted group) that leave invariant the solution space of a given differential equation. With this algorithm, he was able to classify all ordinary differential equations with respect to the admitted group, and to develop an integration theory for such equations: In a short communication to the Scientific Society of Gottingen (3 December 1874), I gave, among other things, a listing of all continuous transformation groups in two variables x,y, and especially emphasized that this might be made the basis of a classification and rational integration theory of all differential equations f(x,y, y', • • •, y{m)) = 0, admitting a continuous group of transformations. 
The great program sketched there I have subsequently carried out in detail. (S. Lie [41], p.187) I noticed that the majority of ordinary differential equations which were integrable by the old methods were left invariant under certain transformations, and that these integration methods consisted in using that property. Once I had thus represented many old integration methods from a common view-point, I set myself the natural problem: To develop a general theory of integration for all ordinary differential equations admitting finite or infinitesimal transformations. (S. Lie [42], p. iv) Chapter 1. Introduction 3 Sophus Lie died on Feb 18, 1899 and left a profound legacy. His work has greatly influenced the development of mathematics to the present day. His Lie groups and Lie algebras pervade much of mathematics and physics. For example, the modern theory of elementary particles in physics would be significantly different without his work. The modern theory of Lie groups is radically different from that originally formulated by Lie. Lie was interested in groups of transformations that act on the solution space of a differential equation. In general, these are local Lie groups, with transformations being defined if they are sufficiently close to the identity (otherwise graphs of functions may not be mapped to other graphs of functions). Besides E. Noether, who proved a remarkable theorem [46] showing the one-to-one correspondence between symmetries of a variational integral and the conservation laws of the associated Euler-Lagrange equations, the application of Lie groups to differential equations was scarcely further developed for almost fifty years. During that time, Lie group theory underwent a radical abstract reformulation at the hands of E. Cartan and those who followed him. Interest switched from local Lie groups to global Lie groups. Present day interests focus on special classes of global Lie groups (semi-simple, solvable, etc.) and on representation theory, where the group action on the underlying space is linear. This is in direct contrast to the local Lie groups studied by Lie which were never so elegant. A major revival of interest in Lie's original application of Lie groups to differential equations was sparked off by the works of Ovsiannikov in the former USSR in the late 1950's and 1960's [49] and of Bluman in the West in the late 1960's and 1970's [11, 12]. In the 1980's, the advent of symbolic manipulation computer programs (see the review paper [28]), which take away the often tedious computations associated with Lie's algorithm, has also helped in drawing researchers into this area. The basic theory can be found in Ovsiannikov [50], Olver [47], Bluman and Kumei [15], and in many other texts that have followed since. Besides applications of the theory to particular differential equations of interest, research in this area focusses on extending the scope of the theory. For example, in this dissertation, we develop a new symmetry algorithm which extends Lie's algorithm for the calculation of symmetries of general systems of PDEs. Chapter 1. Introduction 4 Using this new algorithm, we discover large classes of higher order scalar and systems of partial differential equations admitting potential symmetries, which are new classes of symmetries. For a more detailed account of Lie the mathematician, see Baas [6], Ibragimov [30], and the references therein. 
1.2 Symmetries of Differential Equations A symmetry of a differential equation (DE) is a transformation mapping any solution to another solution of the DE. Given a solution of the DE, knowledge of an admitted symmetry then leads to the generation of another solution. This is one of the most basic applications of symmetry analysis of DEs and relies on the symmetry being explicitly known. Even if one starts with a simple solution, which can usually be found by inspection, one can obtain a highly nontrivial solution in this way. Continuous symmetries such as translations, scalings and rotations can often be found by inspection. Even some discrete symmetries may be stumbled upon. However, most symmetries are not so easily found and the fundamental problem is one of algorithmic determination of symmetries. We shall restrict ourselves to finding continuous symmetries. The task of finding discrete symmetries is still very much an open subject (see Reid et al [55]). Symmetry is defined in terms of transformations on the solution space of a DE. However, explicit knowledge of a symmetry in terms of local coordinates also defines how derivatives of solutions are transformed. Consequently, one can concentrate on how the DE itself is trans-formed instead of how its solution space is transformed. This is very useful, since the solution space is not usually known a priori. If one naively applies an arbitrary transformation on a DE, the symmetry conditions turn out to be an overdetermined system of nonlinear DEs for the transformation, which cannot be solved in general. To proceed, Lie considered a special class of transformations called a one-parameter group of point transformations or, more generally, contact transformations. These transformations are essentially the flows associated with vector fields defined in the space of independent and dependent variables. These vector fields and their Chapter 1. Introduction 5 components are called infinitesimal generators and infinitesimals of the point transformations, respectively. The fundamental insight due to Lie was that the symmetry conditions for these one-parameter groups of point transformations turn out to be linear in the infinitesimals of the transformations. Being linear, these infinitesimal determining equations are much easier to analyse and, in a large number of cases, have been solved explicitly to yield the admitted group of point (or contact) symmetries. The derivation of the infinitesimal determining equations of any given DE is called Lie's algorithm. Henceforth, we shall refer to 'one-parameter groups of point symmetries' as simply 'point symmetries'. The infinitesimal generators of point symmetries depend only on independent and depen-dent variables. Lie also considered contact symmetries, where the infinitesimal generators also depend on first order derivatives of dependent variables. Since the time of Lie, the symmetry theory of DEs has been extended and applied in many ways. Applications now include: • Construction of new solutions from old ones. • Integration of ordinary differential equations (ODEs). • Invariant (similarity) reductions of partial differential equations (PDEs). • Solving boundary value problems (BVPs). • Linearization of PDEs. • Generalized symmetries. • Construction of conservation laws (Noether's theorem). • Equivalence transformations. • Nonclassical solutions. The theory behind Lie's algorithm is given in §2. For now, let us consider two illustrative examples. Chapter 1. 
Example 1.2.1 Elementary Example of Symmetry

Consider the simple second order ODE

d²y/dx² = 0,  (1.1)

which has solutions consisting of all non-vertical lines in the plane ℝ², given by y = mx + c, m, c ∈ ℝ. Examples of symmetries of the ODE are translations in y,

T_ε(x, y) = (x, y + ε),

and rotations about the origin,

R_ε(x, y) = (x cos ε − y sin ε, x sin ε + y cos ε),

where ε ∈ ℝ. Clearly such translations and rotations map any straight line in the plane to another straight line and hence are symmetries of the ODE. Notice that different values of ε define different transformations T_ε. Moreover, this family of transformations has a group structure: the identity element is T_0, the binary operation is composition (T_δ ∘ T_ε = T_{δ+ε}), which is closed and associative, and to each transformation T_ε there corresponds its inverse T_{−ε}. The same can be said of R_ε, except that in this case not all values of ε are permitted if R_ε is to be a symmetry of the ODE. For instance, R_{π/2} rotates a horizontal line to a vertical line, which is no longer a solution of the ODE. However, if ε is sufficiently close to zero, then R_ε is a symmetry of the ODE. In general, symmetry transformations may only be defined locally about the identity transformation (corresponding to ε = 0). Both T_ε and R_ε are examples of local Lie groups of point transformations (T_ε is also a global Lie group of point transformations). They correspond to the flows of the infinitesimal generators (vector fields) X and Y respectively, given by

X = ∂_y,   Y = −y ∂_x + x ∂_y.

So far, we have concentrated on how transformations act explicitly on solutions of the ODE. This is possible since the ODE is simple enough that its solution space is explicitly known. In general, this is not true and one has to deal with the equations themselves. For any ε, the translations (x̄, ȳ) = T_ε(x, y) transform the ODE to the new ODE d²ȳ/dx̄² = 0, which clearly has the same solutions as the original ODE. As for the rotations (x̄, ȳ) = R_ε(x, y), the original ODE is transformed to the new ODE

Δ³ d²ȳ/dx̄² = 0,   Δ = cos ε − (dy/dx) sin ε,

where dy/dx can be expressed in terms of x̄, ȳ and dȳ/dx̄. So long as Δ ≠ 0, this ODE has the same solution set as the original ODE. The nonvanishing of Δ essentially restricts ε to be sufficiently close to the identity to ensure that straight lines do not become vertical. For example, rotation of the line y = 0 through an angle of π/2 results in a vertical line, and this is reflected by the fact that Δ vanishes for the line y = 0 when ε = π/2.

Lastly in this example, the general solution of the ODE can be obtained from rotations and translations of any one solution: any non-vertical line can be reached by rotating a given non-vertical line to achieve the right slope, followed by vertical translations to achieve the right y-intercept. In general, for partial differential equations (PDEs), where the number of independent variables is greater than one, one cannot hope to generate the general solution from symmetries. However, the principle of mapping solutions to solutions still holds and one can hope to generate nontrivial solutions by applying symmetries to simple solutions. •

Example 1.2.2 Mapping Solutions to Solutions

Consider the partial differential equation u_t = u_xx, known as the heat equation. By inspection, the heat equation has the simple solution u = 1. Applying Lie's algorithm leads to the infinitesimal generator

X = xt ∂_x + t² ∂_t − (¼x² + ½t) u ∂_u,  (1.2)

whose flow corresponds to the one-parameter Lie group of transformations

x̄ = x/(1 − εt),   t̄ = t/(1 − εt),   ū = u (1 − εt)^{1/2} exp[−εx²/(4(1 − εt))].  (1.3)

This transformation can be used to generate a new solution from the simple solution u = 1 as follows. The graph of this simple solution is given by {(x, t, 1) : x, t ∈ ℝ}. Under the given symmetry transformation, any point (x, t, 1) on this graph is transformed to a new point (x̄, t̄, ū), where ū is the value of a new function f over the point (x̄, t̄), given by (1.3). This new function, obtained by expressing (x, t) in terms of (x̄, t̄) in (1.3c), is given by

ū = f(x̄, t̄) = (1 + εt̄)^{-1/2} exp[−εx̄²/(4(1 + εt̄))].

Hence, we have generated a highly nontrivial solution from a very simple solution, through the application of a symmetry.¹ •

¹As an undergraduate, I did not like differential equations very much. However, as a Master's student searching for something to work on [25], I came across this very simple, but elegant, application of symmetry analysis and was instantly won over to the study of Lie theory of DEs.

Further examples of the applications of symmetry analysis can be found in the standard texts mentioned previously. We note in particular that there has been a lot of recent interest in the nonclassical method of Bluman and Cole [11], where the definition of invariant solutions of a DE is further extended. In the classical approach, one looks for solutions that are invariant under some point symmetry admitted by the DE. In the nonclassical approach, by contrast, a special class of solutions of the DE is sought which admits a point symmetry that is not admitted by the DE as a whole. See Clarkson and Mansfield [24] for an algorithmic implementation of the nonclassical method and Bluman and Shtelen [19] for further extensions of the method. Other exact solution methods include the work of Ovsiannikov [50] on partially invariant solutions and Olver [48] on differential constraints. For work on equivalence transformations, see Lisle [43].

1.3 Potential Systems

As already mentioned, the symmetries considered by Lie were point symmetries, where the infinitesimals depend on independent variables x and dependent variables u, and contact symmetries, where the infinitesimals can also depend on first order derivatives of u. Lie theory of DEs is further extended by the consideration of generalized symmetries, where the infinitesimal generators can also depend on the derivatives of dependent variables up to some finite order. Point, contact and generalized symmetries are called local symmetries, since their infinitesimal generators depend only on local properties of solutions. In particular, at any point x, the infinitesimal generators depend only on x and the values of u and its derivatives at that point.

One can further enlarge the class of known symmetries of DEs by considering nonlocal symmetries, which are characterized by infinitesimal generators that are not of local type. For example, if the infinitesimal generator depends on integrals of dependent variables, then the corresponding symmetry is nonlocal. In principle, a DE can admit many nonlocal symmetries, but the fundamental issue is their algorithmic determination. Special and/or heuristic techniques have been employed by Akhatov et al. [1], Konopelchenko and Mokhnachev [34], Kumei [37], Kapcov [32], Pukhnachev [52], and Krasil'shchik and Vinogradov [35, 36] to obtain nonlocal symmetries of DEs.
In particular, the approach of Krasil'shchik and Vinogradov is restricted to PDEs with two independent variables. In this thesis, we will use the potential systems approach of Bluman, Kumei and Reid [16] which is a systematic method for finding nonlocal symmetries of PDEs with two or more independent variables. Nonlocal symmetries of ODEs require a different approach (see Bluman and Reid [17]) and are not considered in this thesis. A simple minded way to find nonlocal symmetries of a given PDE is to apply Lie's algorithm with infinitesimal generators of the form If unsuccessful, then what other forms of infinitesimal generators should one consider next? Chapter 1. Introduction 10 Clearly such an approach is ad hoc and a systematic procedure is needed. In the potential systems approach, one uses a conservation law of the PDE R{u} to form an associated potential system S{u,v} involving the introduction of potentials v. Here, the potential variable v, which is algorithmically determined, replaces the formal variable u_i in the above ad-hoc approach. By their very construction, potentials are nonlocal with respect to the dependent variable u of the original PDE R{u}. Since the solutions of R{u} are nonlocally embedded in the solutions of S{u,v}, studying S{u, v} may lead to new nonlocal information for R{u} and vice versa. Also, this embedding of solutions is not one-to-one so that invertible mappings in (a;, u, u)-space can lead to noninvertible mappings in (x, w)-space and vice versa. In this dissertation we only use the potential system S{u, v} for finding nonlocal symmetries of R{u). The use of potential systems to deduce other types of nonlocal information for R{u} can be found in [3, 4, 18, 20]. Nonlocal symmetries of R{u} may be found as point symmetries of S{u, v} which do not project onto any point symmetry of R{u}. Nonlocal symmetries that arise in this way are called potential symmetries. Being realized as point symmetries of S{u,v}, Lie's algorithm2 can be used to calculate these potential symmetries and, once found, all the applications of point symmetries outlined in the previous section are available. In particular, potential symmetries can lead to new information for R that is not obtainable via local symmetries of R. Applications of potential symmetries include: • Noninvertible mappings of known solutions to new solutions. • New invariant solutions of R. • Exact solutions of new boundary value problems for R. • Linearizations of R through noninvertible transformations. • Nonlocal mappings between PDEs. • Nonlocal conservation laws of PDEs. 2In §1.4, we will have more to say on the use of Lie's algorithm to find point symmetries of potential systems. Chapter 1. Introduction 11 Let us now discuss, through illustrative examples, the construction of potential systems S{u, v}, how their point symmetries can lead to potential symmetries of the original PDE R{u}, and how potential symmetries can lead, to new nonlocal information for R{u}. Construction of Potential Systems The idea of using a conservation law of a PDE to introduce potentials is not new. Applications have concentrated mainly on reducing the original system of PDEs to an equivalent system involving only the potentials. The main advantage of this is that the resulting system contains fewer equations and hence is easier to analyse. 
Example 1.3.1 Potentials in Physics

In physics, electromagnetic radiation is governed by Maxwell's equations in free space,

∂E/∂t = ∇ × B,   ∂B/∂t = −∇ × E,   ∇ · E = 0,   ∇ · B = 0,

where E ∈ ℝ³ is the electric field and B ∈ ℝ³ is the magnetic field. One can introduce a vector potential A such that B = ∇ × A is automatically divergence-free, i.e., ∇ · B = 0. Then obviously ∇ × (∂A/∂t + E) = 0. Consequently, one can introduce a scalar potential φ such that ∂A/∂t + E = ∇φ. Maxwell's equations are then reduced to the following equivalent system (the two remaining equations rewritten in terms of the potentials; the other two are satisfied identically):

∂²A/∂t² = ∇²A − ∇(∇ · A) + ∇(∂φ/∂t),   ∇²φ = ∂(∇ · A)/∂t,

which involves only the potentials A and φ. •

In the potential systems approach one does not seek to eliminate the original dependent variables to form a reduced system involving only the potentials, as was the case in the above example of Maxwell's equations. Rather, one replaces the given conservation law (divergence-free expression) with the equations defining the potentials to form a potential system, as illustrated in the following two examples. The reduced equations can yield useful information for the original PDE [10]. However, from the perspective of finding nonlocal symmetries, the potential system allows these symmetries to be explicitly realized. We shall point this out when we come to discuss potential symmetries in §4.1.1.

Example 1.3.2 Case of two independent variables

In the case of two independent variables, the prototypical example is the nonlinear diffusion equation

u_t = ∂/∂x ( K(u) u_x ),  (1.4)

with corresponding potential system

v_x = u,
v_t = K(u) u_x,  (1.5)

where u is the concentration, K(u) is the concentration-dependent diffusivity and v is a scalar potential. Notice that equating the t-derivative of (1.5a) with the x-derivative of (1.5b) leads to the original PDE (1.4). We say that the original PDE is a differential consequence, or an integrability condition, of the associated potential system. •

Example 1.3.3 Case of three independent variables

In the case of three independent variables, the prototypical example is the nonlinear wave equation

∂/∂t(−u_t) + ∂/∂x(C₁(u) u_x) + ∂/∂y(C₂(u) u_y) = 0,  (1.6)

with corresponding potential system

−u_t − v₃,x + v₂,y = 0,
C₁(u) u_x − v₁,y + v₃,t = 0,  (1.7)
C₂(u) u_y − v₂,t + v₁,x = 0,

where C₁(u) and C₂(u) are the wave speeds, v = (v₁, v₂, v₃) is a vector potential, and v_i,x_j = ∂v_i/∂x_j (x_j = t, x or y). As in the previous case, the original wave equation is a differential consequence of the associated potential system, as can be seen by taking the sum of the t-derivative of (1.7a), the x-derivative of (1.7b) and the y-derivative of (1.7c). •

The precise construction of potential systems is described in §4.1. For now, it suffices to say that the procedure is algorithmic: in [26], Haager et al. have developed a software package which automates the construction process.
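As a small machine check of Example 1.3.2 (a SymPy sketch added here for illustration; it is not part of the thesis), one can verify that eliminating the potential v from (1.5) via the compatibility condition v_xt = v_tx reproduces the nonlinear diffusion equation (1.4); the analogous computation with the three potentials v₁, v₂, v₃ recovers (1.6) from (1.7).

```python
# Illustrative check (assumes SymPy): cross-differentiating the potential
# system (1.5) and equating mixed partials of v recovers the diffusion
# equation (1.4) as a differential consequence.
import sympy as sp

x, t = sp.symbols('x t')
u = sp.Function('u')(x, t)
K = sp.Function('K')                  # arbitrary diffusivity K(u)

vx = u                                # (1.5a): v_x = u
vt = K(u) * u.diff(x)                 # (1.5b): v_t = K(u) u_x

# Compatibility v_xt = v_tx eliminates v:
pde = sp.expand(vx.diff(t) - vt.diff(x))
print(pde)   # u_t - K(u) u_xx - K'(u) u_x**2, i.e. u_t - (K(u) u_x)_x = 0
```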
Potential Symmetries and Applications

Point symmetries of R{u} are of the form

X_R = ξ_R(x, u) ∂_x + η_R(x, u) ∂_u,

whereas point symmetries of S{u, v} are of the form

X_S = ξ_S(x, u, v) ∂_x + η_S(x, u, v) ∂_u + ζ_S(x, u, v) ∂_v.

It turns out that if S{u, v} admits X_S, then R{u} must admit the projected generator

X̃_S = ξ_S(x, u, v) ∂_x + η_S(x, u, v) ∂_u.

By comparing X_R and X̃_S, one can see that a new symmetry has been found for R{u} if (ξ_S, η_S) depends on v. These new symmetries, called potential symmetries, are nonlocal symmetries of R{u} since v is nonlocal with respect to u.

Example 1.3.4 Potential Symmetry

Let R{u} be the nonlinear diffusion equation (1.4) with associated potential system S{u, v}, given by (1.5). If K(u) = exp(a arctan u)/(1 + u²), where a is a constant, then S{u, v} admits the infinitesimal generator [16]

X_S = v ∂_x + at ∂_t − (1 + u²) ∂_u − x ∂_v.

Since the infinitesimal corresponding to x depends on v, we have found a potential symmetry of R{u}. In particular, the flow of X_S corresponds to the symmetry transformation of S{u, v} given by

x̄ = x cos ε + v sin ε,
t̄ = e^{aε} t,
ū = (u cos ε − sin ε)/(cos ε + u sin ε),
v̄ = −x sin ε + v cos ε,

which is defined for all ε ∈ ℝ sufficiently close to zero. This transformation induces a symmetry transformation of R{u} obtained by projection onto (x, u)-space:

x̄ = x cos ε + (∫ u dx) sin ε,   t̄ = e^{aε} t,   ū = (u cos ε − sin ε)/(cos ε + u sin ε).

This induced symmetry transformation is clearly nonlocal. By projecting X_S to (x, u)-space, one obtains the nonlocal infinitesimal generator

X̃_S = (∫ u dx) ∂_x + at ∂_t − (1 + u²) ∂_u,

admitted by R{u}. •
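As a quick consistency check on Example 1.3.4 (a SymPy sketch added for illustration; the code and variable names are not from the thesis), one can verify that the finite transformation above really is the flow of X_S: the derivative of each barred variable with respect to ε must equal the corresponding infinitesimal of X_S evaluated at the barred point.

```python
# Illustrative flow check for Example 1.3.4 (assumes SymPy).
import sympy as sp

x, t, u, v, a, eps = sp.symbols('x t u v a epsilon')

# Finite transformation of S{u, v}
xb = x*sp.cos(eps) + v*sp.sin(eps)
tb = sp.exp(a*eps)*t
ub = (u*sp.cos(eps) - sp.sin(eps))/(sp.cos(eps) + u*sp.sin(eps))
vb = -x*sp.sin(eps) + v*sp.cos(eps)

# d/d(eps) of each barred variable minus the infinitesimal of
# X_S = v d_x + a t d_t - (1 + u^2) d_u - x d_v at the barred point:
checks = [xb.diff(eps) - vb,
          tb.diff(eps) - a*tb,
          ub.diff(eps) + (1 + ub**2),
          vb.diff(eps) + xb]
print([sp.simplify(c) for c in checks])   # [0, 0, 0, 0]
```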
The following example further illustrates why potential systems and potential symmetries are worthy of study.

Example 1.3.5 Noninvertible Linearizations

Consider the nonlinear system of PDEs R{u¹, u²}, given by

u¹_x − u²_t = 0,
u²_t − u¹u² − u¹ − u² = 0,

which describes fluid flow through a reacting medium. These equations, also known as the Thomas equations, were studied by Thomas [64] (see also Whitham [71]). When applying the linearization algorithms [38, 15], one can show that there exists no invertible transformation which linearizes R. Evidently, the point symmetries of R are not sufficiently rich. However, a noninvertible linearization of R can be found through a potential system of R. Using the first equation of R, which is a conservation law, one can form the associated system S{u¹, u², v}, given by

v_t = u¹,   v_x = u²,   u²_t − u¹u² − u¹ − u² = 0.

One can show that S admits an infinite-parameter family of point transformations with infinitesimal generator

X = e^v { (ψ_t + u¹ψ) ∂_{u¹} + (ψ_x + u²ψ) ∂_{u²} + ψ ∂_v },

where ψ(x, t) is an arbitrary function satisfying the linear PDE

ψ_xt − ψ_x − ψ_t = 0.

Consequently, the linearization algorithm leads to the following invertible mapping:

z₁ = x,   z₂ = t,   w¹ = −e^{−v},   w² = e^{−v} u²,   w³ = e^{−v} u¹,  (1.8)

which transforms S to the linear system

w¹_{z₁} = w²,   w¹_{z₂} = w³,   w²_{z₂} = w² + w³.

Moreover, projecting the transformation (1.8) to (x, u)-space, one obtains the noninvertible transformation

z₁ = x,   z₂ = t,   w¹ = −e^{−∫u¹ dt},   w² = u² e^{−∫u¹ dt},

which maps R to the linear system

w¹_{z₁} = w²,   w²_{z₂} = w² + w¹_{z₂}.

Hence, we have found a noninvertible linearization of R through potential symmetries of R. •

1.4 A New Symmetry Algorithm For Systems of PDEs

Lie's algorithm successfully leads to the point symmetries admitted by scalar PDEs and by systems of PDEs of Cauchy-Kovalevskaya type. However, we have encountered problems when applying this algorithm to more general systems of PDEs, such as potential systems, and have had to derive a new symmetry algorithm to overcome these problems. To do this requires us to synthesize ideas from the Formal Theory of Integrability and the existing Lie theory of DEs. We now sketch out the main points.

Recall that a symmetry of an n-th order system R of PDEs is a transformation mapping any solution to another solution of R, i.e., a transformation that leaves the solution space of R invariant. Since the solution space of R is not known explicitly in general, Lie's algorithm deals solely with the equations of R. The equations of R define a locus of points (subvariety) in the space of independent variables, dependent variables and all derivatives of dependent variables up to order n (n-jet space, or n-th extended space). Since solutions of R lie in the locus of R, one arrives at the symmetries of the solution space of R by finding the symmetries of the locus of R. There are two important caveats to this.

The first caveat is that symmetries of the locus of R are only guaranteed to map points in the locus to other points in the locus: they may not map functions to functions. As such, while they are guaranteed to map solutions of R to other points in the locus of R, they may not map solutions to other solutions of R. One ensures that solutions of R are mapped to other solutions by seeking symmetries that belong to the class of extended point transformations.

The second caveat is that one may not obtain the full set of symmetries of R unless certain differential consequences of the equations of R are used. Appending differential consequences to the equations of R leads to a new system R̃ which has the same solution space as R, but a different locus. As such, the symmetries of R and R̃ are the same, but the symmetries of their respective loci may be different. It is true that any such system R̃ can be used to find symmetries of R; however, in general the full set of point symmetries of R may not be uncovered. In short, to obtain all the point symmetries of R, one must apply Lie's algorithm to a locally solvable system R̃, i.e., R̃ must be such that through any point in its locus there passes a solution of R̃. If R̃ is not locally solvable, symmetries of its locus are required to leave invariant not only the solutions of R̃, but also regions of the locus through which there are no solutions. This leads to stronger conditions than necessary on the symmetry transformations.

For analytic systems of PDEs (by staying in the analytic regime, we avoid certain smooth DEs for which there exist no solutions; see Lewy's counterexample [39]), an n-th order system of PDEs is locally solvable if the system contains all its differential consequences up to order n. It turns out that scalar PDEs and systems of PDEs of Cauchy-Kovalevskaya type are locally solvable and, hence, Lie's algorithm successfully leads to all the point symmetries admitted by such PDEs. The situation for more general systems of PDEs is not so simple, as we now illustrate in the following two examples.

Example 1.4.1 Let R be the system of second order PDEs given by

u_xx = v,
u_t = v.  (1.9)

By differentiating (1.9b), we obtain the second order differential consequences

u_xt = v_x,
u_tt = v_t.  (1.10)

However, we have not yet achieved local solvability. There is one further second order differential consequence, obtained by equating the mixed partial derivatives u_txx = u_xxt, namely

v_xx = v_t.  (1.11)

The system R̃ given by (1.9), (1.10) and (1.11) is locally solvable, since there are no further differential consequences up to order two. Hence, to find all the point symmetries of R, one must apply Lie's algorithm to the system R̃. •
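To make the bookkeeping in Example 1.4.1 concrete, here is a minimal SymPy sketch (added for illustration; not part of the thesis) that generates the second order differential consequences (1.10) by differentiation and recovers the integrability condition (1.11) by equating mixed partial derivatives.

```python
# Illustrative check of Example 1.4.1 (assumes SymPy).
import sympy as sp

x, t = sp.symbols('x t')
u = sp.Function('u')(x, t)
v = sp.Function('v')(x, t)

# System (1.9), written as expressions that must vanish:
eq_a = u.diff(x, 2) - v          # u_xx - v = 0
eq_b = u.diff(t) - v             # u_t  - v = 0

# Differentiating (1.9b) gives the consequences (1.10): u_xt = v_x, u_tt = v_t
consequences = [eq_b.diff(x), eq_b.diff(t)]
print(consequences)

# Equating mixed partials u_txx = u_xxt, i.e. D_x^2(1.9b) - D_t(1.9a),
# eliminates u and yields the integrability condition (1.11): v_xx = v_t
print(sp.simplify(eq_b.diff(x, 2) - eq_a.diff(t)))   # v_t - v_xx
```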
Here is an example of a system of PDEs where local solvability cannot be achieved at any finite order.

Example 1.4.2 Consider the third order scalar PDE

−D_t F(x, t, u^(2)) + D_x G(x, t, u^(2)) = 0,  (1.12)

where u^(2) denotes u together with its derivatives up to second order, which has the associated potential system S, given by

v_x = F,
v_t = G,  (1.13)

where F and G are fixed, but not explicitly given. When looking for all the point symmetries of S, which is of order two, one must first make the system locally solvable by uncovering all its differential consequences up to order two. Such differential consequences can occur: for example, when (F, G) = (u_xx + u, u_xx), S has the second order differential consequence v_xx = v_tx + u_x. Unfortunately, since F and G are not explicitly given (such a situation will arise in §5.1 when we study symmetries of potential systems), one cannot deduce all differential consequences of S up to order two. The best one can do is to differentiate the equations of S to obtain

v_xx = D_x F,
v_xt = D_t F,  (1.14)
v_tt = D_t G,

together with the original scalar PDE (1.12), which arises through the compatibility condition v_xt = v_tx. The system given by (1.12), (1.13) and (1.14) is of third order, and to achieve local solvability one must determine all of its differential consequences up to order three. But this is not possible, for the same reason that S cannot be made locally solvable. The problem persists after any number of differentiations of the equations of S; consequently, S cannot be made locally solvable at any finite order. As will be shown in §3.2, even the infinite system obtained by differentiating the equations (1.12) and (1.13) to all orders is not locally solvable. However, it turns out that such an infinite system satisfies the weaker property of analytic local solvability, and this will be sufficient for our purposes. Even so, the application of Lie's algorithm to such an infinite system is clearly not feasible. •

The above example illustrates the problems that one may encounter when applying Lie's algorithm to find symmetries of systems of PDEs. In §3.3, we show how one can relax the local solvability requirement. Essentially, we show that the symmetry conditions for any locally solvable system of PDEs can be reduced to an equivalent set of conditions involving significantly fewer equations. In the new symmetry algorithm that results, only the equations of the given system of PDEs are required at the first step. At a later step in the algorithm (the substitution step), only a finite number of differential consequences of the system are required.

For scalar PDEs and systems of PDEs of Cauchy-Kovalevskaya type, our symmetry algorithm is essentially the same as Lie's algorithm. For more general systems of PDEs, our algorithm is much more efficient: when finding symmetries of the system (1.13), only a finite number of its differential consequences are required, whereas Lie's algorithm requires the infinite system described in Example 1.4.2. Even when finding symmetries of the system (1.9), Lie's algorithm requires all the equations of the locally solvable system (1.9), (1.10) and (1.11) from the very beginning, whereas our symmetry algorithm does not need to use the differential consequences (1.10) and (1.11) until the less computationally intensive substitution step.

1.5 Outline of Thesis

The following is assumed throughout the thesis.

Blanket locality and analyticity assumption: We always assume that all differential equations, their solutions and mappings are local analytic functions of their arguments. For simplicity of exposition we will discuss local properties using local coordinates.

This blanket analyticity assumption may be relaxed in some cases. Where it is crucial to have analyticity, we shall explicitly state it.

We anticipate that this dissertation will be of interest to at least three different audiences. One such audience will be interested mainly in the potential systems approach for finding nonlocal symmetries of PDEs. Another audience will be interested in our extension of Lie's
This blanket analyticity assumption may be relaxed in some cases. Where it is crucial to have analyticity, we shall explicitly state it. We anticipate that this dissertation will be of interest to at least three different audiences. One such audience will be interested mainly in the potential systems approach for finding nonlocal symmetries of PDEs. Another audience will be interested in our extension of Lie's Chapter 1. Introduction 20 algorithm and its theoretical basis. The third audience will be researchers in the field of sym-bolic computation who will be interested in the algorithms of this dissertation as well as their computer implementations. To help each reader access the specific information they need, we feel it essential to provide a 'road map' designed for each of these three audiences. We will do this after giving a brief description of each chapter. A more detailed description of the contents of each chapter is provided at the beginning of each chapter. The standard theory behind Lie's algorithm is presented in §2. The symmetry transforma-tions we consider belong to the class of transformations associated with flows of vector fields. We seek such transformations which leave invariant the solution space of the given system R of PDEs. As discussed in §1.4, one arrives at the symmetries of R through the symmetries of the locus of R, by restricting to the class of extended point transformations and by requiring that R be locally solvable. As such, the theory of extended point transformations and their corresponding extended vector fields is provided. (The issue of local solvability will be tackled in §3.) Symmetries of the locus are given by vector fields which are tangent to the locus. By restricting to extended vector fields the associated tangency conditions, called the infinitesimal symmetry conditions for R, lead to the required symmetries of R. The resulting algorithm is called Lie's algorithm. A discussion of the various other symmetry formulations, other algo-rithms and their shortcomings is provided. In particular, we point out that a commonly used and accepted algorithm for the calculation of symmetries of systems of PDEs appears to lack theoretical justification. In §3, a new symmetry algorithm which extends Lie's algorithm to systems of PDEs is presented. In order to do this, there are two questions to be answered. Firstly, how does one achieve local solvability in general? Secondly, how can one find the point symmetries admitted by systems of PDEs, such as (1.13), whose (analytic) locally solvable form consists of an infinite number of equations? The Formal Theory of Integrability of Riquier-Janet answers the first question. Here we follow the more efficient approach of Reid. By a finite number of operations involving differentiations, back substitutions and applications of the Implicit Function Theorem, Chapter 1. Introduction 21 a, prolonged standard form is constructed. We show that for any system of PDEs, it is the associated prolonged standard form for which Lie's algorithm applies. To address the second question, we show how the infinitesimal symmetry conditions for a prolonged standard form can be reduced to an equivalent set of symmetry conditions involving significantly fewer equations. This leads to a new symmetry algorithm which is more efficient than Lie's algorithm. If the starting system of PDEs consists of a finite number of equations, then only a finite number of equations from the prolonged standard form is required in this new algorithm. 
This is true even if the prolonged standard form comprises of an infinite number of equations. In §4, we present the mathematical framework of the potential systems approach for finding nonlocal symmetries of systems of PDEs. We provide details of how to find conservation laws, how to construct potential systems, how to delineate which of these systems are useful for finding potential symmetries, and how one can iterate this process to obtain higher generation useful potential systems. A complete potential systems analysis of the nonlinear diffusion equation is provided which shows how potential symmetries can arise. The construction of conservation laws is through linear combinations, involving coefficients called factors, of the equations of the system. We show how these factors can be used to derive necessary conditions for the linearization of systems of PDEs. To date, all examples of potential symmetries involve scalar PDEs of order two and systems of PDEs of order one. In §5, we tackle the problem of finding higher order scalar and systems of PDEs admitting potential symmetries. Preliminary work on this problem appears in [51], where necessary conditions for such PDEs are claimed, but where no examples were given. It turns out that there is an error in their claims, since they neglect to form a locally solvable system before applying Lie's algorithm. We correct their claims using the new symmetry algorithm of §3. We then proceed to construct large classes of higher order scalar and systems of PDEs which admit potential symmetries. Chapter 1. Introduction 22 Our results are summarized in §6, where possible future directions for this research are also discussed. 1.5.1 Specific Guide to Thesis The reader, who is interested only in the potential systems approach and who is already familiar with Lie's algorithm, can start immediately at §4 and read on. However, in §5 we make extensive use of our new symmetry algorithm. Unless the reader is willing to read §2 and §3 for the complete theory, the reader is advised to do the following: Read §1.4 to get the basic concept of local solvability and the need of our new symmetry algorithm. Look at Theorem 3.3.3 which is the basis of our new symmetry algorithm, given by Algorithm 3.3.4, as well as the example that follows this theorem. The reader should be familiar with all aspects of Algorithm 3.3.4 except for the use of the prolonged standard form in the substitution step (step 2). Just think of the prolonged standard form as providing all necessary substitutions that are derived from the equations of the given system. The algorithms for achieving a prolonged standard form are summarized in Appendix A. However, the reader should be able to follow the calculations of §5 without reading Appendix A. If the reader is still unconvinced for the need of the new symmetry algorithm, consider this. Any algorithm may lead to infinitesimal generators that can be verified, by other means, as symmetries of the given system of PDEs. But the point is, where is the guarantee that the algorithm used leads to the full set of the admitted infinitesimal generators? Through Theorem3.3.3, we provide a sound theoretical basis for our algorithm. The reader who is interested in our new symmetry algorithm and the theory behind it, should read §2 and §3. The proof of Theorem 3.3.3, which our new symmetry algorithm relies on, is of a technical nature and is given in Appendix C. 
For applications of our symmetry algorithm, see §5, where the symmetries of potential systems are investigated. Much of our effort here is spent on deriving the prolonged standard forms, which is an essential ingredient Chapter 1. Introduction 23 in the new symmetry algorithm, for the given potential systems. Read §1.3 to find out how potential systems are constructed and why they are important to study. We also direct the reader to Definition 4.0.1 and Theorem E.0.1, where the prolonged standard form has been used to define local symmetries and to state the Frechet formulation for finding them, respectively. The reader interested in algorithms and their computer implementations is directed to Lie's algorithm given by Algorithm 2.3.8, to a commonly used variant of Lie's algorithm given by Algorithm 2.3.9, and to our new algorithm given by Algorithm 3.3.4. Notice that the new algorithm does not require the given system to be in locally solvable form. Also, notice that the substitution step (step 2), which is required in each algorithm, has been made more explicit (and more easy to implement on a computer) in the new algorithm, through the use of the prolonged standard form. The algorithms for achieving prolonged standard form are given in Appendix A. If unfamiliar with the Formal Theory of Integrability of Riquier-Janet and the improved algorithms of Reid, then the reader is referred to §3.1. See Definition 4.0.1, Theorem E.0.1 and Theorem 4.2.5, where the prolonged standard form has been used to define local symmetries, to state the Frechet formulation for finding them, and to state the adjoint theorem for finding factors leading to conservation laws. These particular formulations can lead to improved algorithms for finding local symmetries and factors. Lastly, we point out that a computer package [26] now exists which automates various aspects of the potential systems approach. Chapter 2 Lie 's Algorithm In this chapter, we develop the standard theory behind Lie's algorithm which is used to de-termine all the point symmetries of a given PDE. In §2.1, we study transformation groups. The transformations that Lie considered are those represented by the flows of (analytic) in-finitesimal generators (vector fields). Given an infinitesimal generator, the corresponding set of transformations, which satisfy the group axioms, is called a one-parameter Lie group of point transformations. A set of infinitesimal generators generates a multi-parameter Lie group if and only if they form a Lie algebra. In §2.1.4, the infinitesimal generators admitted by a system of algebraic equations are derived. One requires that the infinitesimal generators are tangent to the locus of the system and this leads to the algebraic infinitesimal symmetry conditions. The n-jet (extended) space, i.e., the space of independent variables, dependent variables and all derivatives of the dependent variables up to order n, is described in §2.2. The locus of a system of DEs is the set of points in 7i-jet space which satisfies the equations of the system. As mentioned in §1.4, the symmetries of the locus lead to the symmetries of the system of DEs, provided that the system is locally solvable and the symmetry transformations map functions to functions. To ensure the latter, one restricts oneself to Lie groups of extended point transformations which are the flows of vector fields, called extended infinitesimal generators, defined in n-jet space. In §2.3, we derive the symmetry conditions for a system of PDEs. 
As in the case of algebraic systems, one requires the extended infinitesimal generators to be tangent to the locus of the system. This leads to Lie's algorithm for determining the point symmetries admitted by the 24 Chapter 2. Lie's Algorithm 25 system of PDEs. A discussion of the various other forms of the commonly used symmetry conditions and algorithms together with their theoretical shortcomings is provided. The local solvability requirement of Lie's algorithm can lead to problems and, hence, an extension of Lie's algorithm to overcome these problems is needed. 2.1 Transformation Groups Definition 2.1.1 Let M = 1RP, with coordinates x — (xi,X2, • • • ,xp). A point transformation of a space M is an (analytic) (C°°) mapping x = T(X) such that r is one-to-one and onto. The transformation corresponding to the inverse of r is denoted by r _ 1 . More generally, one can aUow M to be any differentiable manifold1 with local coordinates x, and r defined only locally so that the rotations Rs of Example 1.2.1 are also considered as transformations. In this thesis we shall not always explicitly emphasize the local nature of our statements, but it will always be assumed. Also, we will be dealing almost exclusively with one-parameter Lie groups of transformations and as such the unqualified term 'transformation' will aways be taken to be a 'point transformation'. We will see that these transformations are essentially the flows of vector fields on M. Given such a characterization, we will be able to write down the symmetry conditions for algebraic systems of equations in terms of these vector fields. This will then pave the way for determining the infinitesimal symmetry conditions of DEs, given in the next section-. Let us first review the more familiar concept of flows of vector fields. 2.1.1 Flows of Vector Fields Definition 2.1.2 Let M be a differentiable manifold and TM\X be the space of tangent vectors to M at x. A vector field X on M assigns a tangent vector X^ € TM\X to each point x £ M. l rThe precise definition of an r-dimensional differentiable manifold M can be found in [69, 47]. It shall be sufficient for us to treat M as a space made up of subsets of ]Rr that are patched together in a suitable manner. Since we will always work locally, the global topology of a manifold will not affect us. Chapter 2. Lie's Algorithm 26 An integral curve of X through x = x0 is a parametrized curve x = 4>x0(£) passing through x0 and whose velocity 4>Xo{E) coincides with the vector field X, at any point along the path: <^0(0) = zo, ' ^ 0 ( £ ) = X|0io(e), V e G / , (2.1) where / is some open interval of R containing the origin2. Different integral curves through a given point can have different domains of definitions / and the unique one corresponding to the maximum domain of definition is called the maximal integral curve through the point. Example 2.1.3 An example of an integral curve is the path traced out by a particle that is being carried along by a river. Here, M is the surface of the river and the vector field X at any point x G M is given by the surface velocity of the river at that point. Suppose that the flow is steady. Then these velocities, and hence X, do not change with time. Let e be the time, XQ be the location of a particle at e = 0 and (f>Xo(e) be the path traced out by the particle over time. Then 4>Xo(e), e G / , is an integral curve of X through x = XQ for some open interval / containing the origin. 
Assuming that the river is infinitely long with no sinks or sources, the maximum integral curve of X through x = XQ is the unique integral curve where / = R . • We denote the maximal integral curve of X passing through x G M by $(e,a;) and call $ the flow generated by X. The flow has the properties: *(«,*(£, a;)) = V(6 + e,x), x G M, (2.2) for all <5,£ G / such that both sides are defined (one cannot flow past a sink), V(0,x) = x, and £ * ( e , x) = x| . (2.3) Here is how flows of vector fields give rise to transformations on the underlying space. Let Te(x) = ty(e,x). Then for each fixed e, Ts : M —> M defines a transformation on M. To see this, recall the analogy of a vector field as representing a steady state surface velocity of a river. To determine how rs transforms any given x G M, just imagine dropping a particle on the 2We further emphasize that (2.1b) is not only satisfied at the point x = xo (so that <^ a:o(0) = X| X o), but also at all other points along the curve x = <t>x0(e), Ve € I. Chapter 2. Lie's Algorithm 27 surface of the river at x and letting it flow for a time e. The resulting location of the particle is re(x). Different values of £ lead to different transformations T£ and the set of all these is called a one-parameter family of transformations. In terms of the local coordinates x, a vector field has the form • e 2 ( x ) ^ - +... + e(x)^- , (2.4) dx2 x dxp x where (d/dXi)\x, i = 1, • • • ,p, forms a basis for TM at x. Henceforth, we shall drop the symbol \ x and it should be clear, from the context, which point x or tangent space TM\X one is referring to. We shall also use dXt to denote d/dXi and assume £ are analytic functions of x so that we deal exclusively with analytic vector fields. Using (2.4), it is convenient to also express (2.1) in terms of local coordinates: Theorem 2.1.4 The one-parameter family of transformations x = T£(X) associated with an analytic vector field X , given by (2-4), is the unique solution to the autonomous system of ordinary differential equations (ODEs) ^1 = C(x), 2(0) = x. (2.5) Solving (2.5) to obtain the transformations TS is often referred to as exponentiating X . In particular, using (2.5) one can show that the power series expansion of r £ is given by 2 TS(X) = exp(sX) x= (l + eX + y X 2 + • • •) x. Example 2.1.5 Let M = 1R with coordinates x and consider the vector field X = x2dx. Then the corresponding one parameter family of transformations is given by re(x) = exp(£X)z = (2.6) Notice that when x > 0, then r £ is defined only for e 6 Ix where Ix = (-oo, 1/x). Q Using the properties (2.2) and (2.3) of flows, one can show that the transformations r£ satisfy the axioms o f a group. In particular, the identity element is To, the binary operation between Chapter 2. Lie's Algorithm 28 transformations is given by composition, TEOT$ = T$+E, which is clearly associative. Lastly the inverse of r £ is just r_ £. Let us now explore the group structure of such transformations. 2.1.2 Lie Groups Definition 2.1.6 A group is a set G together with a binary operation * : G X G —• G satisfying the following axioms: Associativity: For all g, h, k £ G, g * (h * k) = (g * h) * k. Identity Element: There exists an element e £ G, called the identity element, such that for all g £ G, e*g — g = g*e. Inverses: For each g £ G, there exists an inverse, denoted by such that g * g~~l = e = g~x * g. Example 2.1.7 An example of a group is the set of integers Z together with addition as the associative binary operation. 
The identity element is 0 and the inverse of each integer j £ Z is —j. One often denotes this group by (Z, +). Other examples of groups are (H, +), the set of real numbers with addition, and (1R+,.-), the set of positive real numbers with multiplication. • Unlike, (Z, +), the group (R, +) has a binary operation which is an analytic map + : E x E - > R . Such groups are called Lie groups. Definition 2.1.8 An r-parameter Lie group is a group (G,*) which is also an r-dimensional differentiable manifold M such that both the binary operation * and the inversion map e •-»• £_1 are analytic. Henceforth, M will denote an p-dimensional differentiable manifold. Definition 2.1.9 Let (G, *) be an r-parameter Lie group. An r-parameter Lie Group of trans-formations is a collection of transformations Q on M , together with a map r : G —> G, which maps e to r e , such that Chapter 2. Lie's Algorithm 29 • r e is the identity map; • TS o TE = T5*£; • re(x) is analytic in x and e; • The binary operation * is analytic in both of its arguments. In general, we are interested in local Lie groups of transformations where the transformations T£ may only be denned when e is sufficiently close to the identity e, which is taken to be zero. Without loss of generality, we assume that any transformations considered are sufficiently close to the identity transformation. The need for the analyticity requirements for Lie groups of transformations is so that we can relate these to flows of vector fields. This will become clear shortly. We have mentioned that flows of vector fields lead to families of transformations which form a local Lie group. In this regard, one can make a more precise statement: Theorem 2.1.10 Given any vector field X on M, the transformations TS = exp(eX) form a one-parameter local Lie group of transformations acting on M. As already mentioned, one-parameter Lie groups of transformations arising as flows of a vector fields have a very simple composition law (cf. (2.2)). However, Lie groups of transformations can have more complicated composition laws in general. Example 2.1.11 An example of a one-parameter Lie group of transformations on M = R 2 is given by (2.7) The underlying Lie group is (7, *), where the group binary operation is given by £i * e2 = e\ + e2 + £\£2, £i,£2 e I, (2.8) Chapter 2. Lie's Algorithm 30 the identity is zero, and the inverse e 1 = —s/(l + e) for all £ € J . Clearly the axioms of Definition 2.1.9 are satisfied for r e . • Given any one-parameter local Lie group Q of transformations, Lie [40] proved that there always exists a re-parametrization (a particular choice of local coordinates for the underlying Lie group G) such that Q becomes equivalent to the flow of some vector field: Theorem 2.1.12 Consider a one-parameter local Lie group of transformations Q on M, given by x = Te(X) where TE — (r*, • • •, rj). Form the associated vector field X = £%(x) dXi, where The proof of this theorem, which can be found in [15], relies on the analyticity assumptions in Definition 2.1.9 in order to construct certain power series expansions. The re-parametrization £ = s{d) is a different choice of local coordinates for the Lie group G, such that the group binary operation in the original coordinates e becomes addition in the new coordinates 6. Definition 2.1.13 Consider a one-parameter Lie group of transformations re on M with in-finitesimals (£ 1, • • - ,£ p ) , given by (2.9). 
Each is called the infinitesimal of X{ and the corre-sponding vector field X = £,ldXi is called the infinitesimal generator of rE. Example 2.1.14 The infinitesimal generator associated with the one-parameter Lie group of transformations (2.7) is given by X = xdx + 2ydy. The flow of X yields the one-parameter Lie group of transformations (2.9) Then there exists a reparametrization e = e(6) such that re(s)(x) = exp(<5X)a\ S G (—oo, oo). The re-parametrization that relates r £ to rg is given by £ = e - 1. • Theorem 2.1.12 can be generalized to an r-parameter group of transformations Q as follows. Let the Lie group (G, *) have local coordinates e — (£i, • • •, er) G G and r £ G Q- Form the set Chapter 2. Lie's Algorithm 31 of r associated infinitesimal generators a = !,•••, r, where the infinitesimals £ t a are given by Each transformation r £ £ £/ can be realized as either exp(AiXi)exp(A 2X 2) • • -exp(A rX r), or exp(^X), (2.10) where X = / i i X i + /x 2 X 2 H h /A-X. (2.11) for all real numbers Xi,8,pa, with Xi,6 sufficiently small. Hence, the r infinitesimal generators X a contain all the information needed to reconstruct the r-parameter Lie group of transforma-tions Q. (See [15] for more details.) Henceforth we will refer to any Lie group of transformations as simply a Lie group. It is natural to consider the r-dimensional vector space £ with basis given by the infinitesimal generators { X i , - - - , X r } over the real numbers since for any X £ £, given by (2.11), the corresponding transformation (2.10) lies in Q for sufficiently small 6. It turns out that C is endowed with an additional algebraic structure which we shall now investigate. 2.1.3 Lie Algebras Definition 2.1.15 A Lie Algebra £ is a vector space over some field T with an additional binary operation (the commutator), such that for all a, b £ T and X , Y , Z £ £, the following Chapter 2. Lie's Algorithm 32 properties are satisfied: Closure. [ X , Y ] e £ ; Bilinearity. [ X , aY + b Z ] = a [ X , Y ] + b [ X , Z ]; Anticommutativity. [ X , Y ] = — [ Y , X ] ; Jacobi Identity. [X, [ Y , Z ] ] + [ Y , [ Z, X ] ] + [ Z, [ X , Y ] ] = 0. Given any finite dimensional Lie algebra C with basis { X i , • • •, X r } , Lie showed that the closure property leads to [X a ,X /3 ] = C , 2 / 3 X 7 where a, /?,7 = 1, • • •, r, for some constants C?p, called the structure constants of £. The anticommutativity property leads to and the Jacobi identity leads to pp pS , pp , pp fi6 _ n We will be confining ourselves to the case where £ is the vector space given by a basis of vector fields { X i , • • •, X r } , J- is the field of real numbers and the commutator is defined as follows: Definition 2.1.16 Let X = ~ and Y = j1 be two vector fields. Their commutator is the vector field [ X , Y ] = X Y - Y X = ( ^ g 7 - 7 J S ; ) ^ . (2.13) Can any set of vector fields { X j , • • •, X r } define a Lie algebra? One can easily verify that the commutator bracket (2.13), by its very definition, satisfies the bilinearity and anticommutativity properties as well as the Jacobi identity. However, it is not true in general that the closure property holds for any set of vector fields. For example, { X i = dx, X 2 = xdy} is not closed under the commutator bracket. However the closure property does hold for infinitesimal generators of an r-parameter Lie group. Theorem 2.1.17 The infinitesimal generators { X a } , a = l , - - - , r , of an r-parameter Lie group Q form an r-dimensional Lie algebra Cr. Chapter 2. 
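As an aside, the bracket (2.13) is easy to compute mechanically, so checking closure for a proposed set of vector fields is routine. The minimal sympy sketch below (illustrative only) reproduces the commutator of the pair X1 = dx, X2 = x dy mentioned above; the result is dy, which is not a real linear combination of X1 and X2, so this pair indeed fails to be closed. For the infinitesimal generators of an r-parameter Lie group, by contrast, closure is guaranteed by Theorem 2.1.17.

    import sympy as sp

    x, y = sp.symbols('x y')

    def commutator(X, Y, coords):
        # coefficient functions of [X, Y] = XY - YX, following (2.13)
        return [sp.simplify(sum(X[j] * sp.diff(Y[i], coords[j])
                                - Y[j] * sp.diff(X[i], coords[j])
                                for j in range(len(coords))))
                for i in range(len(coords))]

    X1 = [sp.Integer(1), sp.Integer(0)]      # X1 = d/dx
    X2 = [sp.Integer(0), x]                  # X2 = x d/dy
    print(commutator(X1, X2, [x, y]))        # [0, 1], i.e. [X1, X2] = d/dy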
Lie's Algorithm 33 This follows from the fact that Q is closed under composition of its members. In particular, given any two infinitesimal generators X a and of Q, their corresponding transformations exp(£X a) and exp(£Xig) both belong to Q and hence so does T£ = exp(- v / eX / 3 ) e x p ( - v / i X „ ) exp( v /eX i 9) exp( v / £X a ) , for £ sufficiently small. By Theorem 2.1.12 the infinitesimal generator corresponding to T£ is given by E = Q + = [ X a . X / j ] -Consequently, the infinitesimal generators of Q are closed under commutation. 2.1.4 Symmetries of Algebraic Equations Consider the system of algebraic equations defined on M: Fli{x) = 0, /i = l , (2.14) where I < p and F £ C°°(M). The solutions of (2.14) are given by the locus of points gF = {xe M: F(x) = 0} C M. Definition 2.1.18 A symmetry of a system of equations (2.14) is a transformation r mapping any solution to another solution of the system: T{QF) C QF, where gF is the locus of solutions of the system. A Lie group Q acting on M is a symmetry group of the system, if and only if for all r e £ Q, re is a symmetry of the system, for e sufficiently small. In other words, Q is a symmetry group of the system if and only if for all T£ £ Q, Ffi{re[x)) = 0 whenever x £ gF. These conditions are, in general, nonlinear in T£ and hence very difficult to analyse. Using the infinitesimal generators of Q, an equivalent statement for these symmetry conditions can Chapter 2. Lie's Algorithm 34 be made which turns out to be more useful in practice. As we shall see, the infinitesimal formulation of the symmetry conditions will be indispensable in the case of DEs. This was the fundamental insight due to Lie [40]. Before stating the infinitesimal symmetry conditions, one must first define what is meant by a system of equations having maximal rank. Definition 2.1.19 The system of equations (2.14) on M is of maximal rank if and only if the Jacobian matrix (dF^/dxi) is of rank / at every solution x £ QF. If a system is of maximal rank, then it is said to satisfy the maximal rank condition. The following theorem and the subsequent example are from [47, §2.1]. Theorem 2.1.20 (Infinitesimal Symmetry Conditions) Consider the system (2.14) with locus of solutions Qf and which is of maximal rank. Then a Lie group Q acting on M is a symmetry group of the system if and only if every infinitesimal generator X of Q satisfies X[Fll(x)] = 0, fi = 1, VxegF. (2.15) Example 2.1.21 Let Q = 5(9(2) be the one parameter rotation group in the plane M = R 2 , with infinitesimal generator X = — ydx + xdy and consider the algebraic equation F(x,y) = x4-rx2y2 + y 2 - l = 0. Clearly (2.15) holds since X(F) = -2xy(x2 + l)~1F(x,y). In addition, the maximal rank condition is satisfied since the Jacobian (^^) = (4x3 + 2xy2'2x2y + 2y^ vanishes only at (x,y) = (0,0) which is not a solution of F(x,y) = 0. By Theorem2.1.20, 50(2) is a symmetry group of F(x,y) = 0. Indeed, F{x,y) = (x2 + l){x2 + y 2 - l ) = 0, has 5 1 , the unit circle with origin for centre, as the solution set; rotations in the plane map points in 5 1 to other points in 5 1 . • Chapter 2. Lie's Algorithm 35 Example 2.1.22 To illustrate the importance of the maximal rank condition, let M = JR., Q be the one-parameter group of translations with infinitesimal generator X = dx and consider the equation F(x) = (x — l ) 2 = O'which has locus of solutions Qf = {x : x = 1). We have X ( i ? ) = 2(x — 1) so that (2.15) is satisfied. However, we know for certain that Q is not a symmetry group of QF. 
The'problem is that the Jacobian dF/dx = 2(x - 1) vanishes on QF, SO that the maximal rank condition does not hold and Theorem 2.1.20 does not apply. Of course, in this example one could have simplified the original equation to x — 1 = 0 since both have the same locus of solutions QF. The new equation does satisfy the maximal rank condition, but now (2.15) does not hold and one concludes that Q is not a symmetry group of F(x) = 0. However, the maximal rank condition is invaluable when considering more complicated systems where it may be unclear whether such simplifications are needed in order to apply Theorem 2.1.20. • The following theorem will prove useful later: Theorem 2.1.23 Consider the system (2.14) with solutions QF C M and symmetry group Q. Suppose H is a function on M such that H(x) = 0, Mx £ QF. Then X[H(x)} = 0, Vx € QF. Proof. Since Q is a symmetry group of the system, given any x £ gF and T£ £ Q, we have rE(x) £ QF for £ sufficiently small. In other words, H(re(x)) = 0. Now r £ is given by exponentiating some infinitesimal generator X in the Lie algebra of Q. Hence, we have 0 = H(T£(X)) = #(exp(eX) x), Vx £ QF. Differentiating this with respect to s and setting e = 0 leads to the desired result. • 2.2 Differential Equations It is natural to consider a system R of n-th order DEs as a system of algebraic equations in n-jet space, i.e., the space of independent variables, dependent variables and derivatives of dependent variables up to order n. A principle motivation for doing this is that the symmetry Chapter 2. Lie's Algorithm 36 conditions for algebraic equations have already been determined in the last section. How does the group thus calculated relate to the symmetries of i?? By passing to the algebraic regime, we lose the various relationships between a dependent variable and its derivatives. Without any a priori restrictions on the transformations considered, the algebraic symmetry conditions may result in transformations that do not map analytic functions to analytic functions, let alone mapping solutions to other solutions of R. Consequently, we are led to consider only certain types of transformations (Lie groups of extended transformations) when applying the algebraic symmetry conditions. It turns out that we will also require the system to be locally solvable. Let us first review some elements of n-jet space. 2.2.1 Jet Space Let X = H p , with coordinates x = (xi,x2, • • •, xp), be the space representing the independent variables, and let U = R 9 , with coordinates u — (u 1, u2, • • •, uq), be the space representing the dependent variables. The space M = X x U is called the base space. Of particular interest are functions u — f(x), which are identified with their graphs Tf = {(x,f(x)): xett} C XxU, where Q, C X is the domain of / . Consider the fc-th order partial derivatives of f(x), given by dkfa(x) djfa(x) = dxj1 dxj2 • • • dxjk where the multi-index J = ( j i , • • - ,jk) is an unordered fc-tuple of integers 1 < ji,- • - ,jk < P and the order of J , denoted by | J | , is the number of elements in J (k in this case). There are q • pk such partial derivatives, where p+k - l \ n=\ * y In order to build a space to represent these derivatives of / , one extends or prolongs the base space as follows. Let Uk = R 9 P f c have coordinates Uj, for all a = 1, • • - ,q and for all J = • •- > jk)i so that — U x Ux X • • • X Un represents the space of all partial derivatives of u = f(x) up to order n. Also, let u ( n ) denote a typical point in C/ ( n ) . 
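The count q · p_k is easy to verify by brute force. The short sketch below (standard-library Python, illustrative only) enumerates the unordered multi-indices J of order k for p = 2, checks the binomial formula for p_k, and sums the counts over k = 0, ..., n to obtain the dimension of U^(n) discussed next; for p = 2, q = 1, n = 2 the total is 6, in agreement with Example 2.2.1 below.

    from math import comb
    from itertools import combinations_with_replacement

    def multi_indices(p, k):
        # unordered k-tuples J = (j1, ..., jk) with entries drawn from {1, ..., p}
        return list(combinations_with_replacement(range(1, p + 1), k))

    p = 2
    for k in range(4):
        assert len(multi_indices(p, k)) == comb(p + k - 1, k)     # p_k = C(p+k-1, k)

    print(multi_indices(2, 2))      # [(1, 1), (1, 2), (2, 2)], e.g. u_tt, u_tx, u_xx
    q, n = 1, 2
    print(q * sum(comb(p + k - 1, k) for k in range(n + 1)))      # 6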
By convention, UQ = ua is Chapter 2. Lie's Algorithm 37 called the zeroth order derivative of u01, (7 ( 0 ) = Uo = U, and J is given by the single multi-index 0 when k = 0. The dimension of C/ ( n ) is q • p ( n ) where = p + P l + p2 + • • • + P n = Example 2.2.1 Consider the case p — 2, q = 1 so that X = R 2 has coordinates (xi,x2) — (t,x), and U = R has the single coordinate NOW f/j is isomorphic to R 2 with coordinates (ut,ux) representing all first order partial derivatives of u with respect to t and x. Likewise, U2 — R 3 with coordinates (uu, utx, uxx) represents all second order partial derivatives of u. Lastly, £/ ( 2 ) ~ R 6 with coordinates M ( 2 ) = (u,ut,ux,utt,utx,uxx), is the space of all partial derivatives of u up to order 2. • The space X X C/ ( n ) is called the n-th extension space or the n-th. order jet space of the underlying base space X x U. It represents the space of all independent variables, dependent variables and all derivatives of dependent variables up to order n. To graph a function u = f(x) in this extension space, one must calculate the values of ah the partial derivatives of / up to order n, in the domain of / . One then forms the induced function u ( n ) = pr ( n ) / (x) , called the n-th prolongation of / , defined by the equations uaj = djfa(x). The extended or prolonged graph is then given by p r ^ T , = T(fn) = {(x, pr ( n )/(z))} C l x U(n). Example 2.2.2 Consider the case p = 1, q = 1 so that X X U{1) ~ R 3 with coordinates (x,u,ux). The graph of the function f(x) = sin a; is Tf = {(a;, sin a;) : x 6 R } , and the first prolonged graph is given by rf] = {(a;, sin a;, cos a;): a; G R} C X X (7 ( 1 ). Also, consider the foUowing locus of points Q = {(a;, sin a;, 1): x € R} C X X Chapter 2. Lie's Algorithm 38 Q does not correspond to the extension of any graph u = f(x). • The above example illustrates that any arbitrary locus of points ^ C I x [7(n) will not in general be the prolonged graph of some function u — f(x). However, given any point in n-jet space, one can always find an n-th prolonged graph of some function that passes through the point. Theorem 2.2.3 Given any point P{x,u{-n)) € I x £ / W , there is an n-th prolonged graph of some function u = f(x) that passes through P. The proof of this theorem relies on constructing a Taylor polynomial of order n. Here, we will just illustrate the case n = 1, with X = R 2 and U = R. Let X and U have coordinates (x,t) and u respectively. Then given any point P(x0, i 0 , u(xQ, t0), ux(x0, t0), Ut(xo, to)) £ Xx?7 ( 1 ) , the Taylor polynomial has the first prolonged graph T^p passing through P. There are two types of differentiations that can be performed on J x P w , depending upon how one views this space. Partial differentiation involves the partial differential operators dXi and du*, which treat X X f/ (n ) as just Euclidean space E w of dimension N = p + q • p^n\ Since no relationships between any of the coordinates (x,u^) are assumed, dXig(x,u^) is calculated while keeping fixed all the coordinates except for X{. A similar statement can be made for dua • Total differentiation on the other hand, involves the total derivative operators DXi, which respect the various relationships between the coordinates of X X [/(n). In particular, the total derivative operators are defined through the following identity: Example 2.2.4 Let X = R 2 and U = R with coordinates (x,t) and u respectively. The function g(x,u^) — xu\, has the following partial derivatives: dtg = 0, dxg = t i 2 , dug = 0, dUtg = 0, dUxg = 2xux. 
f(x,t) - u(x0,t0) + Ux(x0,t0)(x - XQ) + Ut(x0, tQ)(t - t0), (2.16) (2.17) Chapter 2. Lie's Algorithm 39 The total derivative of g with respect to t is calculated as follows: (Dtg) r ( 2 ) = o\(xf2(x,t)) = 2xfxfxt. This together with a similar calculation for Dxg gives Dtg = 2xuxuxt, Dxg - u\ + 2xuxuxx. One can easily show that the total derivative operator DXi is given by ^ DXi = dXt + ufdua + ••• + u^du* + •••, (2.18) where summation over the repeated indices a = 1, • • •, q, J = (ji, • • • ,jk) and Ji = (ji, • • • ,jk, i) is assumed. This is an infinite sum, but when applied to functions g(x, u^), only a finite number of terms are needed. It is clear that this identity yields the same expressions for Dtg and Dxg as given in Example 2.2.4. The advantage of using this identity for DXi is that one avoids having to make the replacements u ( n ) = pr ( n )/(a;) in (2.17). It is often convenient to use Di to mean DXi and Dj to denote the fc-th order total derivative operator Dj1 Dj2 • • • Djk. The total derivative operators Di encode the relationships between the various coordinates of I x U("\ namely uaj = Dj(ua). The prolongation structure of n-jet space is discussed in detail in [60]. For our purposes, it will be sufficient to view X x t 7 ( n ) as (RN,Di), i.e., the Euclidean space of dimension N = p + q-p1-"^ together with the prolongation structure encoded in the total derivative operators Di. We are now ready to discuss what we mean by a system of differential equations. Definition 2.2.5 A system R of n-th order DEs is given by the system of algebraic equations on n-jet space: A^x,u(n)) = 0, n = l,---,l, (2.19) where are real valued functions on X X U(n). A solution of the DE (2.19) is a function u = f(x) such that A(x,u^) =0. (2.20) 1 / Chapter 2. Lie's Algorithm 40 We must emphasize that by a solution of R, one really means a function u = f(x) such that, after replacing u and its derivatives by f(x) and its corresponding derivatives, A evaluates to zero for all x in the domain of f(x). Having said this, it is often convenient to consider the individual points in XxU{n) that satisfy (2.19): Definition 2.2.6 [47, p.96j(f] Let QA be the locus of points (subvariety) in n-jet space given by QA = {(z, «<">): A(a;,u<n>) = 0} C XxU(n). In other words, QA consists of the roots of the algebraic equations A : X X rj (n) —» It'. It is natural to ask what the relationship is between the solutions of R and the locus of points Q A . From (2.20), the extended graphs of solutions of R must lie in QA: Theorem 2.2.7 Let R be an n-th order system of DEs. Then u = f(x) is a solution of R if and only if its n-th extended graph lies in gA: I ^ C f c . (2.21) Example 2.2.8 Let X — U = 1R and R be the ODE, A = ux + u2 = 0. The general solution of R is f(x) = ^3^ , for any c £ R , since A(x, f(x), f'(x)) evaluates to zero for all x ^ c. One also has gA = {(x, u, -u2): x, u £ 1R} C X x U{1); T^-p given by the functions u = (x — c ) _ 1 , ux = —(x — c)~2-and, as seen in Figures 2.1, (2.21) holds. • In general, given any point P in gA there is no guarantee that there exists an extended graph of some solution u = f(x) of R passing through P. Theorem 2.2.3 still applies in that one can always find an extended graph of some function u = g(x) passing through P. However, u = g(x) will not, in general, be a solution of R. Chapter 2. Lie's Algorithm 41 Figure 2.1: Extended graphs T^n) of solutions u — f(x) of a PDE A = 0 must lie in its locus pA. 
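The inclusion (2.21) in Example 2.2.8 can also be checked symbolically: substituting the first prolonged graph of u = 1/(x - c) into the function defining the locus gives zero identically. A minimal sympy sketch (illustrative only):

    import sympy as sp

    x, c, u, ux = sp.symbols('x c u u_x')

    Delta = ux + u**2                    # the ODE of Example 2.2.8: u_x + u^2 = 0
    f = 1 / (x - c)                      # its general solution, valid for x != c

    # A point of the prolonged graph Gamma_f^(1) has coordinates (x, f(x), f'(x)).
    on_graph = {u: f, ux: sp.diff(f, x)}
    print(sp.simplify(Delta.subs(on_graph)))    # 0, so the prolonged graph lies in the locus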
Definition 2.2.9 The system of n-th order DEs (2.19) is locally solvable if and only if through every point in gA, there passes an n-th extended graph T^n) of some solution u = f(x) of R. For convenience, one often says that u = f(x) passes through a point in gA to mean that its extended graph passes through the point. Let us now explain the reason for introducing the definitions of the locus gA and of local solvability for a system R of DEs. Our goal is to determine the symmetries admitted by R which are transformations mapping solutions to other solutions of R. However one does not know the solution space of R explicitly. One only has the equations that the solutions must satisfy. Hence we must deal directly with the equations themselves. We have seen that it is natural to view these equations as algebraic equations in XxU(n) and this led to the definition of a locus gA of points in X x U{n) which are the roots of these equations. One can apply Theorem2.1.20 to determine the symmetry group Q of gA. How does a symmetry r G Q relate Chapter 2. Lie's Algorithm 42 to a symmetry r of Rl3 If there is a one-to-one correspondence between a symmetry of gA and a symmetry of R, then our goal is reached. Unfortunately, this is not true in general as we shall now show. From the outset, one must not confuse solutions of R with points in gA: The former are strictly functions u = f(x) and the latter are individual^points in XxU(n\ While it is true that all points in the extended graph of a solution belong to gA (cf. Theorem 2.2.7), unless one has local solvability, there may be points in gA which do not belong to some extended graph of a solution of R. Hence let gA = AUB C I X P ( n ) such that through any point in A (B), there exists (does not exist) an extended graph of a solution of R which passes through the point. Suppose r is a symmetry of R, mapping solutions to other solutions of R. Since through every point in A, there passes an extended graph of solutions of R and, vice-versa, all extended graphs of solutions of R are contained in A, we have T(A) C A. However, since there are no extended graphs of solutions passing through points in B, there is no guarantee as to how r transforms these points. In particular, one could have r mapping points in B to points outside of gA altogether. If this were true, then Q is not a symmetry group of Now let T be a symmetry of gA, so that r ( 4 j C & , gA = A L \ B ' . Unfortunately, there is no guarantee that f wiU leave A invariant as a subset of gA. In particular, T could map points in A to points in B. If this were true, then f is not a symmetry of R. The differences between the symmetries r of Q and f of Q are illustrated in Figures 2.2. 3In order to precisely relate r and r, one must first talk about their induced transformations: If T is explicitly known, then a transformation between solutions u = f(x) of R induces a well defined transformation on extended graphs T^"' of solutions. Since extended graphs belong to ra-jet space, r induces a transformation on n — jet space, which we shall investigate further in §2.2.2. It is this induced transformation that one uses to relate to the transformation r on £>A. Until §2.2.2, we shall not distinguish between T and its induced transformation on n-jet space. Chapter 2. Lie's Algorithm 43 X x r V \ *) Figure 2.2: Differences between the symmetries T oiQ and r off/. Now let us assume that R is locally solvable so that I? = 0. Then any symmetry T of R is also a symmetry of £»A. 
However, the converse is still not true in general. Certainly, r must now leave A invariant, but points on an extended graph of a solution of R may be mapped arbitrarily to other points of A and not be mapped to another extended graph of a solution of R. If this were true then r would not map solutions to solutions of R and hence would not be a symmetry group of R. Definition 2.2.10 A transformation acting on X x P ' " ' is an extended transformation if and only if the transformation maps extended graphs to extended graphs of functions. Consequently, it is natural to require that: (1) R be locally solvable. (2) Q consists of extended transformations. In §2.2.2, we will see that condition (2) leads to strong restrictions on the form of transformations one can have. Let us now discuss the local solvability condition for a system of DEs. There are two main reasons why R, given by (2.19), may not be locally solvable. The first is the lack of existence of solutions for R, which can occur if A is smooth but not analytic Chapter 2. Lie's Algorithm 44 in its arguments (see Lewy's counter example [39]). One avoids such problems by restricting to analytic differential equations. The second reason is that there may be other n-th. order differential consequences that lead to further relations between the coordinates of n-jet space and thereby restricting the locus of points that satisfy the DE. In this regard, it is possible for two systems of DEs R and R to have the same solutions, but only one being locally solvable. To illustrate, suppose a system R of DEs (2.19) has a differential consequence A/ + i (a; , tt ( n )) = 0. Then consider the new system R given by A M = 0, p = !,•••,I, Aj+i = 0. Let A = ( A i , • • •, A/+i), and Q~ (QA) denote the roots of A = 0 (A = 0). R and R have the same solutions. However, the equations of R and of R are not algebraically equivalent since the additional equation A / + i = 0 of R defines a new algebraic relation between the coordinates of J x [ / W . Consequently QA and g~ are not the same locus of points in I X P w . In particular we have Q~CQA. (2.22) Example 2.2.11 Let R be the system of DEs given by ut = e*, uxx = 0, and R be the system ut = e\ uxx = 0, uxt = 0, . utt = e*. R and R have the same solutions, which are given by f(x,t) = et + ax + P, (2.23) where a, (3 are arbitrary parameters. Now treat these two systems as algebraic systems in I x [ / ( 2 ) , with coordinates (x, t; u ; ux, ut; uxx, uxt, utt). The algebraic solution set of R is Q = {(x, t; u ; ux, el; 0, uxU utt): x, t, u, ux, uxt, utt G R ) C X X U(2\ Chapter 2. Lie's Algorithm 45 which is isomorphic to H 6 . On the other hand, the algebraic solution set of R is Q = {(a;, t; u ; ux, e*; 0,0, e4): x, t, u, ux G It} C X X (7 ( 2 ), which is isomorphic to 1R4. R is not locaUy solvable since (x,t;u;ux,ut;uxx,uxt,uu) = P(0,0 ; 0 ; 0, e° ; 0, 0, 0) G Q, but there is no solution (2.23) which agrees with the set of values prescribed by P (we can-not have uu = 0 at (x,t) = (0,0)). However, R is locally solvable since for any point Q(a, b • c ; d, eb; 0,0, eb) G Q, there exists a solution (2.23), with a — d and (3 = c — eb — da, which agrees with the set of values prescribed by Q at (x,t) = (a,b). • The Riquier-Janet theory of formal integrability presented in §3.1 deals with the issue of local solvability. There, a systematic procedure to obtain all integrability conditions of R is given. 
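The equations appended in Example 2.2.11 can also be reproduced mechanically: they are just the x- and t-derivatives of u_t = e^t, and every solution (2.23) satisfies them, as the following minimal sympy check (illustrative only) confirms.

    import sympy as sp

    x, t, alpha, beta = sp.symbols('x t alpha beta')
    u = sp.Function('u')(x, t)

    eq1 = sp.Eq(sp.diff(u, t), sp.exp(t))        # u_t = e^t
    eq2 = sp.Eq(sp.diff(u, x, 2), 0)             # u_xx = 0

    # Differential consequences of eq1, i.e. the equations appended to form the
    # locally solvable system of Example 2.2.11:
    print(sp.Eq(sp.diff(eq1.lhs, x), sp.diff(eq1.rhs, x)))   # u_xt = 0
    print(sp.Eq(sp.diff(eq1.lhs, t), sp.diff(eq1.rhs, t)))   # u_tt = e^t

    # Every solution (2.23) automatically satisfies the appended equations:
    f = sp.exp(t) + alpha * x + beta
    print(sp.diff(f, x, t), sp.simplify(sp.diff(f, t, 2) - sp.exp(t)))   # 0 0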
For now, the following lemma concerning the local solvability of scalar PDEs and systems of PDEs of Cauchy-Kovalevskaya type will be sufficient for us. Lemma 2.2.12 Any n-th order system of PDEs of Cauchy-Kovalevskaya type, ^ = ^ ( 3 , „ < » > ) , a = 1, •••,<?, (2.24) where 4>a is independent of the terms on the left hand sides is locally solvable. (Note that when u is a scalar, (2.24) is just a scalar PDE.) This lemma foUows simply from the fact that no further n-th order relations can be derived from the n-th order system (2.24). A precise proof of this lemma is given in §3.2, where a more general theorem regarding local solvability is given. 2.2.2 Extended Transformations in n-Jet Space In this section, we determine necessary conditions under which a transformation in n-jet space is an extended transformation. As we shall see, extended transformations preserve the under-lying prolongation structure of n-jet space and this results in strong restrictions on the form of Chapter 2. Lie's Algorithm 46 transformations one can have. In particular, such transformations will be completely character-ized by how the base space, i.e., the space of independent and dependent variables, transforms. The extension formula will then determine how the derivatives are transformed. Moreover, by requiring that n be finite, one is reduced to considering only point or contact transformations. Let us first naively generalize Definition 2.1.1 to obtain a transformation on l x [ / ( ° ' as being any analytic, locally one-to-one and onto map r ( n ) : X X I7 ( n ) —>• X X f/ ( n ), given by xt = XfauW), ua =Ua(x,«(»)), * ' 1^1 > 0, (2.25) a = 1, uaj = UJ(x,u^). The choice of notation for r ( n ) , in particular its superscript, is used to indicate which jet space the transformation is acting on. In the new coordinates (x,v,j) of I x ( 7 w , one has the partial derivative operators d^. and d~a, and the total differential operators D i = DZt = + 5? dz° + • • • + Vjft* + • • • • Recall that partial differentiation treats X X as just Euclidean space and so the normal chain rule defines how partial differential operators in the new coordinates relate to those in the original coordinates. However, total differentiation treats the coordinates Uj as derivatives of functions u = f(x). As such, when one relates total differentiation in the new coordinates with total differentiation in the original coordinates, r ( n ) must be an extended transformation (cf. Definition 2.2.10). Theorem 2.2.13 Let r ( n ) be any transformation given by (2.25). If r ( n ) is an extended trans-formation, then the following extension formula must be satisfied: UJ = Dj(Ua), T)j = DnD32 • • • Djk. (2.26) Furthermore, the total derivative operators Di are given by the change of variables formula F>i = J^D3, (2.27) where J(j is the Jacobian matrix Di(Xj(x,u(-"^)). Chapter 2. Lie's Algorithm 47 In other words, an extended transformation r ( n ) , given by (2.25), is completely defined by how the base space transforms, through the specification of the functions X and U. The extension formula (2.26) then determines how the derivatives are transformed. However, if n is finite, then it turns out that X and U cannot depend on second or higher derivatives of u. Evidently, the extension formula raises the order of derivatives and this places strong restrictions on X and U. Theorem 2.2.14 Consider an extended transformation r ( n ) , given by (2.25) and (2.26), on n-jet space with n finite. 
If the number of dependent variables is q > 1, then (x,u) are given by the point transformations Xi = Xi(x,u), ua = Ua(x,u). (2.28) If the number of dependent variables is q = 1 then (x,u^) are given by the contact transforma-tions xi = Xi(x,u(1)), u = H(x,u(1)), Ui=Ui(x,u{1)). (2.29) where Ui = DiU. The case q = 1 was proved by Backlund [7] and the case q > 1 was proved by Miiller and Matschat [45]. The case when n is aUowed to be infinite is treated in [5]. We will deal almost exclusively with point transformations in this thesis (see [15, §5.2.4] for more details on contact transformations). If r is any point transformation T:XXU ^ XxU, given by (2.28), then one can use the extension formula (2.26) to determine the corresponding transformation r ( n ) acting on X x ( 7 ( n ) . However, r ( n ) may stiU not be an extended transfor-mation, since (2.26) and (2.28) are only necessary conditions that r ( n ' be an extended trans-formation. What is required is that r maps graphs of functions to other such graphs. The extension formula (2.26) then ensures that r ( n ) maps extended graphs to other extended graphs of functions. Chapter 2. Lie's Algorithm 48 Lemma 2.2.15 If the point transformation T, given by (2.28), maps graphs of functions to other such graphs, then the corresponding transformation r ( n ) , given by (2.28), (2.25c) and (2.26), is an extended transformation. Any arbitrary point transformation r may not map all functions to other functions. For example, the point transformation T(X,U) — (u,x), does not map the function u(x) = 1 to another function. One avoids such problems by considering Lie groups of point transformations. 2.2.3 Lie Groups of Extended Point Transformations Consider a local Lie group Q of point transformations acting on the base space M — X X U. Let r £ G Q be given by Xi = Xi(x,u;e), i—l,---,p, (2.30) -a = Ma(x,u;e), a = l,---,q. The associated transformations r ^ n ) acting on X X f / ( n ) is then given by (2.30) and uaj=UJ(x,u^;e), (2.31) where UJ are given by the extension formula (2.26). Theorem 2.2.16 Let Q be an r-parameter local Lie group of point transformations acting on M = X xU. Then for all r G Q, the corresponding transformations form an r-parameter local Lie group acting on XxTJ^n\ Furthermore, any G £ ( n ) is an extended trans-formation, mapping extended graphs to extended graphs of analytic functions, for e sufficiently small. We call the Lie group of extended point transformations associated with Q. This theorem also holds when n = oo. We will make use of the fact that r^n ) G £ ( n ) maps extended graphs of analytic functions to analytic functions. In particular, one can derive an explicit formula [15, P-95jff] showing how an analytic function transforms. Such a formula depends on the analytic functions defining r e . Chapter 2. Lie's Algorithm 49 Recall Theorem 2.1.12 which relates any one-parameter Lie group of transformations on M with the flow of its infinitesimal generator. For the one-parameter Lie group of transformations Q, given by (2.30), M is the base space XxU and the infinitesimal generators are given by X = C\x,u)dXt + T]a(x,u)dua, (2.32) where C(x,u)= &Xi(x,u;e)\e=o, r,a(x,u) = £l(a(x,u;e)\^Q. (2.33) The transformations T£(X,U) are then recovered by exponentiation: (x,u) = T£(X,U) = exp(eX)(.T, u). Likewise, M — X x U(n) for the one-parameter Lie group of extended point transformations, given by (2.30) and (2.31). The corresponding infinitesimal generators are given by = e(x,u)dx,+r)a(x,u)dua+V?du? 
+ --- + r]5du°, (2.34) i=\,---,p, a=l,---,q, \J\<n. Here (£, 77) is given by (2.33) and rjj is given by £Ztf(s , u"-">;e; £ = 0 By carrying out this calculation, using the extension formula (2.26) for Ltj, one can show: Theorem 2.2.17 LetX, given by (2.32) and (2.33), be the infinitesimal generator for r £ Q,a Lie group of transformations on base space XxU. The infinitesimal generator X^ of the corresponding extended transformation r ( n ) £ is given by (2.34), where rjj, 1 < | J | < n, are defined as follows: rij = Dj(r,a-?u?) + ?uaji. (2.35) See [47, p.HOjff] for a proof. See also [15, §2.3.5] and [50, §4.8] where rjj is given in terms of a recurrence relation. Just as r ( n ) is determined exactly by r, through the extension for-mula (2.26), so too X( n ) is determined exactly by X , through the infinitesimal extension for-mula (2.35). The infinitesimal generators X( n ) of the Lie group of extended transformations Chapter 2. Lie's Algorithm 50 £ ( n ) form a Lie algebra £ ( n ) , which is isomorphic to the Lie algebra £ of Q. The commuta-tor bracket defined by Definition 2.1.16 with M = XxU(n). Any r ( n ) £ £ ( n ) can be recon-structed by exponentiating some extended infinitesimal generator X' 7 1 ) £ £ ( n ) . In other words, T^n)(x, u ( n )) = (a;, 5j) = exp(eX(n')(a;, « ( n ) ) is given by the unique solution of ^ = C(x,u), = rij(Z,u"J% ( £ , ^ ) U = (*,ttf). (2.36) Note that the calculation of the infinitesimals (£%r/j), | J | > 0, and the exponentiation process are performed with X x f 7 ( n ) treated as Euclidean space. Sufficiently close to the identity, these extended transformations map extended graphs to extended graphs, thus preserving the prolongation structure of l x P ( " ' . This is also true when n = oo [60]. 2.3 Symmetries of Differential Equations Definition 2.3.1 A symmetry of a DE is a transformation mapping any analytic solution to another analytic solution of the DE. A Lie group of point transformations Q is a symmetry group of a DE if and only if for all r £ £ Q, re is a symmetry of the DE, for e sufficiently small. By ensuring that R is locally solvable and by considering only Lie groups of extended point transformations, we have the symmetry group of R is the same as the symmetry group of the locus gA of R. Theorem 2.3.2 Let R be a locally solvable system of DEs (2.19) and let gA C X x U(n) be the locus of points satisfying the system. Let Q be a Lie group of point transformations acting on X xU and let be the corresponding Lie group of extended transformations acting on X X [/ (n ). Then Q is a symmetry group of R if and only if <7(n) is a symmetry group of gA (cf. Definition 2.1.18). Proof. To prove sufficiency, suppose that £7(ri) is a symmetry group of gA. Then for any r ( n ) g (2.37) Chapter 2. Lie's Algorithm 51 for e sufficiently small. Recall that u = f(x) is a solution of R if and only if r y 0 ^ . Then For e sufficiently small, the left hand side is guaranteed to be the extended graph of some function u = f(x). The right hand side must lie in gA by (2.37). Hence r~} C gA and consequently u = f(x) must be a solution of R. The proof of necessity relies on the assumption of local solvability. Given any point P(x, u(n)) in gA, there exists a solution u = f(x) such that P £ F ^ . Now for any r £ £ Q, we have T(«)[P] e T^[T^}. By supposition, Q is a symmetry group of R and the right hand side must be the extended graph of another solution u = f(x), for e sufficiently small. Hence the right hand side must lie in gA and we arrive at (2.37). 
Consequently £ ( n ) is a symmetry group of gA and the theorem is proven. • Hence for locally solvable DEs (2.19), the admitted symmetry group Q can be obtained by finding the symmetry group of gA which is given by Theorem 2.1.20. To write down the infinitesimal symmetry conditions (2.15) for G(n\ we first define what is meant by maximal rank for a DE. Definition 2.3.3 The system of differential equations (2.19) is of maximal rank if and only if the I X (q • p ( n ) ) Jacobian matrix (dA^/du®) is of rank / whenever A(x,u(-n^) = 0. If a system is of maximal rank, then it is said to satisfy the maximal rank condition. This definition is almost equivalent to Definition 2.1.19, with the equations of R viewed as an algebraic system on M = XxU^n\ The only difference is that partial derivatives with respect to the variables x are omitted in the Jacobian matrix (we do not allow the variables x to satisfy an algebraic relationship). However this maximal rank condition implies the maximal rank Chapter 2. Lie's Algorithm 52 condition of Definition 2.1.19 and so Theorem 2.1.20 still applies. Theorem 2.3.4 (Infinitesimal Symmetry Conditions for DEs) Consider a system R of DEs, given by (2.19), that satisfies the maximal rank condition and that is locally solvable. A Lie group of point transformations Q is a symmetry group of R if and only if for every infinitesimal generator X ( n ) of Q^n\ X(n)A^(x,u(n)) = 0, p, = 1,- • •,/, whenever (x,u{n))eQA. (2.38) Proof. Since R is locally solvable, Theorem 2.3.2 is applicable so that Q is a symmetry group of R if and only if Q(n) is a symmetry group of gA. Now R satisfies the maximal rank condition and by Theorem 2.1.20, £ ( n ) is a symmetry group of the QA if and only if the infinitesimal generators X ( n ) of G(N) satisfy (2.38). • The symmetry conditions (2.38) lead to the foUowing algorithm, called Lie's algorithm, for finding point symmetries of an n-th order system R of PDEs (2.19) which is assumed to be locaUy solvable and of maximal rank: Algorithm 2.3.5 Lie's Algorithm 1. Determine X<n) A M , n - 1, • • •, /. 2. Make substitutions from R in the expressions of step 1. 3. Set the expressions in step 2 to zero and solve for the unknown infinitesimals of X . Note that the maximum rank condition ensures that the equations of R can be solved for unique left hand sides so that the substitutions of step 2 are well defined. Lie's algorithm results in an overdetermined system, called the infinitesimal determining equations, comprising of linear PDEs for the unknown infinitesimals. For a large number of PDEs the corresponding infinitesimal determining equations have been solved, leading to the admitted point symmetries. Numerous examples can be found in the standard texts previously mentioned. Rather than Chapter 2. Lie's Algorithm 53 duplicate these calculations here, we now discuss the finer points of the symmetry conditions (2.38) and of Lie's algorithm, Algorithm 2.3.5. We have chosen our particular formulation (2.38) carefully since there are problems as-sociated with the various other forms of the symmetry conditions that have appeared in the literature4: F l . X(")A = 0, whenever u = f(x) is a solution. F2. X(")A = 0, whenever A = 0. F3. X ( n ) A = 0, whenever A = 0 and its differential consequences hold. Some of the problems we now discuss are more subtle than others and one may argue about the level of precision that is needed in stating the symmetry conditions. 
A minimal requirement is that there be no ambiguity as to how the symmetries are calculated and that all the admitted point symmetries are found. A desirable requirement is that the particular formulation leads to an algorithm which can be implemented on a computer: Clearly F l , as stated, is the least useful since one does not know explicitly what the solutions are in general. What is really meant is either F2 or F3. In the literature, by F2 one often means F3. It may be of surprise to some that in F3, the differential consequences of A = 0 are not needed in the algorithm if R is locally solvable. As previously explained, the local solvability assumption (which, incidentally, is not always explicitly stated) assures that all differential consequences up to order n have been uncovered. This also explains why we prefer to state the infinitesimal symmetry conditions as (2.38) rather than as given in F2: Besides the fact that one can often confuse F2 with F3, (2.38) conveys more precisely the algebraic nature of the algorithm. Though this is a subtle point, using the locus QA will allow us to more effectively tackle the issue of local solvability in §3.2. Some examples at this stage will help emphasize the points just discussed. Let us start with a very simple example that is very illustrative. 4 The formulation of the symmetry conditions involving Frechet derivatives is discussed in Appendix E. Chapter 2. Lie's Algorithm 54 Example 2.3.6 Let R be the system of DEs linn — XL ZL-\ • (2.39) u\ = u\, where (xi,x2) are the independent variables and ( M 1 , ^ 2 ) are the dependent variables. Let us now apply Lie's algorithm to calculate the symmetry group Q of R. The infinitesimal generators of Q are given by X = £l(x,u)dXi + r)a(x,u)du°. According to step 1, we first calculate ~X.{2)[-u\2 + u1^] = -ri^ + ^ul + u1^ =cj>1(x,uW), X^l-ul + u\] = -r,2 + r,l - <f>2(x,u^), where rfj are given by (2.35). At step 2, we make substitutions from the equations of R in <f>. Since c/>2 depends only on derivatives up to first order, one can only make the replacement for u\ using (2.39). However, c/51 depends on second order derivatives of u1 and u2. Clearly one makes the replacement for u22 and u\ using (2.39). What about making substitutions for the derivatives u\Y and u\2 arising from the differential consequences of (2.39b)? Surely one should make these replacements. Yet in our previous discussion, we emphasized that differential consequences should not be needed. How can we reconcile this with the fact that the procedure just outlined seems natural and is commonly performed in practice? • The dilemma encountered in Example 2.3.6 can be resolved by realizing that the original system (2.39) is not locally solvable so that Theorem 2.3.4 and Lie's algorithm is not applicable! Here is how to correctly apply Lie's algorithm. Chapter 2. Lie's Algorithm 55 Example 2.3.7 We first make the system (2.39) locally solvable by appending all second order differential consequences of (2.39b) to form R given by u22 = u l u \ > (2.40) Ull = U12-> 2 1 1 1 U12 ~ U22 ~ " " l 1 (In §3.2, a general procedure to obtain a locally solvable system is described.) Clearly R and R have the same solutions and hence the same symmetry groups, but only R is locally solvable and hence Theorem 2.3.4 is only applicable to R (and not to R). The action of X on the equations of R is given by x ( 2 > [ - ? 4 + UH\] = V22 + « M + ^vh X ( 2 ) [ - W l l + ^ 2 ] = X ( 2 ) [ - « ? 2 + u1^] = -r]j2 + 7 7 X + ulvl, where rfj is given by (2.35). 
These expressions depend only derivatives of u up to order 2. One now makes the replacements for u\2i «i, uli and u\2 using (2.40) and demand that the resulting expressions vanish. Here the issue of whether to take differential consequences of the equations of R does not come into consideration since there are no further second order differential consequences of R (this was guaranteed to be true before the infinitesimal symmetry conditions were applied). • As illustrated in the above example, if the given system is not locally solvable, one must first form an equivalent system which is locally solvable. Hence, the correct algorithm to obtain the symmetry group of R is as follows: Chapter 2. Lie's Algorithm 56 Algorithm 2.3.8 1. Form an equivalent system R, given by A = 0, which is locally solvable. 2. Execute Algorithm 2.3.5 for the system R. For convenience, we shaU also call this algorithm - Lie's algorithm. An obvious question now arises. What is the algorithm for achieving local solvability for a given system of PDEs? We shall return to this question shortly. If R is not locally solvable then, in the symmetry formulation (F3), it makes sense to demand that X ^ A vanishes whenever A = 0 and its differential consequences hold. This leads to the following algorithm which seems to be widely used in practice: Algorithm 2.3.9 1. Determine X(n)A. 2. Make substitutions from R and aU its differential consequences of order up to n. 3. Demand that the resulting expressions vanish identically. As illustrated in (2.39), this algorithm seems natural and does lead to an overdetermined system of linear PDEs for the unknown infinitesimals. However, unlike Algorithm 2.3.8, there appears to be no proof in the literature that Algorithm 2.3.9 will correctly lead to the admitted symmetry group. One must always beware that any algorithm may provide point symmetries, but it may not uncover aU of them. We also point out that, as in step 1 of Algorithm 2.3.8, there are technical problems to be overcome in finding all n-th order differential consequences of a given system. In §3.3, we will provide the algorithms required to execute step 1 of Algorithm 2.3.8. More-over, we will show that if step 2 of Algorithm 2.3.9 is suitably modified, the resulting algorithm Chapter 2. Lie's Algorithm 57 does lead to the same result as Algorithm 2.3.8. This new algorithm is much more efficient than Algorithm 2.3.8, since the differential consequences of a system need not be used until after the computationally expensive step 1. To prove this result, one needs to address the following questions: How does one obtain all possible differential consequences of a system of DEs? How does one make substitutions from R and its differential consequences?5 What order does one need to consider? Since the original system of DEs is of order n, then surely one only needs to consider all differential consequences of order n. This is correct in principle, but there are pitfalls even here as we now illustrate. Consider the problem of calculating the symmetry group of R, which is the potential system vx = F(x,t,u™), (2.41) vt = G(x,t,u(2)) + uxx, where G is independent of uxx. Let us proceed with what would commonly be done in practice and apply Algorithm 2.3.9. At step 1, we have XW[-vx + F(x,t, u^)} = <j>\x, t, « ( 2 ) , u ( 2 ) ), X&[-vt + G{x, t, u^) + uxx] = <f>2(x,t, u<3V«(3))> where, for the subsequent discussion, it is sufficient to give the order of the derivatives appearing in <f>, which are known functions. 
Next we must use the equations of R to make substitutions in <j> and demand that the resulting expressions vanish. Here is the dilemma: Are there any further second order differential consequences of (2.41)? If one differentiates the equations of (2.41), then one obtains vXx = DxF(x,u(-2)), vxt = DtF(x,uW), V ^ (2.42) vtx = DxG(x,u{2)) + u x x x , vtt = DtG(x,u(2)) + u x x t . 5Care must be taken to avoid possible infinite loops occurring in the substitution step. For example, see Example 2.1.2 in [24]. Chapter 2. Lie's Algorithm 58 Equating mixed partials vxt = vix leads to the third order integrability condition u x x x = DtF(x,u^)-DxG(x,u^). (2.43) Since (2.42) and (2.43) are all third order equations, one could argue that there cannot be any further second order differential consequences. If this were true, then one only needs to substitute for vx and vt in 4> using (2.41). This is essentially what was done in [51], where Pucci and Saccomandi studied symmetries of a general class of potential systems, which include (2.41). Unfortunately this is not the correct procedure since, in general, there can be further second order differential consequences of (2.41). For example, if (F, G) = (uxx + u, 0) then the potential system R is given by vx — uxx -f- u, (2.44) which has the second order differential consequence Vxt - vtt + ut. (2.45) Hence one must use this to replace vxt in cf>. What if F and G are not explicitly given? Such a situation will arise in §5.1. Here one cannot in general uncover all second order differential consequences of (2.41). The differential consequence (2.45) was obtained by explicit knowledge of F and G. Consequently, one cannot apply step 2 of Algorithm 2.3.9 and this algorithm cannot be used to obtain the symmetry group of R. Now consider applying Algorithm 2.3.8. In step 1 we must find an equivalent system which has the same solutions as R and which is locally solvable. Since we cannot uncover any further second order differential consequences of (2.41), then the best one can do is to append the third order equations (2.42) and (2.43) to the original potential system (2.41). But now this new system R is of third order and the locus Q of algebraic roots of R is a subset of X X (7 ( 3 ). Certainly, all second order derivatives of v are fixed by (2.42), but now we have third order derivatives of v in Q which are arbitrary. As before, for certain functions F and G, there may be third order relations for v which cannot be derived from R since F and G are not explicitly Chapter 2. Lie's Algorithm 59 known. In general R is not locally solvable. One can repeat this procedure and append higher order equations obtained through differentiations, but at each stage the problem persists. Any finite order system obtained in this way is not locally solvable. Consequently Theorem 2.3.4 cannot be applied to any of these systems to find the symmetry group of R. Even if one differentiates the equations of (2.41) and (2.43) to ah orders, the resulting infinite system of DEs is not locally solvable. In the next chapter, we wiU show that such an infinite system satisfies the weaker property of analytic local solvability which will be sufficient for our purposes. However, the application of Lie's algorithm to such an infinite system would seem intractable. In §3, we show how to overcome the above mentioned problems. The resulting new algorithm extends Lie's algorithm to general systems of PDEs. 
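Before moving to that extension, note that the hidden consequence (2.45) of the special case (2.44) is itself easy to confirm mechanically: with (F, G) = (u_xx + u, 0) the two equations of (2.41) read v_x = u_xx + u and v_t = u_xx, so u = v_x - v_t, and differentiating this relation with respect to t gives v_xt = v_tt + u_t. A minimal sympy sketch of the check (illustrative only):

    import sympy as sp

    x, t = sp.symbols('x t')
    u = sp.Function('u')(x, t)
    v = sp.Function('v')(x, t)

    # The potential system (2.41) with (F, G) = (u_xx + u, 0):
    e1 = sp.diff(v, x) - (sp.diff(u, x, 2) + u)      # v_x - F = 0
    e2 = sp.diff(v, t) - sp.diff(u, x, 2)            # v_t - (G + u_xx) = 0

    # Candidate second order consequence (2.45): v_xt = v_tt + u_t.
    c = sp.diff(v, x, t) - sp.diff(v, t, 2) - sp.diff(u, t)

    # c is exactly the t-derivative of (e1 - e2), hence vanishes on every solution.
    print(sp.simplify(c - sp.diff(e1 - e2, t)))      # 0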
Chapter 3

Extension of Lie's Algorithm for Systems of PDEs

In this chapter, we derive an extension of Lie's algorithm for finding the point symmetries of systems of PDEs. Since scalar PDEs and systems of PDEs of Cauchy-Kovalevskaya type are locally solvable as they stand, Lie's algorithm can be directly executed. However, for more general systems of PDEs, the situation is not so straightforward. The need to first form a locally solvable system can lead to problems (cf. system (2.41)). To derive our new symmetry algorithm requires us to determine how a given system of PDEs can be made locally solvable. Before tackling this problem, we first make an excursion into the closely related area of the Formal Integrability Theory of Riquier-Janet [58, 31] (see also [65, 66, 63]), which is presented in §3.1. We will follow the more efficient approach of Reid [53, 54]. Here, an ordering of derivative terms is required to uniquely solve a given equation. A finite step algorithm is used to achieve a standard form which can be viewed as a basis set of equations which generate all the differential consequences of the given system. Such a standard form provides a natural partition of all derivative terms into those of principal and parametric type. For analytic systems of PDEs, Riquier and Janet showed that there always exists a unique formal power series solution passing through any prescribed point in initial data space, i.e., the space of independent variables and all parametric derivatives. Moreover, they delineated admissible initial data which lead to convergent power series solutions. An essential part of the Riquier-Janet theory is the construction of an n-th order prolonged standard form in which each principal derivative of order up to n is given as a function of the initial data variables. In §3.2, we use the infinite order prolonged standard form to establish a bijection between the locus gA of the system and the initial data space. Subsequently, the existence/uniqueness theorems of Riquier-Janet show that if a total derivative ordering is employed, then local solvability can be achieved through a finite order prolonged standard form. Hence, whenever possible, total derivative orderings are used. However, there are instances, such as the system (2.41), where one cannot use such orderings. In such cases, one can only achieve the weaker property of analytic local solvability through an infinite prolonged standard form. We show that analytic local solvability is sufficient for our purposes. We mention that dealing with an infinite number of equations requires great care. In order to arrive at our results, we rely on the fact that any equation in an infinite prolonged standard form contains a finite number of terms and is derived from the original system by a finite number of well defined operations. A common technique we will employ involves the projection from the infinite dimensional space X x U^(∞) to the finite dimensional space X x U^(n). We then prove results on this finite dimensional space and use induction to obtain our results in X x U^(∞). The need for local solvability before applying Lie's algorithm can lead to inefficiencies. Even if total derivative orderings are employed, the corresponding locally solvable finite order prolonged standard form can contain many more equations than the original system.
Moreover, if total derivative orderings cannot be used - a situation that will arise in §5 - then Lie's algorithm must be applied to the infinite order prolonged standard form. Clearly this is not feasible in practice. Consequently, in §3.3 we derive a new symmetry algorithm which overcomes these problems. We show how the symmetry conditions for any prolonged standard form can be reduced to conditions involving significantly fewer equations. Even if one starts with an infinite order prolonged standard form, the reduced conditions involve only a finite number of equations. Also, unlike other symmetry algorithms, the substitution step in this new algorithm is completely unambiguous.

3.1 Formal Theory of Integrability

3.1.1 Standard Form

In this section, we describe the method of Riquier, Janet and Reid for achieving the standard form for a system of DEs. 1 For this, we define orthonomic and simplified orthonomic systems, implicit substitutions and integrability conditions of a system R of DEs (2.19). All these concepts rely crucially on orderings of derivatives, which we now describe.
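As a concrete preview, the lexicographical ordering of Example 3.1.2 below can be realized as a sort key in a few lines. In the minimal sketch that follows (illustrative only), a derivative u^alpha_J is represented by the pair (alpha, J), with J an unordered multi-index over {1, ..., p}.

    def lex_key(alpha, J, p):
        # Sort key realizing the lexicographical ordering of Example 3.1.2:
        # compare first by |J|, then by the dependent variable index alpha,
        # then by the counts Ord_1(J), ..., Ord_p(J).
        return (len(J), alpha, tuple(J.count(i) for i in range(1, p + 1)))

    # p = q = 2; each derivative u^alpha_J is the pair (alpha, J), J a tuple over {1, 2}.
    derivs = [(1, ()), (2, ()), (1, (1,)), (1, (2,)), (2, (1,)), (2, (2,)),
              (1, (1, 1)), (1, (1, 2)), (1, (2, 2))]
    print(sorted(derivs, key=lambda d: lex_key(d[0], d[1], 2)))
    # [(1, ()), (2, ()), (1, (2,)), (1, (1,)), (2, (2,)), (2, (1,)),
    #  (1, (2, 2)), (1, (1, 2)), (1, (1, 1))]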
|/| = | «7|, a = (3 and the first nonzero member of the following sequence is negative: Ordx(I) - Ordi(J), • • •, Ordp(I) - Ordp(J). By virtue of property (a), -<iex is a total derivative ordering. If p = 2 and q = 2, then ^ ~^lex ^ ~^lex ^X2 ~^lex ~^lex ^X2 ~^lex ~^lex ^X2X2 ~^lex '^,x\X2 ~~^lex " i i x i ~^lex u x 2 x 2 ~^lex uXlx2 ~^lex uxixi ~^lex Another example of a derivative ordering that is particularly useful when we come to study potential systems is the potential ordering, <pot- Let u = (u1, • • •, ur+s) with (ur+1, • • - ,ur+s) = (v1, • • •, vs) being the potential variables. We have uf -<pot Uj if and only if one of the foUowing conditions hold: Chapter 3. Extension of Lie's Algorithm for Systems of PDEs 64 a. 1 < a < r and r + l</3<r + s. b. 1 < a,/3 < r or r+l<a,/3<r + s, and uj -<iex Uj. If r = 2, s = 1, p = 2, and (w1, i t 2 , u3) = (w1, w2, then U ~<pot U -\pot U X 2 ~^pot U X l ~^pot Ux2 ~^POt UXl ~^pot UX2X2 ~~^POt U X l X 2 ~^POt 1 2 2 2 uxix\ -{pot ux2X2 ~^-pot ux\X2 ~^pot uxiXj -{pot ' ' ' -{pot v -{pot Vx2 —{pot Vxi —{pot Vx2X2 —{pot VXlx2 -{pot —{pot ' ' ' Clearly -<pot is not a total derivative ordering since we have u2. x <pot v which violates property (5') of Definition 3.1.1. However, it is a weak total derivative ordering since \I\ < \ J\ implies ul "{lex uj a n d the latter is true if and only if u" -<pot Uj. Consequently, property (5) of Definition 3.1.1 is satisfied. • Definition 3.1.3 Let -< be a derivative ordering. The leading term of an equation is the highest ordered term, with respect to -<, appearing in that equation. A system R of DEs is in solved form if and only if each equation of R is solved for its leading term with respect to -<. Definition 3.1.4 The system R is in orthonomic form with respect to -< if and only if (1) R is in solved form with respect to -<. (2) No given term Uj appears on both the left and right hand sides of R. Example 3.1.5 Consider the system of DEs 0 = -uyy +vt-r ut, 0 = -ut + v, (3-1) 0 = -uty + vtwx. Using the lexicographical ordering <iex with [x\,X2,x^) — (t,x,y) and (wi, 1*2,^3) = (u,v,w), this system has the solved form Chapter 3. Extension of Lie's Algorithm for Systems of PDEs 65 Uyy = Vt + Ut, Ut = V, Uty = VtWX. One must use the second equation to substitute for ut in the first equation to obtain Uyy = Vt+V, ut = v, (3.2) Uty = VtWx, which is the orthonomic form of (3.1). • In general, any system of PDEs can be put into orthonomic form by a process similar to Gauss-Jordan elimination: Solve each equation for the leading term using the Implicit Function Theorem if necessary and back substitute into the rest of the system. The algorithm orthonomic which does this is given in Appendix A . l . The use of the Implicit Function Theorem to solve for the leading term in each equation assumes certain nondegeneracy conditions and leads to case splittings. For example, if the leading term appears linearly we have to assume that its coefficient, coeff does not vanish identicaUy. The equation, coeff ^ 0, is called a pivot condition. The case corresponding to coeff = 0 is treated separately by adjoining that equation to the system and restarting the analysis. In general a binary tree of such cases must be analyzed (cf. Example 3.1.7 and Figure 3.3). For simplicity we will often not mention such cases. In particular the statement that a system has been reduced to orthonomic form will mean that it has been reduced to a set of orthonomic forms, each valid away from the vanishing of its corresponding pivots. 
A system in orthonomic form ensures that any leading derivative only appears once in the left hand side of some equation. However, one may have derivatives of a leading term appearing elsewhere in the system. Such derivatives can be replaced as follows: Let uf = rhs be an equation in an orthonomic form and let u"j be a term appearing in another equation. Then one can replace ujj by Dj(rhs) to obtain a system with the same solutions. For example, in (3.2), Chapter 3. Extension of Lie's Algorithm for Systems of PDEs 66 one can use the second equation to implicitly substitute for uty in the third equation to obtain vy = vtwx. Reid calls this an implicit substitution. The algorithm allJ,mpl^subs(sysl,sys2), which makes all possible implicit substitutions from an orthonomic form sysl into a system (or expression) sys2, is given in Appendix A.2. Definition 3.1.6 A simplified orthonomic system is a system satisfying (1) and (2) of Defini-tion 3.1.4 and also: (3) No nontrivial derivative of any leading term of the system appears in the system. Given an orthonomic system, one can achieve a simplified orthonomic form by making ah possible implicit substitutions throughout the system. If an equation has its leading term substituted for, one has to re-solve this equation for the new leading term. The algorithm simp-orth to achieve a simplified orthonomic form is given in Appendix A.2. Example 3.1.7 Consider the orthonomic system (3.2). Differentiating the second equation with respect to y, one can implicitly substitute for the term uty in the third equation to obtain Vy = VtWX. Since the leading term was substituted for, one must solve this equation for the highest ordered term which is wx. Here, we have a case splitting depending on whether vt vanishes or not. Assume vt ^ 0. Then the corresponding simplified orthonomic system is Uyy = Vt+ V, ut = v, (3-3) wx - vy/vt. Chapter 3. Extension of Lie's Algorithm for Systems of PDEs 67 (3.4) System (3.2) v,?0 vt = 0 I 1 System (3.3) System (3.4) Figure 3.3: A binary tree of case splittings. Now assume vt = 0. Then the corresponding simplified orthonomic system is uyy = vt ut = v, vt = 0, vy - 0. The two cases that arise are summarized in the binary tree of Figure 3.3. • Given a simplified orthonomic system, one may have a pair of equations of the form u°j = rhs-i, (3.5) Uj = rhs2. By cross differentiating, one may uncover new relations between the derivatives. Definition 3.1.8 Let R be a simplified orthonomic system with two equations given by (3.5). Form the set of ordered pairs of multi-indices A = {(I,J): 11 = 33, | 7 | , | J | > 0 } . A compatibility condition of (3.5) is any equation of the form -Djirhsi) + Djirhs2) = 0, (I, 3) G A. The set of all compatibility conditions is generally infinite. The unique compatibility condition corresponding to (1,3) G A such that \II\ is the minimum value over all elements in A is called a minimal compatibility condition of R. An integrability condition of R is any compatibility Chapter 3. Extension of Lie's Algorithm for Systems of PDEs 68 condition of R which does not reduce to the trivial equation, 0 = 0, after aU possible implicit substitutions from R. A minimal integrability condition of R is an integrability condition corresponding to a minimal compatibility condition of R. In general there are an infinite number of integrability conditions for R and a fundamental problem is to find a finite subset Integ such that the simplified orthonomic form for RDlnteg has no integrability conditions. 
Definition 3.1.9 A system sf^ is in standard form if and only if it is a simplified orthonomic system with respect to -< and (4) sf^ has no integrability conditions. The algorithm used to achieve a standard form for a given system is given in [53] (see Algorithm 6). This algorithm involves repeatedly putting the system in a simplified orthonomic form and appending a certain finite set of integrability conditions. The argument used to show that this process terminates in a standard form after a finite number of iterations is originally due to Tresse [67]. There are several different algorithms for forming a satisfactory finite set of integrability conditions Integ of a simplified orthonomic form. Perhaps the simplest, but least efficient, is to form for each pair of leading derivatives of the same dependent variable the corresponding minimal integrability condition. Then Integ is the set of all such conditions. The theoretical justification that satisfaction of this finite set leads to the satisfaction of all integrability con-ditions for simplified orthonomic systems, is given by Mansfield [44] and Boulier et al [22]. An alternative, more complicated and efficient approach is that of Riquier and Janet. Through a completion process involving certain monomials representing leading derivatives, they con-struct a finite set of integrability conditions [65, 66] and this has been automated by Schwarz [62]. Reid's standard form algorithm uses an equivalence class to avoid many of the redundant equations arising in the Riquier-Janet approach. The reader should be aware, that although Chapter 3. Extension of Lie's Algorithm for Systems of PDEs 69 efficient, the formation of the finite set of integrability conditions in that algorithm is fairly complicated (see Algorithm 6 of [53]). Recently Boulier [21] has also obtained a redundancy criterion for integrability conditions. Moreover, the large number of case splittings that result from the application of such algorithms to nonlinear systems are a significant barrier. For work in this area, see Reid et. al. [56], Mansfield [44] and Boulier et al [22]. Standard forms for nonlinear systems that are linear in the leading derivatives, for some ordering -<, are often achievable. This will be the case for all nonlinear systems considered in this dissertation. Example 3.1.10 Consider the simplified orthonomic system (3.3). The minimal compatibility condition between the first two equations is given by — vtt — vt + vyy = 0. Solving this equation for the leading term vyy one obtains the system Uyy = Vt + V, Ut =V, (3.6) wx = vy/vt, Vyy = Vti + Vt, which is in simplified orthonomic form. Since (3.6) has no further integrability conditions, it is the standard form system s/_<( of the original system (3.1). • A system in standard form separates all derivatives into two different classes: Definition 3.1.11 Let sL, be a system in standard form. A principal (parametric) derivative of order A; is a derivative which is (is not) a derivative of some (any) leading derivative. A(N) and I?(JV) denote the set of all parametric and principal derivatives of order up to N respectively. Note that in the above definition A ( o o ) and i ? ( o c ) denote the set of all parametric and principal derivatives of s/x respectively. In some contexts we will regard A ( J V ) and B(-N) as spaces with coordinates given by their derivatives. The definitions of principal and parametric derivatives will prove useful when we discuss formal power series solutions for the DE. Chapter 3. 
Extension of Lie's Algorithm for Systems of PDEs 70 Example 3.1.12 Let sf^ be the standard form system (3.6). The set of leading terms in (3.6) of order up to one is {ut, wx} C -B ( 1 ) . Since there are no zero order leading terms, there are no other first order principal derivatives that can be obtained by differentiation: B(1) = {ut,wx}. The set of all parametric derivatives up to order one is just the set of all terms up to first order that is complementary to i? ( 1 ) : = {U, V, W, UX, Uy, Vt, VX, Vy, Wt, Wy}. The set of second order leading terms in (3.6) is {uyy, vyy} C -B ( 2 ) . Besides these, the remaining second order principal derivatives of sf^ are obtained by differentiating the first order leading terms in We have B(2) - B(l) U {Uxt,Uyt,Utt,Uyy,Vyy,WXX,WXy,Wxt}, (3.7) AW = A W U {UXX,UXy,Vxx,VXy,Vxt,Vyt,Vtt,Wyy,Wyt,Wtt}, where A-2** is the set of all parametric derivatives up to order two, which is just the set of all terms up to second order that is complementary to i? ( 2 ) . • i 3.1.2 Formal Power Series Solution In this section we use the equations of a standard form system (order ra) to construct infinite power series expansions about a given point x for each dependent variable u01 and state the conditions for which these series converge to solutions of the system. To do this, it is convenient to switch to a new multi-index notation. Let c be the ordered p-tuple (ci, • • •, cp) where each c; = 0,1,2, • • •. We denote any partial derivative D?.--D?ua{Xl,--;xp), by u". Here the order of this derivative is |c| = cx + h cp. By convention, we have UQ = ua. Chapter 3. Extension of Lie's Algorithm for Systems of PDEs 71 Total derivative operators are given by Dc = D? •••Dc/. In order not to get confused with the other multi-index notation, we will always use superscripts for the multi-index notation just described and subscripts for the other one. For example, we have DiDc = Dc, ci = et + 1, CJ = CJ, j ^ i. (3.8) Consider a Taylor series expansion for each dependent variable ua about a point x = x°, given by ua(x) = ua(x°) + V Dcua * ? ) * • • • ( » P - X ° P ) C P _ ( 3 9 ) |cFo X = X ° C 1 \ C 2 \ - - - C P \ Given any function u = f(x), which is analytic in a neighbourhood of xn> the right hand side of (3.9) converges to f(x) near XQ. If, however, one chooses arbitrary values for ua and all its derivatives at x0, which corresponds to choosing an arbitrary point in XxU^°°\ then the right hand side of (3.9) will not converge in general. Instead one obtains what is called a formal power series. One can perform formal manipulations such as addition and differentiations on formal power series [63]. Using these formal manipulations, one can test whether a formal power series satisfies the given system of DEs. If so, it is called a formal power series solution of the system. Our goal is to find formal power series solutions of the given system that converge near a given point XQ. Let R be a system of DEs and s/x its standard form, with parametric and principal deriva-tives given by A ( o o ) and I? ( o o ) . The sets A ( o o ) and i? ( o o ) are a partition of the set of all derivatives. Let X be the space of independent variables. The space X x A ( o o ) is called the initial data space of s/_(, since any point P = (xo;u0°°'>) in X X A ( o o ) specifies unique values for all the parametric derivatives in A<-°°'> at x = XQ. Chapter 3. 
Extension of Lie's Algorithm for Systems of PDEs 72 Definition 3.1.13 Let R be a system of DEs and s/x its standard form (of order m), with and the set of N-th order parametric and principal derivatives of sf^. For any N > m, a prolonged standard form sf^ of R is a system in solved form with respect to -< satisfying: (1) R and sf^ have the same solutions. (2) sf^ contains all the equations of sL,. (3) The left hand sides of sf^ are unique (i.e. no two are the same) and form the set of all principal derivatives B^NK A prolonged standard form can be achieved by appending to sf^ an equation for each remaining principal derivative of order at most N. For any principal derivative u°, such an equation is obtained by differentiating any appropriate equation from sL, so that the resulting equation has Uj as left hand side. Moreover, one can repeat this procedure for any principal derivative of any order N and so a prolonged standard form sf^ is well defined. However, a prolonged standard form is not unique since it is possible that more than one equation of s/x can be differentiated to obtain a given principal derivative as left hand side. While any prolonged standard form may be used to define a power series (3.9), different pro-longed standard forms may lead to different power series. To overcome this problem of non-uniqueness, we define a unique prolonged standard form that is distinguished from all the others. Definition 3.1.14 A prolonged standard form for R is called a canonical prolonged stan-dard form if and only if (4) The right hand sides of sf^ consist only of the independent variables and the para-metric variables. A canonical prolonged standard form can be achieved by making all possible implicit substitu-tions from s/_< into the right hand side of a given prolonged standard form. This renders the right hand sides to be independent of all principal derivatives. By starting with an N = oo order Chapter 3. Extension of Lie's Algorithm for Systems of PDEs 73 prolonged standard form, and making the implicit substitutions, one arrives at an N = oo order canonical prolonged standard form. The algorithm proLstandard which achieves a canonical prolonged standard form for a given system R is provided in Appendix A.4. Example 3.1.15 Let R by the system (3.1) which has the standard form s/^  given by (3.6), which is of order m = 2. The set of second order principal derivatives i? ( 2 ) is given in (3.7). In particular, to obtain an equation for the principal derivative wxy, we start with the equation wx = Vy/vt in sf^ which we differentiate with respect to y to obtain WXy = Vyy/Vt - VyVyt/(vt)2. We now make all possible implicit substitutions from sf^ into the right hand side of this equation. This amounts to replacing vyy with vu + vt. Consequently, the equation for wxy is Wxy = (Vtt + Vt)/Vt - VyVyt/(vt)2. Repeating this process for the remaining terms in i? ( 2 ) , we obtain the canonical prolonged standard form s/^2) given by uxt — vx, WX = Vy/Vt, WXX = Vxy/Vt - VyVxt/(Vt)2, uyt = vyi U t = V , WXy = (Vtt + Vt)/Vt - VyVyt/(Vt)2, (3.10) Uyy = Vt + V, Vyy = Vtt + Vt. Wxt = Vyt/Vt - VyVtt/(Vt)2, utt - vt. • Lemma 3.1.16 Given any system R, its canonical prolonged standard form is unique. — ( N ) Proof. Let sf^ and s/_, be any two systems satisfying Definition 3.1.14. If these two systems are not identical, there must be any equation in each system given by uf = rhs\ and uj = rh,S2 respectively, such that the two right hand sides are not the same. By property (1) of Definition Chapter 3. 
Extension of Lie's Algorithm for Systems of PDEs 74 3.1.13, both sf^ and sf_, have the same solutions as R and consequently rhsi = rhs2 must be satisfied by all solutions of R. By property (4) of Definition 3.1.14, this equation involves only parametric derivatives and so cannot be reduced to the trivial equation, 0=0, by implicit substitutions from s/x. Consequently a new integrability condition has been found. But this is a contradiction since by definition s/_< has no further integrability conditions. • In the sequel, whenever we refer to a prolonged standard form sf^ we shall always mean the canonical prolonged standard form. Also, sf^ will denote the infinite order canonical prolonged standard form which, as we have already explained, is well defined. Theorem 3.1.17 Let R be a system of DEs and s/_< its standard form (of order m) with A ( o c ) and the set of all parametric and principal derivatives respectively. Let sf^\ N > m, be the prolonged standard form of R. If -< is a total derivative ordering, then the equations of sf^ induce a map F: X x A w -» B(N\ (3.11) which takes a point (z,w ( J V )) in X x A ( A r ) to a corresponding value in the space of principal derivatives i? ( J V ) as determined by the equations of the prolonged standard form. Moreover, if N = oo then this is a well defined map for any weak total derivative ordering -<. Proof. By property (3) of Definition 3.1.13 and property (4) of Definition 3.1.14, the equations of sf^ define a unique value for each principal derivative in terms of the independent variables and the parametric derivatives. If -< is a total derivative order, then for any N > m the right hand sides must consist of parametric derivatives of order up to N. Consequently, (3.11) is well defined. Notice that this is not true of non-total derivative orderings since the right hand sides may depend on parametric derivatives of order greater than N. In such cases, (3.11) is only well defined when N = oo as we now show. Let -< be any weak total derivative ordering and N = oo. By the definition of a prolonged Chapter 3. Extension of Lie's Algorithm for Systems of PDEs 75 standard form, each principal derivative Uj of any order is given by a unique equation uj = rhs from sf^ where rhs depends only on x and parametric derivatives. Moreover, such an equation was obtained by differentiating an equation of s/x followed possibly by a finite number of implicit substitutions. Since the right hand side of each equation of s/_< contains a finite number of terms, rhs can only depend on x and a finite number of parametric derivatives. Consequently, any point P € X x A ( o o ) defines a unique value for each principal derivative Uj € B^°°K That is, (3.11) is well defined for N = co and any weak total derivative ordering -<. • This theorem shows that specifying a point in initial data space leads to a unique value of ah principal derivatives given by the equations of sf^°K Consequently a unique power series (3.9) can be constructed. Theorem 3.1.18 (Formal Power Series Solution) Let R be an analytic system of DEs and -< be any derivative ordering. Let sf^ and sf^ be the standard form and prolonged standard form of R respectively. For any point in initial data space of sf^,, use the equations of sf^ to determine the values of all principal derivatives. Construct the corresponding formal power series (3.9) for u(x) about the given point x = x°. 
Then u(x) satisfies the equations of sf^°K This theorem, which holds for any derivative ordering, makes no statement about the conver-gence of the power series u(x), only that u(x) satisfies the equations of sf^°\ The proof of this theorem is originally due to Riquier and Janet (see also [65, 66, 63]). They also show that for a large class of initial data and when -< is a weak total derivative order, u(x) does converge and hence we have analytic solutions of R. We remark that it is not well known that these existence and uniqueness theorems of Riquier-Janet hold not only for total derivative orderings, but also for weak total derivative orderings. In the Riquier-Janet theory, admissible initial data is prescribed by analytic initial data functions defined on some boundary curve. So long as these functions are analytic and avoid certain values for which sf^ is undefined, they can be arbitrarily chosen and lead to convergent formal power series solutions. The foUowing wiU be sufficient for our purposes. Chapter 3. Extension of Lie's Algorithm for Systems of PDEs 76 Lemma 3.1.19 Let -< be a weak total derivative order and sf^ be a prolonged standard form. Let P be any point in X x A ( w ) where N finite. This corresponds to specifying the values of x and all parametric derivatives up to order N. Then there exists a convergent formal power series solution that agrees with the given finite data. One can always choose suitable analytic initial data functions which agree with the given finite initial data P and which lead to the required solution. A particular choice of these functions is equivalent to specifying suitable values for the remaining parametric derivatives. 3.2 Local Solvability Recall that for a system of DEs to be locally solvable, one must show that through each point in the locus of algebraic roots of the system, there must pass a solution of the DE. In this section, we will show that if X is a total derivative order then any prolonged standard form sf^ is locally solvable. However this is not true if -< is a weak total derivative order. Even if one considers the infinite prolonged standard form sf^°\ one does not achieve local solvability. We show that sf^ does satisfy the weaker property of analytic local solvability which turns out to be sufficient for the symmetry analysis. To do this requires us to consider the locus of the infinite system sf^ which is the set of points in the infinite jet space that satisfy the system. This locus is weU defined since the map (3.11) is weU defined when N = oo. To see this, write the coordinates of any point in X X ( 7 W as (&;«<">; «<">), (3.12) where x are the independent variables, 2 ( J V ) and 2 ( J V ) are the parametric and principal derivatives up to order N respectively. Now let N = oo and let F be the map in (3.11). Given any point P in 5 ( o o ) , F~1(P) defines a set of points Q in initial data space X x A ( o o ) . Then by construction, for each point Q, (Q,P) G XxU(oo) satisfies the equations of sf^. The locus £ ( o o ) of sf^ is just the set of all such points (Q,P). To show (analytic) local solvability, we will construct a power series solution (3.9) for u(x) Chapter 3. Extension of Lie's Algorithm for Systems of PDEs 77 that passes through a given point in £ ( o c ) . Since such a series is uniquely determined by a given point in initial data space, we first establish a correspondence between initial data space and Q {OO). Lemma 3.2.1 Consider a prolonged standard form sf[^\ N > m, for an analytic system of DEs. 
Let £>(JV) be the locus of points of sf^ where -< is a weak total derivative ordering. ( 1 ) Let -< be a total derivative ordering and N > m. Define I [ C " : I x i ( " U g C ) (3.13) to be the map which takes any point (x;u^N^) to (x; S ( i V ) ; w ( i V ) ) where = F(x;tt(JV)) is determined by the equations of sf^ as given by (3.11). Then i i f ( J V ) is a bisection between X X A^ and QW. (2) If N = oo, and -< is a weak total derivative ordering then is a bisection between X x A ( o o ) and Q{OO) . (3) A function g(x\ 2 ( w ) ) vanishes on Q(n^ if and only if 9\SF(N) = 0. (3.14) Here the symbol \ (N) denotes making all possible direct substitutions from s/^w) in g for all leading terms of sf^ . Also, we require N > m if -< is a total derivative order and N = oo if -< is a weak total derivative order. Proof. We shall first prove (1). Since -< is a total derivative ordering, (3.11) is well defined for any N > m. Consequently, given any point P £ X X Q = KW(P) = (P;F(P)) is a well defined point in iV-jet space. By construction Q must satisfy the equations of sf^ and so P £ £ w . This proves that (3.13) is well defined. It is one-to-one by construction. It is also onto since given any point Q(x; u(N); 2 ( J V )) £ g^N\ Q must satisfy the equations of sf^. By Theorem3.1.17, the values of all the principal derivatives S ( i V ) = F(x;u(N)) through (3.11). Chapter 3. Extension of Lie's Algorithm for Systems of PDEs 78 Hence Q = 7i' ( A r )(x; u ( J V )). This completes the proof of (1). Notice that for N > m finite, we do not necessarily have a bijection when -< is not a total derivative ordering since (3.11) may not be a well defined map in this case. Let N = co. Then (3.11) is weU defined. The above arguments for (1) also holds in this case. Consequently (2) is proved. To prove (3), first let -< be a total derivative ordering and N > m. By assumption g vanishes on QW. That is, for each point Q(x; u^; «<">) £ glN\ g(Q) = 0. By (1), = F{x\ 2(Ar>) and so g(x;rtN);F(x;u(N))) = 0. Hence, making ah possible direct substitutions from sf^ in g for ah leading terms of sf^ results in zero. This is just (3.14). The case N = oo when -< is any weak total derivative ordering is proven likewise. • The infinitesimal symmetry conditions (2.38) involve the vanishing of X ( n ) A on the locus £ ( i V ) of R. Lemma3.2.1(3) wiU allow us to precisely test for the vanishing of expressions on the locus £> w . We just substitute from the equations of sf^ into the given expressions and set the result to zero. Since the equations of sf^ are already in solved form and only direct substitutions are required in (3.14) (no differentiations required), there is no ambiguity in how substitutions are to be made. Furthermore, each equation of sf^ is used at most once and no infinite loops of substitutions are ever encountered (cf. Example 2.1.2 in [24]). Definition 3.2.2 Let £>(oo) be the locus of Through the bijection (3.13), any point in £ ( o o ) corresponds to a point in initial data space X X A ( o o ) which in turn defines a formal power series (3.9). The analytic locus of sf^°\ denoted by ^ o o ) , is the set of all points in corresponding to convergent formal power series. We emphasize that s/^,00' is not in general locally solvable since analytic solutions only pass through the points in However, we can define a weaker form of the local solvability Chapter 3. Extension of Lie's Algorithm for Systems of PDEs 79 criterion which will be sufficient in the subsequent theory. 
Definition 3.2.3 sf^ is said to be analytically locally solvable if and only if through every point in ^ ° o ) , there passes an analytic solution of sf^°°K Theorem 3.2.4 (Local Solvability) Let sf- be an analytic standard form system, of order m, with sfW its prolonged standard form. If -< is a weak total derivative ordering, then sf^ is analytically locally solvable. If -< is a total derivative ordering, then sf^ is locally solvable (in the usual sense of Definition 2.2.9) for N > m. Proof. We must show that through any point Q(XQ,SQ00';U0°°^) in the analytic locus ^ ° c ) of sf^°\ there must pass one analytic function f(x) which is a solution of sf^°K To do this, we first consider the point P(XQ', U ^ ) in initial data space which corresponds to the given point Q. P in turn yields a unique formal power series solution u(x) of sf^°K If we can show that u(x) passes through the point Q, then since Q is in u(x) must be convergent and this provides the required solution. To show that u(x) passes through Q, calculate the values of all the derivatives of u(x) at x = x 0 . This gives a point Q lying in g(oo). The bijection i i ' ( o o ) of (3.13) associates to P a unique point. Hence K<°°\P) = Q = Q, and u(x) must pass through Q. Analytic local solvability is thus proved for the case N = oo. We now prove the case when -< is any total derivative ordering and N > m. We use similar arguments to the above N = oo case, but there is an additional problem here. A point in the locus £>(w) only defines the values of all parametric derivatives up to order N and this is not sufficient to define a unique power series expansion. Here is how we overcome this problem. For any Q(xo;u0N); u0N)) € £>(N), P(x0;u0N)) determines the value of x and all parametric derivatives up to order N. By Lemma 3.1.19, there exists an analytic solution u(x) passing through P. At x = XQ, the values of x, u(x) and all derivatives of u(x) up to order N define Chapter 3. Extension of Lie's Algorithm for Systems of PDEs 80 a point Q 6 £>(JV). By construction, the values of x and all parametric derivatives of Q agree with those of Q and corresponds to the point P defined earlier. On the other hand, since -< is a total derivative ordering, the bijection of (3.13) holds for any N > m. Consequently there is only one point in the locus which corresponds to P. Thus KW(P) = Q = Q. Hence u(x) must pass through Q and local solvability is proved for this case. • For total derivative orderings, the usual local solvability criterion is sufficient to obtain the infinitesimal symmetries of R. However, for weak derivative orderings, analytic local solvability is what is required. This will become clear in the next section. Note that Theorem 3.2.4 is more general than the local solvability theorems given in [47] (Corollary 2.74 and 2.80) which rely on the Cauchy-Kovalevskaya existence and uniqueness theorem [47, p.l62_/J] (Theorem 2.73). This is because the existence and uniqueness theorem of Riquier-Janet (Theorem 3.1.18) applies to more general systems of DEs than those of Cauchy-Kovalevskaya type (or normal systems). In particular, as the following proof of Lemma2.2.12 shows, local solvability in the Cauchy-Kovalevskaya case is a corollary of Theorem 3.2.4. Proof of Lemma 2.2.12 The system R, given by (2.24), is in solved form with respect to the lexicographical ordering -<iex, given in Example 3.1.2. It is a standard form, since there are no further compatibility conditions. It is also a prolonged standard form sf^. 
Since <iex is also a total derivative ordering, Theorem 3.2.4 with N = n, shows that R is locally solvable. • Example 3.2.5 Consider the systems R and R, given by (2.39) and (2.40) respectively. Though these two systems have the same solutions, in Example 2.3.6 it was asserted that R is the only one that is locally solvable. We now justify this assertion. Using the lexicographic ordering <iex of Example 3.1.2, the standard form sf<Ux of R is also given by (2.39), since the system is already in simplified orthonomic form with no further integrability conditions. Applying the algorithm proLstandard, we obtain the corresponding Chapter 3. Extension of Lie's Algorithm for Systems of PDEs 81 second order prolonged standard form sf^\ which turns out to be the system R, given by (2.40). Since -<iex is a total derivative ordering, Theorem 3.2.4 shows that R is locally solvable. Consequently R is not locaUy solvable since the locus of R is strictly a subset of the locus of R. • Here is an example to illustrate the need to go to the infinite system sf^ when a weak total derivative ordering which is not a total derivative ordering is employed. Example 3.2.6 Consider again the potential system R, given by (2.41). Using the potential ordering -< = -<po i of Example 3.1.2 with ( ^ i , ^ ) = (x,t), R is already in simplified orthonomic form. It has the integrability condition (2.43) which must be appended to the system in order to achieve a standard form. The resulting third order system vx = F(x,t,u™), vt = G(x,t,u(2)) + uxx, u x x x = DtF(x,u™)-DxG(x,u™), has no further integrability conditions. Since it is also a simplified orthonomic form, it is the standard form sf^ of R. Since -< is not a total derivative ordering, the corresponding prolonged standard form sf^ will not, in general, be locally solvable for any finite value of N > 3. To see this, it wiU be convenient to let (F, G) — (uxx + u, 0), though the foUowing discussion holds more generaUy. Here, the standard form sf^ becomes vx — uxx 4~ it, vt = uxx, (3.15) V-xxx — V,xxt -\- Ut-Chapter 3. Extension of Lie's Algorithm for Systems of PDEs 82 The algorithm proL standard leads to the prolonged standard form sf^ given by VXXX — Uxxtt ~f~ Uxx ~f~ Uxt ~f" Uf-f, , VXx = U x x t + UX + Ut, Vxxt = U x x t t + Uxt + Utt, vx — uxx ~\~ u, Vxt = Uxxt + ut, vxtt = uxxtt + +utt, (3.16) Vt — uxx, vtt = U x x t , Vttt = uxxtt, uxxx = uxxt -f- Ut-As can be seen, this is a fourth order system, but the leading terms are of order at most three. The locus Q of roots of this algebraic system is a subset of X x C/ ( 4 ), in which ah fourth order derivatives of v are arbitrary (there are no equations involving fourth order derivatives of v). However, one can easily see that vxttt = Vtttt+uttt is a fourth order differential consequence which must be satisfied by all solutions of the system. Consequently, there are points in Q through which there passes no solutions (u(x),v(x)) of the system (v(x) cannot have any arbitrary value for its fourth order derivatives), and the system is not locally solvable. The same problem persists for any prolonged standard form sf^\ where N > m is finite. However, by Theorem-3.2.4, the infinite system sf^ associated with R is analytically locaUy solvable. • Since non-total derivative orderings require one to consider the infinite system sf^°\ it may be argued that one should not use such orderings. However, for a given example, one may have no choice in the matter. 
In the above example, we are forced to solve the equations of R for vx and vt in terms of higher order derivative terms on the right hand side, since F and G are not explicitly given. Such a situation will again arise in §5.1 where we consider a general class of potential systems. In order to effectively handle the infinite system sf^ we have the following lemma, which is proven in Appendix C: Lemma 3.2.7 Let R be an analytic system of DEs (2.19) with standard form s/x, of order m, and prolonged standard form sf^ (N > m). Let g(x,u^n)) be any function of its arguments and N = max(m,n). Then: Chapter 3. Extension of Lie's Algorithm for Systems of PDEs 83 (1) g(xMn))\sJm = g(x,u(n))\sf(r)-(2) If g(x,u^)\ ( „ ) = 0 for all solutions u = f(x) of R, then g(x,u(-"^)\ JN) = 0. 1 / SJ-< Here the symbols | JN), | , ( < » ) and | („) denote making direct substitutions from the solved equations of sf^, sf^ and from the equations Uj = djf(x) describing T^, respectively. Note that (2) allows one to pass from statements holding on the solution space of R to an equivalent statement involving only the equations of sf[^\ 3.3 A New Symmetry Algorithm Let us now return to the problem of finding symmetries of a system R of DEs. In order to apply Theorem2.3.4, the system must be locally solvable. Suppose R is not locally solvable. Then if possible, one can use a total derivative ordering -< to obtain the corresponding prolonged standard form sf^ for some N. By Theorem 3.2.4, sf^ is locally solvable and Theorem 2.3.4 leads to the symmetries. If R is such that a weak derivative ordering must be used, then one determines the infinite prolonged standard form sf^K However, sf^ is only analytically locally solvable and not locally solvable. Consequently, the following result, which is proved in Appendix B, is required: Lemma 3.3.1 Let sf^ be a prolonged standard form with respect to a weak total derivative ordering -<. Let £>(oo) andg^^ be the corresponding locus and analytic locus respectively. Let Q be a Lie group of point transformations acting on X x U with infinite extension (7 ( o o ) acting on X x U ^ . Then (?(od) is a symmetry group of £>(oo) if and only if it is a symmetry group ofg^°°\ The main difficulty in proving this lemma is that points in £>(oo) that are not in ^ ( o o ) correspond to formal power series solutions which are not convergent. How does one make sense of the induced action of Q on such series? In fact, we do not attempt to make sense of this. Instead, we only consider the point by point mapping induced by Q. We exploit the existence of nearby (in terms of projected jet coordinates) analytic solutions to get the invariance of the locus. Chapter 3. Extension of Lie's Algorithm for Systems of PDEs 84 Let the equations of sf^ be written as S a ' J = -uaj + fa'J = 0, uaj G B w , where fa,J depends on x and a finite number of parametric derivatives. Also, for any infinites-imal generator X , given by (2.32), define its infinite extension, given by X = f (x, u)dXi + rja(x, u)du« + rfjdu<*, (3.17) i = •••,£>, a = l,---,q, \J\ > 0. where nj are given by the infinitesimal extension formula (2.35). As for the total derivative operator identity (2.18), this is an infinite sum. However, when applied to a given function g(x, M ( n )), only a finite number of terms are ever needed. Lemma 3.3.2 Let R be a system of DEs with standard form sf^ (of order m) and prolonged standard form sf^ (N > m). 
Then Q is a symmetry group of R if and only if for every infinitesimal generator X of Q, X £ " ' J = 0, whenever (x, u(N)) e g(N\ Uj G £ ( A f ) , (3.18) where N = m if -< is a total derivative order and N = oo if -< is a weak total derivative order. Proof. Since R and sf^ have the same solutions, they must admit the same symmetry group Q. Let N = m and -< be a total derivative order. Then by Theorem3.2.4, sf[^^ is locally solvable and an application of Theorem 2.3.4 leads to the desired result. Let N = oo and -< be a weak total derivative order. Let £>(oc) and ^ ( o c ) be the locus and the analytic locus of sf^ respectively. Let Q be the symmetry group of sf^°K For any T£ G G, TS maps any analytic solution to another analytic solution (cf. Theorem 2.2.16). By Theorem-3.2.4, is analytically locally solvable and so r^° c ) is a symmetry of ^ o o ) . By Lemma3.3.1 it is also a symmetry of £>(oo). Moreover, the converse of each of the previous statements holds. Consequently G is a symmetry group of sf^ if and only if (7(<x,) is a symmetry group of £>(<x>). The latter is equivalent to (3.18) (cf. Theorem 2.1.20) and the lemma is proved. • Chapter 3. Extension of Lie's Algorithm for Systems of PDEs 85 If -< is any weak total derivative ordering, then N = oo and the need to solve (3.18) for the infinite number of equations of s/^° o ) makes the task of finding the symmetry group Q seem intractable. If -< is a total derivative order, then sf^ is a prolonged standard form for any N > m and only the finite number of equations of sf^ need to be considered in (3.18). But even in this case, the need to prolong all the equations of to order at least N = m can lead to a very large system of equations for sf[f\ Fortunately, we can show that the infinitesimal symmetry conditions for sfW reduces to an equivalent set of conditions involving significantly fewer equations. Theorem 3.3.3 Let R be a system of n-th order analytic DEs (2.19) with standard form sL (of order m) and prolonged standard form sf^ (N > m). Assume that -< is a weak total derivative order. Then Q is a symmetry group of R if and only if for every infinitesimal generator X of Q, .. ( X A J Am = 0, A = max(m,n), p = !,-••,I, (3.19) where the symbol | (jv) denotes making all possible direct substitutions from the equations of • 5 J - ; sfW. The proof of this theorem, which is presented in Appendix C, is of a technical nature. One must show that the symmetry conditions (3.19) are equivalent to those of (3.18). The key observation is that each equation of sf^ consists of a finite number of terms and is derived from the equations of R by a finite process involving differentiations, implicit substitutions, and applications of the Implicit Function Theorem. By determining how the invariance conditions (3.19) are transformed under each of these finite operations, we show that each of the invariance conditions in (3.18) are satisfied if (3.19) are assumed. Even in the N = oo case, each equation of sf^ is derived from the equations from R by a finite number of operations and we arrive at the desired result through induction. We note that Lemma3.2.1(3) will be used at some stage so that one can pass from expressions that vanish on £>(JV) (cf. (3.18)) to expressions that vanish after making direct substitutions from the equations of sf^K Chapter 3. Extension of Lie's Algorithm for Systems of PDEs 86 Theorem 3.3.3 is a significant improvement over Theorem2.3.4 when used for calculating the point symmetries of systems of PDEs. 
The equations of R need not be locally solvable in order to apply the symmetry conditions (3.19). Moreover, only a finite number of equations of the prolonged standard form for R are required (for direct substitutions). Hence, one avoids the problems associated with Theorem 2.3.4 with regard to infinite systems of equations. Even for finite locally solvable systems, Theorem 3.3.3 is much more efficient than Theorem 2.3.4 in general. The symmetry conditions of Theorem 3.3.3 are reduced versions of the symmetry conditions of Theorem 2.3.4. Theorem 3.3.3 leads to the following algorithm which correctly calculates the symmetry group of R: Algori thm 3.3.4 1. Determine X A M , p = 1, /. 2. Determine the corresponding prolonged standard system sf(f\ N = max(m, n) and make direct substitutions from sf^ in the expressions obtained in the previous step 1. 3. Demand that the resulting expressions vanish identically. Notice that in step 1, we only need to apply X to the equations of the finite system R. For any weak total derivative ordering, one does not need to consider the infinite system sf^. In step 2, there is no ambiguity in how the substitutions are to be made. In particular, the equations of sf^ are used directly as is: Replace all occurrences of the leading terms of sf^ in the expressions from step 1, with the corresponding right hand sides of sf^. This is a significant improvement over the corresponding step 2 of Algorithm 2.3.9 (which is in common use), where it is not always clear how one 'makes substitutions from R and its differential consequences up to order n' (recall the calculation of the symmetry group of (2.41)). Also, it is not uncommon that, in applying step 2 of Algorithm 2.3.9, one may have to use an equation of R more than once as substitutions and if one is not careful, one may get into an infinite Chapter 3. Extension of Lie's Algorithm for Systems of PDEs 87 loop of substitutions (cf Example 2.1.2 in [24]). In contrast, each equation of sf^ is used at most once in the direct substitution step 2 of Algorithm 3.3.4. Consequently, Theorem-3.3.3 is a significant improvement over Theorem2.3.4 (which is one of the common ways that the infinitesimal symmetry conditions are stated). Moreover, Theorem 3.3.3, its proof and Algorithm 3.3.4 appear to be new. Example 3.3.5 Consider the calculation of the symmetry group Q of R, given by (2.41) where (F,G) = (uxx + u,0). Let the infinitesimal generators of Q be X = £(x,t, u, v) dx + T(X, t, u, v) dt + r](x,t, u, v) du + p(x,t, u, v) dv. By Algorithm 3.3.4, we first calculate X[-vx + F(x,t,u™)] = ^0M,« ( 2 ),« ( 2 )), X[-vt + G(x,t,u™) + uxx] = <f>2(x,t,u^,vW), where X is given by (3.17). The standard form of R is the third order system (3.15), and the prolonged standard form sf[^\ N = max(3, 2), is given by (3.16). Hence in the next step of the algorithm, one makes direct replacements for vx, vt, vxx, vxt and vtt occurring in (3.20) using the equations of s/^3) as they stand. (Since c/> depends only on derivatives up to order 2, there is no need to use the remaining equations of sf^\ whose leading terms are of order 3.) We now have: X[-vx + F] , 3 ) = fi{x,t,uW), X[-vt + G + uxx] ( 3 ) = <£2(z, t, u(3)). «A Notice that (f) can only depend on x and the parametric derivatives of sf^. In particular, <f> is independent of the principal term uxxx. Consequently, no more substitutions from sf^ are required (each equation of s/^3) is used at most once). 
In the last step of the algorithm, the determining equations for X are the solutions to (f> = 0. Since the unknowns (£,r, r),p) of X depend only on (x,t, u, v) and are independent of the parametric derivatives of order greater than zero, we must equate all such like derivatives to zero. What results is an overdetermined system of PDEs which must be solved for the unknown infinitesimals of X . We shall not Chapter 3. Extension of Lie's Algorithm for Systems of PDEs 88 complete this calculation here, since a more general symmetry calculation, involving arbitrary F and G in (2.41), is provided in §5.1.1. However, it is clear, that each step of Algorithm 3.3.4 is well defined and involves only a finite number of operations. • Chapter 4 Potential Systems of a Given System of PDEs The potential systems approach [16] is a general method for finding nonlocal symmetries of PDEs with two or more independent variables. In this chapter, the mathematical framework of the potential systems approach is presented. Much of the material in this chapter can also be found in Bluman and Doran-Wu [13]. In §4.1, we show how to construct potential systems S, associated with a given system of PDEs R, and discuss the many properties they enjoy. Of particular interest to us will be the fact that the solution space of R is nonlocally embedded in the larger solution space of S. Studying the potential system S can then lead to nonlocal information for the original system R. For example, point symmetries of S can lead to nonlocal symmetries of R. Such nonlocal symmetries are called potential symmetries. In §4.1.1, examples of potential symmetries and their applications are given. The construction of potential systems S requires the use of a conservation law (divergence free expression) of the given system R. Different conservation laws can lead to different potential systems which may yield different nonlocal information for R. One looks for conservation laws through linear combinations, involving coefficients called factors, of the equations of R. If R is self-adjoint, Noether's theorem [15, 47] shows that symmetries of J? can lead to the required factors. Since most of the PDEs we consider are not self-adjoint we rely on the Adjoint Theorem, presented in §4.2.1. The Adjoint Theorem provides necessary conditions for factors leading to conservation laws of the given system R of PDEs. The usual statement of the Adjoint Theorem involves the 89 Chapter 4. Potential Systems of a Given System of PDEs 90 vanishing of certain expressions, given in terms of the Frechet derivative of R, on the solution space of R. Through the use of the prolonged standard form for R, which allows one to pass from statements holding on the solution space of R to statements involving the equations themselves, we arrive at a more algorithmic formulation of the Adjoint Theorem. Given a set of factors leading to a conservation law of R, more than one potential system can be constructed. In §4.2.2, we show how to determine which of these potential systems are useful for finding potential symmetries through potential factors and potential conservation laws. Given a useful potential system, the potential system construction may be repeated to obtain higher generation potential systems associated with R. We show how point symmetries of higher generation potential systems can also yield nonlocal symmetries of R. 
In §4.3, a complete potential symmetry analysis of the nonlinear diffusion equation is per-formed: All first and second generation useful potential systems are constructed in §4.3.1 through the use of the Adjoint Theorem. Then, for each of these potential systems, a complete symmetry classification is performed in §4.3.5. Examples of how potential symmetries arise as point symmetries of first and second generation potential systems are found. In §4.4, we show how necessary conditions for the linearization of a given system of PDEs can be derived in terms of certain linearizing factors admitted by the system. This is very useful since, during the potential systems construction process, the discovery of linearizing factors can alert one to the possibility of linearizations. The linearization algorithms of Bluman and Kumei [15] can then be used to seek an explicit linearizing transformation, if one exists. Let us now review the concepts of local and nonlocal symmetries: Consider an infinitesimal generator of the form X = C(x,u^)dx, + rla(x,u^)dUa, t = l , a =1, (4.1) Notice that we have switched to using subscripts to index the dependent variables. This is particularly useful when discussing potential systems (especially when we come to discuss higher generation potential systems where we will need to use superscripts to distinguish between Chapter 4. Potential Systems of a Given System of PDEs 91 potential variables of different generations). Henceforth, we shall stick with this new subscript notation for all dependent variables. In particular, we have Definition 4.0.1 Let R{u} be a system of n-th order DEs (2.19) with standard form sf^, (order ra) and prolonged standard form sf^ (N > m). A local symmetry admitted by R{u] is an infinitesimal generator of the form (4.1) such that where X is the infinite extension of X , given by (3.17), and | AN) denotes making aU possible direct substitutions from sfi^K Moreover, X is called a point symmetry when k = 0, a contact symmetry when k = q = 1, a generalized (Lie-Backlund, higher order) symmetry [15, 47] when it is not a point or contact symmetry. The reason for the value of N is that X A is of order n + k and the only possible direct substitutions from sf^ are from sf^\ where N > n-\-k. We also have, by definition, TV > ra, where m is the order of the standard form s/x. When k = 0, Theorem 3.3.3 shows that the solutions of (4.2) lead to the infinitesimal generators of the point symmetry group admitted by R{u). A similar statement can be made for contact symmetries (k = q = 1) [15]. However, how does one interpret the solutions of (4.2) for generalized symmetries? Here, we just mention that there are technical difficulties in determining the actual group of transformations corresponding to the infinitesimal generators of a generalized symmetry (see [5, 15, 47] for more details). Definition 4.0.2 A nonlocal symmetry of R{u] is a continuous symmetry admitted by R{u] which is not characterized by an infinitesimal generator of the form (4.1). As previously mentioned in §1.3, there are many specialized approaches to finding nonlocal symmetries. Some are more heuristic than others. We will use the potential systems approach, which provides a general framework for finding nonlocal symmetries of R{u}. ua,J - Dj(ua). (4.2) Chapter 4. Potential Systems of a Given System of PDEs 92 4.1 Construction and Properties of Potential Systems Consider an n-th order system R{u} of PDEs in which one PDE is a conservation law Difl(x,u(n~^) = 0. 
Without loss of generality, R{u] is given by All(x,uW)=0, M = 1, 1, Dif(x,u^)=0. If p = 2, let (2:1,0:2) = (x,t). Through the conservation law in (4.3), one can introduce a scalar potential v and form the potential system S{u, v} given by A A i (a; ,u ( " ) ) =0, p = 1 , 1 , vt + P = 0, (4-4) -vx + P = 0. An example of a potential system with two independent variables is given in Example 1.3.2. If p = 3, let (xi, X2, £ 3 ) = (t, x, y). Through the conservation law in (4.3), one can introduce a vector potential v = («i, 1^2,^3) and form the potential system S{u,v} given by A^x, «(»>)= 0, p= 1, 1, P-v3,x + v2,y =0, (4.5) P ~ vhy + v3tt = 0, P ~ v2,t + vi,x =0, where t>J]Xj denotes the partial derivative dvi/dxj. An example of a potential system with three independent variables is given in Example 1.3.3. If p > 3, then the potential systems construction is given in Appendix D. Though we shall only consider examples of potential systems where p = 2 or p = 3, the results of this chapter apply equally well for any p > 2. Potential systems are determined systems in the case of two independent variables, and under-determined in the case of three or more independent variables. Here, a system is said to be determined (under-determined) if there exists (doesn't exist) a unique solution to the Chapter 4. Potential Systems of a Given System of PDEs 93 system for a given set of data (initial or boundary). In the case of three independent variables, the nonuniqueness is due to the degree of freedom in the gauge function: Given any solution (u,v) = (f(x),g(x)) satisfying a given set of data, one can always choose a nonzero gauge function x(x) s u c n that (u, v) = (f(x), g(x)+dxx(x)) still satisfies the same set of data and hence must also be a solution. Here, one must impose an additional differential constraint in order to make the potential system determined. For PDEs with more independent variables, more than one differential constraint is needed to make the associated potential systems determined. Let us now discuss the various properties of potential systems. Property (i) Local existence of potentials Given any solution u = f(x) of a given system R{u} of PDEs, there always exists a function v = g(x) such that («, v) = (f(x),g(x)) is a solution of the associated potential system S{u,v}. To see this, just substitute u = f(x) into (4.4) to obtain vt = h}(x), vx = h2(x), for some functions h}(x) and h (x). This is a standard form sf^ (with respect to any ordering -<;), since the only integrability condition of (4.4b,c) is (4.3b) which is identically satisfied by u = f(x). An application of Theorem3.1.18 then leads to the local existence of potentials v = g(x). Property (ii) Nonlocal and Noninvertible embedding of solutions In any potential system, the potential variables v appear only in derivative form and cannot be expressed solely in terms of x, u and derivatives of u. Clearly v depends nonlocally on the original dependent variable u. For example, in the potential nonlinear diffusion equation (1.5), we have Chapter 4. Potential Systems of a Given System of PDEs 94 By Property (i), we have that the solution space of R{u} is embedded in the solution space of S{u, V}. Since the potential variables v are nonlocal with respect to u, we say that this embedding is nonlocal. 
Moreover the embedding is noninvertible since the function v = g(x) will not be unique: For example, in the case of two independent variables, one can always add an arbitrary constant, c, to g(x) such that (u, v) = (f(x),g(x) + c) is still a solution. In the case of three independent variables, one can add the gradient of an arbitrary function of the independent variables x(x) such that (u, v) = (f(x),g(x) + dxx(x)) 1S still a solution. The function x(x) 1S called a gauge function. Property (iii) Projection of solutions In Examples 1.3.2 and 1.3.2, it was shown that the original PDE R{u] is a differential consequence of the associated potential system S{u, v}. Consequently, if (u, v) = (f(x),g(x)) is any solution of S{u, v}, then u = f(x) is a solution of the original PDE R{u}. As we shall see in §4.2.2 this is not always the case and one must be careful to select potential systems that do satisfy the projection of solutions property. This projection property together with Property (ii) ensures that any symmetry of S{u, v} is also a symmetry of R{u} and vice versa. Property (iv) Nonlocal information for R Using the projection of solutions property, any invertible transformation in (x, u, v)-space induces a transformation in (a;, ii)-space. This induced transformation will be noninvertible and nonlocal if it depends essentially on the potential variable v. Consequently, the study of potential systems through qualitative or quantitative methods which are not coordinate dependent may yield new results for the original PDEs and vice-vera. For example, Bluman and Shtelen constructed new Schrodinger equations that are nonlocaUy related to the free particle equation [18] and new classes of diffusion processes that are nonlocally transformed to the Wiener process [20]. In [3], Anco and Bluman construct nonlocal conservation laws of a PDE through its potential systems. In the next section, we show how nonlocal symmetries can Chapter 4. Potential Systems of a Given System of PDEs 95 be found through potential systems. 4.1.1 Potential Symmetries and Their Applications Algorithm 3.3.4 can be used to find all the infinitesimal generators admitted by S{u,v} of the form X = f (a;, u, v)dx + rj(x, u, v)du + p(x, u, v)dv. The flow of these infinitesimal generators then lead to the one-parameter point symmetries of S{u, v}, which are of the form x = X(x,u, v; e), u — U(x,u,v;e), (4.6) v = V(x,u, v; s). Using the projection of solutions property, this symmetry transformation of S{u, v} induces a symmetry transformation of R{u} given by (4.6a,b). If X and/or U depend essentially on v, then we have found a nonlocal symmetry of R{u}. The corresponding infinitesimal generator is obtained by projection of X to (x, w)-space and is given by Y = u, v)dx + r)(x, u, v)du, where v needs to be explicitly replaced by some nonlocal expression involving u. It follows that the infinitesimal X of S{u, v} yields a nonlocal symmetry of R{u}, if and only if its components ( £ , 77) depend essentially on v. These observations lead to the following definition and theorem: Definition 4.1.1 A potential symmetry of R{u}, related to potential system S{u, v}, is a point symmetry of S{u,v} which does not project onto a point symmetry of R{u}. Theorem 4.1.2 A potential symmetry of R{u} is a nonlocal symmetry of R{u}. In particular, if Xs = £\x, u, v) dXi + na(x, u, v) dUa + p1(x, u, v) dVl, (4.7) Chapter 4. 
Potential Systems of a Given System of PDEs 96 is a point symmetry of S{u,v}, then Xs is a potential symmetry of R{u} if and only if at least one component of (£, n) depends essentially on v; otherwise X 5 projects onto a point symmetry of R{u], namely XR = C(x,u)dXt + na(x,u)dUa. (4.8) Conversely, a point symmetry XR, given by (4-8), yields a nonlocal symmetry of S{u,v} if and only if there exists no p^(x, u, v) such that X 5 = XR + p1 dVl is a point symmetry of S{u, v}. Examples of potential symmetries are given in Examples 1.3.4 and 1.3.4. Applications of poten-tial symmetries to find noninvertible linearizations of PDEs are given in Example 1.3.5 as well as in §4.4. Here are examples of how potential symmetries can lead to new invariant solutions and to exact solutions of new boundary value problems. Example 4.1.3 New Invariant Solutions The point symmetries and the invariant solutions of the quasilinear hyperbolic equation R{u}, given by Utt = [f(u)ux]x, were studied by Ames, Lohner and Adams [2]. Consider the associated potential system S{u, v}, given by vx = ut, vt = f(u)ux. Pucci and Saccomandi [51] showed that, for any choice of f(u), S{u,v} admits the symmetry X = (v + x)dx + (u + t)dt, which is a potential symmetry of R{u}. Invariants of X are given by z = f^f, w1 = u, w2 = v. (4.9) When seeking invariant solutions of the form u = w1(z), v = w2(z), (4.10) Chapter 4. Potential Systems of a Given System of PDEs 97 S{u,v} reduces to w2z = -zwl, w1 = f-l(z2). For any / , the invariant solutions of S{u, v} are given by u = / - V ) , v = -zf~\z2) + j r \ z 2 ) dz, where the similarity variable z is given implicitly by (4.9). By the projection of solutions property, u = f~1(z2) is then a solution of R{u}. In the case f(u) = (logtt)2, two sets of invariant solutions of S{u, v} can be obtained: u = ez\ v = (1 - zx)eZl + ci , [(1 - ^)e Z l + c1 + x] - z^e21 + t] = 0 ; u = e~z\ v = -(1 + z2)e-z* + c 2, [-(1 + z2)e~Z2 + c2 + x] - z2{e~Z2 + t] = 0. Consequently, two solutions of R{u}, given by u = eZl and u = e~Z2, are found.1 See also Bluman and Shtelen [19] who extend the nonclassical method to potential sys-tems. In their paper, they also consider nonclassical Lie-Backlund symmetries of potential systems. • Example 4.1.4 New Boundary Value Problems Point symmetries can be used to construct the exact solution to a given boundary value problem (initial value problem) associated with a PDE R{u} [15]. If the point symmetries of R{u} do not yield the solution of the BVP, then potential symmetries of R{u} may lead to the solution. For example, consider the BVP (initial value problem) posed for the wave equation R{u}: utt = c2(x)uxx, — OO < X < oo, u{x,0)=U(x), (4-H) 0 < t < oo. ut(x,0) = W(x), 1Pucci and Saccomandi also showed that potential symmetries can be used to obtain an even wider class of solutions for R{u}. In the above standard approach, one uses a potential symmetry X to obtain the invariants (4.9). Solutions of S{u, v} which are in terms of these invariants then lead to solutions of R{u] by projection. If instead, one directly looks for solutions of R{u} in terms of the invariants, then besides the above solutions, one also finds the solutions u = e23, where 23 satisfies [ c 3 e * 3 + x] — Z3[e*3 + t] = 0. Chapter 4. Potential Systems of a Given System of PDEs 98 One can embed this BVP for R{u} in a BVP for the associated potential system S{u, v}, given by v x = ^ u t , u(x,0)=U(x), vt = ux, v(x,0) = V(x). 
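Parenthetically, before relating the boundary data, one can confirm by elimination that this potential system does return the wave equation R{u}: cross-differentiation of v_x = c^(-2)(x) u_t and v_t = u_x (the form we assume for the system just displayed) gives u_tt = c^2(x) u_xx. A minimal symbolic sketch (sympy assumed; names are ours):

    import sympy as sp

    x, t = sp.symbols('x t')
    u = sp.Function('u')(x, t)
    c = sp.Function('c')(x)          # variable wave speed c(x)

    # Assumed potential system for the wave equation:
    #   v_x = c^(-2) u_t,   v_t = u_x.
    vx = u.diff(t) / c**2
    vt = u.diff(x)

    # The compatibility condition D_t(v_x) = D_x(v_t) is u_tt = c^2(x) u_xx.
    print(sp.simplify(c**2 * (vx.diff(t) - vt.diff(x))))   # u_tt - c^2 u_xx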
Using the relationship between vx and wt, one can relate the boundary data V{x) to W(x): V'(x) = ^ 1 so that the solution (u,v) = (f(x,t),g(x,t)) to the BVP for S{u, v} yields the solution u = f(x,t) to the BVP for R{u}. When c(x) satisfies c' = m sin(/ilog c), m , / i £ E , (4-13) so that it is a bounded wave speed describing wave propagation in two-layered media with smooth transitions, S{u, v} admits the point symmetry X = [2<f> cosh t] dx + [2(>' - 1) sinh t] dt + [(2 - u cosh t-<f>v sinh t] du (4.14) — [4>'v cosh t + c24>u sinh t] dv, where 4>(x) = c(x)/c'(x). Since the infinitesimal of u depends on v, X is a potential symmetry oiR{u). For any bounded wave speed c(x), the point symmetries of R{u} do not lead to the solution of the BVP for R{u}. However, Bluman and Kumei [14] were able to derive such a solution for the bounded wave speed c(x) satisfying (4.13) through the use of the potential symmetry X , given by (4.14). In particular, they used X to first derive the solution to the BVP for S{u, v} and then obtained the solution to the BVP for R{u} by projection. Complete details can be found in their paper. • In this dissertation, we study point symmetries of the potential system S{u,v}, rather than studying those of the reduced system G{v}, obtained by eliminating the original dependent variables u in S{u, v}. The infinitesimal generators admitted by G{v} are of the form Z = £(x,v)dx + r](x,v)dv. Chapter 4. Potential Systems of a Given System of PDEs 99 If £ depends essentially on v, then Z induces a nonlocal symmetry of R{u}. However, the symmetry transformation corresponding to the flow of Z determines only how x and v are transformed with no information on how the original variables u are correspondingly trans-formed. In general, nonlocal symmetries of R{u} cannot be obtained explicitly from point symmetries of G{v}. However, it must be emphasized that point symmetries of G{v} that cor-respond to nonlocal symmetries of R{u} may not be obtainable through S{u, v}. The problem is in obtaining an explicit realization of these nonlocal symmetries of R{u}. 4.2 Conservation Laws and Useful Potential Systems 4.2.1 Conservation Laws Up to now, in order to obtain potential systems, we assumed that at least one PDE of a given system is a conservation law. The question of how to construct conservation laws yielding useful potential systems naturally arises. Given a system R{u} of PDEs A(l(x,u™) = 0, / i = l , (4.15) we seek a set of factors (multipliers, characteristics) {A / i(x, u ( n - 1 ) )} which lead to a conservation law of R{u} given by A"A„ = I* =1, (4-16) In order to discuss the known theorems concerning the discovery of conservation laws, we need to define the Frechet derivative and the adjoint of differential operators. Unless otherwise stated, the following material can be found in [15, 47]. Definition 4.2.1 An n-th order differential operator is given by PJ[u]Dj, 0 < | J | < n, (4.17) where P J[it] = P J(a;,'u ( n )) are functions, J is the multi-index (ji,j2> • • -,jk), a n d DJ  1S the fc-th order total derivative operator D2l Dj2 • • • D]k. A homogeneous differential operator is one with Chapter 4. Potential Systems of a Given System of PDEs 100 P° = 0. A linear differential operator is one in which each PJ depends only on x. A matrix of differential operators whose r e e n t r y is PfjDj will also be referred to as a differential operator. Definition 4.2.2 Let u = (ui, • • • ,ug) and P[u] = (Pi[u], • • •,Pi[u]) to be an /-tuple of func-tions. 
The Frechet derivative of P is the differential operator ℒ_P such that

    ℒ_P(Q) = (d/dε)|_{ε=0} P[u + εQ[u]],

for any q-tuple of functions Q[u]. In other words, to obtain ℒ_P(Q), we replace u (and its derivatives) in P by u + εQ (and its derivatives) and differentiate the resulting expression with respect to ε. For example, if P[u] = u_t sin u_xx, then

    ℒ_P(Q) = (d/dε)|_{ε=0} (u_t + ε D_t Q) sin(u_xx + ε D_x^2 Q) = (sin u_xx) D_t Q + (u_t cos u_xx) D_x^2 Q.

So ℒ_P = (sin u_xx) D_t + (u_t cos u_xx) D_x^2. In general, ℒ_P is an l × q matrix of differential operators with ij-entry

    (ℒ_P)_{ij} = (∂P_i/∂u_{j,J}) D_J.                          (4.18)

Definition 4.2.3 Let 𝒟 be any differential operator (4.17). Then its adjoint is the differential operator 𝒟* which satisfies

    ∫_Ω P · 𝒟Q dx = ∫_Ω Q · 𝒟*P dx

for every domain Ω ⊂ ℝ^p, and for every pair of functions P[u] and Q[u] with compact support in Ω. A differential operator 𝒟 is self-adjoint if and only if 𝒟 = 𝒟*. In particular, 𝒟* is the adjoint of 𝒟 if P · 𝒟Q − Q · 𝒟*P is a divergence expression. With 𝒟 given by (4.17), an easy application of integration by parts shows that 𝒟*(Q) = (−D)_J [P^J Q], for which we write

    𝒟* = (−D)_J · P^J.

For example, if 𝒟 = u D_x^2 + D_x, then its adjoint is given by

    𝒟* = (−D_x)^2 · u + (−D_x) = u_xx + (2u_x − 1) D_x + u D_x^2.

Clearly 𝒟 is not self-adjoint. If 𝒟 is a matrix of differential operators with entries 𝒟_{ij}, then its adjoint 𝒟* is a matrix of differential operators with entries (𝒟*)_{ij} = (𝒟_{ji})*, that is, the adjoints of the transposed entries of 𝒟.

Let R{u} be the system of PDEs (4.15). If the Frechet derivative of Δ is self-adjoint, then the system R{u} is the set of Euler-Lagrange equations for some variational principle with Lagrangian L. Consequently Noether's theorem can be used to obtain the set of factors and the corresponding conservation law of R{u}. The PDEs we will consider in this dissertation are usually not self-adjoint and, as such, Noether's theorem cannot be used to find conservation laws for these PDEs. Instead, we will use the following adjoint theorem, which applies to all PDEs, to determine necessary conditions for factors that give rise to conservation laws of R{u}. (See [15, 47] for more details on Noether's theorem.)

Theorem 4.2.4 ([47, 68]) Suppose there exists a set of factors λ_μ(x, u^(n−1)) leading to the conservation law (4.16) of R{u}. Then

    ℒ*_Δ λ = 0,                                                (4.19)

on all solutions of R{u}.

For this theorem to be effective, one must pass from (4.19), holding on the solution space of R{u}, which is not known a priori, to an equivalent condition involving only the equations of R{u}. This is possible for systems that are locally solvable (cf. Definition 2.2.9).

Theorem 4.2.5 (Adjoint Theorem) Let R{u} be an analytic system of n-th order PDEs (4.15) with standard form sf_≺ (order m) and prolonged standard form sf_≺^(N) (N ≥ m), where ≺ is a derivative ordering. If there exists a set of factors λ_μ(x, u^(n−1)) leading to the conservation law (4.16) of R{u}, then

    (ℒ*_Δ λ)|_{sf_≺^(N)} = 0,   N = max(2n − 1, m).            (4.20)

Proof. By hypothesis, (4.19) holds. Since Δ is of order n, ℒ*_Δ is an n-th order differential operator. Also, λ is of order n − 1, and so ℒ*_Δ λ is of order 2n − 1. Applying Lemma 3.2.7(2) to (4.19) then leads to (4.20). □

Unlike Noether's theorem, this gives only a necessary condition for the factors λ_μ to lead to a conservation law of R{u}, with no explicit formula for the conservation law. Here is an example of how to find factors through the Adjoint Theorem.
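Before working through that example, we note that Frechet derivatives such as the one computed above are easy to check with a computer algebra system. The sketch below (sympy assumed; the helper function P is ours) reproduces ℒ_P(Q) for the example P[u] = u_t sin u_xx.

    import sympy as sp

    x, t, eps = sp.symbols('x t epsilon')
    u = sp.Function('u')(x, t)
    Q = sp.Function('Q')(x, t)

    def P(w):
        # P[w] = w_t * sin(w_xx)
        return sp.diff(w, t) * sp.sin(sp.diff(w, x, 2))

    # Frechet derivative: d/deps P[u + eps*Q] at eps = 0.
    frechet = sp.diff(P(u + eps * Q), eps).subs(eps, 0)
    print(sp.simplify(frechet))
    # -> sin(u_xx)*Q_t + u_t*cos(u_xx)*Q_xx, matching the computation above.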
Example 4.2.6 Consider the nonlinear diffusion equation R{u} given by A = -ut + [K(u)ux]x = 0, K\u) ± 0. (4.21) The Frechet derivative of (4.21) is given by £A = -Dt + KD2X + 2K'uxDx + K"u2x + K'uxx = -Dt + D2x-K{u), which has adjoint (4.22) £*A = Dt + K{u)Dl Clearly £ A is not self-adjoint and so Noether's theorem is not applicable. However, Theorem-4.2.5 is applicable: Any factor A(x,'u ( 1 )) yielding a conservation law of R{u} must satisfy (£*AA) , ( 3 ) =0. (4.23) To obtain the prolonged standard form sf^x for R{u}, let (2:1,2:2) = {t,x) and <iex be the lexicographical ordering given in Example 3 .1 .2 . Then the standard form sf^Ux of R{u} is given by »™ = -Wtu*  + jfc,Ut-  (4-24) Chapter 4. Potential Systems of a Given System of PDEs 103 Since K ^ 0, sf-<lex and R{u} have the same solution set. By differentiating (4.24) with respect to x and t, and using (4.24) to replace the principle terms uxx in the resulting right hand sides, one obtains uxxt = [-^- + (jr)2]u2xut - j^u2 - 2^uxuxt + ±utU (4-25) uxxx = [-^ + 3(^) 2 ]u 3 - 3J^uxut + jruxt. sf*LX 1 S § l v e n ^ v (4-24) and (4.25). We have C^Xi^X^vS ') — E ^uxUxxx -\- XUtUxxt "i~ ^ u t u t ^ x i "F 2 A U x ' U [U xtU x x -\- X U x U x U x x ~\r 2XUUtUxUxt "i"2A U U x u x u x x -f- Xuuux -f- {2XXUx -\- Xu)uxx -f- 2Xxuux -f- A^^j (4.26) +(KX + 2KXXUt)uxt + XUtutt + Xuut + Xt. To determine the left hand side of (4.23), the equations of sf^ are used to substitute for the terms uxx, uxxt and uxxx appearing in (4.26). Consequently (4.23) becomes 2XUtutt + 4>(x,t,u(2);X) = 0, (4.27) where <f> is explicitly known and is independent of utt- Also A is independent of uu. Since utt is a parametric derivative of sf^ , it can take on any arbitrary value. For (4.27) to hold, Xut = 0 (4.28) and (4.23) becomes 2XUxuxt + (j>{x, t, u{2); A) = 0. Here both A and cf> are independent of the parametric derivative uxt so that XUx = 0 (4.29) and (4.23) reduces to Xt + 2Xuut + [KXUU - XuK')v?x + 2KXxuux + KXXX = 0. Since A is independent of the parametric derivative ut, Xu = 0 (4.30) Chapter 4. Potential Systems of a Given System of PDEs 104 and (4.23) further reduces to A t + K(u)\xx = 0. Since K'(u) ^ 0, we must have A* = 0, \ x x = 0. This together with (4.28), (4.29) and (4.30) leads to the following factors for R{u}. (4.23): A = ci + c2x, ci,c 2elR. (4-31) We now seek conservation laws of R{u) that are given by (ci + c2x)(-ut + [K(u)ux]x) = 0. Essentially two different conservation laws of R{u} can be thus obtained. The first one corre-sponds to the factor A = 1: Dt[-u] + Dx[K(u)ux] = 0. (4.32) The second one corresponds to the factor A = x: x(-ut + [K(u)ux]x) = Dt[-xu] + Dx[x(L(u))x - L(u)} = 0, (4.33) where K(u) = L'(u). • 4.2.2 Useful Potential Systems Suppose one has found a set of factors leading to the conservation law (4.16) of R{u}. Then to use the construction procedure of §4.1 to obtain a potential system, one must first replace the system R{u} with the new system RM{U}, given by A^x,u^)=0, /z = l , - . - , M - l , M + l , . . . , / , (4.34) Dif = 0. Chapter 4. Potential Systems of a Given System of PDEs 105 It follows that each solution of R{u} is a solution of RM{U}. However, each solution of RM{U} is a solution of R{u} or of the factor system RM{U) given by All(x,u™)=0, fx = 1, • • •, M - 1, M + 1, • • •, /, (4.35) \M(x,u(n-V) = 0. If RM{U} has solutions that are not solutions of R{u}, which can only happen when A M = 0 has solutions, then R{u} and RM{U} do not have the same solution set. 
Consequently they will not, in general, have the same symmetries. In particular, one would not expect a symmetry (local or nonlocal) of R{u} to also leave invariant the solutions of RM{U). This leads to the consideration of only certain types of factors in order to discover useful potential systems: Definition 4.2.7 A potential factor is a factor which does not vanish for any u(x), i.e., XM(x, M ( " _ 1 ) ) = 0 has no solutions. A potential conservation law of R{u} is a conservation law of R{u} arising from a set of factors with at least one potential factor. Let RM{U} be a system (4.34) associated with a potential conservation law of R{u}, where XM is a potential factor. The corresponding potential system is a useful potential system. The factors (4.31) admitted by (4.21) are examples of potential factors. The corresponding conservation laws (4.32) and (4.33) are examples of potential conservation laws. Many more examples of potential factors and potential conservation laws, as well as examples of useful potential systems wiU be provided in §4.3. It immediately foUows that only useful potential systems enjoy the projection of solutions property (cf. §4.1). Consequently, whilst any analytical technique, which includes symmetry analysis, applied to a useful potential system may lead to new information for the original PDE, this is not necessarily the case for other types of potential systems. Useful potential systems S{u,v}, are called first generation potential systems. Higher gen-eration potential systems, can also be constructed as follows. Let v1 = v and S1 = S{u, v1}. Chapter 4. Potential Systems of a Given System of PDEs 106 Using a potential conservation law of S1, one can introduce further potential variables2 v2 to form the second generation potential system S2 = S2{u, v1, v2}. Point symmetries of S2 could yield nonlocal symmetries of R{u}. Continuing the process with other conservation laws, one could obtain potential variables v1, v2, • • •, vJ and corresponding potential systems 5 1 , 5 2 , • • •, SJ{u, v1, • • - ,vJ}. We call SJ the |J|-th generation potential system. Example 4.2.8 Consider again the nonlinear diffusion equation R{u}, given by (1.4), which has the first generation potential system S{u, v}, given by (1.5). Using the conservation law (1.5b), one can introduce a potential variable w to form the second generation potential system T{u, v, w}, given by vx = u, (4.36) wt = L(u), 2 where K{u) = L'(u). When K(u) = u~3, T{u,v,w} admits the point symmetry [13] X = — w dx + 3uv du + v2 dv. Since the infinitesimal component corresponding to the independent variable x depends on w, X does not project onto (x, u, u)-space or (x, w)-space. In other words, X is a nonlocal symmetry of S{u, v} and of R{u}. • In general, any |J|-th generation potential system may yield nonlocal symmetries for a lower generation potential system and/or the original PDE itself. We shall also call nonlocal symmetries of R that arise in this way potential symmetries of R. Definition 4.2.9 Any point symmetry of SJ that does not project to a point symmetry of R (S1, I < J) is called a potential symmetry of R (S1). The foUowing theorem immediately foUows. 2 Note additional constraints may be required to make the potential system determined if the number of independent variables is greater than two. Chapter 4. 
Potential Systems of a Given System of PDEs 107 Theorem 4.2.10 Suppose SJ{u, vl, • • •, vJ} admits a point symmetry ~XsJ with infinitesimals ( £ , 77, p1, • • •, pJ), depending on (x, u, v1, • • •, vJ). Then ~KsJ induces a potential symmetry of S1, I < J, if and only if one component of (£ ,77, p\ • • •, p1) depends essentially on vJ; X 5 J induces a potential symmetry of R{u] if and only if one component o/(£, 77) depends essentially on vJ. Conversely, if R{u} admits a point symmetry X.R with infinitesimals (£ ,77) , then X R yields a nonlocal symmetry of SJ, J > 1, if and only if there exists no point symmetry of SJ with infinitesimals ( £ , 77, p1, • • •, pJ). If a point symmetry X f l of R yields a nonlocal symmetry of SJ, J > 1, then we say X^ is 'lost' in SJ. However, it must be emphasized that the symmetry is still present in SJ, albeit in the nonlocal sense. It is said to be 'lost' in SJ since it cannot be realized as a point symmetry of SJ. We will give illustrations of this in the next section. 4.3 Potential Symmetry Analysis of the Nonlinear Diffusion Equation As a prototypical example, we consider the nonlinear diffusion equation R{u} given by (4.21). 4.3.1 First Generation Potential Systems As the calculations in Example 4.2.6 show, R{u} admits the potential factors (4.31) with corresponding potential conservation laws (4.32) and (4.33). Using (4.32), we obtain the useful potential systems S1^, v} given by Af 1 = -vx + u =0, (4.37) = -vt + [L(u)]x =0. Using (4.33), we obtain the useful potential systems S2{u, V} given by A f = -Vx + xu =0, (4.38) A f = -Vt + x[L(u)]x - L(u) = 0. Chapter 4. Potential Systems of a Given System of PDEs 108 51{?i,t'} and S2{u, V} are first generation potential systems of R{u). 4.3.2 Second Generation Potential Systems I Consider the first generation potential system Sl{u, v} given by (4.37). Let -<iex be the lexico-graphical ordering of Example 3.1.2 with (xi,x2) = (t,x) and ( t i i , ^ ) — {u,v)- The standard form sfA of v} with respect to <\ e x is given by uxx = - j$-u2x + Kjfiut. (4.39) and vx =u, (4.40) vt = [L(u)]x. Since m = 2 (the order of sf^) and n = 1 (the order of S1^, v}), we have N = 2 in (4.20). The prolonged standard form sf^ , corresponding to S1{u,v}, is given by (4.39) and (4.40) and . V X X — VJX , vtt = K'uxut + Kuxt. The Frechet derivative of S1^, v} and its adjoint are given by 1 -Dx Dx-K(u) -Dt 1 K(u)Dx Dx Dt Factors A '(x , i , u^), i = 1,2, leading to a conservation law of u}, must satisfy = 0. (4.41) When solving (4.41), two cases are singled out. Case K{u) arbitrary: Here the general solution of (4.41) is (A 1, A2) = (0,c), c e R , leading to the useful potential system T 1 { ' U , t > , w } given by Chapter 4. Potential Systems of a Given System of PDEs 109 -vx + u = 0, -wx + v = 0, -wt + L(u) = 0, which is a second generation potential system of R{u}. Applying Theorem 4.2.5 to TY{u, v, w}, one can show that only trivial factors Xl(x,t,u,v,w) = 0, i = 1,2,3, are found. Case K(u) = u~2: Here the general solution of (4.41) is (X\X2) = (u-1F\v,t),F\v,t)), (4.42) where (FX,F2) are arbitrary solutions of the linear system dFl-p1 dF±-_dFl (4 431 dv - r •> dv - dt • l ^ - ^ J In §4.4, we show how these factors indicate the linearization of the system vx — it, vt = u~2ux. (4.44) 4.3.3 Second Generation Potential Systems II Consider the first generation potential system S2{u, V} given by (4.38). Let -<iex be the lexico-graphical ordering of Example 3.1.2 with (xi,X2) = (t,x) and ( ^ 1 , ^ 2 ) = (u,V). 
The standard form of S2{u, V} with respect to <\ e x is given by (4.39) and Vx = xu, (4.45) Vt = x[L(u)]x - L{u). The corresponding prolonged standard form sf^ is given by (4.39) and (4.45) and Vxt = xut, Vu = xK'uxut + xKuxt - Kut. Chapter 4. Potential Systems of a Given System of PDEs 110 The Frechet derivative of S2{u, V} and its adjoint are given by -D, xDx-K(u) — K(u) —Dt ^ S 2 -x -K(u)[2 + xDx] Dx Dt Factors Xl(x, t, u ( 1 )), i = 1,2, leading to a conservation law of S2{u, V}, must satisfy n e:r = 0. (4.46) When solving (4.46), two cases are singled out. Case K(u) arbitrary: Here the only solution of (4.46) is (A 1, A 2) = (0, cx~2), c £ R . These factors yield the potential conservation law x'2(Vt - x[L(u)]x + L(u)) = Dt[x~2V] + Dx[-x-xL(u)} = 0, leading to the second generation potential system T2{u,V,W} given by -Wx + x~2V =0, -Wt + x-Hiu) =0, —Vx + xu =0. It is unnecessary to seek factors for T2{u, V, W} since one can show that it is equivalent to Tx{u, v, w} through the mapping v = x^V + W, w = xW. (4.47) However, and S2{u,V} are not invertibly equivalent since, as will be seen in §4.3.5, for any K{u), these systems admit point symmetry Lie algebras of different dimensions. Case K(u) = u~2: Here the only solution of (4.46) is (A 1, A ) = c (-^, -Jj), c G Ut. These factors yield the potential conservation law ±(VX - xu) + £(Vt - u-1 - xu-2ux) = Dx[^ -x} + Dt[^] = 0, Chapter 4. Potential Systems of a Given System of PDEs 111 which in turn yields the systems Sf{u, V} given by " *] + = 0, -Vt + u-1 + xu~2ux = 0, and S2{u,V} given by -Vx + xu = 0, D*[H-x] + Dt[g]=0, (cf. RM{U} in §4.2.2). Obviously A 1 is a potential factor and A 2 is not a potential factor. Consequently only S2{u,V} leads to a useful potential system T2{u, V, >V}, given by Af 2 = -Vt + u~l + xu~2ux = 0, A f = -Wt + 2(x - £ ) =0, (4-48) Af 2 = - W X + £ =0. 4.3.4 Thi rd Generation Potential Systems Consider the second generation potential system T2{u, V, W} given by (4.48). Let <iex be the lexicographical ordering of Example 3.1.2 with ( £ 1 , 2 : 2 ) = {t,x) and (u\,U2,u3) = (u,V, W). The standard form of S2{u, V} with respect to -<\ex is given by uxx = 2u~1ul + u2ut, Vx = xu, Vt = u-1 + xu~2ux, (4.49) yVt =2x- 2(xu)~1V, yvT = x~2v2. Chapter 4. Potential Systems of a Given System of PDEs 112 The corresponding prolonged standard form sf^ is given by (4.49) and Vxx — U - J - XUX, Vxt = xut, Vtt = ~u~2ut - 2xu~3uxut + xu~2uxt, Wxx = 2x~1uV - 2x~3V2, yVxt = 2x-2u~1V + 2x~1u-2Vux, yVtt = -2x-xvT2 - 2u~3ux + 2x-1u~2Vut. The Frechet derivative of f2{u, V, W} is xu~2Dx - (u~2 + 2xu~3ux) -Dt 0 2x~xu-2V -2(xu)~x -Dt 0 2x~2V -Dx (fa*) and its adjoint is -u~2(2 + xDx) 2x-lir2V 0 Dt -2(a;u)-1 2x~2V 0 Dt Dx J Factors A4(x, t, u, V, W), i = 1,2,3, leading to a conservation law of T2{u, V, W} must satisfy 0. The only solution of this is A = (ca;_ 2,0,0), c G E . The resulting useful potential system is U2{u,V,W,Z} given by -Wx + x~2V2 = 0, -Wt + 2x - 2{xu)~1V =0, -Zx + x~2V = 0, Zt + (xu)'1 =0, ' ' -which is a third generation potential system of R{u}. One can show that this potential system admits only trivial factors of the form Xl(x,t, u, V, W, Z), i = 1,2,3,4. Chapter 4. Potential Systems of a Given System of PDEs 113 Case K(u) arbitrary R{u} 5 1 { « , t ; } ( A ! , A 2 ) = ( 0 , 1 ) r1!^, v, w} no factors S2{u,V} I T2{u,V,W} no factors Figure 4.4: Potential systems tree for the nonlinear diffusion equation; K{u) arbitrary. 
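The factors found in Example 4.2.6 can be spot-checked symbolically. For instance, the following sketch (sympy assumed; it writes (K(u)u_x)_x as (L(u))_xx with L'(u) = K(u)) verifies that the factor λ = x turns the diffusion equation into the divergence expression (4.33).

    import sympy as sp

    x, t = sp.symbols('x t')
    u = sp.Function('u')(x, t)
    L = sp.Function('L')      # L(u) with L'(u) = K(u), so (K(u)u_x)_x = (L(u))_xx

    Delta = -u.diff(t) + sp.diff(L(u), x, 2)          # the diffusion equation (4.21)

    lhs = x * Delta                                   # factor lambda = x times Delta
    rhs = sp.diff(-x * u, t) + sp.diff(x * sp.diff(L(u), x) - L(u), x)

    print(sp.simplify(lhs - rhs))                     # 0: (4.33) holds identically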
Case K(u) = u 2 R{u} linearizing factors ~ 1 S2{u,V} v ' ' v xu ' x* ' I T2{u,V,>V} (X1 ,X2)=(0,x~2) ( A 1 , A 2 , A 3 ) = ( x - 2 , 0 , 0 ) I C/2{M,y,W,2:} 1 T2{u,V,W} linearizing factors no factors Figure 4.5: Potential systems tree for the nonlinear diffusion equation; K{u) = u 2 Chapter 4. Potential Systems of a Given System of PDEs 114 4.3.5 Symmetry Classification of Potential Systems The factors and potential systems arising for the nonlinear diffusion equation R{u} are summa-rized by the potential trees shown in Figures 4.4 and 4.5. One can use Algorithm 3.3.4 to calcu-late the point symmetries of the systems R{u}, t>}, S2{u,V}, ^{u^v^w}, T2{u, V, W}, T2{u, V, W} and U2{u, V, W, Z}. We shall not give explicit details of the symmetry calcula-tions, but list all the results obtained. We note that, for each system, the admitted symmetries depend on the form of the diffusivity K'(u) ^ 0, modulo scalings and translations in u. Point Symmetries of R{u} (1) K(u) arbitrary: X? = dx, XR = dt, X f = x dx + 2tdt. (2) K(u) = ux,Xe R: Xy, Xf , Xf , XR = Xx dx + 2udu. (3) K(u) = u - l : XR,---,XR, XR = x2dx-3xudu. Point Symmetries of Sl{u,v} (1) K(u) (2) K(u) (3) K{u) arbitrary: = ux, X e R: u -2. 8F2 dv dF1 dt ' dF1 dv = F2. (4) K(u) i + « 2 X f , - - - , X ^ , X% = vdx + atdt-(l + u-2) du - x <9, a is a constant. Chapter 4. Potential Systems of a Given System of PDEs Point Symmetries o / T 1 - ^ , v,w} (1) K(u) arbitrary: (2) K{u) = ux, A e H : (3) K(u) = u-2: (4) K(u) = u"\ (5) = t t _ l : (6) A"(«) = l+«2 a is a constant. 0 a a r c t a n u . X T 1 = Xf , x f = Xf1, X^1 = Xf + 2w du XA = X 4 + x dw, X^ = dw. xf , - - - ,x f , Xt =Xf +2(l + \)wdw. XT1, • • •, X?1, X£ = X^ + *) - F3(v, t)} d„ T1 v T 1 v S 1 -co -"-co where is an arbitrary solution of the linear system x!\ dF3 dv p i M l _ p 2 9 F i — F 2 r ' at. — r i dv — r •, Xg1, xy = Xf + (w - xv) dv + xw dw. • , XT, Xg1 = w dx - 3W?J <9„ - v2 dv. /T1 , x f , xy=xf + Uv2-x2)du fl1 rS1 Point Symmetries of S2{u,V} (1) K(u) arbitrary: (2) K{u) = ux, A G E : (3) = u~S: X f = X?, X f = Xf + 2F cV, X f = dv. Xf, Xf , X f , X f = XR + 2(l + X)Vdv. x f , - , x f 4 > A 5 - A 5 • Point Symmetries of T2{u,V,W} (1) tf(u) = vT2: X f = X f , X f = X f + 3Wdw, X f = X f - 2Wdw, X 4 = <9yy. Point Symmetries of U2{u, V, W, 2} (1) K(u) = u~2: X?2 = X f , Xf2 = X f + 2 dz, xf2 = x f , X ? = x f x?2 = BZ. We now analyse the above symmetries in view of the material presented in §4.2.2. Chapter 4. Potential Systems of a Given System of PDEs 116 When K(u) = u~s, the point symmetry X ^ is 'lost' in v}, since it induces no point symmetry of Sx{u, v}. In particular, XR induces a nonlocal symmetry of Sx{u, v} which is represented by the infinitesimal generator X W o e = XR+(KJvdx-xv)dv. On the other hand, Sx{u, v} yields a potential symmetry of R{u} given by Xg 1 when K(u) = _ i - j -e" C r e t a n a r i ( j ^ w j i e n K(u) = u~2. The latter symmetry leads directly to the linearization of R{u} by a noninvertible mapping [15, 38]. For any K(u), Tx{u,v,w} 'covers' R{u} and 51{tt,v}, since the point symmetries of T 1 - ^ , to} project onto aU point symmetries of both R{u} and In particular, the point symmetry X ^ , 'lost' in 5 1{u, v}, is 'recovered' as the point symmetry X ^ 1 of Tx{u, v, w}. Moreover, if K{u) = u~3, the point symmetry Xg 1 yields a potential symmetry of both 51{w, v} and R{u}. For any K(u), the point symmetry XR is 'lost' in S2{u, V}, since it induces no point sym-metry of S2{u,V}. 
One can show that Xy* induces a nonlocal symmetry of S2{u, V} which is represented by the infinitesimal generator x t n l 0 C = XR+(judx)dv. All other point symmetries of R{u} induce point symmetries of S2{u, V} and, in turn, S2{u,V} yields no potential symmetries of R{u}. Since T2{u,V, W} is equivalent to Tx{w, v,w}, through the mapping (4.47), it follows that each point symmetry of T1 {u,v,w} correspondingly maps into a point symmetry of T2{u,V, W}. In particular, it is interesting to note that the point symmetry Xy" 'lost' in S2{u, V} is 'recovered' as the point symmetry X r 2 = Xy + (x~1V+ W)dv-x^Wdw, of T2{u,V,W}. FinaUy, the potential systems T2{u,V, W} and U2{u, V, W, Z}, which only arise for Chapter 4. Potential Systems of a Given System of PDEs 117 K(u) = u 2 , are disappointing since they do not 'recover' Xy" as a point symmetry, their point symmetries yield no nonlocal symmetries of R{u}, and, unlike T2{u, V, W}, they do not lead directly to the linearization of R{u}. 4.4 Linearizing Factors In this section, we show how factors admitted by a given system of PDEs can indicate whether a linearization is possible. This is very useful since, during the potential systems construction process, systems for which linearizations might be possible are immediately found. Consider a system of linear PDEs given by A = L[x]u = 0, (4.50) where L[x] is an / X q matrix of n-th order linear differential operators (cf. Definition 4.2.1). Theorem 4.4.1 A set of factors AM(a;), p = 1, yields a conservation law for (4-50) if and only if L*[x]\(x) = 0. (4.51) The proof of this theorem follows from Theorem 4.2.5 and the fact that the Frechet derivative of (4.50) is £ A = L[x]. Thus any linear system of PDEs (4.50) admits an infinite number of factors given by the solution of the related linear system (4.51). Theorem 4.4.2 (Necessary for Linearization) If a system R{u} of PDEs (4-15) is lin-earizable by an invertible transformation, then it must admit factors X^x,u^) = A%(x, u^)Fp(X), p, p = 1, • • •, /. (4.52) where (1) -A^ are specific functions of (x, u ( n _ 1 ) ) . (2) X = ( X L , • • • ,XP) are specific functions depending on (x,u^), if I = 1, and depending on (x, u), if I > 1. (3) F(X) are arbitrary functions satisfying the linear system Chapter 4. Potential Systems of a Given System of PDEs 118 L*[X]F = 0, where L*[X] is the adjoint of L[X], an I x q matrix of linear differential operators. (4) The linear system is given by L\X]U = 0, where the new independent variables are X and dependent variables are U = (U\, • • •, Uq). The proof of this theorem, which can be found in [9], essentially follows from the fact that conservation laws are invariant under contact transformations [8]. Consequently, if R{u} is linearizable by an invertible transformation, then by Theorem 4.4.1 it must admit an infinite number of factors. The invertible mapping then leads to (4.52) which relates the factors AM of R{u} to the factors F(X) of the related linear system given in Theorem 4.4.2 (4). This motivates the following definition. Definition 4.4.3 Factors AM(x, w ( n - 1 ) ) , p = 1, • • are linearizing factors for R{u} provided the adjoint equations (4.20) can be expressed, through (4.52), in the form given in Theorem-4.4.2(3). If a given system R{u} admits linearizing factors, then an explicit linearization of R{u} can be sought using specific algorithms of Bluman and Kumei [15, 38]. We now consider three examples. 
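Before turning to those examples, here is a small self-contained check of Theorem 4.4.1 on an example of our own choosing, the heat operator L = D_t − D_x^2: any λ(x, t) with L*λ = −(λ_t + λ_xx) = 0 is a factor, because λ(u_t − u_xx) then reduces to a divergence. A sketch (sympy assumed):

    import sympy as sp

    x, t = sp.symbols('x t')
    u = sp.Function('u')(x, t)
    lam = sp.Function('lam')(x, t)

    # Identity behind Theorem 4.4.1 for the heat operator:
    #   lam*(u_t - u_xx) = D_t(lam*u) + D_x(lam_x*u - lam*u_x) - u*(lam_t + lam_xx),
    # so lam yields a conservation law whenever the adjoint equation lam_t + lam_xx = 0 holds.
    lhs = lam * (u.diff(t) - u.diff(x, 2))
    div = sp.diff(lam * u, t) + sp.diff(lam.diff(x) * u - lam * u.diff(x), x)
    print(sp.simplify(lhs - div + u * (lam.diff(t) + lam.diff(x, 2))))   # 0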
4.4.1 Examples of Linearizations Nonlinear Diffusion Equation Consider the potential nonlinear diffusion system v}, given by (4.37). For K(u) = u~2, S1{u, v} admits the linearizing factors (4.42) with arbitrary functions (F1, F2) satisfying (4.43). As such, one is alerted to the possibility of linearizing 5 1{u, v}, and an explicit linearization can now be sought. In fact, as shown in [15, p.370^], the application of linearization algorithms [15, 38] yields the invertible mapping Z\ = V, Z2 = t, W\ = X, W2 = U~X , Chapter 4. Potential Systems of a Given System of PDEs 119 which transforms any solution (u(x, t), v(x,t)) of S1^, v} to a solution (wi(zi,z2), (w2(zi, z2)) of the hnear system of PDEs W\,z2 = w2,Zl, and vice versa. This in turn yields the noninvertible mapping x = wi, t = z2, u = (witZl)~1, which transforms any solution wi(zi, £ 2 ) of the linear heat equation to a solution u(x,t) of the nonhnear diffusion equation (4.21). Burgers' Equation Using the factor A = 2, Burgers' equation uxx = uux + ut, (4.53) has conservation law Dt[—2u] + Dx[2ux — u2] = 0, yielding potential system S{u,v} given by Si = -vx + 2u =0, (4.54) 5*2 = —vt + 2ux — u2 =0. The standard form of S{u,v} with respect to -<iex where (111,112) = (u,v) and (xi,x2) = (t,x) is given by (4.53) and vx = 2u, (4.55) vt = 2ux — u2. The prolonged standard form sf^x is given by (4.53) and (4.55) and vxx — 2ux, vxt = 2uu (4-56) vtt - 2uxt - 2uut. Chapter 4. Potential Systems of a Given System of PDEs 120 The Frechet derivative of S{u, v} and its adjoint are given by 2 -Dx 2DT - 2u -Dt ££s — 2 -2u - 2D7 D3 Dt Factors Xl(x,t,u,v), i = 1,2, leading to a conservation law of S{u,v} must satisfy 'til = 0. This has solutions (A 1 ,A 2 ) = e-T[\uF1(x,t) + F2(x,t),F1(x,t where (FX,F2) is an arbitrary solution of the linear system dF1 _ P2 dF]_ _ _ BF2_ dx ~ r ' dt — dx • Since (A 1, A 2) are linearizing factors one can now seek an explicit linearization of S{u,v}. In fact, as shown in [15], the application of linearization algorithms [15, 38] yield the invertible mapping zi = x, z2 = t, w\ = -e~^, u>2 = | e ~ 4 5 which transforms any solution (u(x,t), v(x,t)) of S{u, v} to a solution {w\(z\, z2), ( ^ 2 ( ^ 1 , z2)) of the linear system of PDEs Wl,z2 = W2,Z1, and vice versa. This in turn yields the noninvertible Hopf-Cole map t = z2, u = -2 -Wi which transforms any solution w\(zi,z2) of the hnear heat equation to a solution u(x,t) of Burgers' equation. Chapter 4. Potential Systems of a Given System of PDEs 121 Nonlinear Telegraph System Consider the nonlinear telegraph system T{u,v}, given by Ti = -vt + ux = 0, T2 = -vx + u~2ut + 1 - u'1 = 0. The standard form of T{u, v} with respect to -<iex where (ui,u2) = (u,v) and (xi,x2) = (t,x) is given by vt = ux, vx = u~2ut + 1 — u - 1 . The prolonged standard form sf^ is given by (4.57) and Uxx = u~2uu - 2u~3u2 + u~2ut, (4.57) vxx = u 2uxt — 2u 3uxut + u 2ux, (4.58) vxt = u~lutt - 2u~6uf + u~2ut, Vtt = uxt. The Frechet derivative of T{u, v} and its adjoint are given by I) -Dt Dt-u-2 + u-2 -Dx r* — -Dx u-2(l-Dt) Dt Dx Factors A*(x, r, u, v), i = 1,2, leading to a conservation law of T{u, v} must satisfy = o. This has solutions (A 1, A 2) = (F1{X,T),U-1F2(X,T)), where (X ,T) = (x - v,t - logu) and ( F ^ F 2 ) is an arbitrary solution of the linear system dF1 , dF± _ p2 _ A d_F2_ , dEL _ n 8X dT r ~ u ' 9T "*" 8X ~ u -Since (A 1, A 2) are linearizing factors one can now seek an explicit linearization of T{u,v}. 
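Before completing the telegraph example, we remark in passing that the Hopf-Cole map u = −2 w_{1,z_1}/w_1 obtained above for Burgers' equation is easy to verify symbolically; the sketch below (sympy assumed) checks it on a particular heat-equation solution of our own choosing, w = 1 + exp(ax + a^2 t).

    import sympy as sp

    x, t, a = sp.symbols('x t a')

    # A particular solution of the heat equation w_t = w_xx.
    w = 1 + sp.exp(a * x + a**2 * t)
    assert sp.simplify(w.diff(t) - w.diff(x, 2)) == 0

    # Hopf-Cole: u = -2 w_x / w should solve Burgers' equation (4.53),
    # written as u_xx = u*u_x + u_t.
    u = -2 * w.diff(x) / w
    residual = u.diff(x, 2) - u * u.diff(x) - u.diff(t)
    print(sp.simplify(residual))   # 0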
In fact, as shown in [15, p.324jff], the application of linearization algorithms [15, 38] yield the invertible mapping zi = x - v, z2 = t — logu, w\ = x, w2 = e', Chapter 4. Potential Systems of a Given System of PDEs 122 which transforms any solution (u(x,t),v(x,t)) of T{u,v} to a solution z2), ( ^ ( ^ l , z2)) of the linear system of PDEs (4.59) u>i,z2 = e Z2w2,Zl, and vice versa. This in turn yields the noninvertible mapping x = wi, t = log w2, u = e~Z2w2, which transforms any solution (u>i(zi, z2), w2(zi, z2)) of the linear system (4.59) to a solution u(x,t) of the nonlinear telegraph equation. Chapter 5 A Potential Symmetries Classification of PDEs All previous examples qf potential symmetries have involved second order scalar PDEs and first order systems of PDEs with two independent variables. It is natural to consider the question of whether other types of PDEs can admit potential symmetries. In this chapter, we consider the problem of finding higher order scalar and systems of PDEs with two independent variables admitting potential symmetries. Preliminary work on potential symmetries of higher order scalar PDEs with two indepen-dent variables has been presented in [51] by Pucci and Saccomandi. However, no examples of such PDEs were provided. Moreover, they make two claims regarding necessary conditions for higher order scalar PDEs to admit potential symmetries, but, as already intimated in the sym-metry calculation of (2.41) in §2.3, there appears to be an incompleteness in their argument. Consequently, the validity of their claims comes under question. In §5.1 we use Algorithm 3.3.4, which has been shown to correctly lead to the symmetries of a given system of PDEs, to disprove one of their claims and to 'correctly' prove the other. In §5.2, for each n > 3, we consider the problem of constructing n-th order scalar PDEs admitting potential symmetries. Our experience in deriving the necessary conditions of §5.1 shows that the general classification approach is not appropriate here. The problem is that the classifying functions depend on too many variables. Consequently, it is natural to consider a smaller class of PDEs by reducing the number of classifying functions and their dependencies. However, there is a fine balance that one must make between reducing the class sufficiently in order to make the computations tractable and yet still have a class that includes the PDEs we 123 Chapter 5. A Potential Symmetries Classification of PDEs 124 seek. In our approach, we start with a class of PDEs characterized by just one arbitrary function and look for PDEs within this class that admit a potential symmetry generator X , which is fixed a priori. The success of our approach hinges on discovering the functional dependencies of certain components of the n-th extension operator X'™) of the given X . These functional dependencies then dictate the minimal set of dependencies for the classifying function that we use. For each n > 3, we are able to find a class of n-th order evolutionary scalar PDEs admitting a potential symmetry. Moreover, for each n > 3, the corresponding class of PDEs constructed is characterized by an arbitrary function of n variables. Consequently, we prove that potential symmetries are admitted by an abundance of higher order scalar PDEs. In §5.3, for each n > 2, a class of n-th order systems of PDEs admitting a potential symmetry is constructed, by foUowing the methodology of §5.2 for the scalar PDE case. 
Consequently, we also show that potential symmetries are admitted by an abundance of higher order systems of PDEs. It is convenient to use the foUowing notation in this chapter: The independent variables are given by (2:1,2:2) = (x,t), the dependent variables are u = (u\, • • • ,uq), and the scalar potential variable is v. Higher order derivatives of u are denoted by dT+SUa _ and likewise for higher order derivatives of v. When r and s are smaU, we wiU often revert back to the usual expanded notation such as d5ua _ dxWt2 ~ Ua<xxxtt-Moreover, when convenient, we may use both the above notations in the same equation. How-ever, there should be no confusion in this. Also, the infinitesimal generators we consider in this Chapter 5. A Potential Symmetries Classification of PDEs 125 chapter are always of the form 1 dx + e dt + r?1 8U1 + • • • + V" dUq + Vq+1 3, (5.1) where (£, rj) depend on (x, t, u, v). If (£, r/1, • • •, rr7) depend essentially on v, then we call X an infinitesimal generator of potential type or simply a potential symmetry generator. Lastly, we will be exclusively using the potential ordering of Example 3.1.2, where we may change between (xi,x2) = (x,t) and (xi,X2) = (t,x) so that certain equations are solved in the desired manner. In this regard, we shall always explicitly state the ordering of the independent variables and so there should be no confusion. Here is the basic framework of all calculations performed in this chapter. For simplicity, we restrict this discussion to the scalar PDE R, given by where, without loss of generality, G is of order n — 1 and F is of order k < n — 1. However, the observations and methodology we now outline also apply to systems of PDEs which are considered in §5.3. Associated with R is the potential system S given by vx = F, (5.3) vt =G. By Theorem 3.3.3, Q is a symmetry group of S if and only if for any infinitesimal generator X of Q, given by (5.1), we have DtF(x, t, u(k)) + DxG(x, t, ul ; ("-1 )) = 0, n > 3, (5.2) N = max(m, n — 1) (5.4) where m is the order of the standard form of S and sf^ is the prolonged standard form (cf. Definition 3.1.14) of S. To discuss these conditions, it will be convenient to let Chapter 5. A Potential Symmetries Classification of PDEs 126 so that (5.4) becomes ¥ = ( ¥ 1 , ¥ 2 ) = (0,0). (5.6) To solve (5.6) for the unknown infinitesimals (£,»), we note that $ depends on parametric derivatives up to some finite order, whereas the infinitesimals depend only on (x,t,u,v). Con-sequently, we must equate to zero all the coefficients of like parametric derivative terms of order at least one that occur in The resulting overdetermined system of equations for (£ ,77) are called the determining equations for (£ ,77) . In general, these determining equations are quite difficult to analyse, since we will have (F, G) as unspecified functions depending on many variables. We make progress by isolating the highest order parametric derivative terms in In this regard, we may write * i = (uuxx)vtn-i + 4>{x, u ' " - 1 ' , v<"-2>), to show that the only dependency of \Pi on (n — l)-th order derivatives of v is in the expression (uuxx)vtn-i, and that all other terms are lumped together in <j>. In principle, one may be able to determine 4> explicitly, but we will rarely need to do so. The foUowing observations wiU help isolate the highest order parametric derivatives in which is the result of making direct substitutions from sf^ in \P. 
Since \P is of order n — 1, only the equations of sf^^ whose left hand sides are of order up to n — 1 can be used for substitutions in Also, the equations of sf^ are subdivided naturaUy into two sets. The first set has left hand sides which are derivatives of the original dependent variable u, and the second set has left hand sides which are derivatives of the potential variable v. Consider any equation in the first set given by uj = rhs, where by the definition of a potential order (cf. Example 3.1.2), rhs must be independent of v and its derivatives. RecaU that a potential ordering is not a total derivative ordering. However, amongst the subset of terms {uj}, one does have a total derivative ordering (see Example 3.1.2). That is, the derivative order of rhs is less than or equal to the order |«7| of the left hand side. Chapter 5. A Potential Symmetries Classification of PDEs 127 Consequently, direct substitutions from sf^^ for principal derivatives uj in $ do not raise the order of \P. However, this is not necessarily true for any equation of the second set. For example, (5.3a) belongs to this second set and its right hand side is of order greater than that of its left hand side. Hence, substitutions from sf^ for principal derivatives vj can raise the order of As such, much of our effort will be spent in trying to determine the equations in s/^N) for the principal derivatives vj, since these lead to the highest order parametric derivatives in 5.1 Necessary Conditions for the Admission of Potential Symmetries In this section, we determine necessary conditions for a scalar PDE (5.2), which is of order n > 3, to admit potential symmetries through its potential system S, given by (5.3). This problem has been studied by Pucci and Saccomandi in [51], where they make the following two claims: Claim 5.1.1 (Theorem 1 in [51]) Necessary conditions for R to admit potential symmetries are dG — = 0, and F = F(x,t,u,ux,ut). (5-7) OUtn-\ Claim 5.1.2 (Theorem 2 in [51]) Assuming k < l , 1 conditions for R to admit potential symmetries are { H(x,t,u)ut + Ki(x,t,u)ux + K2(x,t,u) if FUt / 0, K(x,t,u,ux) ifFUt = 0. (5.8) (2) G is independent ofutn-i. The arguments Pucci and Saccomandi used to arrive at these claims have been described at the end of §2.3 through the example of the potential system (2.41), which is just a special case of (5.3). Through that example, we showed what the difficulties are in trying to discover all 'The condition k < 1 was not explicitly stated in Theorem 2 of [51], since it follows from Claim 5.1.1, if it were to be true. However, we shall show that Claim 5.1.1 is not true, and, as such, we require the assumption k < 1 here. Chapter 5. A Potential Symmetries Classification of PDEs 128 the (ra — l)-th order differential consequences of (5.3). Since one cannot, in general, uncover all (ra—l)-th order integrability conditions of (5.3), one cannot accomplish step 2 of Algorithm 2.3.9. Since this is essentially the algorithm used in [51], the validity of the above two claims comes into question. We overcome such difficulties through Theorem 3.3.3, which leads to Algorithm 3.3.4, by using the prolonged standard form of S. In the sequel, the unqualified term 'necessary conditions' shall always denote the necessary conditions for R to admit a potential symmetry through S. In §5.1.2, we 'correctly' prove that (5.8) is indeed a necessary condition. This takes care of the case k < 1. 
When k > 1, the determining equations for X turn out to be very difficult to analyse since the arbitrary functions F and G depend on many variables. In §5.1.1, we restrict ourselves to the case ra = 3 and derive a set of complicated necessary conditions. These conditions lead us to the foUowing third order scalar PDE u x x x + 2uxxt + uxti = 0, (5.9) with associated potential system vx — v,xx uxtt (5.10) Vt = uxx + uxt. Notice that this potential system has the second order integrabihty conditions vxx = vu and vxt = —Vtt- The possibihty of having such integrabihty conditions was overlooked in [51]. One can show that, through the potential system (5.10), (5.9) admits the infinite parameter family of potential symmetries X = a(x,t, v) 8X + (3(t) dt + [f(x, t, v)] du + [C(t) - lx - it - v/3t)] 8V, (5.11) where (a,/3,7,£) is any solution of the linear system of PDEs ox = (it - at, Ixx = -2~ixt - itt ~ vfitt + Ct-Chapter 5. A Potential Symmetries Classification of PDEs 129 This proves that Claim 5.1.1 is not true.2 5.1.1 A Class of Third Order Scalar PDEs Let k = 2 and consider the third order scalar PDE u x x x = DtF(x,t,uw)-DxG{x,t,u^), £ = 0, (5.12) which has potential system S, given by vx = F, (5.13) vt = G + uxx. Notice that (5.12) corresponds to (5.2) with G = G + uxx. In this section, we will determine some necessary conditions, involving F and G, for (5.12) to admit potential symmetries through S. As we shall see, the determining equations for S, given by (5.4), are highly complicated due to the fact that there are two arbitrary functions F and G and that these depend on many variables. Classification problems with two functions, each depending on just one variable, are difficult enough (see [43]). As such, the necessary conditions that we are able to derive are not very tight and we are only able to obtain the example (5.9) at this stage. Many more examples of higher order PDEs admitting potential symmetries will be constructed through a different approach in §5.2. Let -< be the potential ordering of Example 3.1.2 with (2:1,3:2) = (x,t). The standard form s/_- of S is the system (5.12) and (5.13), which is of order m = 3. Consequently, we have (k, n, N) = (2,3, 3) and G = G + uxx in the determining equations (5.4). The sets of parametric and principal derivatives of sf^ up to order 3 are respectively A ( 3 ) = {u,ux,ut,uxx,uxt,utt,uxxt,uxtt,uttt} U {v}, B(3) = {uxxx} U {vx, vt, vxx, vxt, vu, vxxx, vxxt, vxtt, vttt}-To obtain the prolonged standard form sf^\ we must obtain equations for each member of 2Notice that one can determine the general solution of (5.9) by quadrature. An open question is whether one can add a non-degeneracy assumption in Claim 5.1.1 in order to make its conclusion hold. Chapter 5. A Potential Symmetries Classification of PDEs 130 i ? ( 3 ) . The algorithm proCstandard, to achieve this, is given in Appendix A.4. Since we will only need to use the subset of equations of sf^ whose leading derivatives are of order at most two, let us determine this subset. The equations for vx and vt are given by (5.13). To obtain the equations for vxx, vxt and vtt, we first differentiate the equations in (5.13) to obtain dF _, _dF_n, i IK duxx axxx + £ t u x x t + $-Uxtt + <f>\x,t,u^), Vxt = i^Uxxt + ^ U x t t + ^-Uut + <£ 2(Z , M ( 2 ) ) , (5-14) Vtt = U x x t + ^-Uxtt + •§u^Uttt + 4>3(x,t, U^), where (f>1 are specific functions which are independent of third order derivatives of u. In the sequel, we will not need to know <f>1 explicitly. 
The only principal derivative appearing in the right hand sides of (5.14) is uxxx. As such, for these equations to belong to s/^3), one must use (5.12) to replace uxxx- In order to do this, let us first expand (5.12): Uxxx U x x i + (|£ - S) U x t t + ^-Uttt + * 4(M,«< a>), for some function <j>4. One can now use this equation to replace uxxx, in (5.14), rendering the right hand sides to be independent of any principal derivatives. In summary, the subset of equations in s/i, 3 ), whose leading terms are of order at most two, is given by vx '= F, vt = G + uxx, - (JLE- \-9F_ _ _dG_] , dF \ , f _dF_ \_dF_ _ dG_] , dF_\ x x \duxx [duxx duxt\ duxt J x x t ' \duxx [duxt dutt J "*" dutt J x t t ^ -j^) + Hlti^)uut + nx,t,uW), Vx* = & U x x * + + £ u « * + </>2(x,t,u^), vtt = uxxt + ^ u x U + if^uttt + <t>3(x, t, u(2)), where <f>5 = c^ 1 + ^ ~<^ 4 is independent of all third order derivatives of u. Let us now explicitly calculate the determining equations (5.4), where we will only focus on the highest ordered terms. We have X<2>[-t,, + F(z,t,«<»>)] = -v! + £ X f + £ 2 f + 0 < |J\ < 2, (5.16) Chapter 5. A Potential Symmetries Classification of PDEs 131 where (£ 1, £ 2 , r/1, rj2) are functions of (x,t,u,v), and (??2,7?j) are given by the infinitesimal extension formula (2.35). Since the right hand side depends only on (x, t, w ( 2 ), v ( 2 ) ) , when making direct substitutions from s/^3) we are only required to replace (vx, vt, vxx, vxt, vtt) using (5.15). Now vx and vt are replaced by terms of order at most 2, whereas vxx, vxt and vtt are replaced by third order terms. As such, we focus on terms in (5.16) that depend on these second order derivatives of v. Since F is independent of v, second order derivatives of v can only occur in rr\x, n\2 and r)\2 through the dependence of (£ 1 , £ 2 , n 1 ) on v. To see this explicitly, let us compute rrjj: 77^ = DK-q1 - ^ux - £2ut) + ^ u x x x + £2uxxt = vlvxx ~ tluxvxx - i2vutvxx + <f>6(x, t, u(2\ vW) = d-itvxx + <!>*, for some function <f>6 and where Q1 is the characteristic Q 1 = n1 - ?ux - (5.17) Likewise, we have VI2 = ^ -vxt + f(x,t,u^\vW), Notice that <^ 6, 4>7, (f)S are independent of third order derivatives of u and second order deriva-tives of v. Consequently, (5.16) becomes X ( 2 ) [ - , , + F} = + + * « £ 7 ) + A M ^ V ) . where we have lumped together all the lower order terms in <f>9. By making direct substitutions from (5.15) into this equation and collecting all lower order terms together in </>10, we have (5.4a) equivalent to 0=(xW[-vx + F])\ ,3) = dS-[^12uxxt + a122uxtt + a222utu} + <f>w(x, t,u™,v), (5.18) where -< a112 = ML <^F_ 8uxx \duxx dF dG duXx dux ML) 4. dF dF , dF duxt ) duxx duxt dutt ' , 2 a122 =dF_(dF_\dF_ _ | G | + 8F.) + ( d F , ) ' + BF. (5.19) duxx \duxx yduxt autt\ dutt J ' \auxt ) o « t t o « n ' 222 _ (ML)2 ML I MLML 4. dF dG \duxx) dutt duxt dutt dutt dutt Chapter 5. A Potential Symmetries Classification of PDEs 132 Notice that the right hand side of (5.18) depends only on x, /, and parametric derivatives in A ( 3 ) . This illustrates again the advantages of Theorem 3.3.3 in that each equation of the prolonged standard form associated with the given system of PDEs is used at most once in the substitutions step (cf. Algorithm 3.3.4). Recall that parametric derivatives can take on any arbitrary values and since the unknown infinitesimals (£, n) are independent of the third order parametric derivatives uxxt, uxtt and um, (5.18) can only be satisfied if a 1 1 2 = 0, a 1 2 2 = 0, a 2 2 2 = 0. 
Notice that in order for X to be a potential symmetry generator, we cannot have = 0 as this would imply that (€1,€2,r}1) are independent of v. A similar calculation leads to X{2)[-vt + uxx + G}= d-§^(vxx + v x t l g. + ^g) + <f,"(x,t,u™,vll)). By making direct substitutions from (5.15) into this equation, we have (5.4b) equivalent to 0= (xW[-vt + G}) ( 3 ) = ¥ [ / ? 1 1 2 ^ + / ^ (5-20) where 3112 — ( dF \ 2 , dF I dG " \duxx ) ' duxt ' dutt' j3122 = ^ F _ ( d F _ _ d G _ \ + d F _ + d F _ ^ + ^ d ^ ( 5. 21) r ouxx \ouxt outt J outt ffuIt a%t ouxt outt ' o222 _ dF 8F , 9F dG , I dG \ 2 " duxx duxt dutt duxt / As before, since the infinitesimals are independent of all third order parametric derivatives and 7^  0 for potential symmetries, we must have /3 1 1 2 = 0, /? 1 1 2 = 0, p112 = 0. In summary, we have derived the foUowing necessary conditions: Theorem 5.1.3 Necessary conditions for (5.12) to admit potential symmetries through (5.13) are a 1 1 2 = 0, a 1 2 2 = 0, a 2 2 2 = 0, (5.22) /3 1 1 2 = 0, (5122 = 0, (5222 = 0, Chapter 5. A Potential Symmetries Classification of PDEs 133 where aJ and f3J are given by (5.19) and (5.21). As can be seen by the definitions of aJ and /3J, the necessary conditions (5.22) are quite complicated. Moreover, if one satisfies these conditions, then one still has to satisfy the system (f>10(x, t, u{2), v) = 0, cf>12(x, t, u{2\ v) = 0, (5.23) where </>10 and <f>12, defined through (5.18) and (5.20), are very complicated expressions involving (£, rj, F, G) and their derivatives. As such, we do not attempt to obtain any tighter necessary conditions. Instead, let us try to find some specific examples of PDEs that satisfy (5.22). Example 5.1.4 Consider the restricted class of PDEs (5.12) satisfying ^ = - 1 , | ^ = 0. (5.24) o u x x ' dutt v ' Then the necessary conditions (5.22) are reduced to #^ = - 1 , fi- = l, ?F = 0. (5.25) a u x t ' auxt ' dun ' Consequently, we are led to consider F = - u x x - uxt + f(x,t,u{1)), G = uxt + g(x,t,v,W), where / and g are arbitrary functions of their arguments. In this case, the restricted class of third order scalar PDE (5.12) is given by Uxxx = -2uxxt - uxtt + Dtf - Dxg. (5.26) The corresponding potential system (5.13) is vx — uxx uxt -\- f, vt = uxx + uxt + g. This is a much simpler potential system to analyse than the original system (5.13). However, we still have two arbitrary functions / and g, with each depending on five variables (x,t, u, ux, ut), that must satisfy (5.23). At this stage, we are unable to completely analyse (5.23) for all cases of / and g. However, we have found that (5.23) is satisfied when (/, g) = (0,0). The scalar Chapter 5. A Potential Symmetries Classification of PDEs 134 PDE (5.26) corresponding to this case is just (5.9). As previously mentioned, through the potential system (5.10), the scalar PDE (5.26) admits the infinite-parameter family of potential symmetries (5.11). • In summary, we have derived the necessary conditions (5.22) for the third order scalar PDE (5.12) to admit potential symmetries through the potential system (5.13). We have pointed out that these necessary conditions are very complicated and are not very tight ((5.23) are still to be satisfied on top of (5.22)). Due to their complexity, we do not attempt to obtain tighter necessary conditions. However, using these necessary conditions, we have discovered the scalar PDE (5.9) which proves that Claim 5.1.1 is not correct. 
In §5.2 we will construct many examples of higher order scalar PDEs admitting potential symmetries through a different approach. 5.1.2 A Class of Scalar PDEs of Order Greater than Two Consider the scalar PDE R, given by A = -DtF(x,t, u(k)) + DxG(x, t, ? i ( n - 1 ) ) = 0, k < 1, n > 3, (5.27) with potential system S, given by (5.3). Let us calculate the determining equations for S which are given by (5.4). We first need the prolonged standard form for S. Let < be the potential ordering of Example 3.1.2, with (x\, x2) = (x,t), and assume that R has the same solutions as its solved form ux*tn-* = <f>(x,t,uM), du°l_a = 0, (5.28) where ux<rtn-a is the leading term in (5.27) with respect to -<, for some 0 < o < n. The standard form s/x of S is the system (5.3) and (5.28), which is of order m = n. Consequently, we have N = n in (5.4). The set of principal derivatives of up to order n is given by B(U) = {Uxatn-<r} U {Vxrts 0<T,5, 1 < T + S < Tl}. To obtain the prolonged standard form s/^n), we must obtain equations for each member of Chapter 5. A Potential Symmetries Classification of PDEs 135 i ? ( n ) (see algorithm proLstandard in Appendix A.4). The equations for vx, vt and given by (5.3) and (5.28). To obtain equations for the higher order derivatives of v in B^n\ we first differentiate the equations of (5.3) to obtain vxrts = J J J - ' f l ' W i ^ i i C ) ) , 1 < r, 0 < s, l<r + s'<n, (5.29) vts = Dst~1G(x,t,u^"-1'>), 1< s < n - 2, and vtn-i = D^-2G(x,t,u^n-1'>), (5.30) The reason for isolating the equations for v t n - i and vtn will become apparent shortly. In general, the right hand sides of these equations may involve principal derivatives of sf^. If so, we must use the equations of s/x to implicitly substitute for these terms (see alCimpEsubs in Appendix A.2). Before doing so, let us discuss how we will be using the equations of sf^ to analyse (5.4). . Since k < 1, ~XSl\—vx + F] depends only on (x,t, u ( 1 ) , ?J (1)). Hence, in calculating (5.4a), the only substitutions from sf^ required are for vx and vt, given by (5.3). Since G is of order n — 1, X ( n - 1 ) [ — V t + G] depends only on ( x , t , - y ( n _ 1 ) ) . Hence, in calculating (5.4b), we require all equations of sf^ whose leading terms are derivatives of v up to order n — 1. As such, (5.4b) is quite complicated and, to make progress, we focus on its highest order terms. At first sight its highest order terms must come from the substitutions for v t n - i since the right hand side of (5.30a) depends on terms of order up to 2n — 3, which is strictly greater than n — 1 ( r i > 3). If this were true, then to obtain the highest order terms in (5.4b) one would simply find all expressions involving % > - i . This was the approach taken in [51]. However, as it stands, (5.30a) is not an equation of sf^ since we have yet to ensure that its right hand side is independent of all principal derivatives. If all the highest order terms in the right hand side of (5.30a) are principal derivatives, then implicit substitutions from st\ would render the right hand side of (5.30a) to be of lower order, and hence we cannot obtain the highest order terms in (5.4b) by simply isolating those expressions in X ( n _ 1 ' [ — v t + G] which involve v t n - i . Chapter 5. A Potential Symmetries Classification of PDEs 136 However, in the following lemma we prove that this does not happen. In [51], no such result was provided and, as such, the 'proof of Claim 5.1.2 is not complete in that paper. 
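The order bookkeeping behind this lemma can be seen in a toy computation. For n = 4, a right-hand side G of order n − 1 = 3 generally produces derivatives of order 2n − 3 = 5 in D_t^(n−2) G; a sketch (sympy assumed; the sample G is our own and illustrates only the order count, not the parametric/principal distinction, which the lemma itself handles):

    import sympy as sp

    x, t = sp.symbols('x t')
    u = sp.Function('u')(x, t)

    # Toy case n = 4: a right hand side G of order n - 1 = 3, say G = u_xtt.
    G = u.diff(x, t, t)

    # D_t^(n-2) G = D_t^2 G is the fifth order derivative u_xtttt,
    # i.e. of order 2n - 3 = 5, which is why (5.30a) is of order 2n - 3.
    print(G.diff(t, 2))        # Derivative(u(x, t), x, (t, 4))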
Lemma 5.1.5 Let -< be the potential ordering of Example 3.1.2 with (2:1,2:2) = (x,t), let S be the potential system (5.3), and let sf^ be the standard form of S, given by (5.3) and (5.28). Suppose that sf^ is the prolonged standard form of S. Then the equations in sf^ for vtn-i and vtn are of the form K ' (5.31) vtn = H2(x,t,u^"-2^), for some functions H1 and H2 which are independent of any principal derivatives. Moreover, all other equations in sf^ are of lower order. Proof. Let us first prove that the equation for vtn-i is of the form (5.31a). As previously mentioned, (5.31a) is the result of making all possible implicit substitutions from sL, into the right hand side of (5.30a). If one can show that D™~2G depends on a parametric derivative uj of order \J\ = 2n — 3, then such a term cannot be implicitly substituted for and, as such, this part of the proof will be complete. Let us show that D™~2G depends on a parametric derivative of order 2n — 3. By assumption, (5.28) is the solved form of (5.27) with respect to <. Since -< is the potential ordering with ( £ 1 , 2 : 2 ) = (x,t), and u x * t n - a is the leading term in (5.27), we must have 7^0, ^ = 0, a<r<n. (5.32) On the other hand, expanding (5.27) and focusing on the n-th order terms, we have for some function A. By (5.32) and (5.33), we have dG i n dG 8u . I , . , , . = 0, a<r<n. Using this, we can expand (5.30a) as follows: Vt"~l = {duxZGltn_r) « B r - l t 2 n - r - 2 + A(x, t, U<2"-*>), 1 < T < G, Chapter 5. A Potential Symmetries Classification of PDEs 137 for some function A . Consider making all possible implicit substitutions from sf^ into the right hand side of this equation: Clearly, implicit substitutions can only be made from (5.28) for derivatives of ux<rtn-<r. Hence the highest order terms cannot be replaced (and hence are parametric derivatives). Moreover, when using (5.28) to substitute for the principal derivatives in A , we do not raise the derivative order of the terms in A . Consequently, we obtain v t - > = { d u j ° t n - r ) «*- i*»-r- 3 + A(s, t ,u ( 2 - 4 ) ) , 1 < r < a, where the right hand side is independent of all principal derivatives. Hence, this is the equation for vtn-i appearing in sf^. In fact, its right hand side is the required function H1 in (5.31a). By a similar argument, one can also show that the equation for v^n is of the form (5.31b). We now have that (5.31) are two equations in sf^. The remaining equations are obtained by making all possible implicit substitutions from sf^ into the system I given by (5.3), (5.28) and (5.29). These remaining equations are of lower order than (5.31) since 7 is of order In — 1 and the only possible substitutions are from (5.28) which do not raise the order of equations. Consequently, (5.31) are the highest order equations in sf^K • With this lemma, we are now in a position to prove Claim 5.1.2. Since k < 1, we have F - F(x,t,u,ux,ut), and XW[-V, + F] = -n\ + + + ^ where (£ x , £ 2 ,77 1 , n2) are functions of (a;,i, u, v), and (n2,77^,772) are given by the infinitesimal extension formula (2.35). Since the right hand side depends only on (x, t, V ( 1 ) ) , the only substitutions from sf^ are for vx and vt, given by (5.3). As such, (5.4a) is given by 0= (xW[-vx + F])\j) = 41G(x,tMn-1)) + 1>2, (5-34) where Chapter 5. 
A Potential Symmetries Classification of PDEs 138 ^ = e+euux+evF+§£(r,i - evux -^2 = £ {vi + viux - Ux[ex - euu,} - ut[et + £ « j - + - «*]) 3 + f ^ ( " i + viut - ux[& - - + £ut]) With n > 3, G is strictly of greater order than ip1 and ^ 2 , which are of order one. Since the unknowns (£, 77) are independent of the highest order parametric derivatives of G, the only way that (5.34) can be satisfied is if ip1 = 0, v2 = 0. (We do not allow G = 0.) These are precisely the equations derived by Pucci and Saccomandi in [51] (cf. equations (3.5,3.6)). By analysing these equations further, they arrived at (5.8b). Consequently, Claim 5.1.2(1) is proved. We also have X^i-tH + G(x,t,u(n~1})] = -n\ + + + rAffi, (5-36) where 0 < | J\ < n — 1, (£*,£2,771, rj2) are functions of (x,t,u,v), and (T? 2 ,^ ) are given by the infinitesimal extension formula (2.35). Since the right hand side depends on (x, t, ?/"-1)), the only equations of sf^ required for substitutions are those whose leading terms are deriva-tives of v up to order n — 1. Of these equations, the highest order substitution is for vtn-i, by Lemma5.1.5. Moreover, this term only arises in the right hand side of (5.36) through nj = D^W ~ £ V - eut] + euxtn-i + £ 2 u t n , = ?gLVtn_1+^x,t,u^\v^), where J = (2,2 • • •, 2), | J | = n — 1, <$> is independent of vtn-i, and Q1 is given by (5.17). Hence, in (5.36), we have X^i-vt + G] = ° & v t n - i ^ + $(x,t,u^\v^), where <f> is independent of vtn-i. If one were to make direct substitutions from sf^ in the right hand side of this equation, then by Lemma5.1.5 the highest order terms in the resulting Chapter 5. A Potential Symmetries Classification of PDEs 139 expression must come from the substitution of vtn-i. Hence, 0 = (xln-1)[-vt + G{x,tMn-1))))\ f ( n ) = H \ x , t , u ^ - ^ ^ + t(x,t,u^\v), where cf> is independent of all principal derivatives. Notice that we have used 2n — 4 > n — 1, since n > 3, to determine the order of (f>. Since the unknowns (£, r/) are independent of the highest order parametric derivatives of H1, we must have ^ = 0, or H\x,t,u^) = 0, or ^ = 0. We cannot have = o since this implies that (£ x, £ 2 , r/1) are independent of v and X would not be a potential symmetry generator. Since H1 depends only on x , t and parametric derivatives, which can take on arbitrary values, we cannot have H1 = 0. Hence, we have proved Claim 5.1.2(2). Consequently, we elevate Claim 5.1.2 to the status of a theorem: Theorem 5.1.6 Assuming k < 1, necessary conditions for R to admit potential symmetries are (1) F- \ H ( x , t i u ^ U t + Ki(x,t,u)ux + K2(x,t,u) if FUt ^ 0, \ K(x,t,u,ux) ifFUt=0. (5.37) (2) G is independent of utn-i. The results of §5.1.1 and §5.1.2 illustrate the importance of Theorem 3.3.3 and Algorithm 3.3.4 in overcoming the problems associated with Algorithm 2.3.9, which was essentially the algorithm used in [51] to arrive at Claim 5.1.1 and Claim 5.1.2. In general, one cannot discover all possible (n — l)-th order differential consequences of (5.3) and, as such, step 2 of Algorithm 2.3.9 cannot be executed correctly. As example (5.9) shows, such differential consequences can exist and this explains why Claim 5.1.1 is in error. We avoid such problems in Algorithm 3.3.4 by using the prolonged standard form of the given system of PDEs. Moreover, Claim 5.1.2 turned out to be correct because of the fact that, when k < 1, there are no (n — l)-th order differential consequences of (5.3) involving vtn-i. Consequently, the coefficient of vtn-i must be zero. 
As the corollary below shows, this fact follows directly from Lemma 5.1.5, which we Chapter 5. A Potential Symmetries Classification of PDEs 140 relied on to prove Theorem 5.1.6. However, this fact was not proven, but assumed3 in [51] and, as such, the 'proof in that paper for Claim 5.1.2 is not complete. Corollary 5.1.7 When k < 1 and n > 3, there are no (n — l)-th order differential consequences of (5.3) involving vtn-i. Proof. Let -< be the potential ordering of Example 3.1.2 with ( £ 1 , 2 : 2 ) = Then, by Lemma 5.1.5, the two highest order equations in the prolonged standard form sf^ of (5.3) are given by (5.31). Suppose there is an (n — l)-th order differential consequence of (5.3) involving Vfn-1: Q(x,t,u^\v^) = 0, g ^ z r / o . Then aU solutions (w, v) = f(x) of (5.3) must satisfy the equation: n\ By Lemma3.2.7(2), we must have p(n-l) - 0. 1 / ft But the left hand side of this equation cannot vanish since Q is of order n — 1 and, by Lemma-5.1.5, the highest order substitution is for vtn-i. This substitution leads to parametric terms of order In — 3, and all other possible substitutions lead to parametric terms of lower order. Since n > 3, we have 2n — 3 > n — 1 and 0| ( „ ) cannot be identically zero, but must depend on parametric terms of order 2n — 3. • 5.2 Higher Order Scalar PDEs Admitting Potential Symmetries In this section, we consider the problem of finding scalar PDEs of order n > 3 that admit potential symmetries. The two standard approaches to this problem would be to perform a direct symmetry classification of potential systems, or to use differential invariants. It turns 3In the 'proof of Claim 5.1.2 in [51], no substitutions for vtn-i were made in (5.4b) and its coefficient was set to zero. Chapter 5. A Potential Symmetries Classification of PDEs 141 out that there are significant problems to be overcome in either approach. Let us investigate the differential invariants approach first. Let X be any infinitesimal generator in (x, t, u, t>)-space. To find the differential invariants of X , one first solves the system Xf>i) = 1, X(z 2 ) = 0, X(wi) = 0, X(> 2) = 0, (5.38) where (z\, z2, w\, w2) are linearly independent functions of (x, t, u, v). In (zi, z2,wi,w2)-space, the infinitesimal generator becomes X = dZl which corresponds to translations in z\. As such, the invariants of X are z2, W{ and all derivatives of W{, i = 1,2. Consequently, any n-th order PDE of the form A(z2,w[n\w2n)) = 0, (5.39) must admit X , where (zi,z2) are treated as the independent variables and (u>i,w2) are treated as the dependent variables. The problem with using this approach to find scalar PDEs admitting a potential symmetry X is this: In general, the functions z, w and derivatives of w with respect to z depend on v and its derivatives in a nontrivial way. As such, the PDE (5.39) will also depend on v and its derivatives. But the scalar PDE we require must involve only the original dependent variable u and its derivatives. Consequently, only certain functional forms of A are suitable. Example 5.2.1 Let n = 2 and X = vdx. Then the following linearly independent functions X Z\ = - , Z2 = t, Wi = U, W2 = V, V Chapter 5. A Potential Symmetries Classification of PDEs 142 are solutions of (5.38). 
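These four functions can be checked against (5.38) directly; the short sketch below uses the sympy Python library (an illustration only, not thesis code) and applies X = v d/dx, which has no t-, u- or v-components, to each of them.

    import sympy as sp

    x, t, u, v = sp.symbols('x t u v')
    X = lambda f: v * sp.diff(f, x)     # X = v d/dx acting on (x, t, u, v)-space

    z1, z2, w1, w2 = x / v, t, u, v
    print(sp.simplify(X(z1) - 1), X(z2), X(w1), X(w2))   # prints: 0 0 0 0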
We have the following invariants of X : w\,zlZl = 2x~1(~1v3ux + C,~2vA(uxx + 2x~1ux) - (~3xv4uxvxx, vJi,zlZ2 = -(~1v2uxt + (~2v2(vtux + xvtuxx + xuxvxt) - (~3X2V2VtUxVxx, wi,z2z2 = utt - C~1x(2vtuxt - uxvtt) + (~2x2(v?uxx + 2vtuxvxt) - (~3X3V?UXVXX, U>2tzi = — C - 1 ^ - 1 ^ 3 — X_1V2, W2,z2 = - ( ^ W t , W2,zlZl = -C~3v5vxx + 2(-2x~2v5 + AC-xx-2v4 + 2x~2v3, W2,zxz2 = (~1X-1V2Vt + (~2V3(vxt + X^Vt) - (-3V3XVtVxx, W2,Z2z2 = -(~3x2vv?vxx + 2(-2xvvtvxt - C^VVtt, where £ = — v + xvx. Notice how these invariants depend on v and its derivatives in a nontrivial way. While it is true that any second order PDE of the form A(z2,W1,W2, Wi,Zl, Wi, Z 2 , W2,Zl, W2,z2, Wl^iZ! , w l , z l Z 2 , W\,Z2Z2 , w 2 , z l Z l , W2,ZlZ2 , W2>Z2Z2 ) = 0, admits X , this PDE does not correspond, in general, to a scalar PDE for u. One must find particular functional forms for A so that A is independent of v and its derivatives, i.e., one must find solutions of 8 A _ n 9A_ _ n dA — ft dv - u> dvx ~ U' dvt ~ U> 9 A _ = 0 | A = 0 , 4 ± = 0 , OVxx ' OVxt ' ovtt ' which is an overdetermined system of PDEs for A . • In general, for any n > 2, the task of finding suitable functional forms of A , such that (5.39) is a scalar PDE for u, appears not to be a trivial one. As such, we do not pursue the differential invariants approach at this stage. As for the symmetry classification approach, our efforts in §5.1 show that one can only go so far with this method when considering the general class of potential systems (5.3) associated with the n-th order scalar PDE (5.2). The two classifying functions F and G depend on many Chapter 5. A Potential Symmetries Classification of PDEs 143 variables and this leads to symmetry conditions which are very difficult to analyse. In order to simplify these symmetry conditions, one could consider a smaller class of scalar PDEs (e.g. by making the classifying functions depend on fewer variables). Furthermore, rather than perform a complete potential symmetry classification, one can a priori fix the potential symmetry generator and search only for PDEs within the smaller class of scalar PDEs which admit the given potential symmetry generator. This is the general strategy we will employ. In our approach, we start with a given scalar PDE R which is known to admit a potential symmetry Xpot through its potential system S. For each n > 3, we then construct an n-th order PDE Rn and its associated potential system Sn, which are related to R and S through an unknown function g(x, t, t^™-1'). We then determine sufficient conditions for Rn to admit the given potential symmetry generator Xpot through Sn. In short, in our approach we: • Start with R which admits Xpot through S. • Construct Rn and Sn which are related to R and S through a function g(x,t, ft (" - 1 )). • Determine sufficient conditions for g such that Rn admits Xpot through Sn. The sufficient conditions for g, which are certainly simpler than the symmetry conditions for the general case (which involves F and G), are still very difficult to analyse. As such, we will not attempt a complete classification of all possible functions g. Rather, by determining the functional form of X^pnot 1'g, we are able to choose suitable restrictions for the dependencies of g so that the associated symmetry conditions reduce to a first order scalar PDE for g. The general solution of this PDE is then obtained through the method of characteristics [15, 47]. We now give the details. 
Consider the second order diffusion equation R given by AR = -ut + Dx[K(u)ux] = 0, where Chapter 5. A Potential Symmetries Classification of PDEs 144 K(u) = ^ e -a a r c t a n u a = const. (5.40) A f (5.42) Associated with R is the potential system S given by Af = -vx + u = 0, Af = -vt + Kux = 0. Through S, R admits the potential symmetry (cf. Xg 1 in §4.3.5) Xp0t = - v d x - atdt + (1 + u2) du + xdv. (5-41) Using the infinitesimal extension formula (2.35), one can easily derive the following identity which will be useful later: r vx + u 0 vt — Kux u + a where A s = (Af,A^)T. Incidentally, note that (5.42) verifies that S admits Xpot. For any n > 3, consider the n-th order scalar PDE Rn given by ' • AR" = -ut + Dx[Kux) + Dxg(x\t, u ^ ) = 0, # 0, (5.43) where g is some analytic function of its arguments and K is given by (5.40). We assume that Rn has the same solutions as its solved form uxn = F(x,t,uM), |Sr = o, for some function 4>n. Associated with Rn is the potential system Sn given by Af = -vx + u = 0, Af" = -vt + Kux + g = 0. Our goal is to find functions g such that Rn admits the potential symmetry Xpot through Sn. To this end, it will be useful to observe the following relationships between the equations of R and S with those of Rn and Sn: AR" = AR + Dxg, A 5 " = A 5 ' ° (5.44) (5.45) Chapter 5. A Potential Symmetries Classification of PDEs 145 Consequently, using (5.42), we also have the identity 0 A + 0 AP o t 9 J 0 (5.46) Hot 9 vx + u 0 vt — Kux u + a vx + u 0 vt — Kux u + a This identity will allow us to greatly simplify the infinitesimal symmetry conditions (3.19) of Sn so that the conditions that g must satisfy, in order for Sn to admit Xpot, can be easily obtained. We first need to determine the prolonged standard form of Sn. Let -< be the potential ordering of Example 3.1.2 with (xi,x2) = (x,t). The standard form -s/t; of Sn, which is given by (5.44) and vx = u, (5.47) vt = Kux + g, is of order m = n. The parametric and principal derivatives of sf^ up to order N > n are respectively given by A{N) = {uxrts : 0 < r < ra, 0<s , r + s < A } U {v}, B(N) = {uxrts : ra<r, 0 < 5 , r + s < N} U {vxrt, : 0 <r,s, 0 < r + s < N}. To obtain the prolonged standard form sf^ of Sn, one first appends to s/x the following equations which are obtained by differentiating the equations in (5.47): vxrt° = uxr-it*, 1 < r, 0 < s, 1 < r + s < ra, (5.48) vts = D;-\Kux + g) 1 < s < ra. Consider the system (5.44), (5.47) and (5.48): It has the same solutions as Sn; there is a one-to-one correspondence between its leading terms and the principal derivatives of _B ( n ) ; and its right hand sides depend only on x, t and parametric derivatives of A2n~2^ (D™~2g is of order 2ra — 2 and contains no principal derivatives). Consequently this system satisfies all the axioms Chapter 5. A Potential Symmetries Classification of PDEs 146 of Definition 3.1.14 and is thus the prolonged standard form sf^ for Sn. Let us now make direct substitutions from the equations of sf^ into (5.46): A s " must vanish, by Lemma 3.2.7(2), and g remains unchanged since g does not depend on any principal derivatives. Thus (•Xfe 1 ) A 1 ) s / ( n ) = 0, (Xfc- 1 * A f ) | ^ B ) = - ( « + a)g + (^g) • A direct application of Theorem 3.3.3 now proves the following result: Lemma 5.2.2 Consider the n-th order scalar PDE Rn and its potential system Sn, given by (5.43) and (5.45) respectively. 
Through Sn, Rn admits the potential symmetry Xpot, given by (5.41), if and only if -(u + a)g + (xfeVg) = 0, (5.49) where sf^ is the prolonged standard form (5.44), (5-47) and (5.48). Now our goal is to find functions g satisfying (5.49). Let us investigate the case n = 4 to see the difficulties that arise in analysing (5.49). We will be highlighting some key observations which are generally applicable for any n > 3. Example 5.2.3 When n = 4, we have g = g(x,t,u™), (5.50) R4 is then given by -ut + Dx[Kux] + Dxg(x, t, u ( 3 )) = 0, (5.51) which has solved form uxxxx = <f(x,t, u ( 4 )), 9 ^ = 0, (5.52) for some function <f>4. The associated potential system S4 is given by -vx + u = 0, -vt + Kux + 5 = 0 , (5.53) Chapter 5. A Potential Symmetries Classification of PDEs 147 which has standard form s/_<, given by (5.52) and vr = 11, vt = Kux + g. The prolonged standard form sf^ is then given by (5.52), (5.54) and (5.54) t) -r* -T" Ubnr Vxxxx — Uxxx, ^XXX — I^XX) Vxxxt — Uxxt, ^xxt = uxt, Vxxtt = uxtt, Vxtt = utt, Vxttt = um, vtt = Dt[Kux] + Dtg, vm = D2[Kux] + D2g, vtttt = Df[Kux] + Dfg. (5.55) Notice that: (01) Any equation Ihs = rhs in s/^n) with ord(rhs) > ord(g) has Ihs € {vtt, Vttt, • • •}• Using the extension formula (2.34), we have X$t9 = - « f £ + vtt (ux^-t + u x x ^ + Zuxt&fc) + vmux^-t + iP(x,t,u(3\vt), (5.56) where the dependency of ip on g and its first partial derivatives is not explicitly stated. The reason for isolating the coefficients of v, vtt and vUt will be pointed out shortly. Here is another useful observation: (02) In (5.56), the term v arises only through the dependence of g on x, and pure t derivatives in v arise only from the dependence of g on uxrts, s > 0. Using (5.56) and the equations for sf(f>, equation (5.49) with n = 4 becomes - v& + Dtg (ux^ + u x x ^ + 3uxt^) + D2g (ux^) + # M , u^) = 0, (5.57) for some function ip. The unknown to solve for is g. A compatibility condition between (5.50) and (5.57) is ¥ = o, dx ' (5.58) since g is independent of the parametric term v. Moreover, g is independent of the fifth order parametric terms in D2g and this leads to da 0. (5.59) Chapter 5. A Potential Symmetries Classification of PDEs 148 With this, the highest order terms in (5.57) are the fourth order parametric terms in Dtg. As before, since g is independent of these terms, we must also have {ux^ + u x x ^ + 3uxt^)=0, (5.60) and consequently $(x,t,u(3)) = 0. (5.61) Notice that the left hand sides of (5.58), (5.59) and (5.60) are just the coefficients of v, vtt and Vttt in (5.56). This is worth noting: (03) When solving (5.49), the coefficients of v and the terms in {vtt,vttt, • • •} that occur in X^T^g must vanish. This explains why we isolated the coefficients of v, vtt and vm in (5.57). To complete the solution process, we must solve the overdetermined system of nonlinear PDEs for g given by (5.50), (5.58), (5.59), (5.60) and (5.61). The nonlinearity is due to the fact that ip is nonlinear in g. The problem of obtaining the general solution for g is, in general, not tractable.4 However, we are not necessarily interested in obtaining the general solution of (5.49), but only in obtaining specific solutions in the easiest possible manner. One way to proceed is to restrict g to the form g = g(t,u,ux,uxx,uxxx), (5.62) since, by (02), X^0\g will then be independent of v, vtt and vm. Consequently, none of the conditions described in (03) arise and (5.57) reduces to (5.61) and (5.62). This is certainly a simplification. 
However, one may still end up with an overdetermined system to solve if there are further compatibility conditions between (5.61) and (5.62). Fortunately this does not happen as we now explicitly show. 4Work on general methods to solve overdetermined systems of nonlinear PDEs can be found in [56] and [44]. However, even when the nonlinearity in the unknown(s) is of polynomial type, the problem of obtaining general solutions is still an unsolved one. Chapter 5. A Potential Symmetries Classification of PDEs 149 When g is given by (5.62), then (5.61) becomes 0 = -(u + o)g - at% + (1 + „ 2 ) | i + 3uuxfe + (4uuxx + 3u2x)^ +(5uuxxx + 10UXUxx)Q^.-Clearly all terms appearing in this equation involve the arguments of g and so there are no further compatibility conditions between (5.63) and (5.62). Moreover, this equation is just a first order scalar PDE for g. Consequently, it is equivalent to an ODE whose general solution, obtained through the standard method of characteristics [15, 47], involves an arbitrary function of k arguments where k is one less than the number of arguments of g. Carrying out the calculations, we find the following general solution of (5.63): g = (1 + u2) 2 exp (a arctanu) Q(d1,d2,63,64), f g - ^ 0, (5.64) where 0 is any analytic function5 of its arguments: 0! = arctan u + Mogr, 93 = - j^tp, f\ UX Q Uxxx IQUUXUXX I 15u2Ux U1 — d i„,213/2 ? U4 — / - I _ L „ . 2 \ 5 / 2 fi .,,,217/2 T " ( 1 + U 2 ) 3 / 2 ' 4 ~ ~ ( 1 + U 2 ) 6 /  ( 1 + U 2 ) 7 / 2 ( 1 + U 2 ) 9 / 2 ' The reason for (5.64b) is so that (5.43b) is satisfied. In summary, any function g, given by (5.64), satisfies (5.49) with n = 4. By Lemma5.2.2, for any such g, the corresponding fourth order scalar PDE (5.51) admits the potential symmetry (5.41) through the potential system (5.53). Due to the functional form of g, (5.51) corresponds to a class of fourth order nonlinear evolutionary PDEs characterized by an arbitrary function of four variables. A particular member of this class is "XXXX ( -14M + a)uxxxux - ltiuu2xx 5(19tf2 - 2au - 2)uxxu2 • /i . o \ Q "T~ , O\A > (1 + U2)2 ' (1 + 7 J 2 ) 3 ' (1 + U 2 ) 4 exp(a arctan u), (5.65) uxx 15u(au + 2 — 6u2)ux (a - 2u)u2 l + M 2 ( l + M 2 ) 5 ( l + U 2 ) 2 l which corresponds to g = ( l + u2)? exp (a arctanu) 64. • The observations (Ol), (02) and (03) made in the above example equally apply to the 5 Recall that we are only considering analytic PDEs so that the existence and uniqueness theorem, Theorem-3.1.18, and the symmetry theorem, Theorem 3.3.3, are both applicable. Chapter 5. A Potential Symmetries Classification of PDEs 150 general case of solving (5.49) for any n > 3. As such, we are motivated to restrict g to the form g = g(t,u,ux,ux2,---,uxn-i). (5.66) Our hope is that, as in the above example where n = 4, (5.49) will coUapse to a first order scalar PDE for g in its arguments. To this end, we first require the functional form of X^^g: Lemma 5.2.4 Consider an infinitesimal generator in (x,t,u,v)-space of the form X = £\t, v) dx + £ 2(i) dt + r)\t, u) du + n\x, t, u, v) dv. Suppose g is of the form (5.66). Then ~XSn~x^g depends only on (t\ U, Ux, • • • , Uxn—1 j vx, vx2, • • •, Vxn—1 ) (5.67) and on first order partial derivatives of g with respect to its arguments. Moreover, the first order partial derivatives of g appear linearly. Proof. Let ( x i , x 2 ) = (x,t) and ( u i , w 2 ) = (u,v). Using the infinitesimal extension formula (2.35), we have X ^ g = ep- + ri1jJE-, J = (1,1, ••• ,!) 
, 0 < | J | < n - 1, (5.68) dx2 dultJ where With £ 2 = £ 2(a; 2), we have V j = Djiv1) - A / ^ u i . i ) + lV,(J,i). (5.69) Since n 1 = r]l(t,u), Djrj1 depends only on (5.67). By expanding Z ) J ( £ 1 T J 1 1 ) , one finds that the only term that doesn't depend solely on (5.67) is the highest ordered term — ^ M i ^ j y ) which cancels with the last term in (5.69). This proves that rfj depends only on (5.67). Moreover, by (5.68), X ' 7 1 - 1 ' ^ depends linearly on the first order partial derivatives of g. • Corollary 5.2.5 Let Xpot be given by (5.41), let g be given by (5.66), and let sf^ be given by (5-44)J (5-47) and (5.48)- Then, for any n > 3, (5.49) is a first order scalar PDE for g in terms of its arguments. Chapter 5. A Potential Symmetries Classification of PDEs 151 Proof. Since Xpot and g satisfy the conditions of Lemma 5.2.4, we have that X^T^g depends only on (5.67) and linearly on first order partial derivatives of g. Consider making all possible substitutions from sf^ into X^^g: Using (5.48) with s = 0, all x derivatives of v in (5.67) are replaced by x derivatives of u up to order n — 2. No other substitutions are possible. Consequently (X^ot l^g)\ ,(«) depends only on the arguments of g and linearly on its first order partial derivatives. Hence (5.49) is a first order scalar PDE for g. • One can now apply the standard method of characteristics to obtain the general solution of the first order scalar PDE (5.49). Since g, given by (5.66), depends on n + 1 arguments, the general solution of (5.49) involves an arbitrary function of n variables (cf. (5.64) where n = 4). By Lemma 5.2.2, any such solution g of (5.49) leads to a scalar PDE Rn admitting the potential symmetry Xpot through the potential symmetry Sn. Moreover, by the form of g in (5.66), Rn corresponds to a class of PDEs of evolutionary type. In general, as illustrated by the example (5.65), these PDEs are nonlinear. We have proven the following theorem: Theorem 5.2.6 For each n > 3, there exists a class Rn of n-th order nonlinear evolution-ary PDEs (5.43), where g is of the form (5.66) and depending on an arbitrary function of n variables, such that each PDE in this class admits the potential symmetry (5.41) through the potential system (5.45). This theorem shows that potential symmetries are admitted by an abundance of scalar PDEs of higher orders (n > 3). 5.3 Higher Order Systems of PDEs Admitting Potential Symmetries In this section, we generalize the method of the previous section to determine large classes of systems of PDEs of order n > 2 which admit a potential symmetry. We will start with a given system of PDEs R admitting a potential symmetry Xpot through an associated potential system S. We then construct, for each n > 2, an n-th order system Rn with potential system Sn that Chapter 5. A Potential Symmetries Classification of PDEs 152 are related to R and S through a function g(x, t t ( n _ 1 ) ) . Necessary and sufficient conditions for Rn to admit the potential symmetry Xpot through Sn are then derived which only involve the unknown function g. By isolating the highest order terms appearing in these conditions, a suitable restriction of the arguments of g is arrived at. By determining the functional form of X^pnot ^g, we show that g must satisfy a first order PDE (in its arguments) whose solution leads to the desired PDEs Rn. Consider the first order system R of PDEs given by A f = u2,t - u-i_u2 - Mi - u2 = 0, which is also known as the Thomas equations [64]. 
Using the conservation law in the second equation one can construct the associated potential system S given by A f = u1<x - u2,t = 0, Af = Af = 0, A f = ~Vt + Ml = 0, Af = -vx + u2 = 0. Through S, R admits the potential symmetry [15]: (5.70) where tp(x,t) is any solution of the linear PDE (5.71) This can be verified through the following useful identity: V> -(u2ip + ipx) 0 = e 0 0 (5.72) 0 0 where A 5 = (Af,Af,Af) r. Chapter 5. A Potential Symmetries Classification of PDEs 153 For any n > 2, consider the n-th order PDE Rn given by A f = u2,t - uiu2 U\ u2 = 0, A f = uliX - u2tt - Dtg(x, t, uW) = 0, ? 0, 1 ,tn 1 (5.73) where g is some analytic function of its arguments. We assume that Rn has the same solutions as its solved form U2_t = UiU-2 + Ui + U2, (5.74) 2,t \U2   u2, Ulitn = <f>n(x,t,V.W), d<f>n 0, (5.75) (5.76) for some function (f>n. Without loss of generality, we can use (5.74a) to make all possible implicit substitutions (see the algorithm all-impl.subs in Appendix A.2) for u2,t and its derivatives appearing in <f>n and g. Consequently, we assume that cf)n = (f>n(x,t; u2tXr; uX)Xrts), 0<r,s, s < n, r + s < n, g = g(x,t;u2>xr; u1>xrt.), 0<r,s, r + s < n - 1. Associated with Rn is the potential system Sn given by A f = A f = 0, A f = -vt + MI = 0, A f = -vx + u2 + g = 0. Our goal is to find functions g such that Rn admits the potential symmetry Xpot through Sn. To this end, it will be useful to realize the following relationship between the equations of R and S with those of Rn and Sn: AR" =AR + [0,-Dtg]T, A5" = tf + [0,0,g]T. Consequently, using (5.72), we also have x&v&r = ^ > s + [ 0 , o , ^ V 1> -(u2xp + ipx) 0 0 ev 0 0 As + 0 0 0 A p o t 9 Chapter 5. A Potential Symmetries Classification of PDEs 154 A 5 " + 0 0 (5.77) tp -(u2tp + tpx) 0 o i> o o o v This identity will aUow us to greatly simplify the infinitesimal symmetry conditions (3.19) of Sn so that the conditions that g must satisfy, in order for Sn to admit Xpot, can be easily obtained. We first need to determine the prolonged standard form for Sn. Let -< be the potential ordering of Example 3.1.2. Here, we set (x\,x2) = (t, x) so that (5.74) is solved with respect to -<. The standard form s/_< of Sn, which is given by (5.74) and vt = « i , vx = u2+ g, (5.78) is of order m = n. The parametric and principal derivatives of sf^ up to order N > n are respectively given by A(N) = {ultXrts : 0 < r, 0 < s < ra, r + s<N}U {u2<xr : 0 < r < N} U {v}, B(N) = {ultXrts : 0< r , n < s, r + s < N}U {u2,xrts : 0 < r, 0 < s, r + s < N} U {vxrts : 0 < r , 5 , 0 < r + s< N}. The prolonged standard form sf^ of Sn contains a unique equation for each term in i ? ( n ) . To determine sf^, one first appends to sf^ the following equations which are obtained by differentiating the equations in (5.74a) and (5.78): u2>xrts = DrxDSt~1(u1u2 + u1 + u2), 0< r , 1 < s, 1 < r + s < n, vxrts = ulxrts-\, 0 < r , l < s , 1 < r + s < ra, (5.79) vxr = u2iXr-i + D^g, 1 < r < ra. Consider the system I given by (5.74), (5.78) and (5.79): It has the same solutions as Rn and there is a one-to-one correspondence between its leading terms and the terms in i ? ( n ) . Consequently, the first two axioms of Definition 3.1.14 are satisfied. Also, except for the set of equations (5.79a), the right hand sides of I depend only on independent variables and parametric Chapter 5. A Potential Symmetries Classification of PDEs 155 derivatives of A ( 2 n _ 2 ) . 
Hence, to obtain the prolonged standard form sf^ of Sn, one must make all possible implicit substitutions from sf^, in (5.79a). The set of all principal terms in (5.79a) is {u2,x*t> • 0 < r, 1 < s, r + s < n- 1}, (5.80) which can aU be implicitly substituted for, using (5.74a). Notice that such substitutions do not raise the order of each equation in (5.79a). Consequently, sf^ is given by (5.74) and (5.78) and u2^t. = fr's(x,t, «<••+•>), 0 < r, 1 < s, 1< r + s < n, (5.81) vxr = u2tXr-i + D ^ g , 1 < r < n, where fs'r depends only on x, t and parametric derivatives in A-r+s\ It turns out that we will not need to know fr's explicitly. Example 5.3.1 Let -< be the potential ordering -<pot of Example 3.1.2 with (xi,x2) = (t,x). Let n = 3 and g = g(x,t;u2,u2,x,u2,xx;u1,ultX,ultt,ultXX,ultXt,uliU). (5.82) The third order system R3 is given by u2t - UiU2 - ux - u2 = 0, (5.83) ui,x - u2,t - Dtg = 0, 9 ^ ^ 0. By expanding Dtg, (5.83b) has the solved form (with respect to -<), ui,ttt = <!>3(x,t,uW), (5.84) where 4? = -r^— ui,x - (uiu2 + Ui + u2)(l + gU2) - gt - gUlui,t - gUltui,tt - gUlxui,xt -guliXtui,xtt - gUl,xxu1<xxt - gU2iX((i + M I ) ^ , * + (1 +u 2)u l i X) - 5 u 2 , M ( ( l + u2)u1>xx + (1 + ux)u2,xx + 2uitXu2,x)\. Note that we have used (5.83a) to replace ah occurrences of u2jt- The potential system S3 Chapter 5. A Potential Symmetries Classification of PDEs 156 associated with R3 is given by U2,t = (1 + Ui)u2 + Ul, Vt = ui, (5.85) vx =u2 + g. The standard form sfA of S3 is given by (5.84) and (5.85). The sets of parametric and principal derivatives up to order 3, respectively, are given by A ( 3 ) = { u i , u i t t , U i 7 X , U i j t , u i , x t , u i i X X , u i , x t t , u 1 : X X t , u i ! X X X } U { u 2 , u 2 t X , u 2 , x x , u 2 t X X X } U {v}, B(3) = {Uijtt} U {U2,t, U2,tt, U 2 y X t , U 2 i t t t , U2,xtt, U2,xxt} U {vt, Vx, Vtt, Vxt, vxx, vttt, vxtt, vxxt, vxxx}. We shall also need to refer to the set of parametric derivatives up to order 4: A^ ^ — A^ ^ U { u i ^ x x t t , u i ^ x x x t , u i x x x x } U {u2tXXXX}• To obtain the prolonged standard form for S3, one must find equations for each term in B^3K As such, we first append to s/_< the following equations which are obtained by differentiating (5.85): u2,tt = (1 + ui)u2,t + (1 + u 2 ) u h t , u2,xt = (1 + ui)u2<x + (1 + u 2 ) u i t X , u2,ttt = (1 + ui)u2,u + 2uiitu2,t + (1 + u 2 ) u h t t , (5.86) u2,xtt = (1 + ui)u2jXt + uitXu2tt + u i j t u 2 j X + (1 + u 2 ) u i i X t , u 2 , x x t = (1 + Ui)u2jxx + 2« i ,xU2 x + (1 + u2)ui ,xx, Vtt = Uiit, Vttt = Ul,tt, Vxt = u i i X , v^t = uitxt, vxxx = u2>xx + Dig. (5.87) Vxx ~ v,2yX -f" Dxg, vxxt — u i i X X , Consider the system I given by (5.84)-(5.87). There is a one-to-one correspondence between its leading terms and the terms in J3 ( 3 ). Also, except for equations (5.86), which correspond to (5.79a) with n = 3, the right hand sides of I depend only on x, t and the parametric derivatives of A ( 4 ) (the highest order terms come from Dxg which is of order 4). As for the right hand sides of (5.86), one can use the algorithm all-imp.subs (Appendix A.2) to make all possible implicit Chapter 5. A Potential Symmetries Classification of PDEs 157 substitutions from s/x to remove the principal derivatives {u2,t,U2,xt,U2,tt} (cf. (5.80)). 
The result is the set of equations: u2,tt = (1 + u1)2u2 + (1 + UCJUY + (1 + u2)uu, u2,xt = (1 + u^u-2^ + (1 + u2)uiiX, u2,m = (1 + « i ) 3 « 2 + (1 + u i ) V + [3(1 + «i)(l + u2) - 2}uu + (1 + u 2 )«i ,«, ( 5 - 8 8 ) u2,xtt = (1 + u2)ultXt + [2(1 + tii)(l + u2) - l]u1<x + [(1 + Ui)2 + Ui>t]u2,x, u2,xxt = (1 + u2)uitXX + (1 + ui)u2>xx + 2u\ ,xU2 x • Consequently, the prolonged standard form s/^3) for S3 is given by (5.84), (5.85), (5.87) and (5.88). Lastly, we note that all the calculations of this example equally apply if one considers g = g(x, t;u2;uu u h u u1>u), (5.89) instead of (5.82). In particular, corresponding to (5.89), the system R3 is given by (5.83), S3 is given by (5.85), and sf^ is given by (5.84), (5.85), (5.87) and (5.88). We will be using this fact when we continue this example later on. • Let us now make direct substitutions from the equations of sf^ into (5.77): As" must vanish by Lemma 3.2.7(2), and g remains unchanged since g does not depend on any principal derivatives (see (5.75)). Thus A direct application of Theorem 3.3.3 proves the following result: Chapter 5. A Potential Symmetries Classification of PDEs 158 Lemma 5.3.2 Consider an n-th order system Rn of PDEs, given by (5.73), and its potential system Sn, given by (5.76). Suppose g is of the form (5.75). Through Sn, Rn admits the potential symmetry Xpot, given by (5.70), if and only if -e^g + iX&^g] , ( n ) = 0 , (5.90) where sf^ is the prolonged standard form (5.74), (5.78) and (5.81). Our goal is now to find functions g satisfying (5.90). As in Example 5.2.3, by isolating the highest ordered terms in (5.90), the following observations can be made: (01) Any equation Ihs = rhs in sf^ with ord(rhs) > ord(g) has Ihs £ {vxr : 1 < r < n}. (02) In (5.90), vxr only arises from the the dependence of g on Ui>xrts, 0 < r < n, i = 1,2. (03) When solving (5.90), the coefficients of vxr, 1 < r < n, must vanish. These observations together with (5.75) lead us to consider the following form of g: g = g(x,t;u2;ui,u1>t,uht2,---,ulitn-i), (5.91) since then none of the conditions in'(03) arise. Our hope is that, as in the previous section on scalar PDEs, (5.90) will collapse to a first order scalar PDE for g. If so, we can obtain the general solution of (5.90) through the method of characteristics. To this end, we first require the functional form of X^ot ^g which can be obtained through the foUowing lemma (cf. Lemma 5.2.4): Lemma 5.3.3 Consider the infinitesimal generator in (x, t, u, v)-space of the form X = eav[a(x,t,u)dt + (3j{x,t,u)dU]+l(x,t,u,v)dv}, a G Ut, j = l ,2 . Suppose g is of the form (5.91). Then X( n - 1 '<7 = eavg, where g depends only on (x,t;u2;u1, uitt, uht2,---, uhtn-i ; vu vt2, • • •, v t n - i ) , (5.92) and on first order partial derivatives of g with respect to its arguments. Moreover, the first order partial derivatives of g appear linearly. Chapter 5. A Potential Symmetries Classification of PDEs 159 Proof. Let ( £ 1 , 2 : 2 ) = (t, x) and £1 = eava, £ 2 = 0, rl1 = eav(31, r,2 = e™/?2, rf = eav~f. (5.93) so that X = CdXt+r,kdUk, t = l ,2 , k = 1,2,3. Using the infinitesimal extension formula (2.35), we have 9xi ' '' 8u2 ' ^ d t t L j ' where Vj = Djiv1 - - £ 2«i,2) + ^"l.fj . i) + ^ 2«i,(J,2), and J = (1,1, • • •, 1), 0 < I J\ < n - 1. Using (5.93), we have n) = Dj[ea^ - eavaultl)] + eavauli{J>1). (5.94) Since a and /31 are independent of v, Djie^P1) = e^fi1'3', Dj(-eavau1A) = e^a1'3 - e a v a « l i ( J ) 1 ) , where a1,J and /3l'J depend only on (5.92). Hence we have rjlJ = eav(J31'J + 01"1). 
Consequently, X(n-1)<7 = eavg, where g = a(x,t,u)p- + P2(x,t, + + a^)-_ % dxi du2 dultj which depends only on (5.92). Moreover, from the form of g, X^™ - 1^ depends linearly on the first order partial derivatives of g. Consequently the theorem is proved. • Example 5.3.1 (cont.) Let n = 3, ( x i , X 2 ) = (t,x), g be given by (5.89), and Xpot be the infinitesimal generator (5.70). Clearly g and Xpot satisfy the conditions of Lemma5.3.3 with a = 1, (31 = tpt + u^, l=ip, V = # M ) -a = 0, P2 = i>x + u2tp, if)xt = 4>x + Vt> Chapter 5. A Potential Symmetries Classification of PDEs 160 Let us explicitly calculate Xp2Jtg to illustrate the conclusions of Lemma5.3.3. We have (5.95) where n2 = ev[ipx + U2lf)], n1 = ev[ipt + l i i V ' ] , r,\ = ev[(i>t + u^)vt + iptt + unk + (5.96) " i i = e"[(V>t + ux^vu + (i>t + uiV>)u t2 + 2(iptt + uitpt + ultti))vt Hence X^^g = evg where g only depends on (a;, t; u2 ; «i, wi,t, ; vt, vtt) and first order par-tial derivatives of g, which appear linearly. Consequently, the conclusions of Lemma 5.3.3 are verified. • Let us see how Lemma 5.3.3 helps in analysing the symmetry conditions (5.90): Corollary 5.3.4 Let Xpot be given by (5.70), let g be given by (5.91), and let sf^ be given by (5.74), (5-78) and (5.81). Then (5.90) is a first order scalar PDE for g. Proof. Since Xpoi and g satisfy the conditions of Lemma 5.3.3, we have that X^^g depends only on (x,t;u2;«i,«i,t,uht2, - - -,u 1 > t n-i ;vt,vt2,-• •,vtn-i) (5.97) and linearly on first order partial derivatives of g. Consider making all possible substitutions from sf^ into X^1^g: Using (5.78a) and (5.81b) with r = 0, all t derivatives of v in (5.97) are replaced by t derivatives of ui up to order n — 2. No other substitutions are possible. As such, the resulting equation depends only on the arguments of g and first order partial derivatives of g, which appear linearly. In other words, (5.90) is just a first order scalar PDE for g. • Example 5.3.1 (cont.) Let us first verify Corollary 5.3.4 explicitly for the example (n = 3) we have been following throughout this section, where g is given by (5.89). Recall that Xpot is given by (5.70) and sf™ is given by (5.84), (5.85), (5.87) and (5.88). Then condition (5.90) for Chapter 5. A Potential Symmetries Classification of PDEs 161 g i s dg where (n , r? , r/1, 77^) are given by (5.96). Dividing through by ev and carrying out the direct substitutions from sf^ explicitly, we arrive at 0 = -i>g + [V* + u2^] — + [Vt + U l ^ g ^ + K , t V + «iV + 2t*iVt + ^ « ] ^ ^ +[«i 1ttV + 3(Vt + «iV)«i,t + + 3«iVt + 3wiVtt + Vttt]-33 (5.98) For any given function rp(x,t), this is clearly a first order scalar PDE for g. Consequently, Corollary 5.3.4 is verified for n = 3. Applying the method of characteristics [15, 47], we find that the general solution of (5.98) is given by g = ( V i + U i V ) - 1 0(0!, 92,83,94,65), 80 905 7^ 0, (5.99) where 0 is any analytic function of its arguments: h = x, = t, (Vt + «iV)V' 04 = ^i,tV - ^iV - uiVtV + yy f t -V2(Vt + «iV) 75 = ^i,ttV3 - 3 M I « 1 ) 4 V 3 + VtttV2 - 3uxVttV2 - 3VtVttV + 2vf + 2ui v 2 y + ^ v 3 V3(Vt + ^iV) The reason for (5.99b) is so that (5.73c) is satisfied. In summary, any function g given by (5.99) satisfies (5.90), with n = 3. By Lemma5.3.2, the corresponding class of third order systems (5.83) admits the potential symmetry (5.70) through the potential system (5.85). 
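For reference, the characteristic construction used to obtain (5.99) (and, earlier, (5.64)) can be summarized in generic notation; this is a standard sketch, and the symbols $y$, $a_i$, $g_p$, $\theta_j$ below are generic rather than the thesis's variables. For a first order PDE that is linear and homogeneous in the unknown $g$,
$$ a_0(y)\,g + \sum_{i=1}^{m} a_i(y)\,\frac{\partial g}{\partial y_i} = 0, \qquad y = (y_1,\dots,y_m), $$
the characteristic system is
$$ \frac{dy_1}{a_1(y)} = \cdots = \frac{dy_m}{a_m(y)} = \frac{dg}{-a_0(y)\,g}. $$
If $\theta_1(y),\dots,\theta_{m-1}(y)$ are functionally independent first integrals of the $y$-equations and $g_p(y)$ is any particular nonzero solution, then the general solution is
$$ g = g_p(y)\,\Theta(\theta_1,\dots,\theta_{m-1}), $$
with $\Theta$ arbitrary. The solution (5.99) has exactly this structure: for $n = 3$ the function $g$ has six arguments, so its general solution is a prefactor times an arbitrary function of the five invariants $\theta_1,\dots,\theta_5$.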
A particular member of this class is u2,t = uru2 + Ul + u2, u\,ut = (Vi + « i V ) _ 1 [(Vt + Uiipf^^-u-i^-Ui-u^ + (3u^V + 2Vtt + 2ui,iV + 5t*iVt)«i,« -(TifV + 9« 2Vt + (9Vtt - 2 V 2 V _ 1 K - 2V~2Vt3 - 2Vttt + 3V_1VtVtt)^i,t -3(V«i - VtK.t + 2Vt«t + 2V>««? + (3Vttt + 8V"2V3 - 13V _ 1VtVtt)«? - ( - i4V" 3 Vt 4 + ^ttu + 3V_1Vt2t + 2 iy - 2 y 2 y„ - 9V _ 1 VtVttt)«i -^Mtttt + 2(2V"2Vt2 + - 3V" 2VtVt 2t - 8V-3Vt3^tt + 6V-4Vfl, Chapter 5. A Potential Symmetries Classification of PDEs 162 which corresponds to g = (Vt + w i V ) 1 ^5- • Let us now return to the general case. By Corollary 5.3.4, for each n > 3, (5.90) is a first order scalar PDE for g, given by (5.91). Since g has n 4- 3 arguments, the general solution of this PDE involves n + 2 variables (cf. (5.99) where n = 3) and is obtainable through the method of characteristics. By Lemma 5.3.2, any such solution g of (5.90) leads to an n-th order system of PDEs (5.73) admitting the potential symmetry (5.70) through the potential system (5.76). We have just proven the following theorem: Theorem 5.3.5 For each n > 2, there exists a class Rn of n-th order systems of PDEs (5.73), where g is of the form (5.91) and depends on an arbitrary function of n + 2 variables, such that each system in this class admits the potential symmetry (5.70) through the potential system (5.76). This theorem shows potential symmetries are admitted by an abundance of systems of PDEs of higher orders (n > 2). Chapter 6 Conclusions and Further Work 6.1 Conclusions In this dissertation we have sought nonlocal symmetries of PDEs through the potential systems approach. This approach relies on finding point symmetries of systems of PDEs. Although existing symmetry algorithms are sufficient for finding the point symmetries of scalar PDEs and systems of PDEs of Cauchy-Kovalevskaya type, there are difficulties in applying these symmetry algorithms to more general systems of PDEs such as potential systems. One such difficulty is that there are examples of n-th order systems of PDEs whose differential consequences of order up to n cannot be all found (see potential system (2.41)). As such, the substitution step in these algorithms cannot be properly executed and one may end up with only a subgroup of the full group of symmetries admitted by the system (i.e. one may miss symmetries). The difficulties in applying one of the standard theorems, Theorem 2.3.4 (see also Theorem 2.71 in [47, p.161/7]), is the requirement of local solvability. For some systems of PDEs, to achieve (analytic) local solvability, one must first form a prolonged standard form which has an infinite number of equations (cf. Theorem 3.2.4). The computational problems with deriving the determining equations of such an infinite system are obvious. Through Theorem3.3.3 and Algorithm 3.3.4, which appear to be new, we overcome the problems of existing symmetry algorithms for the calculation of point symmetries of a system of PDEs. In particular, through the use of the prolonged standard form, all differential conse-quences are implicitly uncovered (cf. Corollary 5.1.7). Our algorithm is efficient: The extended operator X^ 7 1 ' need only be applied to the finite number of equations of the original system. 163 Chapter 6. Conclusions and Further Work 164 We do not need to start with a locally solvable system. Only a finite number of equations, each used only once, are required in the substitution step. 
Moreover, even for scalar and systems of PDEs of Cauchy-Kovalevskaya type, our algorithm makes more precise the substitution step of the existing symmetry algorithms. Using the prolonged standard form, we have improved on the existing Frechet approach used for finding symmetries, and the existing Adjoint Theorem used for finding conservation laws (cf. Theorem E.0.1 and Theorem 4.2.5 respectively). Here we make more precise what is required in the substitution steps. An important component of the mathematical framework of the potential systems approach, given in §4, is the delineation of potential and linearizing factors. Given a set of factors, more than one potential system may be constructed for the system of PDEs, but it is a potential factor that leads to useful potential systems. By repeating the construction process, higher generation useful potential systems are formed and, through the example of the nonlinear diffusion equation in §4.3, we showed how these higher generation potential systems can lead to potential symmetries of the original PDE. During the construction of potential systems, the discovery of linearizing factors indicates the possibility of linearizing the given system of PDEs. The existing linearization algorithms of Bluman and Kumei can then be employed to seek the explicit linearization, if one exists. Examples of linearizing factors leading to subsequent linearizations of the nonlinear telegraph, the nonlinear diffusion and Burgers' equations have been provided in §4.4. The advantages of our symmetry algorithm (Algorithm 3.3.4) over the existing symmetry algorithms are further illustrated in §5. In deriving necessary conditions for higher order scalar PDEs with two independent variables to admit potential symmetries (§5.1), we correct some results of Pucci and Saccomandi. Due to the presence of two classifying functions which depend on many variables, these necessary conditions (Theorem 5.1.6) are not very tight and further Chapter 6. Conclusions and Further Work 165 analysis of the symmetry conditions seems very difficult. In order to find examples, we spe-cialised to a smaller class of higher order scalar PDEs involving one classifying function g. The computations were further simplified by a priori fixing the potential symmetry generator Xpot, finding the functional dependencies of certain components of its extension .XJj,™^1', and then using these dependencies to dictate the minimal set of arguments for g. In this way, the associ-ated determining equations are greatly simplified from an overdetermined system of nonlinear PDEs for g to a first order scalar PDE for g in terms of its arguments. For each n > 3, we have constructed a large class of n-th order nonlinear evolutionary scalar PDEs, characterized by an arbitrary function of n variables, which admits potential symmetries. In a similar way, we also constructed, for each n > 2, a large class of n-th order systems of nonlinear PDEs, characterized by an arbitrary function of n + 2 variables, which admits potential symmetries. Hence we show that potential symmetries are admitted by an abundance of higher order scalar and systems of PDEs with two independent variables. 6.2 Further Work; PDEs with Three Independent Variables A computer implementation of our new algorithm, Algorithm 3.3.4, is required. A computer implementation of the standard form algorithm already exists [53] and this can be easily mod-ified to obtain an implementation of the prolonged standard form algorithm, Algorithm A.4.1. 
This would then achieve step 1 of Algorithm 3.3.4. The remaining steps can be implemented by modifying the existing computer implementations of Lie's algorithm. Implementations of our version of the Adjoint Theorem are also needed. The orderings that our new symmetry algorithm requires are weak total derivative orderings. Further work would be to extend our method to other derivative orderings. To this end, it is hoped that the approach taken in this thesis, which involves the use of analytic local solvability and of nearby analytic convergent formal power series solutions, wiU prove useful. The connection between our symmetry algorithm and the nonclassical symmetry algorithm Chapter 6. Conclusions and Further Work 166 of Clarkson and Mansfield [24] warrants further investigation. In the former we have followed the standard form approach, whereas in the latter they have followed the differential Grobner base approach. In the nonclassical method, an unknown differential constraint (the invariant surface condition of the sought after point symmetries) is first appended to the given system before the point symmetries are calculated. In general, prolonged standard forms for such constructed systems are difficult to achieve since the appended constraint is not explicitly given a priori.1 On the other hand, differential Grobner bases for such systems appear to be more easily obtainable. We have shown that the potential systems approach is very fruitful for finding nonlocal symmetries of PDEs with two independent variables. We have constructed large classes of scalar and systems of PDEs of all orders which admit potential symmetries. However, this represents only a partial classification. Other examples are required. Also, we have not consid-ered the subsystems approach [10] for finding nonlocal symmetries, whereby a potential system is collapsed to a related subsystem through eliminations of some dependent variables. This subsystems approach warrants further investigation. We have not looked at potential symmetries of PDEs with three or more independent variables. Here, unlike the case of two independent variables, the potential systems are not determined systems (the number of equations is less than the number of dependent variables) and one or more constraints are needed. As such, the fundamental problem is one of constraint determination, i.e., to find the required constraint(s) such that the resulting determined poten-tial system leads to potential symmetries for a given system of PDEs. We now discuss some preliminary results for the case of three independent variables. Consider the system of PDEs (4.3) where p = 3. The potential system (4.5) requires one constraint G(x, t i ( w ) , i ; ( J V ) ) = 0 to make it determined. The resulting determined potential 1Note that the prolonged standard form for the potential system (5.3) was achieved despite the classifying functions being unknown. The particular form of the system enabled us to uncover all its integrability conditions. Chapter 6. Conclusions and Further Work 167 system is given by ApfauW) = 0, n=l,---,l-l, f1 ~ v3,x + v2,y = 0, P ~ vhy + v3tt = 0, (6.1) f - v2,t + VitX = 0, G(x,u(N\v^) = 0. When imposing a constraint G = 0, one must ensure that the resulting determined potential system (6.1) still contains all the solutions of the original system of PDEs (4.3), where p = 3. Such constraints are called admissible constraints. 
If one does not use an admissible constraint, any symmetry of the determined potential system will only be a symmetry of the corresponding restricted solution space of the original system (not of the whole solution space). Examples of admissible constraints are: Name Zeroth Order Lorentz Coulomb Note that the Lorentz and Coulomb constraints are commonly used in physics [57, 61]. We have found the following two examples of PDEs with three independent variables which admit potential symmetries. Constraint TJl = f(x,u,v2,v3), -v\,Xl + v2tX2 + v3>X3 = 0, v2,X2 + v3tX3 = 0. Chapter 6. Conclusions and Further Work 168 Example 6.2.1 Let R be the nonlinear wave equation (1.6), and let S be the potential system -Ut ~ V3,x + V2,y = 0, C\(u)ux - V1>y + v3it = 0, C2(u)uy - v2,t + vltX = 0, -Vl,t + V2,X + V3,y = 0, which has been made determined by imposing the Lorentz constraint. One can show that the nonlinear wave equation (1.6) admits potential symmetries through S if and only if C\{u) = C2(u) = 1. In this case, the potential symmetries are X i = -2tydt - 2xydx + (x2 - t2 - y2) dy + (xVl + tv2 + 2yu) du + (tu - xv3 + 2yv2) dVl + (2yv3 + tvi + xv2) dV2 + (tv3 - xu + 2yv1) dV3, X 2 = (-t2 - x2 - y2) dt-2 txdx - 2 tydy + (yv2 - xv3 + 2tu) du + {xv! + 2tv2 + yu) dVl + (yv-i + 2tv3 - xu) dV2 + (2tvi + xv2 + yv3) dV3, X 3 = -2txdt + (y2-t2-x2)dx-2xydy + (2xu-tv3-yv1)du (6.2) + (2xv2 + yv3 + tvi)dVl + (2xv3 - yv2 - tu) dV2 + (2xvx + tv2 + yu) dV3, X 4 = vidu + v3dVl - v2dV2 - udV3, X 5 = v2du + udVl - v1dV2 - v3dV3, X 6 = v3du + vidVl + udV2 + v2dV3. This appears to be the first known example of a scalar PDE with three independent variables admitting potential symmetries. Applications of these nonlocal symmetries can be found in [4]- • Chapter 6. Conclusions and Further Work 169 Example 6.2.2 Let R{u} be the nonlinear scalar PDE -ut + uxx + (L(u))yy = 0, (6.3) with associated first generation potential system S{u, v} given by - u - v3>x + v2,y = 0, ux - vhy + v3,t =0, (6.4) (L(u))y - v2,t + vljX = 0, v3 =0, which has been made determined by a simple zeroth order constraint v3 = 0. Unfortunately, for any L(u), S{u,v} does not yield any potential symmetries of R{u}. Likewise, the second generation potential system T{u,v, w}, given by (6.4) and - v2- w3>x + w2>y = 0, v1-wiiy + w3it =0, (6.5) L(u) - w2<t + w1<x = 0, w3 = 0, (where the constraint w3 = 0 has been used) does not yield any potential symmetries of R{u}. However, when L(u) = —3u~z, T{u,v,w} admits the point symmetry X = y2dy - 3yudu + (wi - yvx) dVl + (w2 - yv2) dV2 + ywidWl + yw2dW2, (6.6) which is a potential symmetry of S{u, v} (but not of R{u}). The system (6.4) appears to be the first known example of a nonlinear system of PDEs with three independent variables admitting a potential symmetry. • To find more examples of PDEs with three independent variables admitting potential sym-metries, one must tackle the fundamental open problem of finding suitable constraints for the associated potential systems. It is exciting to see the potential symmetries (6.2) for the linear wave equation being used in [4] to study Maxwell's equations. Bibliography I.S. Akhatov, R.K. Gazizov, and N.K. Ibragimov. Nonlocal symmetries. Heuristic ap-proach. Itogi Nauki i Tekhniki, Seriya Sovremennye Problemy Matematiki, Noveishie Dos-tizheniya, 34, 3-83. 1989. W.F. Ames, R.J. Lohner, and E. Adams. Group properties of utt = [f(u)ux]x. Int. J. Non-Linear Mech., 16, 439-447. 1981. S. Anco and G.W. Bluman. 
Derivation of conservation laws from nonlocal symmetries of differential equations. J. Math. Phys., 37, 2361-2375. 1996. S. Anco and G.W. Bluman. Nonlocal symmetries and nonlocal conservation laws of MaxweU's equations. Preprint, 1996. R.L. Anderson and N.H. Ibragimov. Lie-Backlund Transformations in Applications. SIAM, Philadelphia, 1979. N.A. Baas. Sophus Lie. The Math. Inteli, 16, 16-19. 1994. A . V . Backlund. TJeber Flachentransformationen. Math. Ann., 9, 297-320. 1876. G.W. Bluman. Invariance of conserved forms under contact transformations. Preprint, 1992. G.W. Bluman. Potential symmetries and equivalent conservation laws. In N.N. Ibragi-mov et al., Editors, Proc. Workshop Modern Group Analysis: Advanced Analytical and Computational Methods in Mathematical Physics, 71-84. Kluwer Acad. Publ., 1993. G.W. Bluman. Use and construction of potential symmetries. J. Math. Comp. Modelling, 29, 1-14. 1993. G.W. Bluman and J.D. Cole. The general similarity solution of the heat equation. J. Math. Mech., 18, 1025-1042. 1969. G.W. Bluman and J.D. Cole. Similarity Methods for Differential Equations, Volume 13 of Appl. Math. Sci. Springer-Verlag, New York, 1974. G.W. Bluman and P.R. Doran-Wu. The use of factors to discover potential systems or linearizations. Acta Applicandae Mathematica, 41, 21-43. 1995. G.W. Bluman and S. Kumei. Exact solutions for wave equations of two layered media with smooth transition. J. Math. Phys., 29, 89-96. 1988. 170 Bibliography 171 G.W. Bluman and S. Kumei. Symmetries and Differential Equations, Volume 81 of Appl. Math. Sci. Springer, New York, 1989. G.W. Bluman, S. Kumei, and G.J. Reid. New classes of symmetries for partial differential equations. J. Math Phys., 29, 806-811 and 2320. 1988. G.W. Bluman and G.J. Reid. New classes of symmetries for ordinary differential equations. IMA J. Appl. Math., 40, 87-94. 1988. G.W. Bluman and V. Shtelen. New classes of Schrodinger equations equivalent to the free particle equation through nonlocal transformations. J. Phys. A, 1996. To appear. G.W. Bluman and V. Shtelen. Developments in similarity methods related to pioneering work of Julian Cole. Preprint, 1996. G.W. Bluman and V. Shtelen. On nonlocal transformations of diffusion processes into the Wiener process. Preprint, 1996. F. Boulier. Some improvements of a lemma of Rosenfeld. Preprint, 1996. F. Boulier, D. Lazard, F. Ollivier, and M . Petitot. Representation for the radical of a finitely generated differential ideal. In Proc. ISSAC '95. A C M Press, 1995. G. Carra-Ferro. Grobner Bases and Differential Algebra, Volume 356 of Lecture Notes in Comp. Sci., 128-140. Springer-Verlag, 1987. P.A. Clarkson and E.L. Mansfield. Algorithms for the nonclassical method of symmetry reductions. SIAM J. Appl. Math., 54, 1693-1719. 1994. P.R. Doran-Wu. Symmetries of differential equations. Master's thesis, University of Ox-ford, U.K., 1990. G. Haager, G. Baumann, and Nonnenmacher T.F. An algorithm to determine potential systems in Mathematica. J. Symbolic Computation, 20, 179-196. 1995. T. Hawkins. The erlanger programm of Felix Klein: Reflections on its place in the history of mathematics. Hist. Math., 11, 442-470. 1984. W. Hereman. Review of symbolic software for the computation of Lie symmetries of differential equations. Euromath Bull., 1, 45-79. 1993. N.H. Ibragimov. Lie-Backlund groups and conservation laws. Soviet Math. Dokl., 17, 1242-1246. 1976. N.H. Ibragimov. Sophus Lie and harmony in mathematical physics, on the 150th anniver-sary of his birth. The Math. Inteli, 16, 20-28. 
[31] M. Janet. Sur les systèmes d'équations aux dérivées partielles. J. Math., 3, 65-151. 1920.
[32] O.V. Kapcov. Extension of the symmetry of evolution equations. Sov. Math. Dokl., 25, 173-176. 1982.
[33] F. Klein. Gesammelte Mathematische Abhandlungen, Volume 1. Springer-Verlag, Berlin, 1921.
[34] B.G. Konopelchenko and V.G. Mokhnachev. On the group-theoretical analysis of differential equations. Sov. J. Nucl. Phys., 30, 288-292. 1979.
[35] I.S. Krasil'shchik and A.M. Vinogradov. Nonlocal symmetries and the theory of coverings: an addendum to A.M. Vinogradov's local symmetries and conservation laws. Acta Applic. Math., 2, 79-96. 1984.
[36] I.S. Krasil'shchik and A.M. Vinogradov. Nonlocal trends in the geometry of differential equations: symmetries, conservation laws, and Bäcklund transformations. Acta Applic. Math., 15, 161-209. 1989.
[37] S. Kumei. A Group Analysis of Nonlinear Differential Equations. PhD thesis, University of British Columbia, Vancouver, Canada, 1981.
[38] S. Kumei and G.W. Bluman. When nonlinear differential equations are equivalent to linear differential equations. SIAM J. Appl. Math., 42, 1157-1173. 1982.
[39] H. Lewy. An example of a smooth linear partial differential equation without solution. Ann. Math., 66, 155-158. 1957.
[40] S. Lie. Über die Integration durch bestimmte Integrale von einer Klasse linearer partieller Differentialgleichungen. Arch. for Math., 6, 328-368. 1881. See also Gesammelte Abhandlungen, Vol. III, B.G. Teubner, Leipzig, 1922, 492-523.
[41] S. Lie. Klassifikation und Integration von gewöhnlichen Differentialgleichungen zwischen x, y, die eine Gruppe von Transformationen gestatten. Arch. for Math., VIII, 187-453. 1883.
[42] S. Lie. Theorie der Transformationsgruppen, Bd. 1 (Bearbeitet unter Mitwirkung von F. Engel). B.G. Teubner, Leipzig, 1888.
[43] I. Lisle. Equivalence Transformations for Classes of Differential Equations. PhD thesis, University of British Columbia, Vancouver, Canada, 1992.
[44] E. Mansfield. Differential Gröbner Bases. PhD thesis, University of Sydney, Australia, 1992.
[45] E.A. Müller and K. Matschat. Über das Auffinden von Ähnlichkeitslösungen partieller Differentialgleichungssysteme unter Benutzung von Transformationsgruppen, mit Anwendungen auf Probleme der Strömungsphysik. Miszellaneen der Angewandten Mechanik, 190-222. 1962.
[46] E. Noether. Invariante Variationsprobleme. Nachr. König. Gesell. Wissen. Göttingen, Math.-Phys. Kl., 235-257. 1918.
[47] P.J. Olver. Applications of Lie Groups to Differential Equations, Volume 107 of GTM. Springer-Verlag, New York, 2nd Edition, 1993.
[48] P.J. Olver. Direct reduction and differential constraints. Proc. R. Soc. Lond. A, 444, 509-523. 1994.
[49] L.V. Ovsiannikov. Group Properties of Differential Equations. Novosibirsk, 1962. In Russian; English translation by G.W. Bluman.
[50] L.V. Ovsiannikov. Group Analysis of Differential Equations. Academic Press, 1982.
[51] E. Pucci and G. Saccomandi. Potential symmetries and solutions by reduction of partial differential equations. J. Phys. A, 26, 681. 1993.
[52] V.V. Pukhnachev. Equivalence transformations and hidden symmetries of evolution equations. Sov. Math. Dokl., 35, 555-558. 1987.
[53] G.J. Reid. Algorithms for reducing a system of PDEs to standard form, determining the dimension of its solution space and calculating its Taylor series solution. Euro. J. Appl. Math., 2, 293-318. 1991.
[54] G.J. Reid. Finding abstract Lie symmetry algebras of differential equations without integrating determining equations. Euro. J. Appl. Math., 2, 319-340. 1991.
[55] G.J. Reid, D.T. Weih, and A.D. Wittkopf. A point symmetry group of a differential equation which cannot be found using infinitesimal methods. In N.H. Ibragimov, M. Torrisi, and A. Valenti, Editors, Modern Group Analysis: Advanced Analytical and Computational Methods in Mathematical Physics, 93-99, Dordrecht, 1993. Kluwer.
[56] G.J. Reid, A.D. Wittkopf, and A. Boulton. Reduction of systems of nonlinear partial differential equations to simplified involutive forms. Euro. J. Appl. Math. To appear, 1996.
[57] W. Rindler. Introduction to Special Relativity. Oxford University Press, Oxford, 2nd Edition, 1991.
[58] C. Riquier. Les Systèmes d'Équations aux Dérivées Partielles. Gauthier-Villars, 1910.
[59] C. Rust. On the classification of rankings of partial derivatives. Preprint, 1993.
[60] D.J. Saunders. The Geometry of Jet Bundles, Volume 142 of London Mathematical Society Lecture Note Series. Cambridge University Press, New York, 1989.
[61] B.F. Schutz. A First Course in General Relativity. Cambridge University Press, 1990.
[62] F. Schwarz. An algorithm for determining the size of symmetry groups. Computing, 49, 95-115. 1992.
[63] O. Stormark. Formal and local solvability of partial differential equations. Technical Report TRITA-MAT-1989-11, Dept. of Math., Royal Institute of Technology, S-100 44 Stockholm 70, Sweden, 1989.
[64] H.C. Thomas. Heterogeneous ion exchange in a flowing system. J. Am. Chem. Soc., 66, 1664-1666. 1944.
[65] J.M. Thomas. Riquier's existence theorems. Ann. Math., 30(2), 285-321. 1929.
[66] J.M. Thomas. Riquier's existence theorems. Ann. Math., 35(2), 306-311. 1934.
[67] A. Tresse. Sur les invariants différentiels des groupes continus de transformations. Acta Mathematica, 18, 1-88. 1894.
[68] A.M. Vinogradov. Symmetries and conservation laws of partial differential equations: basic notions and results. Acta Appl. Math., 15, 3-21. 1989.
[69] F.W. Warner. Foundations of Differentiable Manifolds and Lie Groups. Scott, Foresman, Glenview, Ill., 1971.
[70] V. Weispfenning. Differential-term orders. In Proc. ISSAC '93, Kiev, 1993. ACM Press.
[71] G.B. Whitham. Linear and Nonlinear Waves. Wiley, New York, 1974.

Appendix A

Algorithms for Obtaining Prolonged Standard Forms

Except for the prolonged standard form algorithm in Appendix A.4, the following material is from Lisle [43, Appendix A]. We assume that an ordering ≺ is chosen, and we denote by leading(eqn) the highest ordered term with respect to ≺ in the given equation eqn.

A.1 Orthonomic Form

Recall that an orthonomic form (Definition 3.1.4) can be achieved by a process similar to Gauss-Jordan elimination:

Algorithm A.1.1 [orthonomic]

    function orthonomic(sys)
        unsolved := sys
        solved := ∅
        repeat
            leadingterms := {leading(eqn) | eqn ∈ unsolved}
            term := highest ordered term in leadingterms with respect to ≺
            thiseqn := any eqn ∈ unsolved such that leading(eqn) = term
            thiseqn := solve thiseqn for term
            unsolved := substitute thiseqn into unsolved \ {thiseqn}
            solved := substitute thiseqn into solved
            solved := solved ∪ {thiseqn}
        until unsolved = ∅
        orthonomic := solved
    end

Note that after each iteration, the number of equations in unsolved decreases by one. Consequently, the algorithm must terminate after a finite number of steps. Also, after each iteration, solved has leading derivatives that are strictly ordered higher than that of thiseqn. Hence solved must remain in solved form after substitutions from thiseqn.
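A minimal Python/SymPy sketch of Algorithm A.1.1 (not part of the original appendix) is given below, under the simplifying assumptions that the system is linear in its jet variables and that the ranking ≺ is supplied as a list of symbols ordered from highest to lowest; the function name and the example variables are hypothetical, and the first root returned by solve is taken without checking solvability.

    import sympy as sp

    def orthonomic(sys_eqs, ranking):
        # sys_eqs: expressions assumed equal to zero, linear in the ranked jet variables;
        # every equation is assumed to contain at least one ranked variable.
        # ranking: jet variables listed from highest to lowest with respect to the ordering.
        unsolved, solved = list(sys_eqs), []
        while unsolved:
            # the highest ranked variable occurring anywhere in unsolved is the maximal leading term
            term = next(v for v in ranking if any(e.has(v) for e in unsolved))
            thiseqn = next(e for e in unsolved if e.has(term))
            rhs = sp.solve(thiseqn, term)[0]                 # solve thiseqn for its leading term
            unsolved.remove(thiseqn)
            unsolved = [sp.expand(e.subs(term, rhs)) for e in unsolved]
            unsolved = [e for e in unsolved if e != 0]       # drop equations reduced to 0 = 0
            solved = [(lhs, sp.expand(r.subs(term, rhs))) for (lhs, r) in solved]
            solved.append((term, rhs))
        return solved                                        # (leading term, right hand side) pairs

    # hypothetical example: second derivatives ranked above first derivatives, then u
    uxx, uxy, uyy, ux, uy, u = sp.symbols('u_xx u_xy u_yy u_x u_y u')
    print(orthonomic([uxx - uy, uxx + uxy - u, uyy - ux], [uxx, uxy, uyy, ux, uy, u]))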
A.2 Simplified Orthonomic Form

We denote implicit substitution, which was described in §3.1.1, from an equation u^α_I = rhs into u^α_{IJ} throughout a system sys, by implicit_subst(u^α_I = rhs, u^α_{IJ}, sys). The algorithm that makes all possible implicit substitutions from an orthonomic system sys1 into a system (or expression) sys2 is given by:

Algorithm A.2.1 [all_impl_subs]

    function all_impl_subs(sys1, sys2)
        while u^α_{IJ} occurs in sys2, where u^α_I = rhs is an equation in sys1, do
            sys2 := implicit_subst(u^α_I = rhs, u^α_{IJ}, sys2)
        od
        all_impl_subs := sys2
    end

Note that there is only a finite number of terms in sys2, and any implicit substitution replaces a term with other terms which are strictly of lower order (with respect to ≺). Consequently, carrying out all implicit substitutions is a finite process. With this in hand, the algorithm to achieve the simplified orthonomic form (cf. Definition 3.1.6) of a system sys is obtained as follows:

Algorithm A.2.2 [simp_orth]

    function simp_orth(sys)
        repeat
            sys := orthonomic(sys)
            sys := all_impl_subs(sys, sys)
        until sys is orthonomic
        simp_orth := sys
    end

Note that we have already shown that the algorithms orthonomic and all_impl_subs terminate after a finite number of iterations. Iterations in this algorithm only occur if all_impl_subs removes a leading derivative of the system sys. Since one cannot replace leading derivatives by lower ordered terms (with respect to ≺) indefinitely, this algorithm must terminate.

A.3 Standard Form

In §3.1.1, the process of obtaining the integrability conditions of a system sys was described. Denote this process by integ_cond(sys). Then the standard form (cf. Definition 3.1.9) is achieved by the following algorithm:
In other words, the identiy (B.l) means that it makes no difference if one maps a point P in XxJ7 ( ( X ) ) under the induced action of r e £ Q and then project down to XxlI^N\ as opposed to first projecting P down to X X ?7(JV) and then mapping this point under the induced action of r £ £ G. We have r W : XxU™ -* I x ^ » and r' 0 0) : X x U{oo) -»• A X £ / " ( o ° ) . RecaU that these transformations are induced from T£ : X XU ^ X XU through the chain rule. Now determines how all derivatives up to order N are transformed. Notice that such a transformation depends only on derivatives up to order N. The transformation is consistent with T^N\ 180 Appendix B. Symmetries of Locus and Analytic Locus 181 but now transformations of higher derivatives are given. Once the transformations (T^) have been defined, one can use them to map between points in I x f J ( w ) ( X x C / ( o o ) ) without any regard to how the functions corresponding to these points are transformed. The underlying transformation between functions may not even be defined in the X X Z7 (oo) case.1 Given a point P in XxU^°°\ the left hand side of (B.l) transforms P to another point Q in XxU^ under the induced action of T£ and then projects this point down to XxZ7 ( J V ) to obtain a point QN. Notice that QN is just the set of values for the transformed x and all derivative terms up to order N. On the other hand, the right hand side of (B.l) first projects P down to PN in X X £/"(JV) before transforming under the induced action of r e to a point QN in X X U<-N). The points Q and QN must be the same since, as mentioned above, is consistent with r W . • Lemma B.0.2 Let P be any point in £>(oo). For each N > 0, there exists an analytic solution 0 / 5 / x < x > ) which agrees with the data prescribed by 7 r ^ ( P ) . Proof. Given P and N, let SN denote the set of equations of sf^ with leading terms of order at most N. Let M be the order of the system S N . We have M > N since the right hand sides of SN may be of order greater than that of the left. Let PN = T T^(P ) , PM = ^ ( P ) . pM (pN^ determines a point PQ1 (PQ) representing the value of x = x0 and all parametric derivatives up to order M (N). P ^ agrees with PQ1 for x and all parametric derivatives up to order N. The reason we require PQ1 is that this point defines through SN the principal terms QQ of P N . This is not true of P^ unless -< is total derivative order (M = N). By Lemma3.1.19, PQ1 is sufficient to determine an analytic solution u(x) of sf^°K By construction u(x) satisfies the data given by PQ1 and also the data given by PQ . Moreover, l r rhe transformations of (? ( o o ) were defined by the action on analytic functions. However, once we have these transformations, we can use them to map any point in Xx(7(°°' to another point. We do not have make sense of such a point transformation in terms of mappings of functions. This is convenient since some points in XxU^ do not have any analytic functions passing through them. Appendix B. Symmetries of Locus and Analytic Locus 182 since PQ1 defines through SN the principal terms QQ of PN and u(x) satisfies SN, u(x) must agree with the data QQ . In summary, u(x) agrees with the data given by P^ and QQ and hence it must pass through P N . • We now provide a proof of Lemma3.3.1 which we restate here for convenience. Lemma B.0.3 is a symmetry group of g^ if and only if it is a symmetry group ofg^°°K Proof. From the very beginning, we emphasize that we are considering symmetries of sets of points and not functions. 
We will only make the correspondence between a point in ^ o o ) and its convergent formal power series solution. We will never try to make sense of what it means to map a non-convergent formal power series under the action of Assume Q^°°"> is a symmetry group of g^ and let £ Q(°°\ We have to show V P £ 7 / o o ) , Q = ^ (P) £ 7/0 0', for sufficiently small e. By hypothesis Q lies in £ ( o o ) . By Theorem 3.2.4, there exists an analytic solution u(x) that passes through P. Using Theorem 2.2.16, (? ( o o ) maps u(x) to an analytic function u(x). Since u(x) passes through P, then u(x) is an analytic function passing through Q and hence Q £ ^ o o ) as required. Now assume £ ( < x , ) is a symmetry group of ^ o o ) and let r^° c ) £ Q(°°K We have to show VP £ g(oo\ Q = Ti°°\P) £ £(oo). Since P lies in X x ! 7 ( c o ) then Q also lies in X x ( 7 ( o o ) . We now have to show that Q satisfies the equations of sf^K Let PN = 7r^(P) and QN = npj(Q) denote the projection of P and Q onto the finite space XxlJ(N) respectively. Let SN denote all the equations of sf^ whose left and right hand sides are of order up to A . Note that if ^ is a total derivative order, then SN is equivalent to sf^K However, this is not true in general. In what follows we will show that QN satisfies SN, for each N > 0. This will complete the proof since then Q satisfies sf^ and must thus be a point Appendix B. Symmetries of Locus and Analytic Locus 183 in £>(<x>). Here are the details. By LemmaB.0.2, for each N there exists an analytic solution u(x) which satisfies the data given by PN. At x = XQ, determine the values of all derivatives of u(x) to obtain a point P which must he in the analytic locus ~(f°°\ By construction of u(x), T T ^ ( P ) = PN = T T ^ ( P ) . (B.2) By hypothesis, Q = T<°°\P) also lies in Since Q satisfies sft°\ QN = TT^(Q) must satisfy S N . Using LemmaB.0.1 and (B.2) we have QN = T r ^ ( Q ) = 7r^r(~)(P) = r ^ ( P ) . = T ^ ( P ) = ^T^(P) =*N(Q) = QN Hence for all A , QN satisfies the equations of SN and the proof of LemmaB.0.3 is com-plete. • Appendix C Proof of Correctness In the sequel, we always assume the following: (Al) R is an analytic system of DEs (2.19), which is of order n. (A2) -< is a weak total derivative ordering. (A3) R has standard form sf^, which is of order ro. (A4) R. has prolonged standard form sf^ (N > ro) with locus £>(JV). (A5) 7/ ^  is the analytic locus of sf™K (A6) 7 r ^ is the natural projection map from X x U(oo) to X x U(N). Recall Theorem 3.2.4 which says that sf^ is analytically locally solvable. This fact is essential in proving the foUowing useful lemma. Lemma C.l Let g(x,u(-n^) be an analytic function of its arguments. Then (Ll) g(x,u^)\sf{N) = 3(x,w ( n ))| s /(~), N = max(m,n). (L2) g(x,u^)\ (n)=g(x,u^)\ { o o ) . 1 / 1 / (LS) g(x, « ( n ) ) | (oo) = 0 if and only if g\ (oo) = 0. (L4) gix,^^)]^) = 0 if and only if g(x, u^)^) = 0. (L5) g(x,u^)\ (oo) = 0 implies (D,g)\ ( o o) = 0, \J\ > 0. Proof. The equations of sf^ are also equations in sf^°K The remaining equations in sf^ have leading terms of order greater than N. When making direct substitutions from sf^ or sf^ in g(x, i i ( n ) ) , only the equations whose leading terms are of order at most n are required. 184 Appendix C. Proof of Correctness 185 These facts together with N > n lead to the desired identity (LI). Since (T^) are given by the functions Uj = djf(x) where 0 < |J\ < ra (0 < |J|), a similar argument can be used to show (L2). Lemma3.2.1(3) proves (L3). We now prove (L4). 
If g(x, « ( n ) ) | e ( o o ) = 0 then, since ^ ( ° c ) is just a subset of g(°°\ we have 5(x, w ( n ) ) | ^ o o ) = 0. Now assume g(x,u(-"^)\^0o) = 0. Let P be any point in g^K Then pn _ defines a point in I x P w . By LemmaB.0.2, there exists an analytic solution which agrees with the data specified by Pn. Let x0 be the value for x given by Pn. At x = XQ, u(x) and all its derivatives determine a point P i n X x J7 ( o o ) such that 7r£°(P) = Pn. We have g(x, «<»>)|p = g(x, «<»>)|P„ = g(x, u™)\p (C.l) since only the coordinates of P (P) corresponding to x and all derivatives of u up to order ra, which is given by P™, are required for substitution. Moreover, since u(x) is a solution, we have g vanishing on P. By (C.l), g must also vanish on P and (L4) is proved. We now prove (L5). Assuming the hypothesis in (L5), (L3) shows that g vanishes on g(°°K By Theorem 2.2.7 we have for all solutions u = f(x) of R and so g must vanish on all such extended graphs T^K In addition (L2) leads to 0 = 5 l r ( ° ° ) = 5 l r (« ) -Taking partial derivatives, we obtain 8, g(x,u(n))\r{n) 0 and applying the total derivative operator identity (2.17), we have By (L2), this is equivalent to r ( n + | J | ) - ° -1 f (Djg(x,u^))\r(oo) = 0, Appendix C. Proof of Correctness 186 for all solutions u = f(x) of R. Using analytic local solvability of sf^°\ we have Djg vanishing on ^ ( o o ) . By (L3) and (L4) we arrive at the desired result (L5). • Since this lemma will be used quite often in the sequel, we shall use (L1)-(L5) to refer to its results. Proof of Lemma3.2.7. (LI) proves Lemma3.2.7(1). Assuming the hypothesis of Lemma3.2.7(2), (L2) shows that Since sf^ is analytically locally solvable, g must vanish on if00). By (L3) and (L4) we have An application of (LI) then proves Lemma3.2.7(2). • Our goal is to prove Theorem 3.3.3 which, by (LI), is equivalent to the following theorem: Theorem C.2 Q is a symmetry group of R if and only if for every infinitesimal generator X where the symbol | (oo) denotes making all possible direct substitutions from the equations of Given any system R of DEs and any infinitesimal generator X satisfying (C.2), we say that R admits X with respect to the prolonged standard form sf^°K Proof. The necessity of (C.2) follows from the fact that Q is a symmetry group of sf^ and hence of £ ( o o ) ; A vanishes on £>(oo); and by Theorem2.1.23, we have g(x,u^)\r(oo) = 0. $(z,u<">)|i/(0o) = 0. ofS (C.2) X A ( I , M W ) = 0 whenever (x,u(n)) <E g(oo). But this is equivalent to (C.2) by (L3). Appendix C. Proof of Correctness 187 To prove sufficiency of (C.2) we will require the following lemma which is proven later: Lemma C . 3 Suppose one forms a new system R by performing one of the following operations to R: (a) Replace an equation of R with its solved form. (b) Append to R a derivative of an equation of R. (c) Append to R an integrability condition of R. (d) Use a solved equation of R to make all possible implicit substitutions in the remaining equations of R. (e) Use a solved equation of R to make all possible implicit substitutions in the right hand sides of R. If R admits X with respect to s/^°°' (i.e. (C.2) is satisfied), then R also admits X with respect to sf^\ (Note that the prolonged standard form sf^ for R and R are identical.) Recall that a prolonged standard form sf^ of any order N > m is obtained by a finite number of operations (a)-(e) on the equations of R (see the algorithms of Appendix A). 
Our starting system R satisfies (C.2) and by repeatedly applying the above lemma we have that the equations of sfi,N> also satisfy (C.2): ( X E ^ I ^ ^ O , (C.3) where S"-7 = 0 are the equations of sf^\ N > m. Clearly (C.3) also holds for N = oo, i.e., (C.3) holds for the equations of sf^°K (Otherwise there must be one equation in sf^ for which (C.3) does not hold. But this cannot happen since such an equation belongs to sfW, for some finite N > m, and we already have shown that sf^ satisfy (C.3).) We now prove that (C.3) is equivalent to (3.18) so that we can use Lemma3.3.2 to show that Q is a symmetry group of R, and thus arrive at the sufficiency of (C.2) in Theorem C.2: Certainly, by (L3), (C.3) is equivalent to (3.18) for the case when -< is a weak total derivative ordering and N = oo. When -< is a total derivative ordering, so that N > TO, we have the equations of sf^ consisting of Appendix C. Proof of Correctness 188 terms up to order N. In this case, after using (LI), (C.3) is equivalent to and consequently (3.18) is proven. • Proof of Lemma C.3. We will tackle each operation (a)-(e) one at a time. Lemma C.4 (Operation (a)) Let X satisfy (C.2). Suppose one solves an equation of R to obtain «? =/(*,«"•>), (C.4) where f is independent of u". Then (x[-uj + f. ,/<,<»> = 0. (C.5) Proof. Without loss of generality, let Ai = 0 be the equation leading to (C.4). By definition R and sf^ have the same solutions. Hence, when applying the Implicit Function Theorem to solve Ai = 0, we have that 0 A i 0 contains no solutions of R. Consequently 9 A i duc. ^ 0. (C.6) Since (C.4) is the solved form for Ai = 0, we must have A i ( * y " > ) i u Q = / = o. In other words, replacing uf by / in A x leads identically to zero. Consequently, we have the following identity: 0 = x(A1(x,u^)\u?=f) Hence 9 A i ndAi dAi~r„ / n T. . . r. 9 A i ^ d A i dAx~ ra n u t n ^ + ^ 7 J = - ^ X [ / ] ' (/?,/) ^ (a, I ) . dxi di (C.l) Appendix C. Proof of Correctness 189 On the other hand, applying X directly to A i leads to di duj ' (P,J)^(a,I). This together with (C.7) leads to the identity: xA1(xy^)=(vf-x[f})^ Consider making direct substitutions from sf^ in this equation. By supposition (C.2), the left hand side vanishes so that 0 = (nf - X[f] 8AX l»/<< o o ) duf Consequently we arrive at (C.5) using (C.6) and the fact that X(u") = nf. • For the next lemma, we will need the following identity which is proven by Ibragimov [29]: XDi = DiX - Di{?)Dj. (C.8) Lemma C.5 (Operation (b)) LetX satisfy (C.2). Then X[DjA M ] ) = 0. (C.9) Proof. We proceed by induction on |<7| > 0. The case | J | = 0 is given by hypothesis. Now assume that (C.9) holds for ah | J | < k. For any multi-index J, \ J\ = k + l, there is a multi-index K, \K\ = k, and i = 1, • • - ,p such that J = Ki. By (C.8), we have (X[DJA»]) = (A- [X(2? X A„) ] ) - ( W ) (DJDKA») (CIO) The induction hypothesis implies X [ £ A - A j ) 0. (C . l l ) One also has = 0. (C.12) Applying (L5) to (C . l l ) and to (C.12), the right hand side of (C.10) must vanish and the induction is complete. • Corollary C.6 (Operation (c)) Let X satisfy (C.2) and (p(x,v!-k)) = 0 be any integrability condition of R. Then Appendix C. Proof of Correctness 190 - 0. (C.13) Proof. Since <f> = 0 is an integrability condition (cf. Definition 3.1.8) of R, we must have <f> = DiAi — DjAj = 0, for some i,j = 1, • • • ,p and \ J\, \K\ > 0. Consequently, ,/<<<»> and an apphcation of Lemma C.5 then leads to the desired result. • Lemma C.7 (Operation (d)) Let X satisfy (C.2). 
Suppose one equation (say the first equa-tion) of R is given by A 1 = - < + / (x ,« ( " ) ) = 0, (C.14) where uf is the leading term with respect to -<. Form the new system R obtained by making all possible implicit substitutions from A i in the remaining equations of R. This leads to: A i = A i , = An\{ufj=Djf, |J|>0}> p, = 2,- • •,/. (C.15) Then (XA„) 5 / ( o o ) = 0, p =!,-••,/. (C.16) Proof. Since A i = A i , (C.16) holds for p, = 1 and consequently, by Lemma C.5, we have 0 = (x[£>jAi]) For p = 2,; • = ( x [D j ( ^? + /)]) = ( - ^ J + X[2?j/]) we have the identity ,/<,<»> (C.17) | J |>0 . X A M - X ( A M | { l i ? ^ D j / ; |j|>0 }) 5 A , . 9 A , ^ A , ~ t'dxi +riKdvPK + d u h X [ D j f l where (/?,#) ^ (a, I J) and | J | > 0. Hence OA dx (C.18) Appendix C. Proof of Correctness 191 On the other hand, applying X directly to A M , we have ™* = t^ + 4^ + V?Jj£r, ((3,K)^(a,IJ), \J\>0. oxi OWK OUIJ This together with (C.18) leads to the identity: X A M = X A , + (Vfj - X[Djf]) ^ ± | J | > 0. Now consider making direct substitutions from sf^ into this equation. By assumption (C.2), the left hand side vanishes and by (C.17) the second expression on the right also vanishes. Consequently we arrive at (C.16). • Lemma C.3 is now proven. • Appendix D General Potential Systems Construction Suppose one PDE of R{u}, say the last one, is a conservation law v J2D1f(x,u^-1)) = 0, p>2. i=l Then R{u} is the system given by All(x,u™)=0, fi = 1, 1, A / i ( £ , ^ ( n - 1 ) ) = 0, * = l , Using the conservation law (D.lb), one can introduce \p(p — 1) potential variables v _ (\fr 1 2 ; $ 1 3 ; . . •, $ l n , V]/ 2 3, • • $ 2 n , • • •, $ n _ 1 ' n ) , where (i < j) are components of an antisymmetric tensor, to form the associated potential system S{u,v}, given by (D.la) and This potential system 5{u, v} consists ofl + p—1 PDEs with q+ \p(p— 1) dependent variables u = (u 1 , u2, • • •, uq), W3 (i < j). If p = 2, then S^u, v} is the determined potential system, given by (4.4) where the potential variable is v = \t12. If p > 3, S{u,v} is underdetermined and one can impose suitable constraints (a choice of gauge) on the potentials $ , J to make this system a determined one. 192 Appendix E Frechet Formulation for Point Symmetries It turns out that one can restate the infinitesimal symmetry conditions (4.2) for local symmetries in terms of the Frechet derivative: Theorem E.0.1 Let R{u} be a system of PDEs (2.19) with standard form sf^ (order m) and prolonged standard form sf[f> (N > m). Then R{u} admits the local symmetry (4-1) if and only if (EAQ) AN) = Q, A = max(m, n + k + 1), QA = na - (E.l) where \SJ(N) denotes making all possible direct substitutions from sf^fK The conditions (E.l) are usually stated as: 1. CAQ = 0, whenever u = f(x) is a solution. 2. CAQ = 0, whenever A = 0 and its differential consequences hold. In §2.3, we have already explained the disadvantages of such formulations. In particular, there is no ambiguity in (E.l) as to how the equations are to be used as substitutions. Since the formulation (E.l) appears to be new, we shall prove it here. Proof. The following identity can be found in [47]: X("'A = C A Q + Z'DjA, QA = n a - f Consider making direct substitutions from the equations of sf^K Since the left hand side is of order n + k and each expression on the right hand side is of order n + k + 1, applying 193 Appendix E. Frechet Formulation for Point Symmetries 194 Lemma 3.2.7(1) leads to (x(n)A)LA^ = (£^)L"2> + (^A) Ni = max(m, n + k + i), i = 0,1. 
The left hand side vanishes since, by definition, X is a local symmetry if and only if (4.2) holds. Since all solutions of R satisfy DjA = 0, by Lemma 3.2.7(2) the second expression on the right also vanishes. Consequently, the theorem is proved. • 
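The condition (E.1) is phrased in terms of the Fréchet derivative ℒ_Δ acting on the characteristic Q^α = η^α - ξ^i u^α_i. The following sketch (not part of the thesis) computes ℒ_Δ Q directly from its definition as the derivative of Δ[u + εQ] with respect to ε at ε = 0; Burgers' equation is used as an illustrative stand-in for Δ, and the function names are hypothetical.

    import sympy as sp

    t, x = sp.symbols('t x')
    u = sp.Function('u')(t, x)
    Q = sp.Function('Q')(t, x)

    def frechet(Delta, u, Q):
        # Gateaux/Frechet derivative: d/d(eps) Delta(u + eps*Q) evaluated at eps = 0
        eps = sp.Symbol('epsilon')
        return sp.expand(sp.diff(Delta(u + eps*Q), eps).subs(eps, 0))

    def burgers(w):
        # stand-in differential function Delta, chosen only for illustration
        return w.diff(t) + w*w.diff(x) - w.diff(x, 2)

    print(frechet(burgers, u, Q))
    # prints Q_t + u*Q_x + u_x*Q - Q_xx, the linearization of Burgers' equation acting on Q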
