SEMILINEAR STOCHASTIC EVOLUTION EQUATIONS

Bijan Z. Zangeneh

B.Sc. (Mathematics), Sharif (formerly Arya Mehr) University of Technology
M.Sc. (Mathematics), Sharif (formerly Arya Mehr) University of Technology

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY in THE FACULTY OF GRADUATE STUDIES, DEPARTMENT OF MATHEMATICS

We accept this thesis as conforming to the required standard

THE UNIVERSITY OF BRITISH COLUMBIA
April 1990
(c) Bijan Z. Zangeneh

In presenting this thesis in partial fulfilment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the head of my department or by his or her representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission. Department of Mathematics, The University of British Columbia, Vancouver, Canada.

Abstract

Let $H$ be a separable Hilbert space. Suppose $(\Omega, \mathcal{F}, \mathcal{F}_t, P)$ is a complete stochastic basis with a right continuous filtration and $\{W_t, t \in \mathbb{R}\}$ is an $H$-valued cylindrical Brownian motion with respect to $(\Omega, \mathcal{F}, \mathcal{F}_t, P)$. $U(t,s)$ denotes an almost strong evolution operator generated by a family of unbounded closed linear operators on $H$. Consider the semilinear stochastic integral equation
$$X_t = U(t,0)X_0 + \int_0^t U(t,s) f_s(X_s)\,ds + \int_0^t U(t,s) g_s(X_s)\,dW_s + V_t,$$
where
• $f$ is of monotone type, i.e., $f_t(\cdot) = f(t,\omega,\cdot) : H \to H$ is semimonotone, demicontinuous, uniformly bounded, and for each $x \in H$, $f_t(x)$ is a stochastic process which satisfies certain measurability conditions;
• $g_s(\cdot)$ is a uniformly Lipschitz predictable functional with values in the space of Hilbert-Schmidt operators on $H$;
• $V_t$ is a cadlag adapted process with values in $H$;
• $X_0$ is a random variable.

We obtain existence, uniqueness, and boundedness of the solution of this equation. We show that the solution changes continuously when one or all of $X_0$, $f$, $g$, and $V$ are varied. We apply this result to find stationary solutions of certain equations, and to study the associated large deviation principles. Let $\{Z_t, t \in \mathbb{R}\}$ be an $H$-valued semimartingale. We prove an Ito-type inequality and a Burkholder-type inequality for the stochastic convolution $\int_0^t U(t,s)\,g_s(X_s)\,dZ_s$. These are the main tools for our study of the above stochastic integral equation.
Table of Contents

Abstract
Acknowledgment
1 INTRODUCTION
1.1 Linear Stochastic Evolution Equation
1.2 Non-linear Stochastic Evolution Equations
1.2.1 First Type
1.2.2 Second Type
1.2.3 Comparing the Two Types of Equations
1.2.4 The Semilinear Stochastic Evolution Equation of Monotone Type
1.3 The Main Results
1.3.1 The Method of Study
1.3.2 Existence, Uniqueness, and Boundedness of the Solution
1.3.3 The Semilinear Integral Equation on the Whole Real Line and the Stationarity of its Solutions
1.3.4 Continuity with Respect to a Parameter
2 THE MEASURABILITY OF THE SOLUTION
2.1 The Main Theorem
2.2 The Measurability of the Solution in Finite-dimensional Space
2.3 The Proof of the Measurability in Theorem 2.2
3 STOCHASTIC CONVOLUTION INTEGRALS
3.1 Introduction and Preliminaries
3.2 Ito-Type Inequality
3.3 Burkholder-Type Inequality
4 A SEMILINEAR EQUATION
4.1 Introduction
4.2 The Measurability of the Solution of the Semilinear Equation
4.3 Some Examples
4.4 A Second Order Equation
4.5 A Semilinear Integral Equation on the Whole Real Line
5 THE CONTINUITY OF THE SOLUTION
5.1 Introduction
5.2 The Main Theorem and its Corollary
5.3 Application to the Large Deviation Principles
5.4 Galerkin Approximations
5.5 Galerkin Approximations for the Integral Equation on the Whole Real Line
6 STATIONARY PROCESSES
6.1 Introduction
6.2 The Continuity of the Solution with Respect to $V_n$
6.3 The Main Theorem
6.4 The Einstein-Smoluchowski Equation
7 THE GENERAL SEMILINEAR EQUATION
7.1 Introduction
7.2 The Main Theorem
7.3 Some Examples
7.4 Initial-Value Problem of the Semilinear Hyperbolic System
7.5 Second Order Equations
8 GENERALIZATION AND THE CONTINUITY
8.1 Introduction
8.2 Boundedness of the Solutions
8.3 Generalization of Theorem 7.1
8.4 The Continuity of the Solution with Respect to the Parameter
Bibliography

Acknowledgment

The building of the character of mankind involves many different factors. Teachers and the learning environment are part of this building process, and mathematics, as a beautiful and wonderful human activity, is part of it as well. One of the most difficult problems a scholar can encounter is deciding which area to enter, in terms of ability and interest. This problem does not have a simple or easy solution; solving, or partially solving, this dilemma took me several years. Of course, some of the factors which greatly influenced my final decision were the many teachers I had over the years. They helped and guided me over the course of my life and enabled me to eventually make a wise choice of field of study. I would like to speak a bit about them and to pay tribute to them.

First of all, I would like to thank my parents, who were my first teachers. They fostered in me the love of knowledge and wisdom. They also taught me to always evoke reasoning and never to accept anything without questioning, for the way of creative thinking is to discover for oneself. They instilled problem-solving in my spirit and the preference for seeking truth over material achievement and success. Secondly, I would like to thank my high-school mathematics teachers and also my university professors at Sharif University of Technology in Tehran, who put the love of mathematics in my heart as an unquenchable fire.
The mathematics environment cultivated by the professors at Sharif University in particular, enabled me to make the choice concerning which area I could explore, which was very important, for this changed my academic life. It encouraged me to transfer from a field of engineering to a field of mathematics and computer science and specifically, to choose between mathematics and vi computer science. My professors at Sharif University and M . I. T. greatly contributed to increase my awareness of mathematical knowledge. In particular, I would like to thank Professor Helgason of M . I. T., who taught me to view mathematics as an integrated whole. I have seen this in his field of Harmonic Analysis on Lie Groups in which the connection between algebra, analysis and differential geometry is apparent. I would like to thank my colleagues in Isfahan University of Technology, Dr. Miamee and Dr. Rejali, who made me excited about probability, the field which I eventually chose to enter. Probability is a field in which you can see the closeness and unity between the real and the abstract world; intuitive mathematics and abstract mathematics; applied math-ematics and pure mathematics. Probability is a field which shows how they all connect. Also, I would like to thank Professor Chatterji of EPFL in Switzerland, who furthered my knowledge in probability and Professor Carnal of the University of Bern. They both were of great help and support to me while I was living alone in Switzerland and informed me of the probability group at UBC. I would like to extend my great appreciation to the entire mathematics department at UBC for its invaluable and extensive support during my stay here. In particular, appreci-ation is extended to the UBC probability group which provided a friendly and stimulating atmosphere in which to work by providing many seminars and visiting professors who provided interesting lectures. I would especially like to thank my classmates for the many productive discussions and also thank you to Ed Perkins for presenting a well-organized course on General Theory of Stochastic process. I was introduced to the new concept of probability in the school of Strausberg in this course and he also taught the stochastic differential equation which greatly influenced my choice of thesis topic. vii In addition, Professor Perkins gave me great moral support during my stay at U. B. C. which consisted of continuos, friendly advice. Joe Watkins also receives my appreciation, both for his friendliness and for adding an interesting dimension to the group discussions, as well as for introducing me to the paper; Fairs and Jona-Lasino, which was a motivation to my reseach topic. Professor Haussmann deserves recognition, for answering my questions and for intro-ducing me to the thesis of Pardoux, which had a great influence on my own thesis. Professor Bui deserves credit as well for instructing me in partial differential equations and for introducing me to good papers and books during the course of my research. A very special thanks to my supervisor, John Walsh, who, upon my arrival in Van-couver gave me immense support and encouragement. He taught me and gave me general guidance. If I'm a probablist today, it is because he made me one. He carefully read my thesis several times, and provided me with insightful comments. Something very important I learned from him was to look at problems from different angles. 
During the period of my research, many times when I wrote the proof of a theorem he would encourage me to find a simpler, more elegant and natural proof. Though at times this was annoying for me, because of the time involved, I now thank him for his insistence, as I learned many things from these experiences. He improved my intuitive ability in mathematics, and under his tutelage I learned to visualize an abstract problem in a more natural way.

To my wife Zahra, I would like to express my very deep gratitude for her immense support; while herself a student and working, she took on as well the responsibilities of children and home. During the years I was a student she consistently sustained me through all the difficulties and trials. My two wonderful children, Sahar and Ali, I would like to thank for their tolerance in living with a mother and father who were both students and for their support and understanding of what we do. To my relatives, in particular my brothers and mother-in-law who supported me in all directions, and to my many friends who also provided invaluable moral support during the period of my research, I express my sincere thanks.

Chapter 1

INTRODUCTION

In recent years there has been increasing interest in the theory of stochastic evolution equations. This has been partly motivated by the needs of various applied fields such as control theory, mechanics, statistical hydromechanics, quantum mechanics, quantum field theory, population genetics, stochastic quantization, neurophysiology and random vibration. Several of those applications are presented in Curtain and Pritchard (1978), Dawson (1975), Krylov and Rozovskii (1981), Walsh (1981, 1984, 1986), Faris and Jona-Lasinio (1982), Crandall and Zhu (1983) and Biswas and Ahmed (1985).

Suppose one is given a dynamical system governed by a partial differential equation, and suppose that the system is then excited randomly by some sort of noise. Then the response of the system will be governed by a stochastic partial differential equation.

Example 1.1 Assume that an elastic string of length $l$ is tightly stretched between two supports at the same horizontal level, so that the $x$-axis lies along the string. Let $u(t,x)$ denote its vertical displacement at the point $x$ at time $t$. If damping effects such as air resistance are neglected, and if the amplitude of the motion is not too large, then $u(t,x)$ satisfies the P.D.E.
$$u_{tt} = a^2 u_{xx} \qquad (1.1)$$
in the domain $0 < x < l$, $t > 0$, where $a > 0$ is a constant. If the string is excited randomly by a white noise $\dot{W}(t,x)$, then the dynamical response is governed by the stochastic partial differential equation (SPDE)
$$u_{tt} = a^2 u_{xx} + \dot{W}(t,x). \qquad (1.2)$$
This equation is called the stochastic wave equation and has been well studied in Cabana (1970, 1972), Pardoux (1975), Walsh (1986), Biswas and Ahmed (1985), and Carmona and Nualart (preprint). The elastic string may be thought of as a violin string or a guitar string [see Walsh (1986), p. 0.1]. An important example of an elastic string is the electric power line, which is excited by a white noise. According to Biswas and Ahmed (1985), "...this distributed noise could be attributed to the random aerodynamic forces acting on the (transmission line) conductors, arising due to the randomness of wind velocity and the irregularity of ice formation on the conductor surface" (p. 1043). The vibrating transmission line is called the galloping conductor.
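As an aside that is not part of the thesis, equation (1.2) is easy to experiment with numerically. The following is a minimal sketch, assuming a leapfrog finite-difference scheme on $[0, l]$ (taken here to be $[0,1]$) with the string pinned at both ends, and approximating space-time white noise on each grid cell by an independent Gaussian scaled by $1/\sqrt{\Delta t\,\Delta x}$; the wave speed, grid sizes and time horizon are illustrative choices, not values from the thesis.

```python
import numpy as np

# Minimal sketch of the stochastic wave equation (1.2),
#   u_tt = a^2 u_xx + (space-time white noise),
# on [0, L] with the string pinned at both ends, using a leapfrog
# finite-difference scheme.  The white noise is approximated on each grid
# cell by an independent N(0, 1) variable scaled by 1/sqrt(dt*dx).
# All numerical values here are illustrative assumptions, not thesis choices.
rng = np.random.default_rng(0)

a, L, T = 1.0, 1.0, 1.0          # wave speed, string length, time horizon
nx = 200
dx = L / nx
dt = 0.5 * dx / a                # keeps the CFL ratio a*dt/dx at 1/2
nt = int(T / dt)
r2 = (a * dt / dx) ** 2

u_prev = np.zeros(nx + 1)        # u(t - dt, .)
u_curr = np.zeros(nx + 1)        # u(t, .)
for _ in range(nt):
    noise = rng.standard_normal(nx - 1) / np.sqrt(dt * dx)
    u_next = np.empty_like(u_curr)
    u_next[1:-1] = (2.0 * u_curr[1:-1] - u_prev[1:-1]
                    + r2 * (u_curr[2:] - 2.0 * u_curr[1:-1] + u_curr[:-2])
                    + dt ** 2 * noise)
    u_next[0] = u_next[-1] = 0.0  # Dirichlet conditions: pinned ends
    u_prev, u_curr = u_curr, u_next

print("sup_x |u(T, x)| ~", float(np.abs(u_curr).max()))
```

The scaling of the noise term reflects that the white-noise mass of a cell has variance $\Delta t\,\Delta x$, so its density on the cell has variance $1/(\Delta t\,\Delta x)$; refining the grid changes the sample path, since the noise is only distribution-valued.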
In random vibration, structural elements such as beams, cables, arches, membranes, and shells can be excited by some sort of random loadings. According to Crandall (1979), "Random loadings on such structural elements can arise from earthquakes ..., or windstorms ..., acting on onshore structures, from storm winds and waves ... on offshore structures, from turbulent boundary layers and jet noise on high-speed aircraft ..., or from turbulent flow in and around the tubes in heat exchangers" (p. 1). See also Crandall and Zhu (1983) for a survey of the recent developments of random vibration.

Stochastic Partial Differential Equations (SPDEs)

As in the classical theory of PDE, there are two methods to study SPDEs. First, one can study the multiparameter processes $u(t,x)$ as solutions of the SPDE. This method emphasizes sample path properties of real-valued multiparameter processes $u(t,x)$. Walsh (1986) contains a systematic treatment of this approach. See also Walsh (1981, 1983), Dozzi (1989) and the references given therein for further information on this approach, though we shall not follow it closely here. The second method, which we will follow, is to consider one-parameter Banach-valued processes $u = \{u_t : t \in \mathbb{R}\}$ as solutions of the SPDEs. Here one envisages the SPDEs as stochastic evolution equations in an appropriate Banach space, where functional analytic techniques are applied. Dawson (1975) contains a rigorous treatment of the theory of stochastic evolution equations, reviews the subject up to 1975, and has extensive references. See also Curtain and Pritchard (1978). Though these methods are nearly equivalent, some problems are more natural to pose from one point of view than the other.

Notations and Definitions

Let $H$ be a real separable Hilbert space with norm $\|\cdot\|$ and inner product $\langle\,\cdot\,,\cdot\,\rangle$. Let $L_2(H)$ be the space of Hilbert-Schmidt operators on $H$ with norm $\|\cdot\|_2$. Let $g$ be an $H$-valued function defined on a set $D(g) \subset H$. Recall that $g$ is monotone if for each pair $x, y \in D(g)$,
$$\langle g(x) - g(y),\, x - y \rangle \ge 0,$$
and $g$ is semi-monotone with parameter $M$ if, for each pair $x, y \in D(g)$,
$$\langle g(x) - g(y),\, x - y \rangle \ge -M\|x - y\|^2.$$
On the real line any semimonotone function with parameter $M$ can be represented as $f(x) - Mx$, where $f$ is a non-decreasing function on $\mathbb{R}$. We say $g$ is bounded if there exists an increasing continuous function $\psi$ on $[0,\infty)$ such that $\|g(x)\| \le \psi(\|x\|)$ for all $x \in D(g)$. $g$ is demi-continuous if, whenever $(x_n)$ is a sequence in $D(g)$ which converges strongly to a point $x \in D(g)$, then $g(x_n)$ converges weakly to $g(x)$.

Let $(\Omega, \mathcal{F}, \mathcal{F}_t, P)$ be a complete stochastic basis with a right continuous filtration. We follow Yor (1974) and define cylindrical Brownian motion as

Definition 1.1 A family of random linear functionals $\{W_t,\, t \ge 0\}$ on $H$ is called a cylindrical Brownian motion on $H$ if it satisfies the following conditions:
(i) $W_0 = 0$ and $W_t(x)$ is $\mathcal{F}_t$-adapted for every $x \in H$.
(ii) For every $x \in H$ such that $x \ne 0$, $W_t(x)/\|x\|$ is a one-dimensional Brownian motion.

Note that cylindrical Brownian motion is not $H$-valued because its covariance is not nuclear. For the properties of cylindrical Brownian motion and the definition of stochastic integrals with respect to the cylindrical Brownian motion see Yor (1974).

1.1 Linear Stochastic Evolution Equation

Linear stochastic evolution equations have been extensively studied in recent years.
They occur in Dawson (1975), Miyahara (1981), Ito (1978, 1982), Holley and Strook (1978), Kallianpur and Wolpert (1984), Ichikawa (1978), Walsh (1981, 1984, 1986), Ustunel (1982) , Kallianpur and Perez-Abreu (1987), Da Prato et al. (1982a, 1982b), Da Prato (1983) , and Leon (1989). Consider the linear stochastic evolution equation dXt = A(t)Xt dt + dMt, (1.3) where Mt is an H-valued martingale and for each t £ R, A{t) is a closed unbounded linear operator on H which satisfies certain conditions. The mild solution of (1.3) with initial condition ^(0) = 0 can be represented as a stochastic convolution integral fQU(t,s)dMa [see Kotelenez (1982)], where U(t,s) is the evolution operator generated by A(t). Chapter 1. INTRODUCTION 5 Kotelenez (1982, 1984) proved a submartingale-type and stopped-Doob inequality for stochastic convolution integrals. An Ito-type inequality and a Burkholder-type inequality for this object will be proved in Chapter 3. 1.2 Non-linear Stochastic Evolution Equations Most of the work on strong solutions of non-linear stochastic evolution equations in recent years has concentrated on two types of equations. The first has the form where / : H —> H and g : H —> L2(H) are uniformly Lipschitz mappings. In solving this equation one usually uses semigroup techniques. The other non-linear stochastic evolution equation is based on a Gelfand triple, B C H C B* (where B is a Banach space and B* is its dual) and has the form where F : B —> B* and G : B —• L2(H) satisfy certain monotonocity and coercivity conditions. In this setting, one often uses a variational approach. Let us briefly discuss the strong solutions in each of these two classes. 1.2.1 First Type If A(t) = 0, (1.4) is called a stochastic differential equation in H and has been well-studied by several authors [see Dawson (1975) and Metivier (1982) for references]. When A(t) is an unbounded closed linear operator, equation (1.4) is called a semilinear stochastic evolution equation and since / and g are Lipschitz, we say it is of Lipschitz type. Here we usually look for mild solutions of (1.4), which are strong solutions of the dXt = A(t)Xtdt + f(Xt)dt + g(Xt)dWt (1.4) dXt = F(Xt)dt + G(Xt)dWt (1.5) Chapter 1. INTRODUCTION 6 following integral equation: Xt = U(t, 0)X0 + f U(t, s)f{X.)ds + f U(t, s)g(Xs)dWs, (1.6) Jo Jo where U(t,s) is an evolution operator generated by A(t). When A(t) = A is a negative, self-adjoint operator such that A-1 is nuclear, the existence and uniqueness of the mild solution of (1.4) has been proved by Dawson (1972, 1975). The existence and uniqueness still apply when g takes values in the space of bounded linear operators on H, instead of the Hilbert-Schmidt operators. If A(t) is a generator of an evolution operator U(t, s), the existence and uniqueness of the solution of (1.4) have been studied by several authors [see for example Ichikawa (1982), Kotelenez (1988)]. Kotelenez (1984) has studied a more general case. Ahmed (1985) and Da Prato and Zabczyk (1988) have proved the existence and uniqueness of (1.3) in the context of Banach spaces. There are several results in this theory of a qualitative character. We shall briefly mention a few. Marcus (1974) considered problems of the asymptotic stationarity of the solution of (1.3) (in the case g = I). Funaki (1983) applied equation (1.4) to examine the random vibration of strings. Ichikawa (1982, 1983, 1984) has results on the stability, boundedness and invariant measures of (1.4). 
Maslowski (1989) has results on the uniqueness and stability of invariant measures. Da Prato and Zabczyk (1988) obtained the Wentzel-Freidlin large-deviations estimate for the solution in the case g = I. 1.2.2 Second Type In equation (1.5), assume the functions F and G satisfy the following conditions: there exist C > 0, e > 0 and p > 1 such that Chapter 1. INTRODUCTION 7 • Coercivity of (F, G): 2 < X,F(X) > B X B * + H G T O H ! + e\\X\\ pB<C(l + \\X\n • Monotonicity of (F, G): 2<X2- Xi,F(X2) - F(Xt) > B X B * +\\G(X2) - < C\\X2 - X1\\2. • Boundedness of the growth of F: i i w i i B ^ c c i + mr 1)-• Semicontinuity of F: the function < X,F(Xi + XX2) >BxB* is continuous in A on In this approach one is usually interested in the strong solution of the integral equation When G = 1, the existence and the uniqueness of the solution of (1.7) is proved in Bensoussan and Temam (1972). In the general case, the existence and uniqueness of the solution was first proved by Pardoux (1975) under stronger assumptions on (F, G). This was proved in connection with the theory developed in Pardoux (1975). A direct proof was later given by Krylov and Rozovskii (1981). This has been generalized by Gyongy and Krylov (1982a). This was one of the most active areas in the theory of the stochastic evolution equa-tions in the last decade. There have been extensive works by Pardoux, Krylov, Rozovskii, Gyongy, such as Gyongy (1982, 1988, 1989a, 1989b), and Gyongy and Krylov ( 1982b ). 1.2.3 Comparing the Two Types of Equations Each one of these approaches has some advantages. For example, if the differential operator is non-linear as in the Navier-Stokes equation, we have to pose the problem R 1 . (1.7) Chapter 1. INTRODUCTION 8 in the second setting, and if the differential operator is linear but does not have the coercivity property, as in the wave equation or in the symmetric hyperbolic system, then it is more natural to pose it in the first setting. One advantage of the semigroup approach is that it gives a unified treatment of a wide class of parabolic, hyperbolic and functional differential equations. In the case of parabolic equations, one can employ the variational method, which applies to non-Lipschitz (F,G) [see Krylov and Rozovskii (1981)]. Consider the second order semilinear stochastic evolution equation on H, written formally as • Wt is a cylindrical Brownian motion on H; • A is strictly positive definite, self-adjoint unbounded operator on H. When / and g satisfy the Lipschitz condition, equation (1.8) falls in the first category. Pardoux (1975) has studied this equation when / satisfies certain monotonicity and coercivity conditions. He has constructed the solution of (1.8) in connection with the theory developed in part three of Pardoux (1975). His approach is a stochastic version of Lions and Strauss (1965). In Chapters 4 and 7 of this thesis, we will study Equation (1.8) as a corollary of our existence and uniqueness Theorem. 1.2.4 The Semilinear Stochastic Evolution Equation of Monotone Type The preceding two approaches are both stochastic versions of well-known methods in the theory of deterministic evolution equations. In the latter theory, there is yet another (1.8) • where / : H x H —» H and g : H x H —> L,2(H) satisfy certain conditions; Chapter 1. INTRODUCTION 9 approach in which a large class of problems could be studied. This approach is a gener-alization of the first one above and can be used to study semilinear evolution equations of monotone type. 
where / is of monotone type, i.e., —/ is semimonotone, demicontinuous and bounded by <f [see for example Browder (1964), Kato (1964), Vainberg (1973), Tanabe (1979) and Carroll (1969)]. The study of the stochastic version of the above equation, i.e., the study of the equation (1.4) when / is of monotone type and g ^  0 is uniformly Lipschitz, is not in the literature. In this dissertation, we shall see that we may use semigroup theory to extend the above deterministic method to equation (1.9) and then use this to study the stochastic equation (1.4). We will obtain the existence, uniqueness, boundedness and the continuity with respect to a parameter. We will also apply it to find stationary solutions of certain equations, and to study the associated large deviation principle. Now we shall briefly outline our main results and the contents of this thesis. 1.3 The Main Results Let us consider the following generalization of the integral equation (1.4): where • ft(.) = f(t, u,.) : H —• H is of monotone type, and for each x £ H ft(x) is a stochastic process which satisfies certain measurability conditions; • gs(.) is a uniformly Lipschitz predictable functional with values in L^H); Consider Xt = A(t)Xt + f{Xt), (1.9) (1.10) Chapter 1. INTRODUCTION 10 • Ws is an H-valued cylindrical Brownian motion; • Vt is a cadlag, adapted process with values in H. 1.3.1 The Method of Study We construct the solution (1.10) by first constructing its solution when g = 0. This latter will be shown to be a weak limit of solutions of (1.10) in the case when g = 0 and A = 0, which in turn have been constructed by the Galerkin approximation of the finite-dimensional equation. Pardoux, Krylov, Rozovskii, and Gyongy built their analysis of (1.7) on a generalized Ito's Lemma decomposition of ||u(£)||2. Since we use semigroup techniques in this dissertation, our object is to solve the equation by means of convolution integrals. Our proof is based upon a different version of an Ito-type inequality (Theorem 3.1) and on a Burkholder-type inequality (Theorem 3.2). 1.3.2 Existence, Uniqueness, and Boundedness of the Solution • Equation (1-10) in case g = 0 : The main problem in this case is to show the measurability of the solution. The proof is in chapters two and four. In Chapter 4, we show that several important stochastic semilinear equations fall in this setting. • Extending equation (1.10) to the general case: In Chapters 7 and 8 we prove that if Vt is a continuous adapted process, and if / is bounded by a polynomial, then the integral equation (1.10) has a continuous adapted solution. We also give a bound for the moments of this solution in Chapter 8. In Chapter 7 several examples are studied including the second order equation and the semilinear hyperbolic system. Chapter 1. INTRODUCTION 11 1.3.3 The Semilinear Integral Equation on the Whole Real Line and the Stationarity of its Solutions Consider the stochastic semilinear equation dXt = AXtdt + ft(Xt)dt + dWt, (1.11) where A is a closed, self-adjoint, negative definite, unbounded operator such that A-1 is nuclear. A mild solution of (1.11) with initial condition X(0) = X0 is the solution of the integral equation Xt = U(t, 0)Xo + f U(t - s)fs(Xs)ds + f* U(t - s)dWs, (1.12) Jo Jo where U(t) is the semigroup generated by A. Marcus (1974) has proved that when / is independent of t and u>, and / is uniformly Lipschitz, then the solution of (1.12) is asymptotically stationary. 
To prove this, he studied the following integral equation: Xt= f U(t-s)f.(X.)ds+ f* U{t-s)dWs, (1.13) J—oo J—OO where the parameter set of the processes is extended to the whole real line. This motivated us to study the existence of the solution of the slightly more general equation Xt= f U(t-s)fa(X,)ds + Vt, (1.14) J — OO where / is of monotone type and is bounded by a polynomial, and Vt is a cadlag adapted process. In Chapter 4, we prove the existence the uniqueness of the solution to (1.12). In Chapter 5, we will prove that finite dimensional Galerkin approximations converge strongly to the solution of (1.14). In Chapter 6 we prove under certain conditions that if Vt is a stationary process, then Xt is also a stationary process. Chapter 1. INTRODUCTION 12 1.3.4 Continuity with Respect to a Parameter Faris and Jona-Lasinio (1982) have studied the equation (1.10) in the case when g = 0, the generator of U is j^, and /(x) = — Arc3 — \ix. They showed that the solution X is a continuous function of V in this case. Da Prato and Zabczyk (1988) generalized this to the case where U is a general analytic semigroup and / is a locally Lipschitz function on a Banach space. In Chapter 5 we generalize these results by proving that [in case g = 0] the solution of (1.9) changes continuously when any or all of V, /, A, and X0 are varied. As a corollary, we prove a generalization of Faris and Jona-Lasino's theorem for semimonotone / and more general U; this was open after Faris and Jona-Lasinio (1982) [see for example Smolenski et al.(1986), p. 230]. We also prove the strong convergence of the finite dimensional Galerkin approximations to the solution of (1.10) (in case g = 0). Metivier (1980) has proved that when A = 0, V = 0, and / is Lipschitz, then the solution of equation (1.10) changes continuously as / , g and X0 are varied. In Chapter 8 we generalize this by proving that the solution of (1.10) in the general case changes continuously when one or all of X0, f, g, and V are varied. Chapter 2 T H E MEASURABILITY OF T H E SOLUTION 2.1 The Main Theorem Let H be a real separable Hilbert space with an inner product and a norm denoted by < , > and || ||, respectively. Let (G, Q) be a measurable space, i.e., G is a set and Q is a <j-field of subsets of G. Let T > 0 and let S = [0,T]. Let /3 be the Borel field of S. Let L2(S,H) be the set of all .//-valued square integrable functions on S. Consider the initial value problem, formally written as , ' & = /(«.«(')). <6Si ( 2 1 ) u(0) - w0, where / : S x H —• H and u0 £ H. We say u is a solution of (2.1) if it is a solution of the integral equation u(t) = u0+ [ f(s,u(s))ds. (2.2) Jo We will actually be interested in slightly more general equations. Consider the integral equation «(<>y) = f f(s,y,u(s,y))ds + V(t,y), t e S, y € G. (2.3) In this case / : S X G X H —• H and V : S X G H. The variable y is a parameter, which in practice will be an element u> of a probability space. Our aim in this chapter is to show that under proper hypotheses on / and V there exists a unique solution u to (2.3), and that this solution is a 0 x ^-measurable function of t and the parameter y. 13 Chapter 2. THE MEASURABILITY OF THE SOLUTION 14 In this chapter we say X(.,.) is measurable if it is /3 x ^-measurable. We will study (2.3) in the case where —/is demi-continuous and semi-monotone on H and V is right continuous and has left limits in t (cadlag). 
This has been well-studied in the case in which V is continuous and / is bounded by a polynomial and does not depend on the parameter y. See for example Bensoussan and Temam (1972). Let 7i be the Borel field of H. Consider functions / and V f : SxGxH H V : S x G -> H. We impose the following conditions on / and V: Hypothesis 2.1 (a) / is (3 x Q x 'H-measurable and V is Q x "H-measurable. (b) For each t £ S and y £ G, x —> f(t,y,x) is demicontinuous and uniformly bounded in t. (That is, there is a function <p = (p(x,y) on R + x G which is continuous and increasing in x and such that for all t € S, x £ H, and y £ G , ||/(£,y,x)\\ < <p(v,M\)') (c) There exists a non-negative Q-measurable function M(y) such that for each t £ S and y £ G, x —• —f{t,y,x) is semimonotone with parameter M(y). (d) For each y £ G, t —*• V(t, y) is cadlag. Theorem 2.1 Suppose f and V satisfy the Hypothesis 2.1. Then for each y £ G, (2.3) has a unique cadlag solution u(-,y), and u(-, •) is f3 x Q-measurable. Furthermore ||u(i,y)|| < ||V(t,y)|| + 2 /' eM^~%f{s, y, V(s, y))\\ds; (2.4) | |«(.,»)l|oo < ||V(.,y)||oo + 2CMy, ||V(.,y)||oo), (2-5) Chapter 2. THE MEASURABILITY OF THE SOLUTION 15 where Wu]]^ = sup0< t< ( r ) and C t = I m)eM(y)T (f M^ * 0 1 otherwise Let us reduce this theorem to the case when M = 0 and V = 0. Define the transfor-mation X(t,y) = e^t(u(t,y)-V(t,y)) (2.6) and set g(t, y, x) = e"M*f(t, y, V(t, y) + x e " ^ ) + M(y)x. (2.7) Lemma 2.1 Suppose f and V satisfy Hypothesis 2.1 . Let X and g be defined by (2.6) and (2.7). Then g is 0 x Q x ri-measurable and —g is monotone, demicontinuous, and uniformly bounded in t. Moreover u satisfies (2.3) if and only if X satisfies X(t, y) = f g(s, y, X(s, y))ds, Vi £ S, y £ G. (2.8) Jo Proof: The verification of this is straightforward. Suppose that V and / satisfy Hy-pothesis 2.1. We claim g satisfies the above conditions. • g is 0 x Q x 7^-measurable. Indeed, if h £ H then < f(t, y,.), h > is continuous and V(t, y)-\-xe~ MM* \s0xCxH-measurable, so < f(t,y,V(t,y) + xe~M^),h > is 0 x Q x W-measurable. Since H is separable then / (t, y, V(t, y) + x e _ M ^ ' ) is also 0 x Q x W-measurable, and since eM^f and M(y)x are 0 x Q x 7i-measurable, then g is 0 x Q x 7i-measurable. • g is bounded, since supt||Vt(y)|| < oo and \\g(t,y,x)\\ < 4>(y, \\x\\), where <j>(y, 0 = eMMT<f>(y, £ + supt\\Vt\\) + M(y)t • g is demicontinous. Chapter 2. THE MEASURABILITY OF THE SOL UTION 16 • — g is monotone. Furthermore, one can check directly that if X is measurable, so is u. Since X is continuous in t and V is cadlag, u must be cadlag. It is easy to see that different solutions of (2.7) correspond to different solutions of (2.3). Q.E.D By Lemma 2.1, Theorem 2.1 is a direct consequence of the following. Theorem 2.2 Let g = g(t,y,x) be a (3 x Q x 7i- measurable function on S x G x H such that for each t £ S and y £ G, x —* —g(t,y,x) is demicontinous, monotone and bounded by (p. Then for each y £ G the equation (2.8) has a unique continuous solution X(.,y), and (t,y) —> X(t,y) is /3 x Q-measurable. Furthermore X satisfies (2.5) with M = 0 and V = 0. Remark that the transformation (2.6) u —• X is bicontinuous and in particular, implies if X satisfies (2.4) and (2.5) for M — 0 and V = 0, then u satisfies (2.4) and (2.5) Note that y serves only as a nuisance parameter in this theorem. It only enters in the measurability part of the conclusion. In fact, one could restate the theorem somewhat informally as: if / and u0 depend measurably on a parameter y in (2.2), so does the solution. 
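As a concrete aside (not part of the thesis), the scalar case $H = \mathbb{R}$ already illustrates both the shape of equation (2.3) and the harmless role of the parameter $y$. The sketch below assumes the drift $f(t,y,x) = -x^3 + yx$, so that $-f(t,y,\cdot)$ is semimonotone with parameter $M(y) = y$ for $y \ge 0$, and a cadlag forcing $V(t,y)$ with a jump at $t = 1/2$; it approximates $u(\cdot,y)$ by a backward Euler step, solving the implicit scalar equation with a few Newton iterations. The drift, the forcing and the step size are illustrative assumptions only.

```python
import numpy as np

# Minimal sketch of the integral equation (2.3),
#   u(t, y) = int_0^t f(s, y, u(s, y)) ds + V(t, y),
# in the scalar case H = R, with the illustrative drift
#   f(t, y, x) = -x**3 + y*x   (so -f(t, y, .) is semimonotone, M(y) = y)
# and an illustrative cadlag forcing V that jumps at t = 1/2.
# A backward Euler step is used; the implicit scalar equation is solved by
# a few Newton iterations.  Drift, forcing and step size are assumptions.
def V(t, y):
    return 0.2 * np.sin(2.0 * np.pi * t) + y * (t >= 0.5)

def solve(y, T=1.0, n=2000):
    dt = T / n
    t = np.linspace(0.0, T, n + 1)
    u = np.zeros(n + 1)
    u[0] = V(0.0, y)
    for k in range(n):
        rhs = u[k] + (V(t[k + 1], y) - V(t[k], y))   # carries any jump of V
        w = rhs
        for _ in range(30):                          # Newton for w + dt*(w**3 - y*w) = rhs
            g = w + dt * (w ** 3 - y * w) - rhs
            dg = 1.0 + dt * (3.0 * w ** 2 - y)
            w -= g / dg
        u[k + 1] = w
    return t, u

for y in (0.0, 0.5, 1.0):
    t, u = solve(y)
    print(f"y = {y}: sup_t |u(t, y)| = {np.abs(u).max():.3f}")
```

Because the solver is a deterministic recipe applied to $(f, V, y)$, the path $u(\cdot,y)$ depends on $y$ through an explicit measurable map, which is the informal content of the remark above.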
The proof of Theorem 2.2 in the case in which / is independent of y is a well-known theorem of Browder (1964) and Kato (1964). One proof of this theorem can be found in Vainberg(1973), Th (26.1), page 322. The proof of the uniqueness and existence are in Vainberg (1973). In this section we will prove the uniqueness of the solution and inequalities (2.4) and (2.5). In Section 2.3 we will prove the measurability and outline the proof of the existence of the solution of equation (2.8) Since y is a nuisance parameter, which serves mainly to clutter up our formulas, we will only indicate it explicitly in our notation when we need to do so. Chapter 2. THE MEASURABILITY OF THE SOLUTION 17 Let us first prove a lemma which we will need for proof of the uniqueness and for the proof of inequalities (2.4) and (2.5). Lemma 2.2 / / a(.) is an H-valued integrable function on S and if X(t) : = XQ + fQa(s)ds, then \\Mt)\\2 = \\Xo\\2 + 2 f <X(s),a(s)>ds. Jo Proof: Since a(s) is integrable, then X(t) is absolutely continuous and X'(t) = a{t) a.e. on S. Then ||A"(£)|| is also absolutely continuous and jt\\X{t)\\* = 2 < *%p-,X(t)>= 2 < a(t),X(t) > a.e. so that r* d Jo d s ~ m s ) l l 2 d S = l l X m 2 ~ l | X o | | i Thus \\X(t)f - \\X0f = 2 f <X(s),a(s)>ds. Jo Q.E.D. Now we can prove inequalities (2.4) and (2.5) in case M = 0 and V = 0. Lemma 2.3 If M = V = 0, the solution of the integral equation (2.8) satisfies the inequality \\X(t)\\ < 2 f \\g(s,0)\\ds < 27V(0). Jo Proof: Since X(t) is a solution of the integral equation (2.8), then by Lemma 2.2 we have Chapter 2. THE MEASURABILITY OF THE SOLUTION 18 \\X(t)\\2 = 2 f <g(s,X(s)), X(s)>ds Jo = 2 f <g(s,X(s)) -g(s,0),X(s)>ds Jo + 2 f < g(s,0),X(s) > ds Jo < 2 /* <g{s,X(s))-g(s,0),X(s)>ds Jo + 2 [ \\g{s,0)\\ \\X(s)\\ds. Jo Since — g is monotone, the first integral is negative. We can bound the second integral and rewrite the above inequality as \\X(t)\\2 < 2 / j | ^ , 0 ) | | | | X ( , ) | | ^ Jo 2su P o < 5 < t | |X(s) | | f \\g(s,0)\\d* — Jo < Thus sup0<s< i||A^(5)|| < 2fo\\g(s,0)\\ds. Since sup0<s<t\\g(s,x)\\ < <p(\\x\\), the proof is complete. Q.E.D Proof of Uniqueness Let X and Y be two solutions of (2.8). Then we have X(t, y) - Y(t, y) = f[g{s, y, X(s, y)) - g(s, y, Y(s, y))]ds. Jo By Lemma 2.2 one has \\X(t, y) - Y{t, y)f = f < g(s, y, X(s, y)) - g(s, y, Y(s, y)), X(s, y) - Y(s, y) > ds. Jo Since —g is monotone, the right hand side of the above equation is negative, so X(t,y) = Y(t,y). Q.E.D Chapter 2. THE MEASURABILITY OF THE SOLUTION 19 2.2 The Measurability of the Solution in Finite-dimensional Space Consider the integral equation X(t,y) = f h{s,y,X{s,y))ds, (2.9) Jo where h(-, •) satisfies the following hypothesis. Hypothesis 2.2 (a) h satisfies Hypothesis 2.1 (a), (b). (b) For each t 6 S and y £ G, —h(t,y,-) is continuous and monotone. Since h is measurable and uniformly bounded in t, then h(-,y,x) is integrable. As h(t,y,-) is continuous, the integral equation (2.9) is a classical deterministic integral equation in R" and the existence of its solution is well known. In section 2.1 we proved that (2.9) has a unique bounded solution, so we only need to prove the measurability of the solution. The existence, uniqueness and measurability of the solution of (2.9) is known (see Krylov and Rozovskii (1979) for a proof in a more general situation). Since the mea-surability result is easy to prove in our setting, we will include a proof in the following theorem for the sake of completeness. 
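As a numerical aside that is not part of the thesis, the a priori bound of Lemma 2.3 can be checked directly in the scalar case. The sketch below assumes the drift $g(s,x) = \sin(s) - x^3$, for which $-g(s,\cdot)$ is monotone and $g(s,0) = \sin(s)$, solves $X(t) = \int_0^t g(s, X(s))\,ds$ by explicit Euler, and compares $\|X(t)\|$ with $2\int_0^t \|g(s,0)\|\,ds$; the drift and the grid are illustrative choices.

```python
import numpy as np

# Minimal numerical check of the bound in Lemma 2.3,
#   ||X(t)|| <= 2 * int_0^t ||g(s, 0)|| ds,
# for X(t) = int_0^t g(s, X(s)) ds in the scalar case, with the illustrative
# drift g(s, x) = sin(s) - x**3, so that -g(s, .) is monotone in x and
# g(s, 0) = sin(s).  Drift and grid are assumptions for illustration only.
def g(s, x):
    return np.sin(s) - x ** 3

T, n = 3.0, 30000
dt = T / n
t = np.linspace(0.0, T, n + 1)

X = np.zeros(n + 1)
bound = np.zeros(n + 1)
for k in range(n):
    X[k + 1] = X[k] + dt * g(t[k], X[k])             # explicit Euler step
    bound[k + 1] = bound[k] + 2.0 * dt * abs(np.sin(t[k]))

# Non-positive (up to discretization error) if the bound of Lemma 2.3 holds.
print("max of ||X(t)|| - 2*int_0^t ||g(s,0)|| ds :",
      float(np.max(np.abs(X) - bound)))
```

Explicit Euler is adequate here because the example is mildly damped; for stiff monotone drifts an implicit step, as in the earlier sketch, is the safer choice.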
Theorem 2.3 The solution of the integral equation (2.9) is measurable. Proof: For the proof of measurability we are going to construct a sequence of solutions of other integral equations which converge uniformly to a solution of (2.9). First: Let be a positive C°°-function on Hn~YLn with support {||a;|| < T(p(0) + 2}, which is identically equal to one on {||a;|| < T<p(y,0) + 1}. Now define h(t,x) — h(t,x)rj)(x). Chapter 2. THE ME AS URABILITY OF THE SOL UTION 20 —h is semimonotone. This can be seen because. If > T<p(0) + 2 and \\Z\\ > 7V(0) + 2,then h(t, X) = h(t, Z) = 0 and so < h(t,X) h{t,Z),X - Z >= 0. Let \\Z\\ < 7V(0) + 2. Then <h(t,X)-h(t,Z),X-Z> = <h(t,X)tl>(X)-h(t,Z)t/;(X),X-Z> Since —h is monotone and i/> is positive, the first term of the right hand side of the inequality is negative. Now as Z is bounded and ip is C°° with compact support, the second term is < M(y)\\X — Z\\2 for some M(y). Since by Lemma 2.3 the solution of (2.9) is bounded by T<p(0), it never leaves the set {\\x\\ < T(p(0) + 1}, so the unique solution of (2.9) is also the unique solution of the equation X(t) = h(s, X(s))ds. Thus without loss of generality we can assume h(t,.) has compact support. Second: Define k(x) to be equal to C e x p { } °n < 1} and equal to zero on {\\x\\ ^ !}• Then k(x) is C°° with support in the unit ball < 1}. Choose C such that J R „ k(x)dx = 1. Introduce, for e > 0 This is a C°°-function called the mollifier of u. Now define hE(t,x) — I£h{t, -)(x). Since for any e the first derivatives with respect to x of J£u(x) and also J£u(x) itself are bounded in terms of the maximum of ||«(x)||, + < h(t, Z)^(X) - h(t, Z)^(Z), X - Z > . By the Schwarz inequality this is < j,(x) < h(t,x)-h(t,z),x-z> +\\h(t,z)\\ \^{x)-y>(z)|||A--z||. Chapter 2. THE MEASURABILITY OF THE SOLUTION 21 then h£ and Dxh£ are bounded in terms of the maximum of \\h(t,x)\\. Thus there exist Ki(y) and K2{y) independent of e such that Kx{y)> sup \\Dxhe(x)\\ and K2(y) > sup ||M*)II-\\x\\<T<p(y,0)+2 \\x\\<TV(y,0)+2 By the mean value theorem we have \\he(t,y,x2)-h£(t,y,x1)\\ < KiMWx* - Xl\\. (2.10) Now consider the following integral equation : Xe{t) = f h£(s,X£(s))ds. (2.11) Jo Equation (2.11) can be solved by the Picard method. Since y —> h(t,y,x) is measurable in (t,y), y —* h£(t,y,x) is measurable in (t,y). Then the solution X£ of equation (2.11) is measurable and so is lim^o-^e- To complete the proof of Theorem 2.3 we need to prove the following lemma. Lemma 2.4 The solution X£ of (2.11) converges uniformly to a solution X of (2.9). Proof: From (2.9) and (2.11) we have X£(t)-X(t) = [\h£(s,X£(s))-h(s,X(s)))ds. Jo Then \\x£(t) - x(t)\\ < fwh^x^-h^xismds Jo + ft\\h£(s,X(s)-h(s,X(s))\\ds. Jo By (2.10) we see this is < Kx{y) f \\X£{s) - X{s)\\ds Jo + /Ve(s,*(a))-M*,*(*j)ll<fe. ./o Chapter 2. THE MEASURABILITY OF THE SOLUTION 22 By Gronwall's inequality we have Sup0<t<T\\Xs(t) - X(t)\\ < exp(TKi) I \M*,X(s)) — Jo h(s,X(s))\\ds. But h£(s, X(s)) —» h(s,X(s)) pointwise and ||/ie(2, X(2))|| < K2 so by the dominated convergence theorem, 2.3 The Proof of the Measurability in Theorem 2.2 Now we shall briefly outline the proof of the existence from Vainberg(1973), Th(26.1), page 322 and give a proof of the measurability of the solution of equation (2.8). Vainberg constructs a solution of this equation by first solving the finite-dimensional projections of the equation, and then taking the limit. 
Since the solution of the infinite-dimensional case is constructed as a limit of finite-dimensional solutions, one merely needs to trace the proof and check that the measurability holds at each stage. There is one extra hypothesis in [Vainberg, Th(26.1) ], namely that t —• g(t,x) is demicontinuous, whereas in our case, we merely assume g is measurable and uniformly bounded in t [Hypothesis 2.1 ( (a) (b))]. However, the demicontinuity of g is not used in showing the existence of the solution of the integral equation (2.8). It is only used to show the inequality (2.4) for the finite-dimensional case. We have reproved (2.4) in Lemma 2.3. Now let (Hn) be an increasing sequence of subspaces of H such that UnHn is dense in H, and let Jn be the orthogonal projection of H onto Hn, so that Jn —• I strongly. Consider the integral equation sup0<t<T\\X£(t) - X(t)\\ -> 0. Q.E.D (2.12) First let us show that Jng satisfies Hypothesis 2.2. Chapter 2. THE MEASURABILITY OF THE SOLUTION 23 • Jng(t,y,') is continuous . Since g(t,y,-) is demicontinuous, g(t,Xk) g(t,x) weakly when \\xk — x\\ —• 0. But Jng takes its values in the finite-dimensional space Hn, where weak and strong convergence coincide, therefore \\Jng(t,xk) - Jng(t,x)\\ -»• 0, and Jng(t,y,-) is continuous. • Jng(t,y, •) is monotone from Hn to Hn. Let X, Z G Hn. Then < Jng(t,X)-Jng(t,Z),X-Z> = < g(t,X) - g(t, Z),JnX - JnZ > (2.13) since Jn = J*. For X, Z G Hn, Jn(X - Z) = X - Z so the left hand side of (2.13) is negative, hence Jng(t,y,-) is monotone. • Jn9(t,y) satisfies Hypothesis 2.1(a). • Jng(t, y) is uniformly bounded by tp. Now by Theorem 2.3, equation (2.12) has a unique continuous measurable solution which satisfies ||*n(<)|| <2 [t\\Jng(s,0)\\ds<2 ['\\g(s,0)\\ds < 2ZV(y,0). (2.14) ^0 Jo Now we are going to prove Lemma 2.5 For each y, Xn(-,y) converges weakly in L2(S,H) to a solution X(-,y) of (2.8). Furthermore X(-,y) is continuous for each y. Proof: Let (Xnk) be an arbitrary subsequence of (Xn). By (2.14) and Hypothesis 2.1 (b) we have \\g(t,y,Xnk(y,t))\\ < <p(y, \\Xnk(y,t)\\) < <p(y,2T<p(y,0)) Chapter 2. THE MEASURABILITY OF THE SOL UTION 24 so g(.,Xnk(.)) is a bounded sequence in L2(S,H). Then there is a further subsequence (rik,) such that g(., Xnki (.)) —• Z{.) weakly in L2(S,H) as / —• oo. Each Xn satisfies (2.12) and it can be proved that Xnkt(.) —> JQZ(S)OIS weakly [see Vainberg]. We define X to be the weak limit of Xnki in L2(S,H) . Vainberg proved that X(y,.) is continuous and is a solution of (2.8) [ see Vainberg, p 325-326]. Since the solution X(-,y) is unique, every subsequence of (Xn) has in turn a subse-quence which converges to X(y, •) weakly, it follows that the whole sequence Xn converges weakly to X. Q.E.D To complete the proof of Theorem 2.2 we need to show the measurability of X{-, •). Fix t G S , h G H, since by Theorem 2.3 Xn is measurable in (t, ?/), then JQ < Xn(s,y),h > ds is measurable in (t, y). But /J < Xn(s,y),h > ds converges to JQ < X(s, y), h > ds pointwise, so /Q < X(s, y), h > ds is measurable in (t, y) . As the integrand < X(s,y),h > is continuous in 5 , then d /* — Jo < X(s, y), h>ds=< X(t, y), h > and since the integral is measurable in (t,y), the function < X(t, y), h > is measurable. By the separablity of H, X(t, y) is measurable in (t, y) . Q.E.D Chapter 3 STOCHASTIC CONVOLUTION INTEGRALS 3.1 Introduction and Preliminaries Let (fi, T, Tt, P) be a complete stochastic basis with a right continuous filtration. 
This means that T is complete with respect to P and each J-t contains all P-null sets of T and Ts = r\t>a for all s. Let L(H) be the space of linear bounded operators on i f with norm || Let (Xt)teH+ be an if-valued stochastic process. We say that X is a locally square-integrable process (l.s.i) if there is a sequence of stopping times (Tn) with Tn < T n + i , Tn —» oo a.s. such that for all n, -E{||^tAT„||2l{Tn>o}} is bounded in t. Note that a continuous adapted process is l.s.i, as is one with bounded jumps. A process ZT with values in i f is a semimartingale if there exists an if-valued local martingale MT and a process of finite variation VT such that ZT = MT + VT. In this thesis we always assume MQ = ZQ — Va = 0. We shall say that the semimartingale Z is a l.s.i. semimartingale if M is a l.s.i. local martingale and V is a process of finite variation such that \V\ is l.s.i. Let M be an H-valued, cadlag local martingale. Let V denote an if-valued, y^-adapted, cadlag process with total variation |V | . Let Z be an if-valued cadlag semi-martingale. Let XQ be an if-valued TQ -measurable random variable. Consider on i f the 25 Chapter 3. STOCHASTIC CONVOLUTION INTEGRALS 26 linear stochastic evolution equation formally written as dXt = A(t)Xtdt + dZt (3.1) < ( X(0) = X0, where {A(t), t G S} is a family of closed linear operators on H whose domain D is independent of t G S and is dense in H. (Useful background and motivation for stochastic evolution equations in Hilbert spaces can be found in Curtain and Pritchard We will define two types of solutions of Eq (3.1). Definition 3.1 An H-valued process X is a strong solution of (3.1) if and only if (i) Xt G D for almost all t G S, X. G LX{S,H) a.s., and A(.)X G LX{S,H) a.s.. (ii) Xt = X0 + foA(s)Xsds + Zt a.s. for each t G S. Suppose that {A(t) : t G S} generates a unique evolution operator {U(t,s) : 0 < s < t < T}, i.e, the U(t,s) are bounded linear operators on H such that and (t,s) —»• U(t, s) is strongly continuous for 0 < s < t < T , and certain relationships between A and U hold, which we will introduce later on. Definition 3.2 An H-valued process X is a mild solution of (3.1) if and only if (1978)). U(t, t) = 7, U(t; s) U(s, r) = U(t, r) for 0 < r < s < t < T , (i) X. G L1(S,H) a.s. (ii) Xt = U(t,0)Xo + tiU(t,s)dZs a.s. for each t £ S. Definition 3.3 We say the evolution operator U(t,s) is an almost strong evolution op erator with generator A{t) if it satisfies the following: (a) For almost all s < t and for each x G D (3.2) Chapter 3. STOCHASTIC CONVOLUTION INTEGRALS 27 (b) Let x G D and s 6 S. For almost all t > s U(t,s)D C D; (3.3) A(r)U(r, s)xdr = (U(t, s) - I)x. (3.4) If U and A satisfy (3.2), (3.3), and (3.4) for every s £ S, u is called a strong evolution operator. Remark 3.1 (i) If {A(t) : t 6 S} is the generator of an almost strong evolution operator U(t,s), then (3.1) (with Z = 0 and X0 G D) has a unique solution X(t) — U(t, 0)Xo which is differentiable almost everywhere. (ii) For a.e. 0 < s < t < T and each x G D we have —U(t,s)x = A(t)U(t,s)x, (3.5) —U(t, s)x = -U(t, s)A(s)x. (3.6) We say U(t,s) is an exponentially bounded with parameter X on S if there is A G R such that ||tf(M)IU ^ e H t ~ s ) for a.e. 0 < s < t < T. (3.7) Note that if an almost strong evolution operator U(t, s) is exponentially bounded on S with parameter A, we have < A(t)x,x >< X\\x\\2, Vx G D. (3.8) This can be seen because if x G D and t > s , \\U(t,s)x\\2 - \\x\\2 < (e2M*-s) - 1)11^ 2 t — s ~ t — s Chapter 3. 
STOCHASTIC CONVOL UTION INTEGRALS 28 or \\U(t,S)x\\2 - llxlP ( c2A(t-.) _ hm^-t- " v ' " ! U L < bmt_t,+ i i i L L = 2 A i 2 ; t — 5 t — 5 but l i m ^ J 1 ^ ' ^ " N | 2 = ^lltfft-MV. = 2 < A(s)x,x > a.e.. Let s) := v4(t)[//7—/l(.s)]_1. By (3.1cl), if p > A then B(t, s) is a bounded operator [ see Kato (1953) Lemma(2)]. The following are the relevant hypotheses concerning A and U: Hypothesis 3.1 (a) The domain T>(A(t)) := D is independent oft for t G S and is dense in H; (b) {^ 4(i) : t G S} generates a unique almost strong evolution operator U(t,s); (c) U(t,s) is exponentially bounded on S with parameter A; (d) B(t, s) is uniformly bounded in (t, s), that is, for p > A there is a K(p) > 0 such that \\B(t, S)\\L < -^(AO for every s,t (this is the case ifB(t,s) is continuous in t in the sense of the norm \\\\T, at least for some s). We refer to Pazy (1983) and Tanabe (1979) for sufficient conditions for the existence of an evolution operator with the properties 3.1(a)-(d). These conditions apply to a large class of delay equations, and to parabolic and hyperbolic equations [see for example Curtain and Pritchard (1978)]. Consider the stochastic convolution integral Xt = fQU(t,s)dMs. Notice that because the integrand depends on t as well as on s, Xt is not necessarily a local martingale. However, it is possible to prove some results analogous to the well-known martingale inequalities. Kotelenez (1982, 1984) proved submartingale type and stopped-Doob in-equalities for this object. In this chapter we are going to prove an Ito-type inequality and a Burkholder-type inequality. For the definition and properties of stochastic integrals with respect to a Chapter 3. STOCHASTIC CONVOLUTION INTEGRALS 29 semimartingale see Metivier (1982), and for the properties of stochastic convolution in-tegrals see Kotelenez (1982, 1984). Proposition 3.1 (Kotelenez (1982)) Suppose U and A satisfy Hypothesis 3.1 (b)-(c). If the l.s.i. semimartingale Z is continuous (respectively cadlag), then Xt : = Jo £/(£, s)o?Zs := f(0^U(t,s)dZs has a version X with continuous (respectively cadlag) sample paths. Note that Kotelenez (1982) has proved the above theorem for martingales, but the extension to semimartingales is immediate. Thus, in dealing with JQU^, s)dZs we may always assume that / 0 4 U(t, s)dZs is cadlag (or continuous if Z is continuous). Let A be an unbounded operator on H with dense domain D. Let || \\D be the norm defined on D by NlL = l|A*ir + NI2, xeD. This norm is called the graph norm on D. Note that it generated by the inner product < x,y >D=< Ax, Ay > + < x,y > Remark 3.2 (i) An operator A is closed if and only if its domain D is complete under the graph norm [see Reed and Simon(1972),problem 15(a), p314-J (ii) Suppose A is a closed linear operator with dense domain D. Then D is a Hilbert space with graph norm || Let Z be an H-valued semimartingale. Suppose F(.) is an L(H, D)-valued measurable function on S with ™Ptes\\F(t)\\ < °°- (3-9) Then fQl F(s)dZs is a D-valued semimartingale [see Metivier(1982) page 156 ,157]. More-over A f F(s)dZs = f AF(s)dZa w.p.l (3.10) Jo Jo Chapter 3. STOCHASTIC CONVOLUTION INTEGRALS 30 (3.10) can be seen by approximating F(s) by step functions in L(H,D), and then taking limits. Now we are going to prove our version of Ito's inequality. 3.2 Ito-Type Inequality Theorem 3.1 (Ito's inequality) Let {Zt,t G S} be an H-valued cadlag l.s.i. semi-martingale. Suppose U and A satisfy Hypothesis 3.1(a)-(c). 
If Xt = U(t,0)Xo + [tU{t,s)dZs, (3.11) Jo then \\Xt\\2 < e2Xt\\XQ\\2 + 2 [ \ 2 X ^ <X(s-),dZs > Jo + e2U JQe-XsdZs^ , teS, (3.12) where [Z]t is the real quadratic variation of Z. Before proving Theorem (3.1) we are going to prove two lemmas. Suppose U(t,s) satisfies (3.1c) for some A G R. Define *7i(M) = e-x{t-s^U(t,s), Ax{t) = A{t)-\I, Z}=fte-x'dZ. Jo and X} = e~XtX., Lemma 3.1 If U and A satisfy Hypothesis 3.1, then Ui and Ax satisfy Hypothesis 3.1 with A = 0. Moreover, Xt satisfies (3.11) if and only if X] satisfies X] = U1(t,0)Xo+ [tU1(t,s)dZ1s. (3.13) Jo Proof: ||C/i(M)IU = e~X('t~^W(t,s)\\L < 1 a.e. By the definition of U\ we can rewrite (3.11) as Xt = eXtU1(t,0)Xo + eXt f* e~XsUi{t,s)dZs. Using the definition of X] and Z\ we can rewrite the above as (3.13). Q.E.D Chapter 3. STOCHASTIC CONVOL UTION INTEGRALS 31 Lemma 3.2 If inequality (3.12) is satisfied when (i) M is globally square integrable, (ii) the total variation \V\t of V satisfies E{\V\]}<oo, (iii) and \\X0\\ is bounded, then (3.12) is also satisfied without these restrictions. Proof: Since Z is l.s.i. then by definition there is a sequence of stopping times (Tn) with Tn < Tn+i, Tn —>• oo such that M?:=MTATN, VTN:=VT/,TN and Z«n := Z t A r „ satisfy the above conditions. Let XQ = ^ol{||x0||<n}-Now consider the integral equation x? = u(t,o)xz+ ftu(t,s)dz:. Jo Since XQ is bounded in norm, then X£,MN, and VN satisfy the above conditions, so we have l l ^ r i l 2 < W I I 2 + 2 f < Xn(s~),dZ: > +[Z%. (3.14) Jo Define Sn - Tn l{||*0||<n}-Note that X£ = = X0 and Z t B = Z f + 1 = Zt on [0, Sn]. Then by uniqueness X? = X?+1 = XT on [0,Sn], Chapter 3. STOCHASTIC CONVOLUTION INTEGRALS 32 so XT = lim^X™. Then we can rewrite (3.14) as ||*«||21«<*. < ||Xo|| 2l t<s n +2 / < X(s~),dZs > +[Z}S. ~ ~ Jo Since P{Sn = T} -> 1 this implies (3.12). Q.E.D Proof of Theorem 3.1 : By Lemma (3.1) we can assume A = 0 in Hypothesis 3.1(c). Then for all x G D, < A(t)x, x > < 0 for a.e. t. Define a map Rn(t) : H —> D by Rn(t) = n(nl — v 4 ( £ ) ) - 1 . Then Rn(t) is defined on all of H. Since < A(t)x,x > < 0 for a.e. t, then < \{nl — A(t))x,x > > \\x\\2 for a.e. t G S ,Wx G D. By the Schwarz inequality we have for all x G D that \\l-{nl - A{t))x\\ > \\x\l so | | i? n(t)| |L < 1 for a.e. t ... We proceed as in Kotelenez (1984) and approximate XT by Yosida's method. Define a semimartingale Zn(t) := /0* Rn(s)dZ(s), a martingale Mn(i) := /0' Rn(s)dM(s) and a process V„(i) := f£ Rn(s)dV(s). Note that since i?„(t) : H —> Z},then Z n(t) G D, Mn(t) G .D and V^(t) G -D. Let {XQ } be a sequence in Z) which converges almost surely to a limit X0 such that \\XQ \\ < \\XQ\\ for all n. Define Xn(t):= U(t,0)X^ + fTU(t,s)dZn(s). (3.15) JO VFe are going to prove that \\XN — X ^ —> 0 m X 2 . Note that by Doob's inequality for convolution integrals [ see Kotelenez (1984)], we have E{\[' U{.,s)d(Mn(s) - M(s)) 2 } < E{[Mn - M]T} {\\J0 oo) Chapter 3. STOCHASTIC CONVOLUTION INTEGRALS 33 while [Mn - M]T 0 in L1 [ see Kotelenez (1984)]. Then \f'U(.,s)d(Mn(s) - M(s)) \Jo 0 in L2 and \\U(.,0)(X^n) - A-0)||oo < | | ^ o " Xo\\ - 0 boundedly. so that it is enough to show that 8upo<,<rll ftU(t,S)d(Vn(s) - V(s))\\ — J o 0 in L . (3.16) But since H is a separable Hilbert space then by the Radon-Nikodym property [see: Chatterji (1976)], we can write dV(t) = Qv(t)d\V\(t) for a.e. t, where \V\(t) is the total variation of V on [0, t] and {Qv(t) , 0 < t < T) is an integrable //-valued process with \\QV\\ < 1 a.e. Now Vn(t) - V(t) = f\Rn(s) - I]dV(s). 
Jo Then we have d(Vn(s) - V(s)) = (Rn(s) - I)Qv(s)d\V\(s). Now \\U(t,S)||.L < 1 so we have || fu(.,s)d(Vn(s) - V{s))U < fT\\(Rn(s)-I)Qv(s)\\d\V\(s). (3.17) Jo Jo Since Rn(s) —> / strongly then ||(/i!n(.s) — /)(5V(-s)|| —*• 0 for a.e. s G S, and since H-Rn(-s) — I\\L < 2 then the integrand is < 2 a.e. Then by the dominated convergence theorem, the right hand side of (3.17) approaches zero almost surely, and since it is bounded by square integrable process |V|(T), we get (3.16 ). This can be seen by using dominated convergence theorem. Hence \\Xn — X||oo —> 0 in L2. Chapter 3. STOCHASTIC CONVOL UTION INTEGRALS 34 Let us first prove Ito's inequality (3.12) for Xt = U(t,Q)X0+ [tU{t,s)dZs, (3.18) Jo where Z satisfies the following. Hypothesis 3.2 (a) N is a D-valued square integrable martingale; (b) V is a D-valued process of bounded variation with E{\V\2}<oc; (c) XQ is a D-valued bounded random variable; (d) Z = N + V. Lemma 3.3 If X0 and Z satisfy Hypothesis 3.2 and if X is a solution of (3.18), then \\Xt\\2 < | |Xo|| 2 + /* < X(s'), dZs > +[Z]t. (3.19) <>o Proof: Define Yt := U(t,0)Xo + (tU{t,s)dVa Jo and Yt:= [tU(t,s)dN3. Jo Since dVa = (5v(^)c?|y|s a.e. and because V and Xo satisfy Hypothesis 3.2, so by [ Theorem 2.38 page 45 Curtain and Pritchard (1978)], Yt satisfies Yt = X0+ [* A(s)Ys + Vt. (3.20) Jo Let {e,- : i = 1,2,...} be a basis for the Hilbert space D. Let Jk be the projection operator on the manifold generated by {e l 5 e 2 , e ^ } . Let N\ =< Nt, e,- >n, i = l,...,k be real-valued square integrable martingales such that O O Nt = £ JV/e,-. i=l Chapter 3. STOCHASTIC CONVOLUTION INTEGRALS 35 Define Yk(t) by Yk(t) := £ [tU(t,s)eidNi = f U(t,s)d(JkNs). • i J 0 »/ 0 Define + ? and Zfc := JkN + V. Since [(/ — Jk)N] converges to zero in Z 1 , then by Doob inequality | |**-A"| |oo - » 0 in L\ Yk(t) satisfies Yk(t) = f A(s)Yk(s)ds + JkNt. (3.21) Jo This can be seen by (3.4) and Fubini's theorem : /* A(r)Yk(r)dr = T, f* A(r)( f U(r,s)eidNi)dr Jo t = 1 Jo Jo = j l f i f A{r)U{r,s)eidr)dNia ~[ JO Js = J2f\u(t1a)-I]ei4Ni = Ytk-JkNt. Now Yk and Yk satisfy (3.20) and (3.21) and A is linear so Xk(t) = XhQ + f* A(s)Xk(s)ds + Zk(t). (3.22) Jo Since A(.)Xk(.) 6 L1(S,H), we can apply the usual Hilbert space form of Ito's formula [see Metivier (1982), page 184, Theorem (26.5)] to see that IWI2 = ll^oll2 + 2 f <A(s)Xk(s),Xk(s)>ds Jo + 2 f <Xk(s-),dZk(s)>+[Zk]t. (3.23) Jo But [Zk]t < [Z]t. Moreover, < A(s)Xk(s), Xk(s) > < 0. a.e., so (3.23) implies that ||**(*)H2 < ll^oll2 +2 f < Xk(3-),dZk(s) > + [Z]t. (3.24) Jo Chapter 3. STOCHASTIC CONVOLUTION INTEGRALS 36 To complete the proof of the Lemma, we only need to show that /' < Xk(s~), dZk(s) >-*[*< X(s~), dZ(s) > in probability. (3.25) Jo Jo Now | /* < Xk(s-),dZk(s) > - f < X(s-),dZ(s)\ Jo Jo < I f* <Xk{s~) - X(s-),dJKN(S)>\ + | f* <X(s-),d({I-Jk}Nk{s))>\ Jo Jo + I / * < X f c ( 0 - X(s-),dV(s)>\ Jo •= 13(01 +13(01+ 13(01-• Ik is a local martingale and its quadratic variation process satisfies [Il(t)] < f \\Ms-) - X(s-)\\2d[JKN]S < \\Xk - X\\l[JkN]T. (3.26) Jo Since [JKN]T < [N]T a n d \\Xk — X]]^ converges to zero in I?. It follows from (3.26) and boundedness of E{[N]T} that [II]1^2 converges to zero in L1. and so by inequality of Burkholder-Gundy-Davis sup 0 < t < T |7^(t)| -—> 0 in L1. • Ik is also a local martingale, and its quadratic variation process satisfies [Il)t < fT \\X(s-)\\2d[(I - JK)N}S Jo < \\X\\U(I ~ J*)N}T. 
(3.27) but [(I-Jk)N]T 0 in L1 and is finite, then (3.27) implies that [T^]1/2 con-verges to zero in L1, and so by inequality of Burkholder-Gundy-Davis sup0< t<T|Ik(01 —> 0 in L1. • Since dV(s) = Qv(s)d\V\(s) a.e. s G S, then sup0<t<xl43(0l < \\Xk-X\\eo\V\(T). Chapter 3. STOCHASTIC CONVOLUTION INTEGRALS 37 Since \\Xk — XWoo —• 0 in probability then sup0<t<T\Ik(t)\ —> 0 in probability. Q.E.D Now since Xg, Mn(s), Vn(s), and Zn(s) satisfy Hypothesis 3.2 then by Lemma 3.3 we have M l 2 = M l 2 + 2 f <A(s)Xn(s),Xn(s)>ds Jo + 2 /* < Xn(s-),dZn(s) > +[Zn]t. (3.28) Jo But [Zn]t < Jo ||JR„(s)|||,d[Z],. (See Metivier, Pellaumail (1980), 4.2, page (52)). Since H-RnOOlU < 1 a.e, so [Zn]t < [Z]t. Moreover, < A(s)Xn(s),Xn(s) >< 0 a.e. and ||^ o II ^  ll^oll, so that (3.28) implies that \\Xn(t)\\2 < \\X0\\2 +2 /* < Xn(s-),dZn(s) > +[Z]t. (3.29) Jo To complete the proof of the theorem, we only need to show that f* < Xn(s-),dZn(s) > - » / " * < X(s-),dZ(s) > in probability. (3.30) Jo Jo Now | /* < Xn(s-),dZn(s) > - f* < X(s-),dZ(s) > | Jo Jo < I f <Xn(s~) - X(s-),dMn(s)>\ + | [* <X(s-),d(M(s)-Mn(s))>\ Jo Jo + I f* < Xn(s~) - X(s-),dVn(s) > | + | f <X(s-),d(V(s)-Vn(s)) > I Jo Jo ••= \m\+\m\ + um+\m\-Chapter 3. STOCHASTIC CONVOLUTION INTEGRALS 38 • I\ is a local martingale and its quadratic variation process satisfies < f l l*-(0 ~ X{s-)fd[Mn)s < \\Xn - X\\l[Mn]T. (3.31) Jo But [Mn}T < [M}T. We have seen that | | * » - ^| |oo - » 0 in L2 So (3.31) implies that converges to zero in L1 and by inequality of Burkholder-Gundy-Davis suPo<t<rl3(0l ~* 0 in L 2 • I2 is also a local martingale, and its quadratic variation process satisfies [%]t < fT \\X(s-)\\2d[M - Mn]a Jo < \\X\\l[M - Mn]T. (3.32) Since [M - Mn]T -> 0 in L1. It follows from (3.32) that [72]f - • 0 in L1 and so by inequality of Burkholder-Gundy-Davis suPo<t<rl3(0l -» 0 in probability. • Since dK(s) = #n(s)<2v(s)d|V|(s) for a.e. s and ||i2„(s) Qv(s)\\ < 1 for a.e. s, then sup0<t<rl/n(')l < ||x„-x|UV|(r). Since \\Xn — A"||oo —> 0, in probability, sup 0 < t < T | /^( i ) | —> 0 in probability. • Since d(V(s) - Vn(s)) = (I - Rn(s))Qv(s)d\V\(s) a.e. s, then Bu P o <Krl^(0l < ll*IL fT\\(An(s)-I)Qv(s)\\d\V\(s). — Jo But (An(s) — I)Qv(s) converges to zero a.e. and its norm is bounded by 2, so by the bounded convergence theorem sup0<t<T\I^(t)\ tends to zero. Q.E.D Chapter 3. STOCHASTIC CONVOLUTION INTEGRALS 39 Remark 3.3 In the proof of Theorem 3.1 we have used the local square integrability of \V\t only to prove that sup0<t<T\In(t)\ and sup0<t<T\I2(t)\ tend to zero in probability, so if M = I\ = I2 = 0, we don't need this restriction. 3.3 Burkholder-Type Inequality Before proving Burkholder's inequality for convolutions, we are going to prove the fol-lowing lemma which we will need in Chapters 7 and 8. Lemma 3.4 Let p > 1 and let Cp be the constant in inequality of Burkholder-Gundy-Davis for real-valued martingales. Then for K > 0 we have E |su P o< e< t | fQ < Xs.,dMs > |>J < CPE ((X*)'[Af]tf) < §rJE?(W)2') + %£-E(m), (3-33) where X* = sup0<s<J|Xs||. Proof: By Burkholder's inequality, we have E{supo<0<t\ f < Xs_,dM3 > |p} < CPE{[[' < Xs_,dMs >]>}. - - Jo Jo But [Jo < Xs_,dMs >}t < (X*)2[M]t so this is < CpE{(X*)p[M]f}. But since ab < \{j^o? -f Kb2), the proof of the lemma is complete. Q.E.D Theorem 3.2 Let A and U satisfy Hypothesis 3.1 with A = 0. If p > 1, then E[\\ fu(.,s)dMs\\%]<KpE[[M]pT], Jo where Kp = 22pC2p + 2P. Chapter 3. 
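Before turning to the proof, the content of Theorem 3.2 can be checked numerically in finite dimensions. The sketch below takes an arbitrary diagonal negative definite generator and a constant diffusion matrix (illustrative choices, not taken from the text), estimates the left-hand side by Monte Carlo for $p = 2$, and compares it with $E\{[M]_T^p\}$; the theorem only asserts that the ratio stays below the universal constant $K_p$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy finite-dimensional check of the Burkholder-type bound of Theorem 3.2:
#   E[ sup_{t<=T} || int_0^t U(t,s) dM_s ||^{2p} ]  <=  K_p * E[ [M]_T^p ],
# with U(t,s) = exp((t-s)A) for a symmetric negative definite A (so lambda = 0)
# and M_t = sigma W_t, whose real quadratic variation is [M]_t = trace(sigma sigma^T) t.
# All parameter choices below are illustrative.

d, p, T, n_steps, n_paths = 4, 2, 1.0, 400, 2000
dt = T / n_steps

A = -np.diag([1.0, 2.0, 3.0, 4.0])          # generator of a contraction semigroup
sigma = 0.5 * np.eye(d)                     # constant Hilbert-Schmidt diffusion
expA = np.diag(np.exp(np.diag(A) * dt))     # U(dt) = exp(dt*A), exact for diagonal A

sup_norms = np.zeros(n_paths)
for j in range(n_paths):
    X = np.zeros(d)
    m = 0.0
    for _ in range(n_steps):
        dW = rng.normal(scale=np.sqrt(dt), size=d)
        # exponential-Euler step for the stochastic convolution X_t = int U(t,s) dM_s
        X = expA @ (X + sigma @ dW)
        m = max(m, np.linalg.norm(X))
    sup_norms[j] = m

lhs = np.mean(sup_norms ** (2 * p))
QV_T = np.trace(sigma @ sigma.T) * T        # [M]_T (deterministic here)
print("E[sup_t ||X_t||^{2p}] ~", lhs)
print("E[[M]_T^p]            =", QV_T ** p)
print("ratio (bounded by K_p per Theorem 3.2):", lhs / QV_T ** p)
```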
STOCHASTIC CONVOLUTION INTEGRALS 40 Proof: Without loss of generality we can assume that Mt is a cadlag globally square-integrable martingale. Let Xt = ftU(t,s)dMa, Xn(t)= ftU(t,s)dMn(s), Jo Jo where Mn(t)= f Rn(s)dM(s). Jo We will prove that for all n E{Un\S£\ < KpE{[MfT}. (3.34) Note that this implies the theorem, since by the proof of Theorem (3.1) \\Xn — A"|loo —• 0 in probability, and we can apply Fatou's Lemma (using, if necessary, a subsequence) to obtain £(II*IILP} < KPE{[M]PT}. It remains to prove (3.34). Now by (3.12) we have H^nWII2 < f <Xn(s-),dMn(s)> +[Mn]t. Jo Then \Xn(t)\\2p < 2 p{| / ' < Xn(s-),dMn(s) > \p + [M„H, Jo so E{\\Xn\\%} < 2^E{\\ /' < Xn(s-),dMn(s) > I&} + 2>E{[Mn]ft. Jo Now [Mn]t < [M]t, so by Lemma 3.4 this is < -^E{\\Xn\\H} + n^f- + l)E{[M]T}. Chapter 3. STOCHASTIC CONVOLUTION INTEGRALS 41 Choose K = 2PCP; then we have E{\\Xn\\%} < \E{\\Xn\\%} + (S!S£ + 2 P ) £ ; { [ A F ] P } . To complete the proof of the Theorem we need to show that if i?{[M]j} < oo, then -^ {ll^ nlloo} < 0 0 • By the stochastic version of integration by parts we have rt d Xn(t)= Mn(t) - Jo —(U(t,s))Mn(s)ds Since Mn(s) £ D, by (3.6) we have d a (U(t,s))Mn(s) = -U{t,s)A(s)Mn(s), a.e. s <E S. us Then Xn(t) = Mn(t)+ ftU(t,s)A(s)Mn(s)ds. (3.35) Jo Define a martingale Nn(t):= (I - A(0))Mn(t) = f\l - A(0))Rn(s)dM(s). Jo Now we can rewrite (3.35) as Xn(t) = Mn(t) + f U(t,s)A(s)(I - AiO^N^ds Jo and so \\Xn(t)\\2p < 2 2 *{| |M n (*) | | 2 p + £\\U(t,s)A(s)(I - A(0))-'\\2Lp\\Nn(s)\\2pds] . (3.36) But by (3.Id) there exists K such that ||A(s)(I — A(0)) - 1 | |£, < K a.e, and since ||t^(^5)||z, < 1 a - e - we can write (3.36) as E{\\Xn\\l) < 2^{E(\\M4^)+K^TE(\\N42^)}. But [Mn]t < [M]t and by (3.1d) there exists Kn > 0 such that \\(I - A(0))Rn(s)\\L < Kn. Then [Nn]T < K2n[M\T. Since E([M]PT) < oo we are done. Chapter 3. STOCHASTIC CONVOLUTION INTEGRALS 42 Q.E.D We have proved the theorem when A = 0 in (3.1c). We can easily generalize to the case when A > 0 by the following corollary. Corollary 3.1 If X > 0 and p > 1 we have 2p E where M\ : = f* e " A s dM Jo U(., s)dMs I < Kpe2pXTE ([M1]^) , (3.37) Proof: Define Xt = f£U(t,s)dMa. Then by Lemma 3.1 X] = fiUi(t,s)dM„ and by Theorem 3.2 one has £7{ | |JVT 1 < KpE^M1}?}. By substituting X} = e~XtXt we get (3.37). Q.E.D Theorem 3.2 also gives us Burkholder's inequality for //-valued local martingales by setting A{t) = Oand£/(£, s) = / . To complete the proof of the inequality of Burkholder-Gundy-Davis for //-valued martingales we need to prove: Theorem 3.3 E([M]pt) < 22p(l + C2)E((Mt)2p). (3.38) Proof: By Ito's formula we have [M]t = \\Mt\\2 - /„ < Ms_,dMs >. Then [M]P < 2 p | | M t | | 2 p + 2p\ j* < Ms-,dMs > \p, Jo so E([M]P) < 2pE((M:)2p + 2pE(supQ<e<t\ f <Ms~,dMs > \p). — Jo By Lemma 3.4 this is ?PC 2pC K < 2pE((M;yp) + -^E{{M;)2P) + -^-E{\M)P). Chapter 3. STOCHASTIC CONVOL UTION INTEGRALS 43 Choose K = 2~PC~' ^ ° c o m p l e t e the proof of the theorem we need to show that if E((M*)2p) is finite , then E([M]P) is finite. Define the stopping time Tn = inf{< : [M]t > n} then [M]TnM < n + sup || A M J 2 < n + 4(M*) 2. s<t So [M]x n A t G L p and we have E{W]PTnM) < 2 2 p ( l + C2p)E((M}nM)2p). Now let n —* oo. Q.E.D Remark 3.4 Combining Theorem 3.2 and Theorem 3.3 we have 2p1 E f'U(.,s)dMs < KPE([M]PT) < K'pE (\\M\\%) . (3.39) JO oo V ' Chapter 4 A SEMILINEAR EQUATION 4.1 Introduction Let (fi, JF, Tti P) be a complete stochastic basis with a right continuous filtration. Let Z be an if-valued cadlag semimartingale. 
Consider the initial value problem of the semilinear stochastic evolution equation of the form: dXt = A{t)Xtdt + ft{Xt)dt + dZt (4.1) X(0) = X0, where A = {A(t), t £ S} is a family of operators satisfying the following hypothesis. Hypothesis 4.1 (a) There exists A 6 R such that for all s > 0, (A(s) — XI) is a generator of a contraction semigroup; (b) the operator-valued function (—A(t) + pil)~x is strongly continuously differentiable with respect to t for t > 0 and p > A; (c) there exists a fundamental solution U(t,s) of the linear equation u(t) = A(t)u(t). Moreover, if UQ £ H and f £ C(S,H), then the equation u(t) = A(t)u(t) + f(t) u(0) = u0 has a strong solution u given by u(t) = U(t, 0)uo + (* U(t, s)f(s)ds. (4.3) Jo If u0 £ D(A(0)) and f £ Cl{S,H), then (4-3) is also a strong solution of (4.2). 44 Chapter 4. A SEMILINEAR EQUATION 45 Note that an evolution operator which satisfies the above condition is a strong evolution operator [see Curtain (1977)] which satisfies Hypothesis 3.1(b) and (c). Remark 4.1 Note that Hypothesis 4-1 holds, for example, if{A(t), t £ R+} is a family of closed operators in H with domain D independent oft, satisfying the following conditions: (i) considered as a mapping of D (with graph norm ) into H, A(t) is C1 in t on R + in the strong operator topology; (ii) if A(t)* is the adjoint of A(t), then D(A(t)*) C D for all t; (iii) 3A £ R such that < A(t)x,x >< \\\x\\2, VxeD(A{t)), VteS. Proof: See Browder (1964) We say Xt is a mild solution of (4.1) if it is a strong solution of the integral equation Xt = U{t, 0)X0 + I* U{t, s)fs(Xs)ds + f* U{t, s)dZa. (4.4) Jo Jo Since Z is a cadlag semimartingale the stochastic convolution integral /0* U{t,s)dZs is known to be a cadlag adapted process (see Chapter 3). More generally, instead of (4.4) we are going to study Xt = U(t, 0)XQ + I* U(t, s)fs(X3)ds + Vt, (4.5) Jo where Vt is a cadlag adapted process. In Theorem 4.1 we will study the integral equation (4.5) in a more abstract setting, where V = V(t, y) and / = f(t, y, x) satisfy the hypotheses of Theorem 2.1. 4.2 The Measurability of the Solution of the Semilinear Equation Theorem 4.1 Let Xo(.) be Q-measurable. Suppose that f and V satisfy Hypothesis 2.1 and suppose that A(t) and U(t,s) satisfy Hypothesis 4-1- Then for each y £ G, (4-5) has Chapter 4. A SEMILINEAR EQUATION 46 a unique cadlag solution X(.,y), and X(.,.) is j3 x Q-measurable. Furthermore \\X(t)\\ < \\Xo\\ + \\V(t)\\ + / t c< A +^->||/(*,^,0 )X o + ( 4 . 6 ) Jo Halloo < \\Xo\\ + H V I l o o + CM\\X0\\ + II^ IU), (4.7) where cT=l ^eiM+X)T M + A^° y 1 otherwise. If Xi and X2 are solutions corresponding to different initial values X0i and X02, then \\X2(t) - X1(t)\\ < e^+M»\\X01 - X02\\, t € S. (4.8) Proof: By using the transformations (2.6), and (2.7) we can assume by Lemma 2.1 that XQ — 0, M = 0 and V = 0 in (4.5). By Lemma 3.1 we can also suppose A = 0 in Hypothesis 4.1(a). Thus we consider X(t, y) = f U(t, s)f(s, y, X(s, y))ds, t E S, yeG. (4.9) y serves only as nuisance parameter. It only enters in the measurability part of conclusion. The proof of Theorem 4.1 in the case in which / is independent of y is a well-known theorem of Browder (1964) and Kato (1964). The existence and uniqueness are therefore known. To establish the measurability and inequalities (4.6)-(4.8) we follow the proof of Vainberg (1973), Th (26.2) page 331. Let An(t) := A(t)(I — n - M ( i ) ) _ 1 , and consider the equation Xn(t, y) = f\An(s)Xn(s, y) + f(s, y, Xn(s, y))ds. 
(4.10) Jo An is a bounded operator with ||v4„(£)||i, < 2n which converges strongly to A(t). Vainberg shows that (4.10) has a unique solution Xn, and moreover that there is a Chapter 4. A SEMILINEAR EQUATION 47 subsequence (Xnk) of Xn which converges weakly in L2(S,H) to a limit X, which is a solution of (4.9); and for each y X(.,y) is continuous. But now by Lemma 2.5 Xn converges weakly to X in L2(S,H). Moreover fn(x) := Anx + f(x) satisfies the hypotheses of Theorem 2.2 so that Xn(-, •) is j3 x ^-measurable. It follows by the proof of Theorem 2.2 that X(.,.) is j3 x ^-measurable. The proof of the inequalities (4.6)-(4.8) in case M = 0, A = 0 and V = 0 are in Vainberg (1973), and the extension to the general case of Theorem 4.1 follows immediately from transformation (2.6) and (2.7). Note that discontinuity of the solution in general comes from discontinuity of V. Q.E.D As an application of Theorem 4.1 we can show the existence and uniqueness of the solution of (4.5) when X0 , / and V satisfy the following conditions. Hypothesis 4.2 (a) X0 £ TQ. (b) / = f(t,u,x) and V = V(t,u>) are optional; (c) There exists a set G C £1 such that P(G) = 1, and if u £ G, then f and V satisfy Hypothesis 2.1. Corollary 4.1 Suppose that X0) f and V satisfy Hypothesis 4-2. Suppose A and U satisfy Hypothesis 4-1- Then (4-5) has a unique adapted cadlag (continuous, if Vt is continuous) solution. Proof: The existence and uniqueness of a cadlag solution is immediate from Theorem 4.1. We need only prove that it is adapted. To see this, fix s < i , take S — [0,s], and take Q = J-t\a in Theorem 4.1, where G is the set of Hypothesis 4.2. Now Vt — G has measure 0 so it is in J-Q C Ft-Chapter 4. A SEMILINEAR EQUATION 48 Theorem 4.1 implies X(s, .)\Q is £/-measurable; as all subsets of Q — G are in Tt by completeness, X(s,.) itself is .^-measurable. By right continuity of the filtration, x{s, .)ers = nt>srt. Thus {X(t, .),£ £ S} is adapted. Q.E.D 4.3 Some Examples Example (4.1) Let A be a closed, self-adjoint, negative definite unbounded operator such that A'1 is nuclear. Let U(t) = etA be a semigroup generated by A. Since A is self-adjoint then U satisfies Hypotheses 3.1 and 4.1, so it satisfies all the conditions we impose on U. Let W{t) be a cylindrical Brownian motion on H. Consider the initial-value problem: dXt = AXtdt + ft(Xt)dt + dW(t), (4.11) . X(0) = X0, where Xo,and / satisfy Hypothesis 4.2 . Let X be a mild solution of (4.11), i.e. a solution of the integral equation: Xt = U(t)X(0) + I* U(t - s)fs(Xs)ds + f* U(t - s)dW(s). (4.12) Jo Jo The existence and uniqueness of the solution of (4.12) have been studied in Marcus (1978). He assumed that / is independent of u> € 0 and t £ S and that there are M > 0, and p > 1 for which < /(u) - f{v),u- v >< -M\\u - u||p and i i /Hi i^ca + H r 1 ) . Chapter 4. A SEMILINEAR EQUATION 49 He proved that this integral equation has a unique solution in Lp(fl, LP(S, H)). As a consequence of Corollary 4.1 we can extend Marcus' result to more general / and we can show the existence of a strong solution of (4.12) which is continuous instead of merely being in Lp(Ct, LP(S, H)). The Ornstein-Uhlenbeck process Vt = U.(t — s)dW(s) has been well-studied e.g. in [Iscoe et. al (1989)] where they show that Vt has a continuous version. We can rewrite (4.12) as Xt = U(t)X(0) + /* U(t - s)f,(X,)ds + Vu Jo where Vt is an adapted continuous process. Then by Corollary 4.1 the equation (4.12) has a unique continuous adapted solution. 
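A minimal numerical sketch of this example may help fix ideas. It applies a spectral Galerkin truncation and a linearly implicit Euler step to an equation of the form (4.11) on $(0,\pi)$ with Dirichlet boundary conditions and the monotone drift $f(u) = -u^3 - u$; the drift, the truncation level and all step sizes are illustrative choices and are not taken from Marcus (1978) or from the text.

```python
import numpy as np

# Spectral-Galerkin / semi-implicit Euler sketch for
#   du = u_xx dt + f(u) dt + dW,   u(0, x) = 0,   Dirichlet BCs on (0, pi),
# with the monotone drift f(u) = -u**3 - u.  The modes e_k = sqrt(2/pi) sin(kx)
# diagonalize A with eigenvalues -k^2, and the cylindrical Brownian motion gives
# each mode an independent scalar Brownian motion.

rng = np.random.default_rng(1)
N, T, n_steps = 64, 1.0, 2000
dt = T / n_steps
k = np.arange(1, N + 1)
lam = k.astype(float) ** 2                      # -A has eigenvalues k^2

x = np.linspace(0, np.pi, 201)[1:-1]            # interior grid points
E = np.sqrt(2 / np.pi) * np.sin(np.outer(x, k)) # e_k(x_i), columns are modes
w = np.pi / 200                                 # quadrature weight for projections

u_hat = np.zeros(N)                             # spectral coefficients of u
for _ in range(n_steps):
    u = E @ u_hat                               # evaluate u on the grid
    f_hat = E.T @ (-(u ** 3) - u) * w           # project f(u) back onto the modes
    dW = rng.normal(scale=np.sqrt(dt), size=N)  # independent increments per mode
    # linearly implicit Euler step: stable for the stiff linear part
    u_hat = (u_hat + dt * f_hat + dW) / (1 + dt * lam)

print("||u(T)||_{L^2} ~", np.linalg.norm(u_hat))
```

Treating the stiff linear part implicitly while keeping the monotone nonlinearity explicit mirrors the way the linear semigroup, rather than the drift, carries the smoothing in the mild formulation.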
Example (4.2) Let D be a bounded domain with a smooth boundary in R d . Let —A be a uniformly strongly elliptic second order differential operator with smooth coefficients on D. Let B be the operator B = d{x)D^ + e(x), where is the normal derivative on dD, and d and e are in C°°(dD). Let A (with the boundary condition Bf = 0) be self-adjoint. Consider the initial-boundary-value problem + Au = ft{u) + W on D x [0, oo) Bu = 0 on dD x [0,oo) (4.13) u(0,x) = 0 on D, where W — W(t, x) is a white noise in space-time [for the definition and properties of white noise see J.B Walsh (1986)], and ft is a non-linear function that will be defined below. Let p > | . W can be considered as a Brownian motion Wt on the Sobolov space H-p [see Walsh (1986), Chapter 4. Page 4.11]. There is a complete orthonormal basis {ek} for Hp. The operator A (plus boundary conditions) has eigenvalues {A^} with respect to {efc} Chapter 4. A SEMILINEAR EQUATION 50 i.e. Aek = Afcefc, Vfc. The eigenvalues satisfy Ej( l + A~ p) < oo if p > | [see Walsh (1986), Chapter 4, page 4.9]. Then [ A _ 1 ] p is nuclear and —A generates a contraction semigroup U(t) = e~tA. This semigroup satisfies Hypotheses 3.1 and 4.1. Now consider the initial-boundary-value problem (4.13) as a semilinear stochastic evolution equation dut + Autdt = ft(ut)dt + dWt (4.14) with initial condition u(0) = 0, where / : S x f l x H-p —> H_p satisfies Hypotheses (4.2b) and (4.2c) relative to the separable Hilbert space H = H-p. Now we can define the mild solution of (4.14) (which is also a mild solution of (4.13)), as the solution of ut = f* U(t - s)f3(us)ds + /* U(t - s)dWs. (4.15) Jo Jo Since Wt is a continuous local martingale on the separable Hilbert space H~p, then /o U(t — s)dWs has an adapted continuous version [see Chapter 3]. If we define Vt := ftU(t-s)dWin Jo then by Corollary 4.1, equation (4.15) has a unique continuous solution with values in H-.p. 4.4 A Second Order Equation Let Zt be a cadlag semimartingale on H. Let A satisfy the following: Hypothesis 4.3 A is a closed strictly positive definite self-adjoint operator on H with dense domain D(A), so that there is a K > 0 such that < Ax,x >> K\\x\\2, Vx £ D(A). Chapter 4. A SEMILINEAR EQUATION Consider the Cauchy problem, written formally as 51 a^ + Ax = Z (4.16) x(0) — x0, Sf(o) = yo. Following Curtain and Pritchard (1978), we may write (4.16) formally as a first-order system dX(t) = AX(t)dt + dZ< (4.17) dZt { X(0) = X0, where X(t) = x(t) \ y(t) J 0 \ M , and A = ( 0 , XQ — zt 1 \ y°) Introduce a Hilbert space K, = D(A1^2) x H with inner product < X,X >K=< A1/2x, A1/2x > + <y,y>, and norm \\X\\2c= \\A^2x\\2 + \\y\\2, [see Chapter 4, page, 93, Vilenkin (1972)]. Now for X e D(A) = D(A) x D(A1^2), we have < X, AX > £ = < Ax,y > + < y,-Ax >= 0 Thus ' <(A- \I)X, X >,c=< AX, X >K -\\\X\\l = -X\\X\\2K. Since H i where X = ,x = \ y) I < (A-\I)X,X >K | < ||(.4 - A/^lljcllA-Hx:, Chapter 4. A SEMILINEAR EQUATION 52 we have | | ( ^ - A 7 ) X | k > A||X|U. The adjoint of A* of A is easily shown to be —A. With the same logic ||U* - A/)*|k > AH*!!*;. Then A generates a contraction semigroup U(t) = etA on K. [see Curtain and Pritichard (1978), Th (2.14), Page 22]. Moreover A and U{t) satisfy Hypothesis 3.1 with A = 0, and they also satisfy Hypothesis 4.1. Now consider the mild solution of (4.17): Vt = U{t)X0 + T U(t - s)dZs. (4.18) Jo Since Zt is a cadlag semimartingale on K, the stochastic convolution integral foU(t — s)dZa has a cadlag version (see Chapter 3), so Vt is a cadlag adapted process on JC. 
Now let us consider the semilinear Cauchy problem, written formally as ' d - ^ + Ax(t) = /(*(<), + Zt x(0) = x0, (4-19) dx I ai\t=o = 2/o, where / : D^A1?2) x H —> H satisfies the following conditions: Hypothesis 4.4 (a) — f(x,.) : H —> H is semimonotone i.e. 3M > 0 such that for all x £ D(A1/2) and all yx,y2 £ H < f{x,y2) - f(x,yi),V2 - y i > < M\\y2 - yi\\2; (b) for all x £ D(A1/2), f(x,.) is demicontinuous and there is a continuous increasing function if : R + —• R + such that ||/(0,y)|| < <^(||y||); (c) f(-,y) : D(A1^2) —> H is uniformly Lipschitz i.e 3M > 0 such that \fy £ H \\f(x2,y)-f(x1,y)\\<M\\A1'2(x2-x1)\\. Chapter 4. A SEMILINEAR EQUATION 53 [The completeness of D{A1I2) under the norm ||^ 4.1/2a:J| follows from the strict positivity of A1'2. ] Note that any uniformly Lipschitz function / : D(A1^2) x H —* H satisfies Hypothesis 4.4 . Proposition 4.1 / / / satisfies Hypothesis 4-4> then the Cauchy problem (4-19) has a unique mild adapted cadlag solution x(t) with values in D(A1^2). Moreover is an H-valued cadlag process. If Zt is continuous, is continuous in tC. Proof: Define a mapping F from K to K by F(x,y) = . We are going to show that F satisfies the hypotheses of Corollary 4.1. • F is semimonotone. Let Xr = fx \ Xi M and X2 = . Then <F(X2)-F(X1),X2-Xl >K = <f(x2,y2)-f(x1,y1),y2-y1> = < f{x2,y2) - f(x2,yi),y2 - y i > + < /(^2,2/i) - /(zi,2/i),2/2 - J/i > • By Hypothesis 4.4(a) and the Schwartz inequality this is < M\\y2 - y,\\2 + | | / (x 2 , y i ) - f{xuyx)\\\\yt - y a||. By Hypothesis 4.4(c) this is < M\\y2 - yx\\2 + M\\A'l2{x2 - Xl)\\\\ya - Vl\\ < M\\y2 - y,\\2 + Ml2\\Axl\x2 - x{)\\2 + M/2\\y2 - y i \ \ 2 < Ml2(\\A'l2{x2-xx)\\2 + \\y2-yi\\2) = 3M/2\\X2-X1\\2IC. Chapter 4. A SEMILINEAR EQUATION 54 Thus — F : K —> K- is semimonotone. • F is demicontinuous in the pair (x, y) because it is demicontinuous in y and uniformly continuous in x. • F is bounded since WFWWc = ||/(*,y)|| < \\f(x,y) - /(0,y)|| + ||/(0,y)||; by Hypotheses(4.4b) and (4.4c) this is <M\\A^x\\+<p(\\y\\), and since \\A^2x\\ < \\X\\K and ||y|| < ||X||,c then WFWWKKMWXWK + VQXWK:). Thus F is bounded by the function 0(r) = Mr + <p(r). Then F satisfies the hypotheses of Corollary 4.1 on /C. Now as in the linear case we may write (4.19) as a first order initial value problem: dXt = A{t)Xtdt + F(X{t))dt + dZt, < , X(0) = X0. Since A generates a contraction semigroup U(t) we can write the above initial value problem as X(t) = U(t)X(0) + f* U(t - s)F(X(s))ds + f U(t - s)dZt. Jo Jo By (4.18) we can rewrite this as X(t) = f U{t - s)F(Xs)ds + Vt. Jo Since Vt is cadlag and adapted then F, U and V satisfy all the conditions of Corollary 4.1. then there is an adapted cadlag solution on K. If Zt is continuous, Vt is continuous too and Xt is a continuous solution of (4.19) on IC. Q.E.D Chapter 4. A SEMILINEAR EQUATION 55 Remark 4.2 We assume f : D{All2) xH -• H. We could let f depend on UJ G Vl and t G S as well. This would not involve any essential modification of the proof. Example (4.3): Let D,A,B, and W be as in Example (4.2). Let p > d/2 and consider a mixed problem of the form: d2U at2 + Au = f(u^) + W onl>x[0,oo) Bu - 0 u(x,0) = 0 f?(x,0) = 0 ondD x [0, oo) onD on D, (4.20) where / : H_p+i x H_p —> H_f As in Example (4.2) we consider W as a Brownian motion Wt on the Sobolev space H_p. Now A is a strictly positive definite self-adjoint operator on H_p, and is nuclear. Since all of the eigenvalues of A are strictly positive, then <Ax,x>H_p>\0\\x\\2H_p, (4.21) for all X e D(A) = H-p+2. 
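The role of the energy inner product on $\mathcal{K} = D(A^{1/2}) \times H$ introduced in Section 4.4 can be checked directly in finite dimensions. In the sketch below a diagonal positive definite matrix stands in for $A$ (an illustrative choice only); it verifies numerically that $\langle \mathcal{A}X, X\rangle_{\mathcal{K}} = 0$ and that the group $e^{t\mathcal{A}}$ preserves the energy norm.

```python
import numpy as np
from scipy.linalg import expm

# Finite-dimensional sketch of the energy-space formulation of Section 4.4:
# for X = (x, y) with <X, X>_K = <A^{1/2}x, A^{1/2}x> + <y, y> = x'Ax + y'y,
# the block operator  calA = [[0, I], [-A, 0]]  satisfies <calA X, X>_K = 0,
# so the group U(t) = exp(t * calA) preserves the energy norm (lambda = 0).

rng = np.random.default_rng(2)
n = 3
A = np.diag([1.0, 4.0, 9.0])                  # positive definite, self-adjoint
calA = np.block([[np.zeros((n, n)), np.eye(n)],
                 [-A,               np.zeros((n, n))]])

def energy(X):
    x, y = X[:n], X[n:]
    return x @ A @ x + y @ y

X = rng.normal(size=2 * n)
AX = calA @ X
print("<calA X, X>_K =", AX[:n] @ A @ X[:n] + AX[n:] @ X[n:])             # ~ 0
for t in [0.5, 1.0, 5.0]:
    Xt = expm(t * calA) @ X
    print(f"t={t}: energy(U(t)X)/energy(X) = {energy(Xt)/energy(X):.6f}")  # ~ 1
```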
Then we can write (4.20) as the following Cauchy problem on the Sobolev space i / _ p : dut = utdt dut = —Autdt + f(uu ut)dt + dWt (4.22) u(0) = 0 M(0) = 0. Now A satisfies (4.21) and it is a positive definite self-adjoint operator on H-p. Note that if / G Hn, then Ax/2f <E #„_i [see, Walsh (1986), Example 3, Page 4.10]. Then D(AX/2) = H_p+i. Since Wt is continuous then by Proposition 4.1, (4.22) has a continuous mild solution ut 6 C(S, H-p+i) and, moreover, ut G C1(S, H-p) i.e., the mild solution of (4.20) is continuous process in H-p for any p > d/2 — 1, and it is a differentiable process in H^p for any p > d/2. Chapter 4. A SEMILINEAR EQUATION 56 4.5 A Semilinear Integral Equation on the Whole Real Line Recall the integral equation (4.12) of Example (4.1). Marcus (1974) has studied the existence of the solution of (4.12) where the parameter set of the processes extended to the whole real line, i.e. the integral equation Xt = f U(t - s)f(Xa)ds + f U(t - s)dW(s). (4.23) J—oo J—oo This motivated us to study the existence of the solution of (4.23) when —/is only monotone rather than being Lipschitz. We are going to use this in Chapter 6 to prove that the solution Xt of (4.23) is stationary. Instead of (4.23) we are going to study the slightly more general equation Xt = f U(t - s)f(Xa)ds + Vt. (4.24) We will impose the following conditions on / , V and the generator A of the semigroup U. Hypothesis 4.5 (a) U(t) is a semigroup generated by a strictly negative definite, self-adjoint unbounded operator A such that A-1 is compact. Then there is X > 0 such that \\U(t)\\<e-». (b) Let ip(t) = K(l + tp) for some p > 0, K > 0. — / is a monotone demicontinuous mapping from H to H such that ||/(x)|| < y>(||x||) for all x 6 H. (c) Let r = 2p2. Vt is cadlag adapted process such that sup<ej^E{\\Vt\\r} < oo. Let us first study the integral equation: Xt= f* U(t-s)f(Xs + Vs)ds. (4.25) J—oo The following theorem translates Corollary 4.1 to the case when parameter set of the process is the whole real line. Chapter 4. A SEMILINEAR EQUATION 57 Theorem 4.2 If A,f and V satisfy Hypotheses 4-5, then the integral equation (4-25) has a unique continuous solution X such that \\Xt\\ < f e-^ V(||ys||)^ ; (4.26) E{\\Xt\\) <\snVEW{\\Vs\\)} := Kx. (4.27) Proof: Consider a sequence of solutions (Xn) of the integral equation Xn(t) = f U(t - s)f(Xn(s) + Vs)ds. (4.28) J—n The solution of (4.28) exists by Corollary 4.1. It satisfies \\Xn{t)\\ < f* e-^-M\\V>\\)ds < f e - A ^ V ( l |V; | | )^ for t > -n. J — n J— oo Since by Hypothesis 4.5 sup 5 e H £{cp(|| V^||)} < oo, then by Fubini's theorem E{f e-^-M\\Vs\\)ds) = f e-W-*E{<p(\\V.\\)}da. J—oo J—oo Then E[\\Xn(t)\\} < Kx for t>-n. (4.29) But Xn+1 (t) = f* U(t - s)f(Xn+1 (s) + Vs)ds, J — n—l so by using the semigroup property of U(t) we can rewrite this as Xn+1(t) = U(t + n)Xn+i(-n) + [* U(t - s)f(Xn+1(s) + Vs)ds. (4.30) J—n But (4.30) is the same equation as (4.28) with different initial conditions. Then by Corollary 4.1 we have \\Xn+1(t) - Xn(t)\\ < e-x^\\Xn+l(-n) - 0||, Chapter 4. A SEMILINEAR EQUATION 58 or | | * n + i - Xn\\T < eXTe-Xn\\Xn+1(-n)l where ||A"||r = s u P - x < « r 1-^*1- Then E{\\Xn+1 - Xn\\T} < eXTe-^E{\\Xn+1(-n)\\}. But by (4.29) E{\\Xn+l(-n)\\} < Ku so oo oo £ £ { | | X n + i - Xn\\T} < K i e X T £ e~Xk < oo. fc=0 fc=0 Thus (Xn) is a convergent sequence in LX(Q,, C([—T, T], H) for each T > 0. Define X = lim^oo Xn. Since Xn(t) satisfies (4.26) and(4.27) and since E[\\Xn(t) - X(t)\\] -» 0 then X also satisfies (4.26) and (4.27) . 
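The device used in this proof, of starting the approximating equation (4.28) at time $-n$ and letting $n \to \infty$, can be seen in a scalar toy model: the start time is forgotten at an exponential rate, so the values at time $0$ form a Cauchy sequence. In the sketch below the drift, the path standing in for $V$, and all numerical constants are illustrative choices only.

```python
import numpy as np

# Toy scalar illustration of the construction in the proof of Theorem 4.2: solve
#   x'(t) = -lam * x(t) + f(x(t) + V(t)),   x(-n) = 0,
# (the differential form of (4.28)) for increasing n and watch x_n(0) stabilize.

lam = 1.0
f = lambda u: -u ** 3 + 1.0            # -f is monotone; polynomial growth
V = lambda t: np.sin(t)                # a bounded path standing in for V_t
dt = 1e-3

def x_at_zero(n):
    t, x = -float(n), 0.0
    while t < 0.0:
        x += dt * (-lam * x + f(x + V(t)))   # explicit Euler step
        t += dt
    return x

prev = None
for n in range(1, 9):
    xn = x_at_zero(n)
    gap = "" if prev is None else f"   |x_n(0) - x_(n-1)(0)| = {abs(xn - prev):.2e}"
    print(f"n={n}:  x_n(0) = {xn:.6f}{gap}")
    prev = xn
```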
To complete the proof of the theorem one needs to show that X satisfies (4.25). Now we can rewrite equation (4.28) as Xn(t) = U(t + T)Xn(-T) + f U(t - s)f(Xn(s) + V3)ds. (4.31) Consider the integral equation Y(t) = U(t + T)X(-T) + f U(t - s)f(Y(s) + V.)ds. (4.32) J — T By Corollary 4.1 this equation has a unique solution. Comparing (4.32) with (4.31), we have by Corollary 4.1 that ||*n(<) - Y(t)\\ < e-x^\\Xn(-T) - X(-T)\\. Now E[\\Y(t) - X(t)\\] < E[\\X(t)-Xn(t)\\] + E[\\Xn(t)-Y(t)\\} < E[\\X(t) - Xn(t)\\] + e-x^E[\\Xn(-T) - X(-T)\\], Chapter 4. A SEMILINEAR EQUATION 59 and since £[||A"(<) - X n(t)||] -» 0 as n -* oo then we have X(t) = Y(t) a.s., i.e., X(t) is a solution of (4.32). We can rewrite (4.32) as X(t) = U(t + n)X(-n) + f* U(t - s)f(X, + Vs)ds. (4.33) J—n By (4.26), (4.27) and Hypothesis 4.5(b) we have \\U(t-a)f{X. + V.)\\ < e - ^ V d l X + KH) < 2 p K 1 e - x ^ | l + (J'^ e-W-^WVvW'duJ + \\VS\\P} Then by hypothesis 4.5(c) and Fubini's theorem it is easy to see that /' \\U(t-s)f(X. + V.)\\ds<oo. J—oo Then by the dominated convergence theorem we have lim / * U(t - s)f(Xs + Vs)ds = f* U{t - s)f(X. + Vs)ds. n—*ooj_n J—oo Since X satisfies (4.27) then £[ | |X(-n) | | ] < K, so £[||X(< + n)X(-n)| |]<e- A( t+")^ 1 , which implies that U(t + n)X{—n) —* 0 and so (4.33) implies Xt = I* U(t - s)f(Xs + Vs)ds. J—oo Q.E.D. Chapter 5 T H E CONTINUITY OF T H E SOLUTION 5.1 Introduction Consider the integral equation X(t,y) = U(t,O)Xo(y) + f U(t,s)f(s,y,X(s,y))ds + V(t,y), t G S, y e S. (5.1) Jo Faris and Jona-Lasinio (1982) have proved that the solution X of (5.1) is a continuous function of V in the special case when the generator of U is ^ and f(x) = —Xx3 — fix. Da Prato and Zabczyk (1988) generalized this result to the case where U is a general analytic semigroup and / is a locally Lipschitz function on a Banach space. We are going to generalize the previous result by proving that the solution of (5.1) changes continuously when any or all of V, / , A and XQ are varied. As a corollary we will prove a generalization of Faris and Jona-Lasino's theorem for semimonotone / and more general U; this was open after Faris and Jona-Lasinio (1982) [see for example Smolenski et al (1986), page 230]. We will also prove the strong convergence of the finite dimensional Galerkin approximation to the solution of (5.1). 5.2 The Main Theorem and its Corollary Theorem 5.1 Let f1 and f2 be two mappings satisfying Hypothesis 2.1 with parameters Mi(y) and M2(y) respectively and'bounded by functions y>\ and <p2 respectively. Suppose V1 and V2 satisfy Hypothesis 2.1. Suppose A and U satisfy Hypotheses 4-1 60 Chapter 5. THE CONTINUITY OF THE SOLUTION 61 and 3.1. Let Xl(t), i = 1,2 be solutions of the integral equations: X% y) = U(t, 0)X'o + f U(t, s)f(s, y, X\s, y))ds + V% y). (5.2) Jo Then we have \\X\t)-X\t)f < 2 | |F 2 (*)-V 1 (*) | | 2 + 2 e ( 2 A + 4 M 2 + l ) t | | | X 2 _ x l | | 2 + y\-^\\v\s)-v\s)rds] 1/2 + £ e - 2 A s | | / 2 ( X 1 ( 5 ) ) - / 1 ( X 1 ( , ) ) | | 2 ^ } , (5.3) where 1 = 2 U E -2As f(X\s))-f\X\s)) 1/2 + 4 M 2 M q e-2 A s||y2( 5)-y 1(5)|| 2^ } dsj 1/2 Note that, since by Theorem 4.1 X\ and X2 are bounded by Vi and V2, then J is bounded by function of V\ and V2 . Proof: Since U satisfies Hypothesis 4.1, then by Theorem 4.1 the solution of (5.2) exists. Define Y{(t) = X{(t) - V'"(<), i = 1,2. Then we can write (5.2) in the form y«'(*) = U{t, 0)X0 + /* U(t, s)fi(Xi(s)d Jo 's, i = 1,2, so that Y\t) - Y\t) = U{tMXl - Xl) + fQU{t,s)[f{X\s)) - f{X\s)]ds. 
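The phenomenon being established here can be seen in a scalar toy model: perturbing the forcing term $V$ perturbs the solution of the mild equation in the uniform norm in a controlled way, in line with (5.3) and Remark 5.2 below. In the sketch the drift, the paths and the step size are illustrative choices only.

```python
import numpy as np

# Toy scalar illustration of continuous dependence on V for the mild equation
#   X(t) = int_0^t e^{-lam(t-s)} f(X(s)) ds + V(t),
# rewritten as X = Y + V with Y' = -lam*Y + f(Y + V), Y(0) = 0 (explicit Euler).
# The drift f(x) = -x**3 + x is semimonotone (-f semimonotone with M = 1).

lam, T, n = 1.0, 2.0, 4000
dt = T / n
t = np.linspace(0.0, T, n + 1)
f = lambda x: -x ** 3 + x

def solve(V):
    Y = np.zeros(n + 1)
    for i in range(n):
        Y[i + 1] = Y[i] + dt * (-lam * Y[i] + f(Y[i] + V[i]))
    return Y + V

V1 = 0.5 * np.sin(3 * t)
for eps in [1e-1, 1e-2, 1e-3]:
    V2 = V1 + eps * np.cos(5 * t)            # a perturbed forcing path
    d_sol = np.max(np.abs(solve(V2) - solve(V1)))
    d_forcing = np.max(np.abs(V2 - V1))
    print(f"||V2 - V1||_oo = {d_forcing:.1e}   ||X2 - X1||_oo = {d_sol:.2e}")
```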
Since U satisfies Hypothesis 3.1(a)-(d), then by Remark 3.3 we have \\Y\t)-Y\t)\\2 < e 2 A < | | X 2 - A ^ | | 2 + 2 < Y2(s) - Yl(s),f(X2(s)) - f'iX'is)) > ds Jo To complete the proof of this theorem we need the following lemma. (5.4) Chapter 5. THE CONTINUITY OF THE SOLUTION 62 Lemma 5.1 Let K > 0. Then 2 / e~2Xs < Y2(s) - Y\s),f2(X2(s)) - f'iX'is)) > ds Jo < (K + 4M2) [te-2Xs\\Y2(s)-Y1(s)\\2ds Jo t l . + i(J e - ^n ^ ^ - v 1 ^ ) ! ! 2 ^ ) 2 1 f* -2\s f2{X\s))-f\X\s)) ds. (5.5) Note that because Y' and X' are cadlag and the / ' are bounded by </?,-, then the integrands are dominated by cadlag functions and hence are integrable. Proof: The left hand side of (5.4) is < 2 fe~2Xs < Y2(s) - Y\s),f2(X2(s)) - fiX'is)) > ds Jo + 2 f e - 2 X s < Y\s) - Y\s),f2(X\s)) - fiX^s)) > ds Jo since Yl = X* — V* and — f2 is semimonotone. By the Schwartz inequality this is < 2 [e2Xs\\V2(s) - V\s)\\ \\f2(X2(s)) - f2(X\s))\\ds Jo + 2M2 fte-2Xs\\X2(s)-X1(s)\\2ds Jo + 2[\-2X°\\Y2(s)-Yi(s)\\ WfiX^-fiX'isMds. Jo Apply the Schwartz inequality to the first integral, use the inequality 2ab < Ka2 + j^b2 in the third, and write X' = Yl + V and use the inequality again in the second to see that this is < 2[j\-2X°\\V\s) - V\s)\\2)ds] {j\-2X°\\f2(X2(s)) - f(X*(s)\\2ds} + 4M 2 f'e-^WY^-Y^s^ds Jo Chapter 5. THE CONTINUITY OF THE SOLUTION 63 + 4M2 [te-2Xs\\V2(s)-V1(s)\\2ds J 0 + K [*e-2Xs\\Y2(s)-Y\s)\\2ds Jo + i / o < e " 2 A S | | / 2 ( X l ( s ) ) - / 1 ( X l ( s ) ) l 1 2 ^ -This proves the lemma. Q.E.D To finish the proof of Theorem 5.1, let us define g(t) := e - 2 A s | | y 2 ( t ) - Y1^)]]2. By Lemma 5.1 one has g(t) < \\X*-Xl\\ + (1 + 4M2) fg{s)ds Jo 0tt \ 1/2 [ e-2X°\\V2{s)-V\s)\\2ds) I + [ e-2Xs\\f2(X\s)) - f\X\s)\\2ds. Jo By Gronwall's inequality g(t) < e^M* [\\X20 - XU]2 + l(j\-2X°\\V2(s)-V\s)\\2ds + [e-2Xs\\f2(X\s))-f\X\s))\\2ds Jo Substituting for g(t) and using the following inequality | |X 2(t) - X\t)\\2 < 2\\Y2(t) - Y\t)\\2 + 2\\V2(t) - V\t)\\2, to get inequality (5.3). Q.E.D Chapter 5. THE CONTINUITY OF THE SOLUTION 64 Remark 5.1 We can extend Theorem 5.1 to the case where the evolution operator U(t,s) varies too, but unfortunately the inequality becomes more complicated. Let Uf(t,s), i = 1,2 be an evolution operator satisfying the hypotheses of Theorem 5.1. Let Xl(t), i = 1,2 be the solutions of the integral equations X\t) = U\t, 0)Xto + f U% S)fi{X\s))ds + V\t). (5.6) Jo Define V 3(t) := (U2(t,0) - U\t,0))X] + f\u2(t,s) - U\t,s))f2(X\s))ds, Jo and define I: = 2^j\-^\\f2(X2(s))-f\X1(s))\\2dsy I + 4M 2 (^j\-2Xs\\V3(s) + V\s) - V2(s))\\2ds^j 2 . Then we have \\X2(t)-X\t)\\2 < 4\\V2(t)-V\t)\\2 + M\V\t)\\2 + 2e(2X+K+4M2)t [ H ^ - ^ I I 2 + l(j\-2X°\\V2(s)-V\s)\\2dsy + l(j\-2Xs\\V\s)\\2dsy + [*e-2Xs\\f2(X\s)) - f\X\s))\\2ds] . (5.7) Jo J Proof: From (5.6) and the definition of V3(t) we have X^t) = U2(t,0)X1o + [tU2(t,s)f1(X1(s))ds + V\t) + V3(t). Jo Compare this equation with (5.6) for i = 1 to get (5.7). Q.E.D Chapter 5. THE CONTINUITY OF THE SOLUTION 65 Corollary 5.1 Consider equations (5.6). There is a constant d^ such that on the set where max,=l i2(||A"o||, M2(y), || V'||oo) < N one has x'-x'wl < dN\\\X20-X10\\2 + \\V2-V1\\oo + \\V 311 oo + Fe-2X°\\f2(X\s))- f\X\s))\\2ds Jo (5.8) Proof: From the definition of V3(t) and inequality (4.7) there is a constant dxN > 0 such that ||V3||oo < d)y. From inequality (4.7) and the definition of I there is d2N > 0 such that I < d% and ||/ 2(X 2(.s))| < d?N. Define d% = max(4,^r). Using (5.7) we have I I * 2 - * 1 ! ! 
2 * < 4 | | y 2 - y 1 | | 2 o + 4 | | y 3 | | L 2 e ( 2 A + 1 ^ { | | X 0 2 - ^ | | 2 + [||V* - H^eo + ||V3||oo]4r[/Tc-aA'rfs] Jo + /^-^n/ 2^ 1^))-/ 1^ 1^))!! 2^}. Using the facts that || V1^ < N, \\V2^ < N, || V2||oo < d% and the above inequality, we get (5.8). Q.E.D Remark 5.2 Let D(S,H) be the set of H-valued cadlag functions on S with norm . = SUPKESII/(*)II-By Corollary 5.1 there is a continuous mapping i\) : SxD(S, H) —• D(S,H) such that if X(t) is a solution of rt X(t)= j U(t,s)f(X(s))ds + V(t), Jo then X(t) = ij)(t,V){i). Moreover there is a constant d^ such that on the set where max i =i ) 2(M2(?/), H^ 'Hoo) < N, we have W{.,V2)-i>(.X)\\oo<dN\\V2-V l\\l, so i{> is Holder continuous with exponent 1/2. Chapter 5. THE CONTINUITY OF THE SOLUTION 66 5.3 Application to the Large Deviation Principles Da Prato and Zabczyk (1988) have studied large-deviation principles for the Ornstein -Uhlenbeck process Vt = f* U(t - s)dW(s). In the case when / is locally Lipschitz, they also studied large-deviation principles for the solution of dXt = AXt +f(t,X(t))dt + edW(t) (5.9) I X(0) = x, where e > 0. As a consequence of Remark 5.2 we can generalize their result to the case when / is semimonotone. Suppose A, U, f, and W are as in Example (4.1). Then we can write the mild solution of (5.9) as Xt = U{t)x + f* U(i- s)f(s, X(s))ds + eVt. (5.10) Jo Let n G L2(S, H). Consider a system of the form di = M(t)+f(t,m)+A-1^) (5.11) Note that A - 1 is a nuclear operator. We can write the mild solution of (5.11) as i(t) = U(t)x + J* U(t - s)f(s, t(s))ds + C(t), (5.12) where = f* U(t - s)A-1r](s)ds. By Remark 5.2 we can write the solutions of (5.10) and (5.12) as xr = *KtM-)* + eV.)(t), and Then we have the following proposition. Chapter 5. THE CONTINUITY OF THE SOLUTION 67 Proposition 5.1 (i) For arbitrary 6>0)a>0,f3>0 and C > 0 there exists e0 > 0 such that for rj,x satisfying / J 1177(s)|j2c?3 < C, \\x\\ < f3 and e G (0,e0), rT P(| |X^-ri |oo<<5)>exp (ii) For arbitrary 8>0,a>0, f3>0 and rQ > 0 there exists e0 > 0 such that for arbitrary r G (0, r 0 ) and e G (0, e0) and \\*\\<0 P(distanceH(Xx'e,K(x,r)) > 6) < exp[-^e _ 2(r 2 - a)] where K(x,r) stands for the set for all £n'x for which JQ \\n(s)\\'2ds < r 2 . Proof: The continuity of tp in Remark 5.2 allows us to reduce the problem to the linear case (/ = 0) and zero initial condition [see Freidlin and Wentzell (1984), Theorem 3.1, Page (81)]. But when / = 0, the theorem has been proved by Da Prato and Zabczyk [Da Prato and Zabczyk (1988), Theorem 5]. Q.E.D 5.4 Galerkin Approximations Let U(t) be a semigroup generated by a strictly negative definite closed unbounded self-adjoint operator A such that A - 1 is compact. Then there is a complete orthonormal basis (<f)n) and eigenvalues 0 < Ao < Ai < A 2 < ... with A„ —> 0 0 , such that A(f>n = —\n<j)n. Let Hn be the subspace of H generated by {< 0^, <f>i,<^>n_i} and let Jn be the projec-tion operator on Hn. Define /„ = Jnf, Vn(t) = JnV(t) and Un(t) = JnU(t)Jni where / and V satisfy Hypothesis 4.1. Chapter 5. THE CONTINUITY OF THE SOLUTION 68 Let Xn(t) be the solution of Xn(t, y) = f Un{t - s)fn(s, y, Xn(s, y))ds + Vn(t, y),t>0 (5.13) Jo and let X(t) be the solution of X{t,y)= ftU(t-s)f(y,s,X(s,y))ds + V(t,y). (5.14) Jo We will prove Proposition 5.2 For all y G G, we have | | J M ^ ) - * ( - , y ) l l o o - » o . Proof By Corollary 5.1 we have WXn-XWl, < <*Ar[||V;-V||oo+||K||oo + [Te-^\\fn(X(s)) -f(X(s))\\*ds] , (5.15) Jo where K = [\un(t - s ) - U(t - s)]f(s, X(s))ds. 
Jo Since /„ = Jnf and Vn = JnV, then the first and 3th term in the right hand side of (5.15) approach zero, so to complete the proof we need to show that IIKIU -»o. But \\Vn(t)\\<<P(\\X\U /Vn(*)-tf(*)IU<fa, Jo Chapter 5. THE CONTINUITY OF THE SOLUTION 69 so IIKHoo < V>(ll*l|oo) fT \\Un(s) - U(s)\\Lds. (5.16) Jo Since ||£/n(i) — U(t)\\L equals e~Xnt for t > 0 and equals zero for t = 0, then by the bounded convergence theorem the left hand side of (5.16) approaches zero. Q.E.D 5.5 Galerkin Approximations for the Integral Equation on the Whole Real Line We can prove the convergence of similar Galerkin approximations to the solution of equation (4.24) of Chapter 4. Define /» = Jnf, Vn(t) = Jn V(t), Un(t) = Jn V(t)Jn and define Xn(t) and X(t) as solutions of *»(*) = f Un(t - s)fn(Xn{s))ds + Vn(t). (5.17) and X(t) = I* U(t - s)f{X{s))ds + V(t) (5.18) J—oo Now we can prove Theorem 5.2 If A,U,f and V satisfy Hypotheses 4-5, then one has E(\\Xn(t) - X(t)\\) ^ 0. Proof: Define Xi(t) = / * Un(t ~ s)fn(Xkn(s))ds + Vn(t), J—k Chapter 5. THE CONTINUITY OF THE SOLUTION 70 Xk(t) = T U(t -s)f(Xk(s))ds + V(t), J—k and Vn,k(t) = /* (Un(t - 3 ) - U(t - s)f(Xk(S))ds. By Remark 5.1 we have \\Xi(t)-X\t)\\* < 4 | | V n ( t ) - V ( t ) | | 2 + 4 | |K l f c(t)| | 2 + iy^\\Vn(s)-V(s)fdsy +1 (jyxn\vnAs)\\2dSy + f e2^\\fn(X(s)) - f(X(s))\\2ds. (5.19) Taking expectations and using the Schwartz inequality and Fubini's theorem, (5.19) implies that E{\\Xk(t)~Xk(t)\\2} < 4E{\\Vn(t)-V(t)\n + AE{\\VnAt)\\2} + (E{P})i (/^ e2^E(\\Vn(s) - V(s)\\2)ds) * + (E{P})1 (/^ e2 (^||t/n)fc(5)||2)^ ) ? + /' e2^E(\\fn(X(s))-f(X(s))\\2)ds. (5.20) We first show E{\\Xk(t) - Xk(t)\\2} -+ 0 uniformly in fc. (5.21) Since Vn = JnV and fn = Jnf, the first,third, and 5th term of the right hand side of inequality (5.20) converge to zero. Then to prove (5.21) it is enough to show E(\\Vntk(t)\\2) converges to zero uniformly in k and t £ (—oo,T], By using ||/(a:)|| < C ( l + \\x\\p) a n d inequality (4.6), we see that suVteRE(\\V(t)\\2n < oo, Chapter 5. THE CONTINUITY OF THE SOLUTION 71 and, using Fubini's theorem, one has sup i e*£(HK, f c«||2) < supteRE{l + \\V(t)\\>) f° \\U(-s) - Un(-s)\\lds. J—oo Since \\U(-s) - Un(-s)\\L -> 0 for s < 0 and \\U(-s)-Un(-s)\\L<e^% then by the dominated convergence theorem sup t g /j JB(||V> n ) f c(i)||2) 0 uniformly in k. Then E(\\Xn>k(t) - Xh(t)\\2) 0 uniformly in k. By Theorem 4.2, then E(\\Xntk(t) -Xn(t)\\) -> 0 as k -> oo, hence £( | |X f c (*)-X(i) | | ) -» 0 and we have £ ( | |X n ( t ) -X( i ) | | ) -» 0 Q.E.D Chapter 6 STATIONARY PROCESSES 6.1 Introduction Consider an integral equation of the form: X(t) = f U(t-s)f(X(s))ds + V(t), (6.1) J—oo where U and / satisfy Hypothesis 4.5 and V satisfies the following condition: Hypothesis 6.1 V is a cadlag adapted stationary processes on H, such that £(l l^(0) ir) < oo for r = 2p2, (6.2) where p > 1. In the special case when f(x) = — | V F ( x ) is the Frechet derivative of a potential F(x) on H and Vt is the stationary Ornstein-Uhlenbeck processes ft^Uit — s)dW(s), we may consider the integral equation (6.1) as a mild solution of the infinite dimensional Einstein-Smoluchowski equation: dX{t) = -AX(t)dt - ^VF(xt)dt + dW(t). (6.3) In finite-dimensions, the solutions are diffusion processes and the stationary measures of these diffusion processes were studied by Kolmogorov (1937). Infinite-dimensional Einstein-Smoluchowski equations have been studied by many au-thors, e.g. Marcus (1974, 1978, 1979), Funaki (1983) and Iwata (1987). The stationary 72 Chapter 6. 
measure associated to this equation has important applications in stochastic quantization [see Marcus (1979), Jona-Lasinio and Mitter (1985), Albeverio and Röckner (1989) and Iwata (1987)].

Marcus (1974) studied (6.1) when $f$ is Lipschitz, $V$ is an Ornstein-Uhlenbeck process, and $A^{-1}$ is nuclear. He proved that the solution of (6.1) is a stationary process; when $f(x) = -\frac{1}{2}\nabla F(x)$, he characterized its stationary measures explicitly. This result was generalized somewhat in Marcus (1978) to the case where $f : B \to B^*$, where $B \subset H \subset B^*$ is a Gelfand triple and $f$ satisfies
$$\langle f(x) - f(y),\, x - y\rangle_{B^*\times B} \le -C\|x - y\|_B^p \qquad\text{and}\qquad \|f(x)\|_{B^*} \le C(1 + \|x\|_B^{p-1})$$
for some $C > 0$ and $p > 1$. Unfortunately we were unable to follow his proof of the stationarity of the solution of (6.1).

In this chapter we extend his setting to a slightly more general case in which $f$, $U$ and $V$ satisfy Hypothesis 4.5(a), (b) and Hypothesis 6.1 on a Hilbert space $H$. Our method of proof is different from that of Marcus (1978). We are going to use the results of Chapters 4 and 5. We will give the stationary distribution of (6.3) when $\nabla F(x)$ is monotone.

Since $V$ satisfies Hypothesis 6.1, it also satisfies Hypothesis 4.5(c), so by Theorem 4.2 a solution of (6.1) exists. By Theorem 5.3 this solution is the $L^1$-limit of the solutions of the finite-dimensional equations:
$$X_n(t) = \int_{-\infty}^t U_n(t-s) f_n(X_n(s))\,ds + V_n(t), \qquad (6.4)$$
where $U_n(t) = J_n U(t) J_n$, $f_n = J_n f$, and $V_n = J_n V$. Thus to prove that the solution of (6.1) is stationary, it is enough to prove that the solution of (6.4) is stationary.

Let $f : \mathbf{R} \to Y$, where $Y$ is a topological space. Define $(\theta_s f)(t) = f(t + s)$.

Definition 6.1 A process $X = \{X(t) : t \in \mathbf{R}\}$, taking values in a topological space $Y$, is called strongly stationary if for each $h$ and real numbers $t_1, t_2, \ldots, t_n$ the families $(X(t_1), X(t_2), \ldots, X(t_n))$ and $((\theta_h X)(t_1), \ldots, (\theta_h X)(t_n))$ have the same joint distribution.

Let $D(\mathbf{R}, H)$ be the space of $H$-valued cadlag functions on $\mathbf{R}$ with the metric of uniform convergence on compacts
$$d(f,g) = \sum_{k=1}^{\infty} 2^{-k}\,\frac{\|f - g\|_k}{1 + \|f - g\|_k}, \qquad\text{where } \|f\|_k = \sup_{-k \le t \le k}\|f(t)\|.$$
Let $H_n$ be a finite dimensional subspace of $H$. If $f \in D(\mathbf{R}, H_n)$, then $\theta_{\cdot} f$ is a function from $\mathbf{R}$ to $D(\mathbf{R}, H_n)$. Now we are going to prove the following lemma:

Lemma 6.1 If $V = \{V(t),\, t \in \mathbf{R}\}$ is an $H_n$-valued cadlag stationary process on $\mathbf{R}$, then $\theta_{\cdot} V = \{\theta_s V,\, s \in \mathbf{R}\}$ is a $D(\mathbf{R}, H_n)$-valued stationary process on $\mathbf{R}$.

Proof: To prove this, it is enough to prove that for all real $t_1 < t_2 < \ldots < t_n$, all real $s_1 < s_2 < \ldots < s_m$, and all real $h$,
$$\{(\theta_{t_1}V)(s_1), (\theta_{t_2}V)(s_1), \ldots, (\theta_{t_n}V)(s_1), \ldots, (\theta_{t_1}V)(s_m), \ldots, (\theta_{t_n}V)(s_m)\}$$
and
$$\{(\theta_{t_1+h}V)(s_1), (\theta_{t_2+h}V)(s_1), \ldots, (\theta_{t_n+h}V)(s_1), \ldots, (\theta_{t_1+h}V)(s_m), \ldots, (\theta_{t_n+h}V)(s_m)\}$$
have the same joint distribution. But by definition $(\theta_{t_i+h}V)(s_j) = V(t_i + h + s_j)$, and since $V$ is an $H_n$-valued stationary process, we have equality of the joint distributions, and the proof is complete. Q.E.D.
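As a concrete illustration of Definition 6.1 and Lemma 6.1, the sketch below simulates a scalar stationary Ornstein-Uhlenbeck process started from its invariant law and compares empirical joint moments before and after a time shift; the process and all parameters are illustrative choices only.

```python
import numpy as np

# Empirical check of shift-invariance for a scalar stationary OU process
#   dV = -lam * V dt + dW,  started from its invariant law N(0, 1/(2*lam)):
# the joint law of (V(t1), V(t2)) should match that of (V(t1+h), V(t2+h)).

rng = np.random.default_rng(3)
lam, dt, T, n_paths = 1.0, 0.01, 6.0, 5000
n_steps = int(T / dt)

V = rng.normal(scale=np.sqrt(1 / (2 * lam)), size=n_paths)   # stationary start
path = np.empty((n_steps + 1, n_paths))
path[0] = V
a = np.exp(-lam * dt)
s = np.sqrt((1 - a ** 2) / (2 * lam))                        # exact OU transition
for i in range(n_steps):
    V = a * V + s * rng.normal(size=n_paths)
    path[i + 1] = V

def joint_stats(t1, t2):
    x, y = path[int(t1 / dt)], path[int(t2 / dt)]
    return np.mean(x), np.mean(y), np.mean(x * y)

print("(t1, t2) = (1, 2) :", joint_stats(1.0, 2.0))
print("shifted by h = 3  :", joint_stats(4.0, 5.0))   # agrees up to Monte Carlo error
```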
Then instead of equation (6.4) we consider the following integral equation: Y(t) = f U(t - s)f(Y(s) + g(s))ds, (6.5) under the following hypothesis. Hypothesis 6.2 (a) U(t) =: Un{i) = JnU(t)Jn, and U satisfies Hypothesis 4-5; (b) —/ : Hn —• Hn is a continuous monotone function such that \\f(x)\\ < C(l + \\x\n,forr = 2p2; (c) g G K. Note that because Hn is a finite-dimensional space, the U(t) form a group and U{t) is well-defined for all * € R and U(-t)U{t) = I. Now we are going to prove two purely deterministic lemmas: Lemma 6.2 / / / , U, and g satisfy Hypothesis 6.2, then (6.5) has a unique continuous solution. Chapter 6. STATIONARY PROCESSES 76 Proof: As in Theorem 4.2, define Yk(t) = fk U(t - s)f(Yk(s) + g(s))ds. (6.6) Then we have \\Yk(t)\\ < C f* e-**~'\l + \\g(a)\nds, J—oo and by Hypothesis 6.2(c) there are C(T) > 0 and Ci(T) > 0 such that for all t e (-oo,T], < C(T)e-Xot < d{T). (6.7) Let a < ti < t2 < T. By (6.6) one has U{-t2) Yk(t2) - U{-h) Yk{t{) = U(-s)f(Yk(s) + g(s))ds. Now it is easy to see from (6.7) and Hypothesis 6.2(c) that there is C(T, a) > 0 such that | |W(-<2)W 2 ) -W(-<i)n(*i ) | | <C(T,a)|< 2 -< a | . Then U(—t) Yk(t) is uniformly equicontinuous on [a,T] so lfc(i) is uniformly equicon-tinuous on [a,T]. Since Yk(t) is uniformly bounded by (6.7), then by the Arzela-Ascoli theorem there is a subsequence (&/) such that Ykl converges uniformly to a continuous function Y on [a, T] . To complete the proof of the Lemma we need to prove that Y(t) is a solution of (6.5). As in the proof of Theorem 4.2 we can show that Y(t) is a solution of the equation Y(t) = U(t + T)Y(-T) + f U{t - s)f(Y(s) + g(s))ds, t > -T. Then it is enough to prove that Y{-T) = f~TU(-T - s)f(Y(s) + g(s))ds. (6.8) J—oo Chapter 6. STATIONARY PROCESSES 77 But Yk(-T) = [~ U(-T-s)f(Yk(s) + g(s))ds J—k = [~TU(-T - s)f(Yk(s) + 5r(5))l [_ fc,_r](.)^. J —oo By Hypothesis 6.1(c) \\U(-T-s)f(Yk(s)+g(s))ll.k,.T](s)\\ is dominated by an integrable function. Since Ykl(s) —• Y(s) and since / is continuous, then by the dominated convergence theorem we get (6.8). Q.E.D Lemma 6.3 Suppose U,f, and gi satisfy the conditions of Lemma 6.2. If Xi, i = 1,2 are solutions of Yi(t) = f p{t-s)f{Yi{s) + g{s))ds, J—oo then there is a constant C(T) > 0 such that \\Y2-Y4l < C(T,gug2) (J'^Wnis) - gi(s)\\2dsy . (6.9) Proof: Define Yk(t) = f U(t-s)f(Yk(s) + g(s))ds, t = 1,2. J — k By Theorem 5.1 we have \\Y2k{t)-Yx\t)\\ < 2 e ( - ^ + 1 ) i / ( / _ t f c e 2 A ' ' s | | 5 2 ( 5 ) - 5 l ( 5 ) | | 2 ^ ) " , (6.10) where / = 2 ( £ e^\\f{Yk{s) + 92(a)) - f(Yk(s) + *(*))||3<fa) * . Chapter 6. STATIONARY PROCESSES 78 First we show that / is uniformly bounded in k. Because < C ( l + \\x\\p) and /5ooc2Ao*ll^-(5)ll2parfa < °°> i 1 ; i s enough to show that f^ke2Xos\\Ykk(s)\\2pds is uniformly bounded in k. But by (4.7) \\Yk(t)\\ < e-A°< f ex°°(l + \\gi(sW)ds, i = 1,2. J—oo By using Fubini's Theorem we can show that fT e2Xos\\Yk(s)\\2pds < fT e2pXou(l + \\gi{u)\\2p2)( [T e2X^-p>ds)du. J—oo J—oo \J u J Then /5fc e2X°s\\Yf:(s)\\2pds is uniformly bounded in k, so there is CX(T) such that / < Ci(T), and we can rewrite (6.10) as \\Y2k{t) - Yk(t)\\2 < 2C1(T)e(- 2X°^ t [J^ e2X"s\\92(s) - <h(*)||2^] * . Since by the proof of Lemma 6.2 Yk'(t) —> Yi(t), then by taking the limit over the subsequence (ki) and taking the supremum on [—T, T] we get (6.9). Q.E.D Remark 6.1 Let <f>(g) : = f j^ ex°s\\g(s)\\rds for g € K. 
Then: (i) if <j)(gi) < N, i = 1,2, £/&ere «s a constant CN > 0 suc/i 1^ 2 — ^ i | | r ^  (6.11) / e2X°°\\g2(s) - 9l(s)\\2)ds J—oo (ii) By Theorem 5.2 equation (6.4) has a unique cadlag adapted solution, and by (i) there is a constant CN > 0 such that on the set where <f>(Vi) < N, i = 1,2, | | X 2 - * i | | 2 T < Cjv^K-O^Vi))*, (6-12) where aV(-, •) is a metric on K. (iii) There is a continuous mapping 0^ : R X K D(R,Hn) such that if Xn(i) is the solution of (6.4), then Xn(t) = ip(t,Vn('))(t) on the set {<f>(Vn) < N}. Chapter 6. STATIONARY PROCESSES 79 6.3 The Main Theorem Theorem 6.1 If f and V satisfy Hypothesis 4-5 and ifV satisfies Hypothesis 6.1, then the solution of (6.1) is a stationary processes. Proof: Since Vt is an if-valued stationary process then Vn(t) : — Jn V(t) is also an ifn-valued stationary process. From (6.4) we have Xn{t + h) = / Un(t + h- s)f(Xn(s))ds + Vn{t + h); J—oo by changing variables we see this is /* Un(t - s)f(Xn(s + h))ds + 6hVn{t). J—oo Then by Remark 6.1 we have Xn(t+h) = if>N(t, (0, (6hVn))(t) on the set {(f>(v) < iV}, and in particular Xn(h) = IPN(0, (dhVn))(0) on the set {(f>(v) < N}. But by Lemma 6.1 QbVn is a D(R, iin)-valued stationary process; since </>(/) = ^ ( 0 , /)(0) is a continuous function from K to Hn then Xn(t) = tl>{6tVn) = ^N(0, (0tVn)(0) is an iin-valued stationary process. Since X(t) is the limit of Xn{t) by Lemma 6.3, then {X(t) : t € R} is also a stationary process. Q.E.D 6.4 The Einstein-Smoluchowski Equation Now consider (6.3) where — WF(x) satisfies Hypothesis 4.5. The stationary solution of (6.3) satisfies the following integral equation: X(t) = ^ /* U(t - s)VF(X(s))ds + /* U(t-s)dW(s). (6.13) By Theorem 5.3 the solution of (6.13) is a limit of solutions of the finite dimensional equations Chapter 6. STATIONARY PROCESSES 80 Xn(t) = Un(t-s)VF(Xn(s))ds + /' Un(t-s)dW(s). (6.14) Z J—oo J—oo The stationary distribution of (6.14) is well-known from Kolmogorov (1937) and can be given explicitly [ see Marcus (1974), (1978)]. But instead of (6.14) we are interested in a slightly different equation. Consider Yn(t) = ^[t Un(t-s)VF(JnYn(s))ds + f* Un(t-s)dW(s) (6.15) Z J—oo J—oo It is clear that JnYn(t) = Xn(t). Since Yn(t) = JnYn(t) + (Yn(t) - JnYn(t)) and Yn(t) - JnYn(t) = /' (/- Jn)U(t-s)(I- Jn)dW(s) J—oo and Xn(t) —> X(t), then we have Yn(t) —* X{t). By Theorem 6.1 Yn(t) is a stationary process. Let M be the stationary Gaussian measure o f /-oo ~ s)dW(s) on H. Then Lemma 6.4 / / U and -VF(x) satisfy Hypothesis 4.5, the stationary distribution ofYn(t) has a Radon-Nikodym derivative exp(—F(Jn.)) fH exp(—F(.))dM(.) with respect to M on H. Proof: See Marcus (1978), Lemma (10). Now we can prove Theorem 6.2 IfU and -^F(x) satisfy Hypothesis 4-5, then the distribution of the solu-tion X(t) of (6.13) has the Radon-Nikodym derivative exp(—F((.)) fH exp(—F(.))dM(.) with respect to M on H. Proof: Since E(\\Yn(t) - X(t)\\) -» 0 it is sufficient to show that lim / \exp(-F(x)) - exp(-F(Jnx))\dM(x) = 0 Chapter 6. STATIONARY PROCESSES 81 since this implies weak convergence. Note that lirn^oo F(Jn.) = F(.) on the set with M-measure equal to 1. Without loss of generality let V(0) = 0. Then the monotonicity of VF(x) ensures that F is nonnegative and exp(—F(.)) < 1. The Lebesgue bounded convergence theorem can now be applied to show that the limit of the integral is equal to 0. Q.E.D Chapter 7 T H E GENERAL SEMILINEAR EQUATION 7.1 Introduction Let H and K be two real separable Hilbert spaces. 
Let $L_2(K, H)$ be the space of Hilbert-Schmidt operators from $K$ to $H$ with Hilbert-Schmidt norm $\|\cdot\|_2$. Let $(\Omega, \mathcal{F}, \mathcal{F}_t, P)$ be a complete stochastic basis with a right continuous filtration. Let $W_t$ be a cylindrical Brownian motion on $K$ with respect to $(\Omega, \mathcal{F}_t, P)$. Let $g : \mathbf{R}^+ \times \Omega \times C(\mathbf{R}^+, H) \to L_2(K, H)$ be a predictable functional on the $H$-valued continuous adapted processes. We say $g$ is a predictable functional if, whenever $X$ and $Y$ are $H$-valued continuous adapted processes and $\tau$ is a stopping time such that $X 1_{[0,\tau)} = Y 1_{[0,\tau)}$, then $1_{[0,\tau]}\, g(\cdot,\cdot,X) = 1_{[0,\tau]}\, g(\cdot,\cdot,Y)$. See Metivier and Pellaumail (1980b).

Consider a semilinear stochastic evolution equation of the form
$$dX_t = A(t)X_t\,dt + f_t(X_t)\,dt + g_t(X_{\cdot})\,dW_t \qquad (7.1)$$
with initial condition $X(0) = X_0$. In the case when $f$ and $g$ are Lipschitz, the existence and uniqueness of the solution of (7.1) have been studied using semigroup theory [see for example Kotelenez (1982, 1984)]. In this chapter we will use semigroup theory to prove the existence and uniqueness of the solution of (7.1) when $-f$ is semimonotone.

Let us write the mild form of (7.1) as the integral equation:
$$X_t = U(t,0)X_0 + \int_0^t U(t,s) f_s(X_s)\,ds + \int_0^t U(t,s) g_s(X)\,dW_s.$$
We are going to study a slightly more general equation:
$$X_t = U(t,0)X_0 + \int_0^t U(t,s) f_s(X_s)\,ds + \int_0^t U(t,s) g_s(X)\,dW_s + V_t, \qquad (7.2)$$
where $V_t$ is a continuous adapted process. The following are the relevant hypotheses concerning $X_0$, $f$, $g$, $A$, $U$ and $V$:

Hypothesis 7.1 There exists a set $G \subset \Omega$ of probability one and constants $q > 1$ and $C > 0$ with the following properties:
(a) $f$ satisfies Hypothesis 4.2, with $\varphi(x) = C(1 + x^q)$, $x \in \mathbf{R}^+$, and the constant $M$ is independent of $\omega \in \Omega$;
(b) $g : \mathbf{R}^+ \times \Omega \times D(\mathbf{R}^+, H) \to L_2(K, H)$ is a predictable functional on the $H$-valued continuous adapted processes;
(c) $\|g(t,\omega,X) - g(t,\omega,Y)\|_2 \le C \sup_{0\le s\le t}\|X_s - Y_s\| = C\,(X - Y)^*_t$ for all $t \in S$, $\omega \in G$, $X, Y \in C(S,H)$;
(d) $A$ and $U$ satisfy Hypotheses 4.1 and 3.1;
(e) $V = \{V_t : t \in S\}$ is an $H$-valued continuous adapted process;
(f) $X_0$ is an $H$-valued $\mathcal{F}_0$-measurable random variable;
(g) for all $p \ge 1$ and all $t \in S$, $E\{\|X_0\|^p\}$, $E\{(V_t^*)^p\}$ and $E\{\sup_{0\le s\le t}\|g_s(0)\|_2^p\}$ are finite.

In this chapter and the following chapter, $C$ will denote a positive constant whose exact value is unimportant and may change from line to line.

7.2 The Main Theorem

Theorem 7.1 If Hypothesis 7.1 is satisfied, then the integral equation (7.2) has a unique
Since (7S(0) is an L2(K, if)-valued predictable process which satisfies Hypothesis 7.1(b), and WT is a /^-valued cylindrical Brownian motion, then JQ gs(0)dWs is an H-valued con-tinuous local martingale with quadratic variation /J ||<7s(0)||2«?s [see Yor (1974)]. By Proposition 3.1 the stochastic convolution integral [tU(t,s)gs(0)dWs Jo Chapter 7. THE GENERAL SEMILINEAR EQUATION 85 is adapted and continuous in t. By Burkholder's inequality (Theorem 3.2) we have E{su?Q<t<Tyy{t,s)gs(0)dWsjP} < Kv. Then E{(£\\9s(0)\\Usy] < TKpE{suVo^T\\gs(0)\\?}<oo for all p > 1 by Hypothesis 7.1(g). Thus the stochastic convolution integral is Lp bounded for all p > 1. Next, U(t, 0)X0 is adapted, continuous and Lp-bounded by (f), (g) and Hypothesis 3.1(c). Finally, Vt is continuous and Zp-bounded by (e) and (g), hence V also satisfies (e) and (g). Since (7.3) and (7.2) are the same equation — only the notation has been changed— then (7.2) has a unique solution iff (7.3) does. Finally, by Lemma 3.1, the map X —> X\ reduces (7.3) to an equivalent equation which A = 0 in Hypothesis 3.1(c). Q.E.D Proof of Theorem •Uniqueness: By Lemma 7.1 we may assume X0, g(s,u>,0) and A are zero. Let X and Y be two adapted continuous solutions of (7.2). Then we have Xt-Yt = l*U{t,a)(UX.)-Uya))ds Jo + fu{t-s){gs{X)-gs{Y))dWs. Jo By Theorem 3.1 ( Ito's inequality) II** - Ytf < 2 f < Xs - Y„ f.(X.) - f.(Y.) > ds Jo + 2 T < Xs - Y„ (gs(X) - gs(Y))dWs > Jo + f \\(gs(X) - gs(Y))\\lds. (7.4) Jo Chapter 7. THE GENERAL SEMILINEAR EQUATION 86 Since —/ is semimonotone,the first term of the right hand side is bounded by 2M ['\\Xa-Ya\\2ds < 2M f* ((X -Y)*)2ds, Jo Jo and the second term is bounded by 2sup 0 < r < t | fT < Xa-Ya,(ga(X)-gs(Y))dWa > |, — Jo and by Hypothesis 7.1(c) the third term is bounded by C JQ((X — Y)*)2ds. Define the stopping time Tn := inf{t: \\XT\\ + \\Yt\\ > n} AT. Then from (7.4) and the above E { ( ( X - Y y t A T n f } < (2M + C)Ey*((X-Yys/,Tn)2ds s u p 0 < r < « A T j f < XA - Ys, (gs(X) - 9a(Y))dWa > \] . (7.5) — Jo + 2E The expectations are all finite since and | |Yi| | are bounded on [0, Tn]. By using Fubini's theorem on the first term, and Lemma 3.2 with p = 1 and K = 2C\ on the second term we have E{((x-YytATn)2} < CfQE[{{X-YyaATn)2ds] + \E{((X-YyATn)2} + CJ*E{((X-YysATn)2ds}, (7.6) so l-E{{{x - r)*ATn)2} <CJ*E{((X - YysATn)2} ds. By Gronwall's inequality, E{(X-YytATJ2 = 0, Vn. Chapter 7. TEE GENERAL SEMILINEAR EQUATION 87 But P{Tn = T) -> 1 so Xt = Yt a.s. Q.E.D • Existence: By Lemma 7.1 we can assume that X0, g(s,u,0) and A are zero. We proceed as in Pardoux (1975). Define X® = 0 and define X™ by induction. Suppose for k = 0, . . . , n that Xk is an adapted continuous process such that (X f c)* £ Lp for all p > 0. Define Vk = Vt+ [tU(t,s)gs(Xk)dWa + M ftU(t,s)Xkds. (7.7) Jo Jo Lemma 7.2 For k < n, Vk is an adapted continuous process, and (Vk)* £ Lp for all P > 0. Proof: The stochastic integral exists since gs(Xk) is predictable and I M * * ) I I 2 < c(xkys by Hypothesis 7.1(c) and the fact that gs(0) = 0. But (Xk)* £ Lp so ||# s(X f c)||2 £ V. Set Mk = fgs(Xk)dWs. Jo This is a continuous H-valued martingale with quadratic variation [M f c] t. By Proposition 3.1, the stochastic convolution integral in (7.7) is adapted and continuous in t, and since (Xk)* £ Lp for all p > 0 then by Theorem 3.2 E{supr<t\\ frU(r,s)dMk\\2p} <oo, -for all p > 0, and E{sup r < J | foU(r, s)Xkds\\2p} < oo. Since V satisfies Hypothesis 7.1(e) and (g) the lemma is proved. Chapter 7. 
THE GENERAL SEMILINEAR EQUATION 88 Q.E.D Now consider = f U(t, s)fs(X:+1)ds + Vtn, (7.8) Jo where for all x G H fs(x) : = fs(x) — Mx. Note that / and Vn satisfy the hypotheses of Corollary 4.1 and —/ is monotone, so (7.8) has a unique cotinuous adapted solution which satisfies l l * r + 1 | |< | | V t " | | + f* \\f.(V.»)\\ds. (7.9) Jo Since / is dominated by y>(x) — ^ (1 + xq), then / is dominated by 2C(1 + xq), so (Xn+1)* < (Vn)* + 2tC{\ + (Vn)f). Since (Vn)* G Lp for all n,p and t G S then (Xn)* G Lp, for all n, p and t. (7.10) Let f" = f\ga(Xn) - g3(Xn~1)dWs Jo and note that x r + i _ X n = / V ( t , 5 ) [ / S ( x ; + 1 ) - / S ( x ; ) ] ^ Jo + M [tU(t,s)(X:-Xr1)ds+ ftU(t,s)dN:. (7.11) Jo Jo Moreover d[Nn]t < \\ga(Xn) - g.iX^Hdt *\ 2 < C((xn- *n _ 1)J dt, so by the Ito inequality of Chapter 3, Chapter 7. THE GENERAL SEMILINEAR EQUATION 89 x?+i-x?( < 2f <x?+i-x:js(x:+i)-fs(x:)>ds j o + 2M f* < A ; + 1 - A ; , A ? - A ; - 1 > ds Jo + 2 / ' < A ; + 1 - A ; , div; > ./O +c/o'{(x--x-');}2* Now —/ is monotone, so J^i) < 0. We can bound 72: h < w fjx™ - x:\w\x: -xri\\ds < 1 { - x » ) ; } 2 + 2M / ; {(x- - X"- ) ; } 2 ds. By Lemma 3.2, for any K > 0, E{mm < j i E - X ' ) T ) + KCE{£((X"-X"->yfds}. Using the bounds of Ii(t) and I-i{t) we can rewrite (7.12) as { ( A , " + 1 - A " ) * } 2 <c£{(xn- X"- 1 ) * } 2 "^ + 2I*(t). Using the above and the bound of I^t), there is C > 0 such that £ J ( ( A N + 1 - A N ) * ) 2 P } < C (K + I) E {((Xn - Xn-)*)2p}ds Set + ^ E { ( x n + i - x n y t } K{t) = E [ { { X r i - X n ) i f P ) (7.12) Chapter 7. TEE GENERAL SEMILINEAR EQUATION 90 and choose K — C. The hn{t) are finite by (7.10), so we can subtract: \hn{t) < C(C + 1) fhn.^ds. Z JO Then from (7.10) there exists Do>0 such that h0(t) < D0 if t < T, and if D = 2C(C + 1), then MO < D f K-t{s)ds. Jo By induction MO ^  A > ^ . n! Thus E (M0)^ < °°, and we conclude that (Xn) is a Cauchy sequence in L2p(il, C(S, H)) for all p > 1. Take p = 1 : there exists a process {Xt, 0 < t < T} such that l i n w £ { s u p t e 5 | | X « - X?\\2} = 0. Then t —> Xt is continuous (it is the uniform limit of continuous functions) and adapted. Moreover E{(x;y} < oo, v P > I . We must show it satisfies the equation (7.3). Consider R(t) := Xt - f* U(t, s)f.(X.)ds - f U{t, s)gs(X)dW3 - Vt. Jo Jo Chapter 7. THE GENERAL SEMILINEAR EQUATION 91 R is well-defined, for both integrals make sense. It is continuous in t. Let us also consider Rn(t) := X ? + 1 - f U{t,s)UX:+l)ds Jo +M fu(t,S){x:+' -x:)ds Jo - [tU(t,s)gs(Xn)dWs-Vt. Jo (Rn(t) = 0, of course). Let x £ H. We claim that < x,R(t) > = 0 a.s. This will do it since then for all x £ H, < x,R(t) > = 0 a.s., which implies R(t) = 0 a.s. Let t range over all rationals and use the continuity of R to see that R(t) = 0, for all t w.p.l. First E{\\X?+1 - Xt\\2} -+ 0 E{< X,X?+1 - X1 >2} 0 or < X, X?+1 > < X, Xt > in L2. Next: <x, ftU(t,s)fs(X:)ds> = f <x,U(t,s)fs(X:)>ds Jo Jo = f <U*(t,s)x,fs(X:)>ds Jo Let ys = U*(t,s)x (U* is the adjoint of U, not the sup, here). Now fs(z) is demicontinuous in z, hence z —» < ys,fs(z) > is continuous. Since X™ —> Xs in X 2 , it also converges in probability, so that < ys, fs{Xs) >^<ya, fs(Xa) > in probability. Since X™ —• Xs in L2(Q,,C(S, H)), then there is a subsequence (n^) such that (X»>)*t - Xt w.p.l Then for large enough k, {Xn»)*< X* + l< co. Chapter 7. THE GENERAL SEMILINEAR EQUATION 92 The convergence is bounded, since <ys,fs(x:*)> < | | y . M | | A ^ | | ) < \\ys\\<p((X«*yt) < | | y s | | ^ + l ) . 
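A finite-dimensional sketch of the reduction behind Remark 7.1: a $K$-valued Brownian motion with nuclear covariance $Q$ can be written as $Q^{1/2}$ applied to a cylindrical Brownian motion, and composing a bounded $g$ with $Q^{1/2}$ produces a Hilbert-Schmidt operator. The truncation level and the covariance below are illustrative choices only.

```python
import numpy as np

# Truncated illustration: W~_t = Q^{1/2} W_t for a cylindrical W and nuclear
# Q = diag(q_k), and ||g Q^{1/2}||_HS <= ||g||_L * ||Q^{1/2}||_HS with
# ||Q^{1/2}||_HS^2 = trace(Q) < infinity.

rng = np.random.default_rng(4)
N, t, n_samples = 200, 1.0, 5000
q = 1.0 / np.arange(1, N + 1) ** 2            # nuclear covariance: trace(Q) < oo
Qhalf = np.diag(np.sqrt(q))

# sample W~_t = Q^{1/2} W_t at a fixed time t
W = rng.normal(scale=np.sqrt(t), size=(n_samples, N))   # cylindrical coordinates
Wtilde = W @ Qhalf

print("empirical Var<W~_t, e_k>:", np.round(Wtilde.var(axis=0)[:5], 3))
print("target        t * q_k   :", np.round(t * q[:5], 3))

g = rng.normal(size=(N, N))
g /= np.linalg.norm(g, 2)                     # a bounded operator with ||g||_L = 1
print("||g Q^{1/2}||_HS =", np.linalg.norm(g @ Qhalf, 'fro'),
      "<= ||Q^{1/2}||_HS =", np.sqrt(q.sum()))
```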
Remark 7.1 Theorem 7.1 remains valid if we replace the cylindrical Brownian motion $W$ by a $K$-valued Brownian motion $W_t$, and if we let the predictable functional $g$ take values in $L(K,H)$ instead of $L_2(K,H)$.

Proof: This comes from the fact that a $K$-valued Brownian motion $W_t$ has a covariance $Q$ which is nuclear [see Metivier (1982)], so we can write $W_t = Q^{\frac12}\tilde W_t$, where $\tilde W_t$ is a cylindrical Brownian motion on $K$. Now $Q^{\frac12}$ is a Hilbert-Schmidt operator on $K$, so if $g_s(x)$ is $L(K,H)$-valued then $g_s(\cdot)Q^{\frac12}$ is an $L_2(K,H)$-valued predictable functional satisfying Hypothesis 7.1, and we can apply Theorem 7.1. Q.E.D.

7.3 Some Examples

Example (7.1): Let $D$, $A$, $B$, $\partial D$, and $W$ be as in Example (4.2). Consider the initial-boundary-value problem
$$\begin{cases} \dfrac{\partial u}{\partial t} + Au = f_t(u) + g_t(u)\dot W & \text{on } D\times[0,\infty), \\ Bu = 0 & \text{on } \partial D\times[0,\infty), \\ u(0,x) = 0 & \text{on } D. \end{cases} \qquad (7.13)$$
Since $\dot W$ can be considered as a Brownian motion $W_t$ on a Sobolev space $H_{-p}$, $p > \frac d2$ [see Walsh (1986), Chapter 4, page 4.11], we can let $K = H_{-p}$ for some $p > \frac d2$ and let $H$ be the Sobolev space $H_n$ for a fixed $n \in \mathbf{Z}$. Let $g_\cdot(\cdot): D(S,H_n) \to L(H_{-p},H_n)$ satisfy Hypothesis 7.1(b), (c), and (g), let $f_\cdot(\cdot): H_n \to H_n$ satisfy Hypothesis 7.1(a), and rewrite (7.13) as
$$du_t = -Au_t\,dt + f_t(u)\,dt + g_t(u)\,dW_t.$$
Since $-A$ and $U$ satisfy Hypothesis 7.1(d), by Remark 7.1 there is a unique continuous solution with values in $H_n$.
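A possible numerical reading of Example (7.1), not taken from the thesis, is a spectral (Galerkin) simulation of the mild solution when $D = (0,\pi)$, $A = -\partial^2/\partial x^2$ with Dirichlet boundary conditions and the noise is truncated to finitely many modes. The truncation level, the coefficients $f$, $g$ and the time step below are assumptions chosen only to make the sketch run; the exponential-Euler step is one common way to treat the stiff linear part.

```python
# A rough numerical sketch of Example (7.1), not from the thesis: spectral Galerkin
# simulation of  du = -A u dt + f(u) dt + g(u) dW  on D = (0, pi), Dirichlet conditions,
# A = -d^2/dx^2 with eigenfunctions sqrt(2/pi) sin(kx) and eigenvalues k^2.
import numpy as np

rng = np.random.default_rng(1)
K_modes, M_grid, T, N = 64, 256, 0.5, 5000       # truncation, grid, horizon, steps
dt = T / N
w = np.pi / M_grid                                # grid spacing, used as quadrature weight
x = np.linspace(0.0, np.pi, M_grid, endpoint=False)[1:]
k = np.arange(1, K_modes + 1)
lam = k.astype(float) ** 2                        # eigenvalues of A
phi = np.sqrt(2.0 / np.pi) * np.sin(np.outer(x, k))   # eigenfunctions on the grid

f = lambda u: u - u**3                            # -f is semimonotone (assumption)
g = lambda u: 0.1 * np.cos(u)                     # Lipschitz noise coefficient (assumption)

u_hat = np.zeros(K_modes)                         # u(0, x) = 0
E = np.exp(-lam * dt)                             # semigroup factor e^{-lam dt} per mode

for n in range(N):
    u = phi @ u_hat                               # current solution on the grid
    drift_hat = w * phi.T @ f(u)                  # project f(u) onto the eigenbasis
    # truncated cylindrical noise: independent N(0, dt) increments in each retained mode
    field = phi @ rng.normal(0.0, np.sqrt(dt), K_modes)
    noise_hat = w * phi.T @ (g(u) * field)
    # exponential-Euler step of the mild formulation
    u_hat = E * (u_hat + dt * drift_hat + noise_hat)

print("sup-norm of u(T):", np.abs(phi @ u_hat).max())
```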
Definition 7.1 An $\mathbf{R}^N$-valued function $f(x,u)$ of two variables $x \in D \subset \mathbf{R}^d$, $u \in \mathbf{R}^N$ is said to satisfy the Caratheodory condition if it is continuous with respect to $u$ for almost all $x \in D$ and measurable with respect to $x$ for all values of $u$.

Example 7.2 (Zakai Equation) Let $D$, $A$, $B$, and $\partial D$ be as in Example (7.1). Let $W_i$, $i=1,\dots,l$, be independent standard scalar Brownian motions. Now consider the initial-boundary-value problem
$$\begin{cases} \dfrac{\partial u}{\partial t} + Au = f(x,u(t,x)) + \displaystyle\sum_{i=1}^{l} g_i(x,u(t,x))\dot W_i(t) & \text{on } D\times[0,\infty), \\ Bu = 0 & \text{on } \partial D\times[0,\infty), \\ u(0,x) = 0 & \text{on } D, \end{cases}\qquad(7.14)$$
where $f$ and $g_i$ satisfy the following:

Hypothesis 7.2 (a) $f,\,g_i: D\times\mathbf{R}\to\mathbf{R}$, $i=1,\dots,l$, satisfy the Caratheodory condition;
(b) there exist a function $a \in L^2(D)$ and a constant $C>0$ such that
$$|f(x,u)| \le a(x) + C|u|,\qquad |g_i(x,u)| \le a(x) + C|u|, \qquad u\in\mathbf{R},\ x\in D\subset\mathbf{R}^d,\ i=1,\dots,l;$$
(c) the $g_i(x,\cdot)$, $i=1,\dots,l$, are uniformly Lipschitz, i.e. there is a constant $C>0$ such that
$$|g_i(x,u_2) - g_i(x,u_1)| \le C|u_2-u_1|, \qquad \forall x\in D,\ u_1,u_2\in\mathbf{R},\ i=1,\dots,l;$$
(d) $-f(x,\cdot)$ is semimonotone, i.e. $\exists M>0$ such that
$$\big(f(x,u_2)-f(x,u_1)\big)(u_2-u_1) \le M(u_2-u_1)^2.$$

Define $H = L^2(D)$ and let $\|\cdot\|$ be the $L^2$-norm. By Example 4.1, the operator $A$ (with boundary conditions) generates a contraction semigroup $U(t)$ on $H$. Define $g_i$ and $f: L^2(D)\to L^2(D)$ by
$$(f(u))(x) = f(x,u(x)),\qquad (g_i(u))(x) = g_i(x,u(x)),\qquad u\in L^2(D),\ x\in D\subset\mathbf{R}^d,\ i=1,\dots,l.$$
Since $f$ and $g_i$ satisfy Hypothesis 7.2(a) and (b), by Theorem (2.1) of Krasnosel'skii (1964) the maps $f$ and $g_i$, $i=1,\dots,l$, are continuous and there is $C>0$ such that $\|f(u)\| \le C(1+\|u\|)$ and $\|g_i(u)\| \le C(1+\|u\|)$. Since the $g_i$, $i=1,\dots,l$, satisfy Hypothesis 7.2(c),
$$\|g_i(u_2) - g_i(u_1)\|^2 = \int_D \big[g_i(x,u_2(x)) - g_i(x,u_1(x))\big]^2\,dx \le C^2\int_D \big(u_2(x)-u_1(x)\big)^2\,dx = C^2\|u_2-u_1\|^2.$$
Since $f$ satisfies Hypothesis 7.2(d),
$$\langle f(u_2) - f(u_1),\,u_2-u_1\rangle = \int_D \big(f(x,u_2(x)) - f(x,u_1(x))\big)\big(u_2(x)-u_1(x)\big)\,dx \le M\int_D \big(u_2(x)-u_1(x)\big)^2\,dx = M\|u_2-u_1\|^2.$$
Define a map $g = (g_1,\dots,g_l)$ from $H = L^2(D)$ to $(L^2(D))^l \simeq L(\mathbf{R}^l, L^2(D))$. Then $K = \mathbf{R}^l$ and we can write (7.14) as
$$\begin{cases} du(t) = -Au(t)\,dt + f(u(t))\,dt + \displaystyle\sum_{i=1}^l g_i(u(t))\,dW_i(t), \\ u(0) = 0. \end{cases}\qquad(7.15)$$
Since $-A$, $U$, $f$ and $g$ satisfy the conditions of Remark 7.1, there is a unique mild continuous adapted solution of (7.15) with values in $H = L^2(D)$, i.e. the SPDE (7.14) has a unique continuous mild solution with values in $L^2(D)$.

7.4 Initial-Value Problem for the Semilinear Hyperbolic System

Example (7.3): Consider the following initial-value problem for a system of semilinear stochastic partial differential equations:
$$\begin{cases} \dfrac{\partial u}{\partial t} = \displaystyle\sum_{j=1}^n a_j(x)\frac{\partial u}{\partial x_j} + b(x)u + f(x,u) + g(x,u)\dot W(t), & x\in\mathbf{R}^n,\ t\ge0, \\ u(0,x) = u_0(x),\qquad u_0(x)\in L^2(\mathbf{R}^n)^N,\ x\in\mathbf{R}^n, \end{cases}\qquad(7.16)$$
where $W$ is an $m$-dimensional Brownian motion, $u = (u_1,\dots,u_N)^t$ (the superscript $t$ denotes a transpose) is the set of unknowns, and for each $j$ and $x$, $a_j(x)$ and $b(x)$ are square matrices of order $N$. We will assume the following:

Hypothesis 7.3 (a) The matrices $a_j(x)$, $j=1,\dots,n$, $x\in\mathbf{R}^n$, are symmetric;
(b) each component of $a_j$, $j=1,\dots,n$, and its first-order derivatives are continuous and bounded, and $b$ is continuous and bounded;
(c) $f: \mathbf{R}^n\times\mathbf{R}^N\to\mathbf{R}^N$ satisfies the Caratheodory condition;
(d) $-f$ is semimonotone in the second variable, i.e. $\exists M>0$ such that for all $x\in\mathbf{R}^n$ and all $u_1,u_2\in\mathbf{R}^N$,
$$\langle f(x,u_2)-f(x,u_1),\,u_2-u_1\rangle \le M\|u_2-u_1\|^2;$$
(e) there exist a function $a\in L^2(\mathbf{R}^n)$ and a constant $C>0$ such that
$$\|f(x,u)\| \le a(x) + C\|u\|,\qquad \|g(x,u)\|_{L(\mathbf{R}^m,\mathbf{R}^N)} \le a(x) + C\|u\|,\qquad x\in\mathbf{R}^n,\ u\in\mathbf{R}^N;$$
(f) $g: \mathbf{R}^n\times\mathbf{R}^N\to L(\mathbf{R}^m,\mathbf{R}^N)$ satisfies the Caratheodory condition and is uniformly Lipschitz in the second variable.

Let $H = L^2(\mathbf{R}^n)^N$, $K = \mathbf{R}^m$, and define a closed unbounded operator $A$ on $H$ by
$$Au = \sum_{j=1}^n a_j(x)\frac{\partial u}{\partial x_j} + b(x)u,\qquad u\in D(A)\subset H.$$
By Theorem (3.51), page 75 of Tanabe (1979), $A$ generates a semigroup $U(t)$ on $H$ which satisfies all of the conditions of Theorem 7.1. Now define $f: H\to H$ by $f(u)(x) = f(x,u(x))$, $u\in H$, $x\in\mathbf{R}^n$, and $g: H\to L(K,H)$ by $g(u)(x) = g(x,u(x))$. As in Example 7.2, $f$ and $g$ are continuous and there is $C>0$ such that
$$\|f(u)\| \le C(1+\|u\|),\qquad \|g(u)\|_{L(K,H)} \le C(1+\|u\|).$$
Moreover, $-f$ is semimonotone on $H$ and $g$ is uniformly Lipschitz. Then $f$ and $g$ satisfy the conditions of Remark 7.1 and we can write (7.16) as
$$\begin{cases} du(t) = Au(t)\,dt + f(u(t))\,dt + g(u(t))\,dW_t, \\ u(0) = u_0. \end{cases}\qquad(7.17)$$
Since $A$, $f$, $g$, $u_0$, and $W$ satisfy the conditions of Remark 7.1, equation (7.17) has a continuous adapted mild solution with values in $H = L^2(\mathbf{R}^n)^N$. Thus problem (7.16) has a unique mild continuous adapted solution with values in $L^2(\mathbf{R}^n)^N$.

Remark 7.2 We assumed in Examples 7.2 and 7.3 that $f$, $g$ and $g_i$ did not depend on $\omega\in\Omega$ or $t\in S$. In fact we could have let them depend on $\omega$ and $t$; this would not have involved any essential modification of the proof.
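Referring back to Example 7.2, a hedged numerical sketch (not part of the thesis) of the reaction-diffusion system (7.14)-(7.15) is given below. It discretizes $A = -\partial^2/\partial x^2$ on $D=(0,1)$ by finite differences with Dirichlet conditions, drives the equation by $l$ independent scalar Brownian motions, and treats the stiff linear part semi-implicitly; the reaction term $f$, the coefficients $g_i$, the mesh and the time step are assumptions. The semi-implicit step is chosen only because it keeps the scheme stable without a severe time-step restriction.

```python
# A hedged sketch of the Zakai-type problem (7.14)-(7.15), not from the thesis:
# finite differences for A = -d^2/dx^2 on D = (0, 1), Dirichlet conditions,
# l independent scalar Brownian motions, semimonotone reaction, Lipschitz noise.
import numpy as np

rng = np.random.default_rng(2)
l, M, T, N = 3, 100, 0.5, 5000                   # noises, interior points, horizon, steps
dx, dt = 1.0 / (M + 1), T / N

# discrete Laplacian L; the drift term -Au corresponds to +L u, so U(t) = e^{-tA}
# is a contraction semigroup on L^2(D)
main = -2.0 * np.ones(M) / dx**2
off = np.ones(M - 1) / dx**2
L = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

f = lambda u: u - u**3                            # -f(x, .) is semimonotone (assumption)
g = [lambda u, i=i: 0.05 * (i + 1) * np.sin(u) for i in range(l)]   # Lipschitz g_i

u = np.zeros(M)                                   # u(0, x) = 0
I = np.eye(M)
solve_step = np.linalg.inv(I - dt * L)            # semi-implicit step for the stiff part

for n in range(N):
    dW = rng.normal(0.0, np.sqrt(dt), l)          # l independent increments
    noise = sum(g[i](u) * dW[i] for i in range(l))
    u = solve_step @ (u + dt * f(u) + noise)      # implicit in A, explicit in f and g_i

print("L2 norm of u(T):", np.sqrt(dx) * np.linalg.norm(u))
```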
7.5 Second Order Equations

Let us consider the semilinear Cauchy problem on the Hilbert space $H$, written formally as
$$\begin{cases} \dfrac{d^2x}{dt^2} + Ax(t) = f\Big(x(t),\dfrac{dx}{dt}\Big) + g\Big(x(t),\dfrac{dx}{dt}\Big)\dot W_t, \\ x(0) = x_0, \qquad \dfrac{dx}{dt}\Big|_{t=0} = y_0, \end{cases}\qquad(7.18)$$
where $W_t$ is a cylindrical Brownian motion on $K$. Let $-A$, $f$, and $g$ satisfy the following:

Hypothesis 7.4 (a) $A$ satisfies Hypothesis 4.3;
(b) there are $p > 0$ and $C > 0$ such that $f: D(A^{\frac12})\times H\to H$ satisfies Hypothesis 4.4 with $\varphi(x) = C(1+x^p)$, $x\in\mathbf{R}^+$;
(c) $g: D(A^{\frac12})\times H\to L_2(K,H)$ is uniformly Lipschitz, i.e. $\exists C>0$ such that
$$\|g(x,y) - g(\bar x,\bar y)\|_2^2 \le C\big(\|A^{\frac12}(x-\bar x)\|^2 + \|y-\bar y\|^2\big).$$
As in Chapter 4 we define
$$X_t = \begin{pmatrix} x_t \\ \dfrac{dx_t}{dt} \end{pmatrix} \qquad\text{and}\qquad \mathcal{A} = \begin{pmatrix} 0 & I \\ -A & 0 \end{pmatrix}$$
on the Hilbert space $\mathcal{K} = D(A^{\frac12})\times H$. We can rewrite (7.18) as
$$\begin{cases} dX_t = \mathcal{A}X_t\,dt + F(X_t)\,dt + G(X_t)\,dW_t, \\ X(0) = X_0, \end{cases}\qquad(7.19)$$
where
$$X_0 = \begin{pmatrix} x_0 \\ y_0\end{pmatrix},\qquad F(x,y) = \begin{pmatrix} 0 \\ f(x,y)\end{pmatrix},\qquad G(x,y) = \begin{pmatrix} 0 \\ g(x,y)\end{pmatrix}.$$
Note that $G: \mathcal{K}\to L_2(K,\mathcal{K})$, and $W_t$ is still a cylindrical Brownian motion on $K$. Now $\mathcal{A}$ satisfies Hypothesis 4.3 and, by Chapter 4, it also satisfies Hypothesis 4.1 and Hypothesis 3.1, so it satisfies Hypothesis 7.1(d). Since $F$ is bounded by a polynomial, by Proposition 4.1 it satisfies Hypothesis 7.1(a). Since $G$ is a uniformly Lipschitz operator, it satisfies Hypothesis 7.1(b), (c) and (g). Then all conditions of Theorem 7.1 are satisfied and we have:

Proposition 7.1 If $A$, $f$, and $g$ satisfy Hypothesis 7.4, then equation (7.19) has a unique mild solution such that $x_t \in C(S, D(A^{\frac12})) \cap C^1(S,H)$, i.e. the mild solution of (7.19) is a continuous process in $D(A^{\frac12})$ and a differentiable process in $H$.

Example 7.4: Let $D$, $A$, $B$ and $W$ be as in Examples 4.2 and 4.3. Consider a mixed problem of the form
$$\begin{cases} \dfrac{\partial^2 u}{\partial t^2} + Au = f\Big(u,\dfrac{\partial u}{\partial t}\Big) + g\Big(u,\dfrac{\partial u}{\partial t}\Big)\dot W & \text{on } D\times[0,\infty), \\ Bu = 0 & \text{on } \partial D\times[0,\infty), \\ u(x,0) = 0,\qquad \dfrac{\partial u}{\partial t}(x,0) = 0 & \text{on } D, \end{cases}\qquad(7.20)$$
where $n\in\mathbf{Z}$, $g: H_{n+1}\times H_n\to L(K, H_{n+1}\times H_n)$ is uniformly Lipschitz and $f: H_{n+1}\times H_n\to H_n$. As in Example 4.2, we consider $\dot W$ as a Brownian motion $W_t$ on the Sobolev space $H_{-p}$, for some $p>\frac d2$. Now $A$ is a strictly positive definite self-adjoint operator on $H_n$. As in Example (4.3) we can write (7.20) as the following Cauchy problem on the Sobolev space $H_n$:
$$\begin{cases} du_t = \dot u_t\,dt, \\ d\dot u_t = -Au_t\,dt + f(u_t,\dot u_t)\,dt + g(u_t,\dot u_t)\,dW_t, \\ u(0) = 0,\qquad \dot u(0) = 0. \end{cases}\qquad(7.21)$$
Now $f$, $g$ and $A$ satisfy the conditions of Proposition 7.1. Then (7.21) has a unique continuous mild solution $u_t\in C(S,H_{n+1})$ and, moreover, $u_t\in C^1(S,H_n)$. Q.E.D.
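To close the chapter, here is a speculative numerical companion to Section 7.5, not taken from the thesis: a stochastic wave equation of the form (7.18) on $(0,\pi)$, written as the first-order system (7.19) for $X = (u, u_t)$ and advanced mode by mode in the sine eigenbasis of $A = -\partial^2/\partial x^2$. The linear part is rotated exactly per mode (it generates a group, not a contraction in the parabolic sense), and the semilinear and noise parts are added by splitting. The truncation level, the stand-ins for $f$ and $g$, and the step size are all assumptions.

```python
# A speculative sketch for Section 7.5 (not in the thesis): u_tt + A u = f(u, u_t) + g W'
# on (0, pi), as the first-order system for X = (u, u_t) in the sine eigenbasis of A.
import numpy as np

rng = np.random.default_rng(3)
K_modes, T, N = 32, 1.0, 20000
dt = T / N
k = np.arange(1, K_modes + 1)
om = k.astype(float)                     # sqrt of the eigenvalues of A (frequencies)

a = np.zeros(K_modes)                    # modal coefficients of u
b = np.zeros(K_modes)                    # modal coefficients of u_t

def F(a, b):
    """A simple globally Lipschitz drift applied mode by mode (illustrative stand-in for f)."""
    return -0.5 * b - np.tanh(a)

def G():
    """Modal noise intensities, decaying in k so the noise stays Hilbert-Schmidt (assumption)."""
    return 0.2 / (1.0 + k)

for n in range(N):
    dW = rng.normal(0.0, np.sqrt(dt), K_modes)
    # exact rotation with the group generated by [[0, I], [-A, 0]], mode by mode ...
    c, s = np.cos(om * dt), np.sin(om * dt)
    a, b = c * a + (s / om) * b, -om * s * a + c * b
    # ... then add the semilinear and noise parts in the velocity component only
    b = b + dt * F(a, b) + G() * dW

# energy-type norm of X = (u, u_t) in D(A^{1/2}) x H
print("energy norm at T:", np.sqrt(np.sum((om * a) ** 2 + b ** 2)))
```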
Chapter 8

GENERALIZATION AND THE CONTINUITY

8.1 Introduction

In this chapter we first generalize Theorem 7.1 by relaxing the $L^p$-boundedness Hypothesis 7.1(g). Then we prove a theorem about the continuity of the solution of the integral equation (7.2) with respect to a parameter. We also give a bound for the $p$th moments of the solution of (7.2). The continuity and smoothness of the solution of a stochastic equation depending on a parameter have been well studied by several authors [see e.g. Emery (1978)]. Metivier (1982) has proved continuity and smoothness of the solution of an $H$-valued stochastic differential equation of Lipschitz type with respect to a parameter. We will generalize his result to evolution equations, i.e. we will prove that the solution of (7.2) changes continuously when any or all of $V$, $f$, $g$ and $X_0$ are varied. This is also a generalization of Theorem 5.1.

8.2 Boundedness of the Solutions

Lemma 8.1 Let $p \ge 1$. If $X_t$ is a solution of (7.2) and if Hypothesis 7.1 is satisfied, then
$$E\big\{(X^*_T)^{2p}\big\} \le C\Big(1 + E\big\{\|X_0\|^{2p}\big\} + E\big\{(V^*_T)^{2pq}\big\}\Big). \qquad(8.1)$$
In particular $X^* \in L^p$ for all $p \ge 1$.

Proof: Without loss of generality we can assume that $\lambda = 0$ in Hypothesis 3.1(c) and that $g_s(0) = 0$ (by Lemma 7.1). Define $Y_t = X_t - V_t$. Then we can rewrite (7.2) as
$$Y_t = U(t,0)X_0 + \int_0^t U(t,s)f_s(X_s)\,ds + \int_0^t U(t,s)g_s(X)\,dW_s.$$
By the Ito inequality of Chapter 3,
$$\|Y_t\|^2 \le \|X_0\|^2 + 2\int_0^t \langle Y_s, f_s(X_s)\rangle\,ds + 2\int_0^t \langle Y_s, dN_s\rangle + [N]_t, \qquad(8.2)$$
where $N_t = \int_0^t g_s(X)\,dW_s$ is an $H$-valued martingale. Now
$$2\int_0^t \langle Y_s, f_s(X_s)\rangle\,ds = 2\int_0^t \langle Y_s, f_s(Y_s+V_s) - f_s(V_s)\rangle\,ds + 2\int_0^t \langle Y_s, f_s(V_s)\rangle\,ds.$$
Since $f_s$ is semimonotone with parameter $M$ and is bounded by $\varphi(x) = C(1+x^q)$ for some $q\ge1$, the right-hand side of the above equation is
$$\le 2M\int_0^t \|Y_s\|^2\,ds + 2CTY_t^*\big(1+\{V_t^*\}^q\big) \le 2M\int_0^t (Y_s^*)^2\,ds + \tfrac14(Y_t^*)^2 + 2(2CT)^2\big(1+\{V_t^*\}^{2q}\big),$$
so we can rewrite (8.2) as
$$\tfrac34(Y_t^*)^2 \le \|X_0\|^2 + 2M\int_0^t (Y_s^*)^2\,ds + C\big(1+\{V_t^*\}^{2q}\big) + 2\sup_{0\le r\le t}\Big|\int_0^r \langle Y_s,dN_s\rangle\Big| + [N]_t.$$
Using $(a_1+a_2+\dots+a_5)^p \le 5^p(a_1^p+\dots+a_5^p)$ for $p\ge1$, taking expectations and using Fatou's lemma, we see that there is $C>0$ such that
$$E\{(Y_t^*)^{2p}\} \le C\Big\{1 + E\big\{\|X_0\|^{2p}\big\} + \int_0^t E\{(Y_s^*)^{2p}\}\,ds + E\big\{\{V_T^*\}^{2pq}\big\} + E\big\{[N]_t^p\big\} + E\Big(\sup_{0\le r\le t}\Big|\int_0^r\langle Y_s,dN_s\rangle\Big|^p\Big)\Big\}.$$
Using Lemma 3.2 on the last term of the above inequality, we see that for all $K>0$ this is
$$\le C\Big[1 + E\big\{\|X_0\|^{2p}\big\} + \int_0^t E\{(Y_s^*)^{2p}\}\,ds + E\{(V_T^*)^{2pq}\} + \tfrac1K E\{(Y_t^*)^{2p}\} + \Big(1+\tfrac{C_p}{K}\Big)E\big\{[N]_t^p\big\}\Big].$$
Choose $K = 2C$ and note that $E\{(Y_T^*)^{2p}\} < \infty$. Then
$$\tfrac12 E\{(Y_T^*)^{2p}\} \le C\Big\{1 + E\{\|X_0\|^{2p}\} + \int_0^T E\{(Y_s^*)^{2p}\}\,ds + E\{(V_T^*)^{2pq}\} + E\big\{[N]_T^p\big\}\Big\}. \qquad(8.3)$$
But
$$[N]_t = \int_0^t \|g_s(X)\|_2^2\,ds,$$
so
$$E\{[N]_t^p\} \le E\Big\{\Big(\int_0^t \|g_s(X)\|_2^2\,ds\Big)^p\Big\} \le C\Big[\int_0^t E\big\{(Y_s^*)^{2p}\big\}\,ds + E\big\{(V_T^*)^{2p}\big\}\Big]$$
by Hypothesis 7.1(c) and the fact that $g_s(0) = 0$. Since there is $C>0$ such that
$$E\{(V_T^*)^{2p}\} \le C\big(1 + E\{(V_T^*)^{2pq}\}\big),$$
we can rewrite (8.3) as
$$E\{(Y_t^*)^{2p}\} \le C\Big\{1 + E\{\|X_0\|^{2p}\} + \int_0^t E\{(Y_s^*)^{2p}\}\,ds + E\{(V_T^*)^{2pq}\}\Big\}.$$
By Gronwall's inequality,
$$E\{(Y_T^*)^{2p}\} \le e^{CT}\Big[1 + E\{\|X_0\|^{2p}\} + E\{(V_T^*)^{2pq}\}\Big].$$
But $(X_t^*)^{2p} \le 2^{2p}\big\{(Y_t^*)^{2p} + (V_t^*)^{2p}\big\}$, so there is $C_T>0$ such that
$$E\{(X_T^*)^{2p}\} \le C_T\Big\{1 + E\{\|X_0\|^{2p}\} + E\{(V_T^*)^{2pq}\}\Big\}.$$
Q.E.D.

8.3 Generalization of Theorem 7.1

In this section we relax Hypothesis 7.1(g) as follows.

Hypothesis 8.1 (a) Let $X_0$, $f$, $g$, $A$, $U$ and $V$ satisfy Hypothesis 7.1(a)-(f);
(b) $E\{\|X_0\|^2\}$, $E\{(V_t^*)^{2q}\}$ and $E\{\sup_{0\le s\le t}\|g_s(0)\|_2^2\}$ are bounded.

Theorem 8.1 If Hypothesis 8.1 is satisfied, then the integral equation (7.2) has a unique continuous adapted strong solution with $E\{(X^*)^2\} < \infty$. Moreover it satisfies (8.1).

Proof: Uniqueness is trivial from Theorem 7.1.

Existence: Just as in Theorem 7.1 we can assume without loss of generality that $g_s(0) = 0$. Define the stopping time
$$T_n := \inf\{t: \|V_t\| > n\}\wedge T$$
and define $V_t^n := V_{t\wedge T_n}$ and $X_0^n := X_0\,1_{\{\omega:\|X_0\|\le n\}}$. Now consider the integral equation
$$X_t^n = U(t,0)X_0^n + \int_0^t U(t,s)f_s(X_s^n)\,ds + \int_0^t U(t,s)g_s(X^n)\,dW_s + V_t^n. \qquad(8.4)$$
Since $X_0^n$ and $V_t^n$ are bounded in norm by $n$, they satisfy Hypothesis 7.1(g), so all of the conditions of Theorem 7.1 are satisfied, and there is a unique continuous solution on $S = [0,T]$. Define $S_n := T_n\,1_{\{\omega:\|X_0\|\le n\}}$. Note that $V_t^n = V_t^{n+1}$ and $X_0^n = X_0^{n+1}$ on $[0,S_n]$, so by uniqueness $X_t^{n+1} = X_t^n$ if $t\le S_n$. Now by Lemma 8.1 we have
$$E\big\{((X^n)_T^*)^2\big\} \le C\Big\{1 + E\{\|X_0^n\|^2\} + E\{((V^n)_T^*)^{2q}\}\Big\}.$$
But $(V^n)_t^* \le V_t^*$ and $\|X_0^n\| \le \|X_0\|$, so we have
$$E\big\{((X^n)_T^*)^2\big\} \le C\Big\{1 + E\{\|X_0\|^2\} + E\{(V_T^*)^{2q}\}\Big\} < \infty,$$
by Hypothesis 8.1(b). Define $X_t := \lim_{n\to\infty} X_t^n$. Since $X_t = X_t^n$ for $0\le t\le S_n$, we have $X_t^* = (X^n)_t^*$ for $0\le t\le S_n$, and
$$E\big\{1_{\{0\le t\le S_n\}}(X_t^*)^2\big\} = E\big\{1_{\{0\le t\le S_n\}}((X^n)_t^*)^2\big\} \le E\big\{((X^n)_t^*)^2\big\} \le C\Big[1 + E\{\|X_0\|^2\} + E\{(V_t^*)^{2q}\}\Big] < \infty.$$
Since $P\{S_n = T\}\to1$, this implies that
$$E\{(X_t^*)^2\} \le C\Big\{1 + E\{\|X_0\|^2\} + E\{(V_t^*)^{2q}\}\Big\}.$$
Now since $X^n$ is the solution of (8.4) we have
$$X_t^n\,1_{\{t\le S_n\}} = 1_{\{t\le S_n\}}\Big[U(t,0)X_0^n + \int_0^t U(t,s)f_s(X_s^n)\,ds + \int_0^t U(t,s)g_s(X^n)\,dW_s + V_t^n\Big]. \qquad(8.5)$$
Since $X_t = X_t^n$, $V_t = V_t^n$ and $X_0^n = X_0$ on $[0,S_n]$, we can write (8.5) as
$$X_t\,1_{\{t\le S_n\}} = 1_{\{t\le S_n\}}\Big[U(t,0)X_0 + \int_0^t U(t,s)f_s(X_s)\,ds + \int_0^t U(t,s)g_s(X)\,dW_s + V_t\Big].$$
Since $P\{S_n = T\}\to1$, this proves that $X_t$ is a solution of (7.2) on $[0,T]$.

To complete the proof of the theorem we need to prove (8.1). First we show that without loss of generality we can assume $g_s(0) = 0$. If $g_s(0)\ne0$ we define $\bar g_s(x) = g_s(x) - g_s(0)$ and
$$\bar V_t = \int_0^t U(t,s)g_s(0)\,dW_s + V_t.$$
Then
$$E\big\{(\bar V_t^*)^{2pq}\big\} \le C\Big\{E\Big(\sup_{0\le r\le t}\Big\|\int_0^r U(r,s)g_s(0)\,dW_s\Big\|^{2pq}\Big) + E\big((V_t^*)^{2pq}\big)\Big\}. \qquad(8.6)$$
By Theorem 3.2 (Burkholder's inequality) there is $C>0$ such that the first term on the right-hand side of (8.6) is bounded by $CE\big[\int_0^t \|g_s(0)\|_2^{2pq}\,ds\big]$. Hence the bound obtained for $\bar g$ and $\bar V$ shows that $X_t$ satisfies (8.1) even when $g_s(0)\ne0$.

So assume $g_s(0) = 0$. By Lemma 8.1 we have
$$E\big\{((X^n)_t^*)^{2p}\big\} \le C\Big\{1 + E\{\|X_0^n\|^{2p}\} + E\{((V^n)_t^*)^{2pq}\}\Big\}.$$
But $\|X_0^n\| \le \|X_0\|$ and $(V^n)^* \le V^*$, so
$$E\big\{1_{\{t\le S_n\}}(X_t^*)^{2p}\big\} = E\big\{1_{\{t\le S_n\}}((X^n)_t^*)^{2p}\big\} \le E\big\{((X^n)_t^*)^{2p}\big\} \le C\Big\{1 + E\{\|X_0\|^{2p}\} + E\{(V_T^*)^{2pq}\}\Big\}.$$
Since $P\{S_n = T\}\to1$ we have
$$E\{(X_T^*)^{2p}\} \le C\Big\{1 + E\{\|X_0\|^{2p}\} + E\{(V_T^*)^{2pq}\}\Big\}.$$
Q.E.D.

Remark 8.1 Let $p\ge1$. It follows from (8.1) that if $E\{\|X_0\|^{2p}\} < \infty$, $E\{(V^*)^{2pq}\} < \infty$ and $E\{\int_0^T \|g_s(0)\|_2^{2p}\,ds\} < \infty$, then $X_t\in L^{2p}$.
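As an illustrative aside (not from the thesis), the moment bound (8.1) can be probed by Monte Carlo simulation in a scalar analogue: one estimates $E[(X_T^*)^2]$ for several values of $E\|X_0\|^2$ and compares the growth with the affine upper bound. The drift, the noise coefficient, the bounded process $V$ and all constants below are assumptions.

```python
# An illustrative Monte Carlo check (not from the thesis) of a bound of the shape (8.1),
# E[(X_T^*)^2] <= C (1 + E||X_0||^2 + E[(V_T^*)^{2q}]), for a scalar semilinear equation
# dX = f(X) dt + g(X) dW + dV with semimonotone f and Lipschitz g.
import numpy as np

rng = np.random.default_rng(4)
T, N, paths = 1.0, 1000, 20000
dt = T / N
t_grid = np.linspace(0.0, T, N + 1)

f = lambda x: -x**3 + x            # -f semimonotone (assumption)
g = lambda x: 0.5 * np.cos(x)      # Lipschitz; g(0) != 0 is allowed here

def sup_moment(x0_scale):
    """Estimate E[(X_T^*)^2] for initial data X_0 ~ x0_scale * N(0, 1)."""
    X = x0_scale * rng.normal(size=paths)
    running_sup = np.abs(X)
    for n in range(N):
        V_incr = 0.1 * dt * np.cos(10.0 * t_grid[n])   # a smooth, bounded V (assumption)
        X = X + dt * f(X) + g(X) * rng.normal(0.0, np.sqrt(dt), paths) + V_incr
        running_sup = np.maximum(running_sup, np.abs(X))
    return (running_sup**2).mean()

for scale in (0.0, 1.0, 2.0, 4.0):
    print(f"E||X_0||^2 = {scale**2:5.1f}   E[(X_T^*)^2] ~ {sup_moment(scale):8.3f}")
```

Under these assumed choices the estimated sup-norm moments stay below an affine function of $E\|X_0\|^2$, which is consistent with (8.1); the simulation of course proves nothing, it only illustrates the shape of the bound.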
8.4 The Continuity of the Solution with Respect to the Parameter

Theorem 8.2 If $f^i$, $g^i$, $V^i$, and $X_0^i$, $i=1,2$, satisfy the conditions of Theorem 7.1 and if $X^i$, $i=1,2$, are solutions of the integral equations
$$X_t^i = U(t,0)X_0^i + \int_0^t U(t,s)f_s^i(X_s^i)\,ds + \int_0^t U(t,s)g_s^i(X^i)\,dW_s + V_t^i, \qquad i=1,2, \qquad(8.7)$$
then there is a constant $C>0$ such that
$$E\big\{\big((X^2-X^1)_T^*\big)^{2p}\big\} \le C\Big[E\{\|X_0^2 - X_0^1\|^{2p}\} + E\Big\{\int_0^T \|g_s^2(X^1) - g_s^1(X^1)\|_2^{2p}\,ds\Big\} + E\Big\{\int_0^T \|f_s^2(X_s^1) - f_s^1(X_s^1)\|^{2p}\,ds\Big\} + K\big(E\{\|V^2 - V^1\|_\infty^{2p}\}\big)^{\frac12}\Big], \qquad(8.8)$$
where
$$K \le C\Big(1 + E\{\|X_0^2\|^{2pq}\} + E\{\|X_0^1\|^{2pq}\} + E\{\|V^2\|_\infty^{2pq^2}\} + E\{\|V^1\|_\infty^{2pq^2}\}\Big). \qquad(8.9)$$

Proof: By Lemma 3.1 we can assume $\lambda = 0$ without loss of generality. Define $Y_t^i = X_t^i - V_t^i$, $i=1,2$. Then
$$Y_t^i = U(t,0)X_0^i + \int_0^t U(t,s)f_s^i(X_s^i)\,ds + \int_0^t U(t,s)g_s^i(X^i)\,dW_s,$$
hence
$$Y_t^2 - Y_t^1 = U(t,0)(X_0^2 - X_0^1) + \int_0^t U(t,s)\big[f_s^2(X_s^2) - f_s^1(X_s^1)\big]\,ds + \int_0^t U(t,s)\big[g_s^2(X^2) - g_s^1(X^1)\big]\,dW_s. \qquad(8.10)$$
Define an $H$-valued local martingale $N_t$ by
$$N_t := \int_0^t \big[g_s^2(X^2) - g_s^1(X^1)\big]\,dW_s,$$
with quadratic variation
$$[N]_t = \int_0^t \|g_s^2(X^2) - g_s^1(X^1)\|_2^2\,ds.$$
Define $Y_t := Y_t^2 - Y_t^1$, $V_t := V_t^2 - V_t^1$, $X_0 := X_0^2 - X_0^1$, and $X_t := X_t^2 - X_t^1$. Using the Ito inequality of Chapter 3, (8.10) implies
$$\|Y_t\|^2 \le \|X_0\|^2 + 2\int_0^t \langle Y_s,\, f_s^2(X_s^2) - f_s^1(X_s^1)\rangle\,ds + 2\int_0^t \langle Y_s, dN_s\rangle + [N]_t =: \|X_0\|^2 + I_2(t) + I_1(t) + [N]_t, \qquad(8.11)$$
where $I_1(t) := 2\int_0^t \langle Y_s, dN_s\rangle$ and $I_2(t) := 2\int_0^t \langle Y_s, f_s^2(X_s^2)-f_s^1(X_s^1)\rangle\,ds$. By Lemma 5.1,
$$I_2(t) \le (1+4M)\int_0^t \|Y_s\|^2\,ds + I\,\|V\|_\infty + \int_0^t \|f_s^2(X_s^1) - f_s^1(X_s^1)\|^2\,ds, \qquad(8.12)$$
where
$$I = 2\Big\{\int_0^T \|f_s^2(X_s^2) - f_s^1(X_s^1)\|^2\,ds\Big\}^{\frac12} + 4MT\|V\|_\infty.$$
But since the $f^i$ are bounded by $\varphi(x) = C(1+x^q)$,
$$\|f_s^2(X_s^2) - f_s^1(X_s^1)\| \le C(1+\|X_s^2\|^q) + C(1+\|X_s^1\|^q),$$
and
$$I \le TC\big(2 + \|X^2\|_\infty^q + \|X^1\|_\infty^q\big) + 4MT\|V\|_\infty. \qquad(8.13)$$
Now taking the supremum over $t\in S$ in (8.11) and using the inequality $(a_1+a_2+a_3+a_4)^p \le 4^p(a_1^p+\dots+a_4^p)$, we get
$$E\{(Y_T^*)^{2p}\} \le 4^p\Big[E\{\|X_0\|^{2p}\} + E\{(I_1^*(T))^p\} + E\{(I_2^*(T))^p\} + E\{[N]_T^p\}\Big]. \qquad(8.14)$$
By Lemma 3.2,
$$E\{(I_1^*(T))^p\} \le \tfrac1K E\{(Y_T^*)^{2p}\} + C_pK\,E\{[N]_T^p\}. \qquad(8.15)$$
Choosing $K = 2\cdot4^p$ in (8.15) and using $E\{(Y_T^*)^{2p}\} < \infty$, (8.14) implies that there is $C>0$ such that
$$E\{(Y_T^*)^{2p}\} \le C\Big[E\{\|X_0\|^{2p}\} + E\{(I_2^*(T))^p\} + E\{[N]_T^p\}\Big]. \qquad(8.16)$$
But
$$[N]_T \le 2\int_0^T \|g_s^2(X^2) - g_s^2(X^1)\|_2^2\,ds + 2\int_0^T \|g_s^2(X^1) - g_s^1(X^1)\|_2^2\,ds,$$
and by Hypothesis 7.1(c),
$$\|g_s^2(X^2) - g_s^2(X^1)\|_2^2 \le L\big((X^2-X^1)_s^*\big)^2 \le 2L(Y_s^*)^2 + 2L\|V\|_\infty^2.$$
Then there is $C>0$ such that
$$E\{[N]_T^p\} \le C\Big[\int_0^T E\big\{(Y_s^*)^{2p}\big\}\,ds + E\{\|V\|_\infty^{2p}\} + E\Big\{\int_0^T \|g_s^2(X^1) - g_s^1(X^1)\|_2^{2p}\,ds\Big\}\Big]. \qquad(8.17)$$
Using the Schwartz inequality in (8.12), we see there is $C>0$ such that
$$E\{(I_2^*(T))^p\} \le C\Big[\int_0^T E\{(Y_s^*)^{2p}\}\,ds + \big(E\{I^{2p}\}\big)^{\frac12}\big(E\{\|V\|_\infty^{2p}\}\big)^{\frac12} + E\Big\{\int_0^T \|f_s^2(X_s^1) - f_s^1(X_s^1)\|^{2p}\,ds\Big\}\Big]. \qquad(8.18)$$
Combining (8.16), (8.17) and (8.18), there is $C>0$ such that
$$E\{(Y_T^*)^{2p}\} \le C\Big[E\{\|X_0\|^{2p}\} + \int_0^T E\{(Y_s^*)^{2p}\}\,ds + \big(E\{I^{2p}\}\big)^{\frac12}\big(E\{\|V\|_\infty^{2p}\}\big)^{\frac12} + E\Big\{\int_0^T \|f_s^2(X_s^1)-f_s^1(X_s^1)\|^{2p}\,ds\Big\} + E\{\|V\|_\infty^{2p}\} + E\Big\{\int_0^T \|g_s^2(X^1) - g_s^1(X^1)\|_2^{2p}\,ds\Big\}\Big].$$
By Gronwall's inequality,
$$E\{(Y_T^*)^{2p}\} \le C\Big[E\{\|X_0\|^{2p}\} + E\Big\{\int_0^T \|f_s^2(X_s^1)-f_s^1(X_s^1)\|^{2p}\,ds\Big\} + E\Big\{\int_0^T \|g_s^2(X^1)-g_s^1(X^1)\|_2^{2p}\,ds\Big\} + \big(E\{I^{2p}\}\big)^{\frac12}\big(E\{\|V\|_\infty^{2p}\}\big)^{\frac12} + E\{\|V\|_\infty^{2p}\}\Big]. \qquad(8.19)$$
Since $X_T^* \le Y_T^* + \|V\|_\infty$, (8.19) implies that there is $C>0$ such that
$$E\{(X_T^*)^{2p}\} \le C\Big[E\{\|X_0\|^{2p}\} + E\Big\{\int_0^T \|f_s^2(X_s^1)-f_s^1(X_s^1)\|^{2p}\,ds\Big\} + E\Big\{\int_0^T \|g_s^2(X^1)-g_s^1(X^1)\|_2^{2p}\,ds\Big\} + K\big(E\{\|V\|_\infty^{2p}\}\big)^{\frac12}\Big],$$
where $K = \big(E\{I^{2p}\}\big)^{\frac12} + \big(E\{\|V\|_\infty^{2p}\}\big)^{\frac12}$. To complete the proof of Theorem 8.2 we need to show that $K$ satisfies (8.9). Indeed, (8.13) implies that
$$E\{I^{2p}\} \le C\Big(1 + E\{\|X^2\|_\infty^{2pq}\} + E\{\|X^1\|_\infty^{2pq}\} + E\{\|V\|_\infty^{2p}\}\Big).$$
By (8.1) we have
$$E\{\|X^i\|_\infty^{2pq}\} \le C\Big[1 + E\{\|X_0^i\|^{2pq}\} + E\{\|V^i\|_\infty^{2pq^2}\}\Big],$$
so
$$E\{I^{2p}\} \le C\Big[1 + E\{\|X_0^2\|^{2pq}\} + E\{\|X_0^1\|^{2pq}\} + E\{\|V^2\|_\infty^{2pq^2}\} + E\{\|V^1\|_\infty^{2pq^2}\}\Big].$$
But
$$\big(E\{\|V\|_\infty^{2p}\}\big)^{\frac12} \le C\Big(1 + E\{\|V^2\|_\infty^{2pq^2}\} + E\{\|V^1\|_\infty^{2pq^2}\}\Big),$$
so $K$ satisfies (8.9). Q.E.D.

Remark 8.2 (i) If $V^2 = V^1$ then (8.8) implies
$$E\big\{\big((X^2-X^1)_T^*\big)^{2p}\big\} \le C\Big[E\{\|X_0^2 - X_0^1\|^{2p}\} + E\Big\{\int_0^T \|g_s^2(X^1)-g_s^1(X^1)\|_2^{2p}\,ds\Big\} + E\Big\{\int_0^T \|f_s^2(X_s^1)-f_s^1(X_s^1)\|^{2p}\,ds\Big\}\Big]. \qquad(8.20)$$
By localization this inequality holds without Hypothesis 7.1(g).
(ii) By the localization method we can generalize inequality (8.8), requiring only $E\{\|X_0^i\|^{2pq}\} < \infty$ and $E\{\|V^i\|_\infty^{2pq^2}\} < \infty$ in place of the moment conditions of Hypothesis 7.1.
Bibliography

Ahmed, N.U. (1985). Abstract stochastic evolution equations on Banach spaces. Stochastic Analysis and Applications, 3(4), 397-432.

Albeverio, S. & Rockner, M. (1989). Stochastic differential equations in infinite dimensions: solutions via Dirichlet forms. Preprint, 1-55.

Bensoussan, A. & Temam, R. (1972). Equations aux derivees partielles stochastiques non lineaires (1). Israel Journal of Mathematics, 11, 95-129.

Biswas, S.K. & Ahmed, N.U. (1985). Stabilization of systems governed by the wave equation in the presence of distributed white noise. IEEE Transactions on Automatic Control, AC-30(10).

Browder, F.E. (1964). Non-linear equations of evolution. Annals of Mathematics, 80, 485-523.

Cabana, E.M. (1970). The vibrating string forced by white noise. Z.W., 15, 111-130.

Cabana, E.M. (1972). On barrier problems for the vibrating string. Z.W., 22, 13-24.

Carmona, R. & Nualart, D. Random non-linear wave equations: smoothness of the solutions. Preprint.

Carroll, R.W. (1969). Abstract Methods in Partial Differential Equations. Harper's Series in Modern Mathematics.

Chatterji, S.D. (1976). Differentiation of measures. Lecture Notes in Mathematics 541, 173-179, Springer-Verlag, Berlin.

Crandall, S.H. (1979). Random vibration of one- and two-dimensional structures. In P.R. Krishnaiah (Ed.), Developments in Statistics, 2, 1-81. Academic Press, New York.

Crandall, S.H. & Zhu, W.Q. (1983). Random vibration: a survey of recent developments. Journal of Applied Mechanics, 50, 953-962.

Curtain, R.F. (1977). Stochastic evolution equations with general white noise disturbance. Journal of Mathematical Analysis and Applications, 60, 570-595.

Curtain, R.F. & Pritchard, A.J. (1978). Infinite Dimensional Linear System Theory. Lecture Notes in Control and Information Sciences, 8, Springer-Verlag, Berlin-Heidelberg-New York.

Da Prato, G., Iannelli, M. & Tubaro, L. (1982a). On the path regularity of a stochastic process in a Hilbert space, defined by the Ito integral. Stochastics, 6, 315-322.

Da Prato, G., Iannelli, M. & Tubaro, L. (1982b). Some results on linear stochastic differential equations in Hilbert spaces. Stochastics, 6, 105-116.

Da Prato, G. (1983). Some results on linear stochastic evolution equations in Hilbert spaces by the semi-group method. J. Stoch. Anal. and Appl., 1.

Da Prato, G., Kwapien, S. & Zabczyk, J. (1987). Regularity of solutions of linear stochastic equations in Hilbert spaces. Stochastics, 23, 1-23.

Da Prato, G. & Zabczyk, J. (1988). A note on semilinear stochastic equations. Differential and Integral Equations, 1(2), 1-13.

Dawson, D.A. (1972). Stochastic evolution equations. Mathematical Biosciences, 15, 287-316.

Dawson, D.A. (1975). Stochastic evolution equations and related measure processes. Journal of Multivariate Analysis, 5, 1-55.

Dozzi, M. (1989). Stochastic Processes with a Multidimensional Parameter. Pitman Research Notes in Mathematics Series, 194.

Emery, M. (1978). Stabilite des solutions des equations differentielles stochastiques. Application aux integrales multiplicatives stochastiques. Z.W., 41, 241-262.

Faris, W.G. & Jona-Lasinio, G. (1982). Large fluctuations for a non-linear heat equation with noise. J. Phys. A: Math. Gen., 15, 3025-3055.

Freidlin, M.I. & Wentzell, A.D. (1984). Random Perturbations of Dynamical Systems. Springer-Verlag.

Funaki, T. (1983). Random motion of strings and related stochastic evolution equations. Nagoya Mathematical Journal, 89, 129-193.

Gyongy, I. & Krylov, N. (1980). On stochastic equations with respect to semimartingales I. Stochastics, 4, 1-21.

Gyongy, I. & Krylov, N. (1982a). On stochastic equations with respect to semimartingales II. Ito formula in Banach spaces. Stochastics, 6, 153-173.

Gyongy, I. & Krylov, N. (1982b). On stochastic equations with respect to semimartingales III. Stochastics, 7, 231-254.

Gyongy, I. (1988). On the approximations of stochastic partial differential equations I. Stochastics, 25, 129-164.

Gyongy, I. (1989a). On the approximations of stochastic partial differential equations II. Stochastics, 26, 129-164.

Gyongy, I. (1989b). The stability of stochastic partial differential equations and applications I & II. To appear in Stochastics.

Holley, R. & Stroock, D. (1978). Generalized Ornstein-Uhlenbeck processes and infinite particle branching Brownian motions. Publ. RIMS Kyoto University, 17, 741-788.

Ichikawa, A. (1978). Linear stochastic evolution equations in Hilbert spaces. J. Diff. Eq., 28, 266-277.

Ichikawa, A. (1982). Stability of semilinear stochastic evolution equations. Journal of Mathematical Analysis and Applications, 90, 12-44.

Ichikawa, A. (1983). Absolute stability of a stochastic evolution equation. Stochastics, 11, 143-158.

Ichikawa, A. (1984). Semilinear stochastic evolution equations. Stochastics, 12, 1-39.

Iscoe, I., Marcus, M.B., McDonald, D., Talagrand, M. & Zinn, J. (1989). Continuity of l2-valued Ornstein-Uhlenbeck processes. Preprint.

Ito, K. (1978). Stochastic analysis in infinite dimensions. In Stochastic Analysis, Academic Press, 187-197.

Ito, K. (1982). Infinite dimensional Ornstein-Uhlenbeck processes. Taniguchi Symp. SA Katata, 197-224.

Iwata, K. (1987). An infinite dimensional stochastic differential equation with state space C(R). Prob. Th. Rel. Fields, 74, 141-159.

Jona-Lasinio, G. & Mitter, P.K. (1985). On the stochastic quantization of field theory. Communications in Mathematical Physics, 101, 409-436.

Kallianpur, G. & Wolpert, R. (1984). Infinite dimensional stochastic differential equation models for spatially distributed neurons. J. Appl. Math. Opt., 12, 125-172.

Kallianpur, G. & Perez-Abreu, V. (1987). Stochastic evolution equations driven by nuclear-space-valued martingales. Technical Report No. 204, Center for Stochastic Processes, Department of Statistics, University of North Carolina, Chapel Hill.

Kato, T. (1953). Integration of the equation of evolution in a Banach space. Journal of the Mathematical Society of Japan, 5(2), 208-234.

Kato, T. (1964). Nonlinear evolution equations in Banach spaces. Proc. Symp. Appl. Math., 17, 50-67.

Kolmogorov, A. (1937). Zur Umkehrbarkeit der statistischen Naturgesetze. Math. Ann., 113, 766-772.

Kotelenez, P. (1982). A submartingale type inequality with applications to stochastic evolution equations. Stochastics, 8, 139-151.

Kotelenez, P. (1984). A stopped Doob inequality for stochastic convolution integrals and stochastic evolution equations. Stochastic Analysis and Applications, 2, 245-265.

Krasnosel'skii, M.A. (1964). Topological Methods in the Theory of Nonlinear Integral Equations. Pergamon Press, Macmillan, New York.

Krylov, N.V. & Rozovskii, B.L. (1981). Stochastic evolution equations. J. Soviet Math., 16, 1233-1277.

Leon, J.A. (1989). Stochastic evolution equations with respect to semimartingales in Hilbert space. Stochastics and Stochastics Reports, 27, 1-21.

Lions, J.L. & Strauss, W.A. (1965). Some non-linear evolution equations. Bull. Soc. Math. France, 93, 43-96.

Marcus, R. (1974). Parabolic Ito equations. Trans. Amer. Math. Soc., 198, 177-190.

Marcus, R. (1978). Parabolic Ito equations with monotone non-linearities. Journal of Functional Analysis, 29, 275-286.

Marcus, R. (1979). Stochastic diffusion on an unbounded domain. Pacific Journal of Mathematics, 84(1).

Maslowski, B. (1989). Uniqueness and stability of invariant measures for stochastic differential equations in Hilbert spaces. Stochastics and Stochastics Reports, 28, 85-114.

Metivier, M. (1982). Semimartingales: A Course on Stochastic Processes. Walter de Gruyter, Berlin-New York.

Metivier, M. & Pellaumail, J. (1980a). Stochastic Integration. Academic Press, New York.

Metivier, M. & Pellaumail, J. (1980b). On a stopped Doob's inequality and general stochastic equations. The Annals of Probability, 8(1), 96-114.

Miyahara, Y. (1981). Infinite dimensional Langevin equation and Fokker-Planck equation. Nagoya Mathematical Journal, 81, 177-223.

Pardoux, E. (1975). Equations aux derivees partielles stochastiques non lineaires monotones. Etude de solutions fortes de type Ito. These doct. Sci. Math., Universite Paris-Sud, Centre d'Orsay.

Pardoux, E. (1979). Stochastic partial differential equations and filtering of diffusion processes. Stochastics, 3, 127-167.

Pazy, A. (1983). Semigroups of Linear Operators and Applications to Partial Differential Equations. Applied Mathematical Sciences, 44, Springer-Verlag, Berlin.

Reed, M. & Simon, B. (1972). Methods of Modern Mathematical Physics. I: Functional Analysis. Academic Press, New York-London.

Smolenski, W., Sztencel, R. & Zabczyk, J. (1986). Large deviations estimates for semilinear stochastic equations. Proceedings of the 5th IFIP Conference on Stochastic Differential Systems, Eisenach.

Tanabe, H. (1979). Equations of Evolution. Pitman, London.

Ustunel, S. (1982). Stochastic integration on nuclear spaces and its applications. Ann. Inst. H. Poincare, 28, 165-200.

Vainberg, M.M. (1973). Variational Method and Method of Monotone Operators in the Theory of Nonlinear Equations. John Wiley & Sons.

Vilenkin, N.Ya. (1972). Functional Analysis. Wolters-Noordhoff Publishing, Groningen, The Netherlands.

Walsh, J.B. (1981). A stochastic model of neural response. Adv. Appl. Prob., 13, 231-281.

Walsh, J.B. (1984). Regularity properties of a stochastic partial differential equation. Seminar on Stochastic Processes, 1983, Birkhauser, Boston.

Walsh, J.B. (1986). An introduction to stochastic partial differential equations. Lecture Notes in Mathematics, 1180, 266-439.

Yor, M. (1974). Existence et unicite de diffusions a valeurs dans un espace de Hilbert. Ann. Inst. H. Poincare, 10, 55-88.
