Stochastic phase dynamics of noise driven synchronization of uncoupled conditional coherent oscillators

by

William Frederick Thompson

B.Math., The University of Waterloo, 2008

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF SCIENCE in The Faculty of Graduate Studies (Mathematics)

THE UNIVERSITY OF BRITISH COLUMBIA (Vancouver)

August 2010

© William Frederick Thompson 2010

Abstract

We consider a pair of uncoupled conditional oscillators near a subcritical Hopf bifurcation that are driven by two weak white noise sources, one intrinsic and one common. The effect of the competition between the common and intrinsic noise forcing on the synchronization behaviour of the phases of these two oscillators is studied. Using a stochastic multiple scales method, we derive the envelope equations of the oscillators and then use the theory of linearized stochastic differential equations, as well as an asymptotic analysis, to study the probability density of the phase difference of the oscillators. It is found that common noise increases the degree of synchrony in the pair of oscillators, and that this increase can be characterized by the ratio of intrinsic to common noise. Furthermore, the nonlinear dynamics of the oscillators can affect the character of this synchronization in terms of the average phase difference. The results are also related to the study of spike time reliability, and possible implications are briefly discussed.

Table of Contents

Abstract
Table of Contents
List of Tables
List of Figures
List of Programs
Acknowledgements
Dedication
Statement of Co-Authorship
1 Introduction
  1.1 Background
    1.1.1 Stochastic modelling
    1.1.2 Coherence resonance
    1.1.3 Neurophysiological models
    1.1.4 Spike time reliability
2 Analysis
  2.1 The model
  2.2 Multiple scales analysis
    2.2.1 Deterministic analysis, parameter scalings
    2.2.2 Stochastic multiple scales analysis
  2.3 Determining the linearized probability density function
    2.3.1 Moments for amplitude derived from distribution
  2.4 Asymptotic analysis of amplitude and phase equations, Fokker-Planck equation
    2.4.1 Deriving the moments including nonlinear effects
    2.4.2 Reduction of Fokker-Planck equation for phase difference
  2.5 Stationary solutions to the reduced Fokker-Planck equation
    2.5.1 K = 1 and the δ-function solution
3 Numerical Work
  3.1 Numerical method
  3.2 Numerical simulations, comparison to analysis
    3.2.1 Moments of amplitude
    3.2.2 Phase difference
4 Discussion and Conclusions
  4.1 Future work
Bibliography
Appendices
A Code

List of Tables

3.1 Standard parameters used in all numerical simulations, unless stated otherwise.

List of Figures

2.1 Bifurcation diagram of r = √(x² + y²) vs. λ2 in the case of the deterministic system (δ1 = δ2 = δC = 0) for a single oscillator given by (2.1). Other parameters: ε = 0.1, α = 0.2, γ = −0.2. Solid (dashed) lines indicate stability (instability). Black (grey) lines indicate a fixed point (periodic orbit). The data for this diagram were generated using XPPAUT.

3.1 Normalized histogram data for the amplitudes of the coupled oscillators compared to the marginal amplitude distributions for R1 and R2 given by (2.91) (solid) and (2.101) (dashed). Standard parameters are used. Left (right): δ1 = 0.008, δ2 = 0.003, δC = 0.008 (δ1 = δ2 = 0, δC = 0.01).

3.2 Histograms of simulated phase data of the original system, (2.1), plotted against the asymptotic solution for p̄(Φ). Parameters used in this simulation are the standard ones (ω0 = 0.9, ω1 = 1.2, λ2 = −3, ε = 0.1, δ1 = δ2 = 0.004). Notice that by our choice of noise strengths we have ∆1 = ∆2, and that we increase the common noise strength from left to right, top to bottom (δC = 0, 0.001, 0.004, 0.01). Grey: histogram of simulated data. Black, solid (dashed): asymptotic solution (linear distribution). All noise parameters are chosen to ensure that the trajectory remains near the stable steady state, and moves onto the stable limit-cycle that exists in this parameter regime with very limited probability.

3.3 A sample time series for the system given by (2.1) when δ1,2 = 0, δC = 0.005 (i.e., K = 1), with all initial conditions sampled from the normal distribution with mean 0 and standard deviation 0.05. Despite the different initial conditions, the trajectories of the system begin to overlap as t gets larger, eventually becoming indistinguishable, as seen from the difference between the images on the left and right. Standard parameters are used.

3.4 Histograms of simulated phase data of the original system, (2.1), plotted against the asymptotic solution for p̄(Φ) (solid) and the distribution derived from the linearization (dashed). Parameters used in this simulation are (ω0 = 0.9, ε = 0.1, λ2 = −3). The intrinsic noise strengths are δ1 = 0, δ2 = 0.009. Left (right): δC = 0.003 (0.009). Top (bottom): ω1 = 5 (20). The data in the lower right have parameters that are not in the asymptotic regime. The green line is the stationary solution to (2.104), computed using a finite-element method.

3.5 Histograms of simulated phase data of the original system, (2.1), plotted against the asymptotic solution for p̄(Φ) (solid) and the distribution derived from the linearization. Parameters used in this simulation are the standard ones (ω0 = 0.9, ω1 = 1.2, ε = 0.1). The noise strengths are δ1 = δ2 = δC = 0.001. Left (right): λ2 = −3 (−10). This puts the system inside (outside) the knee in the left (right) figure.

3.6 Histograms of simulated phase data of the original system, (2.1), plotted against the asymptotic solution for p̄(Φ) (solid) and the distribution derived from the linearization. Parameters used in this simulation are the standard ones (except for λ2 = −6). Top/middle/bottom: δ1 = δ2 = δC = 0.01/0.05/0.1.

List of Programs

A.1 C program that simulates a single or multiple time series of the original system of equations (2.1).

Acknowledgements

I would like to thank Rachel Kuske for introducing me to stochastic oscillator dynamics. I am grateful to Rachel Kuske and Yue-Xian Li for their guidance, wisdom, patience and support during the course of my degree. I would also like to thank the staff and faculty at UBC and the IAM for all they have done to make the past two years a wonderful experience.

Dedication

To everyone who supported me in my degree, thank you very much.

Statement of Co-Authorship

Rachel Kuske identified and designed the research program. William Thompson performed the research with guidance and direction from Rachel Kuske and Yue-Xian Li. Writing of the manuscript was performed by William Thompson, with contributions to Chapter 2 from Rachel Kuske. Figures and tables were generated by William Thompson. Editing of the manuscript was performed by Rachel Kuske, Yue-Xian Li and William Thompson.

Chapter 1

Introduction

Synchronization and coherence of multiple oscillators is of general interest to many scientific and engineering disciplines, notably in biology and neurophysiology, where modeling of various chemical and cellular processes by dynamical systems is common [27, 37]. The electrophysiological behaviours of individual cells are often studied using mathematical models like the Hodgkin-Huxley (HH) [20] and Morris-Lecar (ML) models [31]. These models can be classified as oscillators because they display repetitive behaviour under certain conditions.
Modelling of individual cells is useful in studying processes that occur on a larger scale, like travelling wave patterns in brain tissue and the coherent beating rhythm of the heart, which depend on "collective oscillations". Phenomena like these often depend on the synchronization of cellular dynamics. Applications such as these, from a broad variety of disciplines, continue to motivate the study of oscillator synchronization.

Synchrony has been studied in weakly coupled oscillators for many years. It has been found that identical oscillators synchronize when interaction occurs through diffusive coupling [34]. When the dynamics of the oscillators are similar but not identical, oscillator entrainment with a constant phase shift occurs. This study is also an early example of the method of phase reduction, where the dynamics of a limit-cycle oscillator are described via a phase variable. In 1984, Kuramoto laid much of the groundwork for the study of deterministic oscillations, including the method of phase reduction [27]; it is now a standard reference for those studying oscillations. Within the field of neurophysiology, there is an array of possible interneuron coupling methods (excitatory vs. inhibitory coupling, delayed coupling) mediated through synaptic coupling. There is also a broad variety of dynamical behaviours that exist in various neuronal models (spiking, bursting, mixed-mode oscillations). As such, many works on the study of weakly coupled oscillators have been written in a neurophysiological context [1, 6–9] using these models. A few of these models are discussed in section 1.1.3.

Given that many oscillatory systems are exposed to noise, studies have also been undertaken to investigate the effects of stochastic forcing on oscillators and coupled oscillatory systems [15, 29]. Stochastic effects in dynamical systems have long been known to lead to behaviours that are unpredictable by deterministic theory. Several of them, surprisingly, lead to increased orderly behaviour in certain dynamical systems. One of the most famous examples is stochastic resonance, whereby stochastic forcing in conjunction with weak periodic driving in a bistable potential results in well transitions that are in phase with the periodic driving. The noise is an essential component of this phenomenon, as coherent transitions do not occur if either the noise or the periodic driving is eliminated from the dynamics. There are other similar orderly behaviours induced by noisy forcing, such as coherence resonance and self-induced stochastic resonance [14, 28, 38]. Many of these effects are counter-intuitive because any addition of noise to a dynamical system is generally seen as having a detrimental effect on the coherence and order of the system.

Recently it has been revealed that a common noise source synchronizes oscillators that are uncoupled. That is, when two oscillators that are not interacting with each other experience the same noisy forcing simultaneously, their dynamics begin to synchronize. This is another example of order induced by noisy forcing, and this result has attracted much attention since its discovery. Noise-induced synchronization is a non-intuitive result, but it is not totally unexpected, as it is widely known that forcing two identical oscillating systems with the same smooth forcing function leads to entrainment of their dynamics. The validity of noise-induced synchronization has been confirmed by experiments on olfactory bulb neurons [13].
There has also been a great deal of interest in noise-induced oscillator synchronization as it pertains to the phenomenon of Spike Time Reliability (STR) of neuronal cells. STR refers to the phenomenon whereby repeated applications of a rapidly fluctuating stimulus to a neuron produce an essentially identical response on each application, whereas repeated application of a constant current fails to do so. Experiments involving prepared slices of neuronal tissue have confirmed this phenomenon several times [5, 30], and there is some theoretical work supporting STR [4]. There are clear implications of STR for synchronization of uncoupled oscillators. The main result of STR implies that if many similar neuronal oscillators are subjected to the same noisy forcing simultaneously, their behaviour synchronizes, despite the fact that the neurons are not interacting with each other. STR is discussed in more detail in section 1.1.4.

The study of oscillator synchronization has yielded two major methods that are often used to characterize synchronous behaviour. The first is the probability density function for the phase difference. To compute the phase distribution, Fokker-Planck theory is usually incorporated into the analysis [16]. Goldobin and Pikovsky use Fokker-Planck analysis to derive the probability density for the phase difference of multiple phase oscillators [18]. They study a pair of phase oscillators in two different situations. In one case, the oscillators under study have different angular velocities and experience only common noise forcing; in the other, the oscillators are identical and experience both common and intrinsic noise forcing. By observing the power-law tails of the distribution and numerical simulations, they find that in both cases the differences between the oscillating systems cause intermittent phase slips, which are brief periods of desynchronization of the phase oscillators. More recently, Nakao et al. studied synchrony of multiple oscillators by calculating the probability density function of the phase difference for a general class of limit-cycle oscillators, with great success [33]. Moreover, their analytical results are quantitatively very accurate when compared to the numerical results. The work also predicts the emergence of multiple phase clusters of oscillators under certain multiplicative noise conditions, a very non-intuitive result. This clustering is predicted by observing the probability density function for the phase difference from the mean. The existence of common noise forcing in addition to intrinsic noise produces a peak at phase difference zero (or multiple peaks) in the probability density function. This indicates a comparatively greater probability of observing the phases of the oscillators near the same value, and thus synchronous behaviour exists, although it is not perfect. They predict the phase slipping observed in the simulations in the presence of weak intrinsic noise by observing the power-law tails of the distribution. More recent studies of oscillator synchronization have built on these important analyses for both uncoupled and coupled oscillators. Ly and Ermentrout applied a combination of Fokker-Planck, perturbation, and Fourier analysis to the study of noise-induced synchronization of coupled limit-cycle neuronal oscillators [29].
The perturbation expansion method utilized in this work agrees quite well with numerical data obtained using both Monte Carlo simulation and a numerical Fokker-Planck solution. Other numerical studies involving the computation of the probability density function from the Fokker-Planck equation for uncoupled oscillators have supported the conclusion of noise-induced synchronization of uncoupled limit-cycle oscillators [11].

The use of Fokker-Planck analysis allows the authors to derive the probability density function for the phase distribution and determine qualities of the synchronization analytically; however, the calculation of the Lyapunov exponent is also a useful way of characterizing the dynamics. Recent theoretical studies have revealed that the emergence of a negative leading Lyapunov exponent is a good criterion for noise-driven synchronization. In several studies where the phase probability density is derived, the calculation is supplemented by the calculation of this Lyapunov exponent [18, 33]. Teramae and Tanaka [43] found that a negative Lyapunov exponent is a strong criterion for noise-driven synchronization of a general class of oscillators. They define this Lyapunov exponent for the synchronized state using the phase response curve [27]. Moreover, they provide a general criterion under which two oscillators can display stable synchronization under this characterization, which is that the phase response curve has a second derivative. Oscillators that do not have this characteristic, such as the leaky integrate-and-fire model, appear to synchronize in the presence of solely common noise but, in fact, spend time fluctuating about the synchronized state in between intermittent phase slips with a temporal distribution given by a power law [32]. This is unlike oscillators with a continuous phase response curve, where the phase difference becomes exponentially small in the presence of strictly common noise. The negative Lyapunov exponent criterion for synchrony is supported by Goldobin and Pikovsky [18], who perform a similar calculation of the Lyapunov exponent on the Van der Pol-Duffing oscillator in conjunction with the aforementioned Fokker-Planck analysis of the stochastic dynamics. The authors give a more general definition for the Lyapunov exponent of the phase difference equation in [19]. They confirm that perfect synchronization is possible for identical phase oscillators forced with the same noise. Furthermore, they discuss the effect of slightly different dynamics in multiple oscillators and find that when the number of oscillators is greater than 2, perfect synchronization is not possible. They also find that desynchronization of oscillators occurs when the common noise strength is sufficiently large and the oscillator has a sufficient degree of nonisochronicity. All these conclusions are drawn from analysis of the Lyapunov exponent, which is often calculated numerically, especially in the case where the intrinsic noise has non-zero strength and analytical calculation is often not possible. In the work of Nakao et al., the Lyapunov exponent confirms the existence of perfect synchrony when intrinsic noise is not present [33]. By perfect synchrony, we mean there are no phase slips. The current theory pertains to limit-cycle oscillators, for which phase reduction theory is applicable.
Specifically, phase reduction applies in the limit where perturbations from the limit-cycle do not have a significant effect on the phase dynamics because the system is strongly attracted to the limit-cycle. This is often the case when the oscillators are forced with weak noise. Other works have also studied the validity of the method of phase reduction even in the presence of weak noise forcing. Yoshimura and Arai study this question in detail and propose an alternative method of phase reduction [44]. Further, they discover a mean shift in oscillator frequency as a result of the noisy forcing. Teramae et al. studied the validity of phase reduction of stochastic oscillators and determined that a new phase equation is required in order to account for time-correlated noise and the rate of attraction to the limit-cycle [42]. Their result is consistent both with [44] and with previous results. This consistency is achieved by considering two different limits of the ratio of the noise-correlation time to the decay rate onto the limit-cycle.

To date, there is no existing theory that deals with the case of conditional oscillators subject to common noise forcing, since such oscillators do not exhibit limit-cycle behaviour in the absence of noise. Conditional oscillators are characterized by sustained oscillations generated by forcing in a system that would otherwise be quiescent, i.e., weakly attracted to a locally stable fixed point. These sustained oscillations can be generated close to a Hopf bifurcation point via the mechanism of coherence resonance (also known as autonomous stochastic resonance) [45]. Coherence resonance (CR) refers to the phenomenon whereby the addition of noisy forcing to a nonlinear oscillator generates a coherent response at an optimal noise level. When a dynamical system is subthreshold to a Hopf bifurcation, the noise can excite small oscillations close to the frequency of the oscillator at the bifurcation point, and CR is observed. CR is explained in more detail in section 1.1.2.

This thesis aims to provide theory for the stochastic phase dynamics of conditional oscillators experiencing intrinsic and common noise forcing. Since we are studying the case of conditional oscillators, the results from the study of the synchronization dynamics of limit-cycle oscillators do not apply, and there are no results related to conditional oscillators, to our knowledge. Further to the goal of developing the aforementioned theory is the study of the effects of common noise on multiple conditional oscillators. We wish to see if the results of the study of noise-induced synchronization hold in the case of coherent conditional oscillators. In particular, we attempt to answer the following questions:

• Can noise-induced synchronization of conditional oscillators be realized?
• If noise-induced synchronization is possible, can it be characterized?
• Do different intrinsic noise strengths between oscillators have an effect on the dynamics?

We provide background on topics related to this thesis for the remainder of this chapter. To answer the questions given above, we do the following. First, we give a canonical model for a conditional oscillator in chapter 2. We perform a stochastic multiple scales analysis of the model to derive stochastic amplitude and phase equations for the main mode of oscillation, which provide the basis for our analysis of the dynamics.
We then analyze the stochastic differential equations using both linear theory and a perturbation analysis of the Fokker-Planck equation for the phase dynamics. In chapter 3, we describe the numerical method used to simulate the system of equations and give numerical results for the oscillator dynamics. We also compare the numerical results to the analytical results derived in chapter 2. We conclude that common noise forcing increases the degree of synchrony in uncoupled conditional oscillators, by observing peaks in the probability density function for the phase difference that indicate an increased probability of observing the oscillators in phase with each other. Moreover, we find that the degree of synchronization can be characterized by the ratio of intrinsic to common noise. We also discover that different intrinsic noises and amplitude-dependent phase variation lead to a peak in the probability density that is not centred at phase difference zero, indicating that one oscillator, on average, leads the other in terms of phase behaviour. The conclusions are laid out in more detail in chapter 4. Possible directions for further research are also given.

1.1 Background

1.1.1 Stochastic modelling

Most real-world processes are subject to some degree of stochasticity. Because of this, the study of stochastic processes is essential to learning about many real-world processes. Moreover, stochastic modeling often leads to dynamics that deterministic modeling cannot predict. However, one cannot apply standard calculus and differential equations theory; stochastic calculus is required. There are many texts written about stochastic calculus and modeling [16, 25, 36]. The application of stochastic calculus and stochastic differential equations (SDEs) has had a broad impact on many diverse areas, for example financial mathematics, where the stochastic Ito integral is an essential tool. The sciences have also benefitted from stochastic modelling. Chemical reactions are now routinely modeled with stochastic considerations meant to capture the fact that a chemical reaction is fundamentally about individual interactions of molecules, rather than a continuum of reaction [17]. Much like ordinary differential equations, most SDEs do not have exact, closed-form solutions. Thus, numerical solutions are important. Given the probabilistic nature of SDEs, the corresponding numerical methods have important considerations, such as the type (mode) of convergence in addition to the order of convergence relative to time step size that is normally considered in numerical methods for simulating ODEs [25]. Since we are studying noisy oscillators, stochastic modeling techniques are essential, and many of the concepts and techniques we refer to can be found in standard works on stochastic differential equations.

1.1.2 Coherence resonance

Coherence resonance (CR) refers to the appearance of coherent oscillations in the behaviour of an excitable dynamical system in the presence of a noisy forcing term in a quiescent parameter range. In particular, it can refer to the emergence of coherent behaviour at an optimal noise strength of intermediate value. It is an excellent example of the emergence of orderly behaviour in a dynamical system due to the presence of noise. In the 1980s and 1990s, the phenomenon of stochastic resonance was a popular research topic. In classical stochastic resonance, one studies a noisy particle in a bistable potential with periodic driving.
It is observed that oscillations between the wells in the bistable potential can be driven by a combination of noisy forcing and periodic forcing [3]. When there is no noise present, the driving is not strong enough to drive transitions between the wells of the potential. In the presence of weak noise, transitions are rare. When the noise is too strong, it overwhelms the underlying dynamics and the particle transitions between the wells with no coherent behaviour. Stochastic resonance occurs when transitions between the wells occur in phase with the periodic driving at an optimal level of noise. This phenomenon has found applications in climate science, where the transition between ice ages and warmer periods is thought to be driven by this mechanism [35]. Stochastic resonance has also found applications in other sciences [40].

It was found later that noisy resonance phenomena occur in systems that do not rely on periodic driving to induce transitions, but rather depend on the intrinsic dynamics of the system to which the noise is applied. In 1997, Pikovsky and Kurths undertook a study of the Fitzhugh-Nagumo (FHN) equations in the presence of a noisy forcing term [38]. They found that very coherent oscillations could be driven at an optimal noise strength and gave a measure for that coherence in terms of an autocorrelation function, C(τ):

    C(\tau) = \frac{\langle \bar{y}(t)\,\bar{y}(t+\tau)\rangle}{\langle \bar{y}^2(t)\rangle}, \qquad \bar{y} = y - \langle y\rangle,   (1.1)

where ⟨y⟩ denotes the time average of y. Using this coherence measure, they found that oscillations of the FHN model were much more pronounced for a moderate level of noise, and they coined the term "coherence resonance" (CR) to describe this behaviour. The FHN equations are, in some sense, the canonical model for an excitable system and have been instrumental in the study of excitable systems. It is the excitable nature of the FHN system that allows for the coherent oscillations. In [38], the system is taken to be close to the bifurcation point that separates the qualitative behaviours. When noise is used as a perturbative forcing, it repeatedly excites the system and drives coherent oscillations. As in stochastic resonance, the noise cannot be too strong or too weak, or the oscillations are not coherent.

The work of Pikovsky and Kurths is widely regarded as the seminal work on CR, although Gang et al. had described a similar mechanism in 1993 [14]. They noticed that, even in the absence of periodic driving, oscillations could be made most coherent at a moderate level of noise for a system with a saddle-node bifurcation separating limit-cycle oscillations from quiescent behaviour in the absence of noise. At the critical value of the bifurcation parameter, the authors observe that a characteristic frequency close to that corresponding to the period of the limit-cycle behaviour is heightened when noisy forcing is introduced into the dynamics, indicating that the limit-cycle behaviour of the system is excited by the noisy forcing. They introduce a quality factor for the coherence of oscillations, β:

    \beta = h\,(\Delta\omega/\omega_p)^{-1},   (1.2)

where h is the height of the power spectrum at the characteristic frequency ω_p, and ∆ω is the width of the peak. This is similar to the signal-to-noise ratio in traditional stochastic resonance. There are other types of noise-induced resonance behaviours that are similar to, yet different from, CR [28].

CR is observed in this thesis as we drive small oscillations about a fixed point near a Hopf bifurcation using weak noise forcing. The forcing excites the main mode of oscillation, leading to an increase in the Fourier power spectrum at a frequency close to the frequency at the onset of the Hopf bifurcation. The more appropriate measure of coherence is given by β in this case. This problem has been studied by Yu et al. in [45].
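As a concrete illustration of how the coherence measure (1.1) could be estimated from a uniformly sampled time series, the following C sketch computes the normalized autocorrelation at a given lag. It is an illustrative construction for this discussion only (the function and variable names are ours), not code from the thesis appendix, and it omits windowing and bias corrections.

/* Estimate the coherence measure C(tau) of (1.1) from a sampled series y[0..n-1].
 * "lag" is tau expressed in samples.  Minimal sketch: no windowing, no bias correction. */
double coherence_measure(const double *y, int n, int lag)
{
    double mean = 0.0, var = 0.0, corr = 0.0;
    int i;

    for (i = 0; i < n; i++) mean += y[i];
    mean /= n;                                    /* time average <y> */

    for (i = 0; i < n; i++) {
        double yb = y[i] - mean;                  /* fluctuation ybar(t) = y - <y> */
        var += yb * yb;                           /* sum of ybar^2, normalized below */
    }

    for (i = 0; i + lag < n; i++)                 /* sum of ybar(t) ybar(t+tau) */
        corr += (y[i] - mean) * (y[i + lag] - mean);

    /* divide each sum by its number of terms before taking the ratio in (1.1) */
    return (corr / (n - lag)) / (var / n);
}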
1.1.3 Neurophysiological models

Neurophysiological modelling, and indeed much of mathematical biology, can be traced back to the work of Hodgkin and Huxley who, in 1952, published work on modelling the behaviour of the membrane potential in a squid giant axon [20]. This work spawned much of the work that followed in mathematical biology. The model is given by a set of four nonlinear ordinary differential equations, based on Kirchhoff's current laws and experimental measurements to determine the refractory behaviour of various ion gates that exist in the axon. The model may have a large degree of quantitative variability, but the qualitative behaviours persist for a range of parameters. One of the most important qualities that the Hodgkin-Huxley (HH) model captures is the excitability of neuronal cells. A system of nonlinear equations is "excitable" if a relatively small stimulus can induce a large excursion of the system's trajectory in phase space. A stimulus of "insufficient size" perturbs the system, but the trajectory returns to a neighbourhood of a stable fixed point without the excursion behaviour.

The Morris-Lecar (ML) model is a simplification of the wildly successful HH model [31]. It was not derived directly from the HH model, but was derived through a similar methodology. Initially it was derived from experimental observations of a barnacle muscle fiber, but it captures many of the same behaviours that the HH model predicts for excitable cells. The advantage of the ML model is the fact that it is a two-variable system, which allows for the application of phase plane analysis. Similar analyses are much more difficult with the four-dimensional HH model. The Morris-Lecar model is as follows:

    C\,\frac{dV}{dt} = -g_{Ca}\,m_\infty(V)(V - V_{Ca}) - g_K\,W\,(V - V_K) - g_L\,(V - V_L) + I_{app},   (1.3)
    \frac{dW}{dt} = \phi(V)\,\big(w_\infty(V) - W\big),   (1.4)

where

    m_\infty(V) = \tfrac{1}{2}\left[1 + \tanh\!\left(\frac{V - V_1}{V_2}\right)\right],   (1.5)
    w_\infty(V) = \tfrac{1}{2}\left[1 + \tanh\!\left(\frac{V - V_3}{V_4}\right)\right],   (1.6)
    \phi(V) = \Lambda\,\cosh\!\left(\frac{V - V_3}{2 V_4}\right).   (1.7)

V is the membrane potential and W is the percentage of open potassium gates. The constants g_i and V_i represent various ion conductances and potential values, respectively. I_app is the applied current; it is a bifurcation parameter for this system. Below the critical value of this parameter the Morris-Lecar model is quiescent, and beyond it the system experiences sustained oscillations. Depending on the values of the parameters of the system, the bifurcation that separates quiescence and oscillations is different. For a Type-I ML oscillator, this bifurcation is a saddle-node bifurcation, and the behaviour of the oscillations is characterized by an arbitrarily low firing frequency. For a Type-II ML oscillator, the bifurcation is a subcritical Hopf bifurcation. In this case, the onset of firing occurs at a finite frequency. Both Type-I and Type-II ML oscillators have applications for the modeling of real-world neurons. Pyramidal cells are thought to possess Type I excitability properties [41], while mitral cells are Type II [10]. For typical parameter values for both Type I and Type II behaviour, refer to [23].
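To show how (1.3)-(1.7) translate directly into a simulation, the sketch below evaluates the ML right-hand side and takes a forward Euler step. It is a minimal illustration under our own naming; the parameter structure is generic and does not reproduce the Type I or Type II parameter sets of [23].

/* Right-hand side of the Morris-Lecar model (1.3)-(1.7) and one forward Euler step.
 * Fill the struct with parameter values from [23] for Type I or Type II behaviour. */
#include <math.h>

typedef struct {
    double C, gCa, gK, gL, VCa, VK, VL, V1, V2, V3, V4, Lambda, Iapp;
} ml_params;

static void ml_rhs(const ml_params *p, double V, double W, double *dV, double *dW)
{
    double minf = 0.5 * (1.0 + tanh((V - p->V1) / p->V2));        /* (1.5) */
    double winf = 0.5 * (1.0 + tanh((V - p->V3) / p->V4));        /* (1.6) */
    double phi  = p->Lambda * cosh((V - p->V3) / (2.0 * p->V4));  /* (1.7) */

    *dV = (-p->gCa * minf * (V - p->VCa) - p->gK * W * (V - p->VK)
           - p->gL * (V - p->VL) + p->Iapp) / p->C;               /* (1.3) */
    *dW = phi * (winf - W);                                       /* (1.4) */
}

/* one explicit Euler step of size dt applied in place to (V, W) */
static void ml_step(const ml_params *p, double *V, double *W, double dt)
{
    double dV, dW;
    ml_rhs(p, *V, *W, &dV, &dW);
    *V += dt * dV;
    *W += dt * dW;
}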
Both Type-I and Type-II ML oscillators are excitable, but with different dynamical consequences. Close to the bifurcation point in the quiescent regime, a sufficiently strong perturbation to a Type I oscillator results in a single spike before the trajectory returns to a neighbourhood of the equilibrium. In the case of a Type II oscillator, a sufficiently strong perturbation pushes the trajectory of the system onto the stable limit-cycle that coexists with the locally stable fixed point, resulting in oscillations. Return to the fixed point is accomplished by lowering the bifurcation parameter so that the limit-cycle does not exist and the only equilibrium is the stable fixed point. The bifurcation parameter may be treated as a dynamical variable on a slower time scale than that of the action potentials, making the system three-dimensional. This slow variable oscillates about the bifurcation point, resulting in trains of action potentials separated by quiescent periods. This is the dynamical phenomenon known as bursting.

Noise is present at all levels of the nervous system, and recent modeling of neurons using HH or ML models often includes white noise to model the stochastic effects. Some noise effects are intrinsic and some are common. Intrinsic noise usually models intracellular processes such as ion exchange, while the common noise shared by multiple neurons represents input from external sources like other neurons.

1.1.4 Spike time reliability

When a neuron is subjected to an applied current of sufficient strength in a current-clamp experiment, the membrane potential spikes until the applied current is turned off. Spike Time Reliability (STR) refers to the phenomenon whereby repeated applications of a rapidly fluctuating stimulus to a neuron produce a precisely timed train of spikes on each application, whereas repeated application of a constant current fails to do so. A neuron which displays the characteristic behaviour of STR is said to be reliable.

STR has been observed experimentally in a number of works. In 1976, Bryant and Segundo observed that repeated application of a Gaussian white noise stimulus to the same nerve cell (Aplysia californica abdominal ganglion) produced an identical, time-invariant response [5]. Mainen and Sejnowski made a similar observation in cortical neurons, but also made a comparison to stimulating the neuron with repeated application of a constant current [30]. They observed that, when driven with a constant current, the variance in the times of the spikes increased from the time of stimulus application. The variance in the interspike intervals would add up over time; the first spike time had low variability but the last spike time was highly variable. This desynchronization of the timing of the spikes may or may not be due to stochastic effects intrinsic to the neuronal cell. Brette and Guigon showed that aperiodic forcing for a broad class of neuronal oscillators allows for reliable responses, and that even periodic forcing does not result in reliable spiking [4]. A newer study by Galan et al. blends the theoretical and experimental approaches to study STR [12]. It is found that there exists an optimal time scale for the reliability of neurons, where reliability is defined as the correlation of time-displaced voltage traces. The authors also describe the connection between stochastic synchrony of oscillators and STR: if many similar neuronal oscillators are subjected to the same noisy forcing simultaneously, their behaviour synchronizes, despite the fact that the neurons are not interacting with each other.
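The reliability measure of [12] is described above only in words (a correlation of time-displaced voltage traces). Purely as an illustration of that idea, the sketch below computes the average pairwise correlation between repeated voltage traces; it is our own rough stand-in, not the precise estimator used in [12], and it ignores the time-displacement (lag) dependence.

/* Average pairwise Pearson correlation between K repeated voltage traces of
 * length n, stored as v[k][i].  A rough reliability-style score: values near 1
 * indicate nearly identical responses across trials. */
#include <math.h>

double trial_correlation(double **v, int K, int n)
{
    double total = 0.0;
    int pairs = 0, a, b, i;

    for (a = 0; a < K; a++) {
        for (b = a + 1; b < K; b++) {
            double ma = 0.0, mb = 0.0, sa = 0.0, sb = 0.0, sab = 0.0;
            for (i = 0; i < n; i++) { ma += v[a][i]; mb += v[b][i]; }
            ma /= n; mb /= n;
            for (i = 0; i < n; i++) {
                double da = v[a][i] - ma, db = v[b][i] - mb;
                sa += da * da; sb += db * db; sab += da * db;
            }
            total += sab / sqrt(sa * sb);   /* correlation of this pair of trials */
            pairs++;
        }
    }
    return pairs > 0 ? total / pairs : 0.0;
}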
Chapter 2

Analysis

2.1 The model

We consider the canonical λ−ω oscillator with parameters such that the system is subthreshold but near a subcritical Hopf bifurcation. This model corresponds to the normal form for a Type-II Morris-Lecar oscillator with parameters near the critical value of the applied stimulus needed to induce steady spiking behaviour [39]. The oscillators are subject to both an intrinsic and an extrinsic (common) noise forcing. The model is given by the following stochastic differential equations:

    dx_i = [\lambda(r_i)\,x_i - \omega(r_i)\,y_i]\,dt + \delta_i\,d\eta_i(t) + \delta_C\,d\eta_C(t),
    dy_i = [\omega(r_i)\,x_i + \lambda(r_i)\,y_i]\,dt, \qquad i = 1, 2, \quad r_i^2 = x_i^2 + y_i^2.   (2.1)

The functions η_i, η_C are Wiener processes (standard Brownian motions) that satisfy ⟨η_j(t)⟩ = 0 and ⟨η_j(t) η_k(t − τ)⟩ = 1_{j=k} δ(τ), where j, k ∈ {1, 2, C} and ⟨η⟩ denotes the time average of η. δ_i, δ_C are the respective strengths of the noise sources. Notice that the term δ_C dη_C(t) affects both oscillators i = 1, 2 identically; dη_C is referred to as the common noise forcing term for this reason. The other two noise terms dη_1, dη_2 are the intrinsic noise sources and only affect their respective oscillators. In the general λ−ω system, the function λ(r_i) determines the behaviour of the amplitude, while ω(r_i) determines the frequency behaviour of the oscillator. For our study we consider the following functions for λ(r_i) and ω(r_i):

    \lambda(r_i) = \lambda_b + \alpha r_i^2 + \gamma r_i^4, \qquad \omega(r_i) = \omega_0 + \omega_1 r_i^2.

The constants λ_b, α and γ are parameters that determine the bifurcation structure of the oscillators' amplitudes. The values of these constants determine the criticality of the Hopf bifurcation that occurs at λ_b = 0. We take λ_b to be our bifurcation parameter. To ensure that the system has a subcritical bifurcation for the deterministic system (δ_1 = δ_2 = δ_C = 0), we choose α > 0, γ < 0. This leads to the bifurcation diagram seen in Figure 2.1. Standard stability analysis of the deterministic system shows that the r_i = 0 state is locally stable for λ_b < 0 and unstable for λ_b > 0. Attention is focused on the regime λ_b < 0, |λ_b| ≪ 1, close to the bifurcation point. In this parameter regime, the λ−ω model is a conditional oscillator, with a weakly decaying amplitude. The noise drives coherent oscillations about the fixed point, as per the mechanism of coherence resonance. A limit-cycle may exist depending on the parameters of the system. The noise strengths are taken to be sufficiently small so as to limit the possibility of jumping to the stable limit-cycle, if it exists. In the absence of noise, the amplitude decays to zero. From the form of ω(r_i), one can see that the frequency of the small oscillations is, to leading order, equal to ω_0. The frequency of oscillations changes depending on the amplitude of the oscillator, and this change is determined by the ω_1 r_i^2 term.

Figure 2.1: Bifurcation diagram of r = √(x² + y²) vs. λ_2 in the case of the deterministic system (δ_1 = δ_2 = δ_C = 0) for a single oscillator given by (2.1). Other parameters: ε = 0.1, α = 0.2, γ = −0.2. Solid (dashed) lines indicate stability (instability). Black (grey) lines indicate a fixed point (periodic orbit). The data for this diagram were generated using XPPAUT.
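Chapter 3 and Appendix A describe a C program that simulates (2.1); the short sketch below is not that program, only a minimal illustration of how an Euler–Maruyama step for (2.1) with intrinsic and common white noise might look. The Gaussian sampler and all names are our own, the parameter values are merely representative of the subthreshold regime, and the standard parameter set of Table 3.1 is not reproduced here.

/* Minimal Euler-Maruyama sketch for the two uncoupled lambda-omega oscillators (2.1).
 * Each oscillator receives its own intrinsic increment (delta_1, delta_2) and both
 * share the common increment delta_C * dW_C.  Illustrative only; not the Appendix A code. */
#include <math.h>
#include <stdlib.h>

static const double TWO_PI = 6.283185307179586;

static double gauss(void)                 /* Box-Muller standard normal sample */
{
    double u1 = (rand() + 1.0) / (RAND_MAX + 2.0);
    double u2 = (rand() + 1.0) / (RAND_MAX + 2.0);
    return sqrt(-2.0 * log(u1)) * cos(TWO_PI * u2);
}

int main(void)
{
    /* representative values: lambda_b = eps^2 * lambda_2 < 0, inside the knee */
    double eps = 0.1, lambda2 = -3.0, alpha = 0.2, gamma = -0.2;
    double omega0 = 0.9, omega1 = 1.2;
    double lambdab = eps * eps * lambda2;
    double delta[2] = {0.004, 0.004};     /* intrinsic noise strengths delta_1, delta_2 */
    double deltaC = 0.004;                /* common noise strength */
    double x[2] = {0.0, 0.0}, y[2] = {0.0, 0.0};
    double dt = 1e-3, sqdt = sqrt(dt);
    long nsteps = 1000000, n;

    for (n = 0; n < nsteps; n++) {
        double dWC = sqdt * gauss();      /* one common increment shared by both oscillators */
        int i;
        for (i = 0; i < 2; i++) {
            double r2  = x[i] * x[i] + y[i] * y[i];
            double lam = lambdab + alpha * r2 + gamma * r2 * r2;   /* lambda(r_i) */
            double om  = omega0 + omega1 * r2;                      /* omega(r_i)  */
            double dWi = sqdt * gauss();                            /* intrinsic increment */
            double dx  = (lam * x[i] - om * y[i]) * dt + delta[i] * dWi + deltaC * dWC;
            double dy  = (om * x[i] + lam * y[i]) * dt;             /* no noise in dy, as in (2.1) */
            x[i] += dx;
            y[i] += dy;
        }
        /* phases phi_i = atan2(y_i, x_i) could be recorded here to histogram Phi = phi_2 - phi_1 */
    }
    return 0;
}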
2.2 Multiple scales analysis

2.2.1 Deterministic analysis, parameter scalings

To analyze the small oscillations that are sustained by the noisy driving, we apply a multiple scales analysis to the system of equations (2.1) [21, 24]. We apply a classical multiple scales analysis to the deterministic system (i.e., (2.1) with δ_1 = δ_2 = δ_C = 0) to obtain the scalings of parameters for which weakly damped oscillations appear in the dynamics. These oscillations must not occur on the stable limit-cycle. As such, we look for small oscillations of small magnitude that occur within the knee of the bifurcation diagram. Writing x_i, y_i as asymptotic series with ε as a small parameter yields

    x_i = \epsilon x_{i,0} + \epsilon^2 x_{i,1} + \dots,   (2.2)
    y_i = \epsilon y_{i,0} + \epsilon^2 y_{i,1} + \dots.   (2.3)

Notice that we assume {x_i, y_i} = O(ε) because we are looking for the amplitude equations for small oscillations. Since we are looking close to the Hopf bifurcation point, λ_b* = 0, we perform an asymptotic expansion of the bifurcation parameter near the bifurcation point, λ_b = ελ_1 + ε²λ_2 + O(ε³). We also introduce long time scales T_1 = εt, T_2 = ε²t. Rewriting (2.1) using these scalings gives

    \frac{\partial}{\partial t} \;\to\; \frac{\partial}{\partial t} + \epsilon\frac{\partial}{\partial T_1} + \epsilon^2\frac{\partial}{\partial T_2} + \dots,   (2.4)

    \left(\frac{\partial}{\partial t} + \epsilon\frac{\partial}{\partial T_1} + \epsilon^2\frac{\partial}{\partial T_2} + \dots\right)\begin{pmatrix}\epsilon x_{i,0} + \epsilon^2 x_{i,1} + \dots\\ \epsilon y_{i,0} + \epsilon^2 y_{i,1} + \dots\end{pmatrix} = \begin{pmatrix}\lambda(r_i) & -\omega(r_i)\\ \omega(r_i) & \lambda(r_i)\end{pmatrix}\begin{pmatrix}\epsilon x_{i,0} + \epsilon^2 x_{i,1} + \dots\\ \epsilon y_{i,0} + \epsilon^2 y_{i,1} + \dots\end{pmatrix}.   (2.5)

We consider the contributions to this equation at each order in ε and solve the resulting equations. To leading order, O(ε), we get the following equation:

    O(\epsilon): \quad \frac{\partial}{\partial t}\begin{pmatrix}x_{i,0}\\ y_{i,0}\end{pmatrix} + \begin{pmatrix}0 & \omega_0\\ -\omega_0 & 0\end{pmatrix}\begin{pmatrix}x_{i,0}\\ y_{i,0}\end{pmatrix} = 0.   (2.6)

The solution to this equation is given by

    \begin{pmatrix}x_{i,0}\\ y_{i,0}\end{pmatrix} = \begin{pmatrix}A_i\cos(\omega_0 t) - B_i\sin(\omega_0 t)\\ A_i\sin(\omega_0 t) + B_i\cos(\omega_0 t)\end{pmatrix},   (2.7)

where A_i, B_i are functions of the long time scales. This is the main oscillatory mode of the solution. The next order equation is given by

    O(\epsilon^2): \quad \frac{\partial}{\partial t}\begin{pmatrix}x_{i,1}\\ y_{i,1}\end{pmatrix} + \begin{pmatrix}0 & \omega_0\\ -\omega_0 & 0\end{pmatrix}\begin{pmatrix}x_{i,1}\\ y_{i,1}\end{pmatrix} = -\frac{\partial}{\partial T_1}\begin{pmatrix}x_{i,0}\\ y_{i,0}\end{pmatrix} + \begin{pmatrix}\lambda_1 & 0\\ 0 & \lambda_1\end{pmatrix}\begin{pmatrix}x_{i,0}\\ y_{i,0}\end{pmatrix}.   (2.8)

By the Fredholm alternative, the right-hand side of (2.8) must be orthogonal to the nullspace of the adjoint of the operator acting on x_{i,0} in (2.6) in order for a solution for {x_{i,1}, y_{i,1}} to exist. In the context of multiple scales analysis, this is known as eliminating the secular terms. Define the matrix differential operator M as follows:

    M = \frac{\partial}{\partial t} + \begin{pmatrix}0 & \omega_0\\ -\omega_0 & 0\end{pmatrix}.   (2.9)

One can show, using the definition of the adjoint, that M* = −M. Thus, the null space of M* is given by

    \mathrm{null}(M^*) = \mathrm{span}\left\{\begin{pmatrix}\cos(\omega_0 t)\\ \sin(\omega_0 t)\end{pmatrix}, \begin{pmatrix}-\sin(\omega_0 t)\\ \cos(\omega_0 t)\end{pmatrix}\right\}.   (2.10)

Taking the inner product of the null space of M* with the right-hand side of (2.8) and applying the orthogonality condition gives the conditions for eliminating the secular terms,

    \int_0^{2\pi/\omega_0}(\cos(\omega_0 t), \sin(\omega_0 t))\cdot\left[-\frac{\partial}{\partial T_1}\begin{pmatrix}x_{i,0}\\ y_{i,0}\end{pmatrix} + \begin{pmatrix}\lambda_1 & 0\\ 0 & \lambda_1\end{pmatrix}\begin{pmatrix}x_{i,0}\\ y_{i,0}\end{pmatrix}\right]dt = 0,   (2.11)
    \int_0^{2\pi/\omega_0}(-\sin(\omega_0 t), \cos(\omega_0 t))\cdot\left[-\frac{\partial}{\partial T_1}\begin{pmatrix}x_{i,0}\\ y_{i,0}\end{pmatrix} + \begin{pmatrix}\lambda_1 & 0\\ 0 & \lambda_1\end{pmatrix}\begin{pmatrix}x_{i,0}\\ y_{i,0}\end{pmatrix}\right]dt = 0.   (2.12)

Evaluating the above integrals gives the following equations:

    \frac{\partial A_i}{\partial T_1} = \lambda_1 A_i, \qquad \frac{\partial B_i}{\partial T_1} = \lambda_1 B_i.   (2.13)

Notice that if λ_1 > (<) 0, the parameters are such that the fixed point is unstable (locally stable) and O(ε) perturbations result in exponential growth (decay) on time scale T_1. We study the case where there is no exponential growth or decay on time scale T_1, which requires that λ_1 = 0 and also that the leading order solution experiences no variation on the time scale T_1.
It follows that the solution has no variation on time scale T_1: ∂x_i/∂T_1 = ∂y_i/∂T_1 = 0. Then the nonlinear terms {α, ω_1} influence the slow-time dynamics. Proceeding to the next order, we get the following equation:

    O(\epsilon^3): \quad \frac{\partial}{\partial t}\begin{pmatrix}x_{i,2}\\ y_{i,2}\end{pmatrix} + \begin{pmatrix}0 & \omega_0\\ -\omega_0 & 0\end{pmatrix}\begin{pmatrix}x_{i,2}\\ y_{i,2}\end{pmatrix} = -\frac{\partial}{\partial T_2}\begin{pmatrix}x_{i,0}\\ y_{i,0}\end{pmatrix} + \begin{pmatrix}\lambda_2 + \alpha r_{i,0}^2 & -\omega_1 r_{i,0}^2\\ \omega_1 r_{i,0}^2 & \lambda_2 + \alpha r_{i,0}^2\end{pmatrix}\begin{pmatrix}x_{i,0}\\ y_{i,0}\end{pmatrix}.   (2.14)

To eliminate the secular terms on the right-hand side of this equation, we take the inner product of the right-hand side of (2.14) with the null space of M*, as for the O(ε²) equations. We find that the following equations must be satisfied:

    \frac{\partial A_i}{\partial T_2} = (\lambda_2 + \alpha r_{i,0}^2)A_i - \omega_1 r_{i,0}^2 B_i,   (2.15)
    \frac{\partial B_i}{\partial T_2} = \omega_1 r_{i,0}^2 A_i + (\lambda_2 + \alpha r_{i,0}^2)B_i.   (2.16)

It is shown later that this is consistent with our results from the stochastic analysis, which is performed in essentially the same spirit.

The analysis of the deterministic system has yielded appropriate scalings with which to analyze the stochastically forced system. To construct small amplitude oscillations (of order ε) for this system, the bifurcation parameter λ_b must be within O(ε²) of the critical value λ_b* = 0 and negative. If λ_2 > 0, then the zero solution is unstable and the oscillators' amplitudes experience local exponential growth on the slow time scale. In this case, the oscillators are driven onto the limit-cycle that exists in this regime. If α² − 4γε²λ_2 > 0 and λ_2 < 0, the system is bistable: both the zero amplitude solution and the limit-cycle are locally stable. If α² − 4γε²λ_2 < 0 and λ_2 < 0, then only the zero amplitude solution exists and it is stable. Provided the initial conditions are sufficiently close to zero amplitude, sufficiently weak perturbations do not drive the system onto the limit-cycle that may or may not exist in (2.1), depending on the parameters. We take λ_2 < 0 from now on and choose it such that the system is in the knee of the Hopf bifurcation (α² − 4γε²λ_2 > 0), unless stated otherwise. The time scale for the slow dynamics is T_2 = T = ε²t.

2.2.2 Stochastic multiple scales analysis

We derived the parameter scalings necessary to observe small oscillations (O(ε)) from the deterministic multiple scales analysis (T = ε²t, λ_b = ε²λ_2, λ_2 = O(1) < 0). To derive the equations of evolution for the envelope functions A_i, B_i in the general (i.e., non-deterministic) case, we apply a multiple scales analysis similar to that in [26] to include the stochastic contribution. We assume {x_i, y_i} has a form similar to the leading order solution from the deterministic analysis (2.7),

    \begin{pmatrix}x_i\\ y_i\end{pmatrix} = \epsilon\begin{pmatrix}A_i\cos(\omega_0 t) - B_i\sin(\omega_0 t)\\ A_i\sin(\omega_0 t) + B_i\cos(\omega_0 t)\end{pmatrix},   (2.17)

where A_i = A_i(T), B_i = B_i(T) are stochastic processes and evolve on time scale T = ε²t. This is valid in the small noise approximation,

    0 < \delta_1, \delta_2, \delta_C \ll 1.   (2.18)

Notice that we have not specified the order of the noise strengths, only that they are less than O(1). We only require that the noise strengths are of an order such that they affect the behaviour of A_i, B_i. Secondly, we look for evolution equations for A_i, B_i on the slow time scale with the intrinsic and common noise components expressed as two different noise processes,

    dA_i = \psi_{A_i}\,dT + \sigma_{A_iI}\,d\xi_{A_iI}(T) + \sigma_{AC}\,d\xi_{AC}(T),   (2.19)
    dB_i = \psi_{B_i}\,dT + \sigma_{B_iI}\,d\xi_{B_iI}(T) + \sigma_{BC}\,d\xi_{BC}(T),   (2.20)

where each ξ_{ab}(T) represents a Wiener process operating on time scale T.
The subscript I indicates the effect of intrinsic noise forcing, while C indicates the effect from common noise. We separate them to better distinguish their respective effects. Notice that the common noise forcing is identical in this ansatz. We expect it to affect both oscillators identically, as its effect in (2.1) is identical also. Applying Ito's formula to the expressions for x_i, y_i given by (2.17) and substituting (2.19, 2.20) where appropriate gives

    dx_i = \frac{\partial x_i}{\partial t}dt + \frac{\partial x_i}{\partial A_i}dA_i + \frac{\partial x_i}{\partial B_i}dB_i + \frac{1}{2}\left[\frac{\partial^2 x_i}{\partial A_i^2}dA_i^2 + 2\frac{\partial^2 x_i}{\partial A_i\partial B_i}dA_i\,dB_i + \frac{\partial^2 x_i}{\partial B_i^2}dB_i^2\right],   (2.21)

    \Rightarrow\ dx_i = -\epsilon\omega_0[A_i\sin(\omega_0 t) + B_i\cos(\omega_0 t)]dt + \epsilon\cos(\omega_0 t)[\psi_{A_i}dT + \sigma_{A_iI}d\xi_{A_iI} + \sigma_{AC}d\xi_{AC}] - \epsilon\sin(\omega_0 t)[\psi_{B_i}dT + \sigma_{B_iI}d\xi_{B_iI} + \sigma_{BC}d\xi_{BC}],   (2.22)

    \Rightarrow\ dy_i = \epsilon\omega_0[A_i\cos(\omega_0 t) - B_i\sin(\omega_0 t)]dt + \epsilon\sin(\omega_0 t)[\psi_{A_i}dT + \sigma_{A_iI}d\xi_{A_iI} + \sigma_{AC}d\xi_{AC}] + \epsilon\cos(\omega_0 t)[\psi_{B_i}dT + \sigma_{B_iI}d\xi_{B_iI} + \sigma_{BC}d\xi_{BC}].   (2.23)

Comparing (2.22, 2.23) to (2.1) with (2.17) substituted in for x_i, y_i:

    \epsilon^3\big(\cos(\omega_0 t)[(\lambda_2 + \alpha R_i^2)A_i - \omega_1 R_i^2 B_i] - \sin(\omega_0 t)[(\lambda_2 + \alpha R_i^2)B_i + \omega_1 R_i^2 A_i]\big)dt + \delta_i\,d\eta_i(t) + \delta_C\,d\eta_C(t)
    = \epsilon(\cos(\omega_0 t)\psi_{A_i} - \sin(\omega_0 t)\psi_{B_i})dT + \epsilon\big(\cos(\omega_0 t)[\sigma_{A_iI}d\xi_{A_iI} + \sigma_{AC}d\xi_{AC}] - \sin(\omega_0 t)[\sigma_{B_iI}d\xi_{B_iI} + \sigma_{BC}d\xi_{BC}]\big),   (2.24)

    \epsilon^3\big(\cos(\omega_0 t)[(\lambda_2 + \alpha R_i^2)B_i + \omega_1 R_i^2 A_i] + \sin(\omega_0 t)[(\lambda_2 + \alpha R_i^2)A_i - \omega_1 R_i^2 B_i]\big)dt
    = \epsilon(\cos(\omega_0 t)\psi_{B_i} + \sin(\omega_0 t)\psi_{A_i})dT + \epsilon\big(\cos(\omega_0 t)[\sigma_{B_iI}d\xi_{B_iI} + \sigma_{BC}d\xi_{BC}] + \sin(\omega_0 t)[\sigma_{A_iI}d\xi_{A_iI} + \sigma_{AC}d\xi_{AC}]\big),   (2.25)

where R_i² = A_i² + B_i². Using projections similar to the deterministic analysis, and consistent with Baxendale [2], we determine expressions for ψ_{A_i}, ψ_{B_i} by focusing on the drift components of the equations. Consistent with the multiple scales assumption, we treat terms that evolve on time scale T as constants and evaluate the following:

    \int_0^{2\pi/\omega_0}(\cos(\omega_0 t), \sin(\omega_0 t))\cdot((2.24), (2.25))\,dt,   (2.26)
    \int_0^{2\pi/\omega_0}(-\sin(\omega_0 t), \cos(\omega_0 t))\cdot((2.24), (2.25))\,dt.   (2.27)

From this integration, we determine the functions ψ_{A_i}, ψ_{B_i}. The result is consistent with what is observed in the deterministic analysis above:

    \psi_{A_i} = (\lambda_2 + \alpha R_i^2)A_i - \omega_1 R_i^2 B_i, \qquad \psi_{B_i} = \omega_1 R_i^2 A_i + (\lambda_2 + \alpha R_i^2)B_i.   (2.28)

This projection to determine ψ_{A_i}, ψ_{B_i} is identical to eliminating the secular terms from the higher order equations in ε, as done in the multiple scales analysis of the deterministic system. To determine the stochastic component we follow Baxendale [2]. To obtain the values of the constants {σ_{A_1I}, σ_{B_1I}, σ_{A_2I}, σ_{B_2I}, σ_{AC}, σ_{BC}}, we consider the diffusion terms of the averaged generator for the original system (2.1) and of the generator for the equations given by (2.19, 2.20). The diffusion terms of the generators must be equal to each other for consistency. Expressing the noise terms on the slow time scale and using the properties of the Wiener process gives

    d\eta(t) = d\eta(\epsilon^{-2}T) = \epsilon^{-1}\,d\eta(T),   (2.29)

where η(t) is a Wiener process on time scale t. We write the diffusion terms of the generator [36] for the stochastic processes given by the original system (2.1) and the ansatz (2.19, 2.20). These diffusion terms are given by D²_{orig,i} and D²_{new,i} respectively,

    D^2_{orig,i} = \frac{\delta_i^2 + \delta_C^2}{2\epsilon^2}\,\frac{\partial^2}{\partial x_i^2}, \qquad D^2_{new,i} = \frac{\sigma_{A_iI}^2 + \sigma_{AC}^2}{2}\,\frac{\partial^2}{\partial A_i^2} + \frac{\sigma_{B_iI}^2 + \sigma_{BC}^2}{2}\,\frac{\partial^2}{\partial B_i^2}.   (2.30)

To determine the averaged diffusion operator, we change variables from (x_i, y_i) to (A_i, B_i) for D²_{orig,i} so that both operators are expressed in the same variables. Then, we average the resulting operator over one period of oscillation on the fast time scale.
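For reference (this reminder is an addition, not part of the original text), the averaging below uses only the elementary integrals over one period of the fast oscillation:

    \int_0^{2\pi/\omega_0}\cos^2(\omega_0 t)\,dt = \int_0^{2\pi/\omega_0}\sin^2(\omega_0 t)\,dt = \frac{\pi}{\omega_0}, \qquad \int_0^{2\pi/\omega_0}\cos(\omega_0 t)\sin(\omega_0 t)\,dt = 0,

so the cross term in the transformed operator averages to zero, and each squared trigonometric factor contributes a factor of 1/2 once the average is normalized by the period 2π/ω_0.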
Denote the averaged diffusion operator for the original system by \bar{D}^2_{orig,i}:

    D^2_{orig,i} = \frac{\delta_i^2 + \delta_C^2}{2\epsilon^4}\left[\cos^2(\omega_0 t)\frac{\partial^2}{\partial A_i^2} - 2\cos(\omega_0 t)\sin(\omega_0 t)\frac{\partial^2}{\partial A_i\partial B_i} + \sin^2(\omega_0 t)\frac{\partial^2}{\partial B_i^2}\right],   (2.31)

    \bar{D}^2_{orig,i} = \frac{\omega_0}{2\pi}\int_0^{2\pi/\omega_0} D^2_{orig,i}\,dt = \frac{\delta_i^2 + \delta_C^2}{4\epsilon^4}\left[\frac{\partial^2}{\partial A_i^2} + \frac{\partial^2}{\partial B_i^2}\right].   (2.32)

To ensure consistency of the multiple scales analysis, we equate the coefficients of the diffusion operators, keeping in mind that we want to separate intrinsic and common noise effects:

    \bar{D}^2_{orig,i} = D^2_{new,i}, \qquad \frac{\delta_i^2 + \delta_C^2}{4\epsilon^4}\left[\frac{\partial^2}{\partial A_i^2} + \frac{\partial^2}{\partial B_i^2}\right] = \frac{\sigma_{A_iI}^2 + \sigma_{AC}^2}{2}\frac{\partial^2}{\partial A_i^2} + \frac{\sigma_{B_iI}^2 + \sigma_{BC}^2}{2}\frac{\partial^2}{\partial B_i^2}.   (2.33)

We get the following values for the noise strengths in the ansatz (2.19, 2.20):

    \sigma_{A_iI} = \sigma_{B_iI} = \frac{\delta_i}{\sqrt{2}\,\epsilon^2}, \qquad \sigma_{AC} = \sigma_{BC} = \frac{\delta_C}{\sqrt{2}\,\epsilon^2}.   (2.34)

Thus, we have determined the SDEs for A_i, B_i, i = 1, 2:

    dA_i = \left[(\lambda_2 + \alpha R_i^2)A_i - \omega_1 R_i^2 B_i\right]dT + \frac{\delta_i}{\sqrt{2}\,\epsilon^2}\,d\xi_{A_iI} + \frac{\delta_C}{\sqrt{2}\,\epsilon^2}\,d\xi_{AC},   (2.35)
    dB_i = \left[\omega_1 R_i^2 A_i + (\lambda_2 + \alpha R_i^2)B_i\right]dT + \frac{\delta_i}{\sqrt{2}\,\epsilon^2}\,d\xi_{B_iI} + \frac{\delta_C}{\sqrt{2}\,\epsilon^2}\,d\xi_{BC}.   (2.36)

Notice that, because of the noisy forcing, the amplitude of the oscillations does not evolve to the steady state where A_i = B_i = 0. In effect, the noise excites the main mode of oscillation {cos(ω_0 t), sin(ω_0 t)}. Notice also that the noise strengths must be O(ε²) to ensure consistency of the multiple scales analysis. Now that the stochastic differential equations that describe the envelope of oscillations for the original system have been derived, we perform an analysis of the linearized system.

2.3 Determining the linearized probability density function

For small amplitudes R_i ≪ 1, we linearize (2.35, 2.36) about R_i = 0:

    dA_i = \lambda_2 A_i\,dT + \frac{\delta_i}{\sqrt{2}\,\epsilon^2}\,d\xi_{A_iI} + \frac{\delta_C}{\sqrt{2}\,\epsilon^2}\,d\xi_{AC},   (2.37)
    dB_i = \lambda_2 B_i\,dT + \frac{\delta_i}{\sqrt{2}\,\epsilon^2}\,d\xi_{B_iI} + \frac{\delta_C}{\sqrt{2}\,\epsilon^2}\,d\xi_{BC}.   (2.38)

In this approximation, the terms A_i, B_i evolve as Ornstein-Uhlenbeck (OU) processes, which are straightforward to analyze. Unfortunately, as with any linear approximation, nonlinear effects are lost. We lose the amplitude-dependent frequency variation, described by ω_1, as well as the nonlinear amplitude effects. In particular, there is no possibility of jumping onto the stable limit-cycle, which may coexist with the stable fixed point depending on the value of λ_2. Because the system is forced with weak noise and experiences small oscillations for which the nonlinear effects are essentially negligible, this linear approximation can be justified, provided the noise is sufficiently weak.

We start with the probability density function for the A_i, B_i, denoted P(A_1, B_1, A_2, B_2). Writing out the full system in matrix form gives the following:

    d\mathbf{A} = \lambda_2\mathbf{A}\,dT + \mathbf{B}\,d\boldsymbol{\xi}, \qquad \mathbf{A} = \begin{pmatrix}A_1\\ B_1\\ A_2\\ B_2\end{pmatrix}, \qquad d\boldsymbol{\xi} = \begin{pmatrix}d\xi_{A_1I}\\ d\xi_{B_1I}\\ d\xi_{A_2I}\\ d\xi_{B_2I}\\ d\xi_{AC}\\ d\xi_{BC}\end{pmatrix}, \qquad \mathbf{B} = \frac{1}{\sqrt{2}\,\epsilon^2}\begin{pmatrix}\delta_1 & 0 & 0 & 0 & \delta_C & 0\\ 0 & \delta_1 & 0 & 0 & 0 & \delta_C\\ 0 & 0 & \delta_2 & 0 & \delta_C & 0\\ 0 & 0 & 0 & \delta_2 & 0 & \delta_C\end{pmatrix}.   (2.39)

The probability density function for {A_i, B_i} given by (2.39) is

    P(A_1, B_1, A_2, B_2) = \frac{1}{(2\pi)^2\sqrt{\mathrm{Det}[\sigma]}}\exp\!\left(-\tfrac{1}{2}\mathbf{A}^T\sigma^{-1}\mathbf{A}\right),   (2.40)

where σ = (−2λ_2)^{-1} B B^T. We denote the phase of oscillator i by φ_i = arctan(B_i/A_i), and the amplitude R_i in the same way as above. Technically, the phase of the oscillator should be defined as φ_i = arctan(y_i/x_i), but it is easily shown to be equivalent to the above definition by our change of coordinates (2.17).
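As a numerical aside (not part of the original analysis), the linearized system (2.37, 2.38) is cheap to simulate directly, which gives an independent check on the phase-difference density derived below. The following Euler–Maruyama sketch, with our own naming and illustrative parameter values, accumulates a histogram of the phase difference Φ = φ_2 − φ_1:

/* Euler-Maruyama simulation of the linearized envelope equations (2.37, 2.38)
 * for two oscillators sharing common noise, histogramming Phi = phi_2 - phi_1.
 * Illustrative sketch only; transients are not discarded. */
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

#define NBINS 64
static const double PI = 3.14159265358979323846;

static double gauss(void)
{
    double u1 = (rand() + 1.0) / (RAND_MAX + 2.0);
    double u2 = (rand() + 1.0) / (RAND_MAX + 2.0);
    return sqrt(-2.0 * log(u1)) * cos(2.0 * PI * u2);
}

int main(void)
{
    double eps = 0.1, lambda2 = -3.0;
    double delta[2] = {0.004, 0.004}, deltaC = 0.004;     /* illustrative strengths */
    double sI[2], sC, A[2] = {0, 0}, B[2] = {0, 0};
    double dT = 1e-3, sq = sqrt(dT);
    long hist[NBINS] = {0}, n, nsteps = 2000000;
    int i, k;

    sC = deltaC / (sqrt(2.0) * eps * eps);                /* sigma_AC = sigma_BC, eq. (2.34) */
    for (i = 0; i < 2; i++) sI[i] = delta[i] / (sqrt(2.0) * eps * eps);

    for (n = 0; n < nsteps; n++) {
        double dWA_C = sq * gauss(), dWB_C = sq * gauss();  /* common increments for A and B */
        double Phi;
        for (i = 0; i < 2; i++) {
            A[i] += lambda2 * A[i] * dT + sI[i] * sq * gauss() + sC * dWA_C;
            B[i] += lambda2 * B[i] * dT + sI[i] * sq * gauss() + sC * dWB_C;
        }
        Phi = atan2(B[1], A[1]) - atan2(B[0], A[0]);        /* wrap to (-pi, pi] */
        if (Phi <= -PI) Phi += 2.0 * PI;
        if (Phi >   PI) Phi -= 2.0 * PI;
        k = (int)((Phi + PI) / (2.0 * PI) * NBINS);
        if (k >= NBINS) k = NBINS - 1;
        hist[k]++;
    }
    for (k = 0; k < NBINS; k++)                             /* print normalized density */
        printf("%f %f\n", -PI + (k + 0.5) * 2.0 * PI / NBINS,
               hist[k] * NBINS / (2.0 * PI * (double)nsteps));
    return 0;
}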
The joint probability density for amplitude and phase, denoted by P(R_1, φ_1, R_2, φ_2), is given by [16]:

    P(R_1, \phi_1, R_2, \phi_2) = C R_1 R_2 \exp\!\left[\frac{\pi^2 C}{\lambda_2}\left(\Delta_2 R_1^2 + \Delta_1 R_2^2 - \frac{\delta_C^2}{\epsilon^4} R_1 R_2\cos(\phi_2 - \phi_1)\right)\right],   (2.41)

where

    C = \frac{\lambda_2^2}{\pi^2\left(\Delta_1\Delta_2 - \delta_C^4/(4\epsilon^8)\right)}, \qquad \Delta_i = \frac{\delta_i^2}{2\epsilon^4} + \frac{\delta_C^2}{2\epsilon^4}.   (2.42)

The end goal of this analysis is to obtain the density of the phase difference Φ = φ_2 − φ_1 of these two oscillators when driven by a mixture of common and intrinsic noise forcing. We denote this density by P̄(Φ). Using the standard definition of the marginal probability density function, we can obtain an expression for said density:

    \bar{P}(\Phi) = \int_{-\pi}^{\pi}\int_0^{\infty}\int_0^{\infty} P(R_1, \phi_1, R_2, \Phi + \phi_1)\,dR_1\,dR_2\,d\phi_1.   (2.43)

Evaluating such an integral is a non-trivial task since, in general, P is a Gaussian distribution with correlated terms. We evaluate this integral by deriving the general solution to another integral of the same form. Consider the following integral:

    I = \int_0^{\infty}\int_0^{\infty} rs\,\exp\!\left(-ar^2 - bs^2 + crs\right)dr\,ds, \qquad a, b \ge 0.   (2.44)

To evaluate it, we first apply a linear coordinate transformation Q to diagonalize the quadratic form in the exponential:

    ar^2 + bs^2 - crs = (r\ \ s)\begin{pmatrix}a & -\tfrac{1}{2}c\\ -\tfrac{1}{2}c & b\end{pmatrix}\begin{pmatrix}r\\ s\end{pmatrix} = (r\ \ s)\,Q D Q^{-1}\begin{pmatrix}r\\ s\end{pmatrix},   (2.45)

where

    Q = \begin{pmatrix}\dfrac{e_1 c}{a - b - \sqrt{(a-b)^2 + c^2}} & \dfrac{e_2 c}{a - b + \sqrt{(a-b)^2 + c^2}}\\[1ex] e_1 & e_2\end{pmatrix}, \qquad e_{1,2} = \left[1 + \frac{c^2}{\left(a - b \mp \sqrt{(a-b)^2 + c^2}\right)^2}\right]^{-1/2},

    D = \begin{pmatrix}\tfrac{1}{2}(a+b) + \tfrac{1}{2}\sqrt{(a-b)^2 + c^2} & 0\\ 0 & \tfrac{1}{2}(a+b) - \tfrac{1}{2}\sqrt{(a-b)^2 + c^2}\end{pmatrix}.

One can show with a little bit of work that Q is unitary (i.e., Q^{-1} = Q^T). The requirement of a unitary transformation allows us to diagonalize the quadratic form. We apply the transformation

    \begin{pmatrix}r\\ s\end{pmatrix} = Q\begin{pmatrix}\hat r\\ \hat s\end{pmatrix}   (2.46)

to the integral. Depending on the sign of c, the result changes due to a flipping of the region of integration. If c > 0, then

    I = \int_{L_{\hat s}}^{U_{\hat s}}\int_{L_{\hat r}}^{U_{\hat r}}\left(Q_{11}Q_{21}\hat r^2 + (Q_{11}Q_{22} + Q_{12}Q_{21})\hat r\hat s + Q_{12}Q_{22}\hat s^2\right)\exp\!\left(-D_{11}\hat r^2 - D_{22}\hat s^2\right)\mathrm{Det}[Q]\,d\hat r\,d\hat s,   (2.47)

where

    L_{\hat r} = -Q_{22}Q_{21}^{-1}\hat s, \quad U_{\hat r} = -Q_{12}Q_{11}^{-1}\hat s, \quad L_{\hat s} = 0, \quad U_{\hat s} = \infty.   (2.48)

If c < 0, then

    I = \int_{L_{\hat r}}^{U_{\hat r}}\int_{L_{\hat s}}^{U_{\hat s}}\left(Q_{11}Q_{21}\hat r^2 + (Q_{11}Q_{22} + Q_{12}Q_{21})\hat r\hat s + Q_{12}Q_{22}\hat s^2\right)\exp\!\left(-D_{11}\hat r^2 - D_{22}\hat s^2\right)\mathrm{Det}[Q]\,d\hat s\,d\hat r,   (2.49)

where

    L_{\hat s} = -Q_{21}Q_{22}^{-1}\hat r, \quad U_{\hat s} = -Q_{11}Q_{12}^{-1}\hat r, \quad L_{\hat r} = 0, \quad U_{\hat r} = \infty,   (2.50)

and Q_{ij} denotes the entry in the i-th row and j-th column of Q. In both cases, c > 0 and c < 0, this transformation maps the domain of integration to an infinite triangular domain. Considering the case c ≠ 0, we evaluate (2.47, 2.49) with the software package MAPLE to obtain

    I = \alpha\left[\frac{\pi}{2} + \arctan(\beta)\right] + \gamma,   (2.51)

where

    \alpha = -\frac{\mathrm{Det}[Q]\left(Q_{12}Q_{22}D_{11} + Q_{11}Q_{21}D_{22}\right)}{4\,(D_{11}D_{22})^{3/2}},   (2.52)
    \beta = \frac{Q_{12}Q_{22}D_{11} + Q_{21}Q_{11}D_{22}}{-\mathrm{Det}[Q]\,\sqrt{D_{11}D_{22}}},   (2.53)
    \gamma = \frac{\mathrm{Det}[Q]^2}{4 D_{11}D_{22}}.   (2.54)

Now that we have an expression for the above integral in terms of the entries of the matrices Q and D, we can perform the back substitutions to express the integral I in terms of a, b, c. Carrying out the necessary back substitutions for the various quantities appearing in the resulting expression for I, we see that

    \alpha = \frac{c}{(4ab - c^2)^{3/2}}, \qquad \beta = \frac{c}{\sqrt{4ab - c^2}}, \qquad \gamma = \frac{1}{4ab - c^2}.   (2.55)

Comparing to (2.43, 2.41), we see that

    \bar{P}(\Phi) = \beta_1(\Phi)\left[\frac{\pi}{2} + \arctan\big(\beta_2(\Phi)\big)\right] + \beta_3(\Phi),   (2.56)
Determining the linearized probability density function where the terms β1 (Φ), β2 (Φ) and β3 (Φ) are as follows: β1 (Φ) =  2 cos(Φ) ∆ ∆ − (δ 4 /4 8 ) δC 1 2 C , 4 4 4π (∆1 ∆2 − δC cos2 (Φ)/(4 8 ))3/2 2 cos(Φ) δC  β2 (Φ) = β3 (Φ) =  4 cos2 (Φ) 4 8 ∆1 ∆2 − δC  1 2π  (2.57) (2.58)  ,  4 /4 8 ) ∆1 ∆2 − (δC 4 cos2 (Φ)/(4 8 ) ∆1 ∆2 − δC  (2.59)  .  Notice the properties of this solution. First, it is an even function of Φ. Notice also that when δC = 0, P¯ (Φ) is constant, indicating that in the absense of common noise the phases are completely desynchronized in the linearized regime. Also notice that when δC > 0, the distribution ceases to be uniform, indicating that the common noise forcing is indeed having some effect on the phase dynamics that is different from the effect of the intrinsic noise forcing. We show in Chapter 3 that this probability distribution matches simulated data from the original system (2.1) quite well in regimes where this approximation is appropriate. At this point, we take the opportunity to define a parameter, K, which can help characterize the synchronization behaviour of the system: K=  2 δC √ = 2 4 ∆1 ∆2  1+  δ12 2 δC  1+  δ22 2 δC  −1/2  .  (2.60)  Notice that K ∈ [0, 1] for all values of noise strengths. K = 0 and K = 1 correspond to the cases of no common noise and only common noise, respectively. Making the appropriate substitutions into (2.57, 2.58, 2.59) leads to the following: β1 (Φ) = β2 (Φ) = β3 (Φ) =  K cos(Φ) 1 − K 2 , 2π(1 − K 2 cos2 (Φ))3/2 K cos(Φ) , 1 − K 2 cos2 (Φ) 1 1 − K2 . 2π 1 − K 2 cos2 (Φ)  (2.61) (2.62) (2.63)  We see that, to leading order, only the ratio of intrinsic to common noise strengths is important in determining the character of the phase probability density function. Notice that when K = 0, the phase difference probability density function is uniform, as we would expect given that there is no common forcing. When K = 1, the density is zero everywhere except at Φ = 0 where a singularity occurs. This suggests the existence of a δ-function solution centred at Φ = 0 when K = 1. We discuss this case later in section 2.5.1. 22  2.3. Determining the linearized probability density function As mentioned, this linearized approximation for the SDEs (2.35, 2.36) has eliminated the effect of the amplitude-dependent phase term (ω1 Ri2 ), which become important when the oscillators experience intrinsic noise of different strengths as we show later. It also eliminates higher order amplitude effects (αRi2 ). To capture the effects of these terms, we perform a different asymptotic analysis of the amplitude and phase equations as given in section 2.4.  2.3.1  Moments for amplitude derived from distribution  To calculate the various moments of amplitude from the probability density function given by (2.41, 2.42), we must integrate (2.41) over all the values of Φ = φ2 − φ1 , φ1 . Notice that the evaluation of that involves an integral of the form π  (2.64)  ea cos(x) dx −π  which is non-trivial. To evaluate this, we evaluate a contour integral in the complex plane. Let z = eix . The integral is now a path integral around the unit circle in the complex plane moving counterclockwise. Denote it by Γ. π  z = eix , dz = iz dx  ⇒  a  ea cos(x) dx = −i  −π  e 2 (z+¯z ) z −1 dz  (2.65)  Γ  where z¯ denotes the complex conjugate of z. Observe that as we have defined z, we have that z¯ = z −1 . 
Rewrite the integral as follows: a  e 2 (z+¯z ) z −1 dz = −i  −i  Γ  Γ  1 a a exp z + z −1 dz z 2 2  (2.66)  We can now write the exponential terms as a power series and apply Cauchy’s residue theorem to this integral. exp  az 2 ⇒  ∞  = n=0  1 az n! 2  n  (2.67)  1 a a exp z + z −1 dz 2 2 Γ z a a 1 = 2π Resz=0 exp z + z −1 z 2 2  (2.68)  −i  = 2π Resz=0 ∞  = 2π n=0  1 z  a2n 22n (n!)2  ∞ n=0  1 az n! 2  n  ∞ n=0  1 a n! 2z  n  (2.69)  23  2.3. Determining the linearized probability density function This integral converges for all a ∈ R by the comparison test as follows: ∞ n=0  M −1  a2n 22n (n!)2  ≤ n=0  a2n + 22n (n!)2  ∞ n=M  a2n 22n  a 2  +k  2n  <∞  (2.70)  where M is the first positive integer such that M !2 > (a/2 + k)2M and (a/2 + k) > a/2. Using this result we have the marginal probability density function for the amplitudes determined from the linear theory, Pˆ : π2 Pˆ (R1 , R2 ) = 4π 2 CR1 R2 exp C × (∆2 R12 + ∆1 R22 ) × λ2 ∞ n=0  1 (n!)2  1 2  2R R λ2 δC 1 2 4 ∆ ∆ − (δ 4 /4 4 ) 1 2 C  2n  (2.71)  We include a short moment calculation here for a consistency check for later calculations. To calculate the expected moments for R1 , we evaluate the following integral ∞  E[R1N ] = 0  ∞  R1N Pˆ (R1 , R2 ) dR1 dR2  (2.72)  0 ∞  = 4π C  ∞  ∞  2  0  exp  π2 λ2  0  n=0  1 (n!)2  2 δC π2C 2 4 λ2  2n  R1N +1+2n R21+2n ×  C × (∆2 R12 + ∆1 R22 ) dR1 dR2 (2.73)  24  2.3. Determining the linearized probability density function To interchange the infinite integration and infinite summation, we appeal to the monotone covergence and dominated convergence theorems. ∞  E[R1N ] = 4π 2 C n=0 ∞ ∞ 0  0 ∞  = 4π 2 C n=0 ∞ 0  1 (n!)2  2n  2 δC π2C 2 4 λ2  × π2 C × (∆2 R12 + ∆1 R22 ) dR1 dR2 λ2  R1N +1+2n R21+2n exp 1 (n!)2  2n  2 δC π2C 2 4 λ2  ∞ 0  (2.74)  R1N +1+2n exp C1 R12 dR1 ×  R21+2n exp C2 R22 dR2  (2.75)  π2 C∆2,1 λ2  (2.76)  where ∞  = 4π 2 C n=0  C1,2 = 1 (n!)2  2n  2 δC π2C 2 4 λ2  Γ(1 + n + N/2)Γ(1 + n) 4(−C1 )1+n+N/2 (−C2 )1+n  (2.77)  where Γ is the Gamma function which satisfies Γ(n + 1) = n! if n ∈ Z+ . Rewrite the factorial terms using this property: ∞  E[R1N ] = π 2 C n=0  Γ(n + 1 + N/2) Γ(n + 1)  2 δC π2C 2 4 λ2  2n  1 (−C1  )1+n+N/2 (−C  1+n 2)  (2.78)  We cannot evaluate this sum when N is odd, given the difficulties in evaluating quotients of the Gamma function involving non-integer arguments. Considering the case N = 2, we get E[R12 ] =  π2C C12 C2  =  λ32 ∆1 C 2π4  = −  ∆1 λ2  ∞  (n + 1) n=0  4 δC π4C 2 4 8 λ22 C1 C2  4 8 4 − 4 8∆ ∆ δC 1 2  n  2  (2.79)  25  2.4. Asymptotic analysis of amplitude and phase equations, Fokker-Planck equation We also wish to evaluate E[R1−1 ]. Unfortunately, we cannot evaluate it, but we can show that it exists by observing that the sum converges. E[R1−1 ] = = ≤  π 5/2 C √ −C2 −C1 π 5/2 C √ −C2 −C1 π 5/2 C √ −C2 −C1  ∞ n=0 ∞  (1 · 3 · 5 · · · · · 2n − 1) 2n (n!) xn K 2n  4 δC π4C 2 4 8 λ22 C1 C2  where xn = 1 −  1 2n  n  xn−1 , x0 = 1  (2.80) (2.81)  n=0 ∞  K 2n < ∞  (2.82)  n=0  Thus E[R1−1 ] exists. This exercise supports the calculation of the moments of amplitude undertaken in section 2.4.1, where we use a different approach to calculate the moments of amplitude that allows us to include the nonlinear effects.  2.4  Asymptotic analysis of amplitude and phase equations, Fokker-Planck equation  From the equations (2.35, 2.36) we derive equations for the scaled amplitude and phase. 
Applying Ito’s formula to get SDEs for Ri , φi gives the following SDEs for amplitude and phase: 1 1 Ri (λ2 + αRi2 ) + ∆i Ri−1 dT + √ [cos(φi )(δi dξAi I + δC dξAC ) 2 2 2 + sin(φi )(δi dξBi I + δC dξBC )] 1 = ω1 Ri2 dT + √ [cos(φi )(δi dξBi I + δC dξBC ) 2 2 Ri − sin(φi )(δi dξAi I + δC dξAC )] , i = 1, 2  dRi =  dφi  (2.83)  (2.84)  To derive a FPE for these four SDEs (2.83, 2.84) is straightforward, but the solution of such an equation is less straightforward. Even numerical computation is not desirable as the FPE is dependent on four variables, making it a computationally expensive problem to solve. One can try to separate the amplitude and phase contributions to the FPEs in the method highlighted in [33], but this is not possible. Instead we look for a reduction based on the observation that the amplitude Ri and phase φi evolve on different time scales. We consider the SDEs for R1 , φ1 , noting that the same analysis applies for R2 , φ2 . Rescaling R1 = ρs, where s = O(1), and ρ  1 and  26  2.4. Asymptotic analysis of amplitude and phase equations, Fokker-Planck equation substituting into (2.83) yields ds =  s(λ2 + αρ2 s2 ) + + sin(φ1 )(δ1 dξB1 I  Then for  √  ∆i ≤ ρ  ∆1 1 dT + √ 2ρ2 s 2ρ + δC dξBC )] .  2  [cos(φ1 )(δ1 dξA1 I + δC dξAC ) (2.85)  1 in the equation for s, the leading order terms for the drift and diffusion  dynamics evolve on the slow T time scale. With λ2 < 0, the linear terms in (2.85) indicate mean reverting behavior on the T scale. This suggests that Ri then has mean reverting behavior on the T scale. Small values of Ri in the drift terms in (2.84) describes evolution on the even slower ∆i T time scale for φ. We see later that this scaling assumption for R1 is consistent with the calculation of the expectation value of the first moment of amplitude. The difference in time scales in the Ri and φi suggests an evolution of R reminiscent of a quasi-steady approximation, where the more rapid variation of Ri treated as independent of φi . Then we look for an approximation of the probability density function for Ri , φi , i = 1, 2 where Ri evolves independently of φi , while φi maintain their Ri -dependence. We then integrate the SDE (2.84) against the distribution for Ri . The end result is that the various moments of Ri are replaced with their expected values. Thus, we get the following modified SDEs for the phases, 1 E[Ri−1 ] [cos(φi )(δi dξBi I + δC dξBC ) 2 2 − sin(φi )(δi dξAi I + δC dξAC )] , i = 1, 2  dφi = ω1 E[Ri2 ]dT + √  (2.86)  We thus achieve a pseudo-separation of the amplitude and phase components for the oscillators, and we consider the phase behaviour that includes the influence of the Ri through its moments. There is some analogy to the approximation used in [18, 43], where the authors undertake an analysis of the phase equations via a phase reduction technique of the original system of equations under study. In those studies, the phase reduction is justified as perturbations from the limit-cycle are shown to return to a small neighbourhood of the limit cycle very quickly, which allows for a characterization of the dynamics through the phase. In our problem, due to the separation of time scales, our moment approximation is, in some sense, similar to assuming close proximity to the limit-cycle in the above studies. For the phase difference Φ defined above and the phase variable φ1 , the FPE for the probability  27  2.4. 
Asymptotic analysis of amplitude and phase equations, Fokker-Planck equation density function q(Φ, φ1 ) is given by [16], ∂q ∂T  ∂q ∂q − ω1 E[R12 ] ∂Φ ∂φ1 2 δ2 ∂ q ∂2 1 + [∆1 E[R1−1 ]2 + ∆2 E[R2−1 ]2 ] 2 − C4 E[R1−1 ]E[R2−1 ] 2 (cos (Φ)q) 2 ∂Φ ∂Φ 2 2 δ ∂ (−∆1 E[R1−1 ]2 + C4 E[R1−1 ]E[R2−1 ] cos (Φ))q +2 ∂Φ∂φ1 2 ∂2q . +[∆1 E[R1−1 ]2 ] ∂φ1 2  = −ω1 (E[R22 ] − E[R12 ])  (2.87)  This is now an advection diffusion equation in 2 dimensions with anisotropic diffusion. Given that this is a FPE for phase behaviour, the boundary conditions for this system are periodic, and the solution must be normalized: ∂ m+n q(−π, φ1 ) = ∂Φm ∂φn1 ∂ m+n q(Φ, −π) = ∂Φm ∂φn1 where m, n ≥ 0 and  π  π  −π  −π  ∂ m+n q(π, φ1 ), ∂Φm ∂φn1 ∂ m+n q(Φ, π), ∂Φm ∂φn1  (2.88)  q(Φ, φ1 ) dΦdφ1 = 1.  (2.89)  To get a sense of what affects the behaviour of solutions to the FPE, we derive E[Ri2 ], E[Ri−1 ] in terms of the system parameters.  2.4.1  Deriving the moments including nonlinear effects  To compute the expected values of the moments for the amplitude we return to the equations given by our ansatz for the coefficents of the dominant oscillatory modes for the amplitude: Ai , Bi , (2.35, 2.36). As above, we assume that Ri2 is small, and obtain linearized SDEs (2.37, 2.38) which are OU-Processes. Moreover, we can see that linearizing the SDEs has resulted in a decoupling of Ai from Bi terms and thus a separable probability density for each component, assuming there are negligible correlation effects. We explain below why this assumption is acceptable in the range of parameters where the OU approximation is appropriate. The probability density function, ρi for the ith independent oscillator is given by ρi (A, B) = ρAi (A)ρBi (B)  where  ρAi (A) =  −λ2 exp ∆i π  λ2 2 A ∆i  (2.90)  28  2.4. Asymptotic analysis of amplitude and phase equations, Fokker-Planck equation Performing the change of variables (Ai , Bi ) → (Ri , φi ), and change of measure, we find that to leading order the distribution for the amplitude of oscillator i, ρi , is given by ρi (Ri ) =  −2λ2 Ri exp ∆i  λ2 2 R ∆i i  (2.91)  Notice that this distribution agrees with (2.41, 2.42) in the limit where no common noise is present, and thus the oscillators are independent. Various expected moments of amplitude can easily be calculated from this distribution. E[Ri2 ] = −  ∆i , λ2  E[Ri−1 ] =  −λ2 π ∆i  (2.92)  We also evaluate the first moment of amplitude, E[Ri ] = π∆i /(−4λ2 ). This serves as confirmation √ that our rescaling of Ri = O( ∆i ) that we used to derive the FPE (2.87) in section 2.4 is of the correct order. Notice that the second moment above is exactly the same as (2.79). This is important to point out that we get exactly the same leading order moments when we compute them using the linear distribution (2.41). Thus our assumption of negligible correlation effects is justified. Using this leading order solution (2.91), we compute the corrections to the leading order solution caused by the nonlinear terms. Using the SDEs for the amplitude of the oscillator i given in (2.83), we can write a FPE for the probability density function for the amplitude of the oscillator which includes the nonlinear effects, denoting the steady state solution to this equation with ρˆi (R) 0=−  ∂ ∂R  1 ∆i R(λ2 + αR2 ) + ∆i R−1 ρˆi + 2 2  ∂ 2 ρˆi ∂R2  ,  ρˆi = ρˆi (R)  (2.93)  It is easy to check that when α = 0, we find the density function ρi satisfies (2.93) exactly. 
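The leading-order density (2.91) is a Rayleigh distribution, so the moments (2.92) can be checked by direct sampling before the nonlinear correction is introduced. The following is a minimal C sketch in which λ2 and Δi are set to placeholder values; R is drawn by inverse-transform sampling and the sample averages are compared with the closed forms. The estimator for E[R^{-1}] converges slowly because 1/R puts heavy weight near R = 0.

/*
 * Monte Carlo check of the leading-order amplitude moments (2.92).
 * The density (2.91) is Rayleigh with sigma^2 = Delta/(-2*lambda2), so R is
 * drawn by inverse-transform sampling.  lambda2 and Delta are placeholders.
 */
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

int main(void)
{
    double lambda2 = -3.0, Delta = 2e-3;            /* placeholder parameter values */
    double sigma = sqrt(Delta / (-2.0 * lambda2));
    long   N = 1000000;
    double sum2 = 0.0, suminv = 0.0;

    srand(2);
    for (long n = 0; n < N; n++) {
        double u = (rand() + 1.0) / (RAND_MAX + 2.0);   /* uniform in (0,1)  */
        double R = sigma * sqrt(-2.0 * log(u));         /* Rayleigh sample   */
        sum2   += R * R;
        suminv += 1.0 / R;       /* slow to converge: 1/R is heavy near R = 0 */
    }
    printf("E[R^2] : sample %.5e  formula %.5e\n", sum2 / N, -Delta / lambda2);
    printf("E[1/R] : sample %.5e  formula %.5e\n", suminv / N,
           sqrt(-M_PI * lambda2 / Delta));
    return 0;
}

Since ρi solves (2.93) exactly only when α = 0, a correction is needed for α ≠ 0.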
With this in mind, we define the corrected steady state distribution as follows n  ρˆi = ρi g,  where  g = g(R) = exp  (2.94)  ak R k k=0  where n is some terminal indexing integer. Substituting this into (2.93) gives the following equation: 0 = 8∆i αR3 + 4αλ2 R5 g(R) + ∆i (2αR4 − ∆i − 2λ2 R2 )  dg d2 g − ∆2i R 2 dR dR  (2.95)  Substituting in the expression for g given by (2.94) allows us to solve for the coefficients, ak . This is done by separating all contributions on the polynomial basis in the variable R. Doing so yields  29  2.4. Asymptotic analysis of amplitude and phase equations, Fokker-Planck equation the following equations for various orders in R: O(1) :  0 = −∆2i a1  O(R) :  0=  −4∆2i a2  (2.96) −  ∆2i a21  (2.97)  O(R2 ) : 0 = −9∆2i a3 − 2∆i a1 λ2 − 4∆2i a1 a2  (2.98)  O(R3 ) : 0 = 8∆i α − 16∆2i a4 − 4∆i a2 λ2 − ∆2i (4a22 + 6a3 a1 )  (2.99)  O(R4 ) : . . . . Solving for the constants {ak } yields that a0 is free, a1,2,3 = 0 and a4 =  α 2∆i .  The freedom in  choosing a0 allows for normalization of the corrected distribution such that integrating it over all R ∈ [0, ∞) equals 1. The corrected probability density function is given by ρˆi (R) = ci  −2λ2 R ∆i  exp  α 4 λ2 2 R + R ∆i 2∆i  (2.100)  where ci is the normalization constant. This distribution is valid for R small. Unfortunately the density function that results is not normalizable as α > 0 and the distribution diverges. This is a result of the local subcritical Hopf bifucation that occurs in the deterministic system for λb = 0. To pick the normalization constant, we expand the quartic term in the exponential as a Taylor 4  series: exp ( αR 2∆i ) ≈ 1 +  αR4 2∆i .  Using this approximation, the distribution is now normalizable. This  expansion of the distribution is justified as we need only consider small values of Ri since we are forcing the oscillators with very weak noise which keeps the amplitude near the steady state and away from the large amplitude limit-cycle. ρˆi (R) = ci  −2λ2 R ∆i  exp  λ2 2 R ∆i  1+  α 4 R , 2∆i  ci = (1 + α∆i /λ22 )−1 .  (2.101)  Taking the integral of this distribution and setting it equal to 1 gives us the solution for ci given in (2.101). We compute the approximate moments for amplitude, incorporating the effect of the lowest order nonlinearity in λ(ri ): E[Ri2 ] = ci −  ∆i 3α∆2i − λ2 λ32  E[Ri−1 ] = ci  −λ2 π 3α + ∆i 8  π∆i −λ32  .  (2.102)  It should be noted that exact distribution for the amplitude could be normalized if we were to include the effect of γ < 0, the highest order nonlinearity in λ(ri ). It would result in exponential decay of the amplitude distribution for values of r = R beyond the amplitude of the stable limitcycle. The effect of γ is negligible for R small. We use the moments (2.102) in our asymptotic solution for the FPE.  30  2.5. Stationary solutions to the reduced Fokker-Planck equation  2.4.2  Reduction of Fokker-Planck equation for phase difference  Now that we have expressions for the moments of the distributions, we reduce the FPE to study the phase difference, rather than the distribution of phases themselves. We integrate (2.87) over all φ1 ∈ [−π, π] to get the following reduced FPE: ∂p ∂T  ∂2p ∂p 1 [∆1 E[R1−1 ]2 + ∆2 E[R2−1 ]2 ] 2 + ∂Φ 2 ∂Φ 2 2 δ ∂ − C4 E[R1−1 ]E[R2−1 ] 2 (cos (Φ)p) ∂Φ  = −ω1 (E[R22 ] − E[R12 ])  (2.103)  where p = p(Φ) is the marginal probability density distribution for the phase difference: p(Φ) = π −π  q(Φ, φ1 ) dφ1 . The resulting FPE is an anisotropic advection-diffusion equation in one variable.  
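Such an equation can also be relaxed numerically to its stationary state, which provides a useful check on the asymptotic solution developed below. The following is a minimal explicit finite-difference sketch with periodic boundary conditions; the advection and diffusion coefficients are collapsed into three constants, written m1, m2 and m3 in anticipation of the form derived next, and the numerical values used here are placeholders (with m2 > m3 so the effective diffusion stays positive). A finite-element solve is used later for Figure 3.4; this sketch is only a simpler stand-in.

/*
 * Explicit finite-difference relaxation of the reduced FPE
 *     dp/dT = -m1 dp/dPhi + d^2[(m2 - m3 cos Phi) p]/dPhi^2
 * to its stationary state on [-pi, pi] with periodic boundary conditions.
 * m1, m2, m3 are placeholder constants (m2 > m3 keeps the diffusion positive).
 */
#include <stdio.h>
#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

#define M 128                                    /* number of grid points */

int main(void)
{
    double m1 = 0.05, m2 = 1.0, m3 = 0.8;        /* placeholder coefficients */
    double h  = 2.0 * M_PI / M;
    double dt = 0.2 * h * h / (m2 + m3);         /* explicit stability restriction */
    double p[M], pn[M], D[M], phi[M];

    for (int j = 0; j < M; j++) {                /* uniform initial density */
        phi[j] = -M_PI + j * h;
        D[j]   = m2 - m3 * cos(phi[j]);          /* effective diffusion coefficient */
        p[j]   = 1.0 / (2.0 * M_PI);
    }
    for (long n = 0; n < 200000; n++) {          /* march until (nearly) stationary */
        for (int j = 0; j < M; j++) {
            int jm = (j + M - 1) % M, jp = (j + 1) % M;      /* periodic wrap */
            double adv  = -m1 * (p[jp] - p[jm]) / (2.0 * h);
            double diff = (D[jp] * p[jp] - 2.0 * D[j] * p[j] + D[jm] * p[jm]) / (h * h);
            pn[j] = p[j] + dt * (adv + diff);
        }
        double mass = 0.0;
        for (int j = 0; j < M; j++) { p[j] = pn[j]; mass += p[j] * h; }
        for (int j = 0; j < M; j++) p[j] /= mass;            /* renormalize */
    }
    for (int j = 0; j < M; j++)
        printf("%g %g\n", phi[j], p[j]);         /* stationary p(Phi) on the grid */
    return 0;
}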
Substituting the moments (2.102) for Ri , i = 1, 2 in (2.103) yields: ∂p ∂2p ∂2 ∂p = m1 + m2 2 − m3 2 (cos (Φ)p). ∂T ∂Φ ∂Φ ∂Φ  (2.104)  where m1 = m2 = m3 =  3α ω1 (∆2 c2 − ∆1 c1 ) + 2 ∆2 c22 − ∆1 c21 , λ2 λ2 3απ 9α2 π 1 (∆2 c2 + ∆22 c22 ) , −λ2 π(c21 + c22 ) + (∆1 c21 + ∆2 c22 ) + 2 4|λ2 | −64λ32 1 1 2c c δC 1 2 2 4  3απ −λ π √ 2 + 8|λ ∆1 ∆2 2|  ∆1 + ∆2  ∆2 ∆1  +  9α2 π −64λ32  ∆1 ∆2 ,  (2.105) (2.106) (2.107)  and {ci , i = 1, 2} are as defined as above.  2.5  Stationary solutions to the reduced Fokker-Planck equation  We solve for the stationary distribution for this FPE (2.104). Integrating (2.104) with ∂p/∂T = 0 yields the following solution for p: p(Φ) = ([m2 − m3 cos(Φ)] exp [f (Φ)])−1 × f (x) = 2m1 arctan  M1 exp[f (Φ)] dΦ + M2 (m2 + m3 ) tan (x/2) m22 − m23  (2.108) (2.109)  where M1 , M2 are constants chosen for normalization and periodicity (2.88, 2.89). In this form the analytical solution does not yield very much information about the qualities of the distribution. 31  2.5. Stationary solutions to the reduced Fokker-Planck equation Instead we look for an asymptotic solution of the form p = p0 + ∆1 p1 + O(∆2i ), based on the small noise approximation (∆i  1). When we plug this asymptotic solution into (2.104), we get the  following order equations for the stationary distribution: O(1) : O(∆1 ) :  ∂2 [(1 − K cos (Φ))p0 ] ∂Φ2 ∂p0 ∂2 ∂2 0 = a1 + [(a + a cos (Φ))p ] + [(a4 + a5 cos (Φ))p0 ] 2 3 1 ∂Φ ∂Φ2 ∂Φ2 0 =  (2.110) (2.111)  where a1 = (z − 1)  ω1 λ2  a4 = πα(1 + z)  1+  3α λ22  ,  1 3 + λ2 8|λ2 |  a2 = −λ2 π, ,  a5 = −  a3 = λ2 πK  3απ (1 + z)K, 8|λ2 |  (2.112) z = ∆2 /∆1 .  We see that the parameter K has reappeared in the calculuations. By inspection, one can derive the leading order solution from the leading order equation, (2.110). Integrating this equation, then enforcing both normalization and periodicity yields √ 1 − K2 . p0 (Φ) = 2π(1 − K cos(Φ))  (2.113)  From this we can notice important qualities about the solution to the FPE. If K = 0, corresponding to the case of no common noise, the probability density for Φ is uniform on [−π, π]. As K is increased, the height of the probability density at Φ = 0 increases as well. As K gets close to 1, the probability distribution becomes very peaked until a singularity occurs at Φ = 0. We show in section 2.5.1 that the solution to (2.110) is the Dirac-δ function centered at Φ = 0 when K = 1. We compute the first order correction to the distribution, p1 , by solving (2.111). After renormalizing the distribution and enforcing periodicity, we obtain the following result: p1 (Φ) =  ω1 (z − 1) arctan  (1+K) tan (Φ/2) √ 1−K 2  λ22 π 2 (1 − K cos (Φ)) (a5 K − a4 ) √ + . 2 2λ2 π 1 − K 2 (1 − K cos (Φ))  −  Φ 2  √ −  1 − K 2 (a4 − a5 cos (Φ)) −2λ2 π 2 (1 − K cos (Φ))2 (2.114)  32  2.5. Stationary solutions to the reduced Fokker-Planck equation  2.5.1  K = 1 and the δ-function solution  When K = 1, the asymptotic solution (2.113, 2.114) above is invalid, since it is non-integrable. When K = 1, then the steady state solution to p solves: 0=  ∂2 [(1 − cos (Φ))p] ∂Φ2  (2.115)  The reason we drop the advection term is because K = 1 implies that δ1 , δ2 = 0 (δC < ∞) which eliminates the coefficient for the advection term. The standard method of integration yields a solution for (2.115) that is singular at Φ = 0. Trying to enforce normalization on such this solution is not possible. An alternative method of solution is required. 
To find such a solution, we use the theory of generalized functions [22], which yields the following theorem: given a function f(\Phi) that is n-times continuously differentiable, then

f(\Phi)\,\delta^{(n)}(\Phi) = \sum_{j=0}^{n} (-1)^{n-j} \binom{n}{j} f^{(n-j)}(0)\,\delta^{(j)}(\Phi),   (2.116)

where f^{(n)}(\Phi) denotes the nth derivative of f with respect to \Phi. Expanding out (2.115), we get the following:

0 = \frac{\partial^2 \bar{p}}{\partial \Phi^2} - \cos(\Phi)\frac{\partial^2 \bar{p}}{\partial \Phi^2} + 2\sin(\Phi)\frac{\partial \bar{p}}{\partial \Phi} + \cos(\Phi)\,\bar{p}.   (2.117)

It is easy to show that p = \delta(\Phi) solves (2.115) by applying the above theorem to the terms in (2.117). This indicates perfect synchronization of the oscillators when K = 1:

p(\Phi) = \delta(\Phi) \quad \text{when } K = 1.   (2.118)

In Chapter 3, we simulate a sample trajectory for the K = 1 case and see that oscillators forced only with common noise eventually evolve to pure synchrony after an initial transient phase difference.

Chapter 3

Numerical Work

In this chapter, we give some details about the numerical method used to compute realizations of the model, and we compare the numerical simulations with our analytical theory.

3.1 Numerical method

To simulate the stochastic differential equations (2.1), we use the explicit order 2.0 weak scheme given by Kloeden and Platen [25]. Here the term 'weak' means that the numerical method generates solutions that converge in the functionals of the system, such as the moments and probability densities. This stands in contrast to a 'strongly' convergent numerical method, in which the solution converges pathwise to a given noisy realization. The order '2.0' means that such quantities converge to second order in the size of the time step. Since we are interested in probability densities for \Phi and in moments of amplitude, using a weakly convergent method is justified. This method is of higher order than the Euler–Maruyama approximation, the standard numerical method for simulating SDEs, which has weak convergence order 1.0.

The order 2.0 weak scheme is as follows. If we denote the deterministic component of the right-hand side of (2.1) by f and let x_i^{(n)} = [x_i^{(n)}, y_i^{(n)}]^T, where the superscript denotes the nth time step, then the stepping scheme is given by:

x_i^{(n+1)} = x_i^{(n)} + \frac{1}{2}\left[f(\Xi^{(n)}) + f(x_i^{(n)})\right]\Delta t^{(n)} + \delta_i\,\Delta\eta_i^{(n)} + \delta_C\,\Delta\eta_C^{(n)},   (3.1)

\Xi^{(n)} = x_i^{(n)} + f(x_i^{(n)})\,\Delta t^{(n)} + \delta_i\,\Delta\eta_i^{(n)} + \delta_C\,\Delta\eta_C^{(n)}.   (3.2)

One can see by a brief inspection that this is somewhat analogous to the explicit trapezoidal rule for numerical integration (also known as Heun's method). The terms \Delta t^{(n)}, \Delta\eta_i^{(n)}, \Delta\eta_C^{(n)} are the discrete increments of time, of the independent noise forcing for the ith oscillator, and of the common noise forcing, respectively, at the nth time step. \Delta\eta_i^{(n)} and \Delta\eta_C^{(n)} are sampled from the normal distribution with variance one and scaled by (\Delta t^{(n)})^{1/2}, following the standard Euler step rules for SDEs.

For all histograms below, we simulate the system (2.1) for a sufficiently long time to obtain a reasonably large data set: t = 5000. We eliminate a sufficiently large portion of the time series (0 ≤ t < 300) to ensure that the system has settled to a statistical equilibrium, and we simulate multiple time series, 5 for each plot, unless stated otherwise. We take \Delta t^{(n)} = 10^{-2} for all n. We use the standard parameters, given in Table 3.1, in all simulations unless stated otherwise. The code used for the simulations is given in Appendix A.
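For the thesis figures, the printed time series were piped into MATLAB to produce the phase-difference histograms of the next section. A minimal C sketch of the equivalent post-processing is given below; it assumes the five-column output of Program A.1 (t, x1, y1, x2, y2 on each line, preceded by two parameter header lines), uses a hypothetical file name, and picks an arbitrary bin count. The phase of each oscillator is computed as φi = arctan(yi/xi) via atan2, and the difference Φ = φ2 − φ1 is wrapped into [−π, π].

/*
 * Minimal post-processing sketch: build a normalized histogram of the phase
 * difference Phi = phi2 - phi1 from the five-column output (t, x1, y1, x2, y2)
 * of Program A.1.  The thesis figures were generated in MATLAB; this is only
 * an equivalent C version.  File name and bin count are arbitrary choices.
 */
#include <stdio.h>
#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

#define NBINS 60

int main(void)
{
    FILE  *fp = fopen("timeseries.dat", "r");    /* hypothetical output file */
    double tcol, xcol[2], ycol[2], w = 2.0 * M_PI / NBINS;
    long   hist[NBINS] = {0}, total = 0;
    char   line[256];

    if (!fp) { perror("timeseries.dat"); return 1; }
    fgets(line, sizeof line, fp);                /* skip the two parameter */
    fgets(line, sizeof line, fp);                /* header lines           */

    while (fscanf(fp, "%lf %lf %lf %lf %lf",
                  &tcol, &xcol[0], &ycol[0], &xcol[1], &ycol[1]) == 5) {
        double Phi = atan2(ycol[1], xcol[1]) - atan2(ycol[0], xcol[0]);
        if (Phi >  M_PI) Phi -= 2.0 * M_PI;      /* wrap to [-pi, pi] */
        if (Phi < -M_PI) Phi += 2.0 * M_PI;
        int b = (int)((Phi + M_PI) / w);
        if (b < 0) b = 0;
        if (b >= NBINS) b = NBINS - 1;
        hist[b]++;
        total++;
    }
    fclose(fp);
    if (total == 0) return 1;

    for (int b = 0; b < NBINS; b++)              /* bin centre, density estimate */
        printf("%g %g\n", -M_PI + (b + 0.5) * w, hist[b] / (total * w));
    return 0;
}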
Parameter   Value
λ2          -3
α           0.2
γ           -0.2
ω0          0.9
ω1          1.2
ε           0.1

Table 3.1: Standard parameters used in all numerical simulations, unless stated otherwise.

3.2 Numerical simulations, comparison to analysis

3.2.1 Moments of amplitude

In section 2.3.1, we derived a probability density function for the amplitudes of the oscillators that retains the correlative effects of the common noise (2.71). In section 2.4.1, we neglected the correlative effects to get the probability density function for the amplitude. These density functions are, to leading order, those of OU processes; the nonlinear corrections perturb them away from the OU density. We test the accuracy of the moments of amplitude from the asymptotic analysis given in section 2.4.1 and also test whether any correlative effects exist for the amplitudes of the original system.

Using the numerical method described above, we simulate the original system (2.1) for two different sets of noise strengths. To test whether the amplitude-dependent phase variation has an effect on the predicted amplitudes, we simulate the system with noise strengths δ1 = δC = 0.008, δ2 = 0.003. To test the effect of correlated noise on the amplitude distribution, we simulate the original system with intrinsic noise strengths equal to zero and δC = 0.01. The results are shown in Figure 3.1. From the plots, we can see that neither amplitude-dependent phase variation nor correlation of the noise affects the distribution of amplitudes. Moreover, the results confirm that the probability density function (2.101), from which the moments of amplitude (2.102) are derived, is accurate, and that the inclusion of the nonlinear effects has negligible effect on the amplitude distribution for sufficiently small noise (i.e., limited probability of transition to large amplitude oscillations on the limit cycle).

Figure 3.1: Normalized histogram data for the amplitudes of the two uncoupled oscillators compared to the marginal amplitude distributions for R1 and R2 given by (2.91) (solid) and (2.101) (dashed). Standard parameters are used. Left (Right): δ1 = 0.008, δ2 = 0.003, δC = 0.008 (δ1 = δ2 = 0, δC = 0.01).

3.2.2 Phase difference

We simulate the original system for δ1 = δ2, as seen in Figure 3.2. From left to right, top to bottom, the strength of the common noise is increased while the intrinsic noise strengths are held equal and constant. Given the equal strength of the intrinsic noises and the theoretical distribution computed in the previous sections, we expect the distribution of phase differences to be peaked at Φ = 0; that is, in-phase oscillation is the most likely state for this system. As the relative strength of the common noise is increased, corresponding to an increase in the parameter K, the distribution becomes more peaked, approaching the δ-function solution as K → 1. In other words, as the common noise is increased relative to the intrinsic noise strength, the state of in-phase oscillation becomes more likely. We expect from the theory that we have derived that in the limit K → 1 the state of the system evolves to exact synchrony. Looking at Figure 3.3 supports this.
After an initial transcient phase difference, the system evolves to a state of pure synchrony over the long time scale until the oscillators become virtually indistinguishable in terms of their behaviour. We also do a comparison of the quality of the fits provided from both the linear distribution for the phases P and that of the asymptotic solution p. We can see that the linear distribution P matches the simulated data quite well. The asymptotic distribution does not match as well, although it is qualitatively correct capturing the increase in probability of observing the oscillators in a synchronized state as the common noise is increased. The asymptotic solution is however able to capture effects that the linearized distribution cannot as we show next. Amplitude-dependent phase evolution The importance of the asymptotic solution is apparent in the case when the parameters of the system yield amplitude dependent phase evolution. This occurs when ω1 = 0 and ∆1 = ∆2 . When this is true, the correction to the leading order asymptotic solution, p1 has a non-zero odd component. This indicates that the distribution of phase differences has a peak centered at a nonzero value. This phase shift cannot be captured using the linear distribution, but the asymptotic distribution has little problem capturing this effect, qualitatively as seen in Figure 3.4. In Figure 3.4, we simulate the system for two different values of ω1 and δC , and plot the linear and asymptotic distributions over the data. In essence, the effect of the amplitude-dependent phase evolution is to, on average, advance or delay the phase of one oscillator relative to the other. The average phase difference is positive (negative) when ω1 (∆2 − ∆1 ) is positive (negative). One would expect this, given the form of ω(ri ). The noisy forcing tends to increase the amplitude of the oscillator. If one is noisier than the other, than there is a non-zero average difference in the amplitude. In the lower right plot of Figure 3.4, which does not fall under the asymptotic regime, 37  3.2. Numerical simulations, comparison to analysis  0.2 0.16 0.14  0.18  0.16  0.1  p¯(Φ)  p¯(Φ)  0.12  0.08  0.14 0.06 0.04  0.12  0.02 0 −3  −2  −1  0 Φ  1  2  0.1 −3  3  −2  −1  0 Φ  1  2  3  −2  −1  0 Φ  1  2  3  0.35 0.8 0.7  0.25  0.6  0.2  0.5  p¯(Φ)  p¯(Φ)  0.3  0.15  0.4 0.3  0.1 0.2 0.05 0 −3  0.1 −2  −1  0 Φ  1  2  3  0 −3  Figure 3.2: Histograms of simulated phase data of the original system, (2.1), plotted against the asymptotic solution for p¯(Φ). Parameters used in this simulation are the standard (ω0 = 0.9, ω1 = 1.2, λ2 = −3, = 0.1, δ1 = δ2 = 0.004). Notice that by our choices of noise strength we have ∆1 = ∆2 , and that we increase the common noise strength from left to right, top to bottom (δC = 0, 0.001, 0.004, 0.01). Grey: Histogram of simulated data. Black, solid (dashed): Asymptotic solution (Linear distribution). All noise parameters are chosen to ensure that the trajectory remains near the stable steady state, and moves onto the stable limit-cycle that exists in this parameter regime with very limited probability.  38  !'&  "'(  !'!(  "'") x(t), y(t)  x(t), y(t)  3.2. Numerical simulations, comparison to analysis  !  !!'!(  !!'& !  "  !"'")  "!  #!  t  $!  %!  &!!  !"'( !""  !#"  !$"  t  !%"  !&"  $""  Figure 3.3: A sample time series for the system given by (2.1) when δ1,2 = 0, δC = 0.005 (i.e: K = 1), with all initial conditions sampled from the normal distribution with mean 0 and standard deviation 0.05. 
Despite the different initial conditions, the trajectories of the system begin to overlap as t gets larger eventually becoming indistinguishable, as seen from the difference in the images on the left and right. Standard parameters are used.  39  0.25  0.5  0.2  0.4  0.15  0.3  0.1  0.2  0.05  0.1  0 −3  −2  −1  0 Φ  1  2  0 −3  3  0.25  0.5  0.2  0.4  0.15  0.3  −2  −1  0 Φ  1  2  3  −2  −1  0 Φ  1  2  3  p¯  p¯(Φ)  p¯(Φ)  p¯(Φ)  3.2. Numerical simulations, comparison to analysis  0.1  0.2  0.05  0.1  0 −3  −2  −1  0 Φ  1  2  3  0 −3  Figure 3.4: Histograms of simulated phase data of the original system, (2.1), plotted against the asymptotic solution for p¯(Φ) (solid) and the distribution derived from the linearization (dashed). Parameters used in this simulation are (ω0 = 0.9, = 0.1, λ2 = −3). The intrinsic noise strengths are δ1 = 0, δ2 = 0.009. Left(Right): δC = 0.003(0.009). Top(Bottom): ω1 = 5(20). The data in the lower right has parameters that are not in the asymptotic regime. The green line is the stationary solution to (2.104), computed using a finite-element method.  40  3.2. Numerical simulations, comparison to analysis we solve for the stationary solution for (2.104) using a finite element method. The generally good correspondence between the finite element solution and the simulated data serves as a confirmation of sorts that our FPE, (2.104) corresponds to the data nicely, even in the limit when the amplitudedependent frequency term has noticeable effect due to both the large value of ω1 and the relatively large difference between noise strengths. Inside vs. outside the knee We also test whether the position of the bifurcation parameter λ2 being inside or outside the range of values where a transition to the stable limit cycle is possible has any effect. In particular, we test the extent to which the small noise approximation can be applied in the presence of increasing noise strengths in regions where there is no danger of moving onto the stable limit cycle (i.e: regions where λ(r) = 0, has no real solutions for r). Whether or not we are inside or outside the knee has no effect on the validity of our results, nor does it change the synchronization behaviour of the system in a significant way, as we can see in Figure 3.5. In this figure, we display the simulated data from two oscillators with equal intrinsic noises and some common noise driving, with one oscillator having the bifurcation parameter inside the knee and the other outside the knee. We also simulate the system outside the knee of the bifurcation diagram to test the limits of our multiple scales approximation without the risk of the system evolving to limit-cycle behaviour by choosing λ2 such that no limit-cycle exists. In Figure 3.6, we choose λ2 = −6, α = 0.2, γ = −0.2 and simulate the system for increasing values of the noise strength. The choose of parameters means the system is just outside the knee for the stable limit-cycle behaviour. For the multiple scales approximation to be valid, we must have δi = O( 2 ), i = 1, 2, C. Moreover, the linear and asymptotic solutions for the phase difference probability density functions require that ∆i be small. From top to bottom, δ1 = δ2 = δC =  2 , 5 2 , 10 2 .  Clearly as the noise strength is increased beyond  the small noise limit, both the linear and asymptotic approximations cease to be useful and the phase distribution approaches uniformity.  41  0.35  0.35  0.3  0.3  0.25  0.25  0.2  0.2  p¯(Φ)  p¯(Φ)  3.2. 
Numerical simulations, comparison to analysis  0.15  0.15  0.1  0.1  0.05  0.05  0 −3  −2  −1  0 Φ  1  2  3  0 −3  −2  −1  0 Φ  1  2  3  Figure 3.5: Histograms of simulated phase data of the original system, (2.1), plotted against the asymptotic solution for p¯(Φ) (solid) and the distribution derived from the linearization. Parameters used in this simulation are the standard (ω0 = 0.9, ω1 = 1.2, = 0.1). The noise strengths are δ1 = δ2 = δC = 0.001. Left (Right): λ2 = −3(−10). This puts the system inside (outside) the knee in the left (right) figure.  42  3.2. Numerical simulations, comparison to analysis  0.35 0.3  p¯(Φ)  0.25 0.2 0.15 0.1 0.05 0 −3  −2  −1  0 Φ  1  2  3  −2  −1  0 Φ  1  2  3  −2  −1  0 Φ  1  2  3  0.35 0.3  p¯(Φ)  0.25 0.2 0.15 0.1 0.05 0 −3  0.35 0.3  p¯(Φ)  0.25 0.2 0.15 0.1 0.05 0 −3  Figure 3.6: Histograms of simulated phase data of the original system, (2.1), plotted against the asymptotic solution for p¯(Φ) (solid) and the distribution derived from the linearization. Parameters used in this simulation are the standard (except for λ2 = −6). Top/Middle/Bottom: δ1 = δ2 = δC = 0.01/0.05/0.1.  43  Chapter 4  Discussion and Conclusions In this thesis, we present a pair of identical uncoupled conditional oscillators with parameters chosen such that the system is near a subcritical Hopf Bifurcation and forced with weak white noise. The noisy forcing is the additive sum of an intrinsic noise source and common noise source, acting identically on both. While this problem has been studied several times in the context of limitcycle oscillators, this is the first study undertaken of noise-induced synchronization of uncoupled conditional oscillators. We applied a stochastic multiple scales method that allowed us to study the effect of the weak noisy forcing on a canonical λ-ω model. Using a separation of time scales, we study the effect of the noise as it effects the envelope functions of deterministic oscillations. We see that small, nearly regular oscillations are driven by CR, similar to [45] and that the amplitude can be modeled as a stochastic process on the slow time scale, while the deterministic oscillations evolve on the fast time scale. The calculation of the drift terms for the envelope equation was carried out in the same spirit as the standard multiple scales method. To compute the effects of the white noise forcing, we compare the diffusion terms of the generators of the original and transformed system using the method given in [2]. Due to the presence of both common and intrinsic noise forcing, we maintain the distinction between the two in our derivation of the envelope equations so as to better distinguish their effects later in the analysis. The scaling of the noise strength coefficients in the envelope SDEs, δi / 2 , δC /  2  reflects the amplification of the noisy forcing on the long time scale.  We derive the probability density function for the amplitudes and phases of the oscillators to leading order, based on the linearization of the stochastic differential equations for which a wellestablished theory exists [16]. We use this probability density function to compute the marginal densities for the phase difference and amplitude, the latter of which we use to study the related moments. The marginal phase distribution reveals that the existence of weak common noise forcing does indeed increase the degree of synchrony as seen by the development of a peak in the probability density at Φ = 0. 
We find that the degree of synchrony for the oscillators can be characterized by the parameter K =  2 1 + δ12 /δC  2 1 + δ22 /δC  −1/2  , which is bounded on a closed interval between  0 and 1. It indicates that the degree of synchrony of small oscillations is largely determined by the ratio of intrinsic to common noise. A perturbation analysis of the stochastic equations is applied in order to confirm the result from 44  Chapter 4. Discussion and Conclusions the linear theory. This asymptotic analysis involves the reduction of a four dimensional system to a two dimensional system using a moment approximation for amplitude components. The moments are calculated using a perturbation expansion method, which allows us to include lower order nonlinear effects in the amplitude evolution, although they do not have a significant effect on the corresponding probability density function for the amplitude when the small noise requirement is considered (Fig. 3.1). The asymptotic solution for the probability density of the phase difference indeed confirms the results from the linear theory. That is, the presence of common noise increases the degree of synchronization of uncoupled conditional oscillators and that the parameter K can be used to characterize the synchronization, to leading order. Moreover, the asymptotic analysis predicts that different intrinsic noise strengths have an effect on the frequency of oscillation near the bifurcation when the system has amplitude-dependent frequency variation. When one oscillator experiences noise forcing that has a greater strength than the other, the phase of that oscillator generally leads or follows that of the other one, depending on the sign of the parameter ω1 , which determines the frequency variation of the amplitude. This is an effect that is not observable in the phase distribution derived from the linearization. The precise degree of proximity to the bifurcation (λ2 , ) and amplitude nonlinearities (α) do not have an effect on the leading order solutions for the phase distribution. They do have an effect on the first order corrections for the asymptotic solution, however. The quantitative differences between the asymptotic and linear solutions for the probability density functions are significant. We test our theory by simulating the original system and comparing to our analytically derived distributions. We see that the addition of a common noise source to these conditional oscillators near a subcritical Hopf Bifurcation does indeed increase the degree of synchrony for this system. The linear theory matches the numerical data quite well, but does not capture the effect caused by the combination of different intrinsic noise strengths and amplitude-dependent phase evolution. It is unlikely that this shift in the peak indicates out-of-phase entrainment of the phases. It is more likely that it indicates the system spends more time near synchrony, but the faster oscillator has a tendency to slip. An interesting observation is that in the case where the amplitude dependent phase variation is of comparable order to the noisy forcing, then the numerically computed solution to the FPE (2.104) matches the data quite well (see Fig. 3.4). We test the limits of the small-noise approximation and found that we must have δi , δC ∼  2  in order for the theory to hold. The small noise limit must hold as it is required for consistency of the multiple scales method (i.e δi , δC = O( 2 )) or the noise dominates the fast time dynamics. 
Moreover, the analysis of both the linear process and the asymptotic limit, require that ∆1 and ∆2 be small, as we eliminate nonlinearities by assuming that Ri2 Increasing the values of δi , δC to values a few times greater than  1 and we find E[Ri2 ] = O(∆i ). 2  invalidates both the linear and  asymptotic theory and we notice that the analytical solutions break down. In this case, the noisy forcing becomes more dominant in the slow time behaviour. These conclusions are similar to those 45  4.1. Future work drawn in [26]. This study may have implications for the study of spike time reliability and synchronization of membrane potential oscillations in type II neurons. The bursting behaviour that is possible in type II neurons that occurs by passing through the Hopf Bifurcation point may be synchronized by the following mechanism. The phase synchronization observed in the conditional oscillators increases the likelihood of a perturbation common to both oscillators pushing the oscillators beyond the manifold separating the excursion dynamics from quiescence. In this case, that manifold is the unstable limit cycle separating the fixed point at the origin and the stable limit cycle.  4.1  Future work  To extend this work, there are more challenges that could be undertaken with some effort. • Determining the effect of common noise on structurally identical conditional oscillators, but with different strengths affecting each. An application for this work would be voltage attenuation along inputs of various lengths and compositions entering neuronal cells. Would the common noise still induce synchronization, or would amplitude dependent frequency variation result in desynchronization? • Determining the precise effect that the mechanism described above has on the synchronization of uncoupled bursting Type II Morris-Lecar neuronal models. Does the hypothesis given in the previous section hold? • Working on improving the asymptotic approximation to the full FPE, if at all possible. Can we increase the quantitative accuracy of the asymptotic approximation?  46  Bibliography [1] Aushra Abouzeid and Bard Ermentrout. Type-ii phase resetting curve is optimal for stochastic synchrony. Phys. Rev. E, 80(1):011911, Jul 2009. [2] Peter H. Baxendale. Stochastic averaging and asymptotic behavior of the stochastic duffing-van der pol equation. Stochastic Processes and their Applications, 113(2):235 – 272, 2004. [3] Roberto Benzi, Alfonso Sutera, and Angelo Vulpiani. The mechanism of stochastic resonance. J. Phys. A, 14:L453 – L457, 1981. [4] Romain Brette and Emmanuel Guigon. Reliability of spike timing is a general property of spiking model neurons. Neural Comp., 15(2):279–308, 2003. [5] Hugh L. Bryant and Jos´e P. Segundo. Spike initiation by transmembrane current: a white-noise analysis. J. Physiol., 260:279–314, 1976. [6] Jonathan Drover, Jonathan Rubin, Jianzhong Su, and Bard Ermentrout. Analysis of a canard mechanism by which excitatory synaptic coupling can synchronize neurons at low firing frequencies. SIAM J. Appl. Math, 65(1):69 – 92, 1983. [7] Matthew G. Earl and Steven H. Strogatz. Synchronization in oscillator networks with delayed coupling: A stability criterion. Phys. Rev. E, 67(3):036204, Mar 2003. [8] Bard Ermentrout and Tae-Wook Ko. Delays and weakly coupled neuronal oscillators. Phil. Trans. R. Soc. A, 367:1097–1115, 2009. [9] G. B. Ermentrout and N. Kopell. Fine structure of neural spiking and synchronization in the presence of conduction delays. Proc. Natl. Acad. Sci. 
USA, 95:1259–1264, 1998. [10] Roberto F. Gal´ an, G. Bard Ermentrout, and Nathaniel N. Urban. Efficient estimation of phase-resetting curves in real neurons and its significance for neural-network modeling. Phys. Rev. Lett., 94(15):158101, Apr 2005. [11] Roberto F. Gal´ an, G. Bard Ermentrout, and Nathaniel N. Urban. Stochastic dynamics of uncoupled neural oscillators: Fokker-planck studies with the finite element method. Phys. Rev. E, 76(056110), 2007. 47  Bibliography [12] Roberto F. Gal´ an, G. Bard Ermentrout, and Nathaniel N. Urban. Optimal time scale for spike-time reliability. J. Neurophysiol., 99(1):277–283, 2008. [13] Roberto F. Gal´ an, Nicolas Fourcaud-Trocm´e, G. Bard Ermentrout, and Nathaniel N. Urban. Correlation-induced synchronization of oscillations in olfactory bulb neurons. Jour. Neurosci., 26(14):3646–3655, 2006. [14] Hu Gang, T. Ditzinger, C. Z. Ning, and H. Haken. Stochastic resonance without external periodic force. Phys. Rev. Lett., 71(6):807–810, Aug 1993. [15] D. Garc´ıa-Alvarez, A. Bahraminasab, A. Stefanovska, and P. V. E. McClintock. Competition between noise and coupling in the induction of synchronisation. Europhys. Let., 88(30005), 2009. [16] Crispin. W. Gardiner. Handbook of Stochastic Processes. Springer-Verlag, 1983. [17] Daniel T. Gillespie. Exact stochastic simulation of coupled chemical reactions. J. Phys. Chem., 81(25):2340 – 2361, 1977. [18] Denis S. Goldobin and Arkady Pikovsky. Synchronization and desynchronization of selfsustained oscillators by common noise. Phys. Rev. E, 71(045201(R)), 2005. [19] Denis S. Goldobin and Arkady Pikovsky. Synchronization of self-sustained oscillators by common white noise. Physica A, 351:126–132, 2005. [20] A. L. Hodgkin and A. F. Huxley. A quantitative description of membrane current and its application to conduction and excitation in nerve. J. Physio., 117:500–544, 1952. [21] Mark. H. Holmes. Introduction to Perturbation Methods. Springer-Verlag, 1995. [22] Ram. P. Kanwal. Generalized Functions: Theory and Technique. Birkh¨auser, 1993. [23] James P. Keener and James Sneyd. Mathematical Physiology. Springer: New York, 2004. [24] J. Kevorkian and J. D. Cole. Perturbation Methods in Applied Mathematics. Springer-Verlag, 1981. [25] Peter E. Kloeden and Eckhard Platen. Numerical Solution of Stochastic Differential Equations. Springer-Verlag, 1992. [26] M. M. Klosek and R. Kuske. Multiscale analysis of stochastic delay differential equations. SIAM J. Multiscale Modeling and Simulation, 3:706–729, 2005. [27] Yoskiki Kuramoto. Chemical Oscillations, Waves, and Turbulence. Dover, 2003. 48  Bibliography [28] R. E. Lee DeVille, Eric Vanden-Eijnden, and Cyrill B. Muratov. Two distinct mechanisms of coherence in randomly perturbed dynamical systems. Phys. Rev. E, 72(3):031105, Sep 2005. [29] Cheng Ly and G. Bard Ermentrout. Synchronization dynamics of two coupled neural oscillators receiving shared and unshared noisy stimuli. J. Comput. Neurosci, 26:425–443, 2009. [30] Zachary F. Mainen and Terrence J. Sejnowski. Reliability of spike timing in neocortical neurons. Science, 268:1503–1506, 1995. [31] Catherine Morris and Harold Lecar. Voltage oscillations in the barnacle giant muscle fiber. Biophys. J., 35:193–213, 1981. [32] Hiroya Nakao. Asymptotic power law of moments in a random multiplicative process with weak additive noise. Phys. Rev. E, 58(2):1591–1600, Aug 1998. [33] Hiroya Nakao, Kensuke Arai, and Yoji Kawamura. Noise-induced synchronization and clustering in ensembles of uncoupled limit-cycle oscillators. Phys. Rev. 
Lett., 98(184101), 2007. [34] John C. Neu. Coupled chemical oscillators. SIAM J. Appl. Math, 37(2):307–315, 1979. [35] C. Nicolis. Long-term climatic transitions and stochastic resonance. J. Stat. Phys., 70(1,2):3– 14, 1993. [36] Bernt Øksendal. Stochastic Differential Equations. Springer-Verlag, 2003. [37] Arkady Pikovsky, Michael Rosenblum, and J¨ urgen Kurths. Synchronization: a universal concept in nonlinear sciences. Cambridge University Press, 2001. [38] Arkady S. Pikovsky and J¨ urgen Kurths. Coherence resonance in a noise-driven excitable system. Phys. Rev. Lett., 78(5):775 – 778, 1997. [39] John Rinzel and G. Bard Ermentrout. Analysis of neural excitability and oscillations. In Methods in neuronal modeling: From synapses to networks, pages 135–169. MIT Press, 1989. [40] C. Rouvas-Nicolis and G. Nicolis. Stochastic resonance. Scholarpedia, 2(11):1474, 2007. [41] T. Tateno and H. P. C. Robinson. Rate coding and spike-time variability in cortical neurons with two types of threshold dynamics. Jour. Neurophysio.i., 95:2650–2663, 2006. [42] Jun-nosuke Teramae, Hiroya Nakao, and G. Bard Ermentrout. Stochastic phase reduction for a general class of noisy limit cycle oscillators. Phys. Rev. Lett., 102(19):194102, May 2009. [43] Jun-nosuke Teramae and Dan Tanaka. Robustness of the noise-induced phase synchronization in a general class of limit cycle oscillators. Phys. Rev. Let., 93(20):204103, November 2004. 49  [44] Kazuyuki Yoshimura and Kenichi Arai. Phase reduction of stochastic limit cycle oscillators. Phys. Rev. Lett., 101(15):154101, Oct 2008. [45] Na Yu, R. Kuske, and Y. X. Li. Stochastic phase dynamics: Multiscale behavior and coherence measures. Phys. Rev. E, 73(5):056205, May 2006.  50  Appendix A  Code In this appendix, we give some of the code used to simulate the original system of oscillators. Data simulated from this code was piped into a data file and manipulated using MATLAB to generate the figures in this thesis. The command gasdev(arg) generates a random variable using arg as a seed. Program A.1 C program that simulates a single or multiple time series of the original system of equations (2.1).  /* * *  This C file when compiled will simulate the two canonical conditional  *  experiencing correlated noise near a subcrit. HB.  *  Created by William Thompson on 29/10/09.  *  Copyright 2009 University of British Columbia. All rights reserved.  * */ #include <stdio.h> #include <math.h> #include <time.h> #include <stdlib.h> ..... /* RHS1 will calculate the RHS of the DE for the x variable*/ double RHS1(double t, double y[2], double p[6]) { //p is the parameter vector p = (lambda2,alpha,gamma,omega0,omega1,epsilon) double ans; 51  Appendix A. Code double rsq; rsq = y[0]*y[0] + y[1]*y[1]; ans = (p[5]*p[5]*p[0] + p[1]*rsq + p[2]*rsq*rsq)*y[0] - (p[3] + p[4]*rsq)*y[1]; return(ans); } /* RHS2 will calculate the RHS of the DE for the y variable*/ double RHS2(double t, double y[2], double p[6]) { //p is the parameter vector p = (lambda2,alpha,gamma,omega0,omega1,epsilon) double ans; double rsq; rsq = y[0]*y[0] + y[1]*y[1]; ans = (p[5]*p[5]*p[0] + p[1]*rsq + p[2]*rsq*rsq)*y[1] + (p[3] + p[4]*rsq)*y[0]; return(ans); }  main() { double t0,tf,dt,dtrt,t,temp[2],k1[2],k2[2],k3[2],k4[2]; double x[2][2]; double p[6]; double deltaI[2]; double deltaC; int n; int idum = 1; int numRuns; double tRec; //time at which to start recording the data init_genrand(time(NULL)); //seed the random number generator with clock. 
    t0 = 0; //initial time
    tf = 2000; //final time
    dt = 0.01; //time step
    dtrt = sqrt(dt);
    n = ceil(tf/dt) + 1; //number of time steps required
    numRuns = 1; //number of time series to simulate
    tRec = 300; //time to start recording data to avoid transient effects

    p[0] = -3; //lambda_2
    p[1] = 0.2; //alpha
    p[2] = -0.2; //gamma
    p[3] = 0.9; //omega_0
    p[4] = 1.2; //omega_1
    p[5] = 0.1; //epsilon

    deltaI[0] = 0.0000; //delta_1
    deltaI[1] = 0.0000; //delta_2
    deltaC = 0.001; //delta_C

    //Print two header lines of parameters before the time series data
    printf("%e %e %e %e %e \n",p[0],p[1],p[4],p[5],0.0);
    printf("%e %e %e %e %e \n",deltaI[0],deltaI[1],deltaC,0.0,0.0);

    //Generate the common noise signal, unscaled.
    double *noiseC;
    double **noiseI;

    //Allocate memory (n+1 entries, since indices 1..n are used below)
    noiseC = (double*) malloc((n+1)*sizeof(double));
    noiseI = malloc(2*sizeof(double*));
    noiseI[0] = (double*) malloc((n+1)*sizeof(double));
    noiseI[1] = (double*) malloc((n+1)*sizeof(double));

    int Z;
    int k;

    for(Z=1;Z<=numRuns;Z++)
    {
        for (k=1;k<=n;k++)
        {
            noiseC[k] = gasdev(&idum);
            noiseI[0][k] = gasdev(&idum);
            noiseI[1][k] = gasdev(&idum);
        }

        //Simulate the uncoupled oscillators
        t = 0;
        x[0][0] = 0.05*gasdev(&idum); //initial condition on x1
        x[0][1] = 0.05*gasdev(&idum); //initial condition on y1
        x[1][0] = 0.05*gasdev(&idum); //initial condition on x2
        x[1][1] = 0.05*gasdev(&idum); //initial condition on y2

        if(tRec == 0)
        {
            printf("%e %e %e %e %e \n",t,x[0][0],x[0][1],x[1][0],x[1][1]);
        }

        int s;
        double gam[2];

        for(k=2;k<=n;k++) //iterate over all time steps
        {
            for(s=0;s<=1;s++) // iterate over the number of oscillators
            {
                //Predictor step (3.2); noise enters the x component only
                gam[0] = x[s][0] + dt*RHS1(t,x[s],p);
                gam[0] = gam[0] + deltaI[s]*dtrt*noiseI[s][k] + deltaC*dtrt*noiseC[k];
                gam[1] = x[s][1] + dt*RHS2(t,x[s],p);

                //Apply 2nd order explicit weak scheme from Kloeden and Platen
                temp[0] = x[s][0] + dt/2*(RHS1(t,gam,p) + RHS1(t,x[s],p));
                temp[0] = temp[0] + deltaI[s]*dtrt*noiseI[s][k] + deltaC*dtrt*noiseC[k];
                temp[1] = x[s][1] + dt/2*(RHS2(t,gam,p) + RHS2(t,x[s],p));

                x[s][0] = temp[0];
                x[s][1] = temp[1];
            }
            t = t + dt;
            if(t >= tRec)
            {
                printf("%e %e %e %e %e \n",t,x[0][0],x[0][1],x[1][0],x[1][1]);
            }
        }
    }

    free(noiseC);
    free(noiseI[0]);
    free(noiseI[1]);
    free(noiseI);
}
