
UBC Theses and Dissertations



Deterministic and stochastic modeling of the Min system for cell division Jamieson-Lane, Alastair 2019




Full Text


Deterministic and Stochastic Modeling of the Min System for Cell Division

by

Alastair Jamieson-Lane

MSc., University of British Columbia, 2014
BSc. (Hons.), University of Canterbury, 2012

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF Doctor of Philosophy in THE FACULTY OF GRADUATE AND POSTDOCTORAL STUDIES (Mathematics)

The University of British Columbia (Vancouver)

March 2019

© Alastair Jamieson-Lane, 2019

The following individuals certify that they have read, and recommend to the Faculty of Graduate and Postdoctoral Studies for acceptance, the thesis entitled:

Deterministic and Stochastic Modeling of the Min System for Cell Division

submitted by Alastair Jamieson-Lane in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Mathematics.

Examining Committee:

Eric Cytrynbaum, Mathematics (Supervisor)
Leah Keshet, Mathematics (Supervisory Committee Member)
Paul Tupper, SFU Mathematics Department (External Examiner)
Wayne Nagata, Mathematics (University Examiner)
Mark Van Raamsdonk, Physics (University Examiner)

Additional Supervisory Committee Members:

Priscilla (Cindy) E. Greenwood, Department of Mathematics (Supervisory Committee Member)
Leah Edelstein-Keshet, Department of Mathematics (Supervisory Committee Member)

Abstract

The Min system acts as a key regulator for cell division in E. coli, repressing cell division at either end of the cell via pole-to-pole oscillation. Recent in vitro experiments have demonstrated the Min system's tendency to create "burst" patterning under suitable concentration conditions, whereby high-concentration "bursts" of Min proteins nucleate from an approximately homogeneous background, before "freezing" and fading away. I start this thesis by giving a quick review of some of the complexities involved in modeling chemical reactions via partial differential equations, particularly on 2D surfaces such as the cell membrane.
I consider a number of toy models, demonstrating discrepancies between classical reaction-diffusion representations of chemical systems and the more foundational particle systems. These discrepancies are in most cases minor, in some cases extreme. A simplified Min model is developed, demonstrating how particle models of Min dynamics can lead to burst formation, even in cases where differential equations predict a uniform solution.

Next, I take a recently developed and parameterized ODE model of the Min system based on experimental data from Ivanov et al., and extend the model to consider finite space, both on the membrane and in the volume of the cell. This extended model allows me to map out a bifurcation diagram of the system's behavior for concentrations both higher and lower than those used in the original data fitting, and to explore the conditions under which burst nucleation is predicted.

Finally, I show that white noise can allow a spatially distributed reaction-diffusion system to escape from a neutrally stable steady state at zero, passing to some fixed value u(0, T) > 0 in finite time. The most probable path to such a state leads to a narrow sharp spike reminiscent of experimental observations. Dynamics of this kind are typical whenever a system loses stability by passing slowly through a saddle-node bifurcation.

Lay Summary

The Min system is a collection of proteins critical to cell division.
In this thesis, I consider what sorts of mathematical models predict the "burst" behavior recently observed for the Min system, and what predictions previously existing models make regarding the system's behavior under a variety of experimental conditions. By better understanding these mathematical models of the Min system, we can in turn better understand the chemical reactions that regulate cell division.

In addition, I discuss and explore the various ways that chemical reaction rates in two dimensions (such as on the wall of a cell) do not behave like chemical reaction rates in three dimensions (such as chemicals reacting in a glass). This difference is important for correctly describing how cell chemistry works, but is often ignored or forgotten in mathematical models.

Preface

This dissertation is based upon original, independent work carried out by the author in conversation with the supervisory committee. In particular, the focus on the Min system was suggested by research supervisor Eric Cytrynbaum, but the focus upon and analytic work surrounding burst formation are the work of the author. None of the text of the dissertation is taken directly from previously published articles.

Chapter 2 contains a mixture of original results and literature review, each clearly marked within the chapter. Chapter 3 includes original results, and builds on models previously developed and parameterized by William Carlquist [71]. The results of Chapter 4 are (to the author's knowledge) entirely new.

Figures 1.2 and 1.4 (panes A, B and C) are borrowed, within the described terms of fair use, from their respective sources. Full details are given in each figure caption. Figures 1.6 and 4.1 are based on data acquired via private correspondence with Anthony Vecchiarelli, and are used with permission; data as described in [64].

The various script files contained in the supplementary materials were constructed entirely by the author.

Table of Contents

Abstract
Lay Summary
Preface
Table of Contents
List of Tables
List of Figures
List of Supplementary Materials
Acknowledgments

1 Introduction
  1.1 The Min system
  1.2 Structure of thesis

2 Reviewing deterministic and stochastic representations of chemistry
  2.1 Introduction
  2.2 Representations of Chemistry
    2.2.1 Reaction-Diffusion PDEs
    2.2.2 Brownian Dynamics
    2.2.3 Markov Chain Representation
  2.3 Mapping between Brownian dynamics and PDE reaction diffusion
    2.3.1 Concentration as probability density
    2.3.2 Smoluchowski Theory
  2.4 Some Examples of Model Mismatch
    2.4.1 Particle Anti-Particle Annihilation
    2.4.2 Variable Diffusion Rates Between Reacting Particles
    2.4.3 Degeneracies Produced in a Simplified Duplication-Annihilation Min Model
    2.4.4 When Can We Ignore All This?
  2.5 Conclusions
3 Some Direct PDE Simulations of the Min System
  3.1 Introduction
  3.2 The Model
    3.2.1 The Mathematical Model
  3.3 Exploring Parameter Space with Continuation Methods
    3.3.1 Newton's Method and Continuation
    3.3.2 Detecting Spatial-Temporal Stability and Patterning
  3.4 Results
    3.4.1 Shallow Solution
    3.4.2 Medium Depth Solution
    3.4.3 Deep Solution
    3.4.4 PDE Results
  3.5 Wave Pinning
    3.5.1 Introduction to Wave Pinning
    3.5.2 Complications of Wave Pinning Theory for Multi-Species Systems
    3.5.3 Evidence of Wave Pinning in Carlquist's Model
  3.6 Conclusion

4 Stochastic Forcing Causes Spikes To Evolve From Homogeneous Neutrally Stable Steady State
  4.1 Introduction
    4.1.1 Blow-Up in Non-Linear Reaction-Diffusion Equations
  4.2 Computational Exploration
    4.2.1 Toy System Simulations
    4.2.2 Mori's System
  4.3 A Particular Form of Petrasek Model
  4.4 Introducing Some Tools
    4.4.1 Large Deviation Theory
    4.4.2 Functional Derivatives and Their Uses
  4.5 "Escape from Zero" Time for Non-Linear Equations
    4.5.1 Spaceless Case
    4.5.2 Spaceless Equation With an Energy Barrier
    4.5.3 The Spatially Distributed Problem
    4.5.4 Numeric Results for Spatially Distributed System
    4.5.5 The Difficulties in Two Dimensions
  4.6 Comparison to "Approach to Infinity"
  4.7 Conclusion

5 Concluding Remarks
  5.0.1 Future Work

Bibliography

List of Tables

Table 3.1: Parameter values and meanings from Carlquist's model. ω_i, κ, γ provided via private correspondence with Carlquist, and based upon numerical fits of Ivanov et al.'s data [29]. L0, O0 as described in Ivanov's experimental methodology. Despite experimental uncertainty, we retain this level of significant figures so as to better compare our results with those of Carlquist.

List of Figures

Figure 1.1: Cartoon illustration of the results of incorrect placement of the FtsZ ring.
Figure 1.2: Pole to pole oscillation of Min proteins.
Figure 1.3: Cartoon illustration of pole-to-pole oscillation of Min proteins.
Figure 1.4: (A,B) Anomalously long, and (C) Y shaped cells. (D) Schematic of in vitro experiments.
Figure 1.5: Different patterns observed for various MinD/E concentrations by Vecchiarelli et al.
Figure 1.6: Burst patterning, as observed by Vecchiarelli et al.
Figure 2.1: Graphical representation of the Markov chain associated with A ↔ ∅.
Figure 2.2: Illustration comparing Brownian dynamics, Markov chain and PDE representations of chemistry.
Figure 2.3: Radial concentration profile around an absorbing trap, at various times.
Figure 2.4: Particle-antiparticle annihilation simulations.
Figure 2.5: Finding empty "clearings" of radius R − ρ (right) is equivalent to finding uncovered ground in the coverage problem with radius R (left).
Figure 2.6: Typical results of Smoldyn simulations for a "Bonny like" trapping/duplication reaction.
Figure 3.1: Basic dynamics of Carlquist's Min model.
Figure 3.2: For large bulk, the solution concentration is largely independent of membrane concentration.
Figure 3.3: Cartoon guide to the Coco continuation package.
Figure 3.4: Cartoon guide to Local Perturbation Analysis.
Figure 3.5: "Slice" view taken from figure 3.6, showing the transition between states A and B.
Figure 3.6: Bifurcation diagram for h = 1 µm.
Figure 3.7: Bifurcation diagram for medium h = 20 µm.
Figure 3.8: Bifurcation diagram for h = 200 µm.
Figure 3.9: PDE simulations in bistable regime of Carlquist model.
Figure 3.10: PDE results demonstrating spike formation as predicted by Local Perturbation Analysis.
Figure 3.11: PDE results demonstrating how heterogeneity is a function of solution depth h.
Figure 3.12: PDE results for the Carlquist equations using parameter values closely matching Ivanov's experiments.
Figure 3.13: Spike formation in Carlquist system for h = 200 µm.
Figure 3.14: PDE simulation demonstrating transient spatio-temporal patterning for Carlquist system.
Figure 3.15: Introduction to wave pinning.
Figure 4.1: Burst patterning, as observed by Vecchiarelli et al.
Figure 4.2: Blow up of nonlinear equations of form u_t = u².
Figure 4.3: Simulation results for u_t = Δu + u²(2 − u) + ξ.
Figure 4.4: Simulations of the deterministic system u_t = DΔu + u²(u − 2), with white noise initial conditions.
Figure 4.5: Simulation of Mori equation starting from neutral steady state.
Figure 4.6: Nullclines of our simplified Petrasek model.
Figure 4.7: Particle and PDE simulations of simplified Petrasek model.
Figure 4.8: Probability density visualization for Large Deviation theory, as viewed in both u and ξ coordinates.
Figure 4.9: Simple illustration of Laplace's principle.
Figure 4.10: Some optimal φ found for particular boundary conditions using equation 4.48.
Figure 4.11: Action vs T. This graph constructed via numeric integration of equation 4.50; more direct methods of evaluating S(φ) run into numerical instability.
Figure 4.12: Passage time from 0 to 1 vs ε for u_t = u² + εξ; comparison of analytics and simulation times.
Figure 4.13: Schematic view of my iteration scheme, as used to solve equations 4.5.5.
Figure 4.14: I calculate the action cost S_T(φ) to reach u(0, T) = U_f by time t = T numerically.
Figure 4.15: (Upper) final u profile for U_f = 1, T varying. (Lower) final u profiles for T = 110, U_f varying.
Figure 4.16: ξ(x, 0) for a variety of T values.
Figure 4.17: A comparison of my "escape from zero" results with Galaktionov and Posashkov's "approach to infinity" results.

List of Supplementary Materials

In this thesis I have made use of a number of Matlab scripts and Smoldyn configuration files. All programs directly cited in this thesis can be found in the Supplementary Materials submitted to the University of British Columbia along with this thesis, as well as online. I provide both references as a convenience to the reader. Both these locations include:

• A markdown readme file containing a simple guide to the other supplementary materials.
• toy bonny D.txt, toy bonny E.txt and toy bonny DE.txt: a collection of Smoldyn configuration files used to simulate the simplified Bonny model in chapter 2.
• BuildBifurcationCarlquistModel.m and DrawBifurcationPlot.m: a pair of Matlab script files used to construct the bifurcation diagrams found in chapter 3.
• dudt.m, dudtJacob.m, dudt dp.m and dudt du.m: a collection of helper functions necessary to invoke BuildBifurcationCarlquistModel.m.
• CartoonBurst.m, CartoonBurst IC noise.m and MoriBurst.m: a collection of minor Matlab simulations used to observe the growth of spots for neutrally stable systems.
• my laplacian.m: a helper script for the above, designed to approximate diffusion numerically.
• PetrasekBurst.m: a PDE simulation of the simplified Petrasek model. PetrasekParticles.txt gives a Smoldyn based particle simulation of the simplified Petrasek model.
• BackForwardSolveAdaptive.m: a script designed to solve a particular PDE and thus find the "most probable" spike resulting from a neutrally stable system.
• IterateBFsolve.m: a simple script designed to run through BackForwardSolveAdaptive.m multiple times in a row for a variety of parameter values.

Acknowledgments

I would like to acknowledge my thesis supervisor, Dr. Eric Cytrynbaum, who acted as a guide and sounding board throughout this project.
I would like to acknowledge my thesis committee members Dr. Priscilla (Cindy) Greenwood and Dr. Leah Edelstein-Keshet, who provided feedback, encouragement, and advice on which professors to talk to at which time (along with invaluable advice navigating the university system, and mathematics itself). Professors Dan Coombes, Michael Ward and Brian Wetton provided advice with respect to modeling, analysis and numerics, and one of these days I'm sure I'll come up with a good comeback to Dan's wit and snark. I would like to thank Professors Christoph Hauert and Alex James for guiding me through my Masters project and undergraduate Honours project (respectively), for giving me the skills to tackle this much larger work, and (eventually) to start asking my own research questions. Finally (amongst professors), I would like to thank Dr. Anthony Vecchiarelli (and his collaborators) for providing the beautiful Min data upon which this entire project is based.

In addition to all these professors, I would also like to show my appreciation for my fellow graduate students at the Institute of Applied Mathematics, especially Donají Herrera and Sanna Tyrväinen, who have had my back more times than I can count. From Green College, I owe much appreciation to Stephanie Dreier, Kent Williams-King, Darcy Davis-Case and Sebastián Medrano. All of you have kept me sane over these past few years. I owe you more joy than I can reasonably express in these scanty trails of ink.

And finally... I would like to acknowledge that the majority of my studies took place on the grounds currently occupied by the University of British Columbia, the unceded and ancestral lands of the Musqueam people. While I greatly appreciate the university, and all it has done for me, I still find it hard to parse the fact that I study on these ancestral lands, and yet when I look around, there are practically zero First Nations students on this campus; an expression of this vast...
inequality, in some sense a locking out. Or perhaps there is simply nothing here that is considered of true value.

Chapter 1

Introduction

Biologically, this thesis is dedicated to understanding the Min system: a biochemical system involved in cell division. Mathematically, we are interested in the general process of "spot" formation, and the numerous mechanisms that lead spikes to nucleate from an initially homogeneous steady state.

1.1 The Min system

Cell division in E. coli is driven by the FtsZ ring [4], a ring of proteins that bind to the cell membrane and contract inwards, eventually leading to cell division [56]. For the sake of ensuring that both daughter cells contain a full set of DNA, it is critical that this ring forms in the correct location. At a human scale, locating the middle of something is a trivial problem: eyes, measuring sticks, and balancing scales can all be used to locate "the middle" to varying degrees of accuracy. But all of these approaches depend upon the complex machinery of the eye and brain, neither of which are available to a solitary E. coli cell. E. coli must instead use biochemical means to detect its midline.

Correct FtsZ ring placement in E. coli is achieved through two mechanisms: nucleoid occlusion [75] and the Min system (so named because of the so-called "mini cells" that result from its failure) [12].

Figure 1.1: A) E. coli. B) The contracting FtsZ ring pinches off the middle portion of an E. coli cell. C) Incorrect placement of the FtsZ ring can lead to daughter cells containing either two or zero copies of the cell's DNA. D) Correct placement of the FtsZ ring leads to viable daughter cells.

The first system, nucleoid occlusion, prevents FtsZ from accumulating in the vicinity of the two newly formed nucleoids, thereby preventing division through the newly formed bundles of DNA as they begin to separate. The second system, the focus of this thesis, is the Min system.
The Min system consists of three proteins: MinC, MinD and MinE (see figure 1.3, upper left).¹

MinD is a protein that can both diffuse throughout the cell cytosol, and bind membrane as a dimer; that is, a pair of MinD proteins folded together are the appropriate shape for membrane binding. Membrane bound MinD dimers catalyze the binding of MinE to the membrane, and the MinE in turn breaks the MinD dimer into monomers, leading to the release of MinD from the membrane. Taken together, this "chase and release" process is observed to cause pole-to-pole oscillation. See figure 1.2 for experimental observations of this behavior, along with figure 1.3 (main panel) for a schematic view of this process.

¹ While in this thesis I primarily talk of Min proteins and cell division in E. coli, it is worth noting that this is simply the best studied case. Min proteins also regulate cell division in other cells, for example A. tumefaciens [16].

Figure 1.2: Pole to pole oscillation of Min proteins. Images taken from figure 1, [46] (Copyright 1999, National Academy of Science, reproduced within the guidelines of PNAS non-commercial reuse policy).

Figure 1.3: (Main panel) 1. MinD proteins bind to the membrane far away from membrane bound MinE proteins. MinE proteins release to the cytosol. 2. MinE proteins bind to membrane bound MinD. 3. MinE break up MinD dimers and release them to the cytosol. The cycle begins anew. (Upper left) Our three proteins, MinC, MinD and MinE. Note that MinC is not shown in the cycle diagram, so as to avoid clutter. (Lower right) Averaged over time, Min proteins have high concentration at either cell pole, and lower concentration around the midline of the cell, where cell division apparatus can assemble.

MinC provides the final ingredient required for this oscillatory pattern to be useful in cell division. MinC is a protein known to inhibit cell division apparatus [12] by preventing the formation of the FtsZ ring [28].
MinC is also known to bind to membrane bound MinD; as a result, it acts as a "passenger" in the MinD/E oscillations previously described. Averaged over time, this leads to high MinC concentration at either end of the cell, with low concentration in the middle (figure 1.3, lower right). As a result, cell division apparatus are prevented from assembling anywhere but the midline of the cell.

While the overall behavior of the Min system is understood (MinD binds to membranes, MinE removes MinD from membranes), the exact chemical reactions involved are the subject of some debate. Numerous possible reaction schemes [10, 45, 51, 64] have been proposed, with none thus far being the "definitive" set of reactions required to perfectly explain all observed behavior. These papers represent their proposed reaction schemes in some cases using spatial-temporal partial differential equations [45], in some cases particle based simulations [62], and in some cases both [6]. PDE based representations were generally selected based on their analytic tractability, while particle simulations were used primarily by researchers interested in studying stochastic behavior and the effects of low particle number. The total copy number of Min proteins in a dividing E. coli cell is estimated to be of the order of 1000 particles [6]. This places us well within the range of computationally feasible simulations, and also casts doubt on the "smooth concentration" assumptions of deterministic PDEs.

A variety of experiments have been performed in order to narrow down and precisely understand the chemical reactions governing the Min system. In vivo experiments include observation of Min dynamics in anomalously long cells [46], and in Y shaped E. coli, where Min must contend with the existence of more than two poles [62] (see figure 1.4).

While fascinating and useful, these in vivo studies tend to be difficult to interpret and make use of quantitatively.
Because cells are small (µm wide; depending on the apparatus, only tens of pixels across), it can be hard to see details, especially in cases when cells overlap.

Figure 1.4: (A,B) Anomalously long, and (C) Y shaped cells. Images taken from [46] (Copyright 1999, National Academy of Science, reproduced within the guidelines of PNAS non-commercial reuse policy) and [62] (Copyright 2008, Journal of Bacteriology, reproduced within the guidelines of ASM's reuse in advanced degree policy). (D) Schematic of in vitro experiments, showing a well mixed solution of MinD/E being flowed over a large artificial membrane.

Questions about the importance of cell shape can also complicate modeling. The exact concentrations of MinD and MinE can not be readily controlled, and the possible existence of some additional structure or chemical in the cell leads to questions as to whether the behavior is the result of just MinD and MinE interaction. The three dimensional geometry of the cell also makes interpretation of data more difficult.

For these reasons, a number of in vitro experiments have been constructed [36, 51, 64]. These experiments make use of large (mm wide), flat, artificial membranes, topped by an artificial cytosol. In these in vitro experiments it is possible to control both geometry [51] and concentration [64] directly, and all system behaviors can be attributed to the dynamics of MinD and MinE without fear of some "hidden actor" interacting with the system.

In vitro experiments have been observed to produce a variety of patterns, including "spiral waves", "uniform oscillations", and "burst patterning" (see figure 1.5).
Of particular interest are burst patterns, whereby initially empty sections of membrane nucleate a narrow region of high concentration, which then spreads out to some limited radius, and then freezes. Frozen bursts fade from the membrane as new bursts form, and the cycle repeats (see figure 1.6). Throughout this thesis, my primary interest will be the nucleation stage of this cycle, and the various mechanisms (both physical and mathematical) whereby a uniform initial condition can evolve to a sharply heterogeneous "speckled" state.

1.2 Structure of thesis

• Various reaction schemes and modeling frameworks have been used to study the Min system, but it is not always clear which observations are the result of the chemical reactions assumed, and which are contingent on the assumptions of the modeling paradigm. In chapter 2 of this thesis I review some of the numerous simplifications and assumptions inherent in differential equation models of chemical systems. I discuss and demonstrate places where these assumptions break down, particularly when chemical reactants must find one another via diffusion in 2D (such as in the membrane of a cell). I end the chapter by presenting a simplified Min model which exhibits burst formation when modeled using particles, but tends towards a homogeneous state when modeled deterministically.

• In chapter 3, I explore a model for the Min system due to William Carlquist [71], and extend it to consider space, both explicitly along the surface of the cell membrane, and implicitly by considering the "size" of the cytosolic bulk.

Figure 1.5: Different patterns observed for various MinD/E concentrations by Vecchiarelli et al. Images taken from figure 1, [64]. Scale bar 5 µm. From left to right we observe: 'amoebas', waves, spirals, 'mushrooming' and 'bursts'. Cytosolic concentration of Min proteins decreases as we pass from left to right. Movies of each of these phenomena are provided in [64].
Figure reused in line with PNAS's non-commercial use policy, and with permission of the authors.

Figure 1.6: Burst patterning, as observed by Vecchiarelli et al. (frames at t = 0, 15, 30, 45, 60, 75, 90, 105). At t = 0, the membrane has an almost homogeneous distribution of particles. This initial condition evolves into bright, high concentration 'bursts' of membrane bound MinD, which then freeze in place, and fade. While one generation of bursts fades, the next generation of bursts appears (bottom row). Pictures from data acquired via private correspondence, experiments as described in [64].

• In chapter 4 I study the nucleation of spikes from a neutrally stable steady state for a "toy model" consisting of the reaction diffusion equation u_t = u_xx + u² + ξ. This spike formation reflects the nucleation process observed in Min experiments, and is typical of a large class of chemical systems passing through saddle-node bifurcations, including a number of Min models.

• Chapter 5 summarizes the results of the previous chapters, and briefly discusses future research to be done.

From a mathematical perspective, each chapter of this thesis can be read largely independently of the others: the mathematical tools and modeling paradigm used in one chapter depend only lightly on the mathematical machinery discussed in other chapters. From the point of view of phenomenology, all chapters discuss in some sense the same question: how can nucleation occur? What can this tell us about the Min system?

Chapter 2

Reviewing deterministic and stochastic representations of chemistry

2.1 Introduction

Systems of chemical reactions such as the Min system are frequently represented using differential equations, or, in cases where both time and space are important, as partial differential equations. These reaction diffusion equations are used to model chemical systems in a broad range of contexts and scales [58, 79].
They are built upon a firm basis of experimental observation [23] and have theoretical justification from more foundational theories [14, 52]. The foundations used, however, are not universally applicable; in particular, they assume that reactants are free to diffuse in three dimensions. When considering molecules which are bound to a membrane, this assumption of three dimensional diffusion breaks down, and several of the assumptions of reaction diffusion kinetics with it. Further, these foundations are built on theories that study only a single reaction at a time. There is no theory guaranteeing that systems containing multiple reactions can be simply added together as the sum of their parts. The disconnect between differential equations and the predictions of actual particle based simulations is particularly critical when thinking about and discussing the Min system, both because the dynamics of the Min system depends critically on the rate of reactions between molecules bound to a 2D membrane, and also because different models of the Min system propose not only different physical mechanisms, but also make use of different modeling paradigms. In such cases, it is not always clear which of the differences observed are due to the chemical reactions hypothesized, and which are the results of the modeling framework itself.

In this chapter, I introduce three different representations of chemical systems: reaction diffusion equations, Brownian dynamics, and Markov chain representations. I take a closer look at some of the assumptions underlying these representations and discuss how these differing representations of chemistry can fail to map to one another, particularly in 2D. I discuss a number of toy models exemplifying the various ways in which correlation between particle positions can lead to a breakdown of the assumptions of reaction-diffusion kinetics.
Very little of the mathematical work in this chapter is truly original; instead I aim to collate and review both obstacles and results mentioned previously in the literature, as well as provide a few novel examples so as to illustrate the key points in as clear a manner as possible. In particular, I discuss "particle/anti-particle" annihilation, some often discounted subtleties in the importance of diffusion rates, and a simplified Min model which demonstrates burst-like behavior when modeled via Brownian dynamics, but not when modeled using reaction diffusion equations.

For the reader interested in experimental work in the field of membrane bound chemistry, Melo and Martins give a review [40] covering a wide range of experiments, along with their strengths and weaknesses.

2.2 Representations of Chemistry

2.2.1 Reaction-Diffusion PDEs

Most often, when dealing with chemical systems in mathematics, we find ourselves thinking at one of two scales.

On the macroscopic scale, we model chemistry using deterministic partial differential equations (PDEs). These equations represent our system in terms of chemical concentrations, A(x, t), where A : R^(d+1) → R. Here A(x, t) specifies the concentration of some chemical A at a particular time and place. A is governed by some form of reaction diffusion PDE, such as

∂A/∂t = αA − βAB + D_A ∆A.

Each term on the right hand side of this partial differential equation represents either diffusion (D_A ∆A), or a single reaction governed by the law of mass action. The law of mass action states that the rate of a chemical reaction is proportional to the product of reactant concentrations; hence a reaction A + B → C will have reaction rate βAB. Once defined, reaction diffusion PDEs can be solved either numerically, analytically or asymptotically in order to gain knowledge about the chemical system.

2.2.2 Brownian Dynamics

If on the other hand we were considering things at a somewhat smaller scale, we would consider individual particles, each moving according to Brownian motion.
Particles that pass within some reaction radius ρ of one another react (possibly with some probability p), and are replaced by the reaction products of the corresponding reaction. The exact ρ used would depend on the reactants and reaction in question. Models of this kind, which represent our chemical system as a collection of moving particles, are referred to as "Brownian dynamics". Such models can be explored analytically when the number of molecules is very small (say, two), but for larger numbers of molecules we turn to simulation software such as Smoldyn [3].

2.2.3 Markov Chain Representation

When the number of particles is large enough to make individual particle simulation difficult, but still small enough that stochastic effects are critical, it can prove useful to represent our system as a Markov chain. In such a representation we consider space as a collection of 'bins', each containing some number of particles of each type (see figure 2.2). The "state" of our Markov system is a vector describing the number of particles of each species in each bin. Each bin will have a reaction rate for each reaction dependent on the number of particles it contains, and a diffusion rate governing how particles are transported to neighboring bins.

As a minimal example, suppose space consists of a single bin, and we have one species of particle, A, along with the reactions A → ∅ and ∅ → A. The state of the system can be described by a vector of length one, specifying the total number of A particles. We can leave any state by either gaining or losing an A particle, and similarly we can enter any state by one of the neighboring states gaining or losing an A particle (as appropriate). The Markov chain associated with this system is illustrated in figure 2.1.
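This one-bin birth-death chain is simple enough to simulate directly by drawing exponential waiting times between events (the exact method, Gillespie's algorithm, is described in the next section). A minimal Python sketch, with arbitrary illustrative rates for ∅ → A (alpha) and A → ∅ (beta per particle):

```python
import random

random.seed(0)

alpha, beta = 5.0, 1.0   # creation rate and per-particle removal rate (illustrative values)
t, T = 0.0, 2000.0       # current time and total simulated time
n = 0                    # current number of A particles
area = 0.0               # time integral of n, for computing the time average

while t < T:
    total = alpha + beta * n          # total rate of leaving state n
    wait = random.expovariate(total)  # time until the next event
    area += n * min(wait, T - t)      # accumulate n over the waiting interval
    t += wait
    if t >= T:
        break
    # The next event is a creation with probability alpha/total, else a removal.
    if random.random() * total < alpha:
        n += 1
    else:
        n -= 1

avg = area / T
print(avg)  # close to alpha / beta for a long run
```

Over a long run the time-averaged particle count settles near alpha/beta, the mean of the chain's stationary (Poisson) distribution.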
If we wished to include space in such a representation, we would need to add reactions of the form A_i ↔ A_(i+1), with "reaction" coefficient proportional to D/dx², where dx is our mesh size, and D is the diffusion coefficient of species A.

Chemical Markov chains are in most cases easy to simulate on a computer. This can be done either exactly, or approximately, depending on the degree of accuracy required and the availability of computational resources. In order to simulate exactly, the Gillespie algorithm is used [21]. When using the Gillespie algorithm, we consider the current state of the system. For every possible transition, an exponential random variable is drawn indicating the time at which that transition will next take place. For example, for our simple A ↔ ∅ system, if we are in the state A = 2 (that is to say, there are two A particles), then we would draw exponentially distributed random variables indicating the time until either of our particles vanishes, as well as the time until a new particle is created. Whichever of these transition times is lowest, this is the event which occurs; if our random number generator claims that it will be 0.3 seconds until the next creation reaction, and 0.2 seconds until the next particle is removed, then a particle is removed.

Figure 2.1: Graphical representation of the Markov chain associated with A ↔ ∅ (states i = 0, 1, 2, . . .).

Figure 2.2: Brownian dynamics representations record the location of every particle, and allow particles to move according to Brownian motion (left). Markov chain representations record the number of particles in each volume, and particles move by jumping between neighboring volumes, leading to the state of the system changing in discrete steps (middle). Deterministic PDE representations consider the concentration u(x, t), a continuous quantity varying in continuous space and time (right).
Time is incremented by the corresponding amount (0.2 s in our case), and the state of the system is updated. All transition times which are unused are "thrown out" and do not affect future transitions in any way.

While exact, Gillespie's algorithm tends to be exceedingly computationally expensive, especially for systems with a large number of reactions. In cases where computational speed is a limiting factor, we might use an approximate method such as τ-leaping, in which we select a fixed τ. During each time step, we select Poisson random variables indicating how many times each reaction has occurred, and sum up the resulting changes to determine the state of the system at time t + τ. For example, starting in state A = 8, we might draw random variables indicating that three of our A particles have been removed and five created, resulting in a final state of A = 10 at the time t + τ. This approximation is substantially faster than Gillespie's algorithm, especially in cases where the number of reactions occurring is very large. It can prove to be inaccurate in cases where the rate of reaction changes rapidly over the course of a time step. An excellent discussion of the limits of this approximation is given by Gillespie [22].

Markov chain representations can also be studied analytically by making use of the Kolmogorov equations. The Kolmogorov equations consider every possible state the system might be in, and track the probability of being in each such state. Differential equations can then be written tracking the flow of probability from one possible state into another, and (in theory) the total state of the entire system can be tracked, along with all correlations, steady states, etc. The Kolmogorov equations associated with our minimal example from the previous page are:

dP_i/dt = −αP_i − βiP_i + αP_(i−1) + β(i + 1)P_(i+1).     (2.1)

Here, our two negative terms give the rate at which probability leaves state i through either the creation or destruction of a particle.
The positive terms give the flow of probability into state i, through either the creation of a particle (when the system was previously in state i − 1), or the destruction of a particle (when the system was previously in the state i + 1).

In practice the Kolmogorov equation representation often leads to extremely large systems of equations, seldom tractable, and thus the resulting Markov chain is instead simply implemented computationally, using either custom built tools, or specialized software such as MesoRD [26].

Of our three models, the Brownian dynamics particle model is what we might consider the closest to "ground truth", and the deterministic PDE representation is what we might consider the furthest away, with the bin-based Markov chain model situated somewhere in between (see figure 2.2). It might be hoped that the various descriptions of a chemical system would agree, at least in some suitable limit. In what follows in this chapter I will primarily be examining the translation between particle based models and the more widely used PDE approaches, and the difficulties that this can cause when working in 2D. For a discussion of the mapping between the Markov chain representation and the corresponding PDEs and stochastic PDEs, please see Gillespie [22].

2.3 Mapping between Brownian Dynamics and PDE Reaction Diffusion

2.3.1 Concentration as Probability Density

For the sake of the discussion to come, the mapping between the particle based Brownian dynamics and the full scale PDE demands the most attention. On the one hand we have individual particles with distinct locations; on the other we have continuous concentration functions in spacetime. In order to continue, we will need to ask the question "What is concentration?" At a macroscopic scale, concentration is a fairly innocuous concept: if the concentration of Na+ ions is 1 mol L⁻¹, and I have one liter of solution, then I expect to have one mol of ions. Half a liter contains half a mol
This interpretation of concentration runs into slight difficultiesat the very small scale when we find ourselves in the situation of having33.58 Na+ ions. In order for this to make sense, it is necessary to think ofconcentrations as referring to the expected number of particles, where herewe use “expected” in the most literal mathematical sense.If we imagine a set of N particles in our underlying particle based model,each with position determined by some probability distribution Pi(x, t),then the expected number of particles within some region Ω is simply∑i=1...N∫Ω Pi(x, t)dx. Because this expected number formula applies to allpossible Ω it makes sense to define C(x, t) = ∑i=1...N Pi(x, t), that is to say,concentration is a sum of probability densities. This explicit understandingof concentration as a probability distribution is used in [76], although asidefrom that, it does not appear to be in common circulation. Presumably thisstems from the fact that most people encounter the concept of concentra-tion long before they encounter continuous probability distributions, andare thus unlikely to frame the more familiar and intuitive concept in termsof the less familiar one.Because particles of the same species are interchangeable, swapping thepositions of particles can be considered a non-operation, hence, by assign-ing each particle a random labeling we can select identical probability dis-tributions. If all particles have the same probability distribution, and theconcentration is merely the sum of probability distribution then we havePi(x, t) = C(x, t)/N. Diffusion of concentration is thus seen as a direct con-sequence of diffusion of probability. 
If particles have some probability of degradation, we might represent this in the particles' probability distribution as ∂_t P_i(x, t) = −µP_i + D∆P_i, leading to a series of "probability distributions" with total integral less than one, but this can easily be understood as particles that may or may not exist at a given moment in time. Adding new particles to the system presents difficulty for the indexing of our sum, but is, overall, no major difficulty, and all the expected properties from the single particle probability distributions filter up to the expected ensemble distribution A, giving rise to our PDE without difficulty.

The one place where some delicacy is needed is in dealing with bimolecular reactions. Classically speaking, reactions involving multiple particles such as A + B → C are represented in PDEs by terms of the form k × A × B. This term is loosely justified by appealing to the "law of mass action". The law of mass action was originally identified experimentally by Cato M. Guldberg and Peter Waage [23]. It wasn't until half a century later that Smoluchowski would give a full analytic justification of these strikingly elegant observations [52]. A historical perspective on these developments, and their impacts, is given by Voit et al. [67].

2.3.2 Smoluchowski Theory

Here I give a brief introduction to Smoluchowski theory [52]. Smoluchowski theory is the theory used to justify the law of mass action for bimolecular reactions in three dimensions. It also passes comment on bimolecular reaction rates in lower and higher dimensions, but we will postpone the discussion of that for now.

Suppose we wish to determine the rate of reactions between two species of particle, A + B → B. We define the species to react with one another whenever the distance between two such particles is less than some reaction distance ρ.
If we consider a single stationary B particle, centered at the origin, then we can define A(r, t) as the concentration of A particles around our central B particle, at various distances r and times t. Interpreted in terms of probability distributions, we can instead define some Ã as the probability density of a single particle, with an absorbing boundary at ρ and a reflecting boundary at some large R. We consider a particle originally placed uniformly in the spherical space between ρ and R. In the limit R → ∞, Ã approaches a simple rescaling of A(r, t) dependent on the desired far field concentration A_∞. This is because A and Ã obey the same linear differential equation and their initial conditions are proportional to one another. In the limit their boundary conditions become equivalent also.

In 3D, simple diffusion assumptions, along with our absorbing boundary, lead to the PDE:

∂A/∂t = D∆_r A = (D/r²) ∂/∂r (r² ∂A/∂r),
A(ρ, t) = 0 for all t,
A(∞, t) = A_∞ for all t.     (2.2)

A particles diffuse in spherical coordinates, and are annihilated upon meeting B. It is assumed that a single B particle does not affect the far-field concentration of A particles. The steady state solution of the above is A = A_∞ − A_∞ρ/r, with a resulting flux of A into B of 4πρDA_∞ per B particle. Assuming B particles are sufficiently sparse so as to not interfere with one another, this leads to a TOTAL reaction rate of 4πρDA_∞B ≡ kAB (we multiply the amount consumed by one B particle by the concentration of B particles in order to get the reaction rate per unit volume). This is precisely the law of mass action.

In two dimensions, our diffusion operator becomes

D∆_r A = (D/r) ∂/∂r (r ∂A/∂r).

This operator does not lead to a steady state distribution for any finite A_∞. It was shown by Naqvi [49] that, if we assume an initially uniform concentration of A particles, A_0, then the concentration of A particles at a distance r from a singular trap B will obey

A(r, t) = (2A_0/π) ∫_0^∞ exp(−Du²t) [J_0(uρ)Y_0(ur) − J_0(ur)Y_0(uρ)] / [J_0²(uρ) + Y_0²(uρ)] du/u.
(2.3)

Here J_0 and Y_0 are order zero Bessel functions of the first and second kind.¹ Values of this rather opaque formula are given for various t in figure 2.3. Physically, this result implies that in two dimensions a trap will produce an ever expanding "depleted region" around itself, in which the concentration of particles is reduced. In three dimensions the depleted region around a trap remains small, and does not grow as time progresses.

¹Naqvi's paper is a thing of both evil and beauty, and I dearly hope never to require the use of so many Bessel functions in my own lifetime. My great appreciation goes to both Naqvi, as well as Carslaw and Jaeger (whose inverse Laplace transform Naqvi cites [9]).

Figure 2.3: Output of equation 2.3 for various times (t = 10⁻⁴ to 10⁴). Here we have assumed that A_0 = 1 and ρ = 0.1.

Naqvi's results lead not to a constant reaction rate, but instead to a continuously decaying reaction rate k(t)AB, with k(t) → O(1/ln(t)) as t → ∞. Intuitively, this decaying reaction rate can be thought of as an artifact of the properties of random walks in two and three dimensions: in three dimensions space is big, and a random walk does not often retrace its own steps; each step will (with reasonably high probability) explore new space, and thus the chance of collision at each step is practically independent, hence reaction rates are constant. In two dimensions, recurrence is guaranteed; particles are frequently passing over their own footsteps. At any given moment a particle is likely to be re-exploring previously explored regions of space, regions already demonstrated to not contain reaction partners. As time wears on, particles that have not reacted so far are likely to be far away from any particle they might react with in the future. The reaction rate k is not constant.
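This slow decay is visible even in a crude Monte Carlo sketch (all parameters hypothetical): independent 2D random walkers scattered around a single absorbing disk are absorbed at a noticeably lower rate in a late time window than in an early one, as the depleted region grows.

```python
import math
import random

random.seed(1)

rho, R = 1.0, 20.0        # absorbing trap radius and initial outer radius (illustrative)
D, dt = 1.0, 0.05         # diffusion coefficient and time step
sigma = math.sqrt(2 * D * dt)

# Scatter walkers uniformly over the annulus rho < r < R around a single trap
# at the origin (no outer boundary is enforced during the run).
walkers = []
while len(walkers) < 4000:
    x, y = random.uniform(-R, R), random.uniform(-R, R)
    if rho < math.hypot(x, y) < R:
        walkers.append((x, y))

def run_window(walkers, steps):
    """Advance each walker; count and remove any that enter the trap."""
    absorbed, survivors = 0, []
    for x, y in walkers:
        hit = False
        for _ in range(steps):
            x += random.gauss(0.0, sigma)
            y += random.gauss(0.0, sigma)
            if math.hypot(x, y) < rho:
                hit = True
                break
        if hit:
            absorbed += 1
        else:
            survivors.append((x, y))
    return absorbed, survivors

early, walkers = run_window(walkers, 200)  # absorptions during t in [0, 10)
late, walkers = run_window(walkers, 200)   # absorptions during t in [10, 20)
print(early, late)
```

The absorption count in the second window is visibly smaller than in the first, the simulated counterpart of the decaying k(t); in 3D the same experiment settles to a roughly constant per-window rate.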
The differences between two and three dimensions have been studied experimentally by examining the mutual quenching of excited and unexcited pyrene complexes (Py* + Py → 2Py). This work was started by Vanderkooi et al. [61], and subsequently extended by Merkal et al. [41], and later Martins et al. [38], who were able to confirm that "the model of Razi Naqvi describes the experimental decay with extreme accuracy". That said, the number of experiments testing and examining Naqvi's results in two dimensions is at best limited, in particular because Naqvi's results apply only in the case of an initially well mixed population, and because it typically takes a long time for Naqvi's results to be clearly distinguishable from classical exponential decay, long enough that competing chemical reactions can typically interfere with and complicate one's readings.

In addition, when dealing with a finite system, system size and boundary effects have a significant effect on the long term asymptotics, as do the precise reactions considered and our "particle placement" rule at the start of time; that is to say, in terms of how we manage our probability distributions, there are differences between placing a fixed number of particles and placing a fixed density of them.

2.4 Some Examples of Model Mismatch

When thinking about concentration as being a rescaling of our underlying particle probability distribution, there is one critical piece of information that this concentration does not encode: correlation. Concentration, as encoded by any PDE for A(x, t), does not distinguish between models that predict one million A particles spread uniformly at random, and models that predict one million A particles tightly concentrated near a single randomly chosen point.
The variable reaction rate k(t) predicted in 2D is in some sense a symptom of this; as time passes, A and B particles become anti-correlated in space, and hence are less likely to react with one another. Because the concentrations themselves do not encode this anti-correlation, the reaction rate is forced to account for it.

In this section, I consider a number of specific systems and models, both original and from the literature. These models demonstrate the stark differences which can arise between our classical reaction diffusion representation of chemistry and the corresponding particle based systems. I also examine a number of other wrinkles and intricacies that can eventuate, and are often overlooked. All of these discrepancies can, in one way or another, be seen as the result of ignoring various types of correlation.

2.4.1 Particle Anti-Particle Annihilation

A simple reaction model illustrating the limitations of classical PDE representations of chemical systems can be seen in the case of "particle/anti-particle annihilation", as studied in [? ], and simulated in [57].² Suppose we have a system, initially well mixed, with equal homogeneous starting concentrations of A and B, with the single reaction A + B → ∅. A and B are further assumed to have equal diffusion rates. Classically, this might be represented as a PDE by the equations Ȧ = −kAB + D∆A, Ḃ = −kAB + D∆B. Because B(x, t) = A(x, t) = A_0 initially, this might be simplified to the ODE Ȧ = −kA². Over time we expect the total number of particles to scale like O(t⁻¹). In particle simulations, this is not what is observed (see figure 2.4 for equivalent simulations of my own).

Because there will always be slight local variations in the concentrations of A and B, one species or the other will come to dominate in any given location. After an initial flurry of reactions, our system quickly segregates into regions which are either rich in particles or anti-particles.
While the expected concentration at any given point is identical for both particles and anti-particles, the action of our reaction quickly leads to significant levels of correlation, such that the observation of one particle provides strong evidence that many particles and few anti-particles are to be observed nearby.

Rather than studying the system in terms of the concentrations of A or B, Toussaint and Wilczek instead consider the function f = A − B, the difference in particle abundance between the two species. Because annihilation reactions reduce both A and B (by equal amounts), f is conserved. If we make the simplifying assumption that both particles diffuse at the same rate, and that locally either A = 0 or B = 0, then we can make the approximation that the total number of particles and anti-particles scales like ∫|f|dx, and that ḟ = D∆f. By representing f as a Fourier series, with random initial coefficients, and solving the diffusion equation exactly, Toussaint and Wilczek are able to show that ∫|f|dx scales as O(√A_0 (Dt)^(−d/4)) for dimensions d = 1, 2 or 3.

²Torney and Warnock's paper considers the 3D case, though they also make mention of a computational 2D paper (in press), but to the best of my research ability, no such paper was ever published. This is most unfortunate, as I imagine the results would have been rather interesting.

Figure 2.4: (left) The state of an initially "homogeneous" system after a long time. (right) Time vs particle number on a log-log plot; blue indicates the actual recorded numbers in simulation, while the black line has the theoretically predicted −d/4 slope.
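A minimal 1D lattice sketch (all parameters hypothetical) reproduces this anomalously slow decay: equal numbers of A and B walkers on a ring annihilate pairwise on contact, and the particle count falls far more slowly than the mean-field O(t⁻¹) prediction.

```python
import random
from collections import Counter

random.seed(2)

L = 1000   # number of lattice sites on a ring (illustrative size)
# Equal numbers of particles (A) and anti-particles (B), placed uniformly at random.
apos = [random.randrange(L) for _ in range(1000)]
bpos = [random.randrange(L) for _ in range(1000)]

def step(apos, bpos):
    """Lazy random walk (step -1, 0 or +1), then annihilate A-B pairs on shared sites."""
    ca = Counter((p + random.choice((-1, 0, 1))) % L for p in apos)
    cb = Counter((p + random.choice((-1, 0, 1))) % L for p in bpos)
    for site in set(ca) & set(cb):
        k = min(ca[site], cb[site])   # each co-located A-B pair annihilates
        ca[site] -= k
        cb[site] -= k
    apos = [s for s, c in ca.items() for _ in range(c)]
    bpos = [s for s, c in cb.items() for _ in range(c)]
    return apos, bpos

counts = {}
for t in range(1, 401):
    apos, bpos = step(apos, bpos)
    if t in (100, 400):
        counts[t] = len(apos) + len(bpos)

# Mean-field O(1/t) decay would predict counts[400] near counts[100] / 4;
# the segregated system decays far more slowly (close to t**(-1/4) in 1D).
print(counts[100], counts[400])
```

Inspecting the surviving positions also shows the segregation at work: by late times each region of the ring is occupied almost exclusively by one species.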
These results are borne out in their simulations (see figure 2.4 for my recreation thereof).

Although they may use uniform initial conditions (that is, particles and anti-particles placed uniformly at random on their lattice), Toussaint and Wilczek also make a point of discussing how unphysical such starting conditions are: the universe does not allow particles to move about at random and then "turn on" some annihilation reaction. The annihilation mechanics of the particles inherently indicate that this homogeneous "equilibrium" starting state is in fact anything but an equilibrium state. In creating models which we expect to accurately reflect experimental data, it is insufficient to consider only the process of particle reaction; we may also need to consider the process producing our initial particle distribution.

2.4.2 Variable Diffusion Rates Between Reacting Particles

A common assertion found in the literature is the claim that, for a bimolecular reaction A + B → (products) with reaction radius ρ, the reaction rate depends only on D_A + D_B, the so-called "mutual diffusion rate" of a pair of particles relative to one another. This assumption dates back to Smoluchowski's original work [52], and is implicitly repeated in various resulting papers [40, 49, 66, 77]. It is explicitly stated in [38] and is used as a test of correctness of their algorithm. The assertion is, unfortunately, incorrect [5, 7, 8, 13, 78].

The assertion that only the sum D_A + D_B matters is based on the reasoning that if we focus on a single A particle, then our focal particle moving right is equivalent to its reaction partners moving left, and vice versa; hence we might hope that the reaction rate can thus be calculated by imagining a single focal particle amidst a cloud of independently moving reaction partners. Unfortunately, this reasoning ignores the fact that when our focal particle moves left this is equivalent to all reaction partners moving to the right.
Movement of our focal particle effectively correlates the movement of all other particles, and any theory which ignores this correlation will inevitably lose accuracy.

To see this discrepancy in more concrete terms, consider a system containing both particles P and traps T. Any particle that comes within distance ρ of a trap is consumed, according to the reaction P + T → T.

For our initial condition, consider N traps and N particles spread across a square domain of area N uniformly at random. Traps and particles diffuse at rates D_T and D_P respectively.

We wish to consider the probability S(t) that a given particle survives up until time t in the case where either D_T = 0, D_P = 1, or the case where D_T = 1, D_P = 0. Let us refer to these two distributions as S_P(t) and S_T(t), respectively (the subscript refers to the mobile particle type).

In order to determine S_P(t) and S_T(t), we will make use of the "Wiener sausage" W_ρ(t)(b). The Wiener sausage, originally described and discussed in [53], and later named in [13], is defined as the set of all points within radius ρ of some point on the Brownian path b before time t. The area of the Wiener sausage is denoted |W_ρ(t)(b)|, or, when we do not specify a particular Brownian path, |W_ρ(t)|. The Wiener sausage can be generalized using whatever distance metric seems appropriate for determining our "ρ-radius", and to any space where Brownian motion is well defined. We will consider only the Wiener sausage in R^d using the standard Pythagorean L² distance metric.

If we wish to determine S_P(t), the probability that a mobile particle will survive until time t, this is equivalent to determining whether the Wiener sausage generated by the particle's motion contains no traps, that is to say:

S_P(t) = E[(1 − |W_ρ(t)(b_P)|/N)^N] → E[exp(−|W_ρ(t)(b_P)|)] as N → ∞.
(2.4)

Here we take expectations over the set of all Brownian paths b_P that the particle might take.

In contrast, if we imagine that traps are moving and particles are stationary, then for a particle to survive, the particle itself must not be in the Wiener sausage of any of the moving traps. Stated mathematically:

S_T(t) = E[∏_(i=1..N) (1 − |W_ρ(t)(b_Ti)|/N)] → exp(−E[|W_ρ(t)(b_T)|]) as N → ∞.     (2.5)

By Jensen's inequality [30] this implies that S_T(t) ≤ S_P(t). How close we get to equality depends both on where we are in the exponential curve, and on the variance of |W_ρ(t)|.

The case of immobile traps (S_P(t)) was studied by Donsker and Varadhan [13], where they showed that in the limit of large t

lim_(t→∞) t^(−1/2) log(E[exp(−|W_ρ(t)(b)|)]) = −2√(2γ_d),     (2.6)

where γ_2 is the lowest eigenvalue of −(1/2)∆ in the unit circle. This leads to S_P(t) → O(exp(−4j_{0,1}√t)) for large time, where j_{0,1} ≈ 2.4 is the first zero of the Bessel function of the first kind.

E[|W_ρ(t)(b_T)|], on the other hand, can be found via a direct application of Naqvi's work, and has been shown to be asymptotic to O(2πt/ln(t)) [54]; hence S_T(t) ≈ exp(−2πt/ln t).

For the intermediate case, where both particles and traps are diffusing, Bramson and Lebowitz [7] demonstrated for the reaction A + B → ∅ that the survival rate of A particles decays like O(exp(−Ct/ln(t))) for some constant C, assuming A particles are rare and B particles are abundant. This reaction scheme can be shown to be equivalent to P + T → T as far as our survival time results are concerned.

Blythe and Bray were able to build on this work [5, 8], identifying C and thus giving S(t) → O(exp(−4πρD_T t/ln(t))) in the limit of large time. Of particular note here is the fact that the long time survival probability is independent of the diffusion rate of the particle itself, and is determined purely by the diffusion rate of the traps, assuming this diffusion rate is non-zero. In the case D_T = 0, eq.
2.6 holds.

Unfortunately, while fascinating in its own right, this last result is of limited practical applicability; as Blythe and Bray note, "convergence to asymptopia will be extremely slow in two dimensions", and their measured survival probabilities happily decayed to 10⁻⁹⁹ during the numerical experiments they used to verify these results. This places the asymptotic results substantially outside the experimental regime (and indeed, barely within the reach of numerical experimentation).

While Bray and Blythe's result appears at first blush to have limited direct applicability to physical systems, it does establish an important precedent: namely, that survival probability is not a direct function of D = D_A + D_B, but does in fact depend on the diffusion rates of each particle species individually. Simulators and experimenters who insist upon relative diffusion rate being the only determiner are liable to be disappointed, as neither real life nor "perfect" simulations will exactly match their overly stringent assumptions regarding the survival time of reacting particles.

2.4.3 Degeneracies Produced in a Simplified Duplication-Annihilation Min Model

The survival time results in the previous section, while interesting in their own right, are a little abstract, and do not clearly illustrate just how extreme the consequences of our decaying reaction rate can be. Consider the system of reactions

∅ →_κ D,     (2.7)
D →_α 2D,     (2.8)
D + E →_β E,     (2.9)

in 2D. This system both builds upon the particle and trap dynamics studied in the previous section, and also acts as a simplified version of the Min reactions as hypothesized by Schweizer et al. and studied by Bonny et al. [6, 51]. Here I have kept the core elements of Schweizer's and Bonny's models: MinD binds to the membrane both spontaneously at some low rate κ, and autocatalytically at some much higher rate α. MinE removes MinD from the membrane (at rate β).
I have abstracted away both the dynamically changing levels of MinE bound to the membrane, along with any explicit mention of the cytosolic bulk (MinD is simply permitted to "appear" on the membrane). Inevitably, this system of reactions tends to promote and exaggerate the effects of particle correlation: the duplication term rapidly increases the correlation in the positions of D particles, while the removal term anti-correlates D particles with respect to E. Classically, if we assume homogeneous initial conditions, this system might be represented by the ODE system

Ḋ = κ + (α − βE)D,     (2.10)
Ė = 0,     (2.11)

which can be solved to give D(t) = −κ/(α − βE) + Ce^((α−βE)t). The system will either grow exponentially (if α > βE), or decay towards the steady state −κ/(α − βE) if α < βE. Even for heterogeneous initial conditions, the system described has no means to maintain any pattern initially present, and will diffuse towards a homogeneous state over time.

As discussed previously, however, in 2D the notion of a constant bimolecular reaction rate β is problematic.

Rather than think directly in terms of concentrations and differential equations, we might instead think in terms of the number of particles in a "lineage". Imagine all D particles are labelled with a number representing their lineage. When a particle binds to the membrane spontaneously, we can consider this to be the start of a lineage, and give the particle a previously unused label. When a particle is bound to the membrane via D → 2D, it is assumed to inherit the label of whatever particle is binding it. Particles binding spontaneously have no information about the distribution of E particles around them, and thus initially observe a uniform probability density of E particles. Such a particle has a survival probability of either S_P(t) or S_T(t) (equations 2.4 and 2.5).

Particles binding autocatalytically, however, do have information: the existence of the particle they are binding to implies that E particles are most likely distant.
From the point of view of the survival curve, they thus inherit their parent's 'age'.

In the long run, this implies that the expected number of particles belonging to a given lineage is S_(P/T)(t)e^(αt). Because both S_P(t) and S_T(t) shrink more slowly than e^(−αt) for large t, this implies that the expected number of particles in any given lineage grows towards infinity over a very long time. In the ideal, arbitrarily large system this happens regardless of the choice of parameters. In practice, what is observed is that most lineages die out after finite time, and a few end up growing very large.

This result seems, at first blush, to be nonsensical, or at the very least degenerate in some way. Much of this comes from the unphysical assumption that D particles are created out of nowhere, and that there is no limit on the rate at which these particles can be generated. Even without this unphysical behavior, however, the fact remains that theory predicts our bimolecular reaction will be overwhelmed eventually, regardless of reaction parameters.

Making intuitive sense of this observation is easiest if we consider the case where D particles are moving and E particles are stationary. Consider a continuous concentration of D particles, diffusing amongst a collection of absorbing E particles of some radius ρ, positioned according to a Poisson point process³ with intensity E_0. This leads to the concentration equation

Ḋ = ∆D + κ + αD on Ω,     (2.12)
D(x, t) = 0 for all x within ρ of an E particle.     (2.13)

Here Ω = [0, L]² \ ∪_i E_i, the domain with the E particles' disks removed. If we consider the limit of small κ, then the eigenvalue problem of the above system becomes λD = αD + ∆D.
If, for a given arrangement of E particles, there exists any eigenfunction of the above system such that $\lambda > 0$, then we expect unbounded growth of the corresponding eigenfunction.

Determining the probability of a positive eigenvalue directly is a difficult problem; however, it is possible to bound this probability from below. We first note that, according to Lan et al. [33], the probability that a collection of disks placed on the unit square according to a Poisson point process covers the entire unit square is bounded above by:

$$P([0,1]^2 \subseteq \cup E_i) \leq 1 - \left(1 + 21.2\, e^{\pi r^2 \lambda}\, \pi r^2 \lambda^2\right)^{-1}, \tag{2.14}$$

where $r$ is the radius of our disks, and $\lambda$ is the intensity of our point process. If we rescale our unit square to a square of size $L$, and change variables in our formula appropriately, then it is easy enough to show:

$$P([0,L]^2 \not\subseteq \cup E_i) \geq \left(1 + 21.2\, e^{\pi R^2 B_0}\, \pi R^2 B_0^2 L^2\right)^{-1}. \tag{2.15}$$

³Although the details of the Poisson point process are critical for the theorems I will later invoke, they are not crucial to this thesis. It is sufficient for the reader to think of the Poisson point process as placing particles independently at random, with all positions equally probable for all particles.

Figure 2.5: Finding empty "clearings" of radius R − ρ (right) is equivalent to finding uncovered ground in the coverage problem with radius R (left).

The existence of at least one point that is a distance $R$ from the centers of all our E particles implies the existence of at least one circle of radius $R - \rho$ that does not intersect any of our E particles (see figure 2.5). If such a "clearing" exists, then at worst it is perfectly surrounded by E particles and can be treated as having an absorbing boundary. The largest eigenvalue of $\lambda f = \Delta f + \alpha f$ associated with such a perfectly circular clearing is $\lambda = \alpha - j_{0,1}^2/(R - \rho)^2$, where $j_{0,1} \approx 2.405$ is the first zero of the Bessel function $J_0$; the associated eigenfunction is $f(r) = J_0(j_{0,1}\,r/(R - \rho))$.
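The clearing eigenvalue above is easy to evaluate numerically; a minimal sketch (the rate and radius used below are illustrative, not taken from the thesis):

```python
import math

J01 = 2.4048255577  # first zero of the Bessel function J0

def clearing_eigenvalue(alpha, R, rho):
    """Largest eigenvalue of lam*f = Laplacian(f) + alpha*f on a circular
    clearing of radius R - rho with an absorbing (f = 0) boundary."""
    return alpha - (J01 / (R - rho)) ** 2

def critical_clearing_radius(alpha, rho):
    """Smallest clearing radius R at which that eigenvalue turns positive."""
    return rho + J01 / math.sqrt(alpha)
```

For example, with $\alpha = 0.1$ and $\rho = 0.17292$, any clearing of radius above roughly 7.8 supports a growing mode.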
By selecting $R$ such that $\lambda > 0$, and plugging this value back into 2.15, we can get a lower bound on the probability that a positive eigenvalue exists. In particular, it is important to note that regardless of any of the other parameters in the system, $L \to \infty \Rightarrow P(\lambda > 0) \to 1$.

While these theoretical arguments are all well and good, it also pays to investigate what is actually observed in practice. We implement the chemical reactions proposed in 2.9 using the particle scale simulation software Smoldyn [2, 3]. Here we assume $\kappa = 10^{-5}\mu m^{-2}s^{-1}$, $\alpha = 0.1 s^{-1}$ and $\rho = 0.17292\mu m$ ($\rho$ is selected by Smoldyn, hence the strange value). We run simulations in a 2D box of size $300\mu m \times 300\mu m$, recording the total number of particles every second. Full code for these simulations can be found in the supplementary materials, and on github, saved as toy_bonny_D.txt, toy_bonny_E.txt and toy_bonny_DE.txt. Typical results for each simulation are displayed in figure 2.6. Both the immobile-D and immobile-E simulations behave as has been discussed and predicted. Simulations where both particles are mobile appear to give bounded results, even for extended simulation times, although it must be noted that "burst"-like patches are still observed.

As a particle initially placed uniformly at random in our domain ages, it gains information about its immediate vicinity; the longer a particle survives, the higher the probability it is currently in a clearing. This is reflected in $k(t)$ as a gradual reduction in reaction rate. The observation that particle number remains bounded for simulations where both particles are moving may indicate either a domain size effect, or potentially some previously unconsidered nuance in the calculation of $k(t)$.
It is possible that the observation of a particle's survival till a given time may provide less information about the distance to nearby E particles if this particle is one of a large number of "siblings".

2.4.4 When Can We Ignore All This?

In all of the above, we have simultaneously assumed both too little and too much. We have assumed too much in that the majority of the papers described assume systems starting with particles spread according to a uniform Poisson point process across an infinite plane, and have (often) considered only a single chemical reaction. However, this Poisson point initial condition is, almost by definition, not a state we expect to observe in any real world system. The fact that it is not an equilibrium state is in fact the main cause of our problems. We thus find ourselves studying the decay

Figure 2.6: Typical results of Smoldyn simulations of the reaction system 2.9 with either $D_D = 0.8\mu m^2 s^{-1}$, $D_E = 0.0\mu m^2 s^{-1}$; $D_D = 0.0\mu m^2 s^{-1}$, $D_E = 0.8\mu m^2 s^{-1}$; or $D_D = D_E = 0.4\mu m^2 s^{-1}$ (top left, top right, and bottom left, respectively). D particles are denoted in red, with E in blue. While it may appear that the top right has the fewest D particles, this is simply the result of most D particles being stacked on top of one another, due to D particle duplication and $D_D = 0$. (Bottom right) A plot of the number of D particles vs time. As predicted, with immobile traps, D particle growth is governed by the maximum positive eigenvalue associated with the largest clearing. For immobile particles we observe exponential growth at rate $\alpha$ (the raw duplication rate), but note that because most of our D particles are collected together in one place, a single wandering trap is able to remove a significant portion of all D particles if it were to wander into one of these D stacks.
Finally, in the intermediate case we observe no degenerate behavior; the number of D particles remains bounded. This suggests either a finite system size effect, or potentially some nuance in our calculation of $k(t)$ brought about by D → 2D.

away from a state which we never expect to observe experimentally, towards long term asymptotic results that are too improbable to ever be observed by anything but the most dedicated and specialized simulation. We have assumed too little, in that for any actual system of interest, knowledge of the physical system will both limit and resolve many of the degeneracies described. Real world systems have explicit domain sizes, and may well imply a particular starting distribution. Real world systems have particles with measurable diffusion rates and exist for finite time, and thus any theory that is accurate enough for long enough is sufficient.

Before leaving then, it seems fitting to take a quick detour to a simple case where none of the above issues cause trouble. In all that has been discussed so far we consider purely diffusion limited reactions: all particles that meet must react, and the probability of reaction is a direct reflection of this probability of meeting. Many chemical systems however are reaction limited: two molecules may come into contact with one another, but without sufficient energy and the correct approach angle no reaction will occur. Yogurtcu & Johnson [77] studied the question of "When is a single reaction rate $k$ a good enough approximation in 2D?" They show that if we replace the standard Smoluchowski absorbing boundary $A(\rho,t) = 0$ with a partially reflecting boundary $\kappa A(\rho,t) = D\frac{\partial A}{\partial r}(\rho,t)$, then

$$k_{eff}(t) = 2\pi\rho\kappa\left(\frac{2Dt}{\rho^2 e^{2\gamma}}\right)^{-\rho\kappa/2D}. \tag{2.16}$$

They go on to comment that for $2\pi\rho\kappa < 0.05D$ it can be shown $k_{eff}$ will only decay by 20% over the age of the universe.
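That claim is easy to sanity-check from the power-law tail of the reconstructed $k_{eff}$ above. The sketch below is my own check, with nondimensionalised $\rho$, $D$, and an assumed reference time of one second:

```python
import math

def keff_ratio(t1, t0, rho, kappa, D):
    """Ratio k_eff(t1) / k_eff(t0) implied by the power-law tail
    k_eff(t) ~ t ** (-rho * kappa / (2 * D))."""
    return (t1 / t0) ** (-rho * kappa / (2.0 * D))

# Illustrative numbers sitting exactly on the quoted threshold
# 2*pi*rho*kappa = 0.05*D (rho and D nondimensionalised to 1).
D, rho = 1.0, 1.0
kappa = 0.05 * D / (2 * math.pi * rho)
age_of_universe = 4.35e17  # seconds

decay = 1.0 - keff_ratio(age_of_universe, 1.0, rho, kappa, D)
```

With these numbers the decay comes out around 15%, consistent with the quoted figure of roughly 20% (the choice of $t_0 = 1$ s is my assumption).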
Given that even the slowest chemical systems of interest will seldom last for more than a day, this condition is clearly sufficient for claiming that $k_{eff} \approx k_{eff}(0)$ for all relevant times. From an experimental perspective, what this means is that systems which are reaction limited (high energy barrier to reaction), as opposed to diffusion limited (low energy barrier, low diffusion rate), can be treated as obeying a mass action law of the form $\dot{A} = k_{eff}AB$, and that this approximation will remain accurate up to experimental precision even in the case of reactions lasting for multiple days.

2.5 Conclusions

Mathematical modeling of chemical systems is useful, both because it allows us to predict the action of well understood chemical systems, and because it allows us to probe less understood systems by predicting the behavior of various proposed reaction schemes. Chemical systems can be represented at a variety of different scales, whether macroscopic, mesoscopic or microscopic, and each such representation brings with it various strengths and weaknesses in terms of accuracy, tractability, and computational speed.

In this chapter, I have discussed some of the weaknesses of macroscopic reaction-diffusion based models of chemical systems: namely, the way in which our PDE representations implicitly ignore correlation in particle position, and the numerous difficulties this can produce when comparing to the "ground truth" Brownian dynamics approach, especially in 2D. The model presented in section 2.4.3 is (to the best of my knowledge) original work, but for the most part, the results I have presented here are known, albeit often ignored or forgotten.
It is my hope that future researchers will be able to use this review as an introduction to this constellation of unspoken assumptions and approximations that are implicit in our modeling of chemical systems, and that this work will act as a starting point for further research into how to correctly convert between particle models and their corresponding differential equations (especially in 2D).

Despite all that I have said in this chapter, and the various limitations of using classical reaction-diffusion PDEs, in what follows in this thesis I will generally be relying on precisely this technique, and ignoring all the subtleties and concerns I have raised here. Partly this is a result of the order in which the research was completed, and partly it is simply a compromise with practicality; while we may aspire to create models that perfectly match reality in every way, in practice there is little benefit in describing a model that is just as intractable as our original system. There is no point in painting a map that is the same size as Paris.

Chapter 3
Some Direct PDE Simulations of the Min System

3.1 Introduction

In this chapter I make use of a chemical model of the Min system from the literature, and explore this model in 1D for a variety of bulk concentration values, in particular looking for nucleation behavior similar to that observed by Vecchiarelli et al. [64]. Here I will be making use of the standard PDE representation of chemical systems and will not be concerning myself with the subtleties of noise, nor the difficulty of reaction rates in two dimensions. Random effects will enter my system only via my initial conditions.

3.2 The Model

There are several models of the Min system, each involving a variety of different reactions and different species. In this chapter, I take a yet-to-be-published ODE model by Will Carlquist [71].¹
Carlquist's model contains six independent species: D, E, d, e, de and ded (figure 3.1). D and E refer to the cytosolic forms of MinD and MinE (respectively), d and e are the raw membrane bound forms of MinD and MinE, while de and ded are complexes of these molecules. All references to MinD in this chapter will refer to the presence of MinD dimers, as opposed to monomers, as dimers are the complex relevant to my MinD/E system.

¹The model used here comes from personal correspondence with Carlquist, based on parameter fitting conducted during Carlquist's PhD thesis. It is possible that by the time said thesis is published, slightly different numbers will have been fit, and these two documents will not agree exactly. Qualitatively, I do not anticipate any significant changes.

Figure 3.1: Basic dynamics of Carlquist's Min model.

The model is an "inclusive" model in that many conservative chemical reactions were initially permitted, along with most linear catalysis terms; hence the model includes terms not only for "the rate at which D becomes d", but also for "the rate at which de catalyzes the conversion of D to d". A full list of non-zero reactions is:

$$D \xrightarrow{d,\,de,\,ded} d \tag{3.1}$$
$$E + d \xrightarrow{e,\,de,\,ded} de \tag{3.2}$$
$$E + ded \xrightarrow{e,\,de,\,ded} de + de \tag{3.3}$$
$$E + ded \xrightarrow{de,\,ded} de + de \tag{3.4}$$
$$d + de \longrightarrow ded \tag{3.5}$$
$$d + e \longrightarrow de \tag{3.6}$$
$$d \xrightarrow{S(d)} D \tag{3.7}$$
$$de + de \longrightarrow ded + E \tag{3.8}$$
$$de + de \longrightarrow ded + e \tag{3.9}$$
$$ded + e \longrightarrow de + de \tag{3.10}$$
$$e \longrightarrow E \tag{3.11}$$

In all cases, superscripts on an arrow refer to species which the model allows to catalyze the corresponding reaction. $S(d)$ refers to the Hill function dynamics associated with the corresponding spontaneous D removal reaction, which has reaction rate $k\frac{d_s}{d_s + d + de + 2\,ded + C}\,d$. This reaction rate is designed to account for "extended residence times at higher protein densities" as observed by Loose et al. [37].
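The saturating removal rate above can be sketched directly. The functional form here is my reading of the garbled source, with `k` and `d_s` taken to play the roles of ω22 and γ from table 3.1, and the constant `C` assumed to be zero:

```python
def spontaneous_removal_rate(d, de, ded, k=0.315, d_s=184.66, C=0.0):
    """Saturating (Hill-type) rate for d -> D: the per-particle removal rate
    slows as total membrane occupancy d + de + 2*ded grows, mimicking the
    extended residence times reported at high protein density."""
    return k * d_s / (d_s + d + de + 2.0 * ded + C) * d
```

At low density the removal is nearly linear in d; at high density the per-particle rate falls off, so crowded regions hold on to their MinD longer.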
The exact physical mechanism by which high protein density extends residence times is potentially a matter of debate; here we ignore such discussion, in favor of simply making use of the model as parameterized by Carlquist.

Unlike Carlquist, who sought to fit his parameters so as to match spatially homogeneous temporal oscillations, I am interested primarily in the possibility of burst formation. As such, there are two significant changes that I will be making to Carlquist's model. First, I convert the system from a spaceless ODE to a spatially distributed PDE. For the sake of the analysis in this chapter, one spatial dimension will prove sufficient, although for the most part the analysis is independent of the precise number of dimensions considered.² The second modification I make is to explicitly consider the solution "bulk". Carlquist's original parameter fitting is based upon the assumption that solution concentrations of MinD and MinE are constant; a valid assumption given the experimental set up of the data Carlquist fits to [29], in which fresh solution is constantly pumped through the system at a fixed concentration (figure 3.2, inset). I, however, am interested in burst formation, a phenomenon shown by Vecchiarelli et al. [64] to be closely tied to the depletion of MinD from the cytosol. For this reason I consider a finite sized system, where D and E are allowed to vary in time so as to conserve the total mass of Min proteins.

3.2.1 The Mathematical Model

We implement Carlquist's chemical reactions as a system of partial differential equations. For the sake of notation, let $o = [d]$, $l = [e]$, $P = [de]$, $B = [ded]$, $O = [D]$ and $L = [E]$. The shapes of the letters are chosen such that $P$ is in some sense a combination of $o$ and $l$, and $B$ is a combination of $l$ and two $o$'s.
Let $O$ and $L$ be the solution concentrations of MinD and MinE. Membrane concentrations are measured in particles per $\mu m^2$, while solution concentrations I measure in particles per $\mu m^3$.

²Note that here we are imagining a two dimensional system in which we are free to ignore one dimension due to symmetry. This is very different to a system in which chemistry actually takes place in one dimension.

These assumptions lead to the system of equations:

$$\dot{o} = D_{mem}\Delta o + (\omega_1 + \omega_2 o + \omega_3 P + \omega_4 B)\frac{O}{O_0}\cdot\frac{\kappa - (o + P + 2B)}{\kappa} - \omega_5 o l + \omega_{18} P - (\omega_6 + \omega_7 l + \omega_8 P + \omega_9 B)\,o\,\frac{L}{L_0} - \omega_{10} o P - \omega_{22}\frac{\gamma}{\gamma + o + P + 2B}\,o,$$

$$\dot{P} = D_{mem}\Delta P + (\omega_6 + \omega_7 l + \omega_8 P + \omega_9 B)\,o\,\frac{L}{L_0} + 2(\omega_{11} + \omega_{12} l + \omega_{13} P + \omega_{14} B)\,B\,\frac{L}{L_0} + \omega_5 o l - \omega_{10} P o - \omega_{15} P - \omega_{16} P - \omega_{17} P^2 - \omega_{18} P + 2\omega_{19} B l + \omega_{20} B,$$

$$\dot{B} = D_{mem}\Delta B - (\omega_{11} + \omega_{12} l + \omega_{13} P + \omega_{14} B)\,B\,\frac{L}{L_0} + \omega_{10} o P + (\omega_{17} + \omega_{18}) P^2 - \omega_{19} B l,$$

$$\dot{l} = D_{mem}\Delta l + \omega_{17} P^2 + \omega_{18} P + \omega_{15} P - \omega_{21} l - \omega_5 o l - \omega_{19} B l,$$

$$\dot{O} = D_{sol}\Delta O - \dot{o} - \dot{P} - 2\dot{B},$$

$$\dot{L} = D_{sol}\Delta L - \dot{l} - \dot{P} - \dot{B}. \tag{3.12}$$

Here $\omega_i$, $\kappa$, $\beta$ and $\gamma$ all represent reaction parameters, all estimated in Carlquist's work. For values and physical descriptions of these, see table 3.1. $D_{mem}$ and $D_{sol}$ are the slow membrane bound diffusion coefficient and the fast solution diffusion coefficient (respectively). For the sake of this analysis, I assert $D_{mem} \ll D_{sol}$, and do not concern myself with the exact values.

Carlquist's parameter fitting is based upon the assumption of some fixed solution concentration. $O_0 = 319.17\mu m^{-3}$ and $L_0 = 819.01\mu m^{-3}$ represent these fixed concentrations for MinD dimers and MinE monomers respectively, and are used to convert from Carlquist's fitted "reaction parameters" to more concrete reaction coefficients [29].

If we wish to study spatial patterning, we will need to consider diffusion of particles, both in the three dimensional solution, and in the two dimensional membrane.
Rather than model both the solution and membrane explicitly, I use the fact that diffusion in solution is known to be fast relative to membrane diffusion, and make the approximation that the solution is effectively well mixed.

Figure 3.2: For large bulk, the solution concentration is largely independent of membrane concentration. For small bulk, any Min protein binding to the membrane will noticeably reduce solution concentration. We represent this "bulk height" by h. (Inset) In Ivanov and Mizuuchi's experimental set up, the solution is constantly replaced, the membrane has no effect on the bulk, and thus h is effectively infinite.

By conservation of particles this implies

$$O \approx O_{full} - \frac{\bar{o} + \bar{P} + 2\bar{B}}{h}, \tag{3.13}$$
$$L \approx L_{full} - \frac{\bar{l} + \bar{P} + \bar{B}}{h}. \tag{3.14}$$

Here $\bar{o}$ represents the spatially averaged $o$ value (and likewise for the other variables), and $h$ represents the assumed depth of our solution. Deep solutions will be relatively unaffected by the dynamics of our membrane, while shallow solution layers are rapidly depleted as the proteins collect upon the membrane. $O_{full}$ and $L_{full}$ represent the solution concentrations that would occur if our membranes were to become entirely empty. $O_{full}$, $L_{full}$ and $h$ are parameters that took on specific values in Ivanov's experiments, but that we vary here in order to consider different experimental regimes, as seen, for example, in [64].

3.3 Exploring Parameter Space with Continuation Methods

Before jumping directly into simulating our PDEs, I would like to do a preliminary exploration of parameter space in order to determine which parameter regimes are likely to give interesting behavior. I consider our system as determined by three parameters: $O_{full}$, $L_{full}$ and $h$. Collectively we refer to these parameters as $\mathbf{p} = [O_{full}, L_{full}, h]'$. We describe the state of the system by the vector $\mathbf{u}(x,t) = [o, P, B, l]'$.
The state of the system $\mathbf{u}$ changes in time due to the action of diffusion and chemical reactions, according to equations 3.12, which for the sake of parsimony and readability we denote $\dot{\mathbf{u}} = D_{mem}\Delta\mathbf{u} + \mathbf{f}(\mathbf{u})$. Because we are (for now) considering homogeneous states, I assume $\Delta\mathbf{u} = 0$.

For any given trio of parameters, I want to know the homogeneous steady states of our system, along with the stability of these steady states. Stated algebraically, we seek $\mathbf{u}$ such that $\mathbf{f}(\mathbf{u}) = 0$, and for each such $\mathbf{u}$ we use the eigenvalues of suitable Jacobian matrices to determine the state's stability against both spatially uniform and spatially localized perturbations, i.e. bursts.

3.3.1 Newton's Method and Continuation

For most parameter values $\mathbf{p}$, it is computationally straightforward to find homogeneous steady states of our system using Newton's method:

$$\mathbf{u}_{k+1} = \mathbf{u}_k - [\hat{J}(\mathbf{u}_k)]^{-1}\mathbf{f}(\mathbf{u}_k). \tag{3.15}$$

The Jacobian $\hat{J}(\mathbf{u})$ can be found by using Matlab's symbolic mathematics toolbox to differentiate eq. 3.12 with respect to $o, l, P, B$. In practice, we use the symbolic toolbox once, and then transcribe the resulting output into an explicit Matlab function.
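The iteration in equation 3.15 can be sketched in a few lines. In this illustration a finite-difference Jacobian stands in for the symbolically derived one, and the kinetics are a toy two-species example rather than Carlquist's f:

```python
def solve_linear(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [a - f * m for a, m in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def jacobian_fd(f, u, eps=1e-7):
    """Finite-difference approximation to the Jacobian of f at u."""
    fu, n = f(u), len(u)
    J = [[0.0] * n for _ in range(n)]
    for j in range(n):
        up = list(u)
        up[j] += eps
        fp = f(up)
        for r in range(n):
            J[r][j] = (fp[r] - fu[r]) / eps
    return J

def newton_steady_state(f, u0, steps=50):
    """Equation 3.15: u <- u - J(u)^-1 f(u), iterated from a seed u0."""
    u = list(u0)
    for _ in range(steps):
        step = solve_linear(jacobian_fd(f, u), f(u))
        u = [x - s for x, s in zip(u, step)]
    return u
```

Usage mirrors the thesis procedure of seeding with arbitrary values and iterating roughly 50 times; the toy system below has a steady state at (1, 1).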
Attempting to use the symbolic toolbox directly at every iteration step is computationally expensive.

Symbol | Value | Description
L0 | 819.01 µm⁻³ | Bulk E concentration assumed in fitting.
O0 | 319.17 µm⁻³ | Bulk D concentration assumed in fitting.
Ofull | free parameter | Maximal cytosol concentration of MinD.
Lfull | free parameter | Maximal cytosol concentration of MinE.
h | free parameter | Depth of cytosolic layer, measured in µm.
ω1 | 0 µm⁻²s⁻¹ | Base reaction rate D → d.
ω2 | 0.38336 s⁻¹ | Catalysis of D → d by d.
ω3 | 0.19539 s⁻¹ | Catalysis of D → d by de.
ω4 | 0.39594 s⁻¹ | Catalysis of D → d by ded.
ω5 | 0.54231 µm²s⁻¹ | Base reaction rate d + e → de.
ω6 | 0.0033428 s⁻¹ | Base reaction rate d + E → de.
ω7 | 1.9835 µm²s⁻¹ | Catalysis of d + E → de by e.
ω8 | 1.9885×10⁻¹⁰ µm²s⁻¹ | Catalysis of d + E → de by de.
ω9 | 1.2952×10⁻⁹ µm²s⁻¹ | Catalysis of d + E → de by ded.
ω10 | 4.7654×10⁻⁵ µm²s⁻¹ | Base rate of reaction d + de → ded.
ω11 | 3.3241×10⁻⁸ s⁻¹ | Base rate of reaction E + ded → de + de.
ω12 | 0.19988 µm²s⁻¹ | Catalysis of E + ded → de + de by e.
ω13 | 1.6064×10⁻⁵ µm²s⁻¹ | Catalysis of E + ded → de + de by de.
ω14 | 2.2436×10⁻⁵ µm²s⁻¹ | Catalysis of E + ded → de + de by ded.
ω15 | 7.4886×10⁻⁷ µs⁻¹ | Base rate of reaction de → D + e.
ω16 | 0.19633 s⁻¹ | Base rate of reaction de → D + E.
ω17 | 0.00059851 µm²s⁻¹ | Base rate of reaction de + de → ded + e.
ω18 | 0.053617 s⁻¹ | Base rate of reaction de → d + e.
ω19 | 9.7325 µm²s⁻¹ | Base rate of reaction ded + e → de + de.
ω20 | 0 s⁻¹ | Base rate of reaction ded → de + D.
ω21 | 0.051613 s⁻¹ | Base rate of reaction e → E.
ω22 | 0.315 s⁻¹ | "Base" rate of reaction d → D.
κ | 5346.6 | Saturation concentration of d on membrane.
γ | 184.66 | Hill coefficient of spontaneous release of MinD from membrane.

Table 3.1: Parameter values and meanings from Carlquist's model. ωi, κ, γ provided via private correspondence with Carlquist, and based upon numerical fits of Ivanov et al.'s data [29]. L0, O0 as described in Ivanov's experimental methodology.
Despite experimental uncertainty, we retain this level of significant figures so as to better compare our results with those of Carlquist.

Because we are representing $L$ and $O$ implicitly, each column of our $4\times 4$ Jacobian matrix $\hat{J}(\mathbf{u}_k)$ must account not only for increases in the corresponding membrane bound species, but also for the corresponding decrease in the cytosolic species: every $\epsilon$ increase in the concentration of membrane bound d results in an $\epsilon/h$ decrease in the concentration of cytosolic MinD.

For any given parameter trio, steady states of the system can be found by initializing Newton's method with randomly selected values, iterating 50 or so times using eq. 3.15, and then checking that the resulting $\mathbf{u}_k$ has non-negative concentrations and has $||\dot{\mathbf{u}}|| \ll 1$. In practice, I demanded that $||\dot{\mathbf{u}}||_2 < 10^{-15}$ and $\mathbf{u}_k > -10^{-5}$. Allowing slightly negative concentrations was necessary in order to prevent the computer rejecting the zero state due to numerical error.

Once a steady state is found for one set of parameter values, this steady state can be used as a starting point to find nearby steady states using the continuation package Coco, an open source extension built upon the proprietary software Matlab. We treat Coco as a black box; for details on its inner workings, I recommend [11]. For the sake of this thesis, it is sufficient to know that continuation is a numerical method used to find solutions to an equation $\mathbf{f}(\mathbf{u},\mathbf{p}) = 0$ for some value of $\mathbf{p}$, based on knowing a solution for a nearby value of $\mathbf{p}$.

We use Coco to flag critical points, such as saddle node bifurcations, pitchfork bifurcations and other changes of stability, and then use Coco again to map out the path of these critical points in parameter space (see figure 3.3).
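Stripped of pseudo-arclength stepping and event detection, the core continuation idea can be sketched in a few lines. The following is a scalar toy of my own, not Coco itself:

```python
def continue_branch(f, dfdu, u0, p_values, newton_steps=50):
    """Natural-parameter continuation: sweep the parameter p and reuse the
    previous solution of f(u, p) = 0 as the Newton seed for the next p.
    This is the core idea behind packages like Coco, minus arclength
    stepping and bifurcation (event) detection."""
    branch, u = [], u0
    for p in p_values:
        for _ in range(newton_steps):  # scalar Newton iteration
            u = u - f(u, p) / dfdu(u, p)
        branch.append((p, u))
    return branch

# Example: follow the steady-state branch u = p**(1/3) of u' = p - u**3.
branch = continue_branch(lambda u, p: p - u ** 3,
                         lambda u, p: -3.0 * u ** 2,
                         u0=1.0, p_values=[1.0, 1.2, 1.5, 2.0])
```

Because each Newton solve starts from the neighbouring solution, the method tracks a single branch even when other steady states coexist; following a branch around a fold is exactly what the arclength machinery omitted here is for.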
By mapping out these critical points we identify the boundaries between qualitatively different behaviors.

3.3.2 Detecting Spatial-Temporal Stability and Patterning

One of our primary goals in our bifurcation analysis is to find regions of spatial and temporal patterning in our parameter space. Rather than looking for patterning directly, we instead look for regions where we do not expect pattern formation: regions where the system is stable. Here I give an (incredibly) brief review of various forms of stability, before discussing their particular implementation with respect to our system. For a more thorough introduction to bifurcation theory, I recommend Strogatz' "Nonlinear Dynamics and Chaos" [55].

Figure 3.3: Cartoon illustration of continuation methods. (A) Newton's method is used to find an initial steady state. (B) The Coco continuation package allows us to follow the location of this steady state as we vary one of our parameters. (C) Critical points are identified. Branch points such as transcritical and saddle node bifurcations can be used as starting points for further Coco continuation. (D) Once critical points have been found, they can be tracked through parameter space. This is done by "releasing" one of our parameters and allowing it to vary, while also supplying Coco with the additional constraint "remain on the saddle-node bifurcation" (or similar).

A particular state of a system $\mathbf{u}$ is said to be an asymptotically stable steady state if all trajectories starting "close to" $\mathbf{u}$ tend towards $\mathbf{u}$ as $t\to\infty$. If $\mathbf{u}$ is a steady state, and we start our trajectory at $\mathbf{u} + \mathbf{v}$, with $||\mathbf{v}||_2$ sufficiently small, then our trajectory will move in the direction

$$\frac{d\mathbf{v}}{dt} \approx \hat{J}(\mathbf{u})\mathbf{v}. \tag{3.16}$$

If we select $\mathbf{v}$ to be an eigenvector of $\hat{J}$, then it will either grow or shrink according to the sign of the corresponding eigenvalue $\lambda$. When all eigenvalues have negative real part, this indicates that all perturbations must shrink back towards our steady state.
When the eigenvalue with largest real part passes from the negative real half plane to the positive real half plane, this corresponds to a change in stability. If stability is lost by a pair of complex eigenvalues passing from the negative half plane to the positive half plane, we recognize this as a Hopf bifurcation; it indicates that perturbations will lead to an "outward spiral" from the steady state, a common signature associated with oscillating behavior. When a single real eigenvalue passes from the negative to the positive half plane via zero, this is the signature of a saddle-node, pitchfork, or transcritical bifurcation. Coco detects and categorizes such changes in stability by default.³

³Note, while classically a saddle-node is associated with a stable and unstable steady state colliding and annihilating one another, from the point of view of Coco this is still viewed as a single stable steady state losing stability; Coco follows the trajectory of steady states, even when such a trajectory folds over itself.

All the above bifurcations and changes in stability help us to understand the ODE system, but do nothing to inform us of spatial patterning in the full PDE. In order to detect spatial patterning, we use "Local Perturbation Analysis" (henceforth, LPA). LPA was first developed by A. F. M. Marée and Veronica Grieneisen [31, 65]. A full introduction to the theory has been given by Holmes et al. [27]. A particularly readable use of the theory has been given by Mata et al. [39], in which they study pattern formation in actin regulatory proteins. Here I give a brief introduction to LPA as it applies to small perturbations from a homogeneous steady state.⁴

Suppose we take a small patch of our membrane, and artificially perturb its state to $\mathbf{u} + \mathbf{v}_l$. Under the assumption that $D_{sol}$ is large, and the solution is well mixed, the changes to solution concentrations will be negligible for sufficiently small patches of membrane (see figure 3.4).
Figure 3.4: An initially localized perturbation to some homogeneous steady state u is rapidly averaged throughout the solution, leading to negligible change (top). In the slow diffusing membrane, however, the perturbation is only moderately affected by diffusion, and behavior is dominated by the reaction rates, as encoded by J₄ (bottom).

If we further assume that $D_{mem}$ is small, then diffusion of membrane bound proteins can be considered negligible, and the dynamics of our narrow bump will be governed by

$$\frac{d\mathbf{v}_l}{dt} \approx J_4(\mathbf{u})\mathbf{v}_l. \tag{3.17}$$

The matrix $J_4$ is defined as the Jacobian of equation 3.12 formed if we do not account for changes to solution concentrations $O$ and $L$ as we alter the membrane; when making only localized perturbations we do not expect bulk concentrations to be affected. As before, the eigenvalues of our Jacobian matrix determine the stability of the system: eigenvalues with positive real components indicate instability.

⁴I assume $||\mathbf{v}_l||_2 \ll 1$ and hence $J_4(\mathbf{u})\mathbf{v}_l$ is a good approximation of $\mathbf{f}(\mathbf{u} + \mathbf{v}_l)$. The full LPA theory can be used even for $||\mathbf{v}_l||_2 = O(1)$, and does not assume that the state we are perturbing from is a steady state, but this full generality is not needed here.
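For a two-species caricature this stability bookkeeping reduces to the Routh-Hurwitz conditions. The sketch below is purely illustrative: the thesis works with 4 by 4 Jacobians, and the matrices here are invented numbers.

```python
def stable_2x2(J):
    """Routh-Hurwitz test: both eigenvalues of a 2x2 matrix have negative
    real part iff trace(J) < 0 and det(J) > 0."""
    tr = J[0][0] + J[1][1]
    det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
    return tr < 0 and det > 0

def classify(J_full, J_local):
    """LPA-style verdict for a homogeneous steady state: compare stability
    of the full Jacobian (uniform modes) against the local Jacobian J4."""
    if not stable_2x2(J_full):
        return "unstable to uniform perturbations"
    if not stable_2x2(J_local):
        return "stable globally, unstable to local perturbations"
    return "stable"

# Invented numbers: bulk depletion stabilises the uniform mode (J_full),
# but a local bump that cannot deplete the bulk grows (J_local).
J_full = [[-1.0, 0.5], [0.2, -0.5]]
J_local = [[0.3, 0.5], [0.2, -0.5]]
verdict = classify(J_full, J_local)
```

The middle verdict is the interesting one for burst formation: the uniform state resists global perturbations, yet a localized bump can grow.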
In cases where $\hat{J}(\mathbf{u})$ has eigenvalues with positive real component, the system is unstable to global changes. In cases where $\hat{J}(\mathbf{u})$ has no eigenvalues in the positive half plane, but $J_4(\mathbf{u})$ does, we predict that the system will resist global changes, but that local changes to membrane concentration can persist and grow in time, leading to heterogeneous final solutions.

3.4 Results

3.4.1 Shallow Solution

For $h = 1\mu m$ we observe three major boundaries in parameter space, and three "types" of homogeneous steady state: one associated with the 'empty' membrane, one 'd-dominated' and the third 'e-dominated' (see figure 3.6). More delicate structures around the boundaries of these shapes demarcate regions of parameter space that permit more interesting pattern formation behavior.

In region A (high $D_{max}$, low $E_{max}$) there exists a homogeneous steady state characterized by significant d concentration, with e minimal, and the concentrations of de and ded varying throughout the region. The bright green border of this region represents a saddle node bifurcation, at which point the "d-dominated" manifold ceases to exist as you pass from the top of the bifurcation plot towards the bottom. While the d-dominated steady state is stable to spatially uniform perturbations, $J_4$ has at least one positive eigenvalue throughout region A, indicating instability to local perturbations.

The dark blue line represents a transcritical bifurcation; below this line (region C) the empty surface state $\mathbf{u} = [0,0,0,0]$ is stable, while above the line this state is unstable. For $E_{max} < E_0$ (the left hand half of our plane), this transcritical bifurcation is almost immediately followed by a saddle node as we increase $D_{max}$, giving it the appearance of a subcritical pitchfork bifurcation. Even after increasing $D_{max}$ past these bifurcations, the empty membrane state remains as a steady state, albeit an unstable one. The upper right quadrant of our domain (region B; $E_{max} > E_0$, $D_{max} > D_0$) possesses a stable steady state where e dominates.
Here e and de are large, while d and ded concentrations are close to zero. Below, this region is bounded by the transcritical bifurcation (boundary B-C), where the stability of this e-dominated steady state is transferred to the zero state. To the left, region B is bounded by an LPA boundary, Hopf bifurcation and saddle node bifurcation (dark green) in quick succession. The saddle node bifurcation annihilates the e-dominated steady state altogether.

Figure 3.5: "Slice" view taken from figure 3.6 at $D_{max}/D_0 = 1.2$, $h = 1$, showing the transition between states A and B. On the left, the high-d A state is the only stable homogeneous steady state, and on the right the high-e B state is the only steady state. In between we encounter a bistable region. Coloured circles indicate saddle-node bifurcations, each denoting the boundary of one of the two states.

The Hopf and LPA boundaries bound regions E and D (respectively). These regions exist on our e-dominated manifold, and are illustrated in detail in the lower panel of figure 3.6. While these regions do extend all the way up along the boundary between A and E, they are only wide enough to be physiologically relevant in the vicinity of $E_{max} = E_0$, $D_{max} = D_0$. In region D, the e-dominated state is stable to all spatially uniform perturbations, but unstable to spatially heterogeneous perturbations.

Figure 3.6: (Top) Bifurcation diagram of system 3.12, parameters as given in table 3.1, $h = 1\mu m$. (Bottom) Detail near the origin. The black line denotes the location of a vertical "slice" view, displayed in figure 3.5. Region A has high d and low e, and extends as far as the "upper" saddle node (bright green). Region C denotes a region where the empty membrane is a stable steady state, and extends up to the dark blue line. Regions B, D and E are regions where there exists a high-e steady state. This steady state extends down to the dark blue line, and left up until the "lower" saddle node bifurcation (dark green).

As will be discussed later, this region leads to 'spike' nucleation from a nearly homogeneous background, similar to the behavior observed in the Min system. As we cross the red line into region E, the e-dominated steady state undergoes a supercritical Hopf bifurcation, and becomes unstable to spatially uniform perturbations. In practice, for all but the most perfectly uniform perturbation, spatial patterning quickly overwhelms the temporal oscillations, leading to "wave pinning" (discussed in section 3.5).

None of this stability analysis precludes the existence of periodic orbits or more complex structures, even in regions where we encounter only one steady state.

3.4.2 Medium Depth Solution

For $h = 20\mu m$ the total amount of MinD in the solution is approximately equal to the maximum amount of MinD that will fit onto the membrane. The topology of our bifurcation plot is similar to that found for $h = 1\mu m$. Once again we encounter an empty membrane steady state, a d-dominated steady state and an e-dominated steady state. The bifurcation boundaries between these states move, but remain qualitatively similar to our observations for the shallow solution system (see figure 3.7).

As before, region A permits a d-dominated steady state, region C permits the empty membrane steady state, and region B has a stable e-dominated steady state. The boundary between A and B is characterized by tightly packed LPA, Hopf and saddle-node bifurcations (at which point the e-dominated steady state ceases to exist as $E_{max}$ is reduced).

In contrast to the previous case, region A has expanded to significantly lower $D_{max}$ values, and is now divided into A and A'.
In region A', as in the h = 1 µm case, membrane-bound d is limited by the total amount of MinD available in the solution. Local perturbations are predicted to expand, and pattern formation is observed (see figure 3.11). In contrast, in region A, d is limited instead by membrane saturation; regardless of the state of the solution, Carlquist's model limits the membrane-bound concentration of d to κ = 5346.6 particles µm−2. In this region the system is stable against local perturbations.

Figure 3.7: Bifurcation diagram of system 3.12, parameters as given in table 3.1, medium depth, h = 20 µm. Similar to figure 3.6, we have regions A, B and C, denoting regions of high d, high e and an empty membrane respectively. Unlike figure 3.6, region A is now divided into regions A and A': in region A, d is limited by membrane saturation; in A', d is limited by solution depletion. Region A is stable against local perturbations; region A' is not.

As before, regions D and E exist on the e-dominated manifold, but, as predicted by theory, for increasing h the distance between these two boundaries shrinks considerably, to the point that region D ceases to be physiologically relevant.

3.4.3 Deep Solution

As a final exploration, we consider the case h = 200 µm. This parameter regime gives a close approximation to Carlquist's original model, which treats the bulk as having a fixed concentration independent of the amount of MinD bound to the membrane. For the sake of theoretical comparison, it would have been preferable to consider even larger h; however, for larger h my numerics become less stable, as the associated Ĵ become more and more ill-conditioned.

As previously, for h = 200 µm we observe three main steady-state regions, separated by qualitatively similar boundaries (see figure 3.8). I will dwell only briefly on the results of this bifurcation analysis.
Once again, region A represents a d-dominated regime, B is associated with the e-dominated regime, and C with our zero-solution regime. Region A has extended to even lower levels of Dmax. Region A' (the region where our d-dominated regime is unstable to localized perturbations) has shrunk dramatically, and now only exists in an exceedingly narrow band between the upper LPA boundary and the upper saddle node. Similarly, the lower LPA boundary follows the Hopf boundary closer than ever. Both of these effects are the inevitable consequence of J4 → Ĵ as h → ∞. In practice, what this indicates is that for h = 200 µm, pattern formation occurs only in incredibly small windows in parameter space. Unlike the h = 1 µm case, where spatial patterning tends to overwhelm oscillations in areas where both instabilities are applicable, for h = 200 µm oscillations dominate.

Our primary interest in this diagram is the point Dmax = D0, Emax = E0. This point represents the experimental parameter regime which Carlquist used when fitting the reaction parameters given in table 3.1. As can be seen from the bifurcation diagram, this point sits right at the boundary where the e-dominated steady state loses stability. This indicates that very small changes in the reaction kinetics are liable to qualitatively alter the behavior of the system.

Finally, for h = 200 µm, we observe a "twisted" region in the vicinity of Dmax/D0 = 0.7, Emax/E0 = 1. In this region, the e-dominated manifold stretches out over the zero steady state, causing the lower saddle-node bifurcation to fold over itself. The enclosed region (region F) represents a narrow window of parameter space where the d-dominated, e-dominated and empty membrane states coexist.
Despite this, the region is bistable, as opposed to tri-stable, on the grounds that the e-dominated state is naturally unstable in this region.

Figure 3.8: Bifurcation diagram of system 3.12, parameters as given in table 3.1, h = 200 µm. (Upper) For h = 200 µm we observe the upper and lower LPA boundaries drawing close to the saddle-node and Hopf boundaries (respectively). This indicates that spatial patterning occurs only in extremely narrow regions of parameter space. As before, we observe regions A, B and C, that is to say a d-dominated region, an e-dominated region, and an empty cytosol. Where regions overlap, the system is bi-stable, at least to the extent that each of the overlapping regions is stable. (Lower) Detail image of the "twisted" region, region F, along with Carlquist's "origin" point Dmax = D0, Emax = E0, h = 200 µm.

3.4.4 PDE Results

For the sake of confirming our steady-state results, and exploring how the system behaves away from uniform steady states, we use Matlab's pdepe for a variety of starting conditions and parameter values. As might be hoped, the results of PDE simulation match those predicted in our bifurcation diagrams: bistable regions are indeed demonstrated to be bistable (see figure 3.9), and regions where LPA predicts instability are indeed unstable⁵ when perturbed by spatial white noise (figures 3.10 and 3.11).

In the vicinity of the "origin" (Dmax = D0, Emax = E0, h ≫ 1 µm), our bifurcation predictions appear to be slightly off (figure 3.12), in that the bifurcation diagram (figure 3.8) predicts the e-dominated manifold to be ever so slightly unstable to perturbations, while the PDE appears to indicate stability.
Still, given that this set of parameters lies precisely on a boundary in our bifurcation space, this is not entirely surprising. Tweaking our parameters so that they are just below the bifurcation boundary (Dmax = 0.99D0, Emax = 0.99E0, h ≫ 1) recovers the temporal dynamics observed in Ivanov's data, as might be hoped (figure 3.12, lower).

If we select parameters within the "twist" zone (region F) of bifurcation diagram 3.8, we predict (and observe) bi-stability: randomly selected near-homogeneous initial conditions tend towards either the d-dominated state, or towards zero. More interesting behavior can be observed if we select heterogeneous initial conditions near the e-dominated manifold. In this case the system remains near the manifold for a time, before eventually breaking up into high and low d states (figure 3.13). Despite this interesting intermediate behavior, the system invariably converges to either the uniform zero state or the uniform high d state eventually.

As a final experiment, I considered parameter values within the narrow region D (see figure 3.6). In this region, theory predicts that the system is stable to all uniform perturbations, but unstable to spatially localized

⁵Given that LPA is applicable in some asymptotic limit of diffusion rates, and here my diffusion rates are finite, the boundaries between regions as given by LPA will inevitably be somewhat inexact... however, observations based on PDE simulation appear to indicate that the boundaries identified are a good approximation.

Figure 3.9: (Top) Starting from random nearly-homogeneous initial conditions, with p = [1.4, 1, 20], we observe our PDE converging to one of two homogeneous steady states: one d-dominated, the other e-dominated.
Each simulation run is denoted by a separate colour. For each run, maximum and minimum values of total membrane-bound d concentration are shown. These lines converging to the same value indicates that the final steady state is homogeneous, as predicted by theory. (Bottom) A similar plot for p = [0.6, 1, 20] shows random initial conditions converging to either the d-dominated steady state, or zero.

Figure 3.10: (Upper) For Dmax = 1.2D0, Emax = 0.5E0, we consider various values of h. As predicted by LPA, for h = 1 the system evolves towards a spatially heterogeneous state, while for h = 20, 200 the system tends towards the d-dominated membrane state (regardless of initial conditions). Each colour denotes a different h value, with the upper line of each colour indicating the maximum d concentration at a particular time, and the lower line the minimum. Note that for h = 20, 200 these maximum and minimum lines converge, indicating a homogeneous state, while for h = 1 the lines separate, indicating a spatially heterogeneous final state. (Lower) In the h = 1 case, our final state consists of a single bump. For h = 20, 200 the final state is homogeneous.

Figure 3.11: (Upper) For Dmax = 0.4D0, Emax = E0, and random initial conditions within the physically relevant range, we observe three distinct behaviors: convergence to zero (for small h, blue), convergence to a high d state (for large h, yellow), and convergence to a static heterogeneous state (medium h, red). Each colour denotes a different h value, with the upper line of each colour indicating the maximum d concentration at a particular time, and the lower line the minimum.
Regardless of h, initial values sufficiently close to zero will converge to zero; here we have cherry-picked starting values so as to demonstrate the three possible behaviors. (Lower) Here we observe the final state of the h = 20 case. As can be seen, the domain is partitioned into two regions: one rich in MinD, the other poor in it. Each region is nearly homogeneous, and a narrow boundary layer separates the two.

Figure 3.12: (Top) PDE dynamics for p = [1, 1, 20000], that is to say, Dmax = D0, Emax = E0, h = 20,000 µm. Our initial condition is perturbed slightly away from the e-dominated steady state, but (after many oscillations) is observed to return to this steady state. Here the dynamics are expected to mimic Ivanov's data fairly closely, but this is not observed. (Bottom) By selecting parameters slightly further inside region E (p = [0.99, 0.99, 20000]), Ivanov's dynamics are recovered. In both cases, we consider both perfectly homogeneous initial conditions and initial conditions with low-level noise; in both cases we observe spatially homogeneous behavior over a longer time scale, hence spatial effects are not shown.

perturbations, with J4 having a pair of complex conjugate eigenvalues in the positive real half-plane. As predicted by the theory, we observe (figure 3.14) spatially localized patches, oscillating with increasing amplitude. Given sufficient time, the domain separates into two static regions: one with high MinD, the other with low MinD.

3.5 Wave Pinning

In cases where heterogeneity is predicted, PDE simulations indicate that our domain separates into two phases, one rich in MinD, the other poor in it (see for example figures 3.10 and 3.11 (lower), or 3.14).
We observe that the boundary between these two regions evolves slowly in time, expanding one region and contracting the other until equilibrium is reached and the wave front is "pinned".

3.5.1 Introduction to Wave Pinning

Wave pinning is a phenomenon first proposed by Mori et al. [42], and later explored by Walther et al. [70]. In its original form, wave pinning was used to explain the polarization of Rac and Rho proteins in cells. Rather than describe this phenomenon in the abstract, it is instructive to consider a particular toy example originally proposed by Mori et al. [43]:

u_t = ε u_xx + u(1 − u)(u − 1 − v),
v_t = ε⁻¹ v_xx − u(1 − u)(u − 1 − v). (3.18)

These equations provide a canonical example of a system with wave-pinning behavior. For concreteness, in the context of this thesis one can (very loosely) think of u as representing membrane-bound MinD, while v represents fast-diffusing cytosolic MinD⁶. If we assume ε → 0, the fast diffusion term in v will ensure that v is spread almost uniformly throughout space. Assuming any particular fixed v > 0, the system has steady states at u = 0, 1 + v and 1, which we label u−, u+ and u0, respectively. u− and u+ are

⁶u and v were instead used to represent entirely different proteins in their original context.

Figure 3.13: (Top) Total membrane MinD concentration vs. space and time for Dmax = 0.75D0, Emax = 0.975E0, h = 200 µm (region F). Initial conditions are close to the e-dominated manifold, with low-amplitude white noise. (Bottom) Slices for particular x and t values. The location of these slices is marked by white dashed lines in the upper panel. Our t slice (lower left) shows the concentration at x = 20 jumping from a low level at early times to the upper steady state at large times.
Our x slice (lower right) shows the spatial heterogeneity at t = 100. Given that both J4 and Ĵ have positive eigenvalues for these initial conditions, it is interesting, but not startling, to observe that we do indeed observe spatial patterning. The most positive eigenvalue of J4 is 0.127 (associated with LPA), while for Ĵ it is 0.126 (associated with homogeneous perturbations).

Figure 3.14: Dmax = 0.9D0, Emax = 1.095E0, h = 1 (region D). Initial conditions selected close to the e-dominated manifold, with low-amplitude white noise. (Top) As indicated by the positive-complex eigenvalues of J4, the PDE gives spatially localized oscillations. Over time these merge and overlap. (Middle) Given sufficiently long time, our low-amplitude oscillation breaks up into a single static high-d region, with the rest of the domain having low d concentration. White lines indicate vertical slices for constant x (lower left and lower right). These slices give a more concrete picture of our oscillations, followed by break-up into a high and a low state. Please note that the two slices make use of significantly different y scales.

stable, while u0 is an unstable steady state. Any region where u > u0 will evolve towards u+ on an intermediate time scale. Anywhere where u < u0 will evolve towards u−. The only exception to these assertions is locations where u_xx is large enough that ε u_xx = O(1). We thus expect (and observe) the above system to evolve towards a state where u = 0, u = v + 1, or u is transitioning between one of these two steady states.

At this stage, the behavior within each regime is very simple. The only region where we are unsure of the system's behavior is at the boundary between these two regimes.
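These bulk stability claims are easy to check directly. The following minimal sketch (in Python, purely for illustration; the thesis simulations themselves are written in Matlab) verifies that the reaction term f(u) = u(1 − u)(u − 1 − v) of equation 3.18 vanishes at u−, u0 and u+, and has the claimed sign of linearization at each:

```python
# Illustration only: verify the stability structure of the reaction term of
# equation 3.18, f(u) = u(1 - u)(u - 1 - v), at a fixed cytosolic value v > 0.
v = 0.5

def f(u):
    return u * (1 - u) * (u - 1 - v)

def df(u, h=1e-6):
    # centred finite-difference approximation to f'(u)
    return (f(u + h) - f(u - h)) / (2 * h)

u_minus, u_0, u_plus = 0.0, 1.0, 1.0 + v  # the three steady states

for u in (u_minus, u_0, u_plus):
    assert abs(f(u)) < 1e-9   # each value is a root of f
assert df(u_minus) < 0        # u- is linearly stable
assert df(u_plus) < 0        # u+ is linearly stable
assert df(u_0) > 0           # u0 is unstable
```

The same signs hold for any fixed v > 0; changing v only moves the location of u+.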
Our boundary region will be in steady state precisely when:

u_t = 0 = ε u_xx + u(1 − u)(u − 1 − v), (3.19)

∫_{x−}^{x+} 0 · u_x dx = 0 = ∫_{x−}^{x+} [ε u_xx + u(1 − u)(u − 1 − v)] u_x dx, (3.20)

0 = [ε u_x²/2]_{x−}^{x+} + ∫_{u−}^{u+} u(1 − u)(u − 1 − v) du, (3.21)

0 = ∫_{u−}^{u+} u(1 − u)(u − 1 − v) du. (3.22)

Here we have assumed that u → u− for low x and u → u+ for high x. Because u(x, t) ≈ u± outside of our boundary region, we have [ε u_x²/2]_{x−}^{x+} ≈ 0, leaving only ∫_0^{1+v} u(1 − u)(u − 1 − v) du = 0. This happens precisely when v = 1.

Further analysis, which we will not get into here, can be used to show that the u+ region will expand when ∫_0^{1+v} u(1 − u)(u − 1 − v) du > 0 (reducing v) and contract when ∫_0^{1+v} u(1 − u)(u − 1 − v) du < 0 (thus increasing v). See [43] for details. The position of the boundary thus converges to a position such that v satisfies ∫_0^{1+v} u(1 − u)(u − 1 − v) du = 0.

In a general sense, wave pinning provides a potential lens through which to consider a system whenever we have a fast- and a slow-moving species such that each can be converted into the other, along with a clear boundary between two phases for our slow-moving species. Clearly, the Min system as described by Carlquist has the required fast and slow diffusing species, and PDE simulations demonstrate the co-existence of two separate "phases".

Figure 3.15: (Top) Simulation of eq. 3.18. An initial localized perturbation causes the domain to separate into two distinct phases. Over time the "high u" phase expands, slowing down as v decreases. Eventually v decreases to the point where further expansion is impossible, and the wave becomes "pinned". (Middle) This same PDE, now represented as a kymograph, whereby u(x, t) is represented by a colour, and we view this value for all x, t. (Bottom) The stability of the boundary layer is determined by ∫_0^{1+v} f(u) du. For v > 1 this integral is positive; for v < 1 this integral is negative.
For v = 1 the area enclosed is precisely zero, and the phase transition layer is stationary.

3.5.2 Complications of Wave Pinning Theory for Multi-Species Systems

We would like to describe our observations of Carlquist's Min model in terms of wave pinning; unfortunately, this is not as straightforward as might be hoped. Previously, wave pinning has been characterized by three conditions: bistability, homogeneous stability, and the velocity sign condition [43].

Bistability states that for some given value of the cytosolic species v, the spaceless ODE system possesses two stable steady states u+(v), u−(v), along with an unstable steady state um; for equations 3.22 these are given by v + 1, 0 and 1, respectively. The second condition (homogeneous stability) states that these steady states are stable not just in the spaceless ODE system, but also in the spatially distributed system.

Both these conditions are easy enough to adapt to our higher number of species. Rather than finding u±(v), we instead search for u±(v) where u = [o, P, B, l] and v = [O, L]. This can be accomplished numerically using multivariate Newton iteration, and our results are both fast and accurate. We make no attempt to find um: unlike the single-variable case, where stable steady states are separated from one another by unstable steady states, in our higher-dimensional case we anticipate u+ and u− being separated by an unstable manifold. Given that this object is both harder to detect and irrelevant to our goals, I make no attempt to locate it. Once u±(v) are found, it is easy enough to check stability by examining the eigenvalues of the appropriate Jacobians (see section 3.3.2). Testing these eigenvalues in any particular case is trivial. As a concrete example, consider the system state as shown in figure 3.11. Here the cytosolic concentrations are v = [0.13D0, 0.98E0]. Newton's method indicates that for these values, we have the steady states u+ ≈ [1896, 96.4, 460, 0] and u− = [0, 0, 0, 0].
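The multivariate Newton iteration used here is entirely standard. As a sketch (Python, for illustration only; a two-variable toy system stands in for Carlquist's four membrane-bound species):

```python
# Illustration only: a generic multivariate Newton iteration of the kind used
# to locate u+/-(v), shown on a two-variable toy system (not Carlquist's
# reaction terms): f(x, y) = (x^2 - y, x + y - 2), which has a root at (1, 1).

def newton2(f, jac, x, y, tol=1e-12, max_iter=50):
    """Newton's method for two equations, with a hand-rolled 2x2 solve."""
    for _ in range(max_iter):
        f1, f2 = f(x, y)
        if abs(f1) < tol and abs(f2) < tol:
            break
        (a, b), (c, d) = jac(x, y)
        det = a * d - b * c
        # Solve J [dx, dy]^T = [f1, f2]^T by Cramer's rule, then step.
        dx = (f1 * d - b * f2) / det
        dy = (a * f2 - c * f1) / det
        x, y = x - dx, y - dy
    return x, y

f = lambda x, y: (x * x - y, x + y - 2.0)
jac = lambda x, y: ((2.0 * x, -1.0), (1.0, 1.0))
root = newton2(f, jac, 3.0, 0.0)
assert abs(root[0] - 1.0) < 1e-9 and abs(root[1] - 1.0) < 1e-9
```

In the actual computation, the same update runs on the four-component residual of the membrane reaction terms, with the Jacobian of Carlquist's kinetics in place of the toy `jac` above.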
The Jacobians around each state have purely negative eigenvalues, although in the case of u−, the largest eigenvalue is −O(10⁻⁵), and hence convergence to the steady state is slow.

The final condition mentioned in [43] is the so-called "velocity sign" condition, which asserts that I(v) = ∫_{u−}^{u+} f(u, v) du = 0 for some value of v. This condition essentially states that the connection between u− and u+ satisfies equation 3.22. This condition is significantly more difficult to translate into our higher-dimensional case, as our single "boundary region integral" derived from u̇ = 0 is replaced by four separate integral conditions, one for each element of u̇ = 0. To give a particular example:

Ḃ = 0 = Dmem B_xx − (ω11 + ω12 l + ω13 P + ω14 B) B L/L0 + ω10 o P + (ω17 + ω18) P² − ω19 B l,

hence,

Dmem ∫ B_xx B_x dx = −∫_{B−}^{B+} [−(ω11 + ω12 l + ω13 P + ω14 B) B L/L0 + ω10 o P + (ω17 + ω18) P² − ω19 B l] dB. (3.23)

We have similar conditions for each of o, P and l. Each of these conditions is a function of all membrane-bound variables. In the original single-species case, the conversion from a dx integral to a du integral allowed Mori et al. to determine the required v even without knowledge of the actual shape of the boundary layer. In the multidimensional case there is no such benefit: integrating with respect to B provides no advantage when Ḃ is a function of P, B, o and l, as we do not know the exact shape of B, o and l relative to P. In addition, there is no guarantee that all four integral conditions can be satisfied simultaneously. The velocity sign condition is further complicated by the fact that Mori's original condition asks that "there exists a v such that the integral is zero." Here, however, we have two fast-diffusing species.
We thus expect I(v) to be a function of both variables. For any given cytosolic concentration of MinD, we might hope that there exists a corresponding MinE concentration that will satisfy our integral (and vice versa).

Given the complications discussed, confirming any form of "velocity sign" condition for Carlquist's model analytically seems unlikely. Nonetheless, the underlying idea of wave pinning (the notion of two homogeneously stable regions separated by a boundary that must inevitably move to some form of equilibrium based on global properties of the system) is still entirely reasonable, and seems a useful lens through which to view our results.

3.5.3 Evidence of Wave Pinning in Carlquist's Model

The most basic requirement for wave pinning theory to be applicable is that a PDE representation of our equations will result in our domain being separated into two distinct regions, each close to one of the steady states of the "localized" system⁷. For low h (that is to say, low cytosolic bulk), this is indeed observed (see figures 3.10 and 3.11). The appropriate uniform and local stability conditions can also be checked by examining the appropriate Jacobians numerically; we find that all eigenvalues are negative and real.

Next, I would like to demonstrate that the pinning position of our wave depends not on our domain size or shape, nor on our numerical method, but purely on the cytosolic concentrations. In order to test this, I allow my PDE to run for a time until it is close to equilibrium. I then artificially pump both MinD and MinE into the cytosol of my system for a time, and then observe as the boundary layer approaches a new equilibrium. We select the total amounts of MinD and MinE to pump in such that ∆D = [1, 1, 2, 0] · (u+ − u−)∆A and ∆E = [0, 1, 1, 1] · (u+ − u−)∆A, and then check that the observed change in wave position matches ∆A⁸.
Here I have followed in the footsteps of [70], who used a somewhat similar test to show that wave pinning was not the result of numerical propagation failure.

After 260 runs, we find that the line fitted to the points (∆Apred, ∆Aobs) has slope 0.9909 and x intercept −0.009, with 95% confidence intervals

⁷Other systems, such as the Allen–Cahn [1] equations, also have boundary layers, but in these cases the motion of boundary layers is governed by local properties, such as the curvature of the boundary layer. In our case wave pinning is governed by global properties of the system.

⁸Note that we are NOT free to choose ∆D and ∆E independently, as this will lead to our wave-pinning equilibrium relaxing to a different part of the D/E plane, with different u+, u−.
[0.9907, 0.9911] and [−0.012, −0.005]. Neither of these intervals includes the predicted slope and intercept values of 1 and 0 (respectively); however, both are close.

Considering that the mesh size for these simulations was ∆x = 0.019, it seems likely that the nonzero intercept is simply the result of finite mesh size. The slope being off is potentially the result of limited simulation time and slow relaxation to the "exact" wave-pinning position, but considering that similar results are found even as we increase simulation time substantially, this answer appears unsatisfying.

3.6 Conclusion

The Min system has been observed to exhibit a wide range of interesting behaviors [64]. In our study of Carlquist's proposed system of chemical reactions, we hoped to determine which of the observed behaviors could be recreated. To this end we considered two simple extensions to the previous model: spatial extension and finite cytosol size.

We observe that whenever our parameter regime permits a d-dominated steady state, this state is stable if the amount of bound MinD is controlled by membrane saturation. In contrast, when this d-dominated steady state is limited by depletion of MinD from the cytosol, we observe the formation of spatial patterning, followed by wave-pinning behavior. At first blush, this behavior is reminiscent of the nucleation behavior observed by Vecchiarelli [64], but unlike experimental observations, where zones of elevated MinD nucleate, grow and decay in turn, in this PDE system we observe that regions rich in MinD are stable, and changes in the bulk availability of MinD act only to move the boundaries between our zones, rather than change their stability. Furthermore, it is worth highlighting that here we primarily observe pattern formation as our system breaks away from a quasi-stable MinD-rich state, in contrast to experimental conditions, where the system is close to the zero state prior to pattern formation. There does exist a narrow band on the e-dominated manifold that is unstable to local perturbations (region D, figure 3.6); however, this band would appear to be narrow enough that its physiological relevance seems suspect.

Our second major observation is that, when using the parameters that provide the best match to Carlquist's model (Dtot = D0, Etot = E0, h = 200), bifurcation analysis places such parameter values right at the boundary of our oscillating regime. The slightest changes in Dtot and Etot knock us into either a region with the appropriate cycling behavior, or into a region with no cycling at all. While we do know that changes in chemical concentration can lead to changes in the behavior of the system, it seems unlikely that Ivanov et al.'s data [29] and the subsequent fit by Carlquist just happened to be precisely at such a boundary. It seems far more likely that the collected data belongs to some region far from such a boundary, and that our bifurcation diagram ought to reflect this.
Our "origin point" existing on a boundary would appear to be an artifact created by some aspect of our data fitting: the result of Carlquist's optimization algorithm getting a slightly better fit the closer it got to this bifurcation boundary. As such, it seems likely that there is some element of "over-fitting" at play.

Chapter 4

Stochastic Forcing Causes Spikes To Evolve From Homogeneous Neutrally Stable Steady State

4.1 Introduction

Vecchiarelli et al.'s experiments with the Min system have demonstrated the previously unexplored burst pattern. In this concentration regime we observe patches of empty membrane nucleating a small localized region of high MinD concentration, which then expands, freezes, and finally dissolves away under the influence of MinE (see figure 4.1). The nature of this behavior should put some significant constraints on the possible dynamics leading to it. In particular, the apparently stochastic distribution of burst positions may suggest that our system is highly sensitive to some underlying unobservable process, in stark contrast to previously explored modes of pattern formation (such as Turing patterns), which tend toward creating orderly global structures.

We begin this chapter by introducing a class of non-linear differential

Figure 4.1: Burst patterning, as observed by Vecchiarelli et al. At t = 0, the membrane has an almost homogeneous distribution of particles. This initial condition evolves into bright, high-concentration 'bursts' of membrane-bound MinD, which then freeze in place and fade. While one generation of bursts fades, the next generation of bursts appears (bottom row). Panels shown at t = 0, 15, 30, 45 (top row) and t = 60, 75, 90, 105 (bottom row). Pictures from data acquired via private correspondence; experiments as described in [64].

equations from the literature, and discussing why such equations might provide insight into the Min system's burst behavior.
From there we explore a number of simple computer simulations, before moving on to discuss the modeling assumptions under which this "blow up" phenomenon might be expected for the physical system at hand. We conclude by turning to stochastic PDE theory to provide some analytic and semi-analytic results.

4.1.1 Blow-Up in Non-Linear Reaction-Diffusion Equations

Consider the equation

u_t = u_xx + u². (4.1)
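Dropping diffusion, the well-mixed version of this equation, du/dt = u², already exhibits finite-time blow-up: separation of variables gives u(t) = 1/(1/u0 − t), which diverges as t → 1/u0. A quick numerical sanity check (a Python sketch, not part of the thesis code):

```python
# Illustration only: finite-time blow-up of the well-mixed reduction of
# equation 4.1, du/dt = u^2.  Separation of variables gives
# u(t) = 1/(1/u0 - t), which diverges as t -> 1/u0.
u0 = 0.1          # blow-up time is then 1/u0 = 10

def exact(t):
    return 1.0 / (1.0 / u0 - t)

# Forward-Euler integration up to t = 9, just short of the blow-up time.
u, t, dt = u0, 0.0, 1e-5
while t < 9.0:
    u += dt * u * u
    t += dt

assert abs(u - exact(9.0)) / exact(9.0) < 0.01  # matches 1/(10 - 9) = 1
assert exact(9.999) > 999                       # divergence near t = 10
```

With u0 = 0.1 the numerical solution tracks the closed form closely up to t = 9, and the closed form itself grows without bound as t approaches 10.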
In particular, I examine how noise allows1or, equivalently, considering the well mixed system ut = u271the system to escape the neutrally stable steady state near zero, and thetendency for localized spikes to form as it does so.4.2 Computational ExplorationBefore spending extensive time delving into analytic results, we take abrief detour exploring neutrally stable non-linear systems computationally.Here I am less interested in proving anything statistically, and aim primar-ily to demonstrate the types of behavior I will be studying.4.2.1 Toy System SimulationsConsider the toy systemut = D∆u + u2(2− u) + eξ (4.2)over the region [0,1]2 with periodic boundary conditions. Here ξ is a spatio-temporal noise term, and e 1 is our low amplitude noise amplitude. Sucha system behaves similar to equation 4.1 close to zero, but reaches satura-tion at u = 2 rather than diverging towards infinity. I simulate using asimple finite difference scheme, and a 256 by 256 mesh, D = 10−4. Ourinitial condition is u(x,y,0) = 0. Noise is implemented by adding a nor-mal random variable to each grid element at each time step, with variancedt× (e/256)2. As will be discussed later in this chapter (section 4.5.5), foractual white noise equation 4.2 is ill posed in dimensions two or higher.Any finite mesh size as used in our simulations acts to ‘regularize’ the theequation, limiting the degeneracy of white noise in two dimensions. Be-cause the degree of regularization depends on the mesh used, we observesignificant of lattice size effects. Despite these analytic difficulties, the sim-ulations still provide a useful illustration of the types of behavior we willbe studying. The code can used in this simulation first can be found in bothgithub and the supplementary materials, saved as CartoonBurst.m.For e = 0.1 we observe u initially uniformly close to zero. Around t ≈3100 small circular areas of u > 1 grow from our background. 
The u value in these areas approaches 2, and then the regions spread, eventually filling the entire space (see figure 4.3).

For the sake of cheaper computation, we might also consider a slightly different model, in which we apply noise to our initial conditions rather than as a constant forcing factor. As an example, consider u_t = DΔu + u^2(2 − u), u(x,y,0) = eξ. This system is implemented in the file CartoonBurst_IC_noise.m, and gives rise to similar behavior (see figure 4.4).

Simulations with lower levels of noise were conducted; however, the time taken for these to display interesting behavior was substantially longer, and terminal behavior was similar.

4.2.2 Mori's System

For our second computational study, I borrow an equation from Mori et al.

u_t = DΔu + f(u) = DΔu + (1 − ū)(k_0 + γu^2/(K^2 + u^2)) − δu. (4.3)

Equation 4.3 was designed to model a system of cell signaling proteins called Rho GTPases. The model was originally used to demonstrate wave pinning[2], a behavior reminiscent of the "cell polarization" behavior associated with Rho proteins [42]. In this equation D, k_0, γ, K and δ are reaction and diffusion parameters, u is the concentration of some membrane bound protein and ū is the mean concentration of this protein, averaged over our one dimensional domain.

I include Mori's model here for several reasons: primarily, I wish to demonstrate that the non-linear burst formation is not purely an artifact of the toy models I have created to demonstrate it, and can be observed in pre-existing models found in the literature. Secondly, subsequent studies of Mori's wave pinning model explored the system's sensitivity to "∆-perturbations" - tall narrow perturbations of finite volume [70]. In the paper these are treated as external stimuli, usually baked into the initial conditions of the system.
Here I demonstrate how homogeneous low level noise can lead such narrow spikes to form naturally from the system itself.

[2] See section 3.5 for a more extensive discussion of wave pinning.

Figure 4.3: Simulation results for equation 4.2 with e = 0.1, periodic boundary conditions and u(x,0) = 0, at times t = 3132 and t = 3179. (left) u(x, 3132). Here we see the first spike just beginning to form from our nearly homogeneous background. (right) The spike saturates to u = 2 and then spreads.

Figure 4.4: Simulations of the deterministic system u_t = DΔu + u^2(2 − u), with white noise initial conditions and periodic boundary conditions, at times t = 338 and t = 376. (left) u(x, 338). Here we see the first spike just beginning to form from our nearly homogeneous background. (right) The spike saturates to u = 2 and then spreads.

Equation 4.3 has a neutrally stable homogeneous steady state if f(u_0) = 0 and df/du|_{u_0} = 0 for the same value of u.[3] For the sake of our simulations I select a steady state value u = u_0 = 0.1 and select our parameters such that f = 0 = df/du for this value. Because this still leaves spare degrees of freedom, we also enforce δ = 1, K^2 = 1. This leads to

γ = (1 + u_0^2)^2 / (2u_0(1 − u_0)^2) ≈ 6.297, (4.4)

k_0 = u_0/(1 − u_0) − γu_0^2/(1 + u_0^2) ≈ 0.0488. (4.5)

If we consider equation 4.3 along with an additive white noise term eξ, then as before we observe an extended "almost homogeneous stage" followed by spike formation. The code for this can be found in the supplementary materials, and on GitHub, saved as MoriBurst.m. The resulting spatial distribution can be seen in figure 4.5. Overall, results are similar to those observed for equation 4.2; however, we note that for this model bursts form significantly faster, and in many cases multiple bursts form, and then spread. Unlike the toy model, here we observe wave pinning behavior, whereby regions of high u concentration spread, but then stabilize before covering the entire domain.
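The parameter choices in equations 4.4 and 4.5 can be verified directly: treating ū = u for homogeneous perturbations (as in the footnote), both f(u_0) and df/du at u_0 = 0.1 should vanish. A quick Python check with δ = 1, K^2 = 1:

```python
u0, delta, K2 = 0.1, 1.0, 1.0
gamma = (1 + u0**2) ** 2 / (2 * u0 * (1 - u0) ** 2)      # eq 4.4
k0 = u0 / (1 - u0) - gamma * u0**2 / (1 + u0**2)         # eq 4.5

def f(u):
    # homogeneous version of eq 4.3, with u-bar = u
    return (1 - u) * (k0 + gamma * u**2 / (K2 + u**2)) - delta * u

h = 1e-6
dfdu = (f(u0 + h) - f(u0 - h)) / (2 * h)   # centered finite difference
```

The computed k_0 ≈ 0.048765 also matches the value printed in the panel titles of figure 4.5.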
This stabilization is the result of our (1 − ū) term - a term that accounts for depletion of cytosolic Rho, and is absent from equation 4.2. Wave pinning was discussed in the previous chapter (section 3.5).

Much as last time, we can replace our stochastic PDE with a deterministic PDE with stochastic initial conditions. As previously, this does not lead to qualitatively different behavior.

[3] Please note that here, when calculating df/du, we assume that both u and our perturbation to u are uniform, and consequently that ū = u. In doing so I take some liberties with the notation, treating df/du as a simple derivative of a function, as opposed to a full scale functional derivative.

Figure 4.5: High concentration regions form (left, t = 59) and spread (right, t = 140); k_0 = 0.048765. Note that unlike the previous examples where the right panel was a purely transient state, in this case our right panel is quasi-stable, and only changes on the long time scale as the boundary between our high and low u phases adjusts slowly due to Ostwald ripening [68], [48].

4.3 A Particular Form of Petrasek Model

So far we have a non-linearity, along with some suggestions from the literature and computation that it may be conducive to the formation of sharp bursts. Before moving on to study this peculiar creature, I would like to discuss why such a model may be plausible for the Min system we are exploring.

Consider a simple chemical model, based loosely upon that proposed by Petrasek et al. [45]

D_cyt −α→ D,
D + E −β→ DE,
DE + E −β→ E + E + D_cyt.

Here, MinD binds to the membrane from an infinite cytosol, and two MinE proteins must work together in order to remove a MinD protein from the membrane. This model exhibits the "chase and release" mechanics generally associated with the Min system (see section 1.1, figure 1.3). For simplicity MinE is conserved, although in practice we expect total MinE to instead be in some form of dynamic equilibrium.
D_cyt is the cytosolic concentration of MinD dimers. For the sake of this model we treat this value as being a parameter external to our system, homogeneous in space, but potentially varying in time.[4] Petrasek's model is distinguished from other prominent Min models primarily through the double E requirement for MinD removal.

If we denote the concentrations of D, E, DE and D_cyt as u, v, w and q respectively, these chemical equations lead to the differential equations:

u̇ = αq − βuv, (4.6)
v̇ = βv(w − u) = −ẇ. (4.7)

By conservation, we have v + w = C. At equilibrium, we also have αq = βuv, and hence

v̇ = βv(C − v − u) = βv(C − v) − αq. (4.8)

This admits the solutions v± = C/2 ± √(C^2/4 − αq/β). By considering q = 0, it is easy enough to see that the higher of these two steady states is the stable one (corresponding in turn to a lower value for u). In contrast, if we consider αq sufficiently large then no steady state exists: αq will always remain greater than βuv, leading u to increase indefinitely as v → 0. In the limit u̇ ≈ αq, and u grows linearly in time. This is clearly unphysical behavior, but easily solved by modeling either depletion of D_cyt or saturation on the membrane, as we did in chapter 3. As the unphysical behavior occurs far from the regime I am interested in, and expanding the model to avoid such behavior needlessly complicates modeling of the phenomena I am interested in, we forgo such model extensions for the time being. The important feature is the loss of the stable v+ solution as our parameters are

[4] This "spatially uniform, temporally varying" bulk is indeed observed by Vecchiarelli et al [64], although we have taken some liberties in choosing to treat the bulk as "independent" of the behavior on the membrane.

Figure 4.6: Nullclines of our simplified Petrasek model, as C is varied. The u nullcline is given in red, satisfying the equation u̇ = 0 ⇒ v = αq/(βu).
The v nullcline is given in green, satisfying v̇ = 0 ⇒ v = C − u. There is an additional v nullcline at v = 0; this has been neglected so as to keep the axes clear. The above illustrations show the effects of increasing C. Qualitatively similar results can be found by decreasing q.

varied through that saddle node bifurcation.

More formally, if we imagine q to be a slowly increasing cytosolic concentration of MinD, as is observed in experiments, then the above system will initially approach the v+ steady state (with w = C − v+, u = αq/(βv+)). As αq passes slowly through the boundary αq = βC^2/4 we find v+, w+, u+ → C/2, and the Jacobian of the system becomes:

J = [ −βv+  −βu+ ; −βv+  β(C − 2v+ − u+) ] = (βC/2) [ −1  −1 ; −1  −1 ]. (4.9)

This Jacobian has eigenvalues −βC and 0, and our system is thus expected to be neutrally stable in the direction (1, −1). If we perturb from (u+, v+) to (u+ + e, v+ − e), we find

u̇ = +βe^2,  v̇ = 0. (4.10)

We thus observe that our perturbation has non-linear growth, in at least

Figure 4.7: (Left) Particle based simulation of the simplified Petrasek model. Black denotes D particles and red denotes the DE complex. (Right) PDE simulation of the Petrasek model, showing u at a fixed time.

one direction; our Petrasek style model thus exhibits the required non-linearity, at least in some limited regime, as we pass through the saddle node bifurcation at αq = βC^2/4.

Simulation of this simplified Petrasek system in the vicinity of the saddle-node gives burst formation, much as seen in previous simulations. See figure 4.7.
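The steady states, the degenerate Jacobian, and the quadratic growth of the (1, −1) perturbation can all be checked numerically; a Python sketch with illustrative parameter values (α, β, q, C are my choices, not values fitted to the Min system):

```python
import numpy as np

alpha, beta, q, C = 1.0, 2.0, 0.3, 1.5        # chosen so alpha*q < beta*C^2/4

def vdot(v):
    # eq 4.8: v' = beta v (C - v) - alpha q, after u has equilibrated
    return beta * v * (C - v) - alpha * q

disc = np.sqrt(C**2 / 4.0 - alpha * q / beta)
v_plus, v_minus = C / 2.0 + disc, C / 2.0 - disc

h = 1e-6
slope_plus = (vdot(v_plus + h) - vdot(v_plus - h)) / (2 * h)     # expect < 0 (stable)
slope_minus = (vdot(v_minus + h) - vdot(v_minus - h)) / (2 * h)  # expect > 0 (unstable)

# At the saddle-node alpha*q = beta*C^2/4 the states merge at C/2, the
# Jacobian (eq 4.9) becomes rank one, and a (1,-1) perturbation of size eps
# grows quadratically: udot = beta * eps^2 (eq 4.10).
aq_sn = beta * C**2 / 4.0
u_sn = v_sn = C / 2.0
J = (beta * C / 2.0) * np.array([[-1.0, -1.0], [-1.0, -1.0]])
eigs = sorted(np.linalg.eigvals(J).real)

eps = 1e-3
growth = aq_sn - beta * (u_sn + eps) * (v_sn - eps)   # udot at the perturbed point
```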
Because we have explicit chemical equations for this system, we can also simulate using particle dynamics; much like the Bonny system studied in section 2.4.3, heterogeneity quickly develops.

4.4 Introducing Some Tools

Before moving on to more analytic work, I take a quick detour to introduce some of the tools that will be relevant over the remainder of this chapter, namely Large Deviation Theory, "Action", and Functional Derivatives. The reader who is familiar with such tools may prefer to skip over this section. Those who wish for more detail on Large Deviation Theory are recommended to read either [17] or [47].[5] For further details on functional derivatives and the calculus of variations, I recommend [20]. Here it is my goal to present these tools in such a manner as to render them useful, and I will thus focus on building intuition rather than mathematical rigor.

[5] Rassoul-Agha and Seppäläinen's book is in many senses easier to read and simpler, but is more suited to dealing with large deviations of discrete systems. Freidlin and Wentzell's book is more concerned with continuous systems. Anyone attempting to read this second book is advised that M is used to denote mathematical expectation, rather than the more usual E.

4.4.1 Large Deviation Theory

Large Deviation Theory (henceforth "LDT") is a theory used to study unlikely noise driven events in stochastic systems (often, but not always, SDEs). Here I will begin by discussing LDT as it applies to discrete time stochastic processes, before generalizing to consider how it applies to SDEs and SPDEs.

As a concrete example with which to frame our discussion, consider the discrete Ornstein-Uhlenbeck process [35]

u_i = 0.9 u_{i−1} + eξ_i, (4.11)
u_0 = 0. (4.12)

Here ξ_i is assumed to be a normal random variable with mean zero and variance one, such that each ξ_i is independent. We assume 0 < e ≪ 1.

Suppose we wish to know the first time that u_i > 1. At first glance, given O(e) noise and the decay term of our OU process, u_i > 1 seems unlikely to occur.
That said, as we take i → ∞, unlikely, even exceedingly unlikely events should occur eventually. LDT concerns itself with such questions as "how long will it take for X to occur?" and "given that X occurs, what path is the system most likely to take in order to get there?". In our particular case X is "u_i > 1".

The total probability of our event X is equal to the integral over the probability density of all paths leading to that event. This integral can be formulated either in terms of the paths of the stochastic process u, or in terms of the underlying noise ξ (see figure 4.8).

P(X) = ∫_{u∈X} p_u(u) du = ∫_{ξ∈X} p_ξ(ξ) dξ. (4.13)

The probability density function for a noise vector ξ of length N is

p_ξ(ξ) = (2π)^{−N/2} e^{−Σ ξ_i^2 / 2}. (4.14)

Unfortunately, even with this well defined probability density, the boundary of the integral ∫_{ξ∈X} p_ξ(ξ) dξ is generally complicated enough so as to prevent us from evaluating P(X) directly.

Instead we rearrange our recurrence relation (equation 4.11), and find

ξ_i = (u_i − 0.9 u_{i−1}) / e. (4.15)

Using this one-to-one correspondence between u and ξ along with the change of random variables formula, we find

P(X) = ∫_{u∈X} p_u(u) du = ∫_{ξ∈X} p_ξ(ξ) dξ (4.16)
 = ∫_{ξ∈X} exp[−Σ ξ_i^2 / 2] dξ (4.17)
 = ∫_{u∈X} exp[−Σ (u_i − 0.9 u_{i−1})^2 / 2e^2] du (4.18)
 = ∫_{u∈X} e^{−S(u)/e^2} du. (4.19)

Here S(u) is said to be the "normalized action functional" of our problem. In the particular case discussed here S(u) = Σ_i (u_i − 0.9 u_{i−1})^2 / 2. In general S(u) will depend both on the equation governing a stochastic process, and the particular form of the noise generating it. S(u) can be thought of as a measure of the "total improbability" associated with a given path, and is associated with the amount of "energy" that noise must pour into the system in order to cause a particular path to occur.

At this stage, in order to determine the probability of our event, we need to evaluate ∫_{u∈X} e^{−S(u)/e^2} du.
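Integrals of this form can be explored numerically before we approximate them. For a scalar toy action of my choosing, S(x) = (x − 1)^2 + 1/2, the quantity e^2 ln ∫ e^{−S(x)/e^2} dx approaches −min S = −1/2 as e shrinks, since the mass concentrates near the minimizer:

```python
import math

def scaled_log_integral(eps, n=100001, lo=-5.0, hi=5.0):
    """Return eps^2 * ln( integral of exp(-S(x)/eps^2) dx ), trapezoid rule."""
    dx = (hi - lo) / (n - 1)
    total = 0.0
    for i in range(n):
        x = lo + i * dx
        s = (x - 1.0) ** 2 + 0.5          # toy action, min S = 0.5 at x = 1
        w = 0.5 if i in (0, n - 1) else 1.0
        total += w * math.exp(-s / eps**2) * dx
    return eps**2 * math.log(total)

vals = [scaled_log_integral(e) for e in (0.4, 0.2, 0.1, 0.05)]
errors = [abs(v + 0.5) for v in vals]     # distance from -min S
```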
Typically, this integral can not be evaluated exactly, but it can be well approximated (up to an order of magnitude) via Laplace's Principle [34, 44].

Figure 4.8: Viewed from the point of view of a stochastic process U, the event X has a simple boundary, but the probability density of U is complicated (above left, below left). Viewed from the point of view of the underlying noise ξ, the boundary of the event X is complicated, but the probability density function p(ξ) is simple (above right, below right). Simple cases such as the discrete Ornstein-Uhlenbeck process (above) may give analytically tractable results, but for more complex processes (below), both the probability density p_u(u) and the event boundary in ξ space are liable to make exact analytic results inaccessible. In all cases the "most probable" point satisfying X is marked with a star.

Laplace's Principle states that:

∫_{u∈X} e^{−S(u)/e^2} du ≈ exp[−min S(u)/e^2], (4.20)

where here we minimize S(u) over all u ∈ X. Laplace's Principle is based on the idea that for integrals of the form ∫_{u∈X} e^{−S(u)/e^2} du, the overwhelming majority of all probability mass will be concentrated in the vicinity of min S(u) whenever e ≪ 1 (see figure 4.9). From the point of view of stochastic processes, what Laplace's Principle is effectively stating is that if an improbable event does occur, it is likely to occur via the most probable path available (or at least, very close to this most probable path).

Figure 4.9: For some arbitrary function S(x), as e → 0 the vast majority of the integral ∫ e^{−S(x)/e^2} dx can be found in a narrow window near the minimum of S(x). As a result ∫ e^{−S(x)/e^2} dx scales like e^{−S(x_0)/e^2}. In this particular example, for e = 1 probability mass is spread across a number of separate "peaks" (local minima of S(x)). For e = 1/3 by contrast, only the global minimum of S(x) has non-negligible mass associated with it.
While varying e does change the width of our global peak, this effect is negligible compared to the change in height resulting from e^{−S(x_0)/e^2}.

Let us return to the original question proposed at the start of this section: how long will it take before the discrete Ornstein-Uhlenbeck process defined in equations 4.11-4.12 exceeds one for the first time?

Suppose we wish to determine the probability that u_N ≥ 1, for some particular N, assumed to be large. By Laplace's Principle, we must thus minimize S(u) = Σ_i (u_i − 0.9 u_{i−1})^2 / 2. Since any u_N > 1 will only increase S(u), we can assume u_N = 1. By taking derivatives of S(u) with respect to u_i for all i < N we find

dS/du_i = 0 = −1.8(u_{i+1} − 0.9 u_i) + 2(u_i − 0.9 u_{i−1}). (4.21)

As N → ∞ solutions to the above can be well approximated by u_i = 0.9^{N−i}. For finite N some minor modification is needed to ensure that u_0 = 0, but for large N the effects of this are minimal. u_i = 0.9^{N−i} implies in turn that

S(u) ≈ ((1 − 0.9^2)^2 / 2) Σ_{i=0}^{∞} 0.9^{2i} = (1 − 0.9^2)/2 = 0.095,

and hence P(u_N ≥ 1) ≈ exp[−0.095/e^2]. If this is the probability of success for any particular large N, then we infer that the expected time until u_i > 1 will scale such that E(τ) = O(exp[0.095/e^2]).

Freidlin and Wentzell provide the general formulation for the above two results. They state [17] (chapter 4, theorem 1.2) that for small e the probability of a particular rare event occurring by a given time, P(τ ≤ T), is governed by:

lim_{e→0} e^2 ln P(τ ≤ T) = −min_φ S_T(φ). (4.22)

Here, as is customary, φ denotes a particular trajectory of u, and S_T(φ) indicates that we are minimising over all trajectories φ such that our rare event occurs by time T.

In the case where we are escaping from a region R which is attracted to a single stable steady state (what might be described as an "energy well"), LDT predicts

lim_{e→0} e^2 ln E(τ) = min_{T,φ} S_T(φ). (4.23)
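Before moving on: the optimal-path claims above — that u_i = 0.9^{N−i} satisfies the stationarity condition 4.21, and that its action approaches 0.095 — can be verified in a few lines:

```python
N = 200
u = [0.9 ** (N - i) for i in range(N + 1)]   # u_0 = 0.9^N is negligibly small

# action S(u) = sum_i (u_i - 0.9 u_{i-1})^2 / 2
S = sum((u[i] - 0.9 * u[i - 1]) ** 2 for i in range(1, N + 1)) / 2.0

# stationarity (eq 4.21): -1.8 (u_{i+1} - 0.9 u_i) + 2 (u_i - 0.9 u_{i-1}) = 0
residuals = [abs(-1.8 * (u[i + 1] - 0.9 * u[i]) + 2.0 * (u[i] - 0.9 * u[i - 1]))
             for i in range(1, N)]
```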
Here we minimize S_T(φ) over all possible T, and over all paths φ beginning at our stable steady state and ending at the boundary of R. Equation 4.23 is taken from [17], theorem 4.1, chapter 4. We have taken some liberties so as to bring the notation in line with the rest of this thesis.

The above theorems are most easily parsed by considering a concrete example: if we imagine e = 10^{−3}, and S_T(φ) → 0 as T → ∞, then we observe that ln P(τ ≤ T) ≈ −e^{−2} min_φ S_T(φ). In the limit, ln P(τ ≤ T) → 0, and hence the probability of our event approaches 1. However, for all T such that min_φ S_T(φ) ≫ e^2 = 10^{−6} we find ln P(τ ≤ T) ≪ −1 and hence P(τ ≤ T) is close to zero; escape is almost impossible at these times.

In cases where S_T(φ) → 0 as T → ∞, the manner in which min_φ S_T(φ) approaches 0 determines the asymptotics of our escape time. In cases where S_T(φ) ≥ k > 0, equation 4.22 implies that escape becomes impossible in the limit of small noise, but also dictates exactly how likely it is in the case where we have finite noise. Intuitively, this equation states that high S_T(φ) corresponds to a high "energy barrier", and thus to a low probability of escape.

In all the discussion of LDT so far I have assumed a discrete time stochastic process. This was done in order to simplify both description and visualization, but given that the rest of this chapter will deal with SDEs and SPDEs it is necessary to broaden the scope of this introduction to LDT slightly further before moving on. Both equations 4.22 and 4.23 apply in both the discrete and continuous context, and indeed anywhere where an action functional can be defined, and Laplace's Principle invoked.

Suppose I have some stochastic process u(t), governed by some stochastic differential equation, for example, the continuous Ornstein-Uhlenbeck process [19, 59]

u_t = −u + eξ, (4.24)

where ξ is assumed to be Gaussian white noise. Here I assume noise as
Here I assume noise asdefined by Walsh [69], such that the integral over any area A is given by∫A ξdt = N(0, |A|), and integrals over non-intersecting time intervals areindependent. The extension to spatio-temporal white noise (also discussed85by Walsh) is trivial, and simply makes use of higher dimensional integra-tion.In order to apply LDT to equation 4.24, I need to first define a probabil-ity density function on ξ(t), and then use a change of variables.Here, I follow the convention implied in Freidln and Wentzell’s work,and assert that:pξ(ξ(t)) = C exp[∫ξ2dt]. (4.25)This can be thought of as the continuous analogue of equation 4.14.Finding a change of variables between u and ξ proceeds as previously:we take our governing equation and rearrange it to isolate the ξ:ut + ue= ξ (4.26)And henceS(u(t))/e2 =∫ξ2dt =∫(ut + u)2/e2dt (4.27)In moving from a discrete time stochastic process we replace our multi-variate action function S(u) with the action functional S(u(t)). S is now afunction of a function. In order to perform minimization on this object wewill need to introduce one further tool.4.4.2 Functional Derivatives and Their UsesWhen optimizing a function over a variable, we look for states where thederivative is zero. These critical points then make up our potential op-tima. When optimizing over several variables we look for points wherethe derivative of our objective function with respect to all variables is zero.The functional derivative allows us to perform an equivalent operation inthe case when we are optimizing over continuous functions as opposed tofinite collections of variables.Suppose we have a functional I over a function space. Let φ be a par-86ticular function within this function space. Examples of such a functionalmight be I(φ) =∫ ba φdx or I(φ) =∫∫φ˙2dxdt/φ(0,0). The functional deriva-tive δIδφ is defined to be a function (over the same space as φ) such that∫δIδφ(x)ψ(x)dx = limh→0I(φ+ hψ)− I(φ)h(4.28)for all ψ. 
This definition is parallel to the definition of the gradient operator:∇ f · y =∑∇ fi · yi = limh→0f (x + hy)− f (x)h. (4.29)A function φ is a potential optimizer of the functional I(φ) only when∫δIδφ (x)ψ(x)dx = 0 for all valid ψ. The range of allowable ψ will depend onthe space we are optimizing over and its boundary conditions. Typicallythe above optimality condition is equivalent to δIδφ (x) = 0.As a concrete example, consider the OU problem discussed earlier (equa-tion 4.24). Suppose we want to find the path of least action connectingu(0) = 0 to u(Tf ) = 10. We assume Tf is fixed for the time being.In this case we haveut = −u + eξ, u(0) = 0, u(Tf ) = 10. (4.30)Minimise S(φ) =∫ Tf0e2ξ2dt =∫ Tf0(φt + φ)2dt. (4.31)The minima will be found when ∂S∂φ = 0⇒ 0 = limh→01h∫ Tf0(φt + hψt + φ+ hψ)2 − (φt + φ)2dt⇒ 0 = limh→01h∫ Tf0+2h(φt + φ)(ψt + ψ) + h2(ψt + ψ)2dt0 =∫ Tf02(φt + φ)(ψt + ψ)dt.(4.32)At this stage, we have our integral in terms of φ,ψ and their derivatives.In order to re-write this integral in the form∫δIδφ (x)ψ(x)dx we need to re-arrange using integration by parts so as to remove all ψ derivatives. In87using integration by parts, we use the boundary conditions of our problem:namely, because φ(0) = 0,φ(Tf ) = 10 we know that we can not perturbthese points, and hence ψ(0) = 0,ψ(Tf ) = 0. This leads to0 =∫ Tf02ψ(φt + φ) + 2ψt(φt + φ)dt (4.33)⇒ 0 =∫ Tf02ψ(φt + φ)− 2ψ(φt + φ)tdt + [2ψ(φt + φ)]Tf0 (4.34)⇒ 0 =∫ Tf02ψφ− 2ψφttdt + 0 =∫ Tf0δSδφ(t)ψ(t)dt (4.35)⇒ 0 = δSδφ= −2φtt + 2φ. (4.36)We have thus replaced our first order SDE with a 2nd order ODE prob-lem. This doubling of derivative order and removal of noise is typical whenminimising S(φ).Solving the above gives:2φtt = 2φ, (4.37)φ(t) = C1et + C2e−t. (4.38)Applying our initial boundary condition leads to C1 = −C2. 
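Both constants can be pinned down numerically from the boundary conditions in (4.30), and the resulting path's action compared against any competitor with the same endpoints; a Python sketch with T_f = 5 (an arbitrary choice of mine), evaluating S(φ) = ∫(φ_t + φ)^2 dt by the midpoint rule:

```python
import math

Tf, n = 5.0, 20000

# fit phi(t) = C1 e^t + C2 e^{-t} to phi(0) = 0, phi(Tf) = 10 (eqs 4.30, 4.38):
# phi(0) = C1 + C2 = 0  =>  C2 = -C1;  phi(Tf) = C1 (e^Tf - e^{-Tf}) = 10
C1 = 10.0 / (math.exp(Tf) - math.exp(-Tf))
C2 = -C1

def action(phi, dphi):
    """S(phi) = integral of (phi' + phi)^2 dt, midpoint rule."""
    dt = Tf / n
    return sum((dphi((i + 0.5) * dt) + phi((i + 0.5) * dt)) ** 2
               for i in range(n)) * dt

S_opt = action(lambda t: C1 * math.exp(t) + C2 * math.exp(-t),
               lambda t: C1 * math.exp(t) - C2 * math.exp(-t))
S_line = action(lambda t: 2.0 * t, lambda t: 2.0)   # straight-line competitor
```

The Euler-Lagrange path has the smaller action, as the variational argument requires.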
Applying the second boundary condition gives 10 = C_1 e^{T_f} − C_1 e^{−T_f} ⇒ C_1 = 10/(e^{T_f} − e^{−T_f}) ≈ 10 e^{−T_f}.

4.5 "Escape from Zero" Time for Non-Linear Equations

Now that we have introduced the tools and approach of Large Deviation Theory, we apply these tools to explore the time for equations involving a simple u^2 term to "escape from zero" - that is to say, starting from u = 0, how long do we expect it to take to reach some arbitrary positive value. Here we are less interested in the behavior as we approach our final value, and more interested in how we 'exit' from the neutrally stable steady state 0.

We begin this analysis in the spaceless (spatially homogeneous) case where both analysis and numerics are relatively simple, and then explore further to consider the case of one and two spatial dimensions. In these later cases a mixture of both analytic and numerical techniques is required.

4.5.1 Spaceless Case

Consider the SDE

u̇ = u^2 + eξ. (4.39)

We wish to determine the amount of time it takes the stochastic differential equation to reach some constant φ_f, which may be finite, or infinite. Let S(φ) be our normalized action functional:

S(φ) = ∫ e^2 ξ^2 dt = ∫ (φ̇ − φ^2)^2 dt. (4.40)

For any given T, φ_i, φ_f, we wish to find the φ which minimises S_T(φ) s.t. φ(0) = φ_i and φ(T) = φ_f. For this we use the functional derivative. We wish to pick φ such that δS(φ)/δφ = 0 for all t. To calculate the functional derivative we consider

S(φ + hψ) − S(φ) = ∫ 2hψ̇φ̇ − 2hψ̇φ^2 − 4hφ̇φψ + 4hψφ^3 + O(h^2) dt. (4.41)

Integration by parts gives

S(φ + hψ) − S(φ) = ∫ −2hψφ̈ + 4hψφ̇φ − 4hφ̇φψ + 4hψφ^3 + O(h^2) dt, (4.42)

which implies that: δS(φ)/δφ = 0 = −2φ̈ + 4φ^3.

Multiplying through by φ̇ and integrating gives:

0 × φ̇ = −2φ̈φ̇ + 4φ^3φ̇, (4.43)
(φ̇)^2 = φ^4 + C, (4.44)
φ̇ / √(φ^4 + C) = 1. (4.45)

Integrating both sides gives

∫_0^t φ̇ / √(φ^4 + C) dt̃ = [t̃]_0^t, (4.46)
∫_{φ_i}^{φ_f} 1/√(φ^4 + C) dφ = T − 0. (4.47)

For C = 0 we recover the deterministic solution φ(t) = 1/(φ_i^{−1} − t). For C ≠ 0 the above integral can be solved using Wolfram|Alpha [73].
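Equations 4.44 and 4.47 can be cross-checked numerically: integrating the Euler-Lagrange equation φ̈ = 2φ^3 from φ(0) = 0, φ̇(0) = √C should conserve the first integral φ̇^2 − φ^4 = C, and the elapsed time should match the quadrature in 4.47. A Python sketch with C = 1 (my choice):

```python
import math

C, dt = 1.0, 1e-4
phi, v, t = 0.0, math.sqrt(C), 0.0     # start on the optimal path: v^2 = phi^4 + C

def acc(p):
    # Euler-Lagrange equation: phi'' = 2 phi^3
    return 2.0 * p**3

while phi < 2.0:                       # RK4 for the system (phi, v)
    k1p, k1v = v, acc(phi)
    k2p, k2v = v + 0.5 * dt * k1v, acc(phi + 0.5 * dt * k1p)
    k3p, k3v = v + 0.5 * dt * k2v, acc(phi + 0.5 * dt * k2p)
    k4p, k4v = v + dt * k3v, acc(phi + dt * k3p)
    phi += dt * (k1p + 2 * k2p + 2 * k3p + k4p) / 6.0
    v += dt * (k1v + 2 * k2v + 2 * k3v + k4v) / 6.0
    t += dt

invariant_drift = abs(v * v - phi**4 - C)    # eq 4.44 along the trajectory

# eq 4.47: elapsed time = integral of dphi / sqrt(phi^4 + C) from 0 to phi(final)
n = 20000
h = phi / n
quad = sum(h / math.sqrt(((i + 0.5) * h) ** 4 + C) for i in range(n))
```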
If we assume φ_i = 0, then

t = τ(φ) = −√i C^{−1/4} F(i asinh(√i C^{−1/4} φ), −1), (4.48)

where F denotes the incomplete elliptic integral of the first kind. Here our solution defines t in terms of φ, as opposed to the other way around. We thus have an implicit definition of φ. This function is unwieldy, but exhibits all the smoothness properties we might hope for from an optimal solution. Before evaluating the above, we must pick C in order to achieve τ(φ_f) = T. This is not generically an easy problem to solve exactly, but for large φ_f it can be well approximated.

First, let us note that τ(∞) = 4Γ(5/4)^2 / (√π C^{1/4}) [74]. We then note that the deterministic time taken to get from φ = φ_f to φ = ∞ is 1/φ_f; thus τ(∞) ≈ T + 1/φ_f. Rearranging gives

C ≈ (4Γ(5/4)^2 / (√π (T + 1/φ_f)))^4.

While this result is technically an approximation, numerical experimentation appears to indicate that it is a very good one; evaluating τ(φ_f) using our assumed C we typically find |τ(φ_f) − T|/T < 10^{−6} whenever T, φ_f ≥ 4.

Figure 4.10: Minimal noise paths: some optimal φ found for particular boundary conditions using equation 4.48. The assumed boundary conditions for each line are marked with stars. Colors are arbitrary.

Now that I have identified an optimal φ(t) (albeit implicitly) I would like to evaluate S(φ). Numerical experiments indicate that for large φ_f, T we have S(φ) = O(CT) = O(T^{−3}) = O(C^{3/4}) (see figure 4.11).

We would like to prove the above numerical result exactly. Starting with the definition of S(φ) we have

S_T(φ) = ∫_0^T (φ̇ − φ^2)^2 dt = ∫_0^T φ̇φ̇ − 2φ̇φ^2 + φ^4 + C − C dt, (4.49)

Figure 4.11: Action vs T. This graph is constructed via numeric integration of equation 4.50 - more direct methods of evaluating S(φ) run into numerical instability.

Using φ̇ = √(φ^4 + C) we find

∫_0^T φ̇√(φ^4 + C) − 2φ̇φ^2 + φ̇√(φ^4 + C) − C dt = ∫_0^{φ_f} 2√(φ^4 + C) − 2φ^2 dφ − CT. (4.50)

The problematic term here is the first term.
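The accuracy claim for the approximate C can itself be checked by quadrature: choosing C from the formula above and evaluating τ(φ_f) = ∫_0^{φ_f} dφ/√(φ^4 + C) numerically should return the target T almost exactly. A Python check at T = φ_f = 10:

```python
import math

# Check C = (4 Gamma(5/4)^2 / (sqrt(pi) (T + 1/phi_f)))^4: with this C, the
# quadrature in eq 4.47 should reproduce tau(phi_f) = T to high accuracy.
T_target, phi_f = 10.0, 10.0
C = (4.0 * math.gamma(1.25) ** 2
     / (math.sqrt(math.pi) * (T_target + 1.0 / phi_f))) ** 4

n = 200000
h = phi_f / n
tau = sum(h / math.sqrt(((i + 0.5) * h) ** 4 + C) for i in range(n))
rel_err = abs(tau - T_target) / T_target
```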
Applying integration by parts to it, we find:

∫_0^{φ_f} 2√(φ^4 + C) dφ = [2φ√(φ^4 + C)]_0^{φ_f} − ∫_0^{φ_f} 4φ^4/√(φ^4 + C) dφ (4.51)
 = [2φ√(φ^4 + C)]_0^{φ_f} − ∫_0^{φ_f} 4((φ^4 + C) − C)/√(φ^4 + C) dφ (4.52)
 = [2φ√(φ^4 + C)]_0^{φ_f} − ∫_0^{φ_f} 4√(φ^4 + C) dφ + ∫_0^{φ_f} 4C/√(φ^4 + C) dφ. (4.53)

3 ∫_0^{φ_f} 2√(φ^4 + C) dφ = [2φ√(φ^4 + C)]_0^{φ_f} + ∫_0^{φ_f} 4C/√(φ^4 + C) dφ. (4.54)

Dividing by 3 and making use of eq. 4.47 gives

∫_0^{φ_f} 2√(φ^4 + C) dφ = (2/3)[φ√(φ^4 + C)]_0^{φ_f} + (4/3)CT. (4.55)

Substituting back into our original integral we find

S_T(φ) = ∫_0^{φ_f} 2√(φ^4 + C) − 2φ^2 dφ − CT (4.56)
 = (2/3)[φ√(φ^4 + C)]_0^{φ_f} + (4/3)CT − (2/3)[φ^3]_0^{φ_f} − CT (4.57)
 = (2/3)[φ^3(√(1 + Cφ^{−4}) − 1)]_0^{φ_f} + (1/3)CT. (4.58)

Assuming φ_f is large,

S_T(φ) ≈ C/(3φ_f) + (1/3)CT. (4.59)

Remembering that C ≈ (4Γ(5/4)^2/(√π T))^4 = O(T^{−4}), we recover the numerical result that S_T(φ) = O(T^{−3}).

Now that we know the S_T(φ) scaling we apply equation 4.22 to find

lim_{e→0} e^2 ln[P(τ ≤ T)] = −min_φ S_T(φ) ≈ −(1/3)CT, (4.60)
⇒ P(τ ≤ T) ∼ exp[−(1/3)(4Γ(5/4)^2/√π)^4 T^{−3} e^{−2}]. (4.61)

Using the layercake representation of the expectation value we find[6]

E(τ) = ∫_0^∞ P(τ > T) dT ≈ ∫_0^∞ 1 − exp[−kT^{−3}e^{−2}] dT, (4.62)

now, making use of Wolfram|Alpha [72], this gives

E(τ) ∼ (2π/(√3 Γ(1/3))) (e^2/k)^{−1/3} = O(e^{−2/3}). (4.63)

Simulation of equation 4.39 for a variety of e agrees with these asymptotics (see figure 4.12). Hence we find that the time taken for the equation u_t = u^2 + eξ to escape from zero and approach infinity is predicted to scale like O(e^{−2/3}). The majority of this time is spent close to u = 0, and the escape time behavior of the system is dominated by the behavior of the system in this region.

4.5.2 Spaceless Equation With an Energy Barrier

The results in the previous subsection, while fascinating, are in some sense physically meaningless. Equation 4.1 assumes a u^2 term, and also assumes that our constant and linear terms are exactly equal to zero. This assumption is in many ways unreasonable, and the slightest perturbation from this "perfect balance" leads to qualitatively different behavior.

Non-zero linear terms, such as u_t = u^2 + γu, lead to a pair of steady states near one another - one stable, the other unstable.
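Returning briefly to equation 4.63: the Wolfram|Alpha step can be checked without it. With a = k e^{−2}, the layercake integral equals Γ(2/3) a^{1/3} = (2π/(√3 Γ(1/3))) a^{1/3}, which is exactly the O(e^{−2/3}) scaling. A Python check at a = 1, folding the tail of the integral onto [0,1] via T → 1/s:

```python
import math

def midpoint(f, a, b, n=100000):
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

# integral of (1 - exp(-T^{-3})) dT over [0, inf), i.e. eq 4.62 with k/e^2 = 1:
# [0,1] is integrated directly; the tail is folded onto [0,1] via T -> 1/s.
head = midpoint(lambda T: 1.0 - math.exp(-T**-3), 0.0, 1.0)
tail = midpoint(lambda s: (1.0 - math.exp(-s**3)) / s**2, 0.0, 1.0)
layercake = head + tail

predicted = 2.0 * math.pi / (math.sqrt(3.0) * math.gamma(1.0 / 3.0))  # = Gamma(2/3)
```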
[6] E(X) = ∫_{−∞}^{∞} x f_X(x) dx, where f_X(x) is a probability density function. If X > 0, then E(X) = ∫_0^∞ x f_X(x) dx, and integration by parts gives E(X) = [x(F_X(x) − 1)]_0^∞ − ∫_0^∞ F_X(x) − 1 dx = ∫_0^∞ P(X > x) dx.

Figure 4.12: Passage time from 0 to 1 vs e for u_t = u^2 + eξ (equation 4.39). For each e value we use 4000 simulations. Boxplots indicate the distribution of passage times. The green line indicates the predicted scaling of passage time, T = O(e^{−2/3}). The slope from our theory matches simulation results. Note that since we only predict scaling, the vertical shift between theory and simulation is of no concern.

Because the behavior for such systems is qualitatively similar to simply having a negative constant term, we will restrict our analysis here to only considering perturbations of the constant term.

In the case of a positive constant term we have

u̇ = u^2 + γ^2 + eξ. (4.64)

In this case our neutrally stable steady state is lost, and u will deterministically increase through all positive u values in finite time:

∫_0^{u_f} 1/(u^2 + γ^2) du = [tan^{−1}(u/γ)/γ]_{u_i}^{u_f} = T. (4.65)

If we assume u_i = 0 and u_f = ∞ this leads to T = π/2γ, even in the case
(4.71)If we think about the above in terms of “How long does the integralspend near φ = 3” (for example), then this simply indicates that φ is fastmoving in places where u˙ has large magnitude, and small in places whereit has small amplitude (near u = ±γ). In places where u˙ < 0, u wouldnaturally be decaying, in order to keep u increasing (or even stationary), eξmust compensate for this decay, and the longer u spends in this decaying96region, the greater the associated energy cost. For optimal trajectories wethus observe that u travels quickly through these decaying regions. Foru˙ > 0 the deterministic growth moves u quickly without input from ournoise term. It is only near u = ±γ where u˙ ≈ 0 that the system can affordto move slowly. C acts as a “nudge” term which turns this barrier into anarrow bottleneck- the more time we have the narrower the bottleneck ispermitted to be. In the limit of infinitesimally small C we travel throughthe bottleneck, taking an arbitrarily large amount of time.For C = 0 the above indicates that infinite time is taken. After we crossthe energy barrier we move according to deterministic dynamics u˙ = u2 −γ2. Before we cross the barrier we obey the dynamics u˙ = −(u2 − γ2).Calculating the minimal S(φ) associated with this givesS(φ) =∫ TfTiφ˙φ˙− 2φ˙(φ2 − γ2) + (φ2 − γ2)2dt, (4.72)using φ˙ =√(φ2 − γ2)2 we findS(φ) =∫ TfTi2φ˙√(φ2 − γ2)2 − 2φ˙(φ2 − γ2)dt (4.73)=∫ φ fφi2|φ2 − γ2| − 2(φ2 − γ2)dφ. (4.74)If we assume that we start at the deterministic steady state, φi = −γ, andthat our final state is beyond the unstable node φ f > γ then this leads toS(φ) =∫ γ−γ−4(φ2 − γ2)dφ+∫ φ fγ0dφ, (4.75)= −4[φ3/3− γ2φ]γ−γ dφ, (4.76)= 16γ3/3. (4.77)Unlike previously, where our steady state was either neutrally stable ornon-existent, here we have an actual stable steady state. 
The time to escapefrom a region attracted to an asymptotically stable steady state is governed97by equation 4.23, which in this particular case givesE(τ) = exp[e−216γ3/3].4.5.3 The Spatially Distributed ProblemNow that we have built up some small level of understanding using thespaceless model, let us turn our attention to the spatially distributed model:ut = uxx + u2 + eξ. (4.78)Both u and ξ are functions of x and t. the function u(x, t) representsconcentration (or some suitable rescaling of it), while ξ once again can bethought of either as a white noise term, or some sort of “forcing” function.In order to investigate spike formation we solve:given ut = uxx + u2 + eξ, (4.79)u(x,0) = 0, (4.80)Minimise Sˆ(ξ) =∫0T∫−∞∞e2ξ2dxdt (4.81)such that u(0, T) = U f . (4.82)If we consider ξ as a form of forcing function, then the above asks forthe lowest energy forcing required to lift u(0, t) to height U f by time T.Thinking of ξ as a form of noise we are asking for the most probable noise.For the rest of this section we will think of ξ in terms of energy input, al-though we will keep the white noise formulation in mind.In order to find our minima, we first re-write S in terms of u, and thentake the functional derivative:98S =∫0T∫ ∞−∞e2ξ2dxdt =∫0T∫ ∞−∞[ut − uxx − u2]2dxdt. (4.83)From the definition of functional derivatives we have∫∫δSδuψdxdt = limh→0S(u + hψ)− S(u)h. (4.84)Combining and taking limits gives∫∫δSδuψdxdt = 2∫∫(ψt − ψxx − 2uψ)(ut − uxx − u2)dxdt. (4.85)Remembering that eξ = ut − uxx − u2 gives∫∫δSδuψdxdt = 2∫∫ψtξ − ψxxξ − 2uψξdxdt. (4.86)Now using integration by parts on each ψ term as appropriate, we find∫∫δSδuψdxdt = 2e∫[ψξ]t=Tt=0 dx + 2e∫[ψξx − ψxξ]x=∞x=−∞ dt (4.87)+ 2e∫∫−ψξt − ψξxx − 2uψξdxdt. (4.88)Because we seek to minimise∫ ∫ξ2dxdt, we can safely demand thatξ,ξx → 0 as x → ±∞ and thus [ψξx − ψxξ]∞−∞ = 0. 
We are left with the potentially problematic \int \left[\psi\xi\right]_{t=0}^{t=T} dx term; however, since u is fixed at zero for t = 0 and at U_f for x = 0, t = T, we know ψ (our perturbation to u) is zero in these locations. Further, we can assume that ξ = 0 for t = T, x ≠ 0, as any non-zero forcing at these locations would increase S but have no effect on u(0, T). Hence \int \left[\psi\xi\right]_{t=0}^{t=T} dx = 0, and so

    \frac{\delta S}{\delta u} = 0 = -\xi_t - \xi_{xx} - 2u\xi.    (4.89)

The above can be stated as a fourth order differential equation purely in terms of u, but this provides little illumination. Instead we leave it as a backwards heat equation, ready to be coupled to 4.78.

As argued previously, for our optimal solution ξ(x, T) = 0 whenever x ≠ 0, as non-zero values of ξ at these locations increase S but have no effect on u(0, T). In order for ξ to have any effect on u it must have non-zero total mass, and since \int \xi_t \, dx = \int -2u\xi \, dx we know that it must have non-zero total mass at time T. Hence ξ(x, T) = \alpha\delta(x) for some α.

Combining the above, the minimal noise for our system will thus solve:

    u_t = u_{xx} + u^2 + \xi,
    u(x, 0) = 0,
    -\xi_t = \xi_{xx} + 2u\xi,
    \xi(x, T) = \alpha\delta(x).    (4.90)

Solving 4.90 numerically in either direction in time requires that we solve a backwards heat equation, either in u or ξ, depending on which direction we solve in. Creating a full spatial-temporal mesh and solving for both ξ and u simultaneously is possible, but quickly becomes computationally expensive as the mesh becomes finer. We can, however, solve iteratively. This is done by initially assuming u_0(x, t) = 0, and solving -\hat\xi_t = \hat\xi_{xx} + 2u\hat\xi with the final-time condition \hat\xi(x, T) = \delta(x) to determine \hat\xi_1. We use \alpha\hat\xi_1 as a forcing term to find u_1, and u_1 to determine \hat\xi_2, and so on, until |S(u_i) - S(u_{i+1})| < 0.0005 |S(u_i) + S(u_{i+1})| (see figure 4.13).

One slight detail is the determination of α. During each iteration step we solve for \hat\xi using the final-time condition \hat\xi(x, T) = \delta(x). I then solve for u using \alpha\hat\xi, and compare the observed value of u(0, T) to the intended value U_f.
When u(0, T) is too small I increase α and try again. When u(0, t) reaches U_f too early I reduce α. In order to determine how much to decrease or increase, I observe empirically that f(\alpha) = t + 1/u(0, t) is approximately linear in α. By recording f(\alpha_i) ≈ T + 1/u(0, T) whenever I undershoot, and f(\alpha_i) ≈ t_f + 1/U_f whenever I overshoot, I can approximate f(\alpha), and in turn use this approximation to select future values of α. Here t_f is the time such that u(0, t_f) = U_f for a given value of α. The interested reader can refer to the code for details; see BackForwardSolveAdaptive.m in the supplementary materials and on Github.

Figure 4.13: Schematic view of my iteration scheme, as used to solve equations 4.90.

4.5.4 Numeric Results for Spatially Distributed System

Using MATLAB's pdepe, along with the script IterateBFsolve.m, we implement the iteration scheme previously described. We assume a reflecting boundary at the origin, u_x(0, t) = \xi_x(0, t) = 0, along with an absorbing boundary at some point far from the origin, u(X_{max}, t) = \xi(X_{max}, t) = 0. We choose X_{max} = 5\sqrt{4 D_{mem} T} such that our far field boundary will have negligible effect on spike formation.

In one dimension, we find S(φ) = O(T^{-5/2}) as T → ∞ (see figure 4.14, left). By equation 4.22 this implies an expected spike formation time of

    E(\tau) = O([e^2]^{-2/5}) = O(e^{-4/5}).

As well as the "energy cost" of our optimal escape path, we are also interested in the shape of u. Figure 4.15 gives u(x, T) for a variety of T and U_f. Most obvious and notable is the observation that we do indeed observe sharp spikes, as might be hoped, based both on our SPDE simulations and the experimental data we are matching to. These spikes become increasingly sharp as we increase U_f.

Figure 4.14: I calculate the action cost S_T(φ) to reach u(0, T) = U_f by time t = T numerically.
The resulting lines have mean slopes −2.465, −2.495, and −2.498 in log-log space, suggesting that S_T(φ) ≈ O(T^{-5/2}).

Of some practical note is the fact that spike width varies only slightly with changing T. For real world data, this would appear to imply that the width of any spikes formed should not vary with the time taken, but is instead determined by "static" parameters of the system, such as reaction and diffusion coefficients. It would be useful to take this "spike shape" information and use it to extract physical parameters from experimental data, both in a general sense and also from the specific data provided by Vecchiarelli et al. [63]. In order to make such a comparison, we would be required to extend our model to 2D. Unfortunately, as alluded to in section 4.2, stochastic partial differential equations are ill-posed in two dimensions (and higher).

4.5.5 The Difficulties in Two Dimensions

The ill-posedness of non-linear stochastic partial differential equations in higher dimensions is well established [15, 32], and it is not my intention to give a detailed analytic treatment of this phenomenon here. I will, however, aim to give a brief introduction so as to give the reader some intuition as to the underlying problems, and will give a detailed account of how these problems express themselves in our analytic approach.

Figure 4.15: Final u profiles for U_f = 1, T varying (upper panel). We observe that the width of our peak is relatively insensitive to T. (Lower panel) Final u profiles for T = 110, U_f varying.

In order to consider these difficulties, it is sufficient to discuss them in the context of the equation

    u_t = \Delta u + \xi.    (4.91)

Suppose we simulate this equation using finite difference methods on a grid with mesh size dx, using some fixed time step dt. When representing white noise in simulation, white noise will cause the integral over each mesh element to change by N(0, dt \times dx^d) each time step.
This requires that the value of u at each mesh element changes as N(0, dt \times dx^d) \, dx^{-d} = N(0, dt \times dx^{-d}). Rephrasing in a chemical manner, smaller boxes mean smaller noise when thinking in terms of numbers of particles, but greater noise when thinking in terms of concentrations. While noise acts to perturb mesh elements either up or down, diffusion acts to reduce the difference between neighboring elements. Diffusion causes u to change with a term that scales like dt \times dx^{-2}, regardless of the dimensionality of our system. In each time step, noise will introduce differences that scale like dx^{-d} dt, while diffusion will bring neighbouring values close together at a rate that scales like dx^{-2} dt. In one dimensional systems, we observe that as dx tends to zero diffusion will dominate noise, and u will be continuous [24, 25]. In higher dimensional systems noise will dominate diffusion. In two dimensional systems this heuristic argument is insufficient to determine the result, but simulation demonstrates that diffusion fails to dominate noise.

From a practical perspective this means that the behavior and statistical properties of simulations are mesh size dependent, and that any particular choice of mesh size implies particular assumptions about spatial correlations in ξ. Truly uncorrelated white noise cannot be adequately represented using this style of mesh based simulation.

In order to gain a more analytic handle on these difficulties, we might instead follow the approach of Ryser et al. [50]. Consider equation 4.91 on the domain [0, \pi]^d, and represent u using the Fourier series

    u = \sum_{k \in \mathbb{N}_0^d} \alpha_k(t) \cos(k \cdot x).

This representation results in

    \dot\alpha_k = -|k|^2 \alpha_k + dW_k,    (4.92)

where the dW_k are a collection of standard Wiener processes. By breaking u up into its Fourier modes, we replace the SPDE 4.91 with a collection of independent Ornstein-Uhlenbeck processes, each of which approaches

    \alpha_k(t) = N(0, 1/(2|k|^2))

as t → ∞.
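The stationary variance 1/(2|k|^2) quoted above is easy to confirm by a direct Euler-Maruyama simulation of equation 4.92. The sketch below is mine (not thesis code), with an arbitrarily chosen value of |k|^2:

```python
import numpy as np

def ou_stationary_variance(k2, dt=1e-3, t_end=10.0, n_paths=4000, seed=1):
    """Euler-Maruyama simulation of d(alpha) = -|k|^2 alpha dt + dW
    (equation 4.92); returns the empirical variance across paths at t_end."""
    rng = np.random.default_rng(seed)
    alpha = np.zeros(n_paths)
    for _ in range(round(t_end / dt)):
        alpha += -k2 * alpha * dt + np.sqrt(dt) * rng.standard_normal(n_paths)
    return float(alpha.var())

k2 = 4.0  # |k|^2 for, e.g., k = (2, 0)
print(ou_stationary_variance(k2), 1 / (2 * k2))
```

The run time t_end = 10 is many multiples of the relaxation time 1/|k|^2, so the ensemble is effectively stationary by the final snapshot.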
The value of u at (for example) x = 0 is thus given by

    u(0, t) = \sum_{k \in \mathbb{N}_0^d} \alpha_k(t) \cos(0)    (4.93)
            = \sum_{k \in \mathbb{N}_0^d} N(0, 1/(2|k|^2))    (4.94)
            = N\Big(0, \sum_{k \in \mathbb{N}_0^d} 1/(2|k|^2)\Big).    (4.95)

When d = 1 the sum \sum_{k \in \mathbb{N}_0^d} 1/(2|k|^2) converges, but for d ≥ 2 the sum is unbounded, and thus u(0, t) has infinite variance. This argument applies regardless of the particular choice x = 0, and can be applied with only minor modifications for any t > 0. Once again, increasing the dimensionality of our system allows noise to overwhelm diffusion.

If we re-frame equation 4.91 in terms of the optimal spike formation methodology discussed in this chapter, then it can be shown that

    u_t = \Delta u + \xi, \quad u(x, 0) = 0,    (4.96)
    -\xi_t = \Delta\xi, \quad \xi(x, T) = \alpha\delta(x).    (4.97)

Unlike the u^2 case, this equation can be explicitly solved, giving \xi(x, t) = \alpha N(x, T - t) = \alpha [4\pi(T - t)]^{-d/2} \exp\left[-|x|^2 / 4(T - t)\right]. By noting that u(x, t) can be found by taking the convolution of all ξ added to the system, and then diffusing based on the time added, we find

    u(x, t) = \int\!\!\int N(x - y, t - \tau) \xi(y, \tau) \, dy \, d\tau    (4.98)
            = \alpha \int\!\!\int N(x - y, t - \tau) N(y, T - \tau) \, dy \, d\tau    (4.99)
            = \alpha \int N(x, T + t - 2\tau) \, d\tau    (4.100)
    U_f = u(0, T) = \alpha \int [8\pi(T - \tau)]^{-d/2} \, d\tau.    (4.101)

We can also find the total action required:

    S(u) = \int\!\!\int \xi(x, t)^2 \, dx \, dt    (4.102)
         = \int\!\!\int \alpha^2 N(x, T - t) N(x, T - t) \, dx \, dt    (4.103)
         = \int\!\!\int \alpha^2 [4\pi(T - t)]^{-d} \exp\left[-2|x|^2 / 4(T - t)\right] dx \, dt    (4.104)
         = \alpha^2 \int [8\pi(T - t)]^{-d/2} \int N(x, (T - t)/2) \, dx \, dt    (4.105)
         = \alpha^2 \int [8\pi(T - t)]^{-d/2} \, dt = \alpha U_f.    (4.106)

For d = 1 the integral \int_0^T [8\pi(T - \tau)]^{-d/2} d\tau = 2[T/8\pi]^{1/2} is finite and bounded, leading to α > 0 and a well defined energy cost to reach some height U_f by a particular time. For d ≥ 2, however, the integral diverges towards infinity, suggesting that α can be taken arbitrarily close to zero, and hence S(u) can be forced close to zero also. In order to avoid dealing with this divergent integral directly, we might avoid using the Dirac delta function as our ξ boundary condition, instead choosing (for example) \xi(x, T) = \alpha N(x, e).
In this case, however, we could consider a series of decreasing e, leading in turn to decreasing α and S(u), and reaching the same result: for any T > 0, the action cost S(u) can be pushed arbitrarily close to zero. We once again encounter the degeneracy of high dimensional white noise, this time expressed in terms of the "sharpness" of the Dirac delta function for d ≥ 2. Attempting to solve the equations numerically for d ≥ 2 results in failure of convergence and other numeric artifacts, a stark contrast to the full convergence observed in one dimension.

4.6 Comparison to "Approach to Infinity"

Let us return briefly to our results in one dimension, before moving on to conclude the chapter. Our work here on burst formation near semi-stable steady states has in many ways been inspired by previous work in the non-linear blow up literature, as discussed in section 4.1.1. Before moving on I wish to determine whether the "escape from zero" problem is fundamentally different to the non-linear blow up phenomena previously discussed. If the two problems give similar results, this suggests that results from one may be of use in studying the other. To the extent that the problems differ, this suggests that each is its own distinct and interesting avenue of research.

The most readily comparable result is Galaktionov and Posashkov's asymptotic approximation of the "approach to infinity" for the problem u_t = u_{xx} + u^p in one dimension [60] (see [18] for a more accessible source). For p = 2 their formula gives:

    u(x, t) \approx (T_\infty - t)^{-1} \left(1 + \chi^2/8\right)^{-1},    (4.107)
    where \chi = \frac{|x|}{\sqrt{(T_\infty - t)\,|\ln(T_\infty - t)|}}.    (4.108)

Here T_∞ is the "blow up" time of the equation: the point in time where u → ∞.

Figure 4.17 gives a comparison of my "escape from zero" results to the corresponding "approach to infinity". As can be seen, Galaktionov and Posashkov's asymptotic results give a narrower and sharper spike than is found for the "zero escape" problem I solve here.
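For concreteness, equations 4.107 and 4.108 are straightforward to evaluate directly. The following sketch is mine (the function name `gp_profile` is illustrative) and uses the parameter values of figure 4.17:

```python
import numpy as np

def gp_profile(x, t, T_inf):
    """Galaktionov-Posashkov asymptotic blow-up profile (equations 4.107-4.108)
    for u_t = u_xx + u^2, valid for t close to the blow-up time T_inf."""
    s = T_inf - t
    chi = np.abs(x) / np.sqrt(s * np.abs(np.log(s)))
    return (1.0 / s) / (1.0 + chi**2 / 8.0)

x = np.linspace(-15.0, 15.0, 601)
u = gp_profile(x, t=130.0, T_inf=130.1)  # peak height is (T_inf - t)^{-1} = 10
```

The profile peaks at x = 0 with height (T_∞ − t)^{−1} and decays monotonically in |x|, which is the sharp central spike visible in the comparison figure.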
These two curves, while similar, appear distinct enough to suggest that the "infinite approach" and "zero escape" problems are indeed distinct mathematical problems, deserving of differing treatments. Given that previous results (figure 4.15) suggest that u(x, T) gets wider as T → ∞, there appears to be no reason to expect that u(x, T) will tend towards Galaktionov and Posashkov's asymptotics, even in the limit of large T.

4.7 Conclusion

Equations of the form u_t = Du_{xx} + u^2 are well known to exhibit finite time blow up behavior; however, such non-linear equations are interesting not just for how they approach infinity, but also for the manner in which they "escape from zero" when under the influence of low amplitude white noise. Even when higher order terms are added so as to keep such equations finite, simulations demonstrate that systems possessing a neutrally stable steady state allow the development of sharp spikes in finite time.

Analytic and partially analytic explorations of u_t = D\Delta u + u^2 + e\xi show that, in the limit of large time, there exist spike-like solutions of arbitrarily low "improbability cost" S(φ), thus indicating that even in the limit of low noise, such spike formation is inevitable.

While here I have studied only the particular equation u_t = D\Delta u + u^2 + e\xi, it seems inevitable that these results will prove applicable to any PDE system in the vicinity of a suitable saddle-node bifurcation. In terms of the Min system, this includes not only the simplified Petrášek model, as discussed in section 4.3, but also Carlquist's model (chapter 3, equation 3.12) as it passes through any of the numerous saddle-node bifurcations implied by the model.

Figure 4.16: ξ(x, 0) for a variety of T values, with U_f = 1.
(Left) we have the "raw" \hat\xi; on the right we have \alpha\hat\xi = \xi, the noise term that is actually used.

Figure 4.17: A comparison of my "escape from zero" results with Galaktionov and Posashkov's "approach to infinity" results. Here I assume T = 130, U_f = 10 and T_∞ = 130.1.

Chapter 5

Concluding Remarks

This thesis has been motivated by the modeling of the Min system, and in particular, the study of the numerous mechanisms capable of producing burst nucleation, as observed by Vecchiarelli et al. [64]. In chapter 2, I examined some of the difficulties that can arise when mapping between particle models and PDE systems, particularly when doing so on 2D surfaces such as the cell membrane. I then constructed a simplified version of Schweizer and Bonny's Min model [6, 51], and used it to demonstrate how particle based simulations can lead to burst formation and heterogeneity, even in cases where the associated mass action based PDE model might not.

In chapter 3, I took a fully parameterized model of the Min system [71], and extended that model in order to predict system behavior under a variety of "experimental conditions". Using local perturbation analysis and numeric PDE simulations, I was able to show that in the MinD limited parameter regime, spike formation is indeed observed. These spikes were shown to expand outwards and "freeze" as is observed in Vecchiarelli's experiments, but do not subsequently collapse. This expansion and freezing process I studied through the lens of wave pinning theory [42].

Finally, in chapter 4 I examined how low levels of noise can lead to spike formation from neutrally stable steady states, such as might be observed when passing through a saddle node bifurcation. Numeric simulations demonstrated this burst nucleation for a number of different systems, including a simplified version of a Min model due to Petrášek et al. [45].
Subsequent Large Deviation Theory analysis demonstrated that, given sufficient time, burst solutions can be formed using an arbitrarily small amount of noise for the equation u_t = u_{xx} + u^2. Unlike past models demonstrating pattern formation for systems of coupled reactions with distinct diffusion rates [58], here I am able to demonstrate pattern formation even when dealing with only a single partial differential equation and a single diffusion constant.

5.0.1 Future Work

In chapter 4, I considered burst formation from a neutrally stable steady state in the abstract, and did not bind myself to any particular model (Min system or otherwise). While this allowed me to consider the cleanest possible example of non-linear burst formation, now that the tools and methodology are developed there are a number of questions which must still be answered before this work can be applied to the Min system.

To start with, I would like to test the analytic results given in chapter 4 against simulations, a computationally expensive exercise, but one that I suspect is necessary in order to ratify the somewhat abstract results. Further, the results considered here assume that our system is perfectly balanced at a neutrally stable steady state. It would seem more physically reasonable to instead consider a system losing stability through a saddle node bifurcation via some slowly varying parameter. Determining how the bursting behavior is affected by the interplay between the small noise amplitude and the slowly changing bifurcation parameter will be critical in determining how relevant this method of burst formation is to the Min system.

In order to prove relevant to the Min system in a more diagnostic sense, the analysis must be repeated for a number of more physiologically relevant SPDEs, with appropriate consideration given not only to the reactions used, but also to how these might be implemented in a PDE, and to what form of noise belongs with them.
With this accomplished, we would then need to extract from such a model some form of prediction or statistical test that could be used to discriminate between the various proposed reaction schemes, given the data currently available. Based on the available data, potentially discriminating measures include the number of bursts forming per unit area per unit time, and the rate at which bursts increase in size. More fine grained properties, such as the precise shape of bursts, are below the resolution of the data currently available.

In the longer term, chapter 2 demonstrates a wide view of the difficulties in 2D chemistry, but makes no move to resolve any of the issues raised. Although I doubt the existence of any "one size fits all" formula that might take the place of the law of mass action, there is a need for a systematic means of proposing a reaction scheme in a low dimensional system, passing from a full particle model to some form of condensed representation that balances the desire for analytic tractability against the need to account for particle-particle correlation in a more accurate manner.

Bibliography

[1] S. M. Allen and J. W. Cahn. A microscopic theory for antiphase boundary motion and its application to antiphase domain coarsening. Acta Metallurgica, 27(6):1085–1095, June 1979. doi:10.1016/0001-6160(79)90196-2. → page 66

[2] S. S. Andrews and D. Bray. Stochastic simulation of chemical reactions with spatial resolution and single molecule detail. Physical Biology, 1(3-4):137–151, Dec. 2004. doi:10.1088/1478-3967/1/3/001. → page 30

[3] S. S. Andrews, N. J. Addy, R. Brent, and A. P. Arkin. Detailed Simulations of Cell Biology with Smoldyn 2.1. PLOS Computational Biology, 6(3):e1000705, Mar. 2010. doi:10.1371/journal.pcbi.1000705. → pages 11, 30

[4] E. Bi and J. Lutkenhaus. FtsZ ring structure associated with division in Escherichia coli. Nature, 354(6349):161–164, Nov. 1991. doi:10.1038/354161a0. → page 1

[5] R. A.
Blythe and A. J. Bray. Survival probability of a diffusing particle in the presence of Poisson-distributed mobile traps. Physical Review E, 67(4 Pt 1):041101, Apr. 2003. doi:10.1103/PhysRevE.67.041101. → pages 23, 25

[6] M. Bonny, E. Fischer-Friedrich, M. Loose, P. Schwille, and K. Kruse. Membrane Binding of MinE Allows for a Comprehensive Description of Min-Protein Pattern Formation. PLOS Computational Biology, 9(12):e1003347, Dec. 2013. doi:10.1371/journal.pcbi.1003347. → pages 4, 26, 110

[7] M. Bramson and J. L. Lebowitz. Asymptotic Behavior of Densities in Diffusion-Dominated Annihilation Reactions. Physical Review Letters, 61(21):2397–2400, Nov. 1988. doi:10.1103/PhysRevLett.61.2397. → pages 23, 25

[8] A. J. Bray and R. A. Blythe. Exact Asymptotics for One-Dimensional Diffusion with Mobile Traps. Physical Review Letters, 89(15), Sept. 2002. doi:10.1103/PhysRevLett.89.150601. → pages 23, 25

[9] H. S. Carslaw and J. C. Jaeger. Some Two-Dimensional Problems in Conduction of Heat with Circular Symmetry. Proceedings of the London Mathematical Society, s2-46(1):361–388, Jan. 1940. doi:10.1112/plms/s2-46.1.361. → page 18

[10] E. N. Cytrynbaum and B. D. L. Marshall. A Multistranded Polymer Model Explains MinDE Dynamics in E. coli Cell Division. Biophysical Journal, 93(4):1134–1150, Aug. 2007. doi:10.1529/biophysj.106.097162. → page 4

[11] H. Dankowicz and F. Schilder. Recipes for Continuation. Computational Science & Engineering. Society for Industrial and Applied Mathematics, May 2013. doi:10.1137/1.9781611972573. → page 43

[12] P. A. J. de Boer, R. E. Crossley, and L. I. Rothfield. A division inhibitor and a topological specificity factor coded for by the minicell locus determine proper placement of the division septum in E. coli. Cell, 56(4):641–649, Feb. 1989. doi:10.1016/0092-8674(89)90586-2.
→ pages 1, 4

[13] M. D. Donsker and S. R. S. Varadhan. Asymptotics for the Wiener sausage. Communications on Pure and Applied Mathematics, 28(4):525–565, July 1975. doi:10.1002/cpa.3160280406. → pages 23, 24

[14] A. Einstein. Über die von der molekularkinetischen Theorie der Wärme geforderte Bewegung von in ruhenden Flüssigkeiten suspendierten Teilchen. Annalen der Physik, 322(8):549–560, Jan. 1905. doi:10.1002/andp.19053220806. → page 9

[15] M. Erbar. Low noise limit for the invariant measure of a multi-dimensional stochastic Allen-Cahn equation. arXiv:1012.2718 [math], Dec. 2010. → page 102

[16] S. A. Flores, M. Howell, J. J. Daniel, R. Piccolo, and P. J. B. Brown. Absence of the Min System Does Not Cause Major Cell Division Defects in Agrobacterium tumefaciens. Frontiers in Microbiology, 9, Apr. 2018. doi:10.3389/fmicb.2018.00681. → page 2

[17] M. I. Freidlin and A. D. Wentzell. Random Perturbations of Dynamical Systems. Grundlehren der mathematischen Wissenschaften. Springer-Verlag, Berlin Heidelberg, 3rd edition, 2012. → pages 79, 84, 85

[18] V. A. Galaktionov and J. L. Vazquez. The problem of blow-up in nonlinear parabolic equations. Discrete and Continuous Dynamical Systems, 8:399–433, Apr. 2002. → pages 71, 107

[19] C. Gardiner. Stochastic Methods: A Handbook for the Natural and Social Sciences. Springer Series in Synergetics. Springer-Verlag, Berlin Heidelberg, 4th edition, 2009. → page 85

[20] M. Giaquinta and S. Hildebrandt. Calculus of Variations I. Grundlehren der mathematischen Wissenschaften. Springer-Verlag, Berlin Heidelberg, 2004. → page 80

[21] D. T. Gillespie. Exact stochastic simulation of coupled chemical reactions. The Journal of Physical Chemistry, 81(25):2340–2361, Dec. 1977. doi:10.1021/j100540a008. → page 12

[22] D. T. Gillespie.
The chemical Langevin equation. The Journal of Chemical Physics, 113(1):297–306, June 2000. doi:10.1063/1.481811. → pages 14, 15

[23] C. M. Guldberg and P. Waage. Études sur les affinités chimiques. Brøgger & Christie, 1867. → pages 9, 17

[24] I. Gyöngy. Lattice Approximations for Stochastic Quasi-Linear Parabolic Partial Differential Equations driven by Space-Time White Noise II. Potential Analysis, 11(1):1–37, Aug. 1999. doi:10.1023/A:1008699504438. → page 104

[25] I. Gyöngy and D. Nualart. Implicit scheme for quasi-linear parabolic partial differential equations perturbed by space-time white noise. Stochastic Processes and their Applications, 58(1):57–72, July 1995. doi:10.1016/0304-4149(95)00010-5. → page 104

[26] J. Hattne, D. Fange, and J. Elf. Stochastic reaction-diffusion simulation with MesoRD. Bioinformatics, 21(12):2923–2924, June 2005. doi:10.1093/bioinformatics/bti431. → page 15

[27] W. Holmes, M. Mata, and L. Edelstein-Keshet. Local Perturbation Analysis: A Computational Tool for Biophysical Reaction-Diffusion Models. Biophysical Journal, 108(2):230–236, Jan. 2015. doi:10.1016/j.bpj.2014.11.3457. → page 45

[28] Z. Hu, A. Mukherjee, S. Pichoff, and J. Lutkenhaus. The MinC component of the division site selection system in Escherichia coli interacts with FtsZ to prevent polymerization. Proceedings of the National Academy of Sciences, 96(26):14819–14824, Dec. 1999. doi:10.1073/pnas.96.26.14819. → page 4

[29] V. Ivanov and K. Mizuuchi. Multiple modes of interconverting dynamic pattern formation by bacterial cell division proteins. Proceedings of the National Academy of Sciences of the United States of America, 107(18):8071–8078, May 2010. doi:10.1073/pnas.0911036107. → pages x, 38, 39, 42, 68

[30] J. L. W. V. Jensen. Sur les fonctions convexes et les inégalités entre les valeurs moyennes.
Acta Mathematica, 30:175–193, 1906. doi:10.1007/BF02418571. → page 24

[31] A. Jilkine, A. F. M. Marée, and L. Edelstein-Keshet. Mathematical model for spatial segregation of the Rho-family GTPases based on inhibitory crosstalk. Bulletin of Mathematical Biology, 69(6):1943–1978, Aug. 2007. doi:10.1007/s11538-007-9200-6. → page 45

[32] R. V. Kohn, F. Otto, M. G. Reznikoff, and E. Vanden-Eijnden. Action minimization and sharp-interface limits for the stochastic Allen-Cahn equation. Communications on Pure and Applied Mathematics, 60(3):393–438, 2007. doi:10.1002/cpa.20144. → page 102

[33] G.-L. Lan, Z.-M. Ma, and S.-Y. Sun. Coverage of Random Discs Driven by a Poisson Point Process. In Probability Approximations and Beyond, Lecture Notes in Statistics, pages 43–59. Springer, New York, NY, 2012. doi:10.1007/978-1-4614-1966-2_4. → page 28

[34] P. S. Laplace. Memoir on the Probability of the Causes of Events. Statistical Science, 1(3):364–378, 1986. → page 81

[35] H. Larralde. A first passage time distribution for a discrete version of the Ornstein–Uhlenbeck process. Journal of Physics A: Mathematical and General, 37(12):3759, 2004. doi:10.1088/0305-4470/37/12/003. → page 80

[36] M. Loose, E. Fischer-Friedrich, J. Ries, K. Kruse, and P. Schwille. Spatial Regulators for Bacterial Cell Division Self-Organize into Surface Waves in Vitro. Science, 320(5877):789–792, May 2008. doi:10.1126/science.1154413. → page 5

[37] M. Loose, E. Fischer-Friedrich, C. Herold, K. Kruse, and P. Schwille. Min protein patterns emerge from rapid rebinding and membrane interaction of MinE. Nature Structural & Molecular Biology, 18(5):577–583, May 2011. doi:10.1038/nsmb.2037. → page 37

[38] J. Martins, K. R. Naqvi, and E. Melo.
Kinetics of Two-Dimensional Diffusion-Controlled Reactions: A Monte Carlo Simulation of Hard-Disk Reactants Undergoing a Pearson-Type Random Walk. The Journal of Physical Chemistry B, 104(20):4986–4991, May 2000. doi:10.1021/jp993902z. → pages 20, 23

[39] M. A. Mata, M. Dutot, L. Edelstein-Keshet, and W. R. Holmes. A model for intracellular actin waves explored by nonlinear local perturbation analysis. Journal of Theoretical Biology, 334:149–161, Oct. 2013. doi:10.1016/j.jtbi.2013.06.020. → page 46

[40] E. Melo and J. Martins. Kinetics of bimolecular reactions in model bilayers and biological membranes. A critical review. Biophysical Chemistry, 123(2-3):77–94, Sept. 2006. doi:10.1016/j.bpc.2006.05.003. → pages 10, 23

[41] R. Merkel and E. Sackmann. Nonstationary Dynamics of Excimer Formation in Two-Dimensional Fluids. The Journal of Physical Chemistry, 98(16):4428–4442, Apr. 1994. doi:10.1021/j100067a034. → page 20

[42] Y. Mori, A. Jilkine, and L. Edelstein-Keshet. Wave-Pinning and Cell Polarity from a Bistable Reaction-Diffusion System. Biophysical Journal, 94(9):3684–3697, May 2008. doi:10.1529/biophysj.107.120824. → pages 59, 73, 110

[43] Y. Mori, A. Jilkine, and L. Edelstein-Keshet. Asymptotic and bifurcation analysis of wave-pinning in a reaction-diffusion model for cell polarization. SIAM Journal on Applied Mathematics, 71(4):1401–1427, 2011. doi:10.1137/10079118X. → pages 59, 62, 64

[44] E. Olivieri and M. E. Vares. Large Deviations and Metastability. Cambridge University Press, Feb. 2005. → page 81

[45] Z. Petrášek and P. Schwille. Simple membrane-based model of the Min oscillator. New Journal of Physics, 17(4):043023, 2015. doi:10.1088/1367-2630/17/4/043023. → pages 4, 76, 111

[46] D. M. Raskin and P. A. J. de Boer. Rapid pole-to-pole oscillation of a protein required for directing division to the middle of Escherichia coli.
Proceedings of the National Academy of Sciences of the United States of America, 96(9):4971–4976, Apr. 1999. → pages 3, 4, 5

[47] F. Rassoul-Agha and T. Seppäläinen. A Course on Large Deviations with an Introduction to Gibbs Measures. American Mathematical Society, Providence, Rhode Island, Apr. 2015. → page 79

[48] L. Ratke and P. W. Voorhees. Growth and Coarsening: Ostwald Ripening in Material Processing. Springer Science & Business Media, Jan. 2002. → page 76

[49] K. Razi Naqvi. Diffusion-controlled reactions in two-dimensional fluids: discussion of measurements of lateral diffusion of lipids in biological membranes. Chemical Physics Letters, 28(2):280–284, Sept. 1974. doi:10.1016/0009-2614(74)80073-4. → pages 18, 23

[50] M. D. Ryser, N. Nigam, and P. F. Tupper. On the well-posedness of the stochastic Allen-Cahn equation in two dimensions. Journal of Computational Physics, 231(6):2537–2550, Mar. 2012. arXiv:1104.0720. → page 104

[51] J. Schweizer, M. Loose, M. Bonny, K. Kruse, I. Mönch, and P. Schwille. Geometry sensing by self-organized protein patterns. Proceedings of the National Academy of Sciences of the United States of America, 109(38):15283–15288, Sept. 2012. doi:10.1073/pnas.1206953109. → pages 4, 5, 26, 110

[52] M. Smoluchowski. Drei Vorträge über Diffusion, Brownsche Molekularbewegung und Koagulation von Kolloidteilchen. Pisma Mariana Smoluchowskiego, 2(1):530–594, 1927. → pages 9, 17, 23

[53] F. Spitzer. Electrostatic capacity, heat flow, and Brownian motion. Zeitschrift für Wahrscheinlichkeitstheorie und Verwandte Gebiete, 3(2):110–121, June 1964. doi:10.1007/BF00535970. → page 24

[54] F. Spitzer. Electrostatic Capacity, Heat Flow, and Brownian Motion. In R. Durrett and H.
Kesten, editors, Random Walks, Brownian Motion, and Interacting Particle Systems: A Festschrift in Honor of Frank Spitzer, Progress in Probability, pages 53–65. Birkhäuser Boston, Boston, MA, 1991. doi:10.1007/978-1-4612-0459-6_4. → page 25

[55] S. H. Strogatz. Nonlinear Dynamics and Chaos: With Applications to Physics, Biology, Chemistry, and Engineering. Westview Press, Boulder, CO, 2nd edition, Mar. 2015. → page 45

[56] Q. Sun and W. Margolin. FtsZ Dynamics during the Division Cycle of Live Escherichia coli Cells. Journal of Bacteriology, 180(8):2050–2056, Apr. 1998. → page 1

[57] D. C. Torney, T. T. Warnock, and P. Kollman. Computer Simulation of Diffusion-Limited Chemical Reactions in Three Dimensions. The International Journal of Supercomputing Applications, 1(2):33–43, June 1987. doi:10.1177/109434208700100204. → page 21

[58] A. M. Turing. The Chemical Basis of Morphogenesis. Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences, 237(641):37–72, 1952. → pages 9, 111

[59] G. E. Uhlenbeck and L. S. Ornstein. On the Theory of the Brownian Motion. Physical Review, 36(5):823–841, Sept. 1930. doi:10.1103/PhysRev.36.823. → page 85

[60] V. A. Galaktionov and S. A. Posashkov. The equation u_t = u_xx + u^p. Localization, asymptotic behaviour of unbounded solutions. Keldysh Inst. Appl. Math. Acad. Sci. USSR, Preprint No. 97, 1985. → page 107

[61] J. M. Vanderkooi, S. Fischkoff, M. Andrich, F. Podo, and C. S. Owen. Diffusion in two dimensions: Comparison between diffusional fluorescence quenching in phospholipid vesicles and in isotropic solution. The Journal of Chemical Physics, 63(8):3661–3666, Oct. 1975. doi:10.1063/1.431761. → page 20

[62] A. Varma, K. C. Huang, and K. D. Young.
The Min System as a General Cell Geometry Detection Mechanism: Branch Lengths in Y-Shaped Escherichia coli Cells Affect Min Oscillation Patterns and Division Dynamics. Journal of Bacteriology, 190(6):2106–2117, Mar. 2008. ISSN 0021-9193, 1098-5530. doi:10.1128/JB.00720-07. → pages 4, 5

[63] A. G. Vecchiarelli, M. Li, M. Mizuuchi, and K. Mizuuchi. Differential affinities of MinD and MinE to anionic phospholipid influence Min patterning dynamics in vitro. Molecular Microbiology, 93(3):453–463, Aug. 2014. ISSN 1365-2958. doi:10.1111/mmi.12669. → page 102

[64] A. G. Vecchiarelli, M. Li, M. Mizuuchi, L. C. Hwang, Y. Seol, K. C. Neuman, and K. Mizuuchi. Membrane-bound MinDE complex acts as a toggle switch that drives Min oscillation coupled to cytoplasmic depletion of MinD. Proceedings of the National Academy of Sciences of the United States of America, 113(11):E1479–E1488, Mar. 2016. ISSN 0027-8424. doi:10.1073/pnas.1600644113. → pages vi, 4, 5, 6, 7, 35, 38, 41, 67, 70, 77, 110

[65] V. Grieneisen. Dynamics of auxin patterning in plant morphogenesis. PhD thesis, University of Utrecht, Utrecht, The Netherlands, 2009. → page 45

[66] M. S. Veshchunov. A new approach to diffusion-limited reaction rate theory. Journal of Engineering Thermophysics, 20(3):260, Sept. 2011. ISSN 1810-2328, 1990-5432. doi:10.1134/S1810232811030040. → page 23

[67] E. O. Voit, H. A. Martens, and S. W. Omholt. 150 Years of the Mass Action Law. PLOS Computational Biology, 11(1):e1004012, Jan. 2015. ISSN 1553-7358. doi:10.1371/journal.pcbi.1004012. → page 17

[68] C. Wagner. Theorie der Alterung von Niederschlägen durch Umlösen (Ostwald-Reifung). Zeitschrift für Elektrochemie, Berichte der Bunsengesellschaft für physikalische Chemie, 65(7-8):581–591, Sept. 1961. ISSN 0005-9021. doi:10.1002/bbpc.19610650704. → page 76

[69] J. Walsh. An Introduction to Stochastic Partial-Differential Equations. Lecture Notes in Mathematics, 1180:265–437, 1986. ISSN 0075-8434. → page 85

[70] G. R.
Walther, A. F. M. Marée, L. Edelstein-Keshet, and V. A. Grieneisen. Deterministic Versus Stochastic Cell Polarisation Through Wave-Pinning. Bulletin of Mathematical Biology, 74(11):2570–2599, Sept. 2012. ISSN 0092-8240, 1522-9602. doi:10.1007/s11538-012-9766-5. → pages 59, 66, 73

[71] W. C. Carlquist. A Homotopy-Minimization Method for Parameter Estimation in Differential Equations and its Application in Unraveling the Reaction Mechanism of the Min System. PhD thesis, University of British Columbia, 2018. → pages vi, 6, 35, 110

[72] Wolfram|Alpha. Expected Escape Time Functional. URL *m%5E-2*x%5E-3)+++%5Dx%3D0...inf. → page 94

[73] Wolfram|Alpha. Calculating tau of phi, Oct. 2018. → page 90

[74] Wolfram|Alpha. Infinite Burst Time, Oct. 2018. URL 0%5Einf+(x%5E4%2BC)%5E-0.5+dx. → page 90

[75] L. J. Wu and J. Errington. Nucleoid occlusion and bacterial cell division. Nature Reviews Microbiology, 10(1):8–12, Jan. 2012. ISSN 1740-1526. doi:10.1038/nrmicro2671. → page 1

[76] C. A. Yates and M. B. Flegg. The pseudo-compartment method for coupling partial differential equation and compartment-based models of diffusion. Journal of the Royal Society Interface, 12(106), May 2015. ISSN 1742-5689. doi:10.1098/rsif.2015.0141. → page 16

[77] O. N. Yogurtcu and M. E. Johnson. Theory of bi-molecular association dynamics in 2D for accurate model and experimental parameterization of binding rates. The Journal of Chemical Physics, 143(8):084117, Aug. 2015. ISSN 0021-9606. doi:10.1063/1.4929390. → pages 23, 32

[78] S. B. Yuste, G. Oshanin, K. Lindenberg, O. Bénichou, and J. Klafter. Survival probability of a particle in a sea of mobile traps: A tale of tails. Physical Review E, 78(2), Aug. 2008. ISSN 1539-3755, 1550-2376. doi:10.1103/PhysRevE.78.021105. → page 23

[79] D. Zhang, L. Györgyi, and W. R. Peltier. Deterministic chaos in the Belousov–Zhabotinsky reaction: Experiments and simulations.
Chaos: An Interdisciplinary Journal of Nonlinear Science, 3(4):723–745, Oct. 1993. ISSN 1054-1500. doi:10.1063/1.165933. → page 9

