Interface Motion in the Ostwald Ripening and Chemotaxis Systems

by

Eamon Kavanagh

B.Sc., McMaster University, 2011

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF SCIENCE in The Faculty of Graduate and Postdoctoral Studies (Mathematics)

THE UNIVERSITY OF BRITISH COLUMBIA (Vancouver)

August 2014

© Eamon Kavanagh 2014

Abstract

Ostwald ripening and chemotaxis are two different mechanisms that describe particle motion throughout a domain. Ostwald ripening describes the redistribution of a solid solution due to energy potentials, while chemotaxis is a cellular phenomenon in which organisms move based on the presence of chemical gradients in their environment. Despite the two systems coming from disparate fields, they are connected by the late-stage dynamics of interfacial motion.

For the Ostwald ripening system we consider the case of $N$ droplets in the asymptotic limit of small radii $r_i \ll 1$. We first derive a system of ODEs that describes the motion of the droplets and then improve this calculation by including higher-order terms. Certain properties, such as area preservation and finite-time extinction of certain droplets, are proved, and a numerical example is presented to support the claims.

In the chemotaxis model we look at the asymptotic limit in which diffusive forces are small compared to chemotactic gradients. We use a boundary-fitted coordinate system to derive an equation for the velocity of an arbitrary interface and analyze a few specific examples. The asymptotic results are also explored and confirmed using the finite element and level set methods.

Our analysis reveals the mechanism of movement to be motion by curvature in Ostwald ripening and a surface diffusion law in chemotaxis.
The governing rules of motion may be different in the two systems, but the end result is characteristically similar: an exchange of mass and a smoothing in favor of a larger and more stable configuration of drops.

Preface

Chapters 2 and 3 are based on research conducted by Dr. Michael Ward of UBC's mathematics department and Eamon Kavanagh. I was responsible for much of the analysis, verification, and numerical simulations. Chapter 4 is based on work of myself and Dr. Anthony Peirce, also of UBC's mathematics department. I was responsible for coding, numerical simulations, and error analysis.

Table of Contents

Abstract
Preface
Table of Contents
List of Tables
List of Figures
Acknowledgements
1 Introduction
  1.1 Ostwald Ripening
    1.1.1 Literature Review
  1.2 Chemotaxis
    1.2.1 Literature Review
2 Ostwald Ripening
  2.1 Mullins-Sekerka to Ostwald Ripening Dynamics
  2.2 Properties of the System
    2.2.1 Area Preserving
    2.2.2 Perimeter Reducing
    2.2.3 Finite Time Extinction
  2.3 Summing the Logarithmic Terms
  2.4 Two Droplet Example
3 The Chemotaxis Model
  3.1 Nondimensionalization
  3.2 Boundary Fitted Coordinate Expansion
  3.3 Specific Domain Choices
    3.3.1 Concentric Circles
    3.3.2 Circular Domain with Perturbed Circular Interface
    3.3.3 Many Circular Interfaces
    3.3.4 Arbitrary Shaped Initial Droplet Pattern
4 Numerical Results for the Chemotaxis Model
  4.1 Steady State Solution Using the Finite Element Method
    4.1.1 Comparison to Asymptotics
  4.2 Calculating the Interface Velocity
  4.3 Interface Evolution Using the Level Set Method
    4.3.1 The Signed Distance Function
    4.3.2 Regular Grid Initialization and Velocity Extension
    4.3.3 Interface Evolution
    4.3.4 Discretizations
  4.4 Results
    4.4.1 Off-Center Circle
    4.4.2 Two Axisymmetric Perturbed Circles
    4.4.3 Asymptotic Result: Boundary Effects Cause Instabilities
    4.4.4 Asymptotic Result: Perturbations to Small Drop Decay Faster Than Perturbations to Large Drop
    4.4.5 Asymptotic Result: High Frequency Perturbations Decay Faster Than Low Frequency Perturbations
5 Conclusion
Bibliography
Appendices
A Boundary Fitted Coordinates
B Green's Function for the Unit Disk
List of Tables

4.1 A table of the error for decreasing values of $\delta$. The error quickly decreases as $\delta$ decreases, coinciding with the asymptotic limit $\delta \ll 1$.

B.1 This table shows the convergence rate of the improved Green's function for select values of $\rho$ and $\rho_0$. $M$ is the number of iterations required to reach the specified tolerance of $10^{-8}$ for $P_M$. $D = 1$ and $\theta = \theta_0$.

List of Figures

1.1 $u_m^\pm$ denote the maximum and minimum concentrations respectively and $f_m$ is the equilibrium free energy level.

1.2 1D simulations for the models considered. The minimal model is shown at different times, whereas the others have varying relevant parameters. The solid line is the adapted model, whereas the dot-dash line is the minimal model steady-state solution. In order, the models are: minimal, signal-dependent sensitivity, volume-filling, nonlinear diffusion, saturating signal production, and cell kinetics. $D = .1$, $\chi = 5$. ICs: $u(x,0) = 1$, $v(x,0) = 1 + .1\exp(-10x^2)$.

1.3 One-dimensional numerical simulations for the models considered, now on a larger domain. Models appear in the same order as in the previous figure. $D = .1$ and $\chi = 2$. Initial conditions are set at $u(x,0) = 1$, $v(x,0) = 1 + r(x)$, where $r(x)$ is a random spatial perturbation of the steady state, $-.01 \le r(x) \le .01$. We see that in most cases a coarsening process occurs, and what initially started as a multi-peak structure evolves into fewer and fewer regions of concentration. This is not the case in the cell kinetics regularization, where we see a continual emergence of cells and decay of peaks.

1.4 An example of a 2D simulation. This is a time evolution of a combination of a few regularizations, namely signal-dependent sensitivity ($\alpha = .4$), volume-filling ($\gamma = 10$), nonlinear diffusion ($n = 1$) and saturating chemical production ($\phi = 1$).
$D = .1$, $\chi = 5$, $\Omega = [0,20]\times[0,20]$. ICs: $u(x,y,0) = 1$, $v(x,y,0) = 1 + r(x,y)$, where $r(x,y)$ is the 2D analog of the perturbation from the previous figure. We once again see a coarsening process as time goes on.

1.5 A 1D time evolution picture that shows coarsening occurring on a log time scale for the volume-filling model. In this case, the initial cell density is set at $.5$ with $D = .25$ and $\chi = 4$.

1.6 Coarsening of the volume-filling model. The top row has an initial concentration of $u = .5$ and the bottom has $u = .25$. As above, $D = .25$, $\chi = 4$. Here the domain is $\Omega = [0,25]\times[0,25]$. In this picture black is low cell density and white is high cell density.

2.1 A plot of the solutions to the system of ODEs that comes from both the leading-order expansion and the expansion that includes all of the logarithmic terms. In both cases the system is clearly area preserving and the smallest drop extinguishes in finite time.

3.1 A depiction of the boundary-fitted coordinate system. $\Omega$ is the outer domain and can be partitioned into two regions, $\Omega^-$ ($\eta > 0$) and $\Omega^+$ ($\eta < 0$), with $\Gamma$ representing the interface between them. Note the inward normal and the sign convention for $\eta$.

3.2 Asymptotic solution plotted with the exact solution for $0 < r \le 1$. Here $\delta = .05$. Everywhere except very close to $r = 0$ they are in almost complete agreement.

3.3 Asymptotic solution plotted with the exact solution at $r = 0$. Here $\delta$ ranges from $.5$ to $.01$. As $\delta \to 0$ the approximation becomes more accurate, but it is surprisingly accurate even for fairly large values of $\delta$.

3.4 A plot of $\beta$ as given in (3.32), showing monotonicity for a few values of $\lambda$.
3.5 $G_n(r_0,\lambda)$ for $n = 2,3,4,5$, $\lambda = .2, 1, 5$ on $0 < r_0 < .95$. In each plot it is clear that there is a critical $r_0$ value such that $G_n < 0$ and thus $\dot\eta > 0$ when $\cos(n\theta) > 0$, signifying growth of the perturbation and an instability.

3.6 Profile of $u$ across a cross-section of one drop centered at $\mathbf{x}_i$, where the drop is of size $O(\delta)$.

3.7 A depiction of the results for the case described in Result 3.3.3.4 (iv). We see that the opposite sign of the velocity in the chemotaxis model causes the attractive behavior; thus, a circle off the center deforms and moves towards the boundary.

4.1 An example of a tessellation generated using distmesh and the described methodology.

4.2 A depiction of a triangle with a mapped edge and the reference triangle in $\omega,\xi$ space. The dotted line indicates the previously unmapped quadratic, or possibly linear, triangle edge. $G(x,y)$ is equivalently written as $F^{-1}(\omega,\xi)$.

4.3 Log-log plot of the $L^2$ error versus mesh size. The isoparametric mapping improves accuracy and leads to a higher convergence rate, indicated by the steeper slope. The slopes are 3.67 for the isoparametric elements and 2.02 for quadratic.

4.4 The solution to the steady-state equation with a single cell colony centered at the origin. Here $\delta = .25$ and the initial edge length is $.05$.

4.5 The solution to the steady-state equation with two cell colonies located at $[-.15,-.45]$ and $[.25,.25]$ with radii of $\delta$ and $2\delta$ respectively. Here $\delta = .03125$.

4.6 Log-log plot of the maximum velocity value on the interface versus mesh size. Once again the isoparametric elements provide an improvement in accuracy.
4.7 An example of a set of points (identified by the crosses) one grid node away from an interface.

4.8 An example where reinitialization, particularly with a subcell fix, is very effective at correcting defects in a level set function. Here $\phi = f\cdot\left(\sqrt{(x/4)^2 + (y/2)^2} - 1\right)$, where $f = .1 + (x+3.5)^2 + (y+2)^2$.

4.9 An example of a normal velocity curve on the interface and the resulting narrow-band extension.

4.10 Off-center circle example. Blue is the outer domain, the dotted black line is the initial position, and dark red is the moving drop. For the triangulated mesh we start with an initial edge size of $.05$ with 100 fixed nodes on the interface. The time step for reinitialization, constructing the extension velocity, and solving the level set equation was chosen to be $h/5$, where $h$ is the regular grid mesh size.

4.11 Plot of area loss versus iteration count for the off-center circle example. Area is lost at each iteration but remains small.

4.12 Phase 1 of the two perturbed circle example. Blue is the outer domain, the dotted black line is the initial position, and dark red is the moving drop. For the triangulated mesh we start with an initial edge size of $.05$ with 100 fixed nodes on each interface. The time step for reinitialization, constructing the extension velocity, and solving the level set equation was chosen to be $h/5$, where $h$ is the regular grid mesh size.

4.13 Phase 2 of the two perturbed circle example. Blue is the outer domain, the dotted black line is the initial position, and dark red is the moving drop. For the triangulated mesh we start with an initial edge size of $.05$ with 100 fixed nodes on each interface, which becomes 200 fixed nodes when the domains merge.
The time step for reinitialization, constructing the extension velocity, and solving the level set equation was chosen to be $h/5$, where $h$ is the regular grid mesh size.

4.14 Phase 3 of the two perturbed circle example. Blue is the outer domain, the dotted black line is the initial position, and dark red is the moving drop. For the triangulated mesh we start with an initial edge size of $.05$ with 200 fixed nodes on the interface. The time step for reinitialization, constructing the extension velocity, and solving the level set equation was chosen to be $h/5$, where $h$ is the regular grid mesh size.

4.15 Phase 4 of the two perturbed circle example. Blue is the outer domain, the dotted black line is the initial position, and dark red is the moving drop. For the triangulated mesh we start with an initial edge size of $.05$ with 200 fixed nodes on the interface. The time step for reinitialization, constructing the extension velocity, and solving the level set equation was chosen to be $h/5$, where $h$ is the regular grid mesh size.

4.16 An example of an unstable perturbation due to boundary effects. Blue is the outer domain, the dotted black line is the initial position, and dark red is the moving drop. For the triangulated mesh we start with an initial edge size of $.05$ with 180 fixed nodes on the interface. The time step for reinitialization, constructing the extension velocity, and evolution of the level set equation was selected as $h/5$, where $h$ is the regular grid mesh size.

4.17 Different-sized perturbed circle example. Blue is the outer domain and dark red is the moving drop. For the triangulated mesh we start with an initial edge size of $.05$ with 100 fixed nodes on the interface when $r_0 = .25$ (left) and 200 fixed nodes on the interface when $r_0 = .5$ (right). The time step for reinitialization and constructing the extension velocity was selected as $h/5$, where $h$ is the regular grid mesh size.
In order to capture the quick perturbation-flattening effects, the time step for the level set equation was $h^2/5$.

4.18 Different-frequency perturbed circle example. Blue is the outer domain and dark red is the moving drop. For the triangulated mesh we start with an initial edge size of $.05$ with 150 fixed nodes on the interface. The time step for reinitialization and constructing the extension velocity was selected as $h/5$, where $h$ is the regular grid mesh size. In order to capture the quick perturbation-flattening effects, the time step for the level set equation was $h^2/5$.

Acknowledgements

I would like to thank my supervisors Anthony Peirce and Michael Ward for their help and guidance over the past two years, as well as for giving me this great project to work on. I would also like to thank the professors whose courses I took for their time spent lecturing and mentoring: Neil Balmforth, George Bluman, Chen Greif, Michael Friedlander and Brian Wetton, with additional thanks to Ian Mitchell for his help on level set methods. I want to thank Michael Lindstrom and Iain Moyles for answering many questions along the way and providing the advice that comes with years in graduate school. Last but not least, I want to express my appreciation to the helpful staff of the Mathematics Department as well as the Institute of Applied Mathematics.

Chapter 1

Introduction

1.1 Ostwald Ripening

Ostwald ripening, colloquially known as 'survival of the fattest', is a coarsening process that describes the change of an inhomogeneous structure over time. Similar particles separate into groups throughout a domain and exchange mass with each other, favoring larger regions over smaller ones. The result is an increase in size and a decrease in the overall number of individual groups. Mass diffuses against concentration gradients in order to reduce interfacial energy (the mean curvature of all interfaces).
By the Gibbs-Thomson condition, this reduces the gradient of the chemical potential and thus the driving force for the diffusion (see [2]). Ostwald ripening can be seen in an oil and water emulsion separating over time and typically occurs after a phase transformation such as spinodal decomposition.

Spinodal decomposition is a form of phase separation in which a homogeneous solution separates into regions of distinct concentration, each uniform within the region. Spinodal decomposition may occur when an alloy composed of only two metals is sufficiently heated and then quenched (rapidly cooled) into an unstable state. After this metastable equilibrium is reached, the stage is set for coarsening and particles diffuse to reach a static equilibrium.

The form of phase separation described above has been modeled by the nonlinear Cahn-Hilliard equation, first proposed in [4]. The groundwork for this model comes from an earlier work by John W. Cahn and John E. Hilliard in [5], where a measure of the total free energy in a binary medium is derived. From this free energy functional one can derive the chemical potential and apply the diffusion equation to obtain a model of phase separation. Much like how Ostwald ripening occurs after a phase transformation, a Mullins-Sekerka free boundary problem can be derived from the late stages ($t \gg 1$) of the Cahn-Hilliard equation that describes this coarsening process. This is quite remarkable and gives credence to the belief that the Cahn-Hilliard equation is an accurate model for phase separation.

In the literature review below, we present the Cahn-Hilliard equation and its derivation along with some interesting asymptotic results from [23].
We employ matched asymptotic expansions and multiple time scales to derive the aforementioned Mullins-Sekerka problem that will eventually lead to Ostwald ripening.

1.1.1 Literature Review

We begin with the concentration $u(\mathbf{x},t)$ of one phase in a binary mixture contained in a domain $\Omega \subseteq \mathbb{R}^n$, and a function $F$ that is a measure of the bulk free energy density. Performing a Taylor expansion of this function $F$, we see
\[
F(u, \nabla u, \nabla^2 u, \ldots) = F(u) + \sum_i \frac{\partial F}{\partial u_{x_i}}\,\partial_{x_i} u + \sum_{i,j} \alpha^1_{ij}\,\partial^2_{x_i x_j} u + \frac{1}{2}\sum_{i,j} \alpha^2_{ij}\,\partial_{x_i} u\,\partial_{x_j} u + \ldots,
\]
where $\alpha^{1,2}_{ij}$ are defined by
\[
\alpha^1_{ij} = \frac{\partial F}{\partial(\partial^2 u/\partial x_i \partial x_j)}, \qquad \alpha^2_{ij} = \frac{\partial^2 F}{\partial(\partial u/\partial x_i)\,\partial(\partial u/\partial x_j)}.
\]
We assume the solution to be isotropic, which allows us to discard the 'off-diagonal' terms $i \ne j$. This means that there is no favored direction; everything diffuses as one would expect. Assuming isotropy also means the free energy must be symmetric under reflection and rotation of the axes, which allows us to conclude that the polarization vector (the second term in the Taylor expansion) must be zero. Thus, we have
\[
\frac{\partial F}{\partial u_{x_i}} = 0, \qquad \alpha^1_{ii} = \alpha_1 = \frac{\partial F}{\partial \nabla^2 u}, \qquad \alpha^2_{ii} = \alpha_2 = \frac{\partial^2 F}{(\partial|\nabla u|)^2}.
\]
Upon integrating over the domain, we obtain
\[
\int_\Omega \left[ F(u) + \alpha_1 \nabla^2 u + \alpha_2 |\nabla u|^2 + \ldots \right] dV.
\]
Applying Green's first identity to the second term in the integral, where the boundary terms vanish due to imposed Neumann conditions, we have
\[
\int_\Omega \alpha_1 \nabla^2 u \, dV = -\int_\Omega \nabla\alpha_1 \cdot \nabla u \, dV.
\]
Realizing that $\nabla\alpha_1 = (\partial\alpha_1/\partial u)\nabla u$ and letting $\tfrac{1}{2}\epsilon^2 = \alpha_2 - \partial\alpha_1/\partial u$ with $\epsilon \ll 1$, we arrive at the free energy functional, a measure of the free energy of $u(\mathbf{x},t)$ in $\Omega$, as derived in [5]:
\[
I[u] = \int_\Omega F(u) + \frac{\epsilon^2}{2}|\nabla u(\mathbf{x})|^2 \, d\mathbf{x}. \qquad (1.1)
\]
The second term in the functional is motivated by the thought that higher surface tension will increase the free energy in a system. $F(u)$ is chosen to be a double-welled quartic polynomial.
This double-welled structure is known as the 'Flory-Huggins' free energy density and, with its shape, spinodal decomposition can occur (see [27]).

Figure 1.1: $u_m^\pm$ denote the maximum and minimum concentrations respectively and $f_m$ is the equilibrium free energy level. (The plot shows $F'(u)$ against $u$, with the equilibrium values $u_m^-$, $u_s^-$, $u_s^+$, $u_m^+$ marked.)

Now, we take the variational derivative to derive the chemical potential
\[
\mu = \frac{\delta I}{\delta u} = \lim_{\delta\to 0}\frac{I[u+\delta v] - I[u]}{\delta}.
\]
We calculate
\[
I[u+\delta v] = \int_\Omega F(u+\delta v) + \frac{\epsilon^2}{2}|\nabla u(\mathbf{x}) + \delta\nabla v|^2 \, d\mathbf{x}
= \int_\Omega F(u) + F'(u)\delta v + \ldots + \frac{\epsilon^2}{2}\left(|\nabla u(\mathbf{x})|^2 + 2\delta\,\nabla u\cdot\nabla v + \ldots\right) d\mathbf{x}.
\]
Thus, to leading order in $\delta$, the difference $I[u+\delta v] - I[u]$ is
\[
\int_\Omega F'(u)\,\delta v + \epsilon^2\,\nabla u\cdot\nabla(\delta v)\, d\mathbf{x}.
\]
Now, applying Green's first identity with Neumann boundary conditions, and with $v$ arbitrary, we have
\[
\mu = F'(u) - \epsilon^2\Delta u. \qquad (1.2)
\]
Finally, we apply the diffusion equation to the above to obtain the Cahn-Hilliard equation
\[
u_t = \Delta\mu = \Delta\left(F'(u) - \epsilon^2\Delta u\right), \qquad
\mathbf{n}\cdot\nabla\mu = 0, \quad \mathbf{n}\cdot\nabla u = 0, \quad \mathbf{x}\in\partial\Omega.
\]

Energy Decay

Here we prove that $dI[u]/dt \le 0$, which implies that the energy decays as $t\to\infty$ and thus the system will eventually reach an equilibrium concentration. Taking the time derivative of the free energy functional given in equation (1.1), we see that
\[
\frac{dI[u]}{dt} = \int_\Omega \left(\frac{dF(u)}{dt} + \frac{\epsilon^2}{2}\frac{d}{dt}|\nabla u|^2\right) d\mathbf{x}
= \int_\Omega \left(F'(u)\,u_t + \epsilon^2\,\nabla u\cdot\nabla u_t\right) d\mathbf{x}.
\]
Applying Green's first identity, we have
\[
\int_\Omega \epsilon^2\,\nabla u\cdot\nabla u_t \, d\mathbf{x} = \int_{\partial\Omega} \epsilon^2 u_t \frac{\partial u}{\partial n}\, dS - \int_\Omega \epsilon^2 u_t\,\Delta u \, d\mathbf{x},
\]
where the boundary term vanishes by the imposed boundary conditions. Thus, we have
\[
\frac{dI[u]}{dt} = \int_\Omega u_t\left(-\epsilon^2\Delta u + F'(u)\right) d\mathbf{x} = \int_\Omega \Delta\mu\left(-\epsilon^2\Delta u + F'(u)\right) d\mathbf{x}.
\]
Now, with $\mu$ defined in (1.2), we substitute $\mu$ into the equation above and apply Green's first identity once again. As before, the boundary terms vanish due to the imposed boundary conditions and we obtain
\[
\frac{dI[u]}{dt} = -\int_\Omega |\nabla\mu|^2 \, d\mathbf{x} \le 0.
\]
Thus, we conclude that the energy decays in time and the system eventually reaches an equilibrium solution.

After Internal Layer Equilibration

At this point many different directions can be taken with the Cahn-Hilliard equation. [23] first looks at the behavior on a very fast ($T_2 = t/\epsilon^2$) time scale.
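The energy-decay and mass-conservation properties derived above are easy to check numerically. The following is a minimal sketch (not code from the thesis): a semi-implicit Fourier spectral solver for the 1D Cahn-Hilliard equation $u_t = (u^3 - u - \epsilon^2 u_{xx})_{xx}$, using periodic boundary conditions for simplicity (the thesis uses Neumann); the grid, scheme, and parameters are illustrative assumptions.

```python
import numpy as np

# Semi-implicit spectral step: stiff eps^2 u_xxxx term implicit,
# nonlinear term (u^3 - u)_xx explicit.
N, L, eps, dt = 256, 2 * np.pi, 0.1, 1e-4
x = np.linspace(0, L, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)       # wavenumbers
u = 0.1 * np.random.default_rng(0).standard_normal(N)
mean0 = np.mean(u)                                # initial mass (mean of u)

def energy(u):
    # Discrete I[u] = int F(u) + (eps^2/2)|u_x|^2 dx with F(u) = (u^2-1)^2/4,
    # so that F'(u) = u^3 - u as in the text.
    ux = np.real(np.fft.ifft(1j * k * np.fft.fft(u)))
    return np.mean(0.25 * (u**2 - 1)**2 + 0.5 * eps**2 * ux**2) * L

energies = [energy(u)]
for _ in range(2000):
    u_hat = (np.fft.fft(u) - dt * k**2 * np.fft.fft(u**3 - u)) \
            / (1 + dt * eps**2 * k**4)
    u = np.real(np.fft.ifft(u_hat))
    energies.append(energy(u))

assert energies[-1] < energies[0]                 # free energy has decayed
assert np.isclose(np.mean(u), mean0)              # mass is conserved
```

The $k = 0$ mode is untouched by the update, so the mean of $u$ is conserved exactly, mirroring the Neumann-flux argument in the text.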
Instead of following this route we will restrict the analysis to the normal $t$ scale and, in this way, derive a limiting system characterizing Ostwald ripening.

First, we define the front $\Gamma$, which moves in time and divides $\Omega$ into two separate regions, $\Omega^+$ and $\Omega^-$. We also assume that $\Gamma$ does not intersect the boundary of $\Omega$ and that $u$ reaches the maximum concentration in $\Omega^+$ and the minimum in $\Omega^-$.

Most of the behavior in which we are interested occurs near $\Gamma$. In order to accurately characterize $u$ and $\mu$ near this front we define a new variable, $z$, that measures the stretched normal distance from the front:
\[
z = \frac{\phi(\mathbf{x},t)}{\epsilon},
\]
where $\phi(\mathbf{x},t)$ is the signed distance from $\Gamma$ to a point $\mathbf{x}$, with $\phi > 0$ if $\mathbf{x}\in\Omega^+$ and $\phi < 0$ if $\mathbf{x}\in\Omega^-$.

One thing to note is that if a function depends upon $(z,\mathbf{x},t)$ and $\mathbf{x}$ changes in a direction normal to $\Gamma$ with $z$ held fixed, then the function should remain unchanged, up to terms of $O(\epsilon)$. That is, $w(z,\mathbf{x},t) = w(z,\mathbf{x}+\delta\nabla\phi(\mathbf{x},t),t)$ for small real $\delta$.

We also need to define
\[
\mathbf{n} = \nabla\phi(\mathbf{x},t), \qquad \kappa = \nabla\cdot\mathbf{n}, \qquad V = \partial_t\phi(\mathbf{x},t).
\]
Here $\mathbf{n}$ is the unit normal to $\Gamma$ pointing towards $\Omega^+$, $\kappa$ is the mean curvature of the interface, positive when the center of curvature is in $\Omega^-$, and $V$ is the normal velocity of the front, positive when moving towards $\Omega^-$.

With the new variable definition, $v(\mathbf{x},t) = w(z,\mathbf{x},t) = w(\phi(\mathbf{x},t)/\epsilon,\mathbf{x},t)$, the derivatives change in the following way:
\[
\nabla v = \nabla_{\mathbf{x}} w + \frac{1}{\epsilon}\,\mathbf{n}\,\partial_z w, \qquad
\Delta v = \Delta_{\mathbf{x}} w + \frac{1}{\epsilon}\,\kappa\,\partial_z w + \frac{1}{\epsilon^2}\,\partial_{zz} w, \qquad
\partial_t v = \partial_t w + \frac{1}{\epsilon}\,V\,\partial_z w. \qquad (1.3)
\]
There is no $\partial_z\nabla_{\mathbf{x}} w\cdot\nabla\phi$ term in the second equation above because we require that the function be unchanged when $z$ is held fixed and $\mathbf{x}$ moves in a direction normal to the front. Continuing with the asymptotic expansion, we expand $\mu$ and $u$ near the front as
\[
\tilde\mu = \tilde\mu_0 + \epsilon\tilde\mu_1 + \epsilon^2\tilde\mu_2 + \ldots, \qquad
\tilde u = \tilde u_0 + \epsilon\tilde u_1 + \epsilon^2\tilde u_2 + \ldots
.
\]
Expanding terms of the Cahn-Hilliard equation to the lowest order yields
\[
0 = \partial_{zz}\tilde\mu_0 = \partial_{zz}\left(F'(\tilde u_0) - \partial_{zz}\tilde u_0\right), \qquad
V\partial_z\tilde u_0 = \partial_{zz}\tilde\mu_1 + \kappa\,\partial_z\tilde\mu_0.
\]
Integrating the equation for $\tilde\mu_0$, we see that
\[
F'(\tilde u_0) - \partial_{zz}\tilde u_0 = a_0 z + a_1,
\]
but we assume $\tilde u_0$ must be bounded as $z\to\pm\infty$ and thus $a_0 = 0$.

For $a_1$ we look at the far-field values of $z$, $z \gg 0$ or $z \ll 0$. Here, since the phase transformation has ended and we are in a state of quasi-equilibrium, we expect the values of $\tilde u_0$ to be constant in space; that is, $\tilde u_0 = u_m^+$ or $\tilde u_0 = u_m^-$. Thus, $\partial_{zz}\tilde u_0 = 0$ for $z \gg 0$ or $z \ll 0$ and, assuming $u$ is smooth, we determine that $a_1 = F'(\tilde u_0(\pm\infty)) = f_m$ (see Figure 1.1 for the structure of $F'$). Thus, the equation to solve is
\[
\tilde\mu_0 = F'(\tilde u_0) - \partial_{zz}\tilde u_0 = f_m. \qquad (1.4)
\]
Without loss of generality, we take $F'(\tilde u_0) = \tilde u_0^3 - \tilde u_0$ and as a result $f_m = 0$, which centers the bulk chemical potential about $0$ and normalizes the maximum and minimum concentrations of $u$ to $1$ and $-1$, respectively. The problem becomes
\[
\tilde u_0^3 - \tilde u_0 = \partial_{zz}\tilde u_0, \qquad
\lim_{z\to\pm\infty}\tilde u_0(z) = \pm 1, \qquad \tilde u_0(0) = 0.
\]
Now, let $W = \partial_z\tilde u_0$, so that $\partial_{zz}\tilde u_0 = W_z = W W_{\tilde u_0}$. This gives us a separable equation that we can integrate:
\[
(\tilde u_0^3 - \tilde u_0)\, d\tilde u_0 = W\, dW \;\longrightarrow\; \frac{\tilde u_0^4}{4} - \frac{\tilde u_0^2}{2} + c_0 = \frac{W^2}{2}.
\]
Using the fact that $\tilde u_0(\pm\infty) = \pm 1$ and that the derivative vanishes in the far field, as we are in phase equilibrium, we see that $c_0 = 1/2$, which gives us
\[
\frac{1}{2}\left(\tilde u_0^2 - 1\right)^2 = \left(\partial_z\tilde u_0\right)^2.
\]
Choosing the negative square root, we obtain another separable equation
\[
\frac{\partial\tilde u_0}{1 - \tilde u_0^2} = \frac{\partial z}{\sqrt{2}} \;\Longrightarrow\; \operatorname{arctanh}(\tilde u_0) = \frac{z}{\sqrt{2}} + c_1.
\]
To match the boundary condition $\tilde u_0(0) = 0$ we set $c_1 = 0$. The other two boundary conditions are already satisfied and we obtain an interface that connects the two phases smoothly. Thus, the planar front profile is
\[
\tilde u_0(z) = \tanh\left(\frac{z}{\sqrt{2}}\right).
\]
This can be easily adapted to fit any $F$ of this type.
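The tanh profile derived above can be verified symbolically. A small sketch (not from the thesis) using sympy, checking the ODE residual and both boundary behaviors:

```python
import sympy as sp

# Check that u0(z) = tanh(z/sqrt(2)) satisfies u0^3 - u0 = u0''
# with u0(0) = 0 and u0 -> +-1 as z -> +-infinity.
z = sp.symbols('z', real=True)
u0 = sp.tanh(z / sp.sqrt(2))

residual = sp.simplify(u0**3 - u0 - sp.diff(u0, z, 2))
assert residual == 0                      # ODE is satisfied identically
assert u0.subs(z, 0) == 0                 # front is centered at z = 0
assert sp.limit(u0, z, sp.oo) == 1        # far-field phase u = +1
assert sp.limit(u0, z, -sp.oo) == -1      # far-field phase u = -1
```

The same check works for any double-well $F$ of this type by replacing the cubic $F'(u) = u^3 - u$ and the corresponding profile.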
To conclude this section we define the front profile as $U = \tilde u_0$, where $U$ satisfies the planar front profile equation given by
\[
F'(U) - \partial_{zz} U = f_m. \qquad (1.5)
\]

Late-Stage Dynamics

Next, we provide a formal derivation of the Mullins-Sekerka free boundary problem that gives the equations used in [2]. We assume the solution is at equilibrium on the $O(1)$ time scale and define a slower time variable, $\tau = \epsilon t$. To continue with the matching, $\mu_1$ and $\mu_2$ are necessary. The higher-order expansions of $\mu$ are given by
\[
\mu = F'(u_0 + \epsilon u_1 + \epsilon^2 u_2 + \ldots) - \epsilon^2\Delta(u_0 + \ldots)
= F'(u_0) + \epsilon F''(u_0)u_1 + \epsilon^2\left(F''(u_0)u_2 + \tfrac{1}{2}F'''(u_0)u_1^2 - \Delta u_0\right) + O(\epsilon^3).
\]
By writing $\mu = \mu_0 + \epsilon\mu_1 + \epsilon^2\mu_2 + \ldots$ we identify that
\[
\mu_0 = F'(u_0), \qquad \mu_1 = F''(u_0)u_1, \qquad \mu_2 = F''(u_0)u_2 + \tfrac{1}{2}F'''(u_0)u_1^2 - \Delta u_0.
\]
This gives us the three equations
\[
0 = \Delta\mu_0 = \Delta F'(u_0), \qquad
\partial_\tau u_0 = \Delta\mu_1 = \Delta\left(F''(u_0)u_1\right), \qquad
\partial_\tau u_1 = \Delta\mu_2 = \Delta\left(F''(u_0)u_2 + \tfrac{1}{2}F'''(u_0)u_1^2 - \Delta u_0\right).
\]
These equations, as before, have Neumann boundary conditions, and the interfacial matching conditions come from matching to the inner solution. The next step is to proceed with the inner expansion, now on the $\tau$ time scale. The derivative with respect to $\tau$ for an arbitrary function $v(\mathbf{x},t) = w(z,\mathbf{x},t)$ is
\[
\partial_t v = \epsilon\,\partial_\tau w + V\,\partial_z w,
\]
where $V$ is now the $O(1)$ normal front velocity on the slow time scale. As with the outer expansion, we need to develop more information about the inner equations. The changes in the spatial derivatives for the inner functions $\tilde\mu$ and $\tilde u$ are the same as in (1.3). When we substitute these spatial derivative changes, together with the asymptotic expansion, into (1.2), we obtain
\[
\tilde\mu_0 = F'(\tilde u_0) - \partial_{zz}\tilde u_0, \qquad (1.6a)
\]
\[
\tilde\mu_1 = F''(\tilde u_0)\tilde u_1 - \partial_{zz}\tilde u_1 - \kappa\,\partial_z\tilde u_0, \qquad (1.6b)
\]
\[
\tilde\mu_2 = F''(\tilde u_0)\tilde u_2 - \partial_{zz}\tilde u_2 - \kappa\,\partial_z\tilde u_1 + \tfrac{1}{2}F'''(\tilde u_0)\tilde u_1^2 - \Delta_{\mathbf{x}}\tilde u_0. \qquad (1.6c)
\]
Using this information we expand $\partial_t u = \Delta\mu$ as on the $t = O(1)$ time scale. Since the lowest-order term involving time is $O(1)$ and the lowest-order term for spatial derivatives is $O(\epsilon^{-2})$, the time derivative term only arises in the third equation. We obtain
\[
0 = \partial_{zz}\tilde\mu_0, \qquad (1.7a)
\]
\[
0 = \partial_{zz}\tilde\mu_1 + \kappa\,\partial_z\tilde\mu_0, \qquad (1.7b)
\]
\[
V\partial_z\tilde u_0 = \partial_{zz}\tilde\mu_2 + \kappa\,\partial_z\tilde\mu_1 + \Delta_{\mathbf{x}}\tilde\mu_0.
\qquad (1.7c)
\]
The equation for $\tilde\mu_0$ has already been solved on the $t = O(1)$ time scale, and thus $\tilde\mu_0 = f_m$ from (1.4). This is to be expected, as to leading order the solution should still be in equilibrium.

For the equation at the next order, we integrate it twice in $z$ and substitute for $\tilde\mu_1$ as defined in (1.6b) to arrive at
\[
b_0 z + b_1 = \tilde\mu_1 = F''(U)\tilde u_1 - \partial_{zz}\tilde u_1 - \kappa\,U'(z). \qquad (1.8)
\]
To determine the constants $b_0$ and $b_1$ we need to derive the matching conditions for $\mu$. Roughly speaking, for a point $\mathbf{x}$ on $\Gamma$, a nearby point can be described as $\mathbf{x} + \epsilon z\mathbf{n}$. Thus, to derive boundary conditions, the chemical potentials need to agree as $z\to 0^\pm$ in the outer expansion and $z\to\pm\infty$ in the inner. We require
\[
(\mu_0 + \epsilon\mu_1 + \ldots)(\mathbf{x} + \epsilon z\mathbf{n}, t) \sim (\tilde\mu_0 + \epsilon\tilde\mu_1 + \ldots)(z,\mathbf{x},t). \qquad (1.9)
\]
For the left-hand side we take $z\to 0^\pm$ and for the right $z\to\pm\infty$. Performing a Taylor expansion (for $z\to 0^+$),
\[
\mu_0^+ + \epsilon\left(\mu_1^+ + z D_{\mathbf{n}}\mu_0^+\right) + \epsilon^2\left(\mu_2^+ + z D_{\mathbf{n}}\mu_1^+ + \tfrac{1}{2}z^2 D_{\mathbf{n}}^2\mu_0^+\right) + \ldots, \qquad (1.10)
\]
where $D_{\mathbf{n}}$ is the directional derivative along $\mathbf{n}$. The same argument can be made for $z\to 0^-$. With this in hand we can describe the matching conditions for the chemical potential by collecting terms of the same order from (1.9) and (1.10):
\[
\mu_0^\pm(\mathbf{x},t) = \tilde\mu_0^\pm(z,\mathbf{x},t), \qquad (1.11a)
\]
\[
\left(\mu_1^\pm + z D_{\mathbf{n}}\mu_0^\pm\right)(\mathbf{x},t) = \tilde\mu_1^\pm(z,\mathbf{x},t), \qquad (1.11b)
\]
\[
\left(\mu_2^\pm + z D_{\mathbf{n}}\mu_1^\pm + \tfrac{1}{2}z^2 D_{\mathbf{n}}^2\mu_0^\pm\right)(\mathbf{x},t) = \tilde\mu_2^\pm(z,\mathbf{x},t), \qquad (1.11c)
\]
where for everything on the right-hand side we take $\lim_{z\to\pm\infty}$.

Now, with these matching conditions in hand, we can solve for $b_0$ in equation (1.8). Since we know that $\partial_z\tilde\mu_1 = b_0(\mathbf{x},t)$ from (1.8), we can take a partial derivative of the matching condition for $\tilde\mu_1$ given in (1.11b) with respect to $z$ to determine $b_0$. This leads us to
\[
\mathbf{n}\cdot\nabla\mu_0^\pm = \partial_z\tilde\mu_1, \qquad \text{as } z\to\pm\infty.
\]
Since $\mu_0$ must match $\tilde\mu_0$ near the front from (1.11a), and $\tilde\mu_0 = f_m$ from (1.4), we know that, near the front, $\nabla\mu_0 = 0$ because $f_m$ is a constant. This implies that the far-field behavior of $\partial_z\tilde\mu_1$ is zero, and thus $b_0 = 0$.
To determine $b_1(\mathbf{x},t)$ in (1.8) we first rearrange the equation slightly and substitute in $b_0 = 0$, yielding
\[
b_1 + \kappa\,U'(z) = F''(U)\tilde u_1 - \partial_{zz}\tilde u_1.
\]
We now multiply this by $U'$ and integrate from $-\infty$ to $\infty$. This gives us
\[
\int_{-\infty}^{\infty}\left(b_1 U' + \kappa U'^2\right) dz = \int_{-\infty}^{\infty}\left[F''(U)U'\tilde u_1 - U'\partial_{zz}\tilde u_1\right] dz. \qquad (1.12)
\]
Upon integrating the second term on the right-hand side of the expression by parts twice, we see that
\[
\int_{-\infty}^{\infty} U'\partial_{zz}\tilde u_1 \, dz
= U'\,\partial_z\tilde u_1\Big|_{-\infty}^{\infty} - U''\,\tilde u_1\Big|_{-\infty}^{\infty} + \int_{-\infty}^{\infty} U'''\,\tilde u_1 \, dz.
\]
The boundary terms vanish because $U$ is the equilibrium planar front solution, so its derivatives in the far field are zero. Substituting this information into (1.12), we have
\[
\int_{-\infty}^{\infty}\left(b_1 U' + \kappa U'^2\right) dz = \int_{-\infty}^{\infty}\left[F''(U)U' - U'''\right]\tilde u_1 \, dz. \qquad (1.13)
\]
Now consider taking a partial derivative with respect to $z$ of equation (1.5). This tells us that
\[
F''(U)U' - U''' = 0.
\]
Combining the above information with (1.13), we determine that
\[
b_1\int_{-\infty}^{\infty} U'\, dz = -\kappa(\mathbf{x},\tau)\int_{-\infty}^{\infty} U'^2 \, dz,
\]
which yields
\[
b_1 = -\frac{\kappa(\mathbf{x},\tau)\,S}{[U]}, \qquad \text{with} \quad S = \int_{-\infty}^{\infty} U'^2\, dz, \quad [U] = U(\infty) - U(-\infty).
\]
From (1.8) and the fact that $b_0 = 0$ we know that $b_1 = \tilde\mu_1$. Using this as the matching condition to the outer equation for the chemical potential, we have arrived at a free boundary Mullins-Sekerka problem that describes the late-stage dynamics of the Cahn-Hilliard equation and, as we will soon see, Ostwald ripening. The system is formulated as
\[
\Delta\mu_1 = 0, \quad \mathbf{x}\in\Omega^+ \text{ and } \Omega^-; \qquad
\mu_1 = -\frac{\kappa(\mathbf{x},\tau)\,S}{[U]}, \quad \mathbf{x}\in\Gamma; \qquad
\mathbf{n}\cdot\nabla\mu_1 = 0, \quad \mathbf{x}\in\partial\Omega. \qquad (1.14)
\]
The physical interpretation is that on this time scale ($t = O(\epsilon^{-1})$) the chemical potential is in equilibrium and we have steady-state diffusion. The Gibbs-Thomson condition appears on the interface, relating the mean curvature to the chemical potential. Finally, we derive how the front evolves in time by looking at equation (1.7c).
Since $\tilde\mu_0$ and $\tilde\mu_1$ are constant with respect to $\mathbf{x}$ and $z$ respectively, the equation reduces to
\[
V\partial_z\tilde u_0 = \partial_{zz}\tilde\mu_2.
\]
Upon integrating both sides of the equation from $z = -\infty$ to $z = \infty$ we have
\[
V = [U]^{-1}\,\partial_z\tilde\mu_2\Big|_{-\infty}^{\infty}. \qquad (1.15)
\]
We can substitute in the matching condition for $\tilde\mu_2$ from (1.11c) after taking its derivative with respect to $z$. This tells us that
\[
\mathbf{n}\cdot\nabla\mu_1^\pm(\mathbf{x},\tau) = \partial_z\tilde\mu_2(z,\mathbf{x},\tau) \quad \text{as } z\to\pm\infty,
\]
which, when substituted into (1.15), yields
\[
V(\mathbf{x},\tau) = \frac{\left[\mathbf{n}\cdot\nabla\mu_1(\mathbf{x},\tau)\right]^\pm}{[U]}.
\]
Note that a very similar calculation can be done on the $O(1)$ time scale, which shows the front velocity to be zero at that order. In that case $V$ is related to the jump in the normal derivative of $\mu_0$ instead of $\mu_1$; since we have already shown $\mu_0 = f_m$, it is easy to see that the normal derivative must be zero.

In summary, the late-stage dynamics characterizing the coarsening phenomena for the Cahn-Hilliard equation are given by the limiting system (1.14).

1.2 Chemotaxis

Chemotaxis is a biological phenomenon in which organisms move about their environment due to the presence of a chemical concentration. Bacteria such as E. coli move seemingly randomly, switching between periods of swimming, characterized by aligned flagella (arm-like appendages) resulting in straight motion, and tumbling, where the flagella spread out and the bacterium rotates into a new direction. In the presence of an attractive chemical gradient, E. coli will swim in this direction and tumble when realignment with the gradient is necessary. They preferentially move along increasing amounts of this chemical attractant (see [3]). Sperm cells also exhibit chemotactic behavior, moving towards the oocyte due to a compound released from its outer wall. Chemotaxis need not be attractive; some organisms sense poisonous molecules (such as carbolic acid for E.
coli) and direct their movement away.

Aggregation of cells had been observed since the early use of microscopes, but it wasn't until 1881, with the work of Theodor Engelmann, and later in 1884, with the work of Wilhelm Pfeffer, that an accurate description of chemotaxis was made. By the 1930s the significance of chemotaxis was widely accepted. Since then, much progress has been made, advancing the field not only biologically but also mathematically.

The mathematical study of chemotaxis dates back to the 1950s with work by Patlak in [22] and later in the 1970s with the work of Keller and Segel [15]. Although there have been other attempts, the Patlak-Keller-Segel (PKS) model is the prevailing one. This is due to the intuitive nature of the equations, their tractability, and their clear representation of sought-after phenomena. The derivation is reasonable and clear; the assumptions made are both biologically and mathematically logical. There is a strong connection between the source of the assumptions and how they manifest in the equations. The general model is
\[ u_t = \nabla\cdot\left(k_1(u,v)\nabla u - k_2(u,v)\,u\nabla v\right) + k_3(u,v), \]
\[ v_t = D_v\Delta v + k_4(u,v) - k_5(u,v)\,v, \]
where $u$ is the organism concentration/density in a domain $\Omega\subset\mathbb{R}^n$ and $v$ is the concentration of the chemical signal. $k_1$ represents the diffusivity of the organism modeled by $u$ and $k_2$ is the chemotactic sensitivity: as the gradient of $v$ increases, advection of $u$ increases. $k_3$ is a cell kinematic term for $u$, measuring growth and death. $k_4$ and $k_5$ are chemical signal kinematics, representing production and degradation. We see that $v$ obeys a basic diffusion equation with an additional growth/death factor, while $u$ is slightly different, involving diffusion and advection.

The original model, seen in [22] and later in [15], referred to as the 'minimal model' in [12], is
\[ u_t = \nabla\cdot\left(D_u\nabla u - \chi_u u\nabla v\right); \qquad v_t = D_v\Delta v + \mu u - \delta v, \]
with zero-flux boundary conditions imposed on the outer walls of the domain $\Omega$ and some specified initial concentration.
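As a concrete illustration, the minimal model can be integrated directly. The sketch below uses an explicit, conservative finite-difference scheme in 1D with the chemical equation in nondimensionalized form ($v_t = v_{xx} + u - v$); the grid, time step, parameter values, and initial data are all illustrative choices, not prescribed by the text. It verifies that the zero-flux walls conserve the total cell mass.

```python
import numpy as np

# 1D minimal model: u_t = (D u_x - chi u v_x)_x,  v_t = v_xx + u - v,
# with zero-flux boundaries. All numerical choices here are illustrative.
D, chi = 0.1, 5.0
n, L, dt = 201, 1.0, 1.0e-6
x = np.linspace(0.0, L, n)
h = x[1] - x[0]

u = np.ones(n)                          # homogeneous cell density
v = 1.0 + 0.1*np.exp(-10.0*x**2)        # small bump in the chemical signal

def step(u, v):
    # flux D u_x - chi u v_x evaluated at the n-1 cell interfaces
    F = D*np.diff(u)/h - chi*0.5*(u[:-1] + u[1:])*np.diff(v)/h
    F = np.concatenate(([0.0], F, [0.0]))        # zero flux at the walls
    u_new = u + dt*np.diff(F)/h                  # conservative update
    vp = np.concatenate(([v[1]], v, [v[-2]]))    # Neumann ghost points
    v_new = v + dt*((vp[2:] - 2.0*v + vp[:-2])/h**2 + u - v)
    return u_new, v_new

mass0 = h*u.sum()
for _ in range(2000):
    u, v = step(u, v)

assert abs(h*u.sum() - mass0) < 1e-8    # total cell mass is conserved
assert u.min() > 0.0
```

The flux form of the update is what makes the discrete mass identity exact: interior fluxes telescope in the sum, and the wall fluxes are set to zero, mirroring the continuous argument that follows.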
Diffusivity ($D_u$, $D_v$), chemotactic sensitivity ($\chi_u$), and chemical kinematic factors ($\mu$, $\delta$) are assumed to be constant. Assuming that cell kinematics occur on a much larger time scale, or possibly do not occur during aggregation, we can also ignore $k_3$. This is a reasonable assumption in many cases (see [12]). For the chemical signal, the kinematics are taken to be linear. The model also satisfies an additional condition, namely mass conservation:
\[ \frac{\partial}{\partial t}\int_\Omega u\,d\mathbf{x} = \int_\Omega u_t\,d\mathbf{x} = \int_\Omega \nabla\cdot\left(D_u\nabla u - \chi_u u\nabla v\right)d\mathbf{x} = \int_{\partial\Omega}\left(D_u\nabla u - \chi_u u\nabla v\right)\cdot\mathbf{n}\,dS = 0, \]
due to the zero-flux boundary condition. Thus, we see that mass is conserved, as
\[ \int_\Omega u(\mathbf{x},t)\,d\mathbf{x} = \int_\Omega u_0(\mathbf{x})\,d\mathbf{x} = M. \]
The model can be nondimensionalized to have a homogeneous steady state such that $(u_s, v_s) = (1,1)$ and $M = 1$.

During the 1990s and early 2000s a series of papers (see [14]) proved that the minimal model has a major deficiency. In 2D and 3D domains, global existence depends on a threshold level of initial mass. If there is too much mass in the system we see finite time blow-up: the cells in the model aggregate to a spot with infinite density, which is obviously not biologically feasible. Since, for a typical application, the initial mass lies above this threshold, regularizations of the model are necessary. Initial aggregation is only one small part of chemotaxis, and thus there is a need for a model that allows for longer time scales without blow-up. We present a few of these regularizations but focus specifically on the volume-filling case in this paper.

1.2.1 Literature Review

Consider the simplified system of PDEs
\[ u_t = \nabla\cdot\left(D(u)\nabla u - A(u)B(v)\nabla v\right) + f(u), \qquad v_t = \Delta v + g(u)\,u - v. \]
Below is a table outlining the choices of functions for each of the regularizations considered.

Model                          D(u)     A(u)           B(v)               f(u)        g(u)
Minimal                        D        u              χ                  0           1
Nonlinear Diffusion            Du^n     u              χ                  0           1
Volume Filling                 D        u(1 − u/γ)     χ                  0           1
Signal-Dependent Sensitivity   D        u              χ/(1 + αv)²        0           1
Cell Kinematics                D        u              χ                  ru(1 − u)   1
Saturated Signal Production    D        u              χ                  0           1/(1 + φu)

This list is certainly not exhaustive.
There are more covered in [12] and even more that they mention as being left out. These excluded variations have such properties as nonlinear gradient functions, nonlocal terms, or even zero diffusion.

We will now work through the details of each model individually, explaining the reasoning behind the regularization and the impact it has on the behavior.

The Minimal Model

For completeness and ease of comparison, we present the details of the minimal model as well as its derivation. We consider the movement of each cell as modeled by a biased random walk, where the bias is due to the chemotactic gradient. This allows for an easier transition from microscopic to macroscopic behavior.

Consider an infinite 1D lattice where particles may instantaneously jump a constant distance $h$ to the left or the right. Assuming the particles do not interact directly, we have the following continuous-time, discrete-space system for the particle density $u(x,t)$:
\[ \frac{\partial u(x,t)}{\partial t} = T^+_{x-h}u(x-h,t) + T^-_{x+h}u(x+h,t) - \left(T^+_x + T^-_x\right)u(x,t). \]
$T^\pm_x$ is a probability function that gives a chance per unit time for a particle at $x$ to make a jump to $x \pm h$. In the model, at each time step, we have possible contributions from the points to the left and the right, $x-h$ and $x+h$, respectively. This is represented by $T^+_{x-h}$ and $T^-_{x+h}$. There is also loss from the current position as particles jump away, which is indicated by $T^+_x$ and $T^-_x$.

As stated before, the chemotactic sensitivity is incorporated through the bias in the random walk, which we see in the $T$ function. We want to bias the jump with regard to a local spatial gradient of $v$, which we do by specifying $T^\pm_x$ as
\[ T^\pm_x = a + b\left(v(x\pm h, t) - v(x,t)\right), \]
where $a$ and $b$ are positive constants. The probability to jump in a given direction increases if the concentration of the chemical is larger in that direction.
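As a check on the algebra in the expansion that follows, the sketch below (sympy, with arbitrary polynomial test profiles standing in for $u$ and $v$, and illustrative values for $a$ and $b$) confirms that the right-hand side of this master equation reduces to $h^2\left(a u_x - 2b\,u v_x\right)_x + O(h^4)$.

```python
import sympy as sp

# Verify the Taylor expansion of the biased random-walk master equation.
x, h = sp.symbols('x h')
a, b = sp.Rational(1, 2), sp.Rational(1, 3)   # arbitrary positive constants
u = x**4 + 3*x                                # arbitrary smooth test profiles
v = x**3 - x

# T(s0 -> s1) = a + b (v(s1) - v(s0)): jump rate from site s0 toward s1
T = lambda s0, s1: a + b*(v.subs(x, s1) - v.subs(x, s0))

# gain from x-h and x+h, loss from x
rhs = (T(x - h, x)*u.subs(x, x - h)
       + T(x + h, x)*u.subs(x, x + h)
       - (T(x, x + h) + T(x, x - h))*u)

expansion = sp.series(rhs, h, 0, 4).removeO()  # keep terms below O(h^4)
target = h**2*sp.diff(a*sp.diff(u, x) - 2*b*u*sp.diff(v, x), x)
assert sp.expand(expansion - target) == 0      # h^0, h^1, h^3 terms all vanish
```

The $h^0$ and $h^1$ terms cancel by the symmetry of the gain and loss contributions, which is why the leading behavior appears at $O(h^2)$.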
Substituting this into the equation and performing a Taylor expansion of the right-hand side, we obtain
\[ u_t = h^2\left(a u_x - 2b\,u v_x\right)_x + O(h^4). \]
Now we scale time by the transformation $t = \lambda\hat{t}$. This results in
\[ u_{\hat{t}} = \lambda h^2\left(a u_x - 2b\,u v_x\right)_x + O(h^4), \]
where we assume the limits $\lim_{h\to 0,\,\lambda\to\infty} a\lambda h^2 = D_u$ and $\lim_{h\to 0,\,\lambda\to\infty} 2b\lambda h^2 = \chi_u$, which results in a continuous space and time equation. Dropping the hat for convenience, we arrive at
\[ u_t = \left(D_u u_x - \chi_u u v_x\right)_x. \]
We assume the equation for $v$ is a standard diffusion equation with linear kinetics. Thus, in higher dimensions we have the following system of PDEs with zero-flux boundary conditions on the outer wall of the domain $\Omega$:
\[ u_t = \nabla\cdot\left(D_u\nabla u - \chi_u u\nabla v\right); \qquad v_t = D_v\Delta v + \mu u - \delta v. \]
As mentioned previously, the minimal model can be nondimensionalized to have mass $M$ equal to 1 and the number of parameters reduced. In one dimension we have global existence of solutions, which was recently proved in [20]. In two dimensions the model exhibits finite time blow-up when the initial mass is over some threshold level, which is why we employ these regularizations. This result can be found in [14].

Nonlinear Diffusion

The nonlinear diffusion model considered is
\[ u_t = \nabla\cdot\left(Du^n\nabla u - \chi u\nabla v\right), \]
with the equation for $v$ given as before. Thus, the only change is the nonlinear dependence on cell density in the diffusion term. As stated in [12], this dependence is neglected in cell movement but crops up in ecological applications such as population-induced insect movement. Hillen and Painter suggest that, although most applications assume a constant diffusion coefficient, a nonlinear dependence is much more likely. This specific case, involving $u^n$, was studied by Kowalczyk in [16] and Eberl in [9] as a model for biofilm growth.

Much like the minimal model, after nondimensionalization the nonlinear diffusion model has a homogeneous steady state of $(u, v) = (1, 1)$. Kowalczyk proved in [16] that this model has global existence of solutions in $n$ dimensions.
The Lyapunov function
\[ \mathcal{L}_{\text{nonlinear}}(u) = \frac{Dn}{(n+1)\chi}\,u^{n+1} \]
is used for this result.

Volume Filling

The effects of volume filling are explored in [11]. Since this model is of particular importance to this paper we will go through the derivation of the volume effects; that is, how and at what stage of the model they crop up. Higher concentrations of cells should inhibit the ability of other cells to move into a given region, creating a hard cap on the density in a given area. This is modeled by a function $q(u)$ that gives the probability of finding space based on the local density $u$. We want $q$ to be a positive and decreasing function. One such choice is $q(u) = 1 - u/\gamma$, $\gamma > 1$, with $u$ having a maximum density of $\gamma$. Now the jump function $T$ becomes
\[ T^\pm_x = q(u(x\pm h))\left(a + b\left(v(x\pm h) - v(x)\right)\right). \]
Following the same steps as before, we arrive at
\[ u_t = \nabla\cdot\left(D(q - uq_u)\nabla u - \chi u q(u)\nabla v\right); \qquad v_t = D_v\Delta v + \mu u - \delta v. \]
With $q(u) = 1 - u/\gamma$ we have $q - uq_u = 1 - u/\gamma + u/\gamma = 1$, so the equation for $u$ reduces to
\[ u_t = \nabla\cdot\left(D\nabla u - \chi u\left(1 - \frac{u}{\gamma}\right)\nabla v\right). \]
The volume filling approach is intuitive and easy to implement. Global existence is achieved, as expected, and was proved in [10, 30] using the Lyapunov function
\[ \mathcal{L}_{\text{volume}}(u) = \frac{D}{\chi}\left(u\log(u) + \gamma\left(1 - \frac{u}{\gamma}\right)\log\left(1 - \frac{u}{\gamma}\right)\right). \]

Signal-Dependent Sensitivity

The signal-dependent sensitivity model, also referred to as the receptor model, is
\[ u_t = \nabla\cdot\left(D\nabla u - \frac{\chi u}{(1+\alpha v)^2}\nabla v\right), \]
with $v$ given as before (diffusion and linear kinetics).

The model is motivated by in vivo experimentation and observation. Chemotactic response is controlled first by signal detection, through binding of the chemical to exterior receptors, followed by transduction, where a change takes place in the internal receptor component. This not only causes a change in movement but may also affect receptor production and degradation.
We want to build these effects into the model and thus add a signal-dependent sensitivity function, specifically the receptor form
\[ B(v) = \frac{\chi}{(1+\alpha v)^2}. \]
At high concentrations of $v$ we see that $B(v)$ decreases, indicating that, in a sense, the receptors on a cell are full and can no longer impact movement. This is only one such model for signal-dependent sensitivity, and there are others that employ slightly different assumptions. Global existence of solutions for this model was proved only very recently in [29].

Cell Kinematics

As stated previously, it is occasionally reasonable to ignore cell growth and death. Dictyostelium discoideum, for instance, halts cell proliferation during aggregation stages. In other cases the movement takes place on a much faster time scale than growth, also allowing us to ignore those additional terms. This is not always the case, and for most bacteria these time scales match up. Using a standard logistic growth model, we derive a cell kinematics model given by
\[ u_t = \nabla\cdot\left(D\nabla u - \chi u\nabla v\right) + ru(1-u). \]
Global existence of solutions for the cell kinematic model in $n$ dimensions is proved in [30].

Saturated Signal Production

The basis for saturated signal production comes from what was mentioned previously about receptor binding and saturation effects. Since, biologically, a receptor binding to a site on the cell has other effects, not limited to movement but also influencing signal production, the model is changed accordingly. A saturated signal production model tries to correct the overly simplistic assumption of linear kinetics through the term
\[ g(u) = \frac{1}{1+\phi u}, \]
which decays as $u\to\infty$. Thus the production of the chemical signal saturates and slows to a halt as cell density increases. The resulting model is
\[ u_t = \nabla\cdot\left(D\nabla u - \chi u\nabla v\right); \qquad v_t = \Delta v + \frac{u}{1+\phi u} - v. \]
Global existence of solutions in $n$ dimensions was proved in [13].
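The saturation claim can be made precise: total production $u\,g(u) = u/(1+\phi u)$ is increasing in $u$ but bounded by $1/\phi$. A quick symbolic check:

```python
import sympy as sp

# Production term u g(u) = u/(1 + phi u) for the saturated-signal model.
u, phi = sp.symbols('u phi', positive=True)
production = u/(1 + phi*u)

# monotonically increasing in u ...
assert sp.simplify(sp.diff(production, u) - 1/(1 + phi*u)**2) == 0
# ... and saturating at 1/phi as the cell density grows
assert sp.limit(production, u, sp.oo) == 1/phi
```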
Thus we have global existence for all models considered, with the exception of the minimal model.

Stability

The survey by Hillen and Painter ([12]) gives a brief rundown of one-dimensional linear analysis, which we summarize here. We assume the model in question has a spatially homogeneous steady state solution $(\bar{u}, \bar{v})$ and linearize around this solution, which gives us
\[ U_t = D(\bar{u})U_{xx} - A(\bar{u})B(\bar{v})V_{xx} + f'(\bar{u})U, \]
\[ V_t = V_{xx} + \left(g(\bar{u}) + \bar{u}g'(\bar{u})\right)U - V. \]
We now define $\bar{D} = D(\bar{u})$, $\bar{A} = A(\bar{u})$, $\bar{B} = B(\bar{v})$, $\bar{f} = f(\bar{u})$, $\bar{g} = g(\bar{u})$. We take $U$ and $V$ to be the typical small perturbation functions to derive the stability matrix whose eigenvalues tell us the stability of the homogeneous solution:
\[ M = \begin{pmatrix} -k^2\bar{D} + \bar{f}' & k^2\bar{A}\bar{B} \\ \bar{g} + \bar{u}\bar{g}' & -k^2 - 1 \end{pmatrix}. \]
On a closed interval $[0, L]$ with Neumann boundary conditions we have $k = n\pi/L$, $n = 0, 1, 2, \ldots$. If an eigenvalue has positive real part then the homogeneous solution is unstable and, owing to the global existence of solutions for each model in one dimension, we expect pattern formation. This gives us the following necessary conditions for instability in each of the models considered:

Model                          Necessary Condition      Unstable Modes k
Minimal                        χ > D                    k² < χ/D − 1
Nonlinear Diffusion            χ > D                    k² < χ/D − 1
Volume Filling                 χ(1 − 1/γ) > D           k² < χ(1 − 1/γ)/D − 1
Signal-Dependent Sensitivity   χ > D(1 + α)²            k² < χ/(D(1 + α)²) − 1
Cell Kinematics                χ > (√D + √r)²           k² ∈ (k₁, k₂)
Saturated Signal Production    χ > D(1 + φ)²            k² < χ/(D(1 + φ)²) − 1

where $k_{1,2} = \frac{\chi - D - r}{2D} \pm \frac{1}{2D}\sqrt{(D + r - \chi)^2 - 4rD}$. We see that in all models considered the spatially homogeneous solution has a region of instability, and patterns may develop.

Numerical Results

In this section we present numerical results from [12] for the models considered and additional results from [11] for the volume-filling model. This is to show the many different solutions that can arise.
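Before turning to the published simulations, the instability thresholds in the table above can be sanity-checked directly from the matrix $M$. The sketch below does this numerically for the minimal and volume-filling rows (where $f = 0$ and $g = 1$, so $\bar f' = 0$ and $\bar g + \bar u\bar g' = 1$ at the steady state $\bar u = 1$); the parameter values are illustrative choices.

```python
import numpy as np

def growth_rate(k2, Dbar, ABbar):
    # largest real part of the eigenvalues of the stability matrix M
    # with f' = 0 and g + u g' = 1
    M = np.array([[-k2*Dbar, k2*ABbar],
                  [1.0,      -k2 - 1.0]])
    return np.linalg.eigvals(M).real.max()

D, chi, gamma = 0.1, 5.0, 10.0

# minimal model: unstable exactly for k^2 < chi/D - 1
thr = chi/D - 1.0
for k2 in np.linspace(0.01, 2.0*thr, 200):
    assert (growth_rate(k2, D, chi) > 0) == (k2 < thr)

# volume filling: A(u)B(v) at the steady state u = 1 is chi (1 - 1/gamma)
chieff = chi*(1.0 - 1.0/gamma)
thr = chieff/D - 1.0
for k2 in np.linspace(0.01, 2.0*thr, 200):
    assert (growth_rate(k2, D, chieff) > 0) == (k2 < thr)
```

Since the trace of $M$ is negative here, instability is equivalent to $\det M < 0$, which reproduces the $k^2 < \chi/D - 1$ condition in the table.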
We provide extra information on the volume-filling case to give evidence of coarsening, as well as comment on the parameter values that bring about these long transients.

In Figure 1.2 we see the myriad solutions that arise not only from the different ways the model was regularized but also from the different parameter values inherent in each of these regularizations. The asterisk indicates, as in [11], a "plateau" solution. We see that aggregation, when it transpires, can occur in many different ways, although qualitatively the results behave similarly (grouping around a high concentration of the chemical signal). Figure 1.3 is similar to 1.2 but depicts the solution over longer time scales. Finally, Figures 1.4 through 1.6 show coarsening effects for select regularizations.

This concludes the literature review and introduction. Next, we move on to a more in-depth look at Ostwald ripening, where new results are presented.

Figure 1.2: 1D simulations for the models considered. The minimal model is given at different times, whereas the others have varying relevant parameters. The solid line is the adapted model, whereas the dot-dash line is the minimal model steady state solution. In order, the models are: minimal, signal-dependent sensitivity, volume-filling, non-linear diffusion, saturating signal production, and cell kinetic. D = .1, χ = 5. ICs: u(x, 0) = 1, v(x, 0) = 1 + .1 exp(−10x²).

Figure 1.3: One-dimensional numerical simulations for the models considered, now on a larger domain. Models appear in the same order as the previous figure. D is given to be .1 and χ = 2. Initial conditions are set at u(x, 0) = 1, v(x, 0) = 1 + r(x), where r(x) is a random spatial perturbation of the steady state, −.01 ≤ r(x) ≤ .01. We see that in most cases a coarsening process occurs and what initially started as a multi-peak structure evolves into fewer and fewer regions of concentration. This is not the case in the cell kinematic regularization, as we see a continual merging of cells and decay of peaks.

Figure 1.4: Here is an example of a 2D simulation. This is a time evolution of a combination of a few regularizations, namely signal-dependent sensitivity (α = .5), volume-filling (γ = 10), non-linear diffusion (n = 1), and saturating chemical production (φ = 1). D = .1, χ = 5, Ω = [0, 20] × [0, 20]. ICs: u(x, y, 0) = 1, v(x, y, 0) = 1 + r(x, y), where r(x, y) is the 2D analog from the previous figure. We once again see a coarsening process as time goes on.

Figure 1.5: A 1D time evolution picture that shows coarsening occurring on a log time scale for the volume-filling model. In this case, the initial cell density is set at .5 with D = .25 and χ = 4.

Figure 1.6: Coarsening of the volume-filling model. The top row has an initial concentration of u = .5 and the bottom has u = .25. As above, D = .25, χ = 4. Here the domain is Ω = [0, 25] × [0, 25]. In this picture black is low cell density and white is high cell density.

Chapter 2

Ostwald Ripening

We begin this chapter with a deeper look at Ostwald ripening. From the introduction, we left off with the free boundary Mullins-Sekerka problem seen in (1.14).
The problem has been rewritten slightly as
\[
\Delta u = 0, \quad x \in \Omega \setminus \Gamma, \qquad
u = -\kappa, \quad x \in \Gamma; \qquad
\frac{\partial u}{\partial n} = 0, \quad x \in \partial\Omega, \qquad
V = -\left[\frac{\partial u}{\partial n}\right], \quad x \in \Gamma. \tag{2.1}
\]
Here, the variable has been changed to $u$ and has been scaled so that only $\kappa$, the mean curvature, appears on the interface.

As mentioned in the introduction, the next step is to draw the connection between this and the equations that govern Ostwald ripening. There are many ways to do this, one of which can be seen in [2]. Instead, we will present a matched asymptotic approach to arrive at a system of ODEs that describes the motion of each drop.

2.1 Mullins-Sekerka to Ostwald Ripening Dynamics

We start by considering an arbitrary domain $\Omega \subset \mathbb{R}^2$ with a collection of $N$ small circular droplets inside the domain whose boundaries are described by $\Gamma_i = \{x : |x - x_i| = r_i\}$ for $i = 1, 2, \ldots, N$, where $r_i$ is the radius of the droplet and $x_i$ the center. The interface between the two regions, $\Gamma$, is given as $\Gamma = \bigcup \Gamma_i$. Since we assume the droplets are small, $r_i = \epsilon \rho_i$ for $\epsilon \ll 1$ and $\rho_i = O(1)$. The droplets are expected to move in time, so $r_i$ and $\rho_i$ depend upon $t$. Since the interfacial velocity $V$ is negative for a shrinking drop by convention,
\[
V = -\frac{dr_i}{dt} = -\epsilon\frac{d\rho_i}{dt},
\]
for a given $i$. We also know the mean curvature $\kappa$ for a circular drop is given by
\[
\kappa = \frac{1}{r_i} = \frac{1}{\epsilon\rho_i}.
\]
Inside each drop we know that $u^- = 1/r_i(0)$, a constant, which means that $\partial_n u^- = 0$ on $\Gamma_i$. This tells us
\[
\left[\frac{\partial u}{\partial n}\right] = \frac{\partial u}{\partial n}\Big|_{+} - 0 = \frac{\partial u}{\partial r}\Big|_{r = r_i} = \frac{1}{\epsilon}\frac{\partial u}{\partial \rho}\Big|_{\rho = \rho_i}.
\]
We know from (2.1) that the jump in the normal derivative given above is also equal to the velocity $V$. Therefore,
\[
\epsilon^2\frac{d\rho_i}{dt} = \frac{\partial u}{\partial \rho}\Big|_{\rho = \rho_i}.
\]
Collecting this information, the problem that we need to solve can be rewritten as
\[
\Delta u = 0, \quad x \in \Omega \setminus \Gamma, \qquad
u = \frac{1}{\epsilon\rho_i}, \quad x \in \Gamma_i; \qquad
\frac{\partial u}{\partial n} = 0, \quad x \in \partial\Omega,
\]
with the droplet dynamics satisfying
\[
\epsilon^2\frac{d\rho_i}{dt} = \frac{\partial u}{\partial \rho}\Big|_{\rho = \rho_i}.
\]
Determining $u$ means we obtain a system of ODEs that describes the motion of each drop throughout the domain. We can scale $u$ by $1/\epsilon$ to simplify the problem and remove the $\epsilon$ that appears on the boundary of each interface.
Note that this also causes an additional power of $\epsilon$ to appear in our system of ODEs.

Proceeding with the asymptotic analysis, we expand $u$ as
\[
u = u_0 + \nu u_1 + \nu^2 u_2 + \ldots,
\]
where $u_0$ is readily seen to be an unknown constant that is determined by a solvability condition in a later expansion, and $\nu$ is the order of the subsequent terms, also to be determined. For $u_1$, the problem to solve is
\[
\Delta u_1 = 0, \quad x \in \Omega \setminus \{x_1, \ldots, x_N\}, \qquad
\frac{\partial u_1}{\partial n} = 0, \quad x \in \partial\Omega; \qquad
u_1 \sim \,?, \quad \text{as } x \to x_j, \text{ for } j = 1, \ldots, N. \tag{2.2}
\]
To figure out this singularity condition as $x \to x_j$, we move to an inner expansion. We define $y = \epsilon^{-1}(x - x_j)$ near the $j$th droplet and let $U$ be the inner $u$ function. Now, we have
\[
U = U_0 + \nu U_1 + \ldots
\]
As with all matched asymptotic expansions, we require that the outer and inner solutions agree as $x \to x_j$ and as $|y| \to \infty$, respectively. So we must have $U_0 \to u_0$ as $|y| \to \infty$. The first two inner equations to solve are
\[
\Delta_y U_0 = 0, \quad y \notin \Gamma_j, \qquad
U_0 = \frac{1}{\rho_j}, \quad y \in \Gamma_j; \qquad
U_0 \sim u_0, \quad \text{as } |y| \to \infty, \tag{2.3}
\]
\[
\Delta_y U_1 = 0, \quad y \notin \Gamma_j, \qquad
U_1 = 0, \quad y \in \Gamma_j; \qquad
U_1 \sim \,?, \quad \text{as } |y| \to \infty. \tag{2.4}
\]
Both equations can be solved by switching to polar coordinates and looking for a radially symmetric solution. With $\rho = |y|$, the corresponding solutions are
\[
U_0 = \frac{1}{\rho_j} + B_{0,j}\big(\log \rho - \log \rho_j\big), \tag{2.5a}
\]
\[
U_1 = B_{1,j}\big(\log \rho - \log \rho_j\big), \tag{2.5b}
\]
for constants $B_{0,j}$ and $B_{1,j}$. To determine $B_{0,j}$ and $B_{1,j}$, we look to the matching condition for the inner and outer solutions. Near the $j$th droplet, we must have (as $x \to x_j$ and as $|y| \to \infty$)
\[
u_0 + \nu u_1 + \nu^2 u_2 \sim U_0 + \nu U_1 + \ldots
\]
Substituting in for $U_0$ and $U_1$ from (2.5) and switching to a common variable, we see that
\[
u_0 + \nu u_1 + \nu^2 u_2 \sim \frac{1}{\rho_j} + B_{0,j}\big(\log|x - x_j| - \log\epsilon - \log\rho_j\big) + \nu B_{1,j}\big(\log|x - x_j| - \log\epsilon - \log\rho_j\big) + \ldots
\]
In order to keep the orders of the right-hand side and the left-hand side consistent, the $\log\epsilon$ term must vanish. This tells us that $B_{i,j} = -C_{ij}/\log\epsilon$ for some constant $C_{ij}$, where $j$ corresponds to the droplet in question. This also specifies $\nu = -1/\log\epsilon$. Rearranging terms, we now have
\[
u_0 + \nu u_1 \sim \frac{1}{\rho_j} + C_{0j} + \nu\big[C_{0j}(\log|x - x_j| - \log\rho_j) + C_{1j}\big] + O(\nu^2).
\]
Since $u_0 = 1/\rho_j + C_{0j}$, we see that $C_{0j} = u_0 - 1/\rho_j$, which tells us the singularity condition for $u_1$: specifically, $u_1 \sim (u_0 - 1/\rho_j)\log|x - x_j|$ as $x \to x_j$, for $j = 1, \ldots, N$, since there are no other terms to match at $O(\nu)$. We can substitute this into equation (2.2) and rewrite it as
\[
\Delta u_1 = 2\pi\sum_{j=1}^{N}\Big(u_0 - \frac{1}{\rho_j}\Big)\delta(x - x_j), \quad x \in \Omega, \qquad
\frac{\partial u_1}{\partial n} = 0, \quad x \in \partial\Omega.
\]
Here, $u_0$ is still undetermined. We use the divergence theorem to reveal that
\[
u_0 N = \sum_{j=1}^{N}\frac{1}{\rho_j},
\]
and thus
\[
u_0 = \frac{1}{N}\sum_{j=1}^{N}\frac{1}{\rho_j} = \frac{1}{\rho_{\mathrm{harm}}},
\]
where $\rho_{\mathrm{harm}}$ is the harmonic mean. This also tells us that $C_{0j} = 1/\rho_{\mathrm{harm}} - 1/\rho_j$.

At this point we can conclude with a leading order expansion for the dynamics of each droplet and the resulting system of ODEs. We substitute $B_{0,j} = \nu C_{0j}$ into (2.5a) and take a derivative with respect to $\rho$ to determine that
\[
\frac{\partial U_0}{\partial \rho} = \nu\Big[\frac{1}{\rho_{\mathrm{harm}}} - \frac{1}{\rho_j}\Big]\frac{1}{\rho},
\]
and since, to leading order, $u \sim U_0$ as $x \to x_j$ ($\rho \to \rho_j$), we have
\[
\frac{d\rho_j}{dt} = \frac{1}{\epsilon^3}\frac{\partial u}{\partial \rho}\Big|_{\rho = \rho_j} \approx \frac{\nu}{\epsilon^3}\Big[\frac{1}{\rho_{\mathrm{harm}}} - \frac{1}{\rho_j}\Big]\frac{1}{\rho_j}, \qquad j = 1, \ldots, N.
\]

Principal Result 2.1: Consider the 2-D free boundary Mullins-Sekerka problem given below:
\[
\Delta u = 0, \quad x \in \Omega \setminus \Gamma, \qquad
u = -\kappa, \quad x \in \Gamma; \qquad
\frac{\partial u}{\partial n} = 0, \quad x \in \partial\Omega, \qquad
V = -\left[\frac{\partial u}{\partial n}\right], \quad x \in \Gamma,
\]
where $u$ is the chemical potential, $\kappa$ the curvature and $V$ the interfacial velocity. In the case where $\Gamma$ is the boundary of a collection of small circles (radius $\sim O(\epsilon)$, $\epsilon \ll 1$), the following system of ODEs describes, to leading order, the coarsening phenomenon present:
\[
\frac{d\rho_j}{dt} = \frac{1}{\epsilon^3}\frac{\partial u}{\partial \rho}\Big|_{\rho = \rho_j} \approx \frac{\nu}{\epsilon^3}\Big[\frac{1}{\rho_{\mathrm{harm}}} - \frac{1}{\rho_j}\Big]\frac{1}{\rho_j}, \qquad j = 1, \ldots, N,
\]
where $\nu = -1/\log\epsilon$ and $\rho_{\mathrm{harm}}$ is the harmonic mean.

2.2 Properties of the System

Here we will derive a few properties of the system before moving on to a higher order expansion.
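Principal Result 2.1 is easy to explore numerically. The sketch below is not part of the thesis (the thesis uses MATLAB's ODE23s in Section 2.4); it is a minimal forward Euler integration in which the positive factor $\nu/\epsilon^3$ has been absorbed into a slow time variable, and the initial radii and step size are illustrative assumptions. It anticipates the conservation properties proved next.

```python
import numpy as np

def rhs(rho):
    # d(rho_j)/d(tau) = (1/rho_harm - 1/rho_j) / rho_j, where the positive
    # factor nu/eps^3 has been absorbed into the slow time tau (assumption)
    inv = 1.0 / rho
    inv_harm = inv.sum() / len(rho)      # 1 / (harmonic mean of the rho_j)
    return (inv_harm - inv) * inv

rho = np.array([0.5, 0.8, 1.0])          # illustrative initial radii
area0, perim0 = (rho ** 2).sum(), rho.sum()

dt = 1e-5
for _ in range(200000):                  # stop before the smallest drop vanishes
    if rho.min() <= 0.2:
        break
    rho = rho + dt * rhs(rho)

print((rho ** 2).sum() - area0)          # total area is conserved (~0)
print(rho.sum() - perim0)                # total perimeter has decreased (< 0)
```

The smallest droplet shrinks toward extinction while the largest grows, which is exactly the coarsening behavior established in Section 2.2.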
These properties are: (i) the system is area preserving, (ii) the total perimeter of the droplets is non-increasing, and (iii) the smallest droplet will vanish in finite time.

2.2.1 Area Preserving

Consider the area of a given droplet, $\pi\rho_i^2$, and the total area of the system, $\pi\sum_{i=1}^{N}\rho_i^2$, the sum of all the individual areas. We proceed by taking a derivative with respect to time of the total area:
\[
\frac{d}{dt}\sum_{i=1}^{N}\rho_i^2 = 2\sum_{i=1}^{N}\rho_i\rho_i' = 2\sum_{i=1}^{N}\rho_i\Big(\frac{1}{\rho_{\mathrm{harm}}} - \frac{1}{\rho_i}\Big)\frac{1}{\rho_i}\frac{\nu}{\epsilon^3}
= \frac{2\nu}{\epsilon^3}\Big(\sum_{i=1}^{N}\frac{1}{\rho_{\mathrm{harm}}} - \sum_{i=1}^{N}\frac{1}{\rho_i}\Big)
= \frac{2\nu}{\epsilon^3}\Big(\frac{N}{\rho_{\mathrm{harm}}} - \sum_{i=1}^{N}\frac{1}{\rho_i}\Big)
= \frac{2\nu}{\epsilon^3}\Big(N\,\frac{\sum_{i=1}^{N}\frac{1}{\rho_i}}{N} - \sum_{i=1}^{N}\frac{1}{\rho_i}\Big) = 0.
\]
As a result the total area of the system is unchanged and the droplets exchange mass with each other without any loss. The initial mass is the same as the mass at any future time.

2.2.2 Perimeter Reducing

Consider the perimeter of a given droplet, $2\pi\rho_i$. The total perimeter of the system is the sum of all the individual perimeters. We take a derivative with respect to time of the total perimeter to determine that
\[
\frac{d}{dt}\sum_{i=1}^{N}\rho_i = \frac{\nu}{\epsilon^3}\sum_{i=1}^{N}\Big(\frac{1}{\rho_i\rho_{\mathrm{harm}}} - \frac{1}{\rho_i^2}\Big)
= \frac{\nu}{\epsilon^3}\Big[\Big(\sum_{i=1}^{N}\frac{1}{\rho_i}\Big)\frac{1}{\rho_{\mathrm{harm}}} - \sum_{i=1}^{N}\frac{1}{\rho_i^2}\Big]
= \frac{\nu}{\epsilon^3}\Big[\frac{1}{N}\Big(\sum_{i=1}^{N}\frac{1}{\rho_i}\Big)^2 - \sum_{i=1}^{N}\frac{1}{\rho_i^2}\Big].
\]
Since $\nu/\epsilon^3$ is positive, it suffices to show that $\sum_{i=1}^{N}\alpha_i^2 \geq \big(\sum_{i=1}^{N}\alpha_i\big)^2/N$ (with $\alpha_i = 1/\rho_i$). This result means the derivative is negative, signifying that the perimeter decreases. We begin with
\[
0 \leq \sum_{i=1}^{N}\sum_{j=1}^{N}(\alpha_i - \alpha_j)^2 = \sum_{i=1}^{N}\sum_{j=1}^{N}\big(\alpha_i^2 + \alpha_j^2 - 2\alpha_i\alpha_j\big) = 2N\sum_{i=1}^{N}\alpha_i^2 - 2\Big(\sum_{i=1}^{N}\alpha_i\Big)^2,
\]
and thus
\[
N\sum_{i=1}^{N}\alpha_i^2 \geq \Big(\sum_{i=1}^{N}\alpha_i\Big)^2.
\]
The perimeter therefore decreases in time and the system moves to a configuration with a smaller total perimeter.

2.2.3 Finite Time Extinction

Here we wish to prove that, of the particles in the system, all but the largest go extinct in finite time. We are assuming that the timescale has been adjusted to remove $\nu/\epsilon^3$ on the right-hand side of the ODE system. To do this, we assume $\rho_1(0) \leq \rho_2(0) \leq \ldots \leq \rho_N(0)$ and show that for the smallest particle, $\rho_1$, there exists a time $T_1$ such that $\rho_1(T_1) = 0$ and
\[
\frac{\rho_1(0)^3}{3} \leq T_1 \leq \frac{N\rho_1(0)^3\rho_N(0)}{\rho_N(0) - \rho_1(0)}.
\]
This is sufficient because the calculation can then be repeated for the next smallest particle. Consider the system of ODEs derived in Principal Result 2.1:
\[
\frac{d\rho_1}{dt} = \frac{1}{\rho_1}\Big[\frac{1}{\rho_{\mathrm{harm}}} - \frac{1}{\rho_1}\Big] \geq -\frac{1}{\rho_1^2}, \tag{2.6}
\]
where the inequality arises from discarding the first term on the right. Now, we have
\[
\rho_1^2\frac{d\rho_1}{dt} \geq -1.
\]
If we separate the equation and integrate, we wind up with
\[
\rho_1(t)^3 \geq -3t + \rho_1(0)^3,
\]
and the right-hand side is non-negative for $t \leq \rho_1(0)^3/3$. Thus $\rho_1$ is greater than zero for all $t$ less than the value shown above, and the extinction time must satisfy $T_1 \geq \rho_1(0)^3/3$. For the upper bound, we again look to the system of ODEs. First, consider
\[
\frac{1}{\rho_{\mathrm{harm}}} - \frac{1}{\rho_1} = \frac{1}{N}\Big(\frac{1}{\rho_2} - \frac{1}{\rho_1} + \ldots + \frac{1}{\rho_N} - \frac{1}{\rho_1}\Big) \leq \frac{1}{N}\Big(\frac{1}{\rho_N} - \frac{1}{\rho_1}\Big). \tag{2.7}
\]
From (2.6) we can see that
\[
\rho_1^2\frac{d\rho_1}{dt} = \rho_1\Big[\frac{1}{\rho_{\mathrm{harm}}} - \frac{1}{\rho_1}\Big].
\]
Combining this and (2.7) we see that
\[
\rho_1^2\frac{d\rho_1}{dt} \leq \frac{1}{N}\Big(\frac{\rho_1 - \rho_N}{\rho_N}\Big) \leq \frac{1}{N}\Big(\frac{\rho_1(0) - \rho_N(0)}{\rho_N(0)}\Big).
\]
The last inequality arises because $1/\rho_N \leq 1/\rho_{\mathrm{harm}} \leq 1/\rho_1$, which means $\rho_N' \geq 0$ and $\rho_1' \leq 0$. This means the difference between $\rho_1$ and $\rho_N$ will grow in time and is smallest when $t = 0$. We can integrate the above equation as with the lower bound to show that
\[
\rho_1(t)^3 \leq \frac{t}{N}\Big(\frac{\rho_1(0) - \rho_N(0)}{\rho_N(0)}\Big) + \rho_1(0)^3,
\]
and the right-hand side is non-positive once
\[
t \geq \frac{N\rho_1(0)^3\rho_N(0)}{\rho_N(0) - \rho_1(0)}.
\]
This shows that $\rho_1$ has reached zero (extinction) for $t$ greater than the value given above. We conclude with
\[
\frac{\rho_1(0)^3}{3} \leq T_1 \leq \frac{N\rho_1(0)^3\rho_N(0)}{\rho_N(0) - \rho_1(0)}.
\]

2.3 Summing the Logarithmic Terms

In this section we go back to our leading order approximation for the droplet evolution and make further progress, calculating the asymptotic solution for all terms of $O(\nu^k)$ for any $k$. We begin by taking another look at the inner expansion for $u$. We can rewrite $U$, this inner expansion, as
\[
U = \frac{1}{\rho_j} + \nu B_j(\nu)U_1 + U_2 + \ldots, \tag{2.8}
\]
for functions $B_j(\nu)$, $j = 1, 2, \ldots, N$, to be found. The equation $U_1$ satisfies is the same as in (2.4) and thus has the same form: $U_1 = \log\rho - \log\rho_j$. $B_j(\nu)$ is taking the place of $B_{0,j}$ and $B_{1,j}$ from equation (2.5).
Notice that we have separated the logarithmic dependence from $U_0$ in (2.5a) and rearranged it to lie within $U_1$ and $B_j(\nu)$. The idea is that all terms of logarithmic order can be captured by one function for each droplet. Since this captures the terms of all logarithmic orders it will result in a more accurate solution. Switching to the outer coordinate, we have
\[
U = \frac{1}{\rho_j} + \nu B_j(\nu)\Big[\log|x - x_j| + \frac{1}{\nu} - \log\rho_j\Big] + \ldots \tag{2.9}
\]
The matching works very similarly to the leading order case, but we now group $u_0$ and $u_1$ as one function, $u_H$. The equation $u_H$ satisfies is
\[
\Delta u_H = 0, \quad x \in \Omega \setminus \{x_1, \ldots, x_N\}, \qquad
\frac{\partial u_H}{\partial n} = 0, \quad x \in \partial\Omega,
\]
\[
u_H \sim \frac{1}{\rho_j} + B_j(\nu) + \nu B_j(\nu)\big[\log|x - x_j| - \log\rho_j\big], \quad \text{as } x \to x_j, \text{ for } j = 1, \ldots, N.
\]
This can be solved by making use of the Neumann Green's function $G(x; x_j)$, which satisfies
\[
\Delta G = \frac{1}{|\Omega|} - \delta(x - x_j), \quad x \in \Omega, \qquad
\frac{\partial G}{\partial n} = 0, \quad x \in \partial\Omega; \qquad
\int_\Omega G\,dx = 0,
\]
\[
G(x; x_j) \sim -\frac{1}{2\pi}\log|x - x_j| + R_{jj} + o(1), \quad \text{as } x \to x_j,
\]
where $R_{jj}$ is the regular (non-singular) part of the $j$th Neumann Green's function and depends upon the domain $\Omega$. Thus, $u_H$ can be written as
\[
u_H = -2\pi\nu\sum_{j=1}^{N}B_j(\nu)G(x; x_j) + u_0. \tag{2.10}
\]
As before, $u_0$ is an arbitrary constant to be found. This system has $N + 1$ unknowns (the $B_j(\nu)$ and $u_0$) but only $N$ equations. To resolve this issue we impose the area-preserving condition
\[
\sum_{j=1}^{N}B_j(\nu) = 0.
\]
To determine the rest of the unknowns we expand (2.10) as $x \to x_j$, giving us
\[
u_H \sim \nu B_j(\nu)\log|x - x_j| - 2\pi\nu\Big[B_j(\nu)R_{jj} + \sum_{i \neq j}B_i(\nu)G_{ji}\Big] + u_0,
\]
where $G_{ji}$ is the $i$th Green's function evaluated at $x_j$. Now, we match this with the singularity behavior determined from the inner solution in equation (2.9). After canceling the $\log|x - x_j|$ term in (2.9) and above, we wind up with
\[
-2\pi\nu\Big[B_j(\nu)R_{jj} + \sum_{i \neq j}B_i(\nu)G_{ji}\Big] + u_0 = \frac{1}{\rho_j} + B_j(\nu) - \nu B_j(\nu)\log\rho_j,
\]
for $j = 1, 2, \ldots, N$. We can rewrite this as a linear algebraic system for the unknowns $B_j(\nu)$ and $u_0$:
\[
-2\pi\nu\mathcal{G}B + u_0 e = \rho_0 + B - \nu PB, \tag{2.11}
\]
where $\mathcal{G}$ is the $N \times N$ symmetric matrix with diagonal entries $R_{jj}$ and off-diagonal entries $G_{ij}$, and $P = \operatorname{diag}\big(\log\rho_1, \ldots, \log\rho_N\big)$.
The remaining vectors are
\[
e = \big(1, \ldots, 1\big)^T; \qquad
\rho_0 = \Big(\frac{1}{\rho_1}, \ldots, \frac{1}{\rho_N}\Big)^T; \qquad
B = \big(B_1(\nu), \ldots, B_N(\nu)\big)^T.
\]
If we multiply (2.11) on the left by $e^T$, we realize that by the area-preserving condition on the $B_j(\nu)$, $e^T B = 0$. We are left with
\[
-2\pi\nu e^T\mathcal{G}B + Nu_0 = e^T\rho_0 - \nu e^T PB,
\]
and therefore,
\[
u_0 = \frac{1}{N}\big[2\pi\nu e^T\mathcal{G}B + e^T\rho_0 - \nu e^T PB\big].
\]
We can now return to (2.11) and substitute in for $u_0$. Since $u_0$ is a constant, we can swap which side of $e$ it appears on. Thus,
\[
-2\pi\nu\mathcal{G}B + 2\pi\nu E\mathcal{G}B + E\rho_0 - \nu EPB = \rho_0 + B - \nu PB, \tag{2.12}
\]
where $E$ is given by
\[
E = \frac{1}{N}\begin{pmatrix} 1 & \ldots & 1 \\ \vdots & \ddots & \vdots \\ 1 & \ldots & 1 \end{pmatrix}.
\]
Finally, we can rearrange (2.12) to give us the following result:
\[
\big[I + \nu(I - E)(2\pi\mathcal{G} - P)\big]B = (E - I)\rho_0.
\]
Recall that the components of $B$ are the $B_j(\nu)$. Therefore, when this system is solved, we will have determined the system of ODEs that describes the motion of each droplet. We substitute $B_j(\nu)$ and $U_1$ into (2.8) and take a derivative with respect to $\rho$, giving us
\[
\frac{d\rho_j}{dt} = \frac{1}{\epsilon^3}\frac{\partial u}{\partial \rho}\Big|_{\rho = \rho_j} \approx \frac{\nu}{\epsilon^3}\frac{B_j(\nu)}{\rho_j}, \qquad j = 1, \ldots, N.
\]

Principal Result 2.2: Consider the 2-D free boundary Mullins-Sekerka problem given below:
\[
\Delta u = 0, \quad x \in \Omega \setminus \Gamma, \qquad
u = -\kappa, \quad x \in \Gamma; \qquad
\frac{\partial u}{\partial n} = 0, \quad x \in \partial\Omega, \qquad
V = -\left[\frac{\partial u}{\partial n}\right], \quad x \in \Gamma,
\]
where $u$ is the chemical potential, $\kappa$ the curvature and $V$ the interfacial velocity. In the case where $\Gamma$ is the boundary of a collection of small circles (radius $\sim O(\epsilon)$, $\epsilon \ll 1$), the following system of ODEs describes, to all logarithmic orders, the coarsening phenomenon present:
\[
\frac{d\rho_j}{dt} = \frac{1}{\epsilon^3}\frac{\partial u}{\partial \rho}\Big|_{\rho = \rho_j} \approx \frac{\nu}{\epsilon^3}\frac{B_j(\nu)}{\rho_j}, \qquad j = 1, \ldots, N,
\]
where $\nu = -1/\log\epsilon$ and $B_j(\nu)$ comes from the solution to the linear algebraic system given in (2.12).

As stated before, this will be more accurate than the previous result as it contains higher order correction terms; in particular, it contains all of the terms of $O(\nu^k)$.

2.4 Two Droplet Example

In this section we look at a specific example of the system above with only two droplets. This serves as a check to ensure that everything behaves as expected. Let $\Omega \subset \mathbb{R}^2$ be the unit disk centered at the origin. That is, $\Omega = \{x : |x| \leq 1\}$.
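Before plugging in numbers, note that the linear system of Principal Result 2.2 can be assembled and solved in a few lines. The sketch below is not from the thesis: the Green's-matrix entries and radii are made-up illustrative values, whereas in practice $R_{jj}$ and $G_{ij}$ come from the Neumann Green's function of $\Omega$. The solution automatically satisfies the area-preserving condition $e^T B = 0$, since multiplying the system by $e^T$ annihilates the $\nu$ term ($e^T(I - E) = 0$).

```python
import numpy as np

N = 3
eps = 0.05
nu = -1.0 / np.log(eps)

# hypothetical symmetric Green's matrix (R_jj on the diagonal, G_ij off it)
G = np.array([[-0.034, -0.134, -0.050],
              [-0.134,  0.045, -0.080],
              [-0.050, -0.080,  0.020]])
rho = np.array([0.5, 0.8, 1.0])          # illustrative droplet radii

P = np.diag(np.log(rho))
E = np.full((N, N), 1.0 / N)
I = np.eye(N)
rho0 = 1.0 / rho

A = I + nu * (I - E) @ (2 * np.pi * G - P)
B = np.linalg.solve(A, (E - I) @ rho0)

print(B.sum())        # ~0: the B_j sum to zero (area preservation)
print(B / rho)        # d(rho_j)/dt up to the positive factor nu/eps^3
```

As expected, the component of $B$ belonging to the smallest droplet is negative, so that droplet shrinks.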
Let $\epsilon = .05$, $r_1(0) = .025$, $r_2(0) = .05$, which gives $\rho_1(0) = 1/2$ and $\rho_2(0) = 1$. The droplets will be centered at $x_1 = (-1/2, 0)$ and $x_2 = (2/3, 0)$, along the $x$-axis.

In order to solve the algebraic system from the previous section we need to first solve for the Green's function that arises for this domain and boundary condition. Once again, the problem to solve is:
\[
\Delta G = \frac{1}{|\Omega|} - \delta(x - x_j), \quad x \in \Omega, \qquad
\frac{\partial G}{\partial n} = 0, \quad x \in \partial\Omega; \qquad
\int_\Omega G\,dx = 0,
\]
\[
G(x; x_j) \sim -\frac{1}{2\pi}\log|x - x_j| + R_{jj} + o(1), \quad \text{as } x \to x_j.
\]
The domain was chosen in this example so that the Green's function is well-known. It is
\[
G(x; x_j) = \frac{1}{2\pi}\Big[-\log|x - x_j| - \log\Big|x|x_j| - \frac{x_j}{|x_j|}\Big| + \frac{1}{2}\big(|x|^2 + |x_j|^2\big) - \frac{3}{4}\Big],
\]
with regular part
\[
R(x; x_j) = \frac{1}{2\pi}\Big[-\log\Big|x|x_j| - \frac{x_j}{|x_j|}\Big| + \frac{1}{2}\big(|x|^2 + |x_j|^2\big) - \frac{3}{4}\Big].
\]
Plugging in the given parameters, we determine that
\[
R_{11} \approx -.0338; \qquad R_{22} \approx .04492; \qquad G_{12} = G_{21} \approx -.1344.
\]
Now we must go through the steps of solving the $2 \times 2$ algebraic system. We know that
\[
I - E = \frac{1}{2}\begin{pmatrix} 1 & -1 \\ -1 & 1 \end{pmatrix}; \qquad
2\pi\mathcal{G} - P = \begin{pmatrix} 2\pi R_{11} - \log\rho_1 & 2\pi G_{12} \\ 2\pi G_{21} & 2\pi R_{22} - \log\rho_2 \end{pmatrix}.
\]
Thus, we solve $AB = \tilde\rho_0$, where
\[
A = \begin{pmatrix}
1 + \nu\pi(R_{11} - G_{21}) - \frac{\nu}{2}\log\rho_1 & \nu\pi(G_{12} - R_{22}) + \frac{\nu}{2}\log\rho_2 \\
\nu\pi(G_{21} - R_{11}) + \frac{\nu}{2}\log\rho_1 & 1 + \nu\pi(R_{22} - G_{12}) - \frac{\nu}{2}\log\rho_2
\end{pmatrix},
\]
\[
B = \begin{pmatrix} B_1(\nu) \\ B_2(\nu) \end{pmatrix}; \qquad
\tilde\rho_0 = (E - I)\rho_0 = \frac{1}{2}\begin{pmatrix} \frac{1}{\rho_2} - \frac{1}{\rho_1} \\ \frac{1}{\rho_1} - \frac{1}{\rho_2} \end{pmatrix}.
\]

Figure 2.1: Here is a plot of the solutions to the system of ODEs that come from both the leading order expansion as well as the one that includes all of the logarithmic terms, for initial values $\rho(0) = .5$ and $\rho(0) = 1$. One can clearly see in both cases that the system is area preserving and that the smallest drop extinguishes in finite time.

Since the system is $2 \times 2$ it is trivial to invert. We use MATLAB's ODE23s to solve the resulting system of ODEs. The first step was to scale $t$ so that the RHS of the system is $O(1)$.
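As an independent check, the quoted values of $R_{11}$, $R_{22}$ and $G_{12}$ can be reproduced directly from the formulas above. A small Python verification (not part of the thesis), using the fact that at $x = x_j$ the image term reduces to $1 - |x_j|^2$:

```python
import numpy as np

def Rjj(xj):
    # regular part of the unit-disk Neumann Green's function at x = xj
    a2 = float(np.dot(xj, xj))
    return (-np.log(1.0 - a2) + a2 - 0.75) / (2 * np.pi)

def Gij(x, xj):
    aj = np.linalg.norm(xj)
    image = np.linalg.norm(x * aj - xj / aj)
    return (-np.log(np.linalg.norm(x - xj)) - np.log(image)
            + 0.5 * (np.dot(x, x) + np.dot(xj, xj)) - 0.75) / (2 * np.pi)

x1 = np.array([-0.5, 0.0])
x2 = np.array([2.0 / 3.0, 0.0])
print(Rjj(x1), Rjj(x2), Gij(x1, x2))  # -> approximately -0.0338, 0.04492, -0.1344
```

The symmetry $G_{12} = G_{21}$ also holds, as expected for a Green's function.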
We let $t = \epsilon^3\tau/\nu$, a slow timescale. As can be seen from figure 2.1, the larger drop grows at the expense of the smaller drop, and we observe the area preservation property as well as finite time extinction.

Chapter 3

The Chemotaxis Model

Now we return to our volume-filling chemotaxis model as mentioned in the introduction. As we will see, the chemotaxis model has very similar behavior to that of Ostwald ripening, with surface diffusion replacing motion by curvature but still maintaining a coarsening effect. This is the link between the two different systems: the mechanism for the exchange of mass may be different, but the phenomenon is qualitatively similar.

The system will be nondimensionalized and then studied in a general two-dimensional domain with the use of a boundary-fitted coordinate framework. After deriving the velocity equation that governs the interfacial motion we will look at a few specific cases to verify intuition and motivate numerical studies.

3.1 Nondimensionalization

With the model in hand we can begin nondimensionalization to simplify our equations and highlight key parameters. The full model is
\[
u_t = \nabla\cdot\Big(D_u\nabla u - \chi u\Big(1 - \frac{u}{\gamma}\Big)\nabla v\Big), \quad x \in \Omega, \tag{3.1a}
\]
\[
v_t = D_v\Delta v + \alpha u - \beta v, \quad x \in \Omega, \tag{3.1b}
\]
\[
\frac{\partial u}{\partial n} = 0, \quad x \in \partial\Omega; \qquad \frac{\partial v}{\partial n} = 0, \quad x \in \partial\Omega. \tag{3.1c}
\]
We make the following variable changes:
\[
\hat{x} = \frac{x}{L}; \qquad \hat{t} = \frac{t}{T}; \qquad \hat{u} = \frac{u}{u^*}.
\]
Dividing by $\chi$ in equation (3.1a) and $\beta$ in equation (3.1b), we have
\[
\frac{u^*\hat{u}_{\hat{t}}}{\chi T} = \frac{1}{L^2}\nabla_{\hat{x}}\cdot\Big(\frac{D_u u^*}{\chi}\nabla_{\hat{x}}\hat{u} - u^*\hat{u}\Big(1 - \frac{u^*}{\gamma}\hat{u}\Big)\nabla_{\hat{x}}v\Big),
\]
\[
\frac{v_{\hat{t}}}{\beta T} = \frac{D_v}{\beta L^2}\Delta_{\hat{x}}v + \frac{\alpha}{\beta}u - v.
\]
We choose $u^* = \gamma$, $T = \gamma L^2/\chi$ and define
\[
D_1 = \frac{D_u}{\chi}; \qquad D_2 = \frac{D_v}{\beta L^2}; \qquad \tau = \frac{\chi}{\gamma\beta L^2}; \qquad \sigma = \frac{\alpha}{\beta}.
\]
The hats over the variables are dropped for convenience, and we assume that $D_1 \ll 1$, as the diffusion strength $D_u$ is typically much smaller than the chemotactic sensitivity $\chi$. We rename $D_1$ as $\epsilon$ and remove the subscript from $D_2$, relabeling it as $D$.
The nondimensionalized system is
\[
u_t = \nabla\cdot\big(\epsilon\nabla u - u(1 - u)\nabla v\big), \qquad
\tau v_t = D\Delta v + \sigma u - v,
\]
\[
\frac{\partial u}{\partial n} = 0, \quad x \in \partial\Omega; \qquad \frac{\partial v}{\partial n} = 0, \quad x \in \partial\Omega. \tag{3.2}
\]

3.2 Boundary Fitted Coordinate Expansion

Motivated by numerical results and other similar situations where this approach has been fruitful, we switch to a boundary fitted coordinate system. The idea is to describe points in space by their distance from an interface wall instead of standard cartesian coordinates. The derivation of the coordinate system can be seen in appendix A.

We begin the calculation by unraveling the nondimensionalized chemotaxis model (3.2) into this boundary fitted coordinate framework. The equations become
\[
u_t = \epsilon\Big[u_{\eta\eta} - \frac{\kappa}{1 - \kappa\eta}u_\eta + \frac{1}{1 - \kappa\eta}\partial_s\Big(\frac{u_s}{1 - \kappa\eta}\Big)\Big]
- f(u)\Big[v_{\eta\eta} - \frac{\kappa}{1 - \kappa\eta}v_\eta + \frac{1}{1 - \kappa\eta}\partial_s\Big(\frac{v_s}{1 - \kappa\eta}\Big)\Big]
- f'(u)\Big[\frac{u_s v_s}{(1 - \kappa\eta)^2} + u_\eta v_\eta\Big],
\]
\[
\tau v_t = D\Big[v_{\eta\eta} - \frac{\kappa}{1 - \kappa\eta}v_\eta + \frac{1}{1 - \kappa\eta}\partial_s\Big(\frac{v_s}{1 - \kappa\eta}\Big)\Big] + \sigma u - v,
\]
where $f(u) = u(1 - u)$.

Figure 3.1: A depiction of the boundary fitted coordinate system. $\Omega$ is the outer domain and can be partitioned into two regions, $\Omega^-$ ($\eta > 0$) and $\Omega^+$ ($\eta < 0$), with $\Gamma$ representing the interface between them. Note the inward normal and the sign convention for $\eta$.

We solve these two PDEs in some domain $\Omega$ with Neumann boundary conditions on the outer wall. We define $\Omega^-$ to be the region where $u$ has value 1 and $\Omega^+ = \Omega \setminus \Omega^-$. $\Gamma$, known as the front or interface, is the boundary of $\Omega^-$.

Now we expand near the front, using the inner variable $\hat\eta = \eta/\epsilon$. We obtain
\[
\tau V_t = D\Big[\frac{1}{\epsilon^2}V_{\hat\eta\hat\eta} - \frac{1}{\epsilon}\Big(\frac{\kappa}{1 - \kappa\epsilon\hat\eta}\Big)V_{\hat\eta} + \frac{1}{1 - \kappa\epsilon\hat\eta}\partial_s\Big(\frac{V_s}{1 - \kappa\epsilon\hat\eta}\Big)\Big] + \sigma u - V.
\]
Expanding $V$ as $V = V_0 + \epsilon V_1 + \epsilon^2 V_2 + \ldots$, we determine that for $t = O(1)$,
\[
\tau\partial_t V_0 = D\Big[\frac{1}{\epsilon^2}\partial_{\hat\eta\hat\eta}V_0 + \frac{1}{\epsilon}\partial_{\hat\eta\hat\eta}V_1 + \partial_{\hat\eta\hat\eta}V_2 - \frac{\kappa}{\epsilon}\partial_{\hat\eta}V_0 - \kappa\partial_{\hat\eta}V_1 - \kappa^2\hat\eta\partial_{\hat\eta}V_0 + \partial_{ss}V_0\Big] + \sigma u - V_0 + \ldots
\]
We can see from above that the leading order equation is
\[
\partial_{\hat\eta\hat\eta}V_0 = 0,
\]
which has solution
\[
V_0 = A_0(s)\hat\eta + B_0(s). \tag{3.3}
\]
Since $v$ is bounded, $V_0$ cannot grow linearly as $\hat\eta \to \pm\infty$ and thus $A_0(s) \equiv 0$. Thus, we set $V_0 = V_0(s)$.
This also tells us that $\partial_{\hat\eta}V_0 = 0$, which gives the following first correction equation:
\[
\partial_{\hat\eta\hat\eta}V_1 = 0,
\]
with solution
\[
V_1 = A_1(s)\hat\eta + B_1(s), \tag{3.4}
\]
where in this case we cannot yet make any conclusions about the growth in $\hat\eta$.

Now we apply the same treatment to the inner equation for $U$. Expanding $U$ as $U = U_0 + \epsilon U_1 + \ldots$, we arrive at the following equation:
\[
\partial_t U_0 = \epsilon\Big[\frac{1}{\epsilon^2}\partial_{\hat\eta\hat\eta}U_0 + \frac{1}{\epsilon}\partial_{\hat\eta\hat\eta}U_1 - \frac{\kappa}{\epsilon}\partial_{\hat\eta}U_0 + \partial_{ss}U_0\Big]
- f(U_0)\Big[\frac{1}{\epsilon^2}\partial_{\hat\eta\hat\eta}V_0 + \frac{1}{\epsilon}\partial_{\hat\eta\hat\eta}V_1 + \partial_{\hat\eta\hat\eta}V_2 - \frac{\kappa}{\epsilon}\partial_{\hat\eta}V_0 - \kappa\partial_{\hat\eta}V_1 - \kappa^2\hat\eta\partial_{\hat\eta}V_0 + \partial_{ss}V_0\Big]
- f'(U_0 + \epsilon U_1)\Big[\partial_s U_0\partial_s V_0 + \frac{1}{\epsilon^2}V_{\hat\eta}U_{\hat\eta}\Big] + \ldots \tag{3.5}
\]
The $\frac{1}{\epsilon^2}f'(U_0 + \epsilon U_1)V_{\hat\eta}U_{\hat\eta}$ term that is left unexpanded is of particular importance and produces many terms; we shall focus on it now. Note that $\partial_{\hat\eta}V_0 = 0$ and as a result it can be eliminated from the equation. We calculate
\[
\frac{1}{\epsilon^2}f'(U_0 + \epsilon U_1)V_{\hat\eta}U_{\hat\eta} \approx \frac{1}{\epsilon}f'(U_0)\partial_{\hat\eta}U_0\partial_{\hat\eta}V_1 + f'(U_0)\partial_{\hat\eta}U_1\partial_{\hat\eta}V_1 + f''(U_0)U_1\partial_{\hat\eta}U_0\partial_{\hat\eta}V_1 + f'(U_0)\partial_{\hat\eta}U_0\partial_{\hat\eta}V_2.
\]
Since we know that $\partial_{\hat\eta\hat\eta}V_0 = \partial_{\hat\eta\hat\eta}V_1 = 0$, they can be eliminated from equation (3.5). Additionally, we can replace instances of $\partial_{\hat\eta}V_1$ with $A_1(s)$. Substituting this information and the above equation into (3.5), the full expansion becomes
\[
\partial_t U_0 = \epsilon\Big[\frac{1}{\epsilon^2}\partial_{\hat\eta\hat\eta}U_0 + \frac{1}{\epsilon}\partial_{\hat\eta\hat\eta}U_1 - \frac{\kappa}{\epsilon}\partial_{\hat\eta}U_0 + \partial_{ss}U_0\Big]
- f(U_0)\big[\partial_{\hat\eta\hat\eta}V_2 - \kappa A_1(s) + \partial_{ss}V_0\big]
- f'(U_0)\Big[\partial_s U_0\partial_s V_0 + \frac{1}{\epsilon}\partial_{\hat\eta}U_0 A_1(s) + \partial_{\hat\eta}U_1 A_1(s) + \partial_{\hat\eta}U_0\partial_{\hat\eta}V_2\Big]
- f''(U_0)U_1\partial_{\hat\eta}U_0 A_1(s) + \ldots \tag{3.6}
\]
Our goal is to derive an equation for the motion of the front, that is, an equation that $\eta'(t)$ satisfies. Supposing that the front changes slowly in time, $\eta = \eta(T)$, where $T = \epsilon^p t$. The change to $u_t$ becomes
\[
u_t = \epsilon^{p-1}u_{\hat\eta}\,\eta'(T). \tag{3.7}
\]
We let $\dot\eta = \eta'(T)$ be the normal velocity of $\Gamma$, where $\dot\eta > 0$ if $\Gamma$ is expanding. $p$ is chosen to be 1 to match the slow evolution assumption and the first correction term.

Since the front evolves slowly, the leading order equation does not change in time and is
\[
\partial_{\hat\eta\hat\eta}U_0 - f'(U_0)\partial_{\hat\eta}U_0 A_1(s) = 0,
\]
which can be written as
\[
\big[\partial_{\hat\eta}U_0 - f(U_0)A_1(s)\big]_{\hat\eta} = 0.
\]
After integrating once, we have a term depending on $s$ on the right hand side.
Since we will have to integrate once more and can't have growth as $\hat\eta \to \infty$, this term must be zero. We now have a 1-D profile equation parametrized by $s$:
\[
\partial_{\hat\eta}U_0 - f(U_0)A_1(s) = 0, \qquad
U_0 \to 0 \text{ as } \hat\eta \to -\infty; \qquad U_0 \to 1 \text{ as } \hat\eta \to +\infty. \tag{3.8}
\]
To identify $A_1(s)$ we consider the matching condition between the inner and outer expansions for $V$. Returning to (3.3) and (3.4), the two-term inner expansion for $V$ is
\[
V = A_0(s) + \epsilon\big[A_1(s)\hat\eta + B_1(s)\big] + \ldots \tag{3.9}
\]
In the outer region (to leading order), we have that $u \sim 1$ in $\Omega^-$ and $u \sim 0$ in $\Omega^+$, which tells us that the outer problem for $v$ is
\[
v_t = D\Delta v - v + \sigma\begin{cases} 1 & x \in \Omega^- \\ 0 & x \in \Omega^+ \end{cases}, \qquad
\frac{\partial v}{\partial n} = 0, \quad x \in \partial\Omega. \tag{3.10}
\]
Now, if the initial condition for $v$ is such that $v|\Omega| = \sigma|\Omega^-|$, where $|\Omega|$ is the area of $\Omega$, then we expect that $v$ is near its steady state and will evolve slowly in time. Hence, $v(x, 0) = \sigma|\Omega^-|/|\Omega|$ for $v$ satisfying (3.10).

If we expand the outer equation for $v$ as $x \to x_0(s)$ (as $\eta \to 0$), where $x_0(s)$ is a point on the interface $\Gamma$, approaching from the $\pm$ direction, we have (by Taylor's theorem)
\[
v = v\big|_\Gamma + \nabla v\big|_\Gamma\cdot(x - x_0) + \ldots
\]
Recall from the boundary fitted coordinate system that
\[
\nabla v = \frac{v_s}{1 - \kappa\eta}\,t + v_\eta\,n; \qquad x = x_0(s) + \eta\,n(s).
\]
Thus, we derive that
\[
v = v\big|_\Gamma + \eta\,v_\eta\big|_{\eta = 0} + \ldots
\]
as $x \to x_0$. Recalling that $\hat\eta = \eta/\epsilon$ and noting (3.9), we have that
\[
v\big|_\Gamma + \eta\,v_\eta\big|_{\eta = 0} + \ldots \sim A_0(s) + \big[A_1(s)\eta + \epsilon B_1(s)\big] + \ldots
\]
We conclude that
\[
A_0(s) = v\big|_\Gamma; \qquad A_1(s) = v_\eta\big|_{\eta = 0} = \nabla v\cdot n\big|_\Gamma.
\]
To summarize, in the outer region on the long timescale inherent to the evolution of the front, where $v_t = \epsilon v_T$, $v$ solves
\[
D\Delta v - v = -\sigma\begin{cases} 1 & x \in \Omega^- \\ 0 & x \in \Omega^+ \end{cases}, \qquad
\frac{\partial v}{\partial n} = 0, \quad x \in \partial\Omega,
\]
and with a given $\Gamma$, $A_0(s) = v|_\Gamma$ and $A_1(s) = \nabla v\cdot n|_\Gamma$.

Returning to the 1-D profile equation for $U_0$ (3.8), we can write the exact solution as
\[
U_0 = \frac{U_0(0)}{U_0(0) + (1 - U_0(0))e^{-\hat\eta A_1(s)}},
\]
where $U_0(0)$ is some arbitrary value of $U_0$ on $\Gamma$. If $A_1(s) > 0$, then as $\hat\eta \to \infty$, which is inside of $\Gamma$, $U_0 \to 1$. Likewise, as we move outside of $\Gamma$ ($\hat\eta \to -\infty$), $U_0 \to 0$.

Conjecture 3.1: Consider the outer problem (3.10) for $v$.
The solution of (3.10) is such that
\[
A_1(s) = \nabla v\cdot n\big|_\Gamma > 0.
\]
For now we just assume this to be the case. Under this assumption we may set $U_0(0) = 1/2$, the mid-value, which gives a precise definition of $\Gamma$.

Now we return to equation (3.6). From (3.7) we have
\[
\partial_{\hat\eta}U_0\,\dot\eta = \partial_{\hat\eta\hat\eta}U_1 - \kappa\partial_{\hat\eta}U_0 - f(U_0)\partial_{\hat\eta\hat\eta}V_2 + \kappa f(U_0)A_1(s) - f(U_0)\partial_{ss}V_0 - f'(U_0)\partial_s U_0\partial_s V_0 - f'(U_0)\partial_{\hat\eta}U_0\partial_{\hat\eta}V_2 - f'(U_0)\partial_{\hat\eta}U_1 A_1(s) - f''(U_0)U_1\partial_{\hat\eta}U_0 A_1(s). \tag{3.11}
\]
From the leading order equation for $U$ given in (3.8), we know that $\partial_{\hat\eta}U_0 = f(U_0)A_1(s)$, so we can cancel the terms $-\kappa\partial_{\hat\eta}U_0 + \kappa f(U_0)A_1(s)$ in the above equation. We now define the operator $L$ such that
\[
L\phi = \phi_{\hat\eta\hat\eta} - f'(U_0)A_1(s)\phi_{\hat\eta} - f''(U_0)A_1(s)\partial_{\hat\eta}U_0\,\phi. \tag{3.12}
\]
Thus, (3.11) becomes
\[
\partial_{\hat\eta}U_0\,\dot\eta = LU_1 - f(U_0)\partial_{\hat\eta\hat\eta}V_2 - f(U_0)\partial_{ss}V_0 - f'(U_0)\partial_s U_0\partial_s V_0 - f'(U_0)\partial_{\hat\eta}U_0\partial_{\hat\eta}V_2.
\]
This can be written more elegantly as
\[
\partial_{\hat\eta}U_0\,\dot\eta = LU_1 - \big(f(U_0)\partial_{\hat\eta}V_2\big)_{\hat\eta} - \Big(\frac{\partial_{\hat\eta}U_0\,\partial_s V_0}{A_1(s)}\Big)_s. \tag{3.13}
\]
We want to eliminate the terms contained in $L$, so we need to determine the adjoint operator $L^*$. We look for $L^*$ such that $(L\phi, \psi) = (\phi, L^*\psi)$. After rewriting (3.12), we have
\[
(L\phi, \psi) = \int_{-\infty}^{\infty}\Big[\phi_{\hat\eta\hat\eta} - \big(f'(U_0)A_1(s)\phi\big)_{\hat\eta}\Big]\psi\,d\hat\eta.
\]
Upon integrating the first term in the integral by parts twice and the second term once, with the boundary terms vanishing due to the imposed boundary conditions, we obtain
\[
(\phi, L^*\psi) = \int_{-\infty}^{\infty}\Big[\psi_{\hat\eta\hat\eta} + f'(U_0)A_1(s)\psi_{\hat\eta}\Big]\phi\,d\hat\eta,
\]
and therefore,
\[
L^*\psi = \psi_{\hat\eta\hat\eta} + f'(U_0)A_1(s)\psi_{\hat\eta}.
\]
Lemma 3.1: Suppose that
\[
L\phi = \phi_{\hat\eta\hat\eta} - f'(U_0)A_1(s)\phi_{\hat\eta} - f''(U_0)A_1(s)\partial_{\hat\eta}U_0\,\phi,
\]
on $-\infty < \hat\eta < \infty$, with $\phi \to 0$ as $\hat\eta \to \pm\infty$. Then for $\psi$ bounded as $\hat\eta \to \pm\infty$, we have
\[
\int_{-\infty}^{\infty}\psi L\phi\,d\hat\eta = \int_{-\infty}^{\infty}\phi L^*\psi\,d\hat\eta,
\]
as the boundary terms cancel. Here,
\[
L^*\psi = \psi'' + A_1 f'(U_0)\psi'.
\]
The kernel of $L^*$ contains the constant functions. Without loss of generality, we say the kernel function is 1. Thus, using this solvability condition, we multiply everything in (3.13) by 1 and integrate from $-\infty$ to $\infty$ with respect to $\hat\eta$. This eliminates the terms in $LU_1$ (due to the boundary conditions and the adjoint operator kernel) and we are left with
\[
\int_{-\infty}^{\infty}\partial_{\hat\eta}U_0\,\dot\eta\,d\hat\eta = -\int_{-\infty}^{\infty}\Big[\big(f(U_0)\partial_{\hat\eta}V_2\big)_{\hat\eta} + \Big(\frac{\partial_{\hat\eta}U_0\,\partial_s V_0}{A_1(s)}\Big)_s\Big]d\hat\eta.
\]
We can immediately integrate the left-hand side and apply the fact that $U_0 \to 1$ as $\hat\eta \to \infty$ and $U_0 \to 0$ as $\hat\eta \to -\infty$, leaving us with $\dot\eta$. Note that $\dot\eta$ does not depend upon $\hat\eta$ and can be factored out of the integral. Since $V_0$ and $A_1(s)$ only depend on $s$, we can switch the order of the derivatives in the second term of the right-hand side and integrate $\partial_{\hat\eta}U_0$ as we did on the left-hand side. This leaves us with
\[
\dot\eta = -\int_{-\infty}^{\infty}\big(f(U_0)\partial_{\hat\eta}V_2\big)_{\hat\eta}\,d\hat\eta - \Big(\frac{\partial_s V_0}{A_1(s)}\Big)_s.
\]
The final integral becomes
\[
\int_{-\infty}^{\infty}\big(f(U_0)\partial_{\hat\eta}V_2\big)_{\hat\eta}\,d\hat\eta = f(U_0)\partial_{\hat\eta}V_2\Big|_{-\infty}^{\infty}.
\]
Recall that $f(U_0) = U_0(1 - U_0)$ and that $U_0 \to 1$ or $U_0 \to 0$ as $\hat\eta \to \pm\infty$, respectively. Thus, the integral vanishes and we are left with our key result,
\[
\dot\eta = -\Big(\frac{\partial_s V_0}{A_1(s)}\Big)_s.
\]

Principal Result 3.1: Let $\Gamma$ be a simple closed smooth curve and let $T = \epsilon t$ with $T = O(1)$. Consider the outer solution $v$ defined by the quasi-steady problem
\[
D\Delta v - v = -\sigma\begin{cases} 1 & x \in \Omega^- \\ 0 & x \in \Omega^+ \end{cases}, \qquad
\frac{\partial v}{\partial n} = 0, \quad x \in \partial\Omega,
\]
where $\Gamma$ evolves on the slow timescale. The normal velocity $\dot\eta(T)$ of $\Gamma$ on the long timescale $T$ satisfies the surface diffusion law
\[
\dot\eta = -\Big(\frac{\partial_s V_0}{A_1(s)}\Big)_s, \tag{3.14}
\]
where $s$ is the arc length of $\Gamma$, $V_0 = A_0 = v|_\Gamma$ and $A_1 = \nabla v\cdot n|_\Gamma$. Note that $\dot\eta > 0$ if the interface is expanding.

3.3 Specific Domain Choices

In order to analyze Principal Result 3.1 and verify some intuition about the behavior of the system, we take a closer look at a few specific examples. The first example is the case of two concentric circles, where the domain is the unit disk ($\Omega = \{x : |x| \leq 1\}$) and $\Gamma$, the interface that separates $\Omega^+$ and $\Omega^-$, is a circle. Next, we analyze the effects of a small perturbation to this circular interface, where $\Gamma = r_0 + \delta h(\theta)$, $h(\theta)$ periodic, for $\delta \ll 1$. Finally, we examine the case of many circular interfaces and how they interact with each other, as well as a brief look at the effects of small perturbations to these many circular interfaces.
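Before working through these examples, it is worth noting that the front profile (3.8), used above to define $\Gamma$ via $U_0(0) = 1/2$, is a logistic function of $\hat\eta$, and its exact form is easy to verify numerically. In the sketch below (not from the thesis), the value of $A_1$ is a hypothetical positive constant, in line with Conjecture 3.1.

```python
import numpy as np

def U0(eta_hat, A1, U0_mid=0.5):
    # logistic solution of dU0/d(eta_hat) = A1 * U0 * (1 - U0), U0(0) = U0_mid
    return U0_mid / (U0_mid + (1.0 - U0_mid) * np.exp(-A1 * eta_hat))

A1 = 0.7                                  # hypothetical A1(s) > 0
eta = np.linspace(-20.0, 20.0, 2001)
u = U0(eta, A1)

du = np.gradient(u, eta)                  # second-order numerical derivative
residual = np.max(np.abs(du - A1 * u * (1.0 - u)))
print(residual)                           # small: the profile ODE is satisfied
print(u[0], u[-1])                        # far-field limits ~0 and ~1
```

The far-field limits reproduce the boundary conditions in (3.8): $U_0 \to 0$ outside the front and $U_0 \to 1$ inside.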
In all of these examples we highlight how the velocity from Principal Result 3.1 changes.

3.3.1 Concentric Circles

In our first example, we assume the initial amount of mass in the system is small and choose the outer domain to be the unit disk; that is, $\Omega = \{x : |x| \leq 1\}$. We also assume that the initial aggregation phase for $u$ has already occurred and we are in a metastable state of two concentric circles. We have $u = 1$ for $x \in \Omega^-$ and $u = 0$ for $x \in \Omega^+$, where $\Omega^- = \{x : |x| \leq \delta\}$, $\Omega^+ = \{x : |x| > \delta\}$, and $\Gamma = \{x : |x| = \delta\}$, for some constant $\delta$.

Consider $M$, the initial mass of $u$. Since $u$ is uniform within $\Omega^+$ and $\Omega^-$, we have
\[
M = \int_\Omega u\,dx = \int_{\Omega^-}1\,dx = \pi\delta^2,
\]
and therefore,
\[
\delta = \sqrt{\frac{M}{\pi}},
\]
which means $\delta = O(\sqrt{M})$. Since we assume $M$ is small ($M \ll 1$), we must also have $\delta \ll 1$.

We suspect that in this case, due to the radial symmetry of the domain, the solution will be radially symmetric and therefore not depend upon $s$, the arc length of $\Gamma$. Since the solution will be independent of $s$, the velocity from Principal Result 3.1 and (3.14) will be zero and we are actually in the case of a stable equilibrium, not a metastable one.

The reduced problem we have to solve is
\[
D\Delta v - v = -\sigma\begin{cases} 1 & 0 \leq |x| \leq \delta \\ 0 & \delta < |x| \leq 1 \end{cases}, \qquad
\frac{\partial v}{\partial n} = 0, \quad |x| = 1. \tag{3.15}
\]
Expanding $v$ in terms of $\delta$ as $v = v_0 + \mu(\delta)v_1 + \ldots$ and substituting in, we have the following leading order equation:
\[
D\Delta v_0 - v_0 = 0, \quad |x| \leq 1, \qquad
\frac{\partial v_0}{\partial n} = 0, \quad |x| = 1,
\]
which has solution $v_0 = 0$. The $O(\mu(\delta))$ equation is
\[
D\Delta v_1 - v_1 = 0, \quad |x| \leq 1,\ x \neq 0, \qquad
\frac{\partial v_1}{\partial n} = 0, \quad |x| = 1; \qquad
v_1 \sim \,?, \quad \text{as } x \to 0,
\]
where the singular behavior of $v_1$ as $x \to 0$ is determined by matching to the inner expansion around 0. Now, consider the inner variable $y = x/\delta$ and the inner $v$ function $V$. Using this change of variables for equation (3.15), our inner equation to solve becomes
\[
\frac{D\Delta V}{\delta^2} - V = -\sigma\begin{cases} 1 & |y| \leq 1 \\ 0 & |y| > 1 \end{cases}.
\]
We now scale $V$ as $V = \delta^2 W$ and expand $W$ as
\[
W = \log(\delta)W_0 + W_1 + \ldots \tag{3.16}
\]
The choice of the $\log(\delta)$ term will be made clear shortly.
Substituting in this scaling and expansion, we wind up solving the following $O(\log\delta)$ equation for $W_0$:
\[
\Delta W_0 = 0, \qquad W_0 \text{ bounded as } y \to \infty,
\]
which implies that $W_0 = C_0$, a constant that will be determined by matching to the outer solution. The $O(1)$ equation is
\[
\Delta W_1 = -\frac{\sigma}{D}\begin{cases} 1 & |y| \leq 1 \\ 0 & |y| > 1 \end{cases}.
\]
This equation can be solved by switching to polar coordinates and using the fact that the solution must be radially symmetric. With $\rho = |y|$ we have
\[
\partial_{\rho\rho}W_1 + \frac{1}{\rho}\partial_\rho W_1 = -\frac{\sigma}{D}\begin{cases} 1 & \rho \leq 1 \\ 0 & \rho > 1 \end{cases},
\]
with $W_1$ well-behaved as $\rho \to 0$. Solving in each region separately, we have
\[
W_1 = \begin{cases} -\frac{\sigma}{D}\big(\frac{1}{4}\rho^2 + A_0\big) & \rho \leq 1 \\ B_0\log\rho + C_1 & \rho > 1 \end{cases}.
\]
We impose continuity and differentiability at $\rho = 1$ to patch the two solutions together and solve for the unknown constants. The solution is
\[
W_1 = \begin{cases} -\frac{\sigma}{D}\big(\frac{1}{4}\rho^2 - \frac{C_1 D}{\sigma} - \frac{1}{4}\big) & \rho \leq 1 \\ -\frac{\sigma}{2D}\log\rho + C_1 & \rho > 1 \end{cases},
\]
where $C_1$ is undetermined and comes from matching to higher order terms in the outer expansion. To proceed with matching the inner and outer solutions and determining $C_0$ and $C_1$, we examine the behavior of $W$ as $\rho \to \infty$. Adding together $\log(\delta)W_0$ and $W_1$ and switching to the outer radial coordinate via $\rho = r/\delta$, we see that
\[
W \sim \log(\delta)C_0 - \frac{\sigma}{2D}\log r + \frac{\sigma}{2D}\log\delta + C_1 + \ldots
\]
As $\rho \to \infty$, $W$ has an unmatchable $\log\delta$ term. This explains why we expanded $W$ as in (3.16), and it forces $C_0 = -\sigma/(2D)$ to cancel this term. As a result, for $\rho > 1$, $W$ becomes
\[
W = -\frac{\sigma}{2D}\log\delta - \frac{\sigma}{2D}\log\rho + C_1 + \ldots
\]
Since $V = \delta^2 W$, for $|y| \leq 1$ ($\rho \leq 1$), we have
\[
V = \delta^2\Big[-\frac{\sigma}{2D}\log\delta - \frac{\sigma}{D}\Big(\frac{1}{4}|y|^2 - \frac{C_1 D}{\sigma} - \frac{1}{4}\Big) + \ldots\Big], \tag{3.17}
\]
and for $|y| > 1$ ($\rho > 1$), we have
\[
V = \delta^2\Big[-\frac{\sigma}{2D}\log\delta - \frac{\sigma}{2D}\log|y| + C_1 + \ldots\Big]. \tag{3.18}
\]
Now, as $y \to \infty$, we must have the inner and outer solutions match. Switching to the outer variable in (3.18), we have
\[
\mu(\delta)v_1 + \ldots \sim \delta^2\Big[-\frac{\sigma}{2D}\log|x| + C_1 + \ldots\Big],
\]
which means $\mu(\delta) = \delta^2$ and determines the problem $v_1$ satisfies as
\[
D\Delta v_1 - v_1 = 0, \quad |x| \leq 1,\ x \neq 0, \qquad
\frac{\partial v_1}{\partial n} = 0, \quad |x| = 1; \qquad
v_1 \sim -\frac{\sigma}{2D}\log|x| + C_1, \quad \text{as } x \to 0.
\]
This equation can be solved by once again switching to polar coordinates and leveraging the radial symmetry of the domain. The problem becomes
\[
\partial_{rr}v_1 + \frac{1}{r}\partial_r v_1 - \lambda^2 v_1 = 0, \quad r < 1, \qquad
\partial_r v_1 = 0, \quad r = 1; \qquad
v_1 \sim -\frac{\sigma}{2D}\log r + C_1, \quad r \to 0,
\]
where $\lambda = 1/\sqrt{D}$. This ODE has solution
\[
v_1 = \frac{\sigma}{2D}K_0(\lambda r) + \frac{\sigma K_1(\lambda)}{2D I_1(\lambda)}I_0(\lambda r),
\]
where $I_0$ and $K_0$ are modified Bessel functions of the first and second kind, respectively. As $r \to 0$, using [1] for the asymptotic expansion of $K_0$, $v_1$ becomes
\[
v_1 \sim \frac{\sigma}{2D}\big(-\log(\lambda r) + \log 2 - \gamma\big) + \frac{\sigma K_1(\lambda)}{2D I_1(\lambda)} = -\frac{\sigma}{2D}\log r + C_1,
\]
where $\gamma$ is the Euler-Mascheroni constant. We can see that $C_1$ must be
\[
C_1 = \frac{\sigma}{2D}\Big(\log 2 - \log\lambda - \gamma + \frac{K_1(\lambda)}{I_1(\lambda)}\Big). \tag{3.19}
\]
Finally, since $v = \delta^2 v_1 + \ldots$, the solution to equation (3.15) is
\[
v = \delta^2\Big(\frac{\sigma}{2D}K_0(\lambda|x|) + \frac{\sigma K_1(\lambda)}{2D I_1(\lambda)}I_0(\lambda|x|)\Big) + \ldots \tag{3.20}
\]
As suspected, due to the symmetry of the domain, the solution $v$, to $O(\delta^2)$, has no dependence on $\theta$ and thus is independent of $s$. This means the derivatives with respect to $s$ in (3.14) will be 0 and there is no movement.

As a check, we verify our asymptotic results numerically. For this choice of domain $\Omega$ and interface $\Gamma$, the solution can be found exactly by expressing equation (3.15) in polar coordinates, using radial symmetry, and imposing continuity and differentiability on the interface. The solution is
\[
v = \begin{cases} \sigma + A\,I_0(\lambda|x|) & |x| \leq \delta \\ B\,I_0(\lambda|x|) + C\,K_0(\lambda|x|) & \delta < |x| \leq 1 \end{cases}, \tag{3.21}
\]
with $A$, $B$, and $C$ given as
\[
A = \frac{C\big(K_0(\lambda\delta)I_1(\lambda) + K_1(\lambda)I_0(\lambda\delta)\big) - \sigma I_1(\lambda)}{I_1(\lambda)I_0(\lambda\delta)}; \qquad
B = C\,\frac{K_1(\lambda)}{I_1(\lambda)}; \qquad
C = \frac{I_1(\lambda\delta)\sigma}{\mathcal{W}\big(K_0(\lambda\delta), I_0(\lambda\delta)\big)},
\]
where $\mathcal{W}(a, b) = ab' - ba'$ is the Wronskian.

The two figures below compare the asymptotic and exact solutions. We chose $\sigma = 1$ and $D = 1$ (so that $\lambda = 1$). Figure 3.2 compares (3.21) to (3.20) for $0 < r \leq 1$. In figure 3.3, we compare the exact solution (3.21) and the inner asymptotic solution given in (3.17) at $r = 0$ for changing values of $\delta$. In figure 3.3 we plot against $1 - \delta$.
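The Bessel-function solution for $v_1$ can also be checked with scipy: the no-flux condition at $r = 1$ holds because $dK_0/dz = -K_1$ and $dI_0/dz = I_1$, and the behavior as $r \to 0$ reproduces the constant $C_1$ of (3.19). This is a sketch of a check, not part of the thesis; it uses $\sigma = D = 1$ as in the figures.

```python
import numpy as np
from scipy.special import i0, i1, k0, k1

sigma, D = 1.0, 1.0
lam = 1.0 / np.sqrt(D)
c = sigma * k1(lam) / (2.0 * D * i1(lam))

def v1(r):
    return (sigma / (2.0 * D)) * k0(lam * r) + c * i0(lam * r)

def dv1(r):                               # dK0/dz = -K1, dI0/dz = I1
    return (sigma / (2.0 * D)) * lam * (-k1(lam * r)) + c * lam * i1(lam * r)

bc = dv1(1.0)                             # no-flux wall condition, ~0
gammaE = 0.5772156649015329               # Euler-Mascheroni constant
C1 = (sigma / (2.0 * D)) * (np.log(2.0) - np.log(lam) - gammaE) + c
sing = v1(1e-6) + (sigma / (2.0 * D)) * np.log(1e-6) - C1
print(bc, sing)                           # both ~0
```

The second printed quantity confirms that $v_1 + \frac{\sigma}{2D}\log r \to C_1$ as $r \to 0$, which is the matching condition used to determine (3.19).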
Figure 3.2: Asymptotic solution plotted with the exact solution for $0 < r \le 1$. Here, $\delta = 0.05$. Everywhere except very close to $r = 0$, the two are in almost complete agreement.

3.3.2 Circular Domain with Perturbed Circular Interface

In this example we look at the case where the interface is a near-annulus. Once again, $\Omega$ is the unit disk and $\Gamma$ at time $t = 0$ is the curve
\[ r = r_0 + \delta h(\theta), \qquad h(\theta + 2\pi) = h(\theta), \]
for $0 < r_0 \le 1$ and $0 < \delta \ll 1$.

The question to ask is how $\Gamma$ evolves for near-circular initial interfaces. Since we do not assume the interfaces themselves are small, we do not have an inner variable, but instead two comparably sized solutions on which we impose continuity and differentiability where they meet.

Figure 3.3: Asymptotic solution plotted with the exact solution at $r = 0$. Here, $\delta$ ranges from $0.5$ to $0.01$, and we plot against $1 - \delta$. As $\delta \to 0$ the approximation becomes more accurate, but it is surprisingly accurate even for fairly large values of $\delta$.

The problem to solve is
\[ D\Delta v^+ - v^+ = 0, \quad r_0 + \delta h(\theta) < r \le 1; \qquad D\Delta v^- - v^- = -\sigma, \quad 0 < r \le r_0 + \delta h(\theta), \]
with $v$ and $\partial v/\partial n$ continuous across $r = r_0 + \delta h(\theta)$; $v$ regular as $r \to 0$; and $v_r = 0$ at $r = 1$. Here $v^+$ indicates the solution in the outer region and $v^-$ the solution in the inner region.

The first thing we will do is unravel the inward normal for this interface. In polar coordinates, the gradient and the level surface $\phi = 0$ are
Specific Domain Choices∇v = vrr +1rvθθ; φ = r − (r0 + δh(θ)).As a result,∂v∂n= −∇v · ∇φ‖∇φ‖ = −vrφr + 1r2 vθφθ√φ2r +φ2θr2.At this point we can substitute in φr = 1, φθ = −δh′(θ), and r = r0 + δh(θ)is φ = 0 to derive∂v∂n= − vr − ωvθ√1 + δh′(θ)ω, (3.22)with ω given asω = δh′(θ)(r0 + δh(θ))2.Now we will do a perturbation calculation by starting with∆v+ − 1Dv+ = 0, r0 + δh(θ) < r ≤ 1,v+r = 0, r = 1,(3.23)∆v− − 1Dv− = 0, 0 < r ≤ r0 + δh(θ),v− non-singular as r → 0,(3.24)and we have the transmission and continuity conditions on r = r0 + δh(θ)v+ = v−; ∂v+∂n= ∂v−∂n, (3.25)with the normal derivative defined as in (3.22). The goal is to calculateA0(θ) = v+∣∣Γ, A1(θ) = ∂nv+∣∣Γ with Γ = r0 + δh(θ) for the velocity inequation (3.14).The inner and outer equations are expanded in polar coordinates asv+ = v+0 + δv+1 + δ2v+2 + . . . ; v− = v−0 + δv−1 + δ2v−2 + . . . ,and we use523.3. Specific Domain Choicesv+(r0 + δh(θ), θ) = v+(r0) + δh(θ)v+r (r0) +δ2h(θ)22 v+rr(r0) + . . . ,to gives us our radially symmetric leading order problem∆v+0 −1Dv+0 = 0, r0 < r ≤ 1; ∆v−0 −1Dv−0 = −σD, 0 < r ≤ r0,v+0 = v−0 , r = r0; ∂rv+0 = ∂rv−0 , r = r0,v−0 non-singular as r → 0; ∂rv+0 = 0, r = 1.This is precisely the problem specified in (3.15), which was solved exactlyin section 3.3.1. Written compactly, the solution isv+0 (r) = C0K0(λr) + K1(λ)I1(λ) I0(λr)K0(λr0) + K1(λ)I1(λ) I0(λr0), r0 < r < 1,v−0 (r) = σ + (C0 − σ)I0(λr)I0(λr0), 0 < r < r0,(3.26)where λ = 1/√D, K0 and I0 are once again modified Bessel Functions andC0 comes from enforcing differentiability at r = r0. To determine C0, wewill define τ1 and τ2 asτ1(r) = K0(λr) +K1(λ)I1(λ)I0(λr); τ2(r) = I0(λr), (3.27)which gives usv+0 (r) = C0τ1(r)τ1(r0); v−0 (r) = σ + (C0 − σ)τ2(r)τ2(r0).Imposing differentiability on r = r0, we calculate thatC0(τ ′1(r0)τ1(r0)− τ′2(r0)τ2(r0))= −στ′2(r0)τ2(r0).If we multiply both sides of the above equation by τ1(r0)τ2(r0) we obtainC0W(τ2, τ1) = −στ ′2(r0)τ1(r0),533.3. 
where $\mathcal{W}(\tau_2, \tau_1)$ is the Wronskian of $\tau_2$ and $\tau_1$. It is not hard to work out that $\mathcal{W}(\tau_2, \tau_1) = \mathcal{W}(I_0(\lambda r), K_0(\lambda r))$ and, since $\mathcal{W}(I_0(z), K_0(z)) = -1/z$ by Abel's identity, we determine that
\[ C_0 = r_0\sigma\,\tau_2'(r_0)\tau_1(r_0), \tag{3.28} \]
with $\tau_1$ and $\tau_2$ given in (3.27).

This tells us that the leading-order approximation of $A_0$ is
\[ A_0 = v_0^+\big|_{\Gamma} + O(\delta) = C_0, \tag{3.29} \]
and we calculate $\partial_r v_0^-(r_0)$ to determine $A_1$ as
\[ A_1 = \partial_r v_0^+(r_0) = -\partial_r v_0^-(r_0) = \lambda(\sigma - C_0)\frac{I_0'(\lambda r_0)}{I_0(\lambda r_0)} + O(\delta), \tag{3.30} \]
with $C_0$ defined as in (3.28).

Notice that $A_1$ is independent of $\theta$ and, as a result, independent of arc length $s$ along $\Gamma$. Thus $A_1$ can be used in the denominator of the surface diffusion law in equation (3.14). We must show that $A_1 > 0$ to ensure we have the correct sign of the velocity. Note that this was conjectured in section 3.1, conjecture 3.1; in this specific case we can make asymptotic and numerical progress. Substituting $C_0$ from (3.28) into (3.30), we are left with
\[ A_1 = \lambda\sigma\bigl(1 - r_0\tau_2'(r_0)\tau_1(r_0)\bigr)\frac{I_0'(\lambda r_0)}{I_0(\lambda r_0)}. \tag{3.31} \]
Since $I_0(z) > 0$ and $I_0'(z) > 0$ for all $z > 0$, we only need to focus on the term in the parentheses above. Now, consider $\beta$ given by
\[ \beta(r_0) = r_0\,\tau_2'(r_0)\tau_1(r_0). \tag{3.32} \]
Determining the sign of $A_1$ reduces to showing that $\beta < 1$ for $0 < r_0 < 1$.

First we consider the case $r_0 \to 0$. Here $\tau_1(r_0) = O(-\log(r_0))$ and $\tau_2'(r_0) = O(r_0)$, due to the fact that, as $z \to 0$,
\[ K_0(z) \sim -\Bigl[\log\Bigl(\frac{z}{2}\Bigr) + \gamma\Bigr]\Bigl(1 + \frac{z^2}{4} + \dots\Bigr); \qquad I_0(z) \sim 1 + \frac{z^2}{4} + \dots, \]
which implies $\beta = O\bigl(r_0^2(-\log(r_0))\bigr)$ and thus $\beta \to 0$ as $r_0 \to 0$. Now, when $r_0 = 1$, we have
\[ \beta(1) = \lambda\bigl[K_0(\lambda)I_0'(\lambda) - K_0'(\lambda)I_0(\lambda)\bigr] = -\lambda\,\mathcal{W}(I_0, K_0) = 1, \]
by Abel's identity.

What is left to prove is that $\beta$ is monotone increasing for $0 < r_0 < 1$. For now all we can do is conjecture that this is the case and provide numerical evidence for a few values of $\lambda$ in figure 3.4.

Figure 3.4: A plot of $\beta$ as given in (3.32), showing monotonicity for a few values of $\lambda$ ($\lambda = 0.5, 3, 10$).
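The bound $0 < \beta < 1$ and the monotonicity conjectured in figure 3.4 can also be checked directly. A minimal numerical sketch (assuming `scipy`), evaluating $\beta(r_0) = r_0\tau_2'(r_0)\tau_1(r_0)$ with $\tau_1$, $\tau_2$ from (3.27):

```python
import numpy as np
from scipy.special import i0, i1, k0, k1

def beta(r0, lam):
    """beta(r0) = r0 * tau2'(r0) * tau1(r0), with tau1, tau2 from (3.27)."""
    tau1 = k0(lam * r0) + k1(lam) / i1(lam) * i0(lam * r0)
    dtau2 = lam * i1(lam * r0)       # d/dr I0(lam*r) = lam*I1(lam*r)
    return r0 * dtau2 * tau1

r0 = np.linspace(1e-3, 1.0, 2000)
for lam in (0.5, 3.0, 10.0):         # the values plotted in figure 3.4
    b = beta(r0, lam)
    assert np.all(b > 0) and np.all(b <= 1 + 1e-9)   # 0 < beta <= 1
    assert np.all(np.diff(b) > 0)                    # monotone increasing

print(round(float(beta(1.0, 3.0)), 8))  # -> 1.0, consistent with Abel's identity
```

This is numerical evidence only, not a proof of monotonicity; the grid check mirrors what figure 3.4 shows graphically.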
We now make progress to determine the $O(\delta)$ and $O(\delta^2)$ contributions. First we take a closer look at our transmission and continuity conditions (3.25). Recall
\[ v^+(r_0 + \delta h(\theta), \theta) = v^+\big|_0 + \delta h(\theta)\,\partial_r v^+\big|_0 + \frac{\delta^2 h(\theta)^2}{2}\,\partial_{rr} v^+\big|_0 + \dots, \]
where $f\big|_0 = f(r_0, \theta)$. We expand $v^+$ as
\[ v^+ = v_0^+ + \delta v_1^+ + \delta^2 v_2^+ + \dots \]
Combining these two equations we arrive at
\[ v^+(r_0 + \delta h, \theta) = v_0^+\big|_0 + \delta\bigl(v_1^+\big|_0 + h\,\partial_r v_0^+\big|_0\bigr) + \delta^2\Bigl(v_2^+\big|_0 + h\,\partial_r v_1^+\big|_0 + \frac{h^2}{2}\,\partial_{rr} v_0^+\big|_0\Bigr) + \dots \]
From this equation we can determine the $O(\delta)$ and $O(\delta^2)$ continuity conditions. They are
\[ O(\delta): \quad v_1^+\big|_0 - v_1^-\big|_0 = h\bigl(\partial_r v_0^-\big|_0 - \partial_r v_0^+\big|_0\bigr) = 0, \tag{3.33a} \]
\[ O(\delta^2): \quad v_2^+\big|_0 - v_2^-\big|_0 = h\bigl(\partial_r v_1^-\big|_0 - \partial_r v_1^+\big|_0\bigr) + \frac{h^2}{2}\bigl(\partial_{rr} v_0^-\big|_0 - \partial_{rr} v_0^+\big|_0\bigr), \tag{3.33b} \]
where the $O(\delta)$ equation simplifies in that way because we already know $\partial_r v_0^+\big|_0 = \partial_r v_0^-\big|_0$ from the leading-order problem. Next we examine the second condition in (3.25) using (3.22). Since the denominator in (3.22) is independent of $v$, it plays no role in the transmission condition; we can therefore ignore it and label it
\[ \mathcal{D} = \sqrt{1 + \frac{\delta^2 h'^2}{(r_0 + \delta h)^2}}. \]
We can also expand $1/(r_0 + \delta h)^2$ as
\[ \frac{1}{(r_0 + \delta h)^2} \sim \frac{1}{r_0^2}\Bigl(1 + \frac{\delta h}{r_0}\Bigr)^{-2} \sim \frac{1}{r_0^2}\Bigl(1 - \frac{2\delta h}{r_0} + \dots\Bigr). \]
Using the above, (3.22) gives
\[ \frac{\partial v}{\partial n} = -\frac{1}{\mathcal{D}}\Bigl[v_r^+(r_0 + \delta h, \theta) - \frac{\delta h'}{r_0^2}\Bigl(1 - \frac{2\delta h}{r_0} + \dots\Bigr)v_\theta^+(r_0 + \delta h, \theta)\Bigr], \tag{3.34} \]
where $v_\theta^+$ (a partial derivative, not a total derivative) can be expanded as
\[ v_\theta^+ = v_\theta^+(r_0, \theta) + \delta h\, v_{\theta r}^+(r_0, \theta) + \dots \]
We can now substitute this and the fact that $v_0^+$ is radially symmetric into (3.34), while also expanding $v^+ = v_0^+ + \delta v_1^+ + \delta^2 v_2^+ + \dots$, to derive
\[ \frac{\partial v}{\partial n} = -\frac{1}{\mathcal{D}}\Bigl[\partial_r v_0^+\big|_0 + \delta\bigl(\partial_r v_1^+\big|_0 + h\,\partial_{rr} v_0^+\big|_0\bigr) + \delta^2\Bigl(\partial_r v_2^+\big|_0 + h\,\partial_{rr} v_1^+\big|_0 + \frac{h^2}{2}\,\partial_{rrr} v_0^+\big|_0 - \frac{h'}{r_0^2}\,\partial_\theta v_1^+\big|_0\Bigr)\Bigr]. \]
With the transmission condition given as $v_n^+ = v_n^-$, we can see that, to leading order, $\partial_r v_0^+\big|_0 = \partial_r v_0^-\big|_0$ (as expected).
The O(δ) transmissionequation is∂rv+1∣∣0 − ∂rv−1∣∣0 = h(∂rrv−0∣∣0 − ∂rrv+0∣∣0), (3.35)and O(δ2) gives∂rv+2∣∣0 − ∂rv−2∣∣0 = h(∂rrv−1∣∣0 − ∂rrv+1∣∣0) +h22 (∂rrrv−0∣∣0 − ∂rrrv+0∣∣0)+ h′r20(∂θv+1∣∣0 − ∂θv−1∣∣0).(3.36)We combine (3.23), (3.24), (3.33a), and (3.35) to obtain theO(δ) problem∆v+1 −1Dv+1 = 0, r0 < r ≤ 1; ∆v−1 −1Dv−1 = 0, 0 < r ≤ r0,∂rv+1 = 0, r = 1; v−1 non-singular as r → 0,v+1∣∣0 = v−1∣∣0; ∂rv+1∣∣0 − ∂rv−1∣∣0 = h(∂rrv−0∣∣0 − ∂rrv+0∣∣0).The O(δ2) problem can be found by combining (3.23), (3.24), (3.33b),and (3.36) to give us∆v+2 −1Dv+2 = 0, r0 < r ≤ 1; ∆v−2 −1Dv−2 = 0, 0 < r ≤ r0,∂rv+2 = 0, r = 1; v−2 non-singular as r → 0,v+2∣∣0 − v−2∣∣0 = −h22 (∂rrv−0∣∣0 − ∂rrv+0∣∣0),with transmission condition∂rv+2∣∣0 − ∂rv−2∣∣0 = h(∂rrv−1∣∣0 − ∂rrv+1∣∣0) +h22 (∂rrrv−0∣∣0 − ∂rrrv+0∣∣0)+ h′r20(∂θv+1∣∣0 − ∂θv−1∣∣0).573.3. Specific Domain ChoicesThe main goal is to find terms of A0 that include θ and breaks the radialsymmetry. We haveA0(θ) = v+∣∣∂Γ = v+0 (r0) + δ(∂rv+0 (r0) + v+1 (r0, θ)) +O(δ2). (3.37)which upon taking a derivative with respect to θ givesA′0(θ) = δ(∂rv+0 (r0)h′(θ) + ∂θv+1 (r0, θ)) + . . .This shows that our goal is to calculate ∂θv+1 (r0, θ) and despite developingthe equations for the O(δ2) correction term, it may, in principle, not benecessary.We return our focus to the O(δ) equation, which can be rewritten as∆v+1 − λ2v+1 = 0, r0 < r ≤ 1; ∆v−1 −1Dv−1 = 0, 0 < r ≤ r0,∂rv+1 = 0, r = 1; v−1 non-singular, as r → 0,v+1∣∣0 = v−1∣∣0; ∂rv+1∣∣0 − ∂rv−1∣∣0 = hγ0,(3.38)with γ0 = ∂rrv−0 (r0)−∂rrv+0 (r0). 
In order to solve this PDE we first assumethat h(θ) has a complex Fourier series expansion; that is,h(θ) =∞∑n=−∞hneinθ; hn =12pi∫ 2pi0h(θ)e−inθ dθ.Since the problem (3.38) is linear and the only inhomogeneous term is fromthe jump in the right hand side, we can write the solution asv1 = γ0∞∑n=−∞hnFneinθ, (3.39)where Fn solvesF ′′n +1rF ′n −n2r2Fn −1DFn = 0, 0 < r ≤ 1; F ′n(1) = 0,Fn non-singular as r → 0; F ′n(r+0 )− F ′n(r−0 ) = 1, r = r0.Fn is very similar in form to the solution for v0 given in (3.26) and onceagain involves modified Bessel functions. The solution is583.3. Specific Domain ChoicesFn =τˆnKn(λr)−K′n(λ)I′n(λ)In(λr)Kn(λr0)−K′n(λ)I′n(λ)In(λr0)r0 < r ≤ 1τˆnIn(λr)In(λr0)0 < r ≤ r0,where τˆn = τˆn(r0) is determined by imposing F ′n(r+0 ) − F ′n(r−0 ) = 1. Toimpose this condition, let us first define τ1(r, n) and τ2(r, n) in an analogousfashion to τ1 and τ2 from (3.27). We haveτ1(r, n) = Kn(λr)−K ′n(λ)I ′n(λ)In(λr); τ2(r, n) = In(λr).The derivative jump condition for Fn is thenF ′n(r+0 )− F ′n(r−0 ) = τˆn(τ ′1(r0, n)τ1(r0, n)− τ′2(r0, n)τ2(r0, n))= 1.Multiplying both sides by τ1(r0, n) and τ2(r0, n), we obtainτˆnW(τ2(r0, n), τ1(r0, n)) = τ1(r0, n)τ2(r0, n).Once again, it is not hard to see thatW(τ2(r0, n), τ1(r0, n)) =W(In(λr0),K0(λr0))and thus again using Abel’s identity we haveτˆn = −r0τ1(r0, n)τ2(r0, n), (3.40)which fully determines the solution for v1 in (3.39). With this informationin hand we can now analyze how a perturbation to a circle evolves in time.Recall the velocity equationη˙ = −(∂sA0A1)s, (3.41)and since A1 is independent of s, we can write this asη˙ ∼ − 1A1∂ssA0 +O(δ).To translate the arc length derivatives in polar coordinates we use the factthatdsdθ=√(r0 + δh)2 + δ2h′2 ∼ r0 + δh+O(δ2).Thus,593.3. Specific Domain Choices∂θA0 = ∂sA0dsdθ; ∂θθA0 = ∂ssA0(dsdθ)2+ ∂sA0d2sdθ2.But, since ds/dθ ∼ r0 + δh, we have that d2s/dθ2 ∼ O(δ). 
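As a quick consistency check on (3.40) (a sketch assuming `scipy`; the values of $\lambda$, $r_0$, and $n$ are arbitrary sample choices), we can verify numerically that this $\hat{\tau}_n$ produces the required unit jump $F_n'(r_0^+) - F_n'(r_0^-) = 1$:

```python
import numpy as np
from scipy.special import iv, kv, ivp, kvp

lam, r0, n = 1.5, 0.6, 3   # arbitrary sample values (hypothetical choices)

ratio = kvp(n, lam) / ivp(n, lam)              # K_n'(lam) / I_n'(lam)
tau1  = lambda r: kv(n, lam * r) - ratio * iv(n, lam * r)
tau2  = lambda r: iv(n, lam * r)
dtau1 = lambda r: lam * (kvp(n, lam * r) - ratio * ivp(n, lam * r))
dtau2 = lambda r: lam * ivp(n, lam * r)

tau_hat = -r0 * tau1(r0) * tau2(r0)            # equation (3.40)

# The jump F_n'(r0+) - F_n'(r0-) implied by this tau_hat should equal 1
jump = tau_hat * (dtau1(r0) / tau1(r0) - dtau2(r0) / tau2(r0))
print(abs(jump - 1.0) < 1e-10)  # -> True
```

The check exercises the same Wronskian cancellation that Abel's identity guarantees analytically.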
This gives us
\[ \partial_{\theta\theta} A_0 = \partial_{ss} A_0\, r_0^2 + O(\delta)\,\partial_s A_0. \]
This can be substituted into (3.41), along with the expression for $A_0(\theta)$ given in (3.37), to obtain
\[ \dot{\eta} \sim -\frac{\delta}{A_1 r_0^2}\bigl(\partial_r v_0^+(r_0)\, h''(\theta) + \partial_{\theta\theta} v_1^+(r_0, \theta)\bigr). \]
Recall that at this order $A_1 = -\partial_r v_0^-(r_0)$ and that $v_0^+(r_0) = v_0^-(r_0)$. Using this, we can simplify the above expression to
\[ \dot{\eta} \sim \frac{\delta}{r_0^2}\, h''(\theta) - \frac{\delta}{A_1 r_0^2}\,\partial_{\theta\theta} v_1^+(r_0, \theta). \]
Substituting in $v_1$ from (3.39) and the fact that $F_n(r_0) = \hat{\tau}_n$, we now have
\[ \dot{\eta} \sim -\frac{\delta}{r_0^2}\sum_{n=-\infty}^{\infty} n^2 h_n e^{in\theta} + \frac{\delta\gamma_0}{A_1 r_0^2}\sum_{n=-\infty}^{\infty} n^2 h_n \hat{\tau}_n e^{in\theta} = \frac{\delta}{r_0^2}\sum_{n=-\infty}^{\infty}\Bigl(\frac{\hat{\tau}_n\gamma_0}{A_1} - 1\Bigr) n^2 h_n e^{in\theta}, \]
where $\gamma_0 = \partial_{rr} v_0^-(r_0) - \partial_{rr} v_0^+(r_0)$. This can easily be calculated from the ODEs that $v_0$ satisfies, namely
\[ \partial_{rr} v_0^- + \frac{1}{r}\partial_r v_0^- - \frac{1}{D} v_0^- = -\frac{\sigma}{D}, \qquad \partial_{rr} v_0^+ + \frac{1}{r}\partial_r v_0^+ - \frac{1}{D} v_0^+ = 0. \]
Taking the limits as $r \to r_0^+$ and $r \to r_0^-$ and then subtracting the $v_0^+$ equation from the $v_0^-$ one (noting the use of continuity and differentiability), we arrive at
\[ \gamma_0 = \partial_{rr} v_0^- - \partial_{rr} v_0^+ = -\frac{\sigma}{D}. \]
After substituting in $A_1$ from (3.31) and $\gamma_0$ from above, the velocity becomes
\[ \dot{\eta} \sim -\frac{\delta}{r_0^2}\sum_{n=-\infty}^{\infty}\Bigl(\frac{\lambda\hat{\tau}_n I_0(\lambda r_0)}{(1-\beta)I_0'(\lambda r_0)} + 1\Bigr) n^2 h_n e^{in\theta}. \]
Note that the velocity is independent of $\sigma$. We can substitute in for $\hat{\tau}_n$ from (3.40) to isolate the terms that depend upon $n$. The velocity becomes
\[ \dot{\eta} \sim -\frac{\delta}{r_0^2}\sum_{n=-\infty}^{\infty}\Bigl(1 - \frac{\lambda r_0 I_0(\lambda r_0)\, C_n}{(1-\beta)I_0'(\lambda r_0)}\Bigr) n^2 h_n e^{in\theta}, \]
with $C_n$ given by
\[ C_n = I_n(\lambda r_0) K_n(\lambda r_0) - \frac{K_n'(\lambda)}{I_n'(\lambda)} I_n(\lambda r_0)^2. \]
We remark that $\beta$ is simply
\[ \beta(r_0) = r_0\lambda I_0'(\lambda r_0)\Bigl[K_0(\lambda r_0) - \frac{K_0'(\lambda)}{I_0'(\lambda)} I_0(\lambda r_0)\Bigr], \]
and that we showed numerical evidence that $0 < \beta < 1$ for all $r_0$ in $0 < r_0 < 1$ and all $\lambda > 0$.

To analyze how a perturbed initial interface evolves in time we look to a specific example with $h(\theta) = \cos(n\theta)$ for some choice of $n$. Thus we have $h_{\pm n} = 1/2$ and all other coefficients are zero. The only terms that remain give
\[ \dot{\eta} \sim -\frac{\delta}{r_0^2}\Bigl(1 - \frac{\lambda r_0 I_0(\lambda r_0)}{(1-\beta)I_0'(\lambda r_0)}\, C_n\Bigr) n^2 \cos(n\theta). \tag{3.42} \]
Notice that the negative sign in front is trying to make $\dot{\eta} < 0$ when $\cos(n\theta) > 0$, which means a bulge flattens out. The second term, due to $C_n$, has an unknown sign which we will now investigate further. To do this we consider a few limiting cases.

Case 1: Suppose $\lambda \to \infty$, i.e.
D → 0 and r0 D. The following identitiescan be found in [1]. As z →∞, we haveIn(z) ∼ez√2piz(1− 4n2 − 18z); Kn(z) ∼√pi2z e−z(1 + 4n2 − 18z),I ′n(z) ∼ez√2piz(1− 4n2 + 38z); K ′n(z) ∼√pi2z e−z(1 + 4n2 + 38z),In(z)Kn(z) ∼12z(1− 4n2 − 18z2),(3.43)which tells us that613.3. Specific Domain ChoicesK ′n(z)I ′n(z)∼ −pie−2z(8z + 4n2 + 38z − 4n2 − 3)∼ −pie−2z(1 + 4n2 + 34z),and thusK ′n(λ)I ′n(λ)In(λr0)2 ∼ −pie−2λ(1 + 4n2 + 34λ)e2λ2piλ(1− 4n2 − 18λ)2∼ − 12λr0e−2λ(1−r0)(1 +O( 1λ)).(3.44)We can combine (3.43) and (3.44) together to determine Cn asCn ∼12λr0(1− 4n2 − 18λ2r20)+O(e−2λ(1−r0)λ).The second term is exponential small which means it can safely be ignored.The conclusion is thatCn ∼12λr0(1 +O(1λ2)), as λ→∞.What remains is to estimate β and I ′0(λr0)/I0(λr0).β(r0) = r0λ[I ′0(λr0)K0(λr0)−K ′0(λ)I ′0(λ)I0(λr0)I ′0(λr0)]∼ r0λ[eλr0√2piλr0√pi2λr0e−λr0(1− 38λr0)(1− 18λr0)+O(e−2λ(1−r0)λ)]∼ r0λ[12λr0(1− 12λr0)+O(e−2λ(1−r0)λ)]∼ 12[1− 12λr0], as λ→∞.For I0(λr0)/I ′0(λr0) we haveI0(λr0)I ′0(λr0)∼eλr0√2piλr0(1 + 18λr0)eλr0√2piλr0(1− 38λr0) ∼(1 + 18λr0)(1− 38λr0) ∼ 1 + 12λr0,which leads us to our end result for this case623.3. Specific Domain Choicesλr0I0(λr0)(1− β)I ′0(λr0)Cn ∼λr0[1− 12(1− 12λr0)](1 + 12λr0)( 12λr0)(1 +O(1λ2))∼(1 + 12λr0)2(12 + 14λr0)(1 +O(1λ2))∼ 1 +O(1λ2),and therefore1− λr0I0(λr0)(1− β)I ′0(λr0)Cn ∼ O(1λ2),as λ→∞.As λ → ∞ we see that the velocity η˙ 1 but the sign of the O(1/λ2)term has yet to be determined. What is clear is that if there is any instabilityit will be very weak.Case 2: Suppose n is fixed and λ→ 0 which corresponds to large diffusiveforces. For n ≥ 1 and z → 0 we have the following identitiesIn(z) ∼1n!(z2)n; Kn(z) ∼(n− 1)!2(z2)−n,I ′n(z) ∼12(n− 1)!(z2)n−1; K ′n(z) ∼ −n!4(z2)−(n+1),which tells usIn(λ)Kn(λ) ∼12n.Now,K ′n(λ)I ′n(λ)In(λr0)2 ∼ −n!4(λ2)−(n+1)12(n−1)!(λ2)n−11(n!)2(λr02)2n∼ −r2n02nand thus for λ→ 0 we have Cn > 0 andCn ∼12n(1− r2n0 ).To finish off this case we again need to estimate β and I0(λr0)/I ′0(λr0)as λ→ 0. 
For the Bessel functions we have633.3. Specific Domain ChoicesI0(λ)I ′0(λ)∼ 2λ,and for ββ(r0) = r0λ[I ′0(λr0)K0(λr0)−K ′0(λ)I ′0(λ)I0(λr0)I ′0(λr0)]∼ r0λ[λr02 O(log(λ))−1λλ2λr02]∼ r0λ[O(λ log(λ)) + r0λ]∼ r20.From this we can conclude thatλr0I0(λr0)(1− β)I ′0(λr0)Cn ∼λr01− r0( 2λr0)(1− r2n02n)∼ 1− r2n0n(1− r20).Therefore for (3.42) in this limit, we obtainη˙ ∼ − δr0(1− 1− r2n0n(1− r20))n2 cos(nθ). (3.45)We conclude that as D → ∞, perturbations will stabilize. To see this, wedefinef(k) = n− 1− kn(1− k) ,where k = r20. Since 0 < r0 < 1, the second term in f(k) can be representedas1− kn(1− k) = (1 + k + . . .+ kn−1).Now, consider f(0) = n− (1 + 0 + . . .+ 0) > 0 if n > 1. We also have thatf(1) = n−(1+. . .+1) = n−n = 0. Now, since f ′(k) is monotone decreasingon 0 < k < 1, we have that f(k) cannot have a zero in 0 < k < 1. Thismeans the velocity in (3.45) is negative when cos(nθ) > 0, i.e. a perturbationreturns to a circle.Case 3: Now, we consider the case where r0 1 but λ = O(1). We haveβ(r0) = r0λ[I ′0(λr0)K0(λr0)−K ′0(λ)I ′0(λ)I0(λr0)I ′0(λr0)].643.3. Specific Domain ChoicesNow, as r0 → 0, we haveB(r0) ∼ r0λ[O(r0 log(r0)) +O(r0)]→ 0, as r0 → 0.Thus,λr0I0(λr0)(1− β)I ′0(λr0)Cn ∼2λr0λr0Cn ∼ 2Cn, as r0 → 0,since I0(λr0) ∼ 1 and I ′0(λr0) ∼ λr0 in this limit. What is left is to computeCn. For n ≥ 1, as r0 → 0, we haveIn(λr0)Kn(λr0) ∼12n.Also,K ′n(λ)I ′n(λ)In(λr0)2 = O(rn0 ), as r0 → 0.Thus,λr0I0(λr0)(1− β)I ′0(λr0)Cn ∼1n.We conclude that for r0 1, thatη˙ ∼ − δr20(1− 1n)n2 cos(nθ). (3.46)Assuming that r0 O(δ), with r = r0 +δ cos(nθ), this law suggests thata bulge flattens out with velocity proportional to O(δ/r20),Case 4: Finally, we consider the case where λ and r0 are fixed and letn→∞. We want to establish is the interface is stable to a ‘high frequency’perturbation cos(nθ) for n 1.The only term that requires estimation is Cn. 
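Before estimating $C_n$, we pause to note that the Case 2 claim above, that $f(k) = n - (1 - k^n)/(1 - k)$ is positive on $0 < k < 1$ for $n > 1$ (so the velocity (3.45) is stabilizing), can be verified directly. A minimal pure-Python check:

```python
def f(k, n):
    """f(k) = n - (1 - k**n)/(1 - k), written via the geometric sum
    1 + k + ... + k**(n-1) to avoid the removable singularity at k = 1."""
    return n - sum(k**j for j in range(n))

for n in range(2, 8):
    assert all(f(i / 1000, n) > 0 for i in range(1, 1000))  # f > 0 on (0, 1)

print(f(0.0, 3), f(1.0, 3))  # -> 2.0 0.0 (endpoint values f(0) = n - 1, f(1) = 0)
```

This only samples a grid, of course; the monotonicity argument in the text is what rules out an interior zero.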
Recall that $C_n$ is given as
\[ C_n = I_n(\lambda r_0) K_n(\lambda r_0) - \frac{K_n'(\lambda)}{I_n'(\lambda)} I_n(\lambda r_0)^2. \]
For $n \to \infty$ we use
\[ I_n(z) \sim \frac{1}{\sqrt{2\pi n}}\Bigl(\frac{ez}{2n}\Bigr)^n; \qquad K_n(z) \sim \sqrt{\frac{\pi}{2n}}\Bigl(\frac{ez}{2n}\Bigr)^{-n}, \]
and therefore
\[ I_n(z) K_n(z) \sim \frac{1}{2n}, \quad \text{as } n \to \infty. \]
For the derivative terms we have
\[ I_n'(z) \sim \frac{n}{\sqrt{2\pi n}}\Bigl(\frac{ez}{2n}\Bigr)^{n-1}\frac{e}{2n} = \frac{n}{z\sqrt{2\pi n}}\Bigl(\frac{ez}{2n}\Bigr)^n, \qquad K_n'(z) \sim -n\sqrt{\frac{\pi}{2n}}\Bigl(\frac{ez}{2n}\Bigr)^{-(n+1)}\frac{e}{2n} = -\frac{n}{z}\sqrt{\frac{\pi}{2n}}\Bigl(\frac{ez}{2n}\Bigr)^{-n}. \]
This gives us
\[ \frac{K_n'(\lambda)}{I_n'(\lambda)} \sim -\pi\Bigl(\frac{e\lambda}{2n}\Bigr)^{-2n}, \]
so
\[ \frac{K_n'(\lambda)}{I_n'(\lambda)} I_n(\lambda r_0)^2 \sim -\frac{1}{2n}\Bigl(\frac{e\lambda}{2n}\Bigr)^{-2n}\Bigl(\frac{e\lambda r_0}{2n}\Bigr)^{2n} = -\frac{r_0^{2n}}{2n}. \]
Combining these results we can see that for $n \gg 1$, we have
\[ C_n \sim \frac{1 - r_0^{2n}}{2n}, \]
but since $0 < r_0 < 1$, we have $C_n \sim O(1/n)$. In the asymptotic limit $n \gg 1$, we conclude that
\[ \dot{\eta} \sim -\frac{\delta}{r_0^2}\Bigl(1 - \frac{\lambda r_0 I_0(\lambda r_0)}{(1-\beta)I_0'(\lambda r_0)}\frac{1}{2n}\Bigr) n^2 \cos(n\theta). \tag{3.47} \]
Since
\[ \dot{\eta} \sim -\frac{\delta}{r_0^2}\Bigl(1 - O\Bigl(\frac{1}{n}\Bigr)\Bigr) n^2 \cos(n\theta), \]
it follows that a bulge flattens out rather quickly.

Remark 3.3.2.1: Consider $G_n(r_0, \lambda)$ given by
\[ G_n(r_0, \lambda) = 1 - \frac{\lambda r_0}{1 - \beta(r_0)}\frac{I_0(\lambda r_0)}{I_0'(\lambda r_0)}\, C_n. \]
Note that $G_n(r_0, \lambda)$ is the term with an undetermined sign that appears in (3.42). In figure 3.5 we plot $G_n(r_0, \lambda)$ versus $r_0$ for a few values of $\lambda$ and $n$. For each pair of $\lambda$ and $n$, $G_n$ crosses the $r_0$-axis at a certain critical value of $r_0$. We believe that this sign change is not an instability but comes from boundary effects: the Neumann boundary conditions on the outer wall create a mirroring effect, attracting the perturbed drop to its image outside the domain.

3.3.3 Many Circular Interfaces

We now consider the case where $\Gamma$ is initially a collection of $N$ disjoint disks of radii $\delta r_i$, $i = 1, \dots, N$, so that
\[ \Omega_\delta = \bigcup_{i=1}^{N}\Omega_{\delta_i}; \qquad \Omega_{\delta_i} = \{\mathbf{x} : |\mathbf{x} - \mathbf{x}_i| \le \delta r_i\}. \]
We then must solve
\[ D\Delta v - v = -\sigma\begin{cases}1 & \mathbf{x} \in \Omega_\delta \\ 0 & \mathbf{x} \in \Omega\setminus\Omega_\delta\end{cases}, \qquad \frac{\partial v}{\partial n} = 0, \quad \mathbf{x} \in \partial\Omega. \tag{3.48} \]
We will assume that $\delta \ll 1$.
In our original analysis we used $\epsilon \ll 1$, so here we need to assume $\epsilon \ll \delta \ll 1$ so that the profile for $u$ across a cross-section of one drop is as in figure 3.6.

Now, from our previous analysis, we know that the normal velocity of $\partial\Omega_{\delta_j}$ (the boundary of the $j$th droplet) is
\[ \dot{\eta}_j = -\Bigl(\frac{\partial_{s_j} V_0}{A_j(s_j)}\Bigr)_{s_j}, \qquad \mathbf{x} \in \partial\Omega_{\delta_j}, \tag{3.49} \]
where $s_j$ is the arc length of the boundary of the $j$th drop and $A_j(s_j)$ is given as
\[ A_j(s_j) = \nabla v\cdot\mathbf{n}\big|_{\partial\Omega_{\delta_j}}; \qquad V_0 = v\big|_{\partial\Omega_{\delta_j}}. \]
For a circular drop of radius $\delta r_j$, we have $s_j = \delta r_j\theta$, where $\theta$ is a local polar coordinate.

We will use strong localized perturbation theory to calculate a higher-order approximation for $v$. We begin by looking at the inner region near the $j$th drop, where we let

Figure 3.5: $G_n(r_0, \lambda)$ for $n = 2, 3, 4, 5$ and $\lambda = 0.2, 1, 5$ on $0 < r_0 < 0.95$ (panels (a)-(d) show $G_2$ through $G_5$). In each plot it is clear that there is a critical $r_0$ value such that $G_n < 0$, and thus $\dot{\eta} > 0$ when $\cos(n\theta) > 0$, signifying growth of the perturbation and an instability.

Figure 3.6: Profile for $u$ across a cross-section of one drop centered at $\mathbf{x}_i$, where the interface has width $O(\epsilon)$ and the drop is of size $O(\delta)$.

\[ V(\mathbf{y}) = v(\mathbf{x}_j + \delta\mathbf{y}), \qquad \mathbf{y} = \frac{\mathbf{x} - \mathbf{x}_j}{\delta}. \]
Then,
\[ \frac{D}{\delta^2}\Delta_{\mathbf{y}} V_j - V_j = -\sigma\begin{cases}1 & |\mathbf{y}| \le r_j \\ 0 & |\mathbf{y}| > r_j\end{cases}. \]
This suggests that we expand $V_j$ as
\[ V_j = \delta^2\log(\delta)\, W_{0,j} + \delta^2 W_{1,j} + \dots \]
We substitute to obtain that $W_{0,j}$ is a constant and that
\[ \Delta_{\mathbf{y}} W_{1,j} = -\frac{\sigma}{D}\begin{cases}1 & |\mathbf{y}| \le r_j \\ 0 & |\mathbf{y}| > r_j\end{cases}. \tag{3.50} \]
To solve this equation we look for a radially symmetric solution and obtain, upon imposing that $W_{1,j}$ is $C^1$ across $\rho = |\mathbf{y}| = r_j$, that
\[ W_{1,j} = \begin{cases}-\dfrac{\sigma}{4D}\rho^2 + C_j + \dfrac{\sigma}{4D} r_j^2 & \rho \le r_j \\[4pt] k_j\log\Bigl(\dfrac{\rho}{r_j}\Bigr) + C_j & \rho > r_j\end{cases}, \tag{3.51} \]
Specific Domain Choiceswhere kj = −σr2j/2D and Cj is an arbitrary constant to be found frommatching to the outer solution. Note that kj is found by imposing differen-tiability across ρ = rj .Remark 3.3.3.1: Suppose that (3.50) was such that Ωδj was an arbitraryclosed region. Then we could not solve as in (3.51) but we could still calcu-late kj and show it is constant. Let Ωj = Ωδj/δ so that Ωj has area that isO(1). The analogous W1,j equation is∆yW1,j = −σD{1 : |y| ∈ Ωj0 : |y| 6∈ Ωj .Now, as |y| → ∞ it still follows that W1,j ∼ kj log(|y|) for some constant kjto be determined. Using the fact that W1,j is C1 we have from the divergencetheorem thatlimR→∞∫BR∆W1,j dy = 2pi limR→∞R∂W1,j∂|y|∣∣∣∣∣|y|=R= 2pikj ,where BR = {y : |y| ≤ R} with R large enough to contain Ωj . However,since ∆W1,j = −σ/D inside Ωj , we also havelimR→∞∫BR∆W1,j dy = −σD|Ωj | = 2pikj ,and thus,kj = −σ|Ωj |2piD ,where |Ωj | is the area of Ωj . Notice that if |Ωj | = pir2 as for a circle, thenkj = −σr2j/2D as derived for equation (3.51). We conclude that the solutionto∆yW1,j = −σD{1 : |y| ∈ Ωj0 : |y| 6∈ Ωj , (3.52)has the far field behavior as |y| → ∞ given byW1,j ∼ kj log(|y|) + Cj + o(1), as y→∞.Here, Cj is an arbitrary constant. We observe that since a constant solves(3.52) then Cj is a constant that is undetermined from the inner problemand will be found by matching to the outer solution.703.3. Specific Domain ChoicesReturning to the circular drop case, recall that the inner expansion isgiven asVj = δ2 log(δ)W0,j + δ2W1,j + . . .In terms of the outer variable ρ = |x − xj |/δ, the far field behavior for theinner expansion isVj ∼ δ2 log(δ)W0,j + δ2[kj log(|x− xj |)− kj log(δ)− kj log(rj) + Cj ] + . . .Rearranging the above equation,Vj ∼ δ2 log(δ)[W0,j−kj ] + δ2[kj log(|x−xj |)−kj log(rj) +Cj ] + . . . , (3.53)with kj = −σr2j/2D.Here, we see that W0,j must beW0,j = kj , j = 1, . . . 
, N.If W0,j was not chosen in this way, the outer solution, a second-order ellipticPDE, would be forced to satisfy a point constraint (v = δ2 log(δ)v∗, v∗(xj) =W0,j − kj), which is ill-posed.It follows that in the outer region we must expand the outer solution asv = δ2v1 + . . .Upon substituting this expansion into (3.48), we obtain that v1 solves∆v1 − λ2v1 = 0, x ∈ Ω \ {x1, . . . ,xN},∂v1∂n= 0, x ∈ ∂Ω,v1 ∼ kj log(|x− xj)| − kj log(rj) + Cj , as x→ xj , for j = 1, . . . , N.(3.54)As before, λ = 1/√D.To solve (3.54), we introduce the reduced-wave Green’s functionGλ(x; xj),which satisfies,∆Gλ − λ2Gλ = −δ(x− xj), x ∈ Ω;∂Gλ∂n= 0, x ∈ ∂Ω.713.3. Specific Domain ChoicesWe can decompose Gλ(x; xj) globally in Ω asGλ(x; xj) = −12pi log(|x− xj |) +Rλ(x; xj), (3.55)for some globally defined C1 function Rλ(x; xj). As x→ xj , we have,Rλ(x; xj) ∼ Rλj +∇Rλ∣∣xj· (x− xj) +O(|x− xj |2 log(|x− xj |)),where Rλj = Rλ(xj ; xj) is the regular part of the reduced-wave Green’sfunction in (3.55). Additionally,∇Rλ∣∣xj= ∇Rλ(x; xj)∣∣x=xj.We can now write v1 globally in Ω by introducing singular Dirac forcesas∆v1 − λ2v1 = 2piN∑i=1kiδ(x− xj), x ∈ Ω;∂v1∂n= 0, x ∈ ∂Ω.The solution to this PDE can be represented as the superpositionv1 = −2piN∑i=1kiGλ(x; xj).Now we expand as x→ xj to obtainv1 ∼ −2pikj[− 12pi log(|x− xj |) +Rλj +∇Rλ∣∣xj· (x− xj)]− 2piN∑i 6=jki[Gλji +∇Gλ∣∣xj· (x− xj)],which can be rearranged intov1 ∼ kj log(|x− xj |)− 2pi[kjRλj +N∑i 6=jkiGλji + kj∇Rλ∣∣xj· (x− xj)+N∑i 6=jki∇Gλ∣∣xj· (x− xj)].723.3. Specific Domain ChoicesThis long expression can be written more compactly asv1 ∼ kj log(|x− xj |)− 2pikjRλj +N∑i 6=jGλji+ aj · (x− xj), (3.56)as x → xj and with v ∼ δ2v1. In this equation, aj is a vector-quantitydefined byaj = −2pikj∇Rλ∣∣xj+N∑i 6=jki∇Gλ∣∣xj . (3.57)Ignoring aj · (x−xj) in (3.56) for the moment and matching (3.56) withthe required behavior in (3.53), we obtain thatCj − kj log(rj) = −2pikjRλj +N∑i 6=jkiGλji , for j = 1, . . . 
, N,which yieldsCj = kj log(rj)− 2pikjRλj +N∑i 6=jkiGλji .This can be written in matrix form as(P − 2piGλ)k = C, (3.58)whereP =log(ρ1) 0. . .0 log(ρN ) ; Gλ =Rλ1 Gλij. . .Gλji RλN ,k =k1(ν)...kN (ν) ; C =C1(ν)...CN (ν) .Remark 3.3.3.2:733.3. Specific Domain Choices(i) With Cj determined by (3.58), the inner solution for O(δ2) is determinedfrom (3.51).(ii) By Remark 3.3.3.1, with Cj determined by (3.58), the O(δ2) term forthe inner problem for W1,j is a well-posed problem for an arbitraryshape Ωj . In particular, if xj is any point in Ωδj , then we must solveuniquely the following problem.∆yW1,j = −σD{1 : y ∈ Ωj0 : y 6∈ Ωj ,W1,j ∼ kj log(|y|) + Cj + o(1), as |y| → ∞,where Cj is found from (3.58) and kj = −σ|Ωj |/(2piD). For thisgeneral problem, we can set rj = 1 in P so that P = 0 since in(3.51), we would write W1,j ∼ kj log(|y|) + Cj as |y → ∞ and put|y| = |x− xj |/δ. Thus, for the general problem, −2piGλk = C.Remark 3.3.3.3: We now compare our theory with results obtained forthe annulus δ < |x| < 1 with r = |x|. For the annulus, we can find anexact solution. Note that this has already been found in (3.21). Expresseddifferently, the exact solution isv =(vE − σ) I0(λr)I0(λδ) + σ : 0 < r < δvEK0(λr)−K0(λ)I′0(λ)I0(λr)K0(λδ)−K′0(λ)I′0(λ)I0(λδ): δ < r < 1 ,with vE given asvE = λσδ[I ′0(λδ)K0(λδ)−K ′0(λ)I ′0(λ)I0(λδ)I ′0(λδ)].Now, in the outer region with rj = 1, j = 1, we havev ∼ −2piδ2(− σ2D)Gλ(x; 0) =piσδ2DGλ(x; 0),where for the unit disk the Green’s function is given asGλ(x; 0) =12pi[K0(λr)−K ′0(λ)I ′0(λ)I0(λr)].Since K0(z) ∼ − log(z/2)− γ as z → 0 with γ as Euler’s constant, we have743.3. Specific Domain ChoicesGλ(x; 0) ∼ −12pi log(r)−12pi[log(λ2)+ γ + K′0(λ)I ′0(λ)]+ o(1), as r → 0.Thus,Rλ1 = −12pi[log(λ)− log(2) + γ + K′0(λ)I ′0(λ)],is the regular part of Gλ at x = 0. 
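Since (3.58) with $N = 1$ and $r_1 = 1$ reduces to $C_1 = -2\pi k_1 R_{\lambda 1}$, the regular part above can be checked against the constant (3.19) obtained by direct matching in section 3.3.1. A numerical sketch (assuming $D = 1$ and `scipy`; the identities $K_0' = -K_1$ and $I_0' = I_1$ supply the derivatives):

```python
import numpy as np
from scipy.special import i1, k1

sigma, D = 1.0, 1.0
lam = 1.0 / np.sqrt(D)
gamma = np.euler_gamma          # Euler-Mascheroni constant

# C1 via the Green's-function route: C1 = -2*pi*k1_coef*R_lambda_1
k1_coef = -sigma / (2 * D)      # k_1 = -sigma*r_1^2/(2D) with r_1 = 1
R_lam_1 = -(np.log(lam) - np.log(2) + gamma + (-k1(lam)) / i1(lam)) / (2 * np.pi)
C1_green = -2 * np.pi * k1_coef * R_lam_1

# C1 via the direct matching of section 3.3.1, equation (3.19)
C1_direct = sigma / (2 * D) * (np.log(2) - gamma + k1(lam) / i1(lam))

print(abs(C1_green - C1_direct) < 1e-13)  # -> True
```

The two routes agree to rounding error, confirming that the many-droplet theory collapses correctly to the single-drop result.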
In the inner region with Rλ1 given above,we have from (3.58) thatC1 =(log(λ)− log(2) + γ + K′0(λ)I ′0(λ)])k1, k1 = −σ2D.Note that with D = 1, we can see that this is exactly the constant found in(3.19). Thus, in the inner region, we have for r = O(δ) thatV1 ∼ k1δ2 log(δ) + δ2{− σr24Dδ2 + C1 + σ4D : r ≤ δk1 log(r)− k1 log(δ) + C1 : r ≥ δ.We can see that this is the exact inner expansion derived in section 3.3.1(equations (3.17) and (3.18)) and our analysis for many droplets reduces toa single drop when N = 1.Now we return to the case of N circular drops and we go to higher orderto determine non-radially symmetric contributions that result in a non-zeronormal velocity.We return to (3.56), and notice that there is an unmatched term due tothe gradient effects. We write the outer solution as (with x→ xj),v ∼ δ2kj log(|x− xj |)− 2pikjRλj +N∑i 6=jkiGλji+ aj · (x− xj) ,where aj is given in (3.57).Note that all the terms except for aj ·(x−xj) have already been matched.In the inner variable this unmatched term can be expressed as δ3[aj ·y+o(1)].This suggests that in the inner region near the jth drop we must expandV = δ2 log(δ)kj + δ2W1,j + δ3W2,j + . . .753.3. Specific Domain ChoicesUpon substituting into (3.48) we obtain that in all of y ∈ R2 that∆yW2,j = 0, y ∈ R2; W2,j ∼ aj · y + o(1), as |y| → ∞. (3.59)The o(1) condition at infinity eliminates constant terms and thus, to have asmooth solution in R2 we must have that W2,j is linear in y. Thus, we haveexactly in R2 thatW2,j = aj · y.With this solution we have the three term inner expansion near the jth dropasV = δ2 log(δ)kj + δ2{ − σ4Dρ2 + Cj + σ4Dr2j : ρ ≤ rjkj log(ρ)− kj log(rj) + Cj : ρ ≥ rj+ δ3aj · y + . . .(3.60)This is the key result that will allow us to predict the interface motionstarting from an initial configuration of N circular small disjoint droplets ofradii δr1, . . . , δrN .The question we are trying to answer is how do the droplets move ini-tially. 
We know theoretically that for all time t > 0 we have that the areaof each droplet is preserved in time sinceddt∫Ωj1 dx =∫∂Ωjη˙j ds = −(∂sjV0Aj(sj)) ∣∣∣∂Ωj= 0,by periodicity.We now return to (3.49). The normal velocity with sj = δrjθ isη˙ = − 1δ2r2j∂θθV0Aj,where we have used that, to a first order approximation, Aj is a constantindependent of θ. Now we use the inner solution to calculate Aj . Recall,Aj(sj) = ∇v · n∣∣∂Ωδj= −vr∣∣r=δrj= −δ2kj1r∣∣r=δrj= δ kjrj.Now we write the non-radially symmetric terms of v asv = (. . .) + δ3[aj1ρ cos(θj) + aj2ρ sin(θj)],763.3. Specific Domain Choiceswhere (. . .) encapsulates the radially symmetric terms and θj is the localpolar coordinate measured with respect to y = 0 (x = xj). Thus, we have∂θθV0 ∼ −δ3rj [aj1 cos(θj) + aj2 sin(θj)], ρ = rj .Substituting in for Aj and ∂θθV0, we see thatη˙j = −1δ2r2j−δ3rjaj · eθj−δ kjrj,where eθj = (cos(θj), sin(θj)). Simplifying, we haveη˙ = −aj · eθjkj.Substituting in for kj , we arrive at the non-zero normal velocityη˙ =2Daj · eθjσr2j, (3.61)where aj is given in (3.57).This result shows that the normal velocity to the interface is asymp-totically independent of the radius O(δ). We can write equation (3.61)equivalently asη˙ = 2pi∇Rλ∣∣xj+N∑i 6=jrirj∇Gλ∣∣xj · eθj . (3.62)This is the main result for this section. The initial motion of a collectionof small circular droplets of radii δr1, . . . , δrN is on an O(1) timescale (recallη˙ is on an O(1/) for the general PDE) and is critically influenced by theconfiguration of all the other drops. Notice the radius factor ri/rj . If rj islarge compared to the other drops, then the jth drop moves essentially by∇Rλ∣∣xj.Remark 3.3.3.4:(i) Suppose x1, . . . ,xN are such that equation (3.62) is identically zero foreach j. These locations are an equilibrium state for the dynamics and,in particular, if ri = rc ∀ i, then the equilibria would be to put N smalldisks of a common radius at the roots of unity on a ring concentricwithin the unit disk. 
There are other ‘symmetric configurations’ thatcan be found. These will be touched upon shortly.773.3. Specific Domain Choices(ii) The result (3.62) shows that an initial configuration of N small circulardrops will not remain circular as time increases and the boundary of∂Ωj will change as t increases.(iii) In (3.62), the first term measures the self interaction with the boundaryof the domain, ∂Ω. The second term is larger when ri/rj is large, i.e. ifa small drop of radius rj is surrounded by larger drops of radii ri > rj ,the small drop will deform the most due to the other drops.(iv) To determine the direction which the drops move, we first consider aprevious result on spot dynamics for the Gray-Scott model as derivedin [7]. There, a 1-spot solution with center at x0 ∈ Ω ⊂ R2 satisfiesx0(t) withx′0 = −2k∇Rλ(x; x0)∣∣x=x0,for some k > 0.A collection of N spots with centers x0, . . . ,xN−1 satisfiesx′j = −kjSj∇Rλ(x; xj)∣∣x=xj+N−1∑i 6=jSi∇Gλ(x; xj)∣∣x=xj ,for j = 0, . . . , N − 1, and some kj > 0, Sj > 0 (see principal result 3.1in [7]). The dynamics are known to be a repulsive interaction. For asingle spot, x0 → x0e in Ω as t → ∞, where x0e is the minimum ofRλ(x; x0). Thus, the boundary is repelling and spots too close togetherare pushed away.Our result, in (3.62), has the opposite sign. Thus, for one dropletaligned with the x-axis in the unit disk we haveη˙1 = 2pi[∂xRλ∣∣x=(x1,0)]cos(θj).The bracketed term is positive from results in [7] if x1 > 0. Thus,for cos(θj) > 0, we have η˙ > 0 and η˙ < 0 for cos(θj) < 0. Here,droplet interactions are attractive in the sense that they will deform toeach other. In the single droplet case the droplet deforms towards theboundary due to the ”mirror” effect of Neumann boundary conditions.A depiction of this can be seen in figure 3.7.783.3. Specific Domain ChoicesFigure 3.7: A depiction of the results for the case described in Result 3.3.3.4(iv). 
We see that the opposite sign for the velocity in the chemotaxis model causes the attractive behavior; thus a circle off the center deforms and moves towards the boundary.

(v) In part (i) of this remark we discussed equilibrium locations for a collection of droplets. To find alternative configurations, we must find points $\mathbf{x}_1, \dots, \mathbf{x}_N$, for given radii $r_1, \dots, r_N$, that satisfy $\dot{\eta} = 0$, where $\dot{\eta}$ is given in (3.62). This can be done numerically by evolving the system of ODEs
\[ \mathbf{x}_j' = -\Bigl[r_j^2\nabla R_\lambda\big|_{\mathbf{x}_j} + \sum_{i\ne j}^{N} r_i^2\nabla G_\lambda\big|_{\mathbf{x}_j}\Bigr], \qquad j = 1, \dots, N, \]
with the given initial positions and radii. Due to the negative sign and the analogy with spot dynamics, we will have $\mathbf{x}_j \to \mathbf{x}_{je}$ as $t \to \infty$, where $\mathbf{x}_{je}$ is the equilibrium location and satisfies $\dot{\eta} = 0$.

3.3.4 Arbitrary Shaped Initial Droplet Pattern

We now discuss the case where the initial droplet pattern has $N$ small drops of 'radius' $O(\delta)$ but arbitrary shape. Let $\mathbf{x}_j \in \Omega_{\delta_j}$ be any point in $\Omega_{\delta_j}$; all such points are $O(\delta)$ close together.

Let $\Omega$ denote the entire region with the holes included. We must solve
\[ \Delta v - \lambda^2 v = -\frac{\sigma}{D}\begin{cases}1 & \mathbf{x} \in \Omega_\delta \\ 0 & \mathbf{x} \in \Omega\setminus\Omega_\delta\end{cases}, \qquad \frac{\partial v}{\partial n} = 0, \quad \mathbf{x} \in \partial\Omega. \]
We now solve this problem by a matched asymptotic expansion. In the inner region, near the $j$th drop, we have
\[ V \sim \delta^2\log(\delta)\, k_j + \delta^2 W_{1,j} + \delta^3\mathbf{a}_j\cdot\mathbf{y}, \tag{3.63} \]
where $W_{1,j}$ now satisfies (as described in Remark 3.3.3.2)
\[ \Delta_{\mathbf{y}} W_{1,j} = -\frac{\sigma}{D}\begin{cases}1 & \mathbf{y} \in \Omega_j \\ 0 & \mathbf{y} \notin \Omega_j\end{cases}; \qquad W_{1,j} \sim k_j\log(|\mathbf{y}|) + C_j + o(1), \quad \text{as } |\mathbf{y}| \to \infty, \tag{3.64} \]
where
\[ k_j = -\sigma|\Omega_j|/(2\pi D); \qquad C_j = -2\pi(\mathcal{G}_\lambda\mathbf{k})_j. \]
Here the subscript $j$ on $(\mathcal{G}_\lambda\mathbf{k})_j$ signifies the $j$th entry of the vector. The solution to this problem is unique since $C_j$ is specified, having been found by matching to the outer solution. Since it is found by solving $\mathbf{C} = -2\pi\mathcal{G}_\lambda\mathbf{k}$, it involves all the other droplets.

Note that in (3.63) the $O(\delta^3)$ term is precisely the same as for the $N$ circular droplets. This results from the fact that the problem (3.59) for $W_{2,j}$ is independent of $\Omega_j$.
The one key new feature is that we must solve (3.64) for an arbitrary Ωj and then determine how the shape changes under the surface diffusion law given by equation (3.49). The solution for W1,j can be decomposed as

W1,j = Φj + Cj,    (3.65)

where Φj(y) is the unique solution to

Δy Φj = −(σ/D) { 1, y ∈ Ωj;  0, y ∉ Ωj },
Φj ∼ kj log(|y|) + o(1), as |y| → ∞.    (3.66)

The o(1) condition above is what makes the PDE uniquely determined. A key feature here is that Φj is a local problem, in the sense that it is independent of any interaction with the other drops. The interaction effect with the other droplets is simply additive in (3.65).

Lemma 3.3: The solution to (3.66), with ds = ds1 ds2, is given by

Φj(y) = −(σ/(2πD)) ∫_{Ωj} log(|y − s|) ds.

Proof: We let Gf = −log(|s − y|)/(2π), which satisfies, in the s-plane, the equation for the free-space Green's function, Δs Gf = −δ(s − y). Viewed as a function of s, Φj satisfies

Δs Φj = −(σ/D) { 1, s ∈ Ωj;  0, s ∉ Ωj },
Φj ∼ kj log(|s|) + o(1), as |s| → ∞.    (3.67)

By Green's second identity,

lim_{R→∞} ∫_{BR} (Gf Δs Φj − Φj Δs Gf) ds = lim_{R→∞} ∫_{∂BR} ( Gf ∂Φj/∂n − Φj ∂Gf/∂n ) ds,    (3.68)

where BR = {s : |s| ≤ R}. The left-hand side is

lim_{R→∞} ( ∫_{BR} Gf Δs Φj ds + Φj(y) ) = Φj(y) − (σ/D) ∫_{Ωj} Gf ds = Φj(y) + (σ/(2πD)) ∫_{Ωj} log(|s − y|) ds.    (3.69)

The right-hand side of (3.68) is

lim_{R→∞} ∫_{∂BR} ( Gf ∂Φj/∂n − Φj ∂Gf/∂n ) ds = lim_{R→∞} ∫_{∂BR} ( −(kj/(2πR)) log(|y − s|) − (kj log(R) + o(1)) ∂Gf/∂n ) ds.

But

log(|s − y|)|_{|s|=R} ∼ log(R) − (y · s)/|s|² + O(1/R²), for |s| = R ≫ 1,

and

∂Gf/∂n|_{|s|=R} = −(1/(2π)) ∂_{|s|} log(|s − y|)|_{|s|=R} ∼ −(1/(2π)) ∂_{|s|} ( log(|s|) − (y · s)/|s|² ),

which yields

∂Gf/∂n|_{|s|=R} ∼ −1/(2πR) + O(1/R²).

Substituting this into the right-hand side of (3.68), with the arc length element ds = R dθ, we obtain

lim_{R→∞} ∫_{∂BR} ( Gf ∂Φj/∂n − Φj ∂Gf/∂n ) ds = 2π lim_{R→∞} [ ( −(kj/(2πR)) log(R) + (kj log(R) + o(1)) (1/(2πR)) ) R ].

On the right-hand side of this equation the log(R) terms cancel. Note that it is critical that the correction term in the far field of Φj be o(1), to ensure that the boundary integral vanishes.
Thus, since the right-hand side of this equation vanishes, we obtain the desired result from (3.69):

Φj(y) = −(σ/(2πD)) ∫_{Ωj} log(|y − s|) ds.

We now conclude that

W1,j = −(σ/(2πD)) ∫_{Ωj} log(|s − y|) ds + Cj

is the exact solution of (3.64).

Remark 3.3.4.1: We would like to verify directly that Φj satisfies (3.67). First note that if y ∉ Ωj, then Δy log(|s − y|) = 0, so Δy Φj = 0. If y ∈ Ωj, then since Δy log(|s − y|) = 2π δ(s − y) we have

Δy Φj = −(σ/(2πD)) ∫_{Ωj} Δy log(|s − y|) ds = −(σ/D) ∫_{Ωj} δ(s − y) ds = −σ/D.

To establish the far-field behavior of Φj(y) we let |y| → ∞ and write

log(|y − s|) = ½ log(|y − s|²) = ½ log[(y − s) · (y − s)] = ½ log[ |y|² − 2 y · s + |s|² ]
= ½ log[ |y|² (1 − 2(y · s)/|y|² + |s|²/|y|²) ]
= log(|y|) + ½ log(1 − 2(y · s)/|y|² + |s|²/|y|²).

Noting that log(1 − z) ∼ −z as z → 0, we have

log(|y − s|) ∼ log(|y|) − (y · s)/|y|² + O(1/|y|²), for |y| → ∞.

Thus, for |y| ≫ 1,

Φj(y) = −(σ/(2πD)) ∫_{Ωj} log(|y − s|) ds ∼ −(σ/(2πD)) ∫_{Ωj} ( log(|y|) − (y · s)/|y|² + O(|s|²/|y|²) ) ds,

so that

Φj(y) ∼ −(|Ωj|σ/(2πD)) log(|y|) + (σ/(2πD)) (y/|y|²) · ∫_{Ωj} s ds.

The second term is indeed o(1) as |y| → ∞, and in fact

Φj(y) ∼ kj log(|y|) + o(1), as |y| → ∞.

In summary, the exact solution to (3.64) is simply

W1,j = −(σ/(2πD)) ∫_{Ωj} log(|s − y|) ds + Cj.    (3.70)

To determine the local change to the interface due to W1,j we write

η̇ = −( ∂sj V0 / Aj )sj,  x ∈ ∂Ωδj.

We put sj = δ ŝj, with ŝj ∈ (0, |∂Ωj|), where |∂Ωj| is the length of ∂Ωj measured in the y-variable. This yields

η̇ = −(1/δ²) ( ∂ŝj V0 / Aj )ŝj,  x ∈ ∂Ωδj.    (3.71)

We have

∂ŝj V0 ≈ δ² ∂ŝj W1,j|∂Ωj ∼ −δ² (σ/(2πD)) (∂/∂ŝj) ∫_{Ωj} log(|y(ŝj) − s|) ds,
Aj = ∇x v · n = (1/δ) ∇y v · n ∼ (1/δ) δ² ∂W1,j/∂n = δ ∂W1,j/∂n.

Substituting these into the velocity equation (3.71), we obtain

η̇ ∼ −(1/δ) ( ∂ŝj W1,j / ∂n W1,j )ŝj,

which shows that η̇ = O(1/δ). This means the interface responds quickly to an initial deformation of a non-circular initial shape.
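Lemma 3.3 can be checked numerically for a concrete shape. The sketch below (an illustration, not thesis code) takes Ωj to be the unit disk, evaluates the logarithmic potential by midpoint-rule quadrature in polar coordinates, and compares against the predicted far field kj log|y| with kj = −σ|Ωj|/(2πD); for a disk and |y| > 1 the mean-value property of the harmonic integrand makes this comparison exact.

```python
import numpy as np

sigma, D = 1.0, 1.0
# Omega_j = unit disk, so |Omega_j| = pi and k_j = -sigma |Omega_j| / (2 pi D).
kj = -sigma * np.pi / (2 * np.pi * D)

def Phi(y, n=400):
    # Midpoint-rule quadrature of -(sigma/(2 pi D)) int_{disk} log|y - s| ds
    # in polar coordinates (the factor R is the polar Jacobian).
    r = (np.arange(n) + 0.5) / n
    th = (np.arange(n) + 0.5) * 2 * np.pi / n
    R, T = np.meshgrid(r, th, indexing="ij")
    sx, sy = R * np.cos(T), R * np.sin(T)
    dist = np.hypot(y[0] - sx, y[1] - sy)
    I = np.sum(np.log(dist) * R) * (1.0 / n) * (2 * np.pi / n)
    return -sigma / (2 * np.pi * D) * I

# For y outside the disk the mean-value property gives Phi(y) = kj log|y|.
y = np.array([3.0, 0.0])
val = Phi(y)
exact = kj * np.log(3.0)
```

For a non-circular Ωj the same quadrature applies with the disk replaced by any indicator region, which is exactly the locality of (3.66): the shape enters only through the integration domain.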
This deformation is only due to the drop itself and not to the other drops.

Chapter 4: Numerical Results for the Chemotaxis Model

In this chapter we discuss numerical solutions of the volume-filling model. This serves as a verification of the analytic results, as a tool for geometries in which asymptotic analysis is not possible, and as a way to visualize interface evolution. We solve the quasi-steady problem discussed in the previous chapter, in which the organism has aggregated into regions of uniform concentration. The problem to solve is

D Δv − v = −σ { 1, x ∈ Ω−;  0, x ∈ Ω+ },    ∂v/∂n = 0, x ∈ ∂Ω,    (4.1)

where Ω ⊂ R² is some domain and Ω− is some initial configuration of cell colonies. The inner region may be a single mass or a collection of disjoint groups. Each of these inner regions represents an initial aggregation of the cells in the system, and the interface Γ is the boundary of Ω−.

The main goal is to use the finite element method to solve the steady state equation for the chemical potential and then to calculate the velocity on each interface using our asymptotic results (Principal Result 3.1). We then pass this information to a level set framework where the velocity is extended in a narrow band around each interface and evolved in time. With the new location of the interface Γ, the steady state problem is solved once again and the process is repeated.

The level set method was chosen for many reasons. It is known for its ability to easily handle complicated domain evolution where merging or splitting may occur (see [26]). It has been successfully used in applications such as Stefan problems, flame front evolution, and computer vision. Since the chemotaxis equations reduce to a moving boundary problem, it is a natural framework for the computation.

4.1 Steady State Solution Using the Finite Element Method

The first step is to solve the steady state problem in (4.1) so that the velocity may be calculated.
The finite element method was chosen so that we can easily fit the mesh to complicated interfaces. Since the forcing term in the problem is a step function, it is desirable to have the tessellation respect the inner and outer regions. To achieve this we use two different tessellations, one for the inner and one for the outer region, that share the same nodes on their common interface (see figure 4.1).

For the tessellation, Per-Olof Persson's distmesh package (see [24]) was used. The first step is to construct Ω+ and Ω− separately. We then marry the two tessellations together by adjusting the triangle node numbering in a consistent fashion and appending the additional nodes and triangles onto the original lists. This gives us a single tessellation of Ω with a 'tight' hold on the interface. Note that around the interface we adaptively refine the triangulation for improved accuracy in calculating the velocity.

Figure 4.1: An example of a tessellation generated using distmesh and the described methodology.

The steady state problem is solved in a standard manner. We construct the weak form by first noting that if v solves (4.1), then for any u we have

∫_Ω D Δv u dx − ∫_Ω v u dx = −σ ∫_{Ω−} u dx.

Applying Green's identity to the first term on the left-hand side, we now have

−∫_Ω (D ∇v · ∇u + uv) dx + ∫_{∂Ω} D u ∂v/∂n ds = −σ ∫_{Ω−} u dx.

Since v is subject to Neumann boundary conditions, the boundary integral vanishes and we obtain the following weak form of (4.1):

∫_Ω (D ∇v · ∇u + uv) dx = σ ∫_{Ω−} u dx,

where u is an arbitrary test function in H¹(Ω), the Sobolev space

H¹(Ω) = { u ∈ L²(Ω) : u_{x1}, u_{x2} ∈ L²(Ω) }.

We use a Galerkin approximation for v and u:

v(x, y) = Σ_{i=1}^N vi Ni(x, y);    u(x, y) = Σ_{i=1}^N ui Ni(x, y),

where the Ni are finite element basis functions. Since u is arbitrary we can set each ui = 1 in turn.
The problem to solve, for each j, is

Σ_{i=1}^N [ ∫_Ω ( D ∇Ni(x, y) · ∇Nj(x, y) + Ni(x, y) Nj(x, y) ) dx ] vi = σ ∫_{Ω−} Nj(x, y) dx.    (4.2)

We approximate the solution using quadratic basis functions over mapped triangles. For mapped triangles with an edge on ∂Ω or Γ, we use an isoparametric mapping to improve the accuracy with which these curves are represented (see figure 4.2).

The integrals in (4.2) are evaluated by mapping the basis functions to a reference element (see figure 4.2). For a given triangle, the reference map is defined by

x = F1(ω, ξ) = Σ_{i=1}^6 xi N̂i(ω, ξ);    y = F2(ω, ξ) = Σ_{i=1}^6 yi N̂i(ω, ξ),    (4.3)

where (xi, yi) are the coordinates of the triangle's six nodes, node i being the point at which the basis function N̂i equals one, and (ω, ξ) are the reference element coordinates. On the reference element the basis functions can be written explicitly as

N̂1 = (1 − ω − ξ)(1 − 2ω − 2ξ);    N̂2 = ω(2ω − 1);    N̂3 = ξ(2ξ − 1),
N̂4 = 4ωξ;    N̂5 = 4ξ(1 − ω − ξ);    N̂6 = 4ω(1 − ω − ξ).    (4.4)

Applying the map (4.3) to the integrals in (4.2), we have, for a mapped triangle K and corresponding reference triangle K̂,

D ∫_K ∇Ni · ∇Nj dA = D |det(J)| ∫_K̂ (J^{−T} ∇̂N̂i) · (J^{−T} ∇̂N̂j) dA,
∫_K Ni Nj dA = |det(J)| ∫_K̂ N̂i N̂j dA;    ∫_K Nj dA = |det(J)| ∫_K̂ N̂j dA,

where J is the Jacobian of the map (4.3), which can be written explicitly as

J = [ ∂ω Σ_{i=1}^6 xi N̂i(ω, ξ)   ∂ξ Σ_{i=1}^6 xi N̂i(ω, ξ) ;
      ∂ω Σ_{i=1}^6 yi N̂i(ω, ξ)   ∂ξ Σ_{i=1}^6 yi N̂i(ω, ξ) ].    (4.5)

Once on the reference element we use 10-point Gauss quadrature to evaluate the integrals.

Figure 4.2: A depiction of a triangle with a mapped edge and the reference triangle in (ω, ξ) space. The dotted line indicates the previously unmapped quadratic (or possibly linear) triangle edge. G(x, y) is equivalently written as F⁻¹(ω, ξ).

Initially, we tried piecewise linear elements but they were not sufficiently accurate for our purposes.
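As a quick sanity check on (4.4): each reference basis function should equal one at its own node, vanish at the other five, and together they should sum to one everywhere (partition of unity). The snippet below assumes the node ordering three vertices then three midside nodes, which is one consistent choice matching (4.4); the thesis's exact numbering may differ.

```python
import numpy as np

def shape(w, x):
    # Quadratic basis functions (4.4) on the reference triangle.
    return np.array([
        (1 - w - x) * (1 - 2 * w - 2 * x),
        w * (2 * w - 1),
        x * (2 * x - 1),
        4 * w * x,
        4 * x * (1 - w - x),
        4 * w * (1 - w - x),
    ])

# Assumed node ordering: vertices (0,0), (1,0), (0,1),
# then midside nodes (1/2,1/2), (0,1/2), (1/2,0).
nodes = [(0, 0), (1, 0), (0, 1), (0.5, 0.5), (0, 0.5), (0.5, 0)]

# Nodal interpolation property: basis i is 1 at node i, 0 at the others.
V = np.array([shape(w, x) for (w, x) in nodes])

# Partition of unity at an arbitrary interior point.
s = shape(0.3, 0.2).sum()
```

The nodal property is what lets (4.3) interpolate the curved edge through the midside node, and the partition of unity guarantees that constants are reproduced exactly on every mapped triangle.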
Since the velocity on the interface depends upon arc length derivatives along Γ, accuracy of the initial solution and of the representation of the interface is very important. Quadratic elements were also tried before implementing isoparametric quadratics. Since the interface will typically be curved and not polygonal, mapping the triangle edges that lie on the interface gives us smooth tangent and normal vectors as well as a more accurate representation of the forcing term in (4.1). Since the domain is discretized with special care taken at the interface and boundary, we are able to handle the forcing term properly and maintain O(h³) accuracy in the L² norm.

As a check, we consider problem (4.1) with Ω = {x : |x| ≤ 1}, the unit disk, Ω− = {x : |x| ≤ .25}, σ = 1 and D = 1. With this given droplet configuration the exact solution v can be found and is given in (3.21). The solution is

v = { σ + A I0(λ|x|), |x| ≤ δ;  B I0(λ|x|) + C K0(λ|x|), δ < |x| ≤ 1 },

with A, B, and C given by

A = [ C ( K0(δ) I1(1) + K1(1) I0(δ) ) − σ I1(1) ] / ( I1(1) I0(δ) ),
B = C K1(1)/I1(1);    C = σ I1(δ) / W(K0(δ), I0(δ)),

where W(K0(δ), I0(δ)) = I0(δ) K1(δ) + I1(δ) K0(δ) is the Wronskian combination, and δ = .25, σ = 1 and λ = 1.

We calculate the numerical solution V with quadratic and isoparametric quadratic elements and create a log-log plot of the L² error ‖V − v‖_{L²} against the mesh edge length h. The isoparametric elements are clearly more accurate and converge faster (O(h³) for isoparametric versus O(h²) for quadratic), as seen in figure 4.3. A surface plot of the numerical solution can be seen in figure 4.4.

Figure 4.3: Log-log plot of the L² error versus mesh size. The isoparametric mapping improves accuracy and leads to a higher convergence rate, indicated by the steeper slope. The slopes are 3.67 for the isoparametric elements and 2.02 for the quadratic elements.

Figure 4.4: The solution to the steady state equation with a single cell colony centered at the origin. Here, δ = .25 and the initial edge length is .05.

4.1.1 Comparison to Asymptotics

The next step is to ensure that the numerical and asymptotic results agree. To this end, we investigate the case of a circular domain containing many circular interfaces, analyzed in section 3.3.3. The problem we solve is given in equation (4.1) with Ω = {x : |x| ≤ 1}, σ = 1, D = 1 and Ω− = {x : |x − (−.15, −.45)| ≤ δ} ∪ {x : |x − (.25, .25)| ≤ 2δ}. Since the problem is solved asymptotically in the limit δ → 0, we analyze the error for decreasing δ on a fixed initial mesh size h = .015.

The derivation of the asymptotic solution can be found in section 3.3.3; the solution is

v ≈ −2πδ² Σ_{i=1}^N ki Gλ(x; xi),

where Gλ is the reduced-wave Green's function that solves

ΔGλ − λ²Gλ = −δ(x − xi), x ∈ Ω;    ∂Gλ/∂n = 0, x ∈ ∂Ω.

When Ω is the unit disk, as in this case, the reduced-wave Green's function can be found explicitly and is given in appendix B. A surface plot of the numerical solution for this two-colony configuration is given in figure 4.5.

As δ → 0, the difference between the numerical and asymptotic solutions should decrease. This can be seen in table 4.1, where the l² norm of the difference between the solutions is shown for different values of δ. Note that, for a given mesh size, there is a lower bound constraint on δ, as there need to be sufficient degrees of freedom in the FEM model to represent the interior regions accurately.

This result helps to confirm the asymptotic solutions presented in section 3.3.3 as well as the accuracy of the numerical computations.

4.2 Calculating the Interface Velocity

Recall our asymptotic result,

η̇ = −( ∂s V0 / A1(s) )_s = −( v_s / (∇v · n) )_s,

where V0 is the first term in the inner expansion of (4.1) and A1 arises from the second term. This is the velocity we need to calculate in order to use
the level set method and capture interface motion.

Figure 4.5: The solution to the steady state equation with two cell colonies located at (−.15, −.45) and (.25, .25), with radii δ and 2δ respectively. Here, δ = .03125.

The arc length derivatives are calculated by determining the tangent to the interface from the isoparametric mapping (4.3) and taking the dot product of that tangent with the gradient of v. This is because ∂s f(x) = ∇f(x) · t, where t is the tangent vector of the curve at the point x. The gradient is calculated using the finite element framework. For a given triangle K and point (x, y) ∈ K, the gradient is defined as

∇v(x, y) = Σ_{i=1}^6 vi ∇Ni(x, y),

where the Ni(x, y) are the non-zero basis functions on that triangle. We can take advantage of the reference element to write the gradient as

∇v(x, y) = J^{−T} Σ_{i=1}^6 vi ∇̂N̂i(ω, ξ),

where the N̂i(ω, ξ) are the local basis functions and J^{−T} is the inverse transpose of the Jacobian of the reference map. The reference basis functions are given in (4.4) and the Jacobian in (4.5). The outward normal vector n is obtained by taking the vector perpendicular to the tangent t.

δ         l² difference
.25000    3.3275
.12500    .44805
.06250    .05868
.03125    .00977

Table 4.1: The l² difference between the numerical and asymptotic solutions for decreasing values of δ. The error quickly decreases as δ decreases, coinciding with the asymptotic limit δ ≪ 1.

Since the gradient and normal vectors are not well defined at the vertices of a triangle, we instead calculate the velocity only at the mapped quadratic nodes that sit between two vertices lying on the interface. To calculate the velocity elsewhere we pass a periodic spline through those points and interpolate.
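The identity ∂s f = ∇f · t is easy to check on a toy example. The sketch below is illustrative only (the thesis computes ∇v from the FEM solution; here a simple radially symmetric field stands in for v): it parametrizes a circular interface, builds the unit tangent and outward normal, and verifies that a radially symmetric field has zero arc length derivative, hence zero normal velocity, matching the accuracy check in section 4.2.

```python
import numpy as np

# Parametrize a circular interface of radius R, build the unit tangent t
# and outward normal n, and form the arc length derivative grad(f) . t.
theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
R = 0.25
x, y = R * np.cos(theta), R * np.sin(theta)
t = np.stack([-np.sin(theta), np.cos(theta)], axis=1)  # unit tangent
n = np.stack([np.cos(theta), np.sin(theta)], axis=1)   # outward normal

# Toy radially symmetric field standing in for the FEM solution v.
grad_f = np.stack([2 * x, 2 * y], axis=1)              # gradient of x^2 + y^2

ds_f = np.einsum("ij,ij->i", grad_f, t)                # df/ds on Gamma
```

In the actual scheme the gradient comes from J^{−T} Σ vi ∇̂N̂i and the tangent from differentiating the isoparametric map (4.3), but the dot-product structure is exactly as above.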
This gives us the normal velocity at all of the nodes on the interface and, in principle, anywhere on the curve.

To check the accuracy of the velocity, we again look at problem (4.1) with Ω = {x : |x| ≤ 1}, the unit disk, Ω− = {x : |x| ≤ .25}, σ = 1 and D = 1. Since the solution is independent of θ, the arc length derivative is 0 and thus the velocity is 0. We calculate the velocity on the interface with quadratic and isoparametric elements and use the maximum value as a measure of accuracy. Figure 4.6 is a log-log plot of the maximum velocity value versus the initial mesh size h. The convergence rates (slopes) are 1.24 and 1.68 for the quadratic and isoparametric elements, respectively.

Figure 4.6: Log-log plot of the maximum velocity value on the interface versus mesh size. Once again the isoparametric elements provide an improvement in accuracy.

4.3 Interface Evolution Using the Level Set Method

The level set method was developed by Osher and Sethian in [21] and offers an accurate and robust way of evolving interfaces. The formulation is as follows. Given a normal velocity field V, we evolve the surface z = φ(x, y, t) according to the hyperbolic conservation law

φt + V|∇φ| = 0,

where φ(x, y, t) = 0 defines the moving interface we want to track. At the first step φ is initialized as a signed distance function that, at each point in space, gives the distance to the interface. Since the level set equation requires a normal velocity field for all level sets, we need to build an extension velocity off the interface. Thus, before we proceed with the evolution, we need to construct this signed distance function and extension velocity field.

As an example, a circular interface of radius R can be represented by the signed distance function

φ(x, y) = √(x² + y²) − R.

An interface comprised of a collection of disjoint regions can be represented as a union of signed distance functions φi, where each φi is an embedding of a simple closed curve. The value of φ = ⋃i φi at a point (x, y) is given by φ(x, y) = mini φi(x, y). This simple fact allows one to represent complicated domain configurations as a minimum over much simpler ones.

The level set equation is typically solved on a rectangular grid with uniform spacing. We chose to do the same, and thus we embed the finite element mesh in a square domain and impose artificial periodic boundary conditions. Since curve evolution only requires local information, the boundary effects never come into play. We now take a deeper look at specific points of the implementation.

4.3.1 The Signed Distance Function

Evolution by the level set equation does not strictly require a signed distance function, but a signed distance function has many properties that improve the accuracy of numerical computations. A signed distance function φ gives the shortest distance from a point x to the boundary φ = 0 and also has the property that |∇φ| = 1. This normalized gradient diminishes the negative effects that large variations in the gradient may have on numerical computations, which is why a signed distance function is often chosen.

Throughout the evolution, the level set equation can have a distorting effect on the interface, causing very steep or flat gradients (see [18]). This means small perturbations in the velocity or interface can have large and unphysical effects on the solution. This problem is ameliorated by reinitializing φ as a signed distance function at every iteration.
In order to reinitialize a level set function into a signed distance function we evolve the following reinitialization equation, proposed in [28], to a steady state:

φt = sign(φ)(1 − |∇φ|).

Typically, very few iterations are required for a reasonably accurate reinitialization.

For points within one grid space of the interface we use a subcell fix (see figure 4.7). Russo and Smereka in [25] explain that for these 'close' points a standard discretization will not remain upwind. Thus, for the point (i, j) on the nth iteration, the subcell fix changes the right-hand side of the reinitialization equation given above to

−(1/h) ( sign(φ⁰i,j) |φⁿi,j| − Di,j ),

where φᵏi,j = φ(xi, yj, tk) and Di,j is an approximation of the distance from the node (xi, yj) to the interface. Di,j is given by

Di,j = h φ⁰i,j / Δφ⁰i,j,    Δφ⁰i,j = √( (Dx φ⁰i,j)² + (Dy φ⁰i,j)² ),

where h is the grid spacing and Dx φ⁰i,j and Dy φ⁰i,j are taken as the largest in absolute value among the central, forward, and backward difference operators in the x and y directions respectively, bounded below by a small parameter ε. Δφ⁰i,j is defined this way because merging or breaking interfaces may produce small values in the denominator, and this formulation keeps the distance Di,j numerically robust. Note that this fix is only applied to points within one grid space of the interface; for all other points we use Godunov's method.

Figure 4.7: An example of a set of points (identified by the crosses) one grid node away from an interface.

An example of the effectiveness of reinitialization with a subcell fix is seen in figure 4.8.

Figure 4.8: An example where reinitialization, particularly with a subcell fix, is very effective at correcting defects in a level set function. Here, φ = f (√((x/4)² + (y/2)²) − 1), where f = .1 + (x + 3.5)² + (y + 2)². (Left panel: contour lines of the skewed level set; right panel: contour lines after reinitialization.)
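A one-dimensional sketch of the reinitialization step (without the subcell fix, and purely illustrative): starting from the distorted level set φ⁰ = 3x, whose zero crossing is at x = 0, iterating φt = sign(φ)(1 − |φx|) with Godunov upwinding drives |φx| → 1 while keeping the zero crossing fixed, so φ relaxes to the signed distance function x.

```python
import numpy as np

# 1-D illustration: reinitialize phi0 = 3x (zero crossing at x = 0)
# toward the signed distance function phi = x by iterating
# phi_t = sign(phi0)(1 - |phi_x|) with Godunov upwinding.
N = 201
x = np.linspace(-1.0, 1.0, N)
h = x[1] - x[0]
phi = 3.0 * x
S = phi / np.sqrt(phi**2 + h**2)          # smoothed sign, as in section 4.3.4

dt = 0.5 * h
for _ in range(800):
    a = np.empty(N)                        # backward differences
    b = np.empty(N)                        # forward differences
    a[1:] = (phi[1:] - phi[:-1]) / h; a[0] = a[1]
    b[:-1] = (phi[1:] - phi[:-1]) / h; b[-1] = b[-2]
    # Godunov choice of |phi_x| for each sign of S
    gp = np.sqrt(np.maximum(np.maximum(a, 0.0)**2, np.minimum(b, 0.0)**2))
    gm = np.sqrt(np.maximum(np.minimum(a, 0.0)**2, np.maximum(b, 0.0)**2))
    grad = np.where(S > 0, gp, gm)
    phi = phi + dt * S * (1.0 - grad)

# Away from the domain ends, phi is now close to the exact distance x.
err = np.max(np.abs(phi[20:-20] - x[20:-20]))
```

The zero crossing stays put because the smoothed sign vanishes there; the subcell fix of [25] refines exactly this behavior for nodes within one grid space of the interface.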
4.3.2 Regular Grid Initialization and Velocity Extension

As mentioned previously, the level set equation is solved on a regular grid. Since the velocity is only defined on the interface, it needs to be interpolated to this grid and then extended outward in a reasonable fashion. The only strict stipulation [26] imposes upon this extension is that it be continuous at the interface. This can be done in the manner suggested in [17]. First, the velocity is extended to the rest of the triangulated mesh by solving the following problem:

ΔV = 0, x ∈ Ω \ Γ;    V = η̇, x ∈ Γ;
[∂V/∂n] = 0, x ∈ Γ;    ∂V/∂n = 0, x ∈ ∂Ω.    (4.6)

We solve Laplace's equation away from the interface, with Dirichlet data given by the velocity already computed at the interface nodes. We also enforce a zero jump condition on the normal derivative across the interface to impose differentiability. Since this system is overdetermined it is solved in the least squares sense using the normal equations.

The reason we extend the velocity in this way is that, in a weak sense, it enforces ∇φ · ∇V = 0. To show this, consider the integral

∫_Ω φ ΔV dx = 0,

which holds because of (4.6). This can be decomposed into a sum of two integrals,

∫_{Ω−} φ ΔV dx + ∫_{Ω+} φ ΔV dx = 0.

Applying Green's identity, we have

−∫_Ω ∇φ · ∇V dx + ∫_{∂Ω} φ ∂V/∂n ds − ∫_{∂Ω+} φ ∂V/∂n ds + ∫_{∂Ω−} φ ∂V/∂n ds = 0,

where the last two integrals carry opposite signs because of the different normal directions. The last two integrals vanish due to the imposed jump condition on the interface, and the second integral vanishes because of the Neumann boundary condition. Thus we have

∫_Ω ∇φ · ∇V dx = 0,

and therefore, in a weak sense, ∇φ · ∇V = 0. This condition is important because it helps maintain φ as a signed distance function. To see this, suppose that initially |∇φ| = 1 and we evolve the interface using the level set equation φt + V|∇φ| = 0. Then

d|∇φ|²/dt = 2∇φ · (d/dt)∇φ = −2( ∇φ · ∇V |∇φ| + ∇φ · ∇|∇φ| V ).

Since |∇φ| = 1, we have ∇|∇φ| = 0. With ∇φ · ∇V = 0, it follows that

d|∇φ|²/dt = 0,

and therefore |∇φ| = 1 for all time, as the unique solution of this ODE. This is why we impose ∇φ · ∇V = 0; having φ as a signed distance function, as mentioned previously, improves numerical accuracy and robustness significantly.

Once the velocity has been extended to the rest of the triangulated domain we can initialize it on a set of points of the regular grid. We choose these points by selecting those that lie within a certain small distance (given by φ) of the interface φ = 0. The finite element method gives us a natural interpolation scheme: we find which triangle a grid point lies in and interpolate using the basis functions of that triangle. For the rest of the points in the domain we set the velocity to zero.

Once the velocity has been initialized we smooth the jump between initialized and non-initialized grid points with the velocity extension equation. This extends the initial set of points with non-zero velocity further outwards and alleviates discontinuities. The velocity extension equation is

∂V/∂t + sign(φ) (∇φ/|∇φ|) · ∇V = 0.

This equation advects the initial velocity away from the front in the proper upwind direction (see [6]), maintains continuity between the interfacial velocity and the extension velocity at the interface, and in equilibrium has the property that ∇φ · ∇V = 0. This property, once again, helps maintain φ as a signed distance function and works in conjunction with the finite element extension method described in (4.6).

Since the velocity depends upon the solution of the steady state PDE (4.1) and the shape of the interface, it is only valid on a very small timescale for the level set equation. This means we only need to extend the
velocity to a narrow band around the interface, as we are forced to discard it and re-solve the steady state equation. An example of an interface velocity curve and the corresponding extension can be seen in figure 4.9.

Figure 4.9: An example of a normal velocity curve on the interface and the resulting narrow band extension.

4.3.3 Interface Evolution

Finally we come to the interface evolution step. Here, we solve the level set equation

φt + V|∇φ| = 0

for a small time step before recovering the interface and passing the new configuration back to the steady state equation (4.1). The interface is recovered by passing a periodic spline through the oriented collection of points that represent the zero level set φ(x, y, t) = 0 and then sampling this spline at evenly spaced values along the curve.

As φ moves towards equilibrium the values of the velocity field decrease. Since we need to interpolate the velocity from the interface to a regular grid, the interpolation process will cause unphysical results when the interpolation error is of the same order as the velocity. To fix this problem and continue with the evolution, we switch to a simpler front tracking method when the maximum velocity value falls below some tolerance based on the grid size. By this stage the level set method will have handled the topological changes and non-convex interfaces that are so difficult to capture with front tracking.

Front tracking is very simple to implement: one calculates the normal vector at each node on the interface and advances these points in that direction, scaled by the velocity η̇.

4.3.4 Discretizations

Following the intuition given in [6], we use a third-order total variation diminishing (TVD) Runge-Kutta scheme for solving the reinitialization, level set, and extension velocity equations. This scheme was chosen to avoid temporal instabilities arising from the discretization. Consider the general equation

φt = L(φ),

where L is the spatial operator. The third-order TVD scheme is then given by

φ⁽¹⁾ = φ⁽⁰⁾ + Δt L(φ⁽⁰⁾),
φ⁽²⁾ = (3/4) φ⁽⁰⁾ + (1/4) φ⁽¹⁾ + (1/4) Δt L(φ⁽¹⁾),
φ⁽³⁾ = (1/3) φ⁽⁰⁾ + (2/3) φ⁽²⁾ + (2/3) Δt L(φ⁽²⁾),

where L is the discrete spatial operator.

For the spatial discretization we again follow [6]. Consider the spatial operator of the level set equation,

L(φ) = −V|∇φ|.

This is discretized using Godunov's method to ensure that upwinding is handled properly. Godunov's method gives

L(φ) = −max(V, 0) ∇⁺ − min(V, 0) ∇⁻,

where

∇⁺ = [ max( max(D⁻ˣφ, 0)², min(D⁺ˣφ, 0)² ) + max( max(D⁻ʸφ, 0)², min(D⁺ʸφ, 0)² ) ]^{1/2},
∇⁻ = [ max( min(D⁻ˣφ, 0)², max(D⁺ˣφ, 0)² ) + max( min(D⁻ʸφ, 0)², max(D⁺ʸφ, 0)² ) ]^{1/2},

with D⁺ˣ and D⁻ˣ representing the forward and backward difference operators in the x direction, respectively. To improve accuracy, the forward and backward difference operators are discretized using the second-order ENO scheme found in [18]:

D⁺ˣφi,j = (φi+1,j − φi,j)/h − (h/2) minmod(Dxx φi,j, Dxx φi+1,j),
D⁻ˣφi,j = (φi,j − φi−1,j)/h + (h/2) minmod(Dxx φi,j, Dxx φi−1,j),

where Dxx is the standard central difference approximation to the second derivative and the minmod function returns the argument with the smaller absolute value. These are all reproduced in an analogous fashion for the y direction.

The reinitialization and velocity extension equations are handled with similar discretizations, also using Godunov's method. For the sign function that arises in these equations, we use a smoothed version to alleviate numerical issues:

sign(φ) = φ / √(φ² + h²),

where h is the grid spacing.

4.4 Results

In this section we present a few canonical results from which the behavior of many different configurations can be explained.
We also verify some of the asymptotic results found in sections 3.3.2, 3.3.3 and 3.3.4, where instabilities and different timescales are present. In all examples the outer domain Ω is a circle of radius one centered at the origin, and the level set equation is solved on a regular grid with 200 grid points in each direction.

4.4.1 Off-Center Circle

Figure 4.10 is an example of interface evolution where Γ is an off-center circle. Here, Ω− = {x : |x − (.25, .25)| ≤ .25}, σ = 1 and D = 1.

The drop moves away from the center because the Neumann boundary conditions on the domain wall create a mirroring effect: the droplet is attracted to its image droplet and 'reaches' out towards it. This confirms the analysis from section 3.3.3, in particular remark 3.3.3.4 (iv), where it was found that an off-center drop initially bulges outwards. With this level set formulation it would be hard to capture exactly what occurs when the drop reaches the boundary, because the circular domain is embedded in a regular gridded square.

We also plot the area loss versus iteration count (area at the current iteration minus the initial area) in figure 4.11. This is a rough metric for the accuracy of the level set method in this case, where there is no exact solution. Despite losing area at each time step, the total loss remains small.

4.4.2 Two Axisymmetric Perturbed Circles

In figures 4.12 through 4.15 we examine the case of two axisymmetric perturbed circles, where the perturbation is relatively small. In polar coordinates the shapes are given by r = .0175(cos(6θ) + 12), with shifts of .45 and −.45 in the x direction. Here, D = 1 and σ = 1.

The evolution of this example can be broken into four phases. In the first phase the perturbations rapidly smooth out to a circular shape (figure 4.12). The second phase is the much slower evolution of the two circles moving together and eventually merging into one mass (figure 4.13). The third phase is the barbell shape of the two merged circles smoothing out into an ellipse (figure 4.14). Finally, the last phase is the equilibrium state of a circular drop centered at the origin of Ω (figure 4.15).

These four phases serve as canonical results that explain the chemotactic evolution process for many different droplet configurations. For typical parameter values, perturbations to circles smooth out, and drops move towards each other and merge into one large group. The behavior is very similar to that of Ostwald ripening in this regard.

The slow merging in the second and third phases, relative to the quick smoothing in the first, suggests that the two evolution processes happen on different timescales. This confirms the asymptotic result derived at the end of section 3.3.4.

4.4.3 Asymptotic Result: Boundary Effects Cause Instabilities

In figure 4.16 we confirm the asymptotic result in section 3.3.2, remark 3.3.2.1. In this case, Γ given by r = r0 + δ cos(nθ) was shown to be unstable when r0 is above some critical value that depends upon n and λ. Instead of returning to a circular shape as in figures 4.12 through 4.15, the perturbation grows. This instability is due to the effects of the domain boundary and the Neumann boundary conditions, which cause the drop to reach outwards towards its mirror image. Notice that the perturbation does not grow everywhere, only on the sides that reach out beyond r = .5.

The parameter values selected to satisfy this condition are D = 250, r0 = .65, δ = .15, n = 2 and σ = 1. Figure 4.16 highlights this instability and the aforementioned growth of the perturbation.

4.4.4 Asymptotic Result: Perturbations to a Small Drop Decay Faster Than Perturbations to a Large Drop

Figure 4.17 is a confirmation of the result from section 3.3.2, specifically equation (3.46). Here we assumed that the interface Γ is a near circle given by r = r0 + δ cos(nθ).
In the limit where δ ≪ r0 ≪ 1 and λ = 1/√D = O(1) (D from (4.1)), a perturbation flattens out with velocity proportional to O(δ/r0²). This indicates that a small circle with a size-δ perturbation should flatten out faster than a larger circle with the same size perturbation.

For the steady state PDE, we set D = 1 and σ = 1. The interface for the small near-circle is given by r = .25 + .05 cos(6θ) and the large near-circle is r = .5 + .05 cos(6θ). Figure 4.17 shows that the smaller near-circle returns to an equilibrium state faster than its larger counterpart.

4.4.5 Asymptotic Result: High Frequency Perturbations Decay Faster Than Low Frequency Perturbations

Figure 4.18 is a confirmation of the result from section 3.3.2, equation (3.47). Here, we assumed that the interface, Γ, is a near-circle given by r = r0 + δ cos(nθ). In the limit where D and ρ0 are fixed and n → ∞, where n is the order of the perturbation, the perturbations flatten with velocity proportional to O(n²). This means a high frequency perturbation will decay faster than a lower frequency one for the same r0 and δ.

For the steady state PDE, we set D = 1 and σ = 1. The high frequency interface is given by r = .35 + .05 cos(9θ) and the low frequency interface is r = .35 + .05 cos(6θ). Figure 4.18 shows that the higher the frequency, the faster the perturbations decay.

Figure 4.10 (snapshots at T = 0.0020101, 0.084422, 0.19095, 0.27136): Off-center circle example. Blue is the outer domain, the dotted black line is the initial position and dark red is the moving drop. For the triangulated mesh we start with an initial edge size of .05 with 100 fixed nodes on the interface. The time step for reinitialization, constructing the extension velocity and solving the level set equation was chosen to be h/5, where h is the regular grid mesh size.

Figure 4.11 (area loss versus iteration): Plot of area loss (area at the current iteration minus the initial area) versus iteration count for the off-center circle example. Area is lost at each iteration but remains small.

Figure 4.12 (Two Drops: Phase 1; snapshots at T = 4.0201e−05, 8.0402e−05, 0.0001206, 0.00020101): Phase 1 of the two perturbed circle example. Blue is the outer domain, the dotted black line is the initial position and dark red is the moving drop. For the triangulated mesh we start with an initial edge size of .05 with 100 fixed nodes on each interface. The time step for reinitialization, constructing the extension velocity and solving the level set equation was chosen to be h/5, where h is the regular grid mesh size.

Figure 4.13 (Two Drops: Phase 2; snapshots at T = 0.00020101, 0.080402, 0.15075, 0.20101): Phase 2 of the two perturbed circle example. Blue is the outer domain, the dotted black line is the initial position and dark red is the moving drop. For the triangulated mesh we start with an initial edge size of .05 with 100 fixed nodes on each interface, which becomes 200 fixed nodes when the domains merge. The time step for reinitialization, constructing the extension velocity and solving the level set equation was chosen to be h/5, where h is the regular grid mesh size.

Figure 4.14 (Two Drops: Phase 3; snapshots at T = 0.20101, 0.26533, 0.33166, 0.40201): Phase 3 of the two perturbed circle example. Blue is the outer domain, the dotted black line is the initial position and dark red is the moving drop. For the triangulated mesh we start with an initial edge size of .05 with 200 fixed nodes on the interface. The time step for reinitialization, constructing the extension velocity and solving the level set equation was chosen to be h/5, where h is the regular grid mesh size.

Figure 4.15 (Two Drops: Phase 4; snapshots at T = 0.40201, 0.46633, 0.53266, 0.60302): Phase 4 of the two perturbed circle example. Blue is the outer domain, the dotted black line is the initial position and dark red is the moving drop.
For the triangulated mesh we start with an initial edge size of .05 with 200 fixed nodes on the interface. The time step for reinitialization, constructing the extension velocity and solving the level set equation was chosen to be h/5, where h is the regular grid mesh size.

Figure 4.16 (snapshots at T = 0.0020101, 0.064322, 0.15075, 0.20101): An example of an unstable perturbation due to boundary effects. Blue is the outer domain, the dotted black line is the initial position and dark red is the moving drop. For the triangulated mesh we start with an initial edge size of .05 with 180 fixed nodes on the interface. The time step for reinitialization, constructing the extension velocity and evolution of the level set equation was selected as h/5, where h is the regular grid mesh size.

Figure 4.17 (paired snapshots at T = 2.0202e−05, 6.0605e−05 and 0.00010101): Different sized perturbed circle example. Blue is the outer domain and dark red is the moving drop. For the triangulated mesh we start with an initial edge size of .05 with 100 fixed nodes on the interface when r0 = .25 (left) and 200 fixed nodes on the interface when r0 = .5 (right). The time step for reinitialization and constructing the extension velocity was selected as h/5, where h is the regular grid mesh size. In order to capture the quick perturbation-flattening effects, the time step for the level set equation was h²/5.

Figure 4.18 (paired snapshots at T = 2.0202e−05, 4.0403e−05 and 6.0605e−05): Different frequency perturbed circle example. Blue is the outer domain and dark red is the moving drop. For the triangulated mesh we start with an initial edge size of .05 with 150 fixed nodes on the interface. The time step for reinitialization and constructing the extension velocity was selected as h/5, where h is the regular grid mesh size.
In order to capture the quick perturbation-flattening effects, the time step for the level set equation was h²/5.

Chapter 5
Conclusion

In this thesis we studied the interfacial motion present in the two-dimensional Ostwald ripening and volume-filling chemotaxis systems. For both problems, we derive the governing partial differential equation model and use the method of matched asymptotic expansions to determine the laws which dictate motion. After this motion law is derived, we verify the result against intuition and expected behavior with numerical simulations and specific examples.

The Ostwald ripening system is derived as the late-stage dynamics of the Cahn-Hilliard equation and results in a Mullins-Sekerka free boundary problem. We consider a collection of N small circular droplets in a domain Ω ⊂ R². We construct an inner and outer solution to the problem and enforce matching to derive a system of ordinary differential equations which describe, to leading order, the motion of the radius of each drop. Following this, we improve our solution, solving the problem to terms of all logarithmic orders; at this order the interaction is captured by a Neumann Green's matrix. The improved system of ordinary differential equations now requires solving a linear algebraic system involving the radius of each drop and the Neumann Green's function. The system is seen to be area preserving, perimeter reducing, and to have the finite time extinction property whereby the smallest droplet vanishes in finite time. These properties push the system towards an equilibrium state of a single, large drop that holds all of the initial mass. We verify the derived system of ordinary differential equations with a simple two-drop numerical simulation.

For our study of the volume-filling chemotaxis system, we derive the model by assuming an organism exhibits a random walk on a one-dimensional infinite lattice with preferred movement in the direction of a chemical gradient. Volume-filling is included with a monotone decreasing function which decays to zero when the concentration of the organism reaches the normalized maximum value of one. We define the interface, Γ, as a collection of simple closed curves which separate regions where the concentration of the organism is one from regions where it is zero. In the limit of small diffusive forces, we use a boundary-fitted coordinate system and matched asymptotic expansions to derive a surface diffusion law that describes the velocity of this interface. The velocity is then used to determine the behavior in a few specific cases. In the case of two concentric circles, the velocity is identically zero and we are in a stable equilibrium. This is confirmed by solving the problem exactly as well as with a matched asymptotic expansion when the initial mass of the system is small. Next, we examine what occurs when the circular shape of the droplet is perturbed. In the limit of a high frequency perturbation or large chemical diffusion, we see the velocity stabilizing the system. In the limit of small diffusive forces, we cannot confirm that the system stabilizes, but if any instability occurs it will be very weak. The third case is that of many small circular interfaces. Here the movement is critically influenced by the configuration of all other drops, due to gradient terms in the Green's function. We see that there is a weakness in the model, as the equations dictate an unphysical stable equilibrium of N small disks of a common radius located at the roots of unity on a ring concentric within the unit disk. We also determine that a single off-center drop will move away from the center, in the direction of its image across the domain wall. This case also shows that a small drop will deform the most and that a collection of N small circular drops will not remain circular. In the final case study we derive information about the velocity for N small arbitrarily shaped drops. Here we see that, to leading order, the drops are influenced not by interaction among themselves but by their own arbitrary shape.

We analyze and verify the asymptotic results with numerical simulations, making use of the finite element and level set methods to capture complicated geometries and merging of interfaces. An interface-fitted mesh is used along with isoparametric quadratic elements in the finite element method to solve the steady state partial differential equation with sufficient accuracy. We use this solution to numerically calculate the velocity on each interface. This interfacial velocity is then interpolated to a regular-gridded domain and extended outwards for use with the level set method. The level set method robustly handles the evolution and merging of each interface by describing them as a signed distance function. This process is repeated by recovering the interface and again solving the steady state problem. We confirm many of the asymptotic results as well as exhibit a set of canonical drop interactions.

The analysis of these two systems suggests many open problems and future work. For Ostwald ripening, we would like to extend the work in [8] to two dimensions; there, the authors consider the Mullins-Sekerka free boundary problem with kinetic drag. We would also like to extend the equations of motion in Ostwald ripening to incorporate perturbations to the circular shapes. In the chemotaxis model we assumed the volume-filling term was q(u) = 1 − u; the surface diffusion law can be extended to include a more general volume-filling term. For chemotaxis, numerical simulations revealed boundary effects when the radius of a near-circular drop was beyond a certain critical value. This phenomenon was only observed and not characterized; it would be interesting to see analytic results verifying these simulations. Additionally, we conjecture in section 3.1, conjecture 3.1, that the denominator of the surface diffusion law must be positive.
Proving this is important for a complete understanding of the surface diffusion in chemotaxis. Finally, the analysis of a collection of arbitrarily shaped small initial droplets concludes with a local solution that is independent of any interaction with the other drops. Extending this analysis to higher orders will answer the question of whether the system leads to a circular shape on a longer timescale.

Bibliography

[1] Milton Abramowitz and Irene A. Stegun, editors. Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables. Dover Publications, New York, 1972.

[2] Nicholas D. Alikakos, Giorgio Fusco, and Georgia Karali. Ostwald ripening in two dimensions: the rigorous derivation of the equations from the Mullins-Sekerka dynamics. Journal of Differential Equations, 205:1–49, 2004.

[3] Howard C. Berg. E. coli in Motion. Biological and Medical Physics, Biomedical Engineering. Springer, New York, 2004.

[4] John W. Cahn. On spinodal decomposition. Acta Metallurgica, 9:795–801, 1961.

[5] John W. Cahn and John E. Hilliard. Free energy of a nonuniform system. I. Interfacial free energy. The Journal of Chemical Physics, 28, 1958.

[6] S. Chen, B. Merriman, S. Osher, and P. Smereka. A simple level set method for solving Stefan problems. Journal of Computational Physics, 135(CP975721), 1997.

[7] W. Chen and M. J. Ward. The stability and dynamics of localized spot patterns in the two-dimensional Gray-Scott model. SIAM Journal on Applied Dynamical Systems, 10(2):582–666, 2011.

[8] Shibin Dai, Barbara Niethammer, and Robert L. Pego. Crossover in coarsening rates for the monopole approximation of the Mullins-Sekerka model with kinetic drag. Proceedings of the Royal Society of Edinburgh, 140:553–571, 2010.

[9] H. J. Eberl, D. F. Parker, and M. C. M. van Loosdrecht. A new deterministic spatio-temporal continuum model for biofilm development. Journal of Theoretical Medicine, 3:161–175, 2001.

[10] Thomas Hillen and Kevin Painter. Global existence for a parabolic chemotaxis model with prevention of over-crowding. Adv. Appl. Math., 26:280–301, 2001.

[11] Thomas Hillen and Kevin Painter. Volume-filling and quorum-sensing in models for chemosensitive movement. Canadian Applied Mathematics Quarterly, 10:501–543, 2002.

[12] Thomas Hillen and Kevin Painter. A user's guide to PDE models for chemotaxis. Journal of Mathematical Biology, 58:183–217, 2009.

[13] D. Horstmann. Lyapunov functions and Lp-estimates for a class of reaction-diffusion systems. Colloq. Math., 87:113–127, 2001.

[14] D. Horstmann. From 1970 until present: the Keller-Segel model in chemotaxis and its consequences. I. Jahresberichte DMV, 105:103–165, 2003.

[15] E. F. Keller and L. A. Segel. Model for chemotaxis. Journal of Theoretical Biology, 30:225–234, 1971.

[16] R. Kowalczyk. Preventing blow-up in a chemotaxis model. J. Math. Anal. Appl., 305:566–588, 2005.

[17] Bo Li and John Shopple. An interface-fitted finite element level set method with applications to solidification and solvation. Commun. Comput. Phys., 10(1):32–56, July 2011.

[18] Chohong Min. On reinitializing level set functions. Journal of Computational Physics, 229:2764–2772, 2010.

[19] Derek Van Orden and Vitaliy Lomakin. Rapidly convergent representations for periodic Green's functions of a linear array in layered media. IEEE Transactions on Antennas and Propagation, 60(2), February 2012.

[20] K. Osaki and A. Yagi. Finite dimensional attractor for one-dimensional Keller-Segel equations. Funkcial. Ekvac., 44:441–469, 2001.

[21] S. Osher and J. A. Sethian. Fronts propagating with curvature dependent speed: algorithms based on Hamilton-Jacobi formulations. Journal of Computational Physics, 79:12–49, 1988.

[22] C. S. Patlak. Random walk with persistence and external bias. The Bulletin of Mathematical Biophysics, 15:331–338, 1953.

[23] R. L. Pego. Front migration in the nonlinear Cahn-Hilliard equation. Proceedings of the Royal Society of London, Series A, Mathematical and Physical Sciences, 422:261–278, 1989.

[24] Per-Olof Persson and Gilbert Strang. A simple mesh generator in MATLAB. SIAM Review, 46(2):329–345, June 2004.

[25] Giovanni Russo and Peter Smereka. A remark on computing distance functions. Journal of Computational Physics, 163:51–67, 2000.

[26] J. A. Sethian. Level Set Methods and Fast Marching Methods: Evolving Interfaces in Computational Geometry, Fluid Mechanics, Computer Vision, and Materials Science. Cambridge Monographs on Applied and Computational Mathematics. Cambridge University Press, 1999.

[27] Roy H. Stogner, Graham F. Carey, and Bruce T. Murray. Approximation of Cahn-Hilliard diffuse interface models using parallel adaptive mesh refinement and coarsening with C1 elements. International Journal for Numerical Methods in Engineering, 64:1–19, 2006.

[28] Mark Sussman, Peter Smereka, and Stanley Osher. A level set approach for computing solutions to incompressible two-phase flow. Journal of Computational Physics, 114:146–159, 1994.

[29] M. Winkler. Absence of collapse in a parabolic chemotaxis model with signal-dependent sensitivity. Math. Nachr., 283:1664–1673, 2010.

[30] D. Wrzosek. Global attractor for a chemotaxis model with prevention of overcrowding. Nonlinear Anal., 59:1293–1310, 2004.

Appendix A
Boundary Fitted Coordinates

Here we derive the boundary fitted coordinate system used in the chemotaxis model. Let η be the minimal distance from a point in some domain Ω to the boundary ∂Ω. We choose η such that η > 0 inside Ω and η < 0 outside Ω. Assuming ∂Ω is smooth, there exist a unit tangent vector t and a unit (inward) normal vector n at each point on ∂Ω.

We can parametrize ∂Ω by arc length, giving (γ1(s), γ2(s)) as the parametrization. We also know that the normal vector and tangent vector are related through the curvature κ; that is, dt/ds = κn, where κ is positive for convex sets, as seen from inside.

Let (x1, x2) be any point near ∂Ω in Cartesian coordinates (the notion of near is clarified below).
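The relation dt/ds = κn can be checked on the simplest example, a circle of radius R, where κ = 1/R. The following sympy verification is an illustration only, not part of the thesis:

```python
import sympy as sp

s, R = sp.symbols('s R', positive=True)

# circle of radius R parametrized by arc length: gamma(s) = (R cos(s/R), R sin(s/R))
gamma = sp.Matrix([R*sp.cos(s/R), R*sp.sin(s/R)])
t = gamma.diff(s)             # unit tangent (|t| = 1 since s is arc length)
n = sp.Matrix([-t[1], t[0]])  # inward unit normal: t rotated by +90 degrees

# curvature recovered from dt/ds = kappa * n
kappa = sp.simplify(t.diff(s).dot(n))
residual = sp.simplify(t.diff(s) - kappa*n)
```

Here `kappa` simplifies to 1/R and `residual` to the zero vector, confirming both the sign convention (n points inward, κ > 0 for this convex set) and the Frenet-type relation used in the derivation below.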
The change of variables becomes

    x1 = γ1(s) + η n1(s),    n = (n1, n2),
    x2 = γ2(s) + η n2(s),    t = (γ1′, γ2′).

Differentiating with respect to x1 and x2, and letting e_x1 = (1, 0) and e_x2 = (0, 1), we wind up with

    e_x1 = γ′(s) ∂s/∂x1 + (∂η/∂x1) n + η n′(s) ∂s/∂x1,
    e_x2 = γ′(s) ∂s/∂x2 + (∂η/∂x2) n + η n′(s) ∂s/∂x2.    (A.1)

Since n is a unit vector, n · n = 1 and thus

    d/ds (n · n) = 2 (dn/ds) · n = 0.

This means that dn/ds is perpendicular to n, and thus dn/ds = αt for some α to be found. To find α we take two derivatives of t with respect to s, giving

    d²t/ds² = κ′(s) n + κ dn/ds.

Taking the dot product of the above with t, we arrive at

    t · d²t/ds² = 0 + κ t · dn/ds = κα t · t = κα.

Since, as stated above, dt/ds = κn and n is perpendicular to t, we have t · (dt/ds) = 0. Thus, consider

    0 = d/ds [(dt/ds) · t] = t · d²t/ds² + (dt/ds) · (dt/ds) = κα + κ² n · n = κ(α + κ).

Since κ ≠ 0, we have α = −κ. Substituting the information we just derived into equation (A.1) for e_x1 and e_x2, we wind up with

    e_x1 = (1 − κη) (ds/dx1) t(s) + (dη/dx1) n(s),
    e_x2 = (1 − κη) (ds/dx2) t(s) + (dη/dx2) n(s).

Now, taking the dot product of these two equations with t, we wind up with

    ∇s = (ds/dx1, ds/dx2) = t/(1 − κη).    (A.2)

Similarly, by taking the dot product of the two equations with n, we wind up with

    ∇η = (dη/dx1, dη/dx2) = n.    (A.3)

This implies that η, the distance, must satisfy η < 1/κ. This is the precise definition of a point being near the boundary.

With this in hand we take an arbitrary function u(x1, x2) and write ∇u in terms of the new coordinates. This gives us

    ∇u = u_x1 e_x1 + u_x2 e_x2
       = (u_s s_x1 + u_η η_x1)[(1 − κη) s_x1 t + η_x1 n] + (u_s s_x2 + u_η η_x2)[(1 − κη) s_x2 t + η_x2 n].

Expanding and rearranging, we wind up with

    ∇u = (1 − κη) u_s (s_x1² + s_x2²) t + u_η (η_x1² + η_x2²) n + u_s (s_x1 η_x1 + s_x2 η_x2) n + (1 − κη) u_η (s_x1 η_x1 + s_x2 η_x2) t.

Realizing that s_x1 η_x1 + s_x2 η_x2 = ∇s · ∇η = 0, we have

    ∇u = ‖∇s‖² (1 − κη) u_s t + ‖∇η‖² u_η n,

but, since we have expressions for ∇s and ∇η ((A.2) and (A.3)), we can simplify further:

    ∇u = u_s/(1 − κη) t + u_η n.    (A.4)

We would also like to see how ∆u changes under this new coordinate system. We begin by calculating ∆η as

    ∆η = (dn/ds) · ∇s = −κ t · t/(1 − κη) = −κ/(1 − κη).    (A.5)

We do the same for ∆s, giving

    ∆s = t′(s) · ∇s/(1 − κη) + t · ∇(1/(1 − κη)).

We know that ∇s = t/(1 − κη) from (A.2), so the first term on the right hand side vanishes by the fact that t′ · t = 0. We wind up with

    ∆s = t · [(κ′η ∇s + κ∇η)/(1 − κη)²].

From (A.3), we know that ∇η = n and n is perpendicular to t, so the second term in the numerator vanishes. Substituting in for ∇s we arrive at

    ∆s = t · [κ′η t/(1 − κη)³] = κ′η/(1 − κη)³.    (A.6)

We are now in a position to finish the calculation. Taking the dot product of the gradient in the boundary fitted coordinate system with ∇u from (A.4), we have

    ∆u = u_ηη ‖∇η‖² + u_ss ‖∇s‖² + 2 u_sη ∇s · ∇η + u_η ∆η + u_s ∆s.

Substituting in for ∇η, ∇s, ∆η and ∆s from (A.3), (A.2), (A.5) and (A.6), we can simplify to

    ∆u = u_ηη − κ/(1 − κη) u_η + 1/(1 − κη)² u_ss + κ′η/(1 − κη)³ u_s,

or, written in a slightly different way, we arrive at our final equation:

    ∆u = u_ηη − κ/(1 − κη) u_η + 1/(1 − κη) d/ds [u_s/(1 − κη)].

Appendix B
Green's Function for the Unit Disk

This work follows [7]. We want to find the Green's function corresponding to the PDE below in the domain Ω = {x : |x| ≤ 1},

    ∆G_λ − λ² G_λ = −δ(x − x0), x ∈ Ω;    ∂G_λ/∂n = 0, x ∈ ∂Ω,    (B.1)

where λ = 1/√D.

The first step is to switch to polar coordinates. The PDE above becomes

    ∂ρρ G_λ + (1/ρ) ∂ρ G_λ + (1/ρ²) ∂θθ G_λ − λ² G_λ = −(1/ρ) δ(ρ − ρ0) δ(θ − θ0),

where the boundary conditions become G_λ(ρ, θ + 2π) = G_λ(ρ, θ) and ∂ρ G_λ(1, θ) = 0, and G_λ must have the proper logarithmic singularity as (ρ, θ) → (ρ0, θ0).

This can be solved by making use of the complex Fourier series. We have

    G_λ(ρ, θ; ρ0, θ0) = (1/2π) Σ_{n=−∞}^{∞} Ḡ_{λ,n}(ρ; ρ0, θ0) e^{−inθ},
    Ḡ_{λ,n}(ρ; ρ0, θ0) = ∫₀^{2π} e^{inθ} G_λ(ρ, θ; ρ0, θ0) dθ.

Substituting this in leads us to

    ∂ρρ Ḡ_{λ,n} + (1/ρ) ∂ρ Ḡ_{λ,n} − (n²/ρ²) Ḡ_{λ,n} − λ² Ḡ_{λ,n} = −(1/ρ) δ(ρ − ρ0) e^{inθ0}, 0 < ρ < 1,
    ∂ρ Ḡ_{λ,n}(1; ρ0, θ0) = 0;    Ḡ_{λ,n}(0; ρ0, θ0) < ∞.    (B.2)

The solution to this is

    Ḡ_{λ,n}(ρ; ρ0, θ0) = [K_n(ρ_> λ) − (K_n′(λ)/I_n′(λ)) I_n(ρ_> λ)] e^{inθ0} I_n(ρ_< λ),
where this can be found by solving equation (B.2) on either side of ρ0 and then enforcing continuity and the jump condition. Here, ρ_< = min(ρ0, ρ) and ρ_> = max(ρ0, ρ). Thus, the full solution to (B.1) is

    G_λ(ρ, θ; ρ0, θ0) = (1/2π) Σ_{n=−∞}^{∞} e^{−in(θ−θ0)} [K_n(ρ_> λ) − (K_n′(λ)/I_n′(λ)) I_n(ρ_> λ)] I_n(ρ_< λ).    (B.3)

We wish to improve the convergence of this series, which is known to converge slowly ([19]). Consider the Fourier representation of the free space Green's function,

    G_f(ρ, θ; ρ0, θ0) = (1/2π) K_0(Rλ) = (1/2π) Σ_{n=−∞}^{∞} e^{−in(θ−θ0)} K_n(ρ_> λ) I_n(ρ_< λ),

where R = √(ρ² + ρ0² − 2ρρ0 cos(θ − θ0)). At this point the ρ_< and ρ_> notation can be dropped and replaced with ρ0 and ρ. Rewriting the corresponding term in (B.3), we see that

    G(ρ, θ; ρ0, θ0) = (1/2π) K_0(Rλ) − (1/2π) Σ_{n=−∞}^{∞} e^{−in(θ−θ0)} (K_n′(λ)/I_n′(λ)) I_n(ρ0 λ) I_n(ρλ).

To approximate this numerically we truncate the infinite series. Before doing so, noting that K_n(z) = K_{−n}(z) and I_n(z) = I_{−n}(z), we can rewrite the infinite sum to instead run over n = 0, 1, 2, . . . . Rewriting the exponential term using Euler's formula, the sine terms cancel, which gives

    G(ρ, θ; ρ0, θ0) = (1/2π) K_0(Rλ) − (1/2π) (K_0′(λ)/I_0′(λ)) I_0(ρ0 λ) I_0(ρλ)
                    − (1/π) Σ_{n=1}^{M} cos(n(θ − θ0)) (K_n′(λ)/I_n′(λ)) I_n(ρ0 λ) I_n(ρλ), 0 < ρ < 1.

Numerically, this can be implemented with MATLAB's built-in K_n(z) and I_n(z), using the recurrence relations ∂_z K_n(z) = −(K_{n−1}(z) + K_{n+1}(z))/2 and ∂_z I_n(z) = (I_{n−1}(z) + I_{n+1}(z))/2 for the modified Bessel function derivatives. Terms in the infinite sum are calculated until the next term is below some tolerance, say 1e-8.

    ρ     ρ0    M    P_M
    .08   .08   32   -9.7502e-09
    .05   .08   17   -4.9680e-09
    .05   .05   12   -2.4060e-09
    .02   .08    9   -3.6761e-09

Table B.1: Convergence rate of the improved Green's function for select values of ρ and ρ0. M is the number of iterations required to reach the specified tolerance of 1e-8 for P_M. Here D = 1 and θ = θ0.

Under these conditions the Green's function for the unit disk converges very quickly, in as few as 9 iterations for some values of ρ and ρ0. The required number of iterations and the final term P_M in the sum for different values of ρ and ρ0 are given in table B.1.